https://engineering.jhu.edu/ams
America/New_York
America/New_York
America/New_York
20181104T020000
-0400
-0500
EST
20180311T020000
-0500
-0400
EDT
ai1ec-5851@engineering.jhu.edu/ams
20180716T042154Z
Orientation
Orientation for new MSE in Financial Mathematics students will be held August 12 – August 25.
A full orientation schedule for 2015 will be available shortly before the start of orientation.
20150812
20150813
0
Financial Mathematics Orientation Begins
Financial Mathematics
ai1ec-5853@engineering.jhu.edu/ams
20180716T042154Z
20150827
20150828
0
Fall Classes Begin
current students
ai1ec-5852@engineering.jhu.edu/ams
20180716T042154Z
PhD
20150819T083000
20150819T170000
Whitehead 304
0
PhD Introductory Exam
Intro Exam,Orientation,PhD
ai1ec-5976@engineering.jhu.edu/ams
20180716T042154Z
20150826T143000
20150826T163000
0
Teaching Assistant Orientation: Meet in Hodson 316
ai1ec-5974@engineering.jhu.edu/ams
20180716T042154Z
20150827T133000
20150827T143000
0
Seminar: Get to Know You @ Whitehead 304
ai1ec-5959@engineering.jhu.edu/ams
20180716T042154Z
A GENERAL THEORY FOR COMPUTING ATTRACTIVE REPRESENTATIONS ON NONCONVEX OPTIMIZATION PROBLEMS
More than one mathematical representation can accurately depict a decision problem. Success in obtaining optimal solutions, however, often depends upon the formulation selected. Since challenging nonconvex optimization problems are typically solved by using linear programming relaxations as tools to compute bounds for eliminating inferior solutions, “attractive” representations tend to be characterized by the accuracy of their relaxations. This importance of relaxation strength is well documented within the Operations Research literature, where numerous authors have suggested methods for acquiring strength. The posed methods are often problem dependent, relying on the exploitation of specific structures.
This talk presents a general theory for deriving representations with tight relaxations. The fundamental idea is to recast a given problem into higher-dimensional spaces by automatically generating auxiliary variables and constraints. Strength is garnered via suitable mathematical identities. The talk begins with an introduction to the importance of relaxation strength, and then highlights contributions and challenges relative to the progressively more general families of mixed-binary, mixed-discrete, and general nonconvex programs. Ongoing research is discussed.
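As a concrete illustration of the kind of identity-driven lifting described above (a standard textbook instance in the spirit of reformulation-linearization techniques, not necessarily the construction covered in the talk): when x_i, x_j ∈ {0,1}, the bilinear term x_i x_j can be replaced by an auxiliary variable y_ij together with the linear constraints

```latex
y_{ij} \le x_i, \qquad y_{ij} \le x_j, \qquad y_{ij} \ge x_i + x_j - 1, \qquad y_{ij} \ge 0.
```

These constraints are satisfied exactly by y_ij = x_i x_j at binary points, so relaxing x to [0,1]^n yields a linear programming relaxation in the lifted (x, y) space; how tightly such relaxations bound the original problem is precisely the "relaxation strength" at issue in the talk.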
20150903T133000
20150903T143000
0
Seminar: Warren Adams (Clemson University) @ Whitehead 304
ai1ec-5941@engineering.jhu.edu/ams
20180716T042154Z
SUBGROUP-BASED ADAPTIVE (SUBA) ENRICHMENT DESIGNS FOR MULTI-ARM BIOMARKER TRIALS
Targeted therapies based on biomarker profiling are becoming a mainstream direction of cancer research and treatment. Depending on the expression of specific prognostic biomarkers, targeted therapies assign different cancer drugs to subgroups of patients even if they are diagnosed with the same type of cancer by traditional means, such as tumor location. For example, Herceptin is only indicated for the subgroup of patients with HER2-positive breast cancer, but not other types of breast cancer. However, subgroups like HER2-positive breast cancer with effective targeted therapies are rare, and most cancer drugs are still being applied to large patient populations that include many patients who might not respond or benefit. Also, the response to targeted agents in humans is usually unpredictable. To address these issues, we propose SUBA, subgroup-based adaptive designs that simultaneously search for prognostic subgroups and allocate patients adaptively to the best subgroup-specific treatments throughout the course of the trial. The main features of SUBA include the continuous reclassification of patient subgroups based on a random partition model and the adaptive allocation of patients to the best treatment arm based on posterior predictive probabilities. We compare the SUBA design with three alternative designs including equal randomization, outcome-adaptive randomization and a design based on a probit regression. In simulation studies we find that SUBA compares favorably against the alternatives.
20150910T133000
20150910T143000
0
Seminar: Yanxun Xu (Johns Hopkins University) @ Whitehead 304
ai1ec-5983@engineering.jhu.edu/ams
20180716T042154Z
THE GENERAL SETTING FOR SHAPE DEFORMATION ANALYSIS
I will define a unified setting for shape registration and LDDMM methods for shape analysis, using optimal control theory, and give the Hamiltonian geodesic equations associated to a smooth enough reproducing kernel. I will then give several applications of this framework, such as fibered shapes (for muscles), and the addition of constraints for the simultaneous study of multiple interacting shapes.
20150917T133000
20150917T143000
0
Seminar: Sylvain Arguillere (Johns Hopkins University) @ Whitehead 304
ai1ec-5855@engineering.jhu.edu/ams
20180716T042154Z
WSE
20150918T120000
20150918T140000
0
WSE Annual Fall Picnic
current students,faculty,staff
ai1ec-5938@engineering.jhu.edu/ams
20180716T042154Z
CHALLENGES IN GRAPH-BASED MACHINE LEARNING AND ROBUSTIFYING DATA GRAPHS WITH SCALABLE LOCAL SPECTRAL METHODS
Graphs are very popular ways to model data in many data analysis and machine learning applications, but they can be quite challenging to work with, especially when they are very sparse, as is typically the case. We will discuss challenges we have encountered in working with large sparse graphs in machine learning and data analysis applications and in particular in the construction of these graphs, e.g., with various sorts of popular nearest neighbor rules applied to feature vectors. In our experience, many properties of the constructed graphs are very sensitive to seemingly-minor and often-ignored aspects of the graph construction process. This should suggest caution in using popular algorithmic and statistical tools, e.g., popular nonlinear dimensionality reduction methods, in trying to extract insight from those constructed graphs. We will also describe recent results on using local spectral methods to robustify this graph construction process. Local spectral methods use locally-biased random walks; they have had several remarkable successes in worst-case algorithm design as well as in analyzing the empirical properties of large social and information networks, and they are an example of a worst-case approximation algorithm that implicitly but exactly implements a form of statistical regularization. Informally, the reason for the successes of these methods in robustifying graph construction is that these local random walks provide a regularized or stable version of an eigenvector, and initial results on using these ideas to robustify the graph construction process are promising.
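A minimal numpy sketch of the locally-biased random walks these methods build on, in the form of a random walk with restart (personalized PageRank); the toy graph, restart parameter, and seed node are illustrative choices of mine, not taken from the talk.

```python
import numpy as np

def personalized_pagerank(A, seed, alpha=0.85, iters=200):
    """Locally-biased random walk: stationary scores of a walk that,
    at each step, restarts at `seed` with probability 1 - alpha."""
    n = A.shape[0]
    # Column-stochastic transition matrix of the simple random walk.
    P = A / A.sum(axis=0, keepdims=True)
    e = np.zeros(n)
    e[seed] = 1.0
    x = e.copy()
    for _ in range(iters):
        x = alpha * P @ x + (1 - alpha) * e
    return x

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by one weak edge.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
scores = personalized_pagerank(A, seed=0)
# Probability mass concentrates on the seed's side of the graph,
# giving a smoothed, locally-biased ranking of the nodes.
```

Thresholding or reweighting edges by such scores is one simple way a locally-biased walk can stabilize a noisy graph construction.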
20150924T133000
20150924T143000
0
Seminar: Michael Mahoney (University of California, Berkeley) @ Whitehead 304
ai1ec-5702@engineering.jhu.edu/ams
20180716T042154Z
Seminar
GAUGE DUALITY AND LOW-RANK SPECTRAL OPTIMIZATION
Gauge functions significantly generalize the notion of a norm, and gauge optimization is the class of problems for finding the element of a convex set that is minimal with respect to a gauge. These conceptually simple problems appear in a remarkable array of applications. Their structure allows for a special kind of duality framework that can lead to new algorithmic approaches to challenging problems. Low-rank spectral optimization problems that arise in two signal-recovery applications, phase retrieval and blind deconvolution, illustrate the benefits of the approach.
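For reference (a standard definition, not specific to this talk): the gauge of a convex set C containing the origin is

```latex
\gamma_C(x) \;=\; \inf\{\lambda \ge 0 \,:\, x \in \lambda C\},
```

so every norm is the gauge of its unit ball, while general gauges may be asymmetric (γ_C(x) ≠ γ_C(−x)) and may take the value +∞.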
20151001T133000
20151001T143000
0
Seminar: Michael Friedlander (University of California, Davis) @ Whitehead 304
ai1ec-5937@engineering.jhu.edu/ams
20180716T042154Z
Seminar
Change Point Inference for Time-varying Erdos-Renyi Graphs
We investigate a model of an Erdos-Renyi graph whose edges can be in a present/absent state. The state of each edge evolves as a Markov chain, independently of the other edges, whose parameters exhibit a change-point in time. We derive the maximum likelihood estimator for the change-point and characterize its distribution. Depending on a measure of the signal-to-noise ratio present in the data, different limiting regimes emerge; nevertheless, a unifying adaptive scheme can be used in practice that covers all cases. We illustrate the model and its flexibility on US Congress voting patterns using roll call data.
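To make the estimator concrete, here is a deliberately simplified sketch: i.i.d. Bernoulli edge states with a single change point shared across edges (the talk's model has Markov edge dynamics), estimated by profiling the likelihood over candidate change points. All sizes and probabilities are illustrative.

```python
import numpy as np

def bernoulli_loglik(x):
    """Maximized Bernoulli log-likelihood of a 0/1 array."""
    p = x.mean()
    if p in (0.0, 1.0):
        return 0.0
    return x.size * (p * np.log(p) + (1 - p) * np.log(1 - p))

def changepoint_mle(X):
    """X: (edges x time) 0/1 matrix of edge states sharing one change
    point.  Returns the profile-likelihood estimate of its location."""
    T = X.shape[1]
    best_tau, best_ll = None, -np.inf
    for tau in range(1, T):  # candidate change points
        ll = sum(bernoulli_loglik(row[:tau]) + bernoulli_loglik(row[tau:])
                 for row in X)
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

rng = np.random.default_rng(0)
# 50 edges, present with prob 0.2 before t = 60 and 0.7 afterwards.
X = np.hstack([rng.random((50, 60)) < 0.2,
               rng.random((50, 40)) < 0.7]).astype(int)
tau_hat = changepoint_mle(X)
```

With this many edges and such a strong signal the estimate lands essentially on the true change point; the interesting regimes in the talk are precisely those where the signal-to-noise ratio is weaker.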
20151008T133000
20151008T143000
0
Seminar: George Michailidis (University of Florida) @ Whitehead 304
ai1ec-6073@engineering.jhu.edu/ams
20180716T042154Z
HUSAM
Lies, Deceit, and Misrepresentation: The Distortion of Statistics in America
H.G. Wells once said “Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write.” The widespread use of statistics plays an influential role in shaping public opinion. As such, statistical literacy is necessary for members of society to critically evaluate the bombardment of charts, polls, graphs, and data that are presented on a daily basis. However, what often passes for “statistical” calculations and discoveries needs to be taken with a grain of salt. This talk will examine the applications of statistics in American media and give examples of where statistics has been grossly misused.
The talk will begin at 7pm in Hodson 110, with refreshments being served at 6:30. A flyer for the event is attached and a link to RSVP on the Facebook page is here: https://www.facebook.com/events/959982947374497/.
20151021T183000
Hodson 110
0
HUSAM: Dr. Talithia Williams, Harvey Mudd College
HUSAM
1
ai1ec-5989@engineering.jhu.edu/ams
20180716T042154Z
Title: Information Theoretic Intuitions about Some Estimation Problems in Speech Recognition
Abstract:
Automatic speech recognition (ASR) systems compose probabilistic models of numerous kinds to transcribe a spoken utterance into a sequence of written words. The so-called acoustic model aims to compute the probability of each sound category label, such as a phoneme or an allophone, given a short segment of speech. The parameters of the acoustic model are typically estimated to maximize discrimination between the correct and incorrect sound categories on some labeled “training” samples—specifically, to maximize a mutual information. The inventory of sound categories (allophones) the acoustic model is trained to discriminate is also determined from data. This is typically done using decision trees to recursively divide all acoustic samples of a phoneme, based on the phonetic context of the sample, into maximally homogeneous subsets—specifically, subsets that minimize a conditional entropy.
Two recent advances, one each in acoustic model estimation and in the creation of phonetic decision trees, will be described, beginning with the information theoretic intuitions behind the changes we made to currently used methods. The first replaces the maximization of mutual information with minimization of a related conditional entropy, which turns out to be advantageous for semi-supervised training of acoustic models, i.e. when some samples have missing labels. The second investigates an alternative to random forests by developing multiple decisions trees in a deterministic manner; it maximizes diversity by minimizing mutual information between the leaves assigned by the multiple trees to each training sample.
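The two objectives mentioned above are tied together by the standard decomposition of mutual information,

```latex
I(X;Y) \;=\; H(Y) \;-\; H(Y \mid X),
```

so with the label entropy H(Y) fixed, maximizing I(X;Y) and minimizing the conditional entropy H(Y|X) coincide; they can differ once unlabeled samples enter the objective, which is one way to read the semi-supervised advantage described above.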
The presentation will have a tutorial flavor for a non-speech-technology audience, and will not present any new results in mathematics or statistics; just some mathematical intuitions about estimation techniques that appear to improve speech recognition performance on benchmark data sets.
Biosketch:
Sanjeev Khudanpur (PhD 1997, Electrical Engineering, University of Maryland) is an Associate Professor in the Departments of Electrical and Computer Engineering and of Computer Science, the Acting Director of the Center for Language and Speech Processing, and a founding affiliate of the Human Language Technology Center of Excellence, all in The Johns Hopkins University. His interests are in the application of statistical methods to speech and text processing, and to other engineering problems involving time-series data. His office is in Hackerman Hall, the Homewood campus building with the surgical robots!
20151022T133000
20151022T143000
0
Seminar: Sanjeev Khudanpur (Johns Hopkins University) @ Whitehead 304
ai1ec-5960@engineering.jhu.edu/ams
20180716T042154Z
Recent Results on Polynomial Optimization Problems
Polynomial optimization problems, as the name suggests, are optimization problems where the objective function as well as the constraints are described by polynomials. Such problems have attracted increasing interest, partly because of applications in engineering and science, where constraints arise from physics, and partly because of improved theoretical understanding. In this talk I will focus on two topics I am working on: the CDT (Celis-Dennis-Tapia) problem, which concerns the solution of a system of quadratic inequalities over R^n, and mixed-integer polynomial optimization problems over graphs with structural sparsity, i.e., low treewidth. We will describe our results, but also discuss how these problems relate to classical problems in various branches of mathematics.
20151029T133000
20151029T143000
0
Goldman Lecture Series: Daniel Bienstock (Columbia University) @ Krieger 205
ai1ec-5975@engineering.jhu.edu/ams
20180716T042154Z
Graduate Research Opportunities in AMS
This seminar will familiarize Master’s and PhD students from AMS or other WSE departments with the research performed by the AMS faculty. It will be composed of a research overview presented by Professor Laurent Younes, consisting of snapshot descriptions of research projects currently underway as well as others that are ripe for students to tackle immediately, followed by a discussion session between faculty and students.
Research Opportunities in Applied Mathematics and Statistics
20151105T133000
20151105T143000
0
Seminar: The AMS Faculty
ai1ec-5734@engineering.jhu.edu/ams
20180716T042154Z
Seminar
Scaling and Generalizing Variational Inference
Latent variable models have become a key tool for the modern statistician, letting us express complex assumptions about the hidden structures that underlie our data. Latent variable models have been successfully applied in numerous fields.
The central computational problem in latent variable modeling is posterior inference, the problem of approximating the conditional distribution of the latent variables given the observations. Posterior inference is central to both exploratory tasks and predictive tasks. Approximate posterior inference algorithms have revolutionized Bayesian statistics, revealing its potential as a usable and general-purpose language for data analysis.
Bayesian statistics, however, has not yet reached this potential. First, statisticians and scientists regularly encounter massive data sets, but existing approximate inference algorithms do not scale well. Second, most approximate inference algorithms are not generic; each must be adapted to the specific model at hand.
In this talk I will discuss our recent research on addressing these two limitations. I will describe stochastic variational inference, an approximate inference algorithm for handling massive data sets. I will demonstrate its application to probabilistic topic models of text conditioned on millions of articles. Then I will discuss black box variational inference. Black box inference is a generic algorithm for approximating the posterior. We can easily apply it to many models with little model-specific derivation and few restrictions on their properties. I will demonstrate its use on a suite of nonconjugate models of longitudinal healthcare data.
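A minimal sketch of the score-function gradient estimator that underlies black box variational inference, on a deliberately tiny illustrative model (Gaussian likelihood with unknown mean, Gaussian variational family with unit variance); none of the numbers come from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, size=200)  # toy observations, true mean 3

def log_joint(z):
    """log p(data, z): N(0, 10^2) prior on the mean z, N(z, 1) likelihood."""
    return -z ** 2 / 200.0 - np.sum((data - z) ** 2) / 2.0

# Variational family q(z) = N(mu, 1).  The ELBO gradient w.r.t. mu is
#   E_q[ d/dmu log q(z) * (log p(data, z) - log q(z)) ],
# estimated by Monte Carlo with a mean baseline to reduce variance.
mu = 0.0
for step in range(2000):
    z = rng.normal(mu, 1.0, size=32)      # samples from q
    score = z - mu                        # d/dmu log q(z)
    f = np.array([log_joint(zi) for zi in z]) + (z - mu) ** 2 / 2.0
    grad = np.mean(score * (f - f.mean()))
    mu += 1e-4 * grad
# mu now sits near the posterior mean of the data (close to 3).
```

The estimator only ever evaluates log p(data, z), never its gradient, which is what makes the approach "black box": it applies to nonconjugate models with no model-specific derivation.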
Biography:
David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference algorithms for massive data. He works on a variety of applications, including text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), and ACM-Infosys Foundation Award (2013).
20151112T133000
20151112T143000
0
Seminar: David Blei (Columbia University) @ Krieger 205
ai1ec-5645@engineering.jhu.edu/ams
20180716T042154Z
Algebraic and Geometric ideas in the theory of Linear Optimization
Abstract: Linear optimization is undeniably a central tool of applied mathematics, with applications in a wide range of topics, from statistical regression to image processing. The theory of linear optimization has many beautiful geometric and algebraic facets, and it is still a source of many fascinating mathematical open problems. In this talk I will present several advances from the past 10 years in the theory of linear optimization. These include new results on the complexity of the simplex method, on the structure of the central paths of interior point methods, and on the geometry of some less well-known iterative techniques. One interesting feature of these new theorems is that they connect this very applied algorithmic field with seemingly far-away “pure” topics like algebraic geometry, differential geometry, and combinatorial topology. This panoramic talk is geared toward students and the non-expert faculty member. I will summarize work by many authors, including results that are my own joint work with subsets of the following people: A. Basu, J. Haddock, Junod, S. Klee, B. Sturmfels, and C. Vinzant.
20151119T133000
20151119T143000
0
Seminar: Jesus De Loera (University of California, Davis) @ Whitehead 304
ai1ec-5988@engineering.jhu.edu/ams
20180716T042154Z
Comparative Effectiveness Research of Environmental Exposures: Connecting the Dots with Big Data
Comparative effectiveness research increasingly depends on the analysis of a rapidly expanding universe of observational data made possible by the growing integration of administrative claims data (e.g. Medicare or SEER-Medicare claims) with environmental health exposures (e.g. emissions from power plants, air pollution from monitoring stations) and with survey and census data (e.g. population demographics).
We are interested in addressing questions that attempt to connect the dots between environmental exposures and human health, such as: Can increased noise levels near airports cause higher rates of cardiovascular disease or stroke? Do even moderate increases in air pollution from sources such as automobiles and industrial smokestacks have a measurable effect on a community’s death rate? What are the most likely causes of hospitalizations during heat waves?
New statistical methods are needed to handle large, messy data sets, integrate them, and extract meaningful conclusions. In this talk we will review some of these statistical methods aimed at making causal inferences on the effectiveness of environmental interventions with such large observational data structures.
Biography- Dr. Francesca Dominici, PhD
Academic Career
Francesca Dominici is a Professor in the Department of Biostatistics at the Harvard School of Public Health and the Senior Associate Dean for Research. Dr. Dominici received her Ph.D. in Statistics from the University of Padua, Italy in 1997. During her PhD, she spent two years as a visiting PhD student at Duke University, NC, USA. In 1997 she went to the Bloomberg School of Public Health as a post-doctoral fellow. In 1999 she was appointed Assistant Professor at the Bloomberg School of Public Health and in 2007 she was promoted to Full Professor with Tenure. In 2009 she moved to Harvard School of Public Health as a tenured Professor of Biostatistics, was appointed Associate Dean of Information Technology in 2010, and Senior Associate Dean for Research in 2013.
Dr. Dominici’s research has focused on the development of statistical methods for the analysis of large observational data with the ultimate goal of addressing important questions in environmental health science, health related impacts of climate change, and comparative effectiveness research. She is an expert in Bayesian methods, longitudinal data analysis, confounding adjustment, causal inference, and Bayesian hierarchical models. She has extensive experience on the development of statistical methods and their applications to environmental epidemiology, implementation science and health policy, outcome research and patient safety, and comparative effectiveness research.
Research
Dr. Dominici has authored more than 120 peer-reviewed publications. She is the PI, together with Dr. Xihong Lin, of an NCI P01 project entitled “Statistical Informatics for Cancer Research” (http://www.hsph.harvard.edu/statinformatics/index.html). She is the PI of a project called “A National Study to Assess Susceptibility, Vulnerability and Effect Modification of Air Pollution Health Risks” as part of the Harvard EPA Center entitled “Air Pollution Mixtures: Health Effects Across Life Stages” (PI: Dr. Koutrakis). She is also the PI of several EPA/NIH/HEI-funded projects aimed at developing statistical methods and conducting nation-wide epidemiological studies on the health effects of air pollution. Most recently, she has become more involved in comparative effectiveness research, collaborating with investigators at the Dana-Farber Cancer Institute. With her colleagues she is developing statistical methods for causal inference and propensity score matching to compare health care delivery systems in end-of-life cancer care, with a special focus on glioblastoma and pancreatic cancer. Dr. Dominici also oversees the management and analysis of several administrative databases, including Part A CMS files and SEER-Medicare, which are linked to air pollution, weather, and socioeconomic data.
Education and Mentoring
Dr. Dominici is teaching the course Bio249 entitled “Bayesian Methodology in Biostatistics” at HSPH. Previously she taught Analysis of Longitudinal Data, and Multilevel Statistical Models while a faculty member at Johns Hopkins University. She has been the primary advisor of 9 PhD students and 13 post-doctoral fellows. She is a passionate mentor of junior faculty.
Diversity
Dr. Dominici is committed to diversity. Together with Dr. Linda P. Fried (now Dean of the Mailman School of Public Health at Columbia University), she co-chaired the University Committee on the Status of Women at Johns Hopkins University. From this experience she wrote a paper entitled “So Few Women Leaders” (Academe, July–August 2009; http://www.aaup.org/article/so-few-women-leaders#.Ubx4SZWQma4). In 2009, she was awarded the Diversity Recognition Award by the President of Johns Hopkins University. Recently, she has been giving lectures and moderating panel discussions on work-family balance across Harvard (see http://news.harvard.edu/gazette/story/2012/11/having-it-all-at-harvard/). Currently she chairs (with Dr. Burleigh) the University Committee for the Advancement of Women Faculty at HSPH.
Administration
In her role as Associate Dean of Information Technology, Dr. Dominici has led new initiatives at HSPH regarding research computing. More specifically, she led an MOU between HSPH and the research computing (RC) facility at the Faculty of Arts and Sciences (FASRC, http://rc.fas.harvard.edu/), enabling HSPH faculty to access the FAS computing facilities. HSPH faculty are treated equally to FAS faculty in terms of priority of access, ticket turn-around, and access to shared facilities and shared licenses. See http://rc.fas.harvard.edu/hsph-overview/ for details.
Service
Dr. Dominici has served on a number of National Academies’ committees, including the Committee on Research Direction in Human Biological Effects of Low Level Ionizing Radiation; the Committee on Gulf War and Health: Review of the Medical Literature Relative to Gulf War Veterans’ Health; the Committee to Review the Federal Response to the Health Effects Associated with the Gulf of Mexico Oil Spill; the Committee on Secondhand Smoke Exposure and Acute Coronary Events; the Committee to Review ATSDR’s Great Lakes Report; the Committee on Making Best Use of the Agent Orange Exposure Reconstruction Model; the Committee on Gulf War and Health; the Committee to Assess Potential Health Effects from Exposures to PAVE PAWS Low-Level Phased-array Radiofrequency Energy; and the Committee on the Utility of Proximity-Based Herbicide Exposure Assessment in Epidemiologic Studies of Vietnam Veterans.
Dr. Dominici has received numerous recognitions, including the Florence Nightingale David Award, sponsored jointly by the Committee of Presidents of Statistical Societies and the Caucus for Women in Statistics, 2015; the Mathematics for Planet Earth Award Lecture, hosted by the Statistical and Applied Mathematical Sciences Institute (SAMSI), 2013; the Diversity Recognition Award, Johns Hopkins University, 2009; the Myrto Lefkopoulou Distinguished Lectureship Award, Department of Biostatistics, Harvard School of Public Health, 2007; the Gertrude Cox Award, Washington DC Chapter of the American Statistical Association and RTI International, 2007; the Mortimer Spiegelman Award, Statistics Section of the American Public Health Association, 2006; the Dean’s Lecture, Bloomberg School of Public Health, 2007; and an invitation to address the Royal Statistical Society, London, UK, 2002.
She is a member of numerous professional societies, including the American Statistical Association, the International Biometric Society, and the International Society for Environmental Epidemiology. She is the Senior Editor of Chapman & Hall/CRC Texts in Statistical Science Series and Associate Editor of the Journal of the Royal Statistical Society.
20151203T133000
20151203T143000
0
Wierman Lecture Series: Francesca Dominici (Harvard University) @ Arellano Theater
ai1ec-6099@engineering.jhu.edu/ams
20180716T042154Z
Title: Feature allocations, probability functions, and paintboxes
Abstract:
Clustering involves placing entities into mutually exclusive categories. We wish to relax the requirement of mutual exclusivity, allowing objects to belong simultaneously to multiple classes, a formulation that we refer to as “feature allocation.” The first step is a theoretical one. In the case of clustering the class of probability distributions over exchangeable partitions of a dataset has been characterized (via exchangeable partition probability functions and the Kingman paintbox). These characterizations support an elegant nonparametric Bayesian framework for clustering in which the number of clusters is not assumed to be known a priori. We establish an analogous characterization for feature allocation; we define notions of “exchangeable feature probability functions” and “feature paintboxes” that lead to a Bayesian framework that does not require the number of features to be fixed a priori. The second step is a computational one. Rather than appealing to Markov chain Monte Carlo for Bayesian inference, we develop a method to transform Bayesian methods for feature allocation (and other latent structure problems) into optimization problems with objective functions analogous to K-means in the clustering setting. These yield approximations to Bayesian inference that are scalable to large inference problems.
20160128T133000
20160128T143000
0
Seminar: Tamara Broderick (MIT) @ Whitehead 304
ai1ec-6197@engineering.jhu.edu/ams
20180716T042154Z
Title: On the spectra of direct sums and Kronecker products of side length 2 hypermatrices and related algorithmic problems in data science.
Abstract:
We present an elementary method for obtaining the spectral decomposition of hypermatrices generated by arbitrary combinations of Kronecker products and direct sums of cubic hypermatrices having side length 2. The method is based on a generalization of Parseval’s identity. We use the general formulation of Parseval’s identity to introduce hypermatrix Fourier transforms and discrete Fourier hypermatrices. We extend to hypermatrices orthogonalization procedures and Sylvester’s classical Hadamard matrix construction. We conclude the talk with illustrations of spectral decompositions of adjacency hypermatrices of finite groups and a proof of a hypermatrix Rayleigh quotient inequality.
This is joint work with Yuval Filmus.
20160202T133000
20160202T143000
0
Seminar: Edinah Gnang (Purdue University) @ Whitehead 304
ai1ec-6098@engineering.jhu.edu/ams
20180716T042154Z
Title: Using Integer Programming for Solving Nonconvex Quadratic Programs with Box Constraints
We discuss effective computational techniques for solving nonconvex quadratic programs with box constraints (BoxQP). Cutting planes obtained from the well-known Boolean Quadric Polytope may be applied in this context, and we demonstrate the equivalence between the Chvatal-Gomory closure of a natural linear relaxation of (BoxQP) and the relaxation of the Boolean Quadric Polytope consisting of the odd-cycle inequalities. By using these cutting planes effectively at nodes of the branch-and-bound tree, in conjunction with additional integrality-based branching and a strengthened convex quadratic relaxation, we demonstrate that we can effectively solve a well-known family of test instances. Our new solver, GuBoLi, is orders of magnitude faster than existing commercial and open-source solvers.
20160204T133000
20160204T143000
0
Seminar: Jeff Linderoth (University of Wisconsin- Madison) @ Whitehead 304
ai1ec-6185@engineering.jhu.edu/ams
20180716T042154Z
Title: Important Features PCA (IF-PCA) for Large-Scale Inference, with Applications in Gene Microarrays
Abstract:
Identification of sample labels is a major problem in statistics with many applications. In the Big Data era, it faces two main challenges: (1) the number of features is much larger than the sample size; (2) the signals are sparse and weak, masked by a large amount of noise.
We propose a new tuning-free clustering procedure for high-dimensional data, Important Features PCA (IF-PCA). IF-PCA consists of a feature selection step, a PCA step, and a k-means step. The first two steps reduce the data dimensions recursively, while the main information is preserved. As a consequence, IF-PCA is fast and accurate, producing competitive performance in application to 10 gene microarray data sets.
We also generalize IF-PCA for the signal recovery and hypothesis testing problems. With IF-PCA and two aggregation methods, we find the statistical limits for these three problems.
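A schematic numpy rendering of the three IF-PCA steps on synthetic two-cluster data. The screening statistic below (per-feature variance) is a crude stand-in for the paper's Kolmogorov-Smirnov screening with a data-driven threshold, and all sizes and signal strengths are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
# 100 samples, 500 features; only the first 20 features carry the
# two-cluster signal, the remaining 480 are pure noise.
labels = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 500))
X[labels == 1, :20] += 2.0

# Step 1 (IF): screen features.  Variance stands in here for the
# paper's Kolmogorov-Smirnov screening with a data-driven threshold.
keep = np.argsort(X.var(axis=0))[-20:]

# Step 2 (PCA): project the screened columns onto the top singular vector.
Xs = X[:, keep] - X[:, keep].mean(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
proj = U[:, :1] * S[:1]

# Step 3 (k-means, k = 2): one-dimensional Lloyd iterations.
c = np.array([proj.min(), proj.max()])
for _ in range(20):
    assign = (np.abs(proj - c[0]) > np.abs(proj - c[1])).astype(int).ravel()
    c = np.array([proj[assign == k].mean() for k in (0, 1)])

# Misclustering rate, up to the arbitrary swap of cluster labels.
err = min(np.mean(assign != labels), np.mean(assign == labels))
```

Because the screening and PCA steps discard the noise coordinates before clustering, the k-means step operates on a low-dimensional projection where the two groups are well separated.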
20160209T133000
20160209T143000
0
Seminar: Wanjie Wang (University of Pennsylvania) @ Whitehead 304
ai1ec-6344@engineering.jhu.edu/ams
20180716T042154Z
Stochastic evolutionary modeling of cancer development and resistance to treatment
Cancer is the result of a stochastic evolutionary process characterized by the accumulation of mutations that are responsible for tumor growth, immune escape, and drug resistance, as well as mutations with no effect on the phenotype. Stochastic modeling can be used to describe the dynamics of tumor cell populations and obtain insights into the hidden evolutionary processes leading to cancer. I will present recent approaches that use branching process models of cancer evolution to quantify intra-tumor heterogeneity and the development of drug resistance, and their implications for interpretation of cancer sequencing data and the design of optimal treatment strategies.
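A toy discrete-time two-type branching (Galton-Watson) simulation in the spirit of such models: sensitive cells occasionally mutate to a resistant type, and treatment suppresses only the sensitive clone. All rates, population sizes, and the treatment schedule are illustrative choices, not parameters from the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(generations=60, treat_at=30, mu=1e-3):
    """Two-type Galton-Watson sketch: each cell leaves 2 offspring
    (division) or 0 (death) per generation.  Sensitive daughters
    mutate to a resistant type with probability mu; treatment (from
    generation `treat_at`) makes sensitive cells subcritical while
    leaving resistant cells supercritical."""
    sensitive, resistant = 1000, 0
    history = [(sensitive, resistant)]
    for g in range(generations):
        s_birth = 0.55 if g < treat_at else 0.35   # treatment effect
        s_off = 2 * rng.binomial(sensitive, s_birth)
        r_off = 2 * rng.binomial(resistant, 0.55)
        mutants = rng.binomial(s_off, mu)
        sensitive, resistant = s_off - mutants, r_off + mutants
        history.append((sensitive, resistant))
    return history

hist = simulate()
# Before treatment the sensitive clone dominates; afterwards it
# collapses while the pre-existing resistant subclone keeps expanding.
```

Quantities like the size of the resistant subclone at the start of treatment are exactly what the branching-process analyses described above characterize analytically.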
20160210T133000
20160210T143000
0
Seminar: Ivana Bozic (Harvard University)
free
ai1ec-6188@engineering.jhu.edu/ams
20180716T042154Z
Title: Universality in numerical computations with random data
Abstract: This talk will concern recent progress on the statistical analysis of numerical algorithms with random initial data. In particular, with appropriate randomness, the fluctuations of the iteration count (halting time) of numerous numerical algorithms have been demonstrated to be universal, i.e., independent of the distribution on the initial data. This phenomenon has given new insights into random matrix theory. Furthermore, estimates from random matrix theory allow for fluctuation limit theorems for simple algorithms and halting time estimates for others. The universality in the halting time is directly related to the experimental work of Bakhtin and Correll on neural computation and human decision-making times.
20160211T133000
20160211T143000
0
Seminar: Tom Trogdon (New York University) @ Whitehead 304
free
ai1ec-6348@engineering.jhu.edu/ams
20180716T042154Z
Bringing Moneyball to Campaigns
Over the past decade, an entire industry has grown up around the use of data to help campaigns be more efficient and effective. Whether it is trying to identify that last persuadable voter or allocating resources to get your supporters out to the polls, today’s campaigns often rely on a staff of data analysts, statisticians and modelers. Together, data and analytics help identify which voters to target and what actions to take to generate the votes where they are needed.
In this talk I will introduce the tools and techniques involving data, analytics, and experimentation used by campaigns. We will discuss where many of these techniques came from and how they evolved in politics to culminate in President Obama’s 2012 campaign. This survey of the data-driven campaigns will include polling, micro-targeting and random controlled experiments.
Please join HUSAM in welcoming Dr. Ben Yuhas to the Johns Hopkins University community!
20160211T190000
20160211T200000
0
HUSAM: Dr. Ben Yuhas (Principal of the Yuhas Consulting Group, LLC) @ Gilman 50
free
ai1ec-6198@engineering.jhu.edu/ams
20180716T042154Z
Title: Controlling a Thermal Fluid: Theoretical and Computational Issues
Abstract: We first discuss the problem of designing a feedback law which locally stabilizes a two dimensional thermal fluid modeled by the Boussinesq equations. The problem was motivated by the design and operation of low energy consumption buildings. The investigation of stability for a fluid flow in the natural convection problem is important in the theory of hydrodynamical stability. The challenge of stabilization of the Boussinesq equations arises from the stabilization of the Navier-Stokes equations and its coupling with the convection-diffusion equation for temperature. In our current work, we are interested in stabilizing a possible unstable steady state solution to the Boussinesq equations on a bounded and connected domain. We show that a finite number of controls acting on a part of the boundary through Neumann/Robin boundary conditions is sufficient to stabilize the full nonlinear equations in the neighborhood of this steady state solution. Dirichlet boundary conditions are imposed on the rest of the boundary. Moreover, we prove that a stabilizing feedback control law can be obtained based on the partial estimation of the system state by solving an extended Kalman filter problem for the linearized Boussinesq equations. In particular, a reduced order model is derived to construct a finite dimensional estimator. Numerical results are provided to illustrate the idea. In the end, we discuss the problem of control design for the Boussinesq equations with zero diffusivity and its application to optimal mixing, mass and energy transport during processing.
20160216T133000
20160216T143000
0
Seminar: Weiwei Hu (University of Minnesota) @ Whitehead 304
ai1ec-6082@engineering.jhu.edu/ams
20180716T042154Z
Robust and efficient collocation methods for parameterized models
Monte Carlo (MC) methods for the construction of polynomial approximations are effective tools for building a computational surrogate of the parametric variation for a model response. In this talk we investigate least-squares regularization of noisy data and compressive sampling recovery of sparse representations. We wish to minimize the number of samples required for a stable and accurate procedure. We propose an algorithm for a particular kind of weighted Monte Carlo approximation method based on sampling from the pluripotential equilibrium measure. Standard MC methods suffer from poor stability and accuracy for high-order approximations, but the properties of the equilibrium measure allow us to derive quasi-optimal statements of mathematical recoverability in both over- or undersampled regression problems. We also show that such an approach typically yields very stable, high-order computational algorithms for parameterized PDE approximation. We present theoretical analysis to motivate the algorithm, and numerical results to illustrate that equilibrium measure-based approaches are superior to standard MC methods in many situations of interest, notably in high-dimensional scenarios.
20160218T133000
20160218T143000
0
Seminar: Akil Narayan (University of Utah) @ Whitehead 304
ai1ec-6199@engineering.jhu.edu/ams
20180716T042154Z
Title:
Structure-Enhancing Algorithms for Statistical Learning Problems
Abstract:
For many problems in statistical machine learning and data-driven decision-making, massive datasets necessitate the use of scalable algorithms that deliver sensible (interpretable) and statistically sound solutions. In this talk, we discuss several scalable algorithms that directly promote well-structured solutions in two related contexts: (i) sparse high-dimensional linear regression, and (ii) low-rank matrix completion, both of which are particularly relevant in modern machine learning. In the context of linear regression, we study several boosting algorithms – which directly promote sparse solutions – from the perspective of modern first-order methods in convex optimization. We use this perspective to derive the first-ever computational guarantees for existing boosting methods and to develop new algorithms with associated computational guarantees as well. In the context of matrix completion, we present an extension of the Frank-Wolfe method in convex optimization that is designed to induce near-optimal low-rank solutions for regularized matrix completion problems, and we derive computational guarantees that trade off between low-rank structure and data fidelity. For both problem contexts, we present computational results using datasets from microarray and recommender system applications.
20160223T133000
20160223T143000
0
Seminar: Paul Grigas (MIT) @ Whitehead 304
free
ai1ec-6191@engineering.jhu.edu/ams
20180716T042154Z
Title: Recent theoretic and algorithmic advances in graph matching
Abstract: Inference across multiple graphs arises naturally in disciplines as varied as neuroscience, physics, and sociology. In a number of methodologies for joint inference across graphs, however, it is assumed that an explicit vertex correspondence is a priori known across the vertex sets of the graphs. While this assumption is often reasonable, in practice these correspondences may be unobserved and/or errorfully observed, and graph matching—aligning a pair of graphs to minimize their edge disagreements—is used to align the graphs before performing subsequent inference. Graph matching is a computationally challenging and well-studied problem, but few existing algorithms have theoretical support for their performance. For tractability, many algorithms begin by relaxing the problem’s binary constraints, thus rendering applicable gradient-descent methodologies. We develop a state-of-the-art algorithm for solving an indefinite relaxed graph matching problem, and we show that under mild model assumptions, our indefinite relaxation (when solved exactly) almost always uncovers the optimal permutation, while the commonly used convex relaxation almost always fails to identify the optimal permutation. We highlight some of the practical and theoretical implications of these results on real and synthetic data, and we discuss recent work towards formalizing the connection between graph matching and pairwise mutual information.
20160225T133000
20160225T143000
0
Seminar: Vince Lyzinski (JHU) @ Whitehead 304
free
ai1ec-6111@engineering.jhu.edu/ams
20180716T042154Z
Mediation: From Intuition to Data Analysis.
Modern causal inference links the “top-down” representation of causal intuitions and “bottom-up” data analysis with the aim of choosing policy. Two innovations that proved key for this synthesis were a formalization of Hume’s counterfactual account of causation using potential outcomes (due to Jerzy Neyman), and viewing cause effect relationships via directed acyclic graphs (due to Sewall Wright). I will briefly review how a synthesis of these two ideas was instrumental in formally representing the notion of “causal effect” as a parameter in the language of potential outcomes, and discuss a complete identification theory linking these types of causal parameters and observed data, as well as approaches to estimation of the resulting statistical parameters.
I will then describe, in more detail, how my collaborators and I are applying the same approach to mediation, the study of effects along particular causal pathways. I consider mediated effects at their most general: I allow arbitrary models, the presence of hidden variables, multiple outcomes, longitudinal treatments, and effects along arbitrary sets of causal pathways. As was the case with causal effects, there are three distinct but related problems to solve — a representation problem (what sort of potential outcome does an effect along a set of pathways correspond to), an identification problem (can a causal parameter of interest be expressed as a functional of observed data), and an estimation problem (what are good ways of estimating the resulting statistical parameter). I report a complete solution to the first two problems, and progress on the third. In particular, my collaborators and I show that for some parameters that arise in mediation settings, triply robust estimators exist, which rely on an outcome model, a mediator model, and a treatment model, and which remain consistent if any two of these three models are correct.
Some of the reported results are a joint work with Eric Tchetgen, Caleb Miles, Phyllis Kanki, and Seema Meloni.
20160303T133000
20160303T143000
0
Seminar: Ilya Shpitser (Johns Hopkins University) @ Whitehead 304
free
ai1ec-6462@engineering.jhu.edu/ams
20180716T042154Z
Title: Arbitrage-Free Pricing of XVA.
Abstract: We develop a framework for computing the total valuation adjustment (XVA) of a European claim accounting for funding costs, counterparty credit risk, and collateralization. Based on no-arbitrage arguments, we derive backward stochastic differential equations
(BSDEs) associated with the replicating portfolios of long and short positions in the claim. This leads to the definition of buyer’s and seller’s XVA, which in turn identify a no-arbitrage interval. In the case that borrowing and lending rates coincide, we provide a fully explicit expression for the uniquely determined XVA, expressed as a percentage of the price of the traded claim, and for the corresponding replication strategies. In the general case of asymmetric funding, repo and collateral rates, we study the semi-linear partial differential equation (PDE) characterizing buyer’s and seller’s XVA and show the existence of a unique classical solution to it. To illustrate our results, we conduct a numerical study demonstrating how funding costs, repo rates, and counterparty risk contribute to determine the total valuation adjustment. This talk is based on joint works with Agostino Capponi (Columbia) and Stephan Sturm (WPI).
20160324T133000
20160324T143000
0
Seminar: Maxim Bichuch (JHU) @ Whitehead 304
free
ai1ec-6116@engineering.jhu.edu/ams
20180716T042154Z
Title: Co-clustering of nonsmooth graphons
Abstract:
Theoretical results are becomming known for community detection and clustering of networks; however, these results assume an idealized generative model that is unlikely to hold in many settings. Here we consider exploratory co-clustering of a bipartite network, where the rows and columns of the adjacency matrix are assumed to be samples from an arbitrary population. This is equivalent to assuming that the data is generated from a nonparametric model known as a graphon. We show that co-clusters found by any method can be extended to the row and column populations, or equivalently that the estimated blockmodel approximates a blocked version of the generative graphon, with generalization error bounded by n^{-1/2}. Analogous results are also shown for degree-corrected co-blockmodels and random dot product bipartite graphs, with error rates depending on the dimensionality of the latent variable space.
20160331T133000
20160331T143000
0
Seminar: David Choi (CMU) @ Whitehead 304
free
ai1ec-6466@engineering.jhu.edu/ams
20180716T042154Z
Title: Distributed proximal gradient methods for cooperative multi-agent consensus optimization
Abstract:
In this talk, I will discuss decentralized methods for solving cooperative multi-agent consensus optimization problems. Consider an undirected network of agents, where only those agents connected by an edge can directly communicate with each other. The objective is to minimize the sum of agent-specific composite convex functions, i.e., each term in the sum is a private cost function belonging to an agent. In the first part, I will discuss the unconstrained case, and in the second part I will focus on the constrained case, where each agent has a private conic constraint set. For the constrained case the optimal consensus decision should lie in the intersection of these private sets. This optimization model abstracts a number of applications in machine learning, distributed control, and estimation using sensor networks. I will discuss different types of distributed algorithms; in particular, I will describe methods based on inexact augmented Lagrangian, and linearized ADMM. I will provide convergence rates both in sub-optimality error and consensus violation; and also examine the effect of underlying network topology on the convergence rates of the proposed decentralized algorithms.
Joint work with Ph.D. students Zi Wang, Erfan Yazdandoost, and Shiqian Ma from Chinese University of Hong Kong, and Garud Iyengar from Columbia University.
20160407T133000
20160407T143000
0
Seminar: Serhat Aybat (Penn State University) @ Whitehead 304
free
ai1ec-6084@engineering.jhu.edu/ams
20180716T042154Z
Title: Scalable Bayesian Models of Interacting Time Series
Abstract:
Data streams of increasing complexity and scale are being collected in a variety of fields ranging from neuroscience, genomics, and environmental monitoring to e-commerce. Modeling the intricate and possibly evolving relationships between the large collection of series can lead to increased predictive performance and domain-interpretable structures. For scalability, it is crucial to discover and exploit sparse dependencies between the data streams. Such representational structures for independent data sources have been studied extensively, but have received limited attention in the context of time series. In this talk, we present a series of Bayesian models for capturing such sparse dependencies via clustering, graphical models, and low-dimensional embeddings of time series. We explore these methods in a variety of applications, including house price modeling and inferring networks in the brain.
We then turn to observed interaction data, and briefly touch upon how to devise statistical network models that capture important network features like sparsity of edge connectivity. Within our Bayesian framework, a key insight is to move to a continuous-space representation of the graph, rather than the typical discrete adjacency matrix structure. We demonstrate our methods on a series of real-world networks with up to hundreds of thousands of nodes and millions of edges.
Bio:
Emily Fox is currently the Amazon Professor of Machine Learning in the Statistics Department at the University of Washington. She received a S.B. in 2004 and Ph.D. in 2009 from the Department of Electrical Engineering and Computer Science at MIT. She has been awarded a Sloan Research Fellowship (2015), an ONR Young Investigator award (2015), an NSF CAREER award (2014), the Leonard J. Savage Thesis Award in Applied Methodology (2009), and the MIT EECS Jin-Au Kong Outstanding Doctoral Thesis Prize (2009). Her research interests are in large-scale Bayesian dynamic modeling and computations.
20160414T133000
20160414T143000
0
Seminar: Emily Fox (University of Washington) @ Whitehead 304
free
ai1ec-6032@engineering.jhu.edu/ams
20180716T042154Z
Title: Quickest detection in correlated and coupled systems
Abstract:
In this works we consider the problem N-dimensional quickest detection in correlated and coupled systems. The objective is to detect the first time that the system of N sensors undergoes a change with a one shot communication to the central fusion center.
In both cases it is seen that the minimum of N – cumulative sum tests with appropriately chosen thresholds is asymptotically optimal in managing the trade off between a small detection delay and a large mean time to first False alarm as the mean time to the first false alarm increases without bound. In the former case a Linear penalty is used for detection delay while in the latter a Kulback- Leibler distance of the measure before and after regime switching is used.
20160421T133000
20160421T143000
0
Seminar: Olympia Hadjiliadis (City University of New York, Brooklyn) @ Whitehead 304
free
ai1ec-6440@engineering.jhu.edu/ams
20180716T042154Z
Movie Reconstruction from Brain Signals: “Mind-Reading”
In a thrilling breakthrough at the intersection of neuroscience and statistics, penalized Least Squares methods have been used to construct a “mind-reading” algorithm that reconstructs movies from fMRI brain signals. The story of this algorithm is a fascinating tale of the interdisciplinary research that led to the development of the system which was selected as one of Time Magazine’s 50 Best Inventions of 2011. Talk 1: Movie Reconstruction from Brain Signals: “Mind-Reading”
20160427T133000
20160427T143000
0
Duncan Lecture Series: Bin Yu (University of California Berkeley) @ Gilman 50
free
ai1ec-6112@engineering.jhu.edu/ams
20180716T042154Z
Unveiling the mysteries in spatial gene expression
Genome-wide data reveal an intricate landscape where gene activities are highly differentiated across diverse spatial areas. These gene actions and interactions play a critical role in the development and function of both normal and abnormal tissues. As a result, understanding spatial heterogeneity of gene networks is key to developing treatments for human diseases. Despite the abundance of recent spatial gene expression data, extracting meaningful information remains a challenge for local gene interaction discoveries. In response, we have developed staNMF, a method that combines a powerful unsupervised learning algorithm, nonnegative matrix factorization (NMF), with a new stability criterion that selects the size of the dictionary. Using staNMF, we generate biologically meaningful Principle Patterns (PP), which provide a novel and concise representation of Drosophila embryonic spatial expression patterns that correspond to pre-organ areas of the developing embryo. Furthermore, we show how this new representation can be used to automatically predict manual annotations, categorize gene expression patterns, and reconstruct the local gap gene network with high accuracy. Finally, we discuss on-going crispr/cas9 knock-out experiments on Drosophila to verify predicted local gene-gene interactions involving gap-genes. An open-source software is also being built based on SPARK and Fiji.
This talk is based on collaborative work of a multi-disciplinary team (co-lead Erwin Frise) from the Yu group (statistics) at UC Berkeley, the Celniker group (biology) at the Lawrence Berkeley National Lab (LBNL), and the Xu group (computer science) at Hsinghua Univ.
20160428T133000
20160428T143000
0
Duncan Lecture Series: Bin Yu (University of California Berkeley) @ Mergenthaler 111
free
ai1ec-6970@engineering.jhu.edu/ams
20180716T042154Z
20160901T133000
20160901T143000
0
Seminar: Getting to Know You @ Whitehead 304
free
ai1ec-7268@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
Statistics of the Stability Bounds in the Phase Retrieval Problem
In this talk we present a local-global Lipschitz analysis of the phase retrieval problem. Additionally we present tentative estimates of the tail-bound for the distribution of the global Lipschitz constants. Specifically it is known that if the frame {f1,…,fm} for Cn is phase retrievable then there are constants a0 and b0 so that for every x,y∈Cn: a0∣∣xx*-yy*∣∣12≤∑k=1m∣∣〈x,fk〉∣2-∣〈y,fk〉∣2∣2≤b0∣∣xx*-yy*∣∣12. Assumef1,…,fm are independent realizations with entries from CN(0,1). In this talk we establish estimates for the probability P(a0>a).
20160907T150000
20160907T160000
0
Data Seminar: Radu Balan (University of Maryland College Park) @ Shaffer 100
free
ai1ec-6966@engineering.jhu.edu/ams
20180716T042154Z
Title: To Replace or Not to Replace in Finite Population Sampling
Abstract:
We revisit the classical result in finite population sampling which states that in equally-likely “simple” random sampling the sample mean is more reliable when we do not replace after each draw. In this talk, we review a classical result for the equally likely sampling case. Then we investigate if and when the same is true for samples where it may no longer be true that each member of the population has an equal chance of being selected, and when the population mean is estimated using the Horvitz-Thompson inverse probability weighing to produce an unbiased estimator. For a certain class of sampling schemes, we are able to obtain convenient expressions for the variance of the sample mean and surprisingly, we find that for some selection distributions a more reliable estimate of the population mean will happen by replacing after each draw. We show for selection distributions lying in a certain polytope the classical result prevails.
This is joint work with Fred Torcaso.
20160908T133000
20160908T143000
0
Seminar: Dan Naiman (JHU) @ Whitehead 304
free
ai1ec-7272@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
Finite-Sample Bounds for Geometric Multiresolution Analysis
20160914T150000
20160914T160000
0
Data Seminar: Nate Strawn (Georgetown University) @ Krieger 309
free
ai1ec-6996@engineering.jhu.edu/ams
20180716T042154Z
Stochastic Newton Methods for Machine Learning
Optimization methods play a crucial role in supervised learning where they are employed to solve problems in very high dimensional parameter spaces. The optimization problems are inherently stochastic and often involve huge data sets. There has recently been much interest in sub-sampled Newton methods for these types of applications. The methods use approximations to the gradient and Hessian in a way that strikes a balance between computational effort and speed of convergence. We provide a review and analysis of sub-sampled Newton methods, which include Newton-sketch and non-uniform subsampling techniques, and illustrate their effectiveness on some large-scale machine learning applications.
Jorge Nocedal is the David and Karen Sachs Professor of Industrial Engineering and Management Sciences at Northwestern University. He received his PhD in Mathematical Sciences from Rice University and was a postdoctoral fellow at the Courant Institute. His research is in nonlinear optimization with applications to machine learning. Over the years, his work has spanned algorithms, analysis and software. He is a SIAM Fellow, has been an invited speaker at the International Congress of Mathematicians, and was awarded the 2012 George B. Dantzig Prize.
20160915T133000
20160915T143000
0
Goldman Lecture Series: Jorge Nocedal (Northwestern University)- Maryland 110
free
ai1ec-7352@engineering.jhu.edu/ams
20180716T042154Z
The JHU Actuarial Club will host an event on September 16th. The speaker JHU alum, Matt Sedlock, is currently working at Mass Mutual. He will share his experience working in the actuarial industry and discuss the recruiting process from Mass Mutual
20160916T180000
20160916T200000
0
What is an Actuary? @ Arellano Theatre, Levering Hall
free
ai1ec-7276@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
20160921T150000
20160921T160000
0
Data Seminar: Charles Meneveau (JHU) @ Krieger 309
free
ai1ec-6964@engineering.jhu.edu/ams
20180716T042154Z
High-Dimensional Analysis of Stochastic Algorithms for Convex and Nonconvex Optimization: Limiting Dynamics and Phase Transitions
Abstract
We consider efficient iterative methods (e.g., stochastic gradient descent, randomized Kaczmarz algorithms, iterative coordinate descent) for solving large-scale optimization problems, whether convex or nonconvex. A flurry of recent work has focused on establishing their theoretical performance guarantees. This intense interest is spurred on by the remarkably impressive empirical performance achieved by these low-complexity and memory-efficient methods.
In this talk, we will present a framework for analyzing the exact dynamics of these methods in the high-dimensional limit. For concreteness, we consider two prototypical problems: regularized linear regression (e.g. LASSO) and sparse principal component analysis. For each case, we show that the time-varying estimates given by the algorithms will converge weakly to a deterministic “limiting process” in the high-dimensional (scaling and mean-field) limit. Moreover, this limiting process can be characterized as the unique solution of a nonlinear PDE, and it provides exact information regarding the asymptotic performance of the algorithms. For example, performance metrics such as the MSE, the cosine similarity and the misclassification rate in sparse support recovery can all be obtained by examining the deterministic limiting process. A steady-state analysis of the nonlinear PDE also reveals interesting phase transition phenomenons related to the performance of the algorithms. Although our analysis is asymptotic in nature, numerical simulations show that the theoretical predictions are accurate for moderate signal dimensions.
What makes our analysis tractable is the notion of exchangeability, a fundamental property of symmetry that is inherent in many of the optimization problems encountered in signal processing and machine learning.
Bio
Yue M. Lu was born in Shanghai. After finishing undergraduate studies at Shanghai Jiao Tong University, he attended the University of Illinois at Urbana-Champaign, where he received the M.Sc. degree in mathematics and the Ph.D. degree in electrical engineering, both in 2007. He was a Research Assistant at the University of Illinois at Urbana-Champaign, and has worked for Microsoft Research Asia, Beijing, and Siemens Corporate Research, Princeton, NJ. Following his work as a postdoctoral researcher at the Audiovisual Communications Laboratory at Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, he joined Harvard University in 2010, where he is currently an Associate Professor of Electrical Engineering at the John A. Paulson School of Engineering and Applied Sciences.
He received the Most Innovative Paper Award (with Minh N. Do) of IEEE International Conference on Image Processing (ICIP) in 2006, the Best Student Paper Award of IEEE ICIP in 2007, and the Best Student Presentation Award at the 31st SIAM SEAS Conference in 2007. Student papers supervised and coauthored by him won the Best Student Paper Award (with Ivan Dokmanic and Martin Vetterli) of IEEE International Conference on Acoustics, Speech and Signal Processing in 2011 and the Best Student Paper Award (with Ameya Agaskar and Chuang Wang) of IEEE Global Conference on Signal and Information Processing (GlobalSIP) in 2014.
He has been an Associate Editor of the IEEE Transactions on Image Processing since 2014, an Elected Member of the IEEE Image, Video, and Multidimensional Signal Processing Technical Committee since 2015, and an Elected Member of the IEEE Signal Processing Theory and Methods Technical Committee since 2016. He received the ECE Illinois Young Alumni Achievement Award in 2015.
20160922T133000
20160922T143000
0
Seminar: Yue Lu (Harvard) @ Whitehead 304
free
ai1ec-7280@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
From Molecular Dynamics to Large Scale Inference
Molecular models and data analytics problems give rise to very large systems of stochastic differential equations (SDEs) whose paths are designed to ergodically sample multimodal probability distributions. An important challenge for the numerical analyst (or the data scientist, for that matter) is the design of numerical procedures to generate these paths. One of the interesting ideas is to construct stochastic numerical methods with close attention to the error in the invariant measure. Another is to redesign the underlying stochastic dynamics to reduce bias or locally transform variables to enhance sampling efficiency. I will illustrate these ideas with various examples, including a geodesic integrator for constrained Langevin dynamics [1] and an ensemble sampling strategy for distributed inference [2].
20160923T150000
20160923T160000
0
Data Seminar: Ben Leimkuhler (University of Edinburgh) @ TBD
free
ai1ec-7292@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
20160928T150000
20160928T160000
0
Data Seminar: Rene Vidal (JHU) @ Krieger 309
free
ai1ec-7204@engineering.jhu.edu/ams
20180716T042154Z
Title: Edge-coloring Multigraphs
Abstract: Graph (vertex) coloring is a central area of discrete math; however, it is NP-hard even to approximate the chromatic number. Edge-coloring can be seen as a special case of vertex coloring. As such, we may hope that computing (or approximating) the edge chromatic number is easier; in fact, it is. We will survey a number of theorems and conjectures on edge-coloring. These include many instances when the edge-chromatic number satisfies a trivial lower bound with equality, such as when it equals the graph’s maximum degree. I will also mention some of my recent work in this area, and introduce one of the main tools, Tashkinov trees, which rely on a beautiful double induction.
20160929T133000
20160929T143000
0
Seminar: Dan Cranston (Virginia Commonwealth University) @ Whitehead 304
free
ai1ec-7439@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
Geometric Methods for the Approximation of High-dimensional Dynamical Systems
I will discuss a geometry-based statistical learning framework for performing model reduction and modeling of stochastic high-dimensional dynamical systems. I will consider two complementary settings. In the first one, I am given long trajectories of a system, e.g. from molecular dynamics, and I discuss techniques for estimating, in a robust fashion, an effective number of degrees of freedom of the system, which may vary in the state space of then system, and a local scale where the dynamics is well-approximated by a reduced dynamics with a small number of degrees of freedom. I will then use these ideas to produce an approximation to the generator of the system and obtain, via eigenfunctions of an empirical Fokker-Planck question, reaction coordinates for the system that capture the large time behavior of the dynamics. I will present various examples from molecular dynamics illustrating these ideas. In the second setting I assume I only have access to a (large number of expensive) simulators that can return short simulations of high-dimensional stochastic system, and introduce a novel statistical learning framework for learning automatically a family of local approximations to the system, that can be (automatically) pieced together to form a fast global reduced model for the system, called ATLAS. ATLAS is guaranteed to be accurate (in the sense of producing stochastic paths whose distribution is close to that of paths generated by the original system) not only at small time scales, but also at large time scales, under suitable assumptions on the dynamics. I discuss applications to homogenization of rough diffusions in low and high dimensions, as well as relatively simple systems with separations of time scales, and deterministic chaotic systems in high-dimensions, that are well-approximated by stochastic differential equations.
No knowledge of molecular dynamics is required, and the techniques above are quite universal. Ideas in the first part of the talk are based on what is called Diffusion Geometry, and have been used widely in data analysis; ideas in the second part are applicable to MCMC. The talk will be accessible to students with a wide variety of backgrounds and interests.
20161005T150000
20161005T160000
0
Data Seminar: Mauro Maggioni (JHU) @ Krieger 309
free
ai1ec-7124@engineering.jhu.edu/ams
20180716T042154Z
Title:
Stochastic Search Methods for Simulation Optimization
Abstract:
A variety of systems arising in finance, engineering design, and manufacturing require the use of optimization techniques to improve their performance. Due to the complexity and stochastic dynamics of such systems, their performance evaluation frequently requires computer simulation, which, however, often lacks the structure needed by classical optimization methods. We developed a gradient-based stochastic search approach, based on the idea of converting the original (structure-lacking) problem to a differentiable optimization problem on the parameter space of a sampling distribution that guides the search. A two-timescale updating scheme is further studied and incorporated to improve the algorithm's efficiency. Convergence properties of our approach are established through techniques from stochastic approximation, and the performance of our algorithms is illustrated in comparison with some state-of-the-art simulation optimization methods. This is joint work with Jiaqiao Hu (Stony Brook University) and Shalabh Bhatnagar (Indian Institute of Science).
Biography:
Enlu Zhou is currently an associate professor in the H. Milton Stewart School of Industrial & Systems Engineering at Georgia Institute of Technology. Prior to joining Georgia Tech in 2013, she was an assistant professor in the Industrial & Enterprise Systems Engineering Department at the University of Illinois Urbana-Champaign from 2009 to 2013. She received the B.S. degree with highest honors in electrical engineering from Zhejiang University, China, in 2004, and the Ph.D. degree in electrical engineering from the University of Maryland, College Park, in 2009. Her research interests include stochastic control, simulation optimization, and Monte Carlo statistical methods. She is a recipient of the “Best Theoretical Paper” award at the Winter Simulation Conference in 2009, the AFOSR Young Investigator award in 2012, and the NSF CAREER award in 2015.
20161006T133000
20161006T143000
0
Seminar: Enlu Zhou (Georgia Tech) @ Whitehead 304
free
ai1ec-7463@engineering.jhu.edu/ams
20180716T042154Z
20161010T190000
20161010T213000
0
HUSAM Event: Global Hedge Fund Marshall Wace Recruiting Event
free
thumbnail;http://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2016/10/Husam-300x232.jpg;730;565,medium;http://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2016/10/Husam-300x232.jpg;730;565,large;http://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2016/10/Husam-300x232.jpg;730;565,full;http://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2016/10/Husam-300x232.jpg;730;565
ai1ec-7296@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
Modeling the dynamics of interacting particles by means of stochastic networks
Materials science has been developing rapidly in recent years. A variety of particles interacting according to different kinds of pair potentials have been produced in experimental works. Looking into the future, one can imagine controlled self-assembly of particles into clusters of desired structures, leading to the creation of new types of materials. Analytical studies of self-assembly involve coping with difficulties associated with the huge number of configurations, high dimensionality, complex geometry, and unacceptably large CPU times. A feasible approach to the study of self-assembly consists of mapping the collections of clusters onto stochastic networks (continuous-time Markov chains) and analyzing their dynamics. Vertices of the networks represent local minima of the potential energy of the clusters, while arcs connect only those pairs of vertices that correspond to local minima between which direct transitions are physically possible. Transition rates along the arcs are the transition rates between the corresponding pairs of local minima. Such networks are mathematically tractable and, at the same time, preserve important features of the underlying dynamics. Nevertheless, their huge size and complexity render their analysis challenging and call for the development of new mathematical techniques. I will discuss some approaches to the construction and analysis of such networks.
20161012T150000
20161012T160000
0
Data Seminar: Maria Cameron (University of Maryland College Park) @ Krieger 309
free
ai1ec-7212@engineering.jhu.edu/ams
20180716T042154Z
Title:
Leveraged Funds: Robust Replication and Performance Evaluation
Abstract:
Leveraged and inverse ETFs seek a daily return equal to a multiple of an index's return, an objective that requires continuous portfolio rebalancing. The resulting trading costs create a tradeoff between tracking error, which controls the short-term correlation with the index, and excess return (or tracking difference), the long-term deviation from the leveraged index's performance. With proportional trading costs, the optimal replication policy is robust to the index's dynamics. A summary of a fund's performance is the implied spread, equal to the product of tracking error and excess return, rescaled for leverage and average volatility. The implied spread is insensitive to the benchmark's risk premium and offers a tool to compare the performance of funds tracking the same index with different factors and tracking errors.
http://ssrn.com/abstract=2839852
20161013T133000
20161013T143000
0
Seminar: Paolo Guasoni (Boston University) @ Whitehead 304
free
ai1ec-7300@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
20161026T150000
20161026T160000
0
Data Seminar: Robert Pego (Carnegie Mellon University) @ Krieger 309
free
ai1ec-6963@engineering.jhu.edu/ams
20180716T042154Z
Adaptive Contrast Weighted Learning and Tree-based Reinforcement Learning for Multi-Stage Multi-Treatment Decision-Making
Dynamic treatment regimes (DTRs) are sequential decision rules that focus simultaneously on treatment individualization and adaptation over time. We develop robust and flexible semiparametric and machine learning methods for estimating optimal DTRs. In this talk, we present a dynamic statistical learning method, adaptive contrast weighted learning (ACWL), which combines doubly robust semiparametric regression estimators with flexible machine learning methods. ACWL can handle multiple treatments at each stage and does not require prespecifying candidate DTRs. At each stage, we develop robust semiparametric regression-based contrasts with the adaptation of treatment effect ordering for each patient, and the adaptive contrasts simplify the problem of optimization with multiple treatment comparisons to a weighted classification problem that can be solved with existing machine learning techniques. We further develop a tree-based reinforcement learning (T-RL) method to directly estimate optimal DTRs in a multi-stage multi-treatment setting. At each stage, T-RL builds an unsupervised decision tree that maintains the nature of batch-mode reinforcement learning. Unlike ACWL, T-RL handles the optimization problem with multiple treatment comparisons directly through the purity measure constructed with augmented inverse probability weighted estimators. By combining robust semiparametric regression with flexible tree-based learning, T-RL is robust, efficient and easy to interpret for the identification of optimal DTRs. However, ACWL seems more robust to tree-type misspecification than T-RL when the true optimal DTR is non-tree-type. We illustrate the performance of both methods in simulations and case studies.
20161027T133000
20161027T143000
0
Seminar: Lu Wang (University of Michigan) @ Whitehead 304
free
ai1ec-7304@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
Variational problems on graphs and their continuum limits
We will discuss variational problems arising in machine learning and their limits as the number of data points goes to infinity. Consider point clouds obtained as random samples of an underlying “ground-truth” measure. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points. Many machine learning tasks, such as clustering and classification, can be posed as minimizing functionals on such graphs. We consider functionals involving graph cuts and graph Laplacians and their limits as the number of data points goes to infinity. In particular, we establish under what conditions the minimizers of discrete problems have a well-defined continuum limit, and characterize the limit. The talk is primarily based on joint work with Nicolas Garcia Trillos, as well as on works with Xavier Bresson, Moritz Gerlach, Matthias Hein, Thomas Laurent, James von Brecht and Matt Thorpe.
20161102T150000
20161102T160000
0
Data Seminar: Dejan Slepcev (Carnegie Mellon University) @ Krieger 309
free
ai1ec-6965@engineering.jhu.edu/ams
20180716T042154Z
An Introduction to Distance Preserving Projections of Smooth Manifolds
Manifold-based image models are assumed in many engineering applications involving imaging and image classification. In the setting of image classification, in particular, proposed designs for small and cheap cameras motivate compressive imaging applications involving manifolds. Interesting mathematics results when one considers that the problem one needs to solve in this setting ultimately involves questions concerning how well one can embed a low-dimensional smooth submanifold of high-dimensional Euclidean space into a much lower dimensional space without knowing any of its detailed structure. We will motivate this problem and discuss how one might accomplish this seemingly difficult task using random projections. Few if any prerequisites will be assumed beyond linear algebra and some probability.
20161103T133000
20161103T143000
0
Seminar: Mark Iwen (Michigan State University) @ Whitehead 304
free
ai1ec-7308@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
Scalable Information Inequalities for Uncertainty Quantification in high dimensional probabilistic models
In this talk we discuss new scalable information bounds for quantities of interest of complex stochastic models. The scalability of these inequalities allows us to (a) obtain uncertainty quantification bounds for quantities of interest in high-dimensional systems and/or for long-time stochastic dynamics; (b) assess the impact of large model perturbations, such as in nonlinear response regimes in statistical mechanics; (c) address model-form uncertainty, i.e. compare different extended probabilistic models and corresponding quantities of interest. We demonstrate these tools in fast sensitivity screening of chemical reaction networks with a very large number of parameters, and in obtaining robust and tight uncertainty quantification bounds for phase diagrams in statistical mechanics models.
20161109T150000
20161109T160000
0
Data Seminar: Markos Katsoulakis (University of Massachusetts Amherst) @ Krieger 309
free
ai1ec-8252@engineering.jhu.edu/ams
20180716T042154Z
20161114T193000
20161114T213000
0
HUSAM Presents: The Many Faces of Logic @ Mergenthaler 111
free
ai1ec-6997@engineering.jhu.edu/ams
20180716T042154Z
Slipping Through the Cracks: Detecting Manipulation in Regional Commodity Markets
Reid B. Stevens[1] and Jeffery Y. Zhang[2]
Between 2010 and 2014, the regional price of aluminum in the United States (Midwest premium) increased 400 percent. We argue that the Midwest premium was likely manipulated during this period through the exercise of market power in the aluminum storage market. We first use a difference-in-differences model to show that there was a statistically significant increase of $0.07 per pound in the regional price of aluminum relative to the regional price of a production complement, copper. We then use several instrumental variables to show that this increase was driven by a single financial company’s accumulation of an unprecedented level of aluminum inventories in Detroit. Since this scheme targeted the regional price of aluminum, regulators who monitored only spot and futures prices would not have noticed anything peculiar. We therefore present an algorithm for real-time detection of similar manipulation schemes in regional commodity markets. The algorithm confirms the existence of a structural break in the U.S. aluminum market in late 2011. Using the algorithm, regulators could have detected the scheme as early as December 2012, more than six months before it was publicized by an article in The New York Times. We also apply the algorithm to another suspected case of regional price manipulation in the European aluminum market and find a similar break in 2011, suggesting the scheme may have been implemented beyond the United States.
[1] Department of Agricultural Economics, Texas A&M University, stevens@tamu.edu
[2] Department of Economics, Yale University and Harvard Law School, jeffery.zhang@yale.edu
20161117T133000
20161117T143000
0
Seminar: Reid Stevens (Texas A&M University) @ Whitehead 304
free
ai1ec-7312@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
20161130T150000
20161130T160000
0
Data Seminar: Youssef Marzouk (Massachusetts Institute of Technology) @ Krieger 309
free
ai1ec-8216@engineering.jhu.edu/ams
20180716T042154Z
Title: Spectral Clustering for Dynamic Stochastic Block Model
Abstract: One of the most common and crucial aspects of many network data sets is the dependence of network link structure on time. In this work, we extend the existing (static) nonparametric latent variable model to the context of time-varying networks, and thereby propose a class of dynamic network models. For some special cases of these models (namely the dynamic stochastic block model and the dynamic degree-corrected block model), which assume that there is a common clustering structure for all networks, we consider the problem of identifying the common clustering structure. We propose two extensions of the (standard) spectral clustering method for the dynamic network models, and give theoretical guarantees that the spectral clustering methods produce consistent community detection in the case of both the dynamic stochastic block model and the dynamic degree-corrected block model. The methods are shown to work under sufficiently mild conditions on the number of time snapshots to detect both associative and dissociative community structure, even if all the individual networks are very sparse and most of the individual networks are below the community detectability threshold. We reinforce the validity of the theoretical results via simulations as well.
(Joint work with Shirshendu Chatterjee, CUNY)
20161201T133000
20161201T143000
0
Seminar: Sharmodeep Bhattacharyya (Oregon State University) @ Whitehead 304
free
ai1ec-7316@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
20161207T150000
20161207T160000
0
Data Seminar: Ben Adcock (Simon Fraser University) @ Krieger 309
free
ai1ec-8240@engineering.jhu.edu/ams
20180716T042154Z
Title: Online and Random-order Load Balancing Simultaneously
Abstract: We consider the problem of online load balancing under lp-norms: sequential jobs need to be assigned to one of the machines and the goal is to minimize the lp-norm of the machine loads. This generalizes the classical problem of scheduling for makespan minimization (the case l_infty) and has been thoroughly studied. We provide algorithms with simultaneously optimal guarantees for the worst-case model as well as for the random-order (i.e. secretary) model, where an arbitrary set of jobs comes in random order.
One of the main components is a new algorithm with improved regret for Online Linear Optimization (OLO) over the non-negative vectors in the lq ball. Interestingly, this OLO algorithm is also used to prove a purely probabilistic inequality that controls the correlations arising in the random-order model, a common source of difficulty for the analysis. A property that drives both our load balancing algorithms and our OLO algorithm is a smoothing of the lp-norm that may be of independent interest.
20161208T133000
20161208T143000
0
Seminar: Marco Molinaro (Pontifical Catholic University of Rio de Janeiro) @ Whitehead 304
free
ai1ec-8220@engineering.jhu.edu/ams
20180716T042154Z
Spatial-temporal modeling of the association between air pollution exposures and birth outcomes: identifying critical exposure windows
Exposure to high levels of air pollution during pregnancy has been linked to increased probability of adverse birth outcomes. We consider statistical models for evaluating associations between pollutants and birth outcomes, taking into account multipollutant exposures, susceptible windows in pregnancy, and variability in exposure over space and time. We consider geocoded vital records data from Texas as well as data from the National Birth Defects Prevention Study.
20170202T133000
20170202T143000
0
Wierman Lecture Series: Amy Herring (University of North Carolina Chapel Hill) @ Hodson 110
free
ai1ec-9274@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
20170208T150000
20170208T160000
0
Data Seminar: Kasso Okoudjou (University of Maryland, College Park) @ Whitehead 304
free
ai1ec-7404@engineering.jhu.edu/ams
20180716T042154Z
Title: Geometry, Shapes and PDEs
Abstract:
The interest in right-invariant metrics on the diffeomorphism group is fueled by its relations to hydrodynamics. Arnold noted in 1966 that Euler’s equations, which govern the motion of ideal, incompressible fluids, can be interpreted as geodesic equations on the group of volume-preserving diffeomorphisms with respect to a suitable Riemannian metric. Since then, other PDEs arising in physics have been interpreted as geodesic equations on manifolds of mappings. Examples include Burgers’ equation, the KdV and Camassa-Holm equations, and the Hunter-Saxton equation.
Another important motivation for the study of Riemannian metrics on manifolds of mappings can be found in the field of shape analysis and, in particular, in the eminent role of the diffeomorphism group in computational anatomy: the space of medical images is acted upon by the diffeomorphism group, and differences between images are encoded by diffeomorphisms in the spirit of Grenander’s pattern theory. The study of anatomical shapes can thus be reduced to the study of the diffeomorphism group.
Using these observations as a starting point, I will consider Riemannian metrics on spaces of mappings. I will discuss the local and global well-posedness of the corresponding geodesic equation, study the induced geodesic distance and present selected numerical examples of minimizing geodesics.
20170209T133000
20170209T143000
0
Seminar: Martin Bauer (Florida State University) @ Whitehead 304
free
ai1ec-9278@engineering.jhu.edu/ams
20180716T042154Z
20170215T150000
20170215T160000
0
Data Seminar: Jerome Darbon (Brown University) @ Whitehead 304
free
ai1ec-8440@engineering.jhu.edu/ams
20180716T042154Z
TITLE:
A local limit theorem for QuickSort key comparisons via multi-round smoothing
ABSTRACT:
It is a well-known result, due independently to Régnier (1989) and Rösler (1991), that the number of key comparisons required by the randomized sorting algorithm QuickSort to sort a list of n distinct items (keys) satisfies a global distributional limit theorem. We resolve an open problem of Fill and Janson from 2002 by using a multi-round smoothing technique to establish the corresponding local limit theorem.
This is joint work with Béla Bollobás and Oliver Riordan.
20170216T133000
20170216T143000
0
Seminar: Jim Fill (JHU) @ Whitehead 304
free
ai1ec-8420@engineering.jhu.edu/ams
20180716T042154Z
20170223T133000
20170223T143000
0
Seminar: Laurent Younes (JHU) @ Whitehead 304
free
ai1ec-9442@engineering.jhu.edu/ams
20180716T042154Z
Link to the slides from Tom Loredo’s seminar: JHU17-HierBayesCosmicPopns
20170302T133000
20170302T143000
0
Seminar: Tom Loredo (Cornell University) @ Whitehead 304
free
ai1ec-9470@engineering.jhu.edu/ams
20180716T042154Z
20170307T130000
20170307T140000
Hodson 3rd Floor Lobby
0
HUSAM Presents Coffee Chat
free
thumbnail;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2017/03/Coffee_Chat-002-300x139.jpg;300;139,medium;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2017/03/Coffee_Chat-002-300x139.jpg;300;139,large;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2017/03/Coffee_Chat-002-300x139.jpg;300;139,full;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2017/03/Coffee_Chat-002-300x139.jpg;300;139
ai1ec-9286@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
20170308T150000
20170308T160000
0
Data Seminar: Matthew Hirn (Michigan State University) @ Whitehead 304
free
ai1ec-9262@engineering.jhu.edu/ams
20180716T042154Z
Title: Mean Field Games: theory and applications
Abstract: We review the Mean Field Game paradigm introduced independently by Caines-Huang-Malhame and Lasry-Lions ten years ago, and we illustrate its relevance to applications with a few practical examples (bird flocking, room exit, systemic risk, cyber-security, …). We then review the probabilistic approach based on Forward-Backward Stochastic Differential Equations, and we derive the Master Equation from a version of the chain rule (Ito’s formula) for functions over flows of probability measures. Finally, motivated by the literature on economic models of bank runs, we introduce mean field games of timing and discuss new results, as well as some of the many remaining challenges.
20170309T133000
20170309T143000
0
Duncan Lecture Series: Rene Carmona (Princeton) @ Krieger 205
free
ai1ec-9266@engineering.jhu.edu/ams
20180716T042154Z
Title: Mean Field Games with Major and Minor Players: Theory and Numerics.
Abstract: We present a (possibly) new formulation of the mean field game problem in the presence of major and minor players, and give new existence results for linear quadratic models and models with finite state spaces. We shall also provide numerical results illustrating the theory and raising new challenging open problems.
20170310T133000
20170310T143000
0
Duncan Lecture Series: Rene Carmona (Princeton) @ Krieger 205
free
ai1ec-9483@engineering.jhu.edu/ams
20180716T042154Z
HUSAM is hosting a professional event with Deloitte. The event will be an overview of consulting at Deloitte. A panel of Deloitte practitioners will present on Deloitte’s BTA consulting track and health analytics, and answer questions.
20170313T183000
Arellano Theater in Levering Hall
0
HUSAM Deloitte Event
free
1
ai1ec-9258@engineering.jhu.edu/ams
20180716T042154Z
Title: Nuke the Clouds: Using nuclear norm optimization to remove clouds from satellite images
Abstract: We discuss how to use the nuclear norm and matrix factorization techniques to remove clouds from satellite images. The talk will focus on the key properties and variational inequalities that are commonly used in minimizing convex functions with a nuclear norm term. We will also contrast the convex formulations with the corresponding rank-constrained problems, which are highly non-convex but sometimes simpler to solve nonetheless. Finally, we will show many examples and demos of how this works in practice.
20170316T133000
20170316T143000
0
Seminar: Peder Olsen (IBM) @ Whitehead 304
free
ai1ec-9290@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
20170329T150000
20170329T160000
0
Data Seminar: Jason Eisner (JHU) @ Whitehead 304
free
ai1ec-8224@engineering.jhu.edu/ams
20180716T042154Z
Heuristics for Network Revenue Management
We consider a network revenue management problem with customer choice and exogenous prices. Such problems are central in several applications including airline ticket pricing. Given the infeasibility of explicitly finding optimal policies, we study the performance of a class of heuristic policies. These heuristics periodically re-solve the deterministic linear program (DLP) that results when all future random variables are replaced by their average values and implement the solutions in a probabilistic manner. We provide an upper bound for the expected revenue loss under such policies when compared to the optimal policy. Using this bound, we construct a schedule of re-solving times such that the resulting expected revenue loss is bounded by a constant that is independent of the size of the problem.
Joint work with Stefanus Jasin at University of Michigan.
20170330T133000
20170330T143000
0
Seminar: Sunil Kumar (JHU) @ Whitehead 304
free
ai1ec-7408@engineering.jhu.edu/ams
20180716T042154Z
Energy Prices & Dynamic Games with Stochastic Demand
The dramatic decline in oil prices, from around $110 per barrel in June 2014 to around $30 in January 2016, highlights the importance of competition between different energy producers. Indeed, the price drop has been primarily attributed to OPEC’s strategic decision (until very recently) not to curb its oil production in the face of the increased supply of shale gas and oil in the US, which was spurred by the development of fracking technology. Most dynamic Cournot models focus on supply-side factors, such as increased shale oil and random discoveries. However, declining and uncertain demand from China is a major factor driving oil price volatility. We study Cournot games in a stochastic demand environment, and present asymptotic and numerical results, as well as a modified Hotelling’s rule for games with stochastic demand.
20170403T103000
20170403T113000
0
Seminar: Ronnie Sircar (Princeton University) @ Ames 234
free
ai1ec-9310@engineering.jhu.edu/ams
20180716T042154Z
20170405T150000
20170405T160000
0
Data Seminar: Afonso Bandeira (NYU) @ Whitehead 304
free
ai1ec-9478@engineering.jhu.edu/ams
20180716T042154Z
20170405T150000
20170405T160000
0
HUSAM Presents Coffee Chat
free
thumbnail;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2017/03/Coffee_Chat-002-300x139.jpg;300;139,medium;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2017/03/Coffee_Chat-002-300x139.jpg;300;139,large;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2017/03/Coffee_Chat-002-300x139.jpg;300;139,full;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2017/03/Coffee_Chat-002-300x139.jpg;300;139
ai1ec-9302@engineering.jhu.edu/ams
20180716T042154Z
20170412T150000
20170412T160000
0
Data Seminar: Andrew Christlieb (Michigan State University) @ Whitehead 304
free
ai1ec-9366@engineering.jhu.edu/ams
20180716T042154Z
Reciprocal Graphical Models for Integrative Gene Regulatory Network Analysis
Constructing gene regulatory networks is a fundamental task in systems biology. We introduce a Gaussian reciprocal graphical model for inference about gene regulatory relationships by integrating mRNA gene expression and DNA level information including copy number and methylation. Data integration allows for inference on the directionality of certain regulatory relationships, which would be otherwise indistinguishable due to Markov equivalence. Efficient inference is developed based on simultaneous equation models. Bayesian model selection techniques are adopted to estimate the graph structure. We illustrate our approach by simulations and two applications in ZODIAC pairwise gene interaction analysis and colon adenocarcinoma pathway analysis.
20170413T133000
20170413T143000
0
Seminar: Peter Mueller (University of Texas) @ Whitehead 304
free
ai1ec-8444@engineering.jhu.edu/ams
20180716T042154Z
From solving PDEs to machine learning PDEs:
An odyssey in computational mathematics
George Em Karniadakis
Division of Applied Mathematics, Brown University
Abstract: In the last 30 years I have pursued the numerical solution of partial differential equations (PDEs) using spectral and spectral element methods for diverse applications, starting from deterministic PDEs in complex geometries, to stochastic PDEs for uncertainty quantification, and to fractional PDEs that describe non-local behavior in disordered media and viscoelastic materials. More recently, I have been working on solving PDEs in a fundamentally different way. I will present a new paradigm for solving linear and nonlinear PDEs from noisy measurements without the use of classical numerical discretization. Instead, we infer the solution of PDEs from noisy data, which can represent measurements of variable fidelity. The key idea is to encode the structure of the PDE into prior distributions and train Bayesian nonparametric regression models on available noisy data. The resulting posterior distributions can be used to predict the PDE solution with quantified uncertainty, efficiently identify extrema via Bayesian optimization, and acquire new data via active learning. Moreover, I will present how we can use this new framework to learn PDEs from noisy measurements of the solution and the forcing terms.
Bio: George Karniadakis received his S.M. and Ph.D. from the Massachusetts Institute of Technology. He was appointed Lecturer in the Department of Mechanical Engineering at MIT in 1987 and subsequently joined the Center for Turbulence Research at Stanford/NASA Ames. He joined Princeton University as Assistant Professor in the Department of Mechanical and Aerospace Engineering and as Associate Faculty in the Program of Applied and Computational Mathematics. He was a Visiting Professor at Caltech in the Aeronautics Department in 1993, and joined Brown University as Associate Professor of Applied Mathematics in the Center for Fluid Mechanics in 1994. Since becoming a full professor in 1996, he has also been a Visiting Professor and Senior Lecturer of Ocean/Mechanical Engineering at MIT. He is a Fellow of the Society for Industrial and Applied Mathematics (SIAM, 2010-), Fellow of the American Physical Society (APS, 2004-), Fellow of the American Society of Mechanical Engineers (ASME, 2003-), and Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA, 2006-). He received the Ralph E. Kleinman Prize from SIAM (2015), the J. Tinsley Oden Medal (2013), and the Computational Fluid Dynamics Award (2007) from the US Association for Computational Mechanics. His h-index is 79 and he has been cited over 32,500 times.
20170420T133000
20170420T143000
0
Seminar: George Karniadakis (Brown University) @ Whitehead 304
free
ai1ec-9298@engineering.jhu.edu/ams
20180716T042154Z
Solving Fredholm integrals from incomplete measurements
We present an algorithm to solve Fredholm integrals of the first kind with tensor product structures from a limited number of measurements, with the goal of using this method to accelerate Nuclear Magnetic Resonance (NMR) acquisition. This is done by incorporating compressive sampling-type arguments to fill in the missing measurements, using a priori knowledge of the structure of the data. In the first step, we recover a compressed data matrix from measurements that form a tight frame, and establish that these measurements satisfy the restricted isometry property (RIP). In the second step, we solve the zeroth-order regularization minimization problem using the Venkataramanan-Song-Huerlimann algorithm. We demonstrate the performance of this algorithm on simulated and real data and compare it with other sampling techniques. Our theory applies to both 2D and multidimensional NMR.
20170426T150000
20170426T160000
0
Data Seminar: Wojciech Czaja (University of Maryland, College Park) @ Whitehead 304
free
ai1ec-8416@engineering.jhu.edu/ams
20180716T042154Z
Title: Parametrization of discrete optimization problems, subdeterminants and matrix-decomposition
Abstract:
The central goal of this talk is to identify parameters that explain the complexity of integer linear programming, defined as follows:
Let P be a polyhedron. Determine an integral point in P that maximizes a linear function.
It is obvious that the number of integer variables is such a parameter.
However, in view of applications in very high dimensions, the question emerges of whether we need to treat all variables as integers. In other words, can we reparametrize integer programs with significantly fewer integer variables?
A second much less obvious parameter associated with an integer linear program is the number Delta defined as the maximum absolute value of any square submatrix of a given integral matrix A with m rows and n columns.
This leads us to the important open question of whether we can solve integer linear programming in running time polynomial in Delta and the instance size.
Regarding the first question, we exhibit a variety of examples that demonstrate how integer programs can be reformulated using far fewer integer variables. To this end, we introduce a generalization of total unimodularity called the affine TU-dimension of a matrix, and study related theory and algorithms for determining the affine TU-dimension of a matrix.
Regarding the second question,
we present several new results that illustrate why Delta is an important parameter for the complexity of integer linear programs associated with a given matrix A.
In particular, in the nondegenerate case, integer linear programs with any constant value of Delta can be solved in polynomial time.
This extends earlier results of Veselov and Chirkov.
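The parameter Delta from the abstract can be made concrete with a small brute-force computation. This is a sketch for intuition only (it enumerates all square submatrices and is exponential in the matrix size), not the algorithmic machinery of the talk:

```python
from itertools import combinations

import numpy as np


def max_subdeterminant(A):
    """Delta: the maximum absolute value of the determinant of any
    square submatrix of the integral matrix A (brute force)."""
    A = np.asarray(A)
    m, n = A.shape
    delta = 0
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                delta = max(delta, abs(d))
    return delta


# A totally unimodular (interval) matrix has Delta = 1:
A_tu = np.array([[1, 1, 0],
                 [0, 1, 1]])
print(max_subdeterminant(A_tu))  # 1

A = np.array([[2, 1],
              [1, 3]])
print(max_subdeterminant(A))  # the full 2x2 determinant, 5
```

In the totally unimodular case (Delta = 1) the polyhedron's vertices are integral and the problem is polynomially solvable; the open question above asks how far this extends as Delta grows.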
20170427T133000
20170427T143000
0
Seminar: Robert Weismantel (ETH Zurich) @ Whitehead 304
free
ai1ec-9270@engineering.jhu.edu/ams
20180716T042154Z
Title: Vol, Skew, and Smile Trading
Abstract: We use dynamically traded portfolios of options to bet on either the quadratic variation of log price, or on the realized co-variation of log price with log implied vol, or on the quadratic variation of implied vol. Our bets lead to precise financial meanings for the level, slope, and curvature of implied variance in moneyness.
20170504T133000
20170504T143000
0
Seminar: Peter Carr (NYU) @ Whitehead 304
free
ai1ec-9294@engineering.jhu.edu/ams
20180716T042154Z
http://www.math.jhu.edu/~data/
20170510T150000
20170510T160000
0
Data Seminar: Rachel Ward (University of Texas, Austin) @ Whitehead 304
free
ai1ec-10353@engineering.jhu.edu/ams
20180716T042154Z
Title: Introduction to the Financial Mathematics Seminar
Abstract:
The seminar will have two parts:
Part I) Daniel Naiman will introduce the Financial Mathematics seminar.
Part II) Sonjala Williams will speak about networking and job search strategies.
20170905T133000
20170905T150000
Shaffer 101
0
Financial Mathematics Seminar: Daniel Naiman and Sonjala Williams (JHU) @ Shaffer 101
free
ai1ec-10398@engineering.jhu.edu/ams
20180716T042154Z
Dr. Dave Schrader will be speaking at the Sports Analytics Club’s first meeting of the year on Monday, September 11 at 7pm in Shaffer 3. Dr. Schrader is an expert in the field of sports analytics, and has a wide breadth of experience in industry and in speaking to college students about the field. Individuals with all levels of experience in sports analytics are welcome to attend, and the talk should give a good overview of how sports analytics are currently being used and how students can get involved themselves.
Dr. Schrader’s talk is entitled “The Golden Age of Sports Analytics,” and it will cover the following topics:
What’s happening around the world to collect and analyze data for recruiting, player development, game planning, and injury prevention?
How are analytics being used to evaluate and improve business operations – ticket pricing, sales, sponsorships?
What analytics are the leading pro teams and leagues using for basketball, baseball, football, hockey, and soccer?
How quickly are analytics being adopted at the college level? Who is leading? What are they doing?
How can other parts of the university, like the business school or computer science departments, collaborate with sports programs to provide analytics for teams? What are good “Moneyball” projects to launch? What have other schools done?
Where can you get more information? What to read? What conferences to attend?
20170911T190000
20170911T203000
0
Dr. Dave Schrader at Sports Analytics Club Meeting @ 7pm in Shaffer 3
free
ai1ec-10410@engineering.jhu.edu/ams
20180716T042154Z
Title: No equations, no variables, no parameters, no space, no time: Data and the computational modeling of complex/multiscale systems
Abstract: Obtaining predictive dynamical equations from data lies at the heart of science and engineering modeling, and is the linchpin of our technology. In mathematical modeling one typically progresses from observations of the world (and some serious thinking!) first to equations for a model, and then to the analysis of the model to make predictions. Good mathematical models give good predictions (and inaccurate ones do not) – but the computational tools for analyzing them are the same: algorithms that are typically based on closed form equations. While the skeleton of the process remains the same, today we witness the development of mathematical techniques that operate directly on observations (data), and appear to circumvent the serious thinking that goes into selecting variables and parameters and deriving accurate equations. The process then may appear to the user a little like making predictions by “looking in a crystal ball”. Yet the “serious thinking” is still there and uses the same (and some new) mathematics: it goes into building algorithms that “jump directly” from data to the analysis of the model (which is now not available in closed form) so as to make predictions. Our work here presents a couple of efforts that illustrate this “new” path from data to predictions. It really is the same old path, but it is travelled by new means.
Related papers:
Parsimonious Representation of Nonlinear Dynamical Systems through Manifold Learning: a Chemotaxis Case Study
An Equal Space for Complex Data with Unknown Internal Order: Observability, Gauge Invariance and Manifold Learning
20170913T150000
20170913T160000
0
Data Science Seminar: Yannis Kevrekidis (JHU) @ Hodson 203
free
ai1ec-10449@engineering.jhu.edu/ams
20180716T042154Z
Symmetry, Temporal Information, and Succinct Representation of Random Graph Structures
I will discuss mathematical aspects of my recent work on two related problems at the intersection of random graphs and information theory: (i) node order inference – for a dynamic random graph model, determine the extent to which the order in which nodes arrived can be inferred from the graph structure, and (ii) source coding of structures – for a given graph model, exhibit an efficiently computable and invertible mapping from unlabeled graphs to bit strings with minimum possible expected code length. Both problems are connected to the study of the symmetries of the graph model, as well as another combinatorial quantity – the typical number of feasible labeled representatives of a given structure. I will focus on the case of the preferential attachment model, for which we are able to give a (nearly) complete characterization of the behavior of the size of the automorphism group, as well as a provably asymptotically optimal algorithm for (ii), and optimal estimators for certain natural formulations of (i).
20170914T133000
20170914T143000
0
AMS Seminar: Abram Magner (University of Illinois) @ Whitehead 304
free
ai1ec-10414@engineering.jhu.edu/ams
20180716T042154Z
Title: Semiparametric spectral modeling of the Drosophila connectome
Abstract: We present semiparametric spectral modeling of the complete larval Drosophila mushroom body connectome. Motivated by a thorough exploratory data analysis of the network via Gaussian mixture modeling (GMM) in the adjacency spectral embedding (ASE) representation space, we introduce the latent structure model (LSM) for network modeling and inference. LSM is a generalization of the stochastic block model (SBM) and a special case of the random dot product graph (RDPG) latent position model, and is amenable to semiparametric GMM in the ASE representation space. The resulting connectome code derived via semiparametric GMM composed with ASE captures latent connectome structure and elucidates biologically relevant neuronal properties.
Related papers:
The complete connectome of a learning and memory center in an insect brain
A consistent adjacency spectral embedding for stochastic blockmodel graphs
A limit theorem for scaled eigenvectors of random dot product graphs
Limit theorems for eigenvectors of the normalized Laplacian for random graphs
20170920T150000
20170920T160000
0
Data Science Seminar: Carey Priebe (JHU) @ Hodson 203
free
ai1ec-10460@engineering.jhu.edu/ams
20180716T042154Z
TITLE – On optimizing a submodular utility function
ABSTRACT – This talk has two related parts. Part one is on the maximization of a particular submodular utility function, whereas part two is on its minimization. Both problems arise naturally in combinatorial optimization with risk aversion, including estimation of project duration with stochastic task times, reliability models, multinomial logit models, competitive facility location, and combinatorial auctions, as well as portfolio optimization.
Part 1: Given a monotone concave univariate function g, and two vectors c and d, we consider the discrete optimization problem of finding a vertex of a polytope maximizing the utility function c’x g(d’x). The problem is NP-hard for any strictly concave function g even for simple polytopes, such as the uniform matroid, assignment and path polytopes. We give a 1/2-approximation algorithm for it and improvements for special cases, where g is the square root, log utility, negative exponential utility and multinomial logit probability function. In particular, for the square root function, the approximation ratio improves to 4/5. Although the worst case bounds are tight, computational experiments indicate that the approximation algorithm finds solutions within 1-2% optimality gap for most of the instances very quickly and can be considerably faster than the existing alternatives.
Part 2: We consider a mixed 0-1 conic quadratic optimization problem with indicator variables arising in mean-risk optimization. The indicator variables are often used to model non-convexities such as fixed charges or cardinality constraints. Observing that the problem reduces to a submodular function minimization for its binary restriction, we derive three classes of strong convex valid inequalities by lifting the polymatroid inequalities on the binary variables. Computational experiments demonstrate the effectiveness of the inequalities in strengthening the convex relaxations and, thereby, improving the solution times for mean-risk problems with fixed charges and cardinality constraints significantly.
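For a toy illustration of the Part 1 objective c'x g(d'x), the following brute-force check uses a small uniform-matroid (cardinality-constrained) instance with g = sqrt. The data values are made up, and this is exhaustive enumeration, not the speaker's 1/2-approximation algorithm:

```python
from itertools import combinations
from math import sqrt

import numpy as np

# Toy instance: maximize c'x * sqrt(d'x) over the uniform matroid
# {x binary : sum(x) <= k}. Values are hypothetical.
c = np.array([3.0, 1.0, 2.0, 4.0])
d = np.array([1.0, 2.0, 1.0, 3.0])
k = 2


def utility(S):
    """c'x * g(d'x) for the support set S, with g = sqrt."""
    idx = list(S)
    return c[idx].sum() * sqrt(d[idx].sum())


# Enumerate all feasible supports of size 1..k (fine for 4 items).
best_value, best_set = max(
    (utility(S), S)
    for r in range(1, k + 1)
    for S in combinations(range(len(c)), r)
)
print(best_value, best_set)
```

On this instance the optimum picks items 0 and 3, with utility (3+4)·sqrt(1+3) = 14; an approximation algorithm like the one in the talk would be compared against such exact values on small instances.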
20170921T133000
20170921T143000
0
AMS Seminar: Alper Atamturk (University of California, Berkeley) @ Whitehead 304
free
ai1ec-10417@engineering.jhu.edu/ams
20180716T042154Z
Title: Frames — two case studies: ambiguity and uncertainty
Abstract: The theory of frames is an essential concept for dealing with signal representation in noisy environments. We shall examine the theory in the settings of the narrow band ambiguity function and of quantum information theory. For the ambiguity function, best possible estimates are derived for applicable constant amplitude zero autocorrelation (CAZAC) sequences using Weil’s solution of the Riemann hypothesis for finite fields. In extending the theory to the vector-valued case modelling multi-sensor environments, the definition of the ambiguity function is characterized by means of group frames. For the uncertainty principle, Andrew Gleason’s measure theoretic theorem, establishing the transition from the lattice interpretation of quantum mechanics to Born’s probabilistic interpretation, is generalized in terms of frames to deal with uncertainty principle inequalities beyond Heisenberg’s. My collaborators are Travis Andrews, Robert Benedetto, Jeffrey Donatelli, Paul Koprowski, and Joseph Woodworth.
Related papers:
Super-resolution by means of Beurling minimal extrapolation
Generalized Fourier frames in terms of balayage
Uncertainty principles and weighted norm inequalities
A frame reconstruction algorithm with applications to magnetic resonance imaging
Frame multiplication theory and a vector-valued DFT and ambiguity functions
20170927T150000
20170927T160000
0
Data Science Seminar: John Benedetto (University of Maryland College Park and Norbert Wiener Center) @ Hodson 203
free
ai1ec-10477@engineering.jhu.edu/ams
20180716T042154Z
Title: An improved approach to calibrating misspecified mathematical models
Abstract: We consider the problem of calibrating misspecified mathematical models using experimental data. To compensate for the misspecification of the model, a discrepancy function is usually included and modeled via a Gaussian stochastic process (GaSP), leading to better results of prediction. The calibration parameters in the model, however, sometimes become unidentifiable and the calibrated model fits the experimental data poorly as a consequence. In this work, we propose the scaled Gaussian stochastic process (S-GaSP), a novel stochastic process for calibration and prediction. This new approach bridges the gap between two predominant methods, namely the $L_2$ calibration and GaSP calibration. A computationally feasible approach is introduced for this new model under the Bayesian paradigm. The S-GaSP model not only provides a general framework for calibration, but also enables the calibrated mathematical model to predict well regardless of the discrepancy function. Simulation examples are provided and real examples using satellite images to calibrate the model for volcanic hazard are studied to illustrate the connections and differences between this new model and other previous approaches.
20170928T133000
20170928T143000
0
AMS Seminar: Mengyang Gu (JHU) @ Whitehead 304
free
ai1ec-10421@engineering.jhu.edu/ams
20180716T042154Z
Title: Data-driven discovery of governing equations and physical laws
Abstract: The emergence of data methods for the sciences in the last decade has been enabled by the plummeting costs of sensors, computational power, and data storage. Such vast quantities of data afford us new opportunities for data-driven discovery, which has been referred to as the 4th paradigm of scientific discovery. We demonstrate that we can use emerging, large-scale time-series data from modern sensors to directly construct, in an adaptive manner, governing equations (even for nonlinear dynamics) that best model the measured system, using modern regression techniques. Recent innovations also allow for handling multi-scale physical phenomena and control protocols in an adaptive and robust way. The overall architecture is equation-free in that the dynamics and control protocols are discovered directly from data acquired from sensors. The theory developed is demonstrated on a number of canonical example problems from physics, biology and engineering.
Related papers:
Discovering governing equations from data by sparse identification of nonlinear dynamical systems
Data-driven discovery of partial differential equations
Chaos as an intermittently forced linear system
20171004T150000
20171004T160000
0
Data Science Seminar: Nathan Kutz (University of Washington) @ Hodson 203
free
ai1ec-10554@engineering.jhu.edu/ams
20180716T042154Z
Title: Multidimensional wavelet signal denoising via adaptive random partitioning
Abstract: Traditional statistical wavelet analysis usually focuses on modeling the wavelet coefficients under a given, predetermined wavelet transform. Such analysis may quickly lose efficiency in multivariate problems under traditional multivariate wavelet transforms, which are symmetric with respect to the dimensions, as predetermined wavelet transforms cannot adaptively exploit the energy distribution in a problem-specific manner. We introduce a principled probabilistic framework for incorporating such adaptivity by (i) representing multivariate functions using one-dimensional (1D) wavelet transforms applied to a permuted version of the original function, and (ii) placing a hyperprior on the corresponding permutation. Such a representation can achieve substantially better energy concentration in the wavelet coefficients and highly scalable inference algorithms. In particular, when combined with the Haar basis, we obtain the exact Bayesian inference analytically through a recursive message passing algorithm with a computational complexity that scales linearly with sample size. In addition, we propose a sequential Monte Carlo (SMC) inference algorithm for other wavelet bases using the exact Haar solution as the proposal. We demonstrate that with this framework even simple 1D Haar wavelets can achieve excellent performance in both 2D and 3D image reconstruction via numerical experiments, outperforming state-of-the-art multidimensional wavelet-based methods especially in low signal-to-noise ratio settings, at a fraction of the computational cost.
This is a joint work with Li Ma.
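As a minimal, generic illustration of the 1D Haar machinery underlying the abstract, the sketch below denoises a piecewise-constant signal by plain hard thresholding of Haar detail coefficients. This is a textbook baseline, not the Bayesian message-passing or SMC algorithms described above; the signal, noise level, and threshold are all assumptions:

```python
import numpy as np


def haar_forward(x):
    """Orthonormal 1D Haar transform (length must be a power of two)."""
    x = np.asarray(x, dtype=float).copy()
    coeffs = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        coeffs.append(det)
        x = avg
    coeffs.append(x)  # final scaling coefficient
    return coeffs


def haar_inverse(coeffs):
    x = coeffs[-1].copy()
    for det in reversed(coeffs[:-1]):
        out = np.empty(2 * len(x))
        out[0::2] = (x + det) / np.sqrt(2.0)
        out[1::2] = (x - det) / np.sqrt(2.0)
        x = out
    return x


def denoise(y, threshold):
    """Hard-threshold the Haar detail coefficients."""
    coeffs = haar_forward(y)
    for det in coeffs[:-1]:
        det[np.abs(det) < threshold] = 0.0
    return haar_inverse(coeffs)


rng = np.random.default_rng(0)
signal = np.repeat([0.0, 4.0, -2.0, 1.0], 16)  # piecewise constant, length 64
noisy = signal + 0.5 * rng.standard_normal(64)
clean = denoise(noisy, threshold=1.0)
print(np.mean((noisy - signal) ** 2), np.mean((clean - signal) ** 2))
```

The Haar basis concentrates a piecewise-constant signal's energy in a few large coefficients, which is the energy-concentration property the adaptive permutation framework above exploits in higher dimensions.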
20171005T133000
20171005T143000
0
AMS Seminar: Meng Li (Rice University) @ Whitehead 304
free
ai1ec-10566@engineering.jhu.edu/ams
20180716T042154Z
Title: Discrete Nonlinear Optimization by State-Space Decompositions
Abstract: In this talk we will discuss a decomposition approach for binary optimization problems with nonlinear objectives and linear constraints. Our methodology relies on the partition of the objective function into separate low-dimensional dynamic programming (DP) models, each of which can be equivalently represented as a shortest-path problem in an underlying state transition graph. We show that the associated transition graphs can be related by a mixed-integer linear program (MILP) so as to produce exact solutions to the original nonlinear problem. To address DPs with large state spaces, we present a general relaxation mechanism which dynamically aggregates states during the construction of the transition graphs. The resulting MILP provides both lower and upper bounds to the nonlinear function, and may be embedded in branch-and-bound procedures to find provably optimal solutions. We describe how to specialize our technique for structured objectives (e.g., submodular functions) and consider three problems arising in revenue management, portfolio optimization, and healthcare. Numerical studies indicate that the proposed technique often outperforms state-of-the-art approaches by orders of magnitude in these applications.
20171006T100000
20171006T110000
0
Additional AMS Seminar: David Bergman (UConn) @ Whitehead 304
free
ai1ec-10669@engineering.jhu.edu/ams
20180716T042154Z
Title: Discussion of quantitative careers in private banking
Abstract: TBA
20171010T133000
20171010T143000
0
Financial Mathematics Seminar: David Stack, VP at UBS Baltimore
free
ai1ec-10425@engineering.jhu.edu/ams
20180716T042154Z
Title: Data assimilation with stochastic model reduction
Abstract: In weather and climate prediction, data assimilation combines data with dynamical models to make predictions, using an ensemble of solutions to represent the uncertainty. Due to limited computational resources, reduced models are needed, and coarse-grid models are often used, leaving the effects of the subgrid scales to be accounted for. A major challenge is to account for the memory effects due to coarse graining while capturing the key statistical-dynamical properties. We propose to use nonlinear autoregressive moving average (NARMA) type models to account for the memory effects, and demonstrate by examples that the resulting NARMA-type stochastic reduced models can capture the key statistical and dynamical properties and therefore can improve the performance of ensemble prediction in data assimilation. The examples include the Lorenz 96 system (a simplified model of the atmosphere) and the Kuramoto-Sivashinsky equation of spatiotemporally chaotic dynamics.
Related papers:
Discrete approach to stochastic parametrization and dimension reduction in nonlinear dynamics
Accounting for model error from unresolved scales in ensemble Kalman filters by stochastic parametrization
20171011T150000
20171011T160000
0
Data Science Seminar: Fei Lu (JHU) @ Hodson 203
free
ai1ec-10674@engineering.jhu.edu/ams
20180716T042154Z
Title: Financial Contagion and Systemic Risk
Abstract: Financial contagion occurs when the distress of one bank jeopardizes the health of other financial firms, and can ultimately spread to the real economy. The spread of defaults in the financial system can occur due to both local connections, e.g., contractual obligations, and global connections, e.g., through the prices of assets due to mark-to-market valuation. As evidenced by the 2007-2009 financial crisis, the cost of a systemic event is tremendous, thus requiring a detailed look at the contributing factors. In this talk, we will detail the local contagion model of Eisenberg and Noe (2001). However, in utilizing this model, central bankers and regulators often must estimate the interbank liabilities because complete information on bilateral obligations is rarely available. This estimation can introduce errors to the level of financial contagion and risk in the system. We will consider a sensitivity analysis of the Eisenberg-Noe model to determine the size of these potential estimation errors.
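The Eisenberg-Noe clearing vector mentioned above can be computed by Picard iteration on the fixed-point equation p = min(p̄, e + Π'p). Below is a toy three-bank sketch with made-up balance sheets, for intuition only (the talk's sensitivity analysis goes well beyond this):

```python
import numpy as np

# L[i, j] = nominal liability of bank i to bank j; e = external assets.
# Hypothetical numbers for illustration.
L = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
e = np.array([1.0, 0.5, 0.2])

p_bar = L.sum(1)  # total obligations of each bank
# Relative liabilities matrix Pi (rows of L normalized by total obligations).
Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L),
               where=p_bar[:, None] > 0)

# Picard iteration from p_bar: monotone decreasing to the clearing vector.
p = p_bar.copy()
for _ in range(1000):
    p_new = np.minimum(p_bar, e + Pi.T @ p)
    if np.allclose(p_new, p):
        break
    p = p_new
print(p)  # clearing payments; p[i] < p_bar[i] means bank i defaults
```

On this instance bank 0 defaults (it can pay 2 of its 3 in obligations), while banks 1 and 2 pay in full; perturbing the estimated liabilities L shifts this clearing vector, which is the kind of estimation error the sensitivity analysis quantifies.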
20171017T133000
20171017T150000
0
Financial Mathematics Seminar: Professor Zach Feinstein (Washington University in St. Louis)
free
ai1ec-10431@engineering.jhu.edu/ams
20180716T042154Z
Title: “Hyper-Molecules” in Cryo-Electron Microscopy (cryo-EM)
Abstract: Cryo-EM is an imaging technology that is revolutionizing structural biology; the Nobel Prize in Chemistry 2017 was recently awarded to Jacques Dubochet, Joachim Frank and Richard Henderson “for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution”. Cryo-electron microscopes produce a large number of very noisy two-dimensional projection images of individual frozen molecules. Unlike related tomography methods, such as computed tomography (CT), the viewing direction of each image is unknown. The unknown directions, together with extreme levels of noise and additional technical factors, make the determination of the structure of molecules challenging. Unlike other structure determination methods, such as x-ray crystallography and nuclear magnetic resonance (NMR), cryo-EM produces measurements of individual molecules and not ensembles of molecules. Therefore, cryo-EM could potentially be used to study mixtures of different conformations of molecules. While current algorithms have been very successful at analyzing homogeneous samples, and can recover some distinct conformations mixed in solutions, the determination of multiple conformations, and in particular, continuums of similar conformations (continuous heterogeneity), remains one of the open problems in cryo-EM. I will discuss the “hyper-molecules” approach to continuous heterogeneity, and the numerical tools and analysis methods that we are developing in order to recover such hyper-molecules.
20171018T150000
20171018T160000
0
Data Science Seminar: Roy Lederman (Princeton University) @ Hodson 203
free
ai1ec-10484@engineering.jhu.edu/ams
20180716T042154Z
Title: Optimization with Polyhedral Constraints
Abstract:
A two-phase strategy is presented for solving an optimization problem whose feasible set is a polyhedron. Phase one is the gradient projection algorithm, while phase two is essentially any algorithm for solving a linearly constrained optimization problem. Using some simple rules for branching between the two phases, it is shown, under suitable assumptions, that only the linearly constrained optimization algorithm is performed asymptotically. Hence, the asymptotic convergence behavior of the two phase algorithm coincides with the convergence behavior of the linearly constrained optimizer. Numerical results are presented using CUTE test problems, a recently developed algorithm for projecting a point onto a polyhedron, and the conjugate gradient algorithm for the linearly constrained optimizer.
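A minimal sketch of the phase-one ingredient alone: gradient projection on the simplest polyhedron, a box, where projection is a componentwise clamp. The fixed step size and toy quadratic objective are assumptions for illustration; the talk's algorithm handles general polyhedra and branches to a second phase:

```python
import numpy as np


def grad_proj(grad, x0, lower, upper, step=0.1, iters=500):
    """Gradient projection for min f(x) s.t. lower <= x <= upper.
    For a box, projection onto the feasible set is np.clip."""
    x = x0.copy()
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lower, upper)
    return x


# Quadratic f(x) = 0.5 * ||x - c||^2 with c partly outside the unit box;
# the constrained minimizer is simply c clamped to the box.
c = np.array([2.0, -1.0, 0.3])
x = grad_proj(lambda x: x - c, np.zeros(3),
              lower=np.zeros(3), upper=np.ones(3))
print(x)  # converges to [1.0, 0.0, 0.3]
```

Phase one identifies which constraints are active (here, the bounds on the first two coordinates); the two-phase method above then hands the problem to a faster linearly constrained optimizer on that active face.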
20171019T133000
20171019T143000
0
AMS Seminar: Bill Hager (University of Florida) @ Whitehead 304
free
ai1ec-10432@engineering.jhu.edu/ams
20180716T042154Z
TBA
20171025T150000
20171025T160000
0
Data Science Seminar: John Harlim (Penn State University) @ Hodson 203
free
ai1ec-10487@engineering.jhu.edu/ams
20180716T042154Z
Title: Coherence in Statistical Modeling of Networks
Abstract:
George Box famously said, “All models are wrong, but some are useful.”
Classic texts define a statistical model as “a set of distributions on the sample space” (Cox and Hinkley, 1976; Lehmann, 1983; Barndorff-Nielsen and Cox, 1994; Bernardo and Smith, 1994).
Motivated by some longstanding questions in the analysis of network data, I will examine both of these statements, first from a general point of view, and then in the context of some recent developments in network analysis.
The confusion caused by these statements is clarified by the realization that the definition of statistical model must be refined — it must be more than just a set. With this, the ambiguity in Box’s statement — e.g., what determines whether a model is ‘wrong’ or ‘useful’? — can be clarified by a logical property that I call ‘coherence’. After clarification, a model is deemed useful as long as it is coherent, i.e., inferences from it ‘make sense’.
I will then discuss some implications for the statistical modeling of network data.
20171026T133000
20171026T143000
0
AMS Seminar: Harry Crane (Rutgers University) @ Whitehead 304
free
ai1ec-10680@engineering.jhu.edu/ams
20180716T042154Z
Title: Functional central limit theorems for rough volatility models
Abstract: We extend Donsker’s approximation of Brownian motion to fractional Brownian motion with any Hurst exponent (including the ’rough’ case H < 1/2), and Volterra-like processes. Some of the most relevant consequences of our ‘rough Donsker (rDonsker) Theorem’ are convergence results (with rates) for discrete approximations of a large class of rough models. This justifies the validity of simple and easy-to-implement Monte-Carlo methods, for which we provide detailed numerical recipes. We test these against the current benchmark hybrid scheme of Bennedsen, Lunde, and Pakkanen and find remarkable agreement (for a large range of values of H). This rDonsker Theorem further provides a weak convergence proof for the hybrid scheme itself, and allows one to construct binomial trees for rough volatility models, the first available scheme (in the rough volatility context) for early-exercise options such as American or Bermudan. The talk is based on joint work with B. Horvath and A. Muguruza.
20171031T133000
20171031T150000
0
Financial Mathematics Seminar: Dr. Antoine Jacquier (Imperial College London)
free
ai1ec-10490@engineering.jhu.edu/ams
20180716T042154Z
Title: Some matrix problems in quantum information science
Abstract:
In this talk, we present some matrix results and techniques for solving certain optimization problems arising in quantum information science.
No quantum mechanics background is required.
20171101T133000
20171101T143000
0
AMS Seminar: Chi-Kwong Li (College of William &amp; Mary, IQC) @ Bloomberg 274
free
ai1ec-10686@engineering.jhu.edu/ams
20180716T042154Z
Financial Mathematics Seminar
Title: Data scientist at Kensho focusing on natural language processing
20171107T133000
20171107T150000
0
Financial Mathematics Seminar: Ben Cohen
free
ai1ec-10441@engineering.jhu.edu/ams
20180716T042154Z
Title: Emergent behavior in self-organized dynamics: from consensus to hydrodynamic flocking
Abstract: We discuss several first- and second-order models encountered in opinion and flocking dynamics. The models are driven by different “rules of engagement”, which quantify how each member interacts with its immediate neighbors. We highlight the role of geometric vs. topological neighborhoods and distinguish between local and global interactions, while addressing the following two related questions: (i) how local rules of interaction lead, over time, to the emergence of consensus; and (ii) how the flocking behavior of large crowds is captured by their hydrodynamic description.
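A minimal first-order alignment sketch in the spirit of the abstract: a generic geometric-neighborhood consensus model with an assumed interaction radius and step size, not the speaker's specific systems. Each agent moves toward the average of the neighbors within radius r:

```python
import numpy as np


def step(x, r, dt=0.1):
    """One step of first-order consensus dynamics: each agent moves toward
    the mean of agents within the geometric interaction radius r."""
    diff = x[None, :] - x[:, None]              # diff[i, j] = x_j - x_i
    A = (np.abs(diff) <= r).astype(float)       # geometric neighborhood
    return x + dt * (A * diff).sum(1) / A.sum(1)


rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 50)   # initial scalar "opinions"
for _ in range(500):
    x = step(x, r=2.0)          # r = 2 covers everyone: global interaction
print(x.std())                  # spread collapses: consensus
```

With global interaction (r covering all agents) the dynamics contract to the initial mean, illustrating question (i); shrinking r to a local neighborhood can instead freeze the population into several clusters, which is why the geometric vs. topological distinction in the abstract matters.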
20171108T150000
20171108T160000
0
Data Science Seminar: Eitan Tadmor (University of Maryland) @ Hodson 203
free
ai1ec-10494@engineering.jhu.edu/ams
20180716T042154Z
Title: Market Efficiency with Micro and Macro Information
Abstract:
We propose a tractable, multi-security model in which investors allocate information processing capacity to acquire micro information about individual stocks and/or macro information about an index fund. Investors solve optimal portfolio selection and information allocation problems. In equilibrium, all investors are of one of three types: micro informed, macro informed, or uninformed. We investigate the implications for price efficiency and find an endogenous bias toward micro efficiency: over a range of parameter values prices will be more informative about micro than macro fundamentals. We explore the model’s implications for the cyclicality of investor information choices, for systematic and idiosyncratic return volatility, and for excess covariance and volatility. This is joint work with Harry Mamaysky.
20171109T133000
20171109T143000
0
AMS Seminar: Paul Glasserman (Columbia University) @ Whitehead 304
free
ai1ec-10663@engineering.jhu.edu/ams
20180716T042154Z
What geometries can we learn from data?
In the field of manifold learning, the foundational theoretical results of Coifman and Lafon (Diffusion Maps, 2006) showed that for data sampled near an embedded manifold, certain graph Laplacian constructions are consistent estimators of the Laplace-Beltrami operator on the underlying manifold. Since these operators determine the Riemannian metric, they completely describe the geometry of the manifold (as inherited from the embedding). It was later shown that different kernel functions could be used to recover any desired geometry, at least in terms of pointwise estimation of the associated Laplace-Beltrami operator. In this talk I will first briefly review the above results and then introduce new results on the spectral convergence of these graph Laplacians. These results reveal that not all geometries are accessible in the stronger spectral sense. However, when the data set is sampled from a smooth density, there is a natural conformally invariant geometry which is accessible on all compact manifolds, and even on a large class of non-compact manifolds. Moreover, the kernel which estimates this geometry has a very natural construction which we call Continuous k-Nearest Neighbors (CkNN).
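A compact numerical sketch of the diffusion-maps construction referenced above: Gaussian kernel, the density renormalization of Coifman and Lafon (alpha = 1), and eigenvectors of the resulting Markov matrix. The bandwidth and data are illustrative, and this is a fixed-bandwidth kernel rather than the CkNN construction from the talk:

```python
import numpy as np


def diffusion_map(X, eps, n_coords=2):
    """Minimal diffusion-maps sketch (alpha = 1 density normalization):
    Gaussian kernel, renormalization by the density estimate q, then the
    top nontrivial eigenvectors of the row-stochastic diffusion operator."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)
    q = K.sum(1)
    K = K / np.outer(q, q)            # remove sampling-density bias
    P = K / K.sum(1, keepdims=True)   # row-stochastic Markov matrix
    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)
    # Skip the trivial eigenvalue 1 / constant eigenvector.
    return w.real[order[1:n_coords + 1]], V.real[:, order[1:n_coords + 1]]


# Noisy circle: the first two nontrivial coordinates recover the angle.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.standard_normal((300, 2))
evals, coords = diffusion_map(X, eps=0.1)
```

The spectral-convergence results in the talk concern exactly these eigenvalue/eigenvector pairs: pointwise consistency of the operator does not by itself guarantee that the computed spectrum converges to that of the limiting Laplace-Beltrami operator.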
20171115T150000
20171115T160000
0
Data Science Seminar: Tyrus Berry (George Mason University) @ Hodson 203
free
ai1ec-10496@engineering.jhu.edu/ams
20180716T042154Z
Title: Transaction Clock, Stochastic Time Changes and Stochastic Volatility
Abstract: The first part of the talk will establish that, by No Arbitrage, the log-price process of a stock has to be a time-changed Brownian motion under the physical probability measure. Aggregate volume and number of trades are empirically tested as possible drivers of the stochastic clock, allowing one to recover normality of stock returns.
The second part of the talk will show how stochastic volatility can be represented through a stochastic time change, outside the stochastic differential equations classically used for volatility in a number of foundational models in finance. This representation is particularly useful if one wishes to choose a Lévy process (other than Brownian motion) for the stock log-price, as independent increments are contradicted by the volatility clustering observed in financial markets. The CGMY process with stochastic volatility will be provided as an example.
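The first part's idea, returns that are Gaussian once read on the transaction clock, can be sketched by running Brownian motion on a Gamma subordinator (a variance-gamma-type time change). The clock model and its variance parameter are assumptions for illustration, not the empirical clocks tested in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
dt = 1.0 / 252
nu = 0.05  # variance rate of the clock (hypothetical value)

# Increments of the stochastic transaction clock: a Gamma subordinator
# with mean dt per step, so the clock runs at unit speed on average.
clock_increments = rng.gamma(shape=dt / nu, scale=nu, size=n)

# Log-price increments: Brownian motion evaluated on the stochastic clock.
returns = rng.standard_normal(n) * np.sqrt(clock_increments)


def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0


# Fat tails in calendar time; Gaussian again per unit of "business time".
z = returns / np.sqrt(clock_increments)
print(excess_kurtosis(returns), excess_kurtosis(z))
```

Calendar-time returns show large positive excess kurtosis, while the same returns standardized by the clock are Gaussian: exactly the normality-recovery phenomenon the abstract describes for volume- and trade-count-driven clocks.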
20171116T133000
20171116T143000
0
AMS Seminar: Helyette Geman (JHU) @ Whitehead 304
free
ai1ec-10603@engineering.jhu.edu/ams
20180716T042154Z
Join alumni, faculty, and students for a Financial Mathematics alumni reunion, including speakers, a networking event, and a happy hour. Food and drink will be served. (An informal happy hour will follow.)
Register @ https://jhu.us6.list-manage.com/track/click?u=40512314224886c4ca8b856c2&id=836eea02d4&e=1907de3a2c
If you have any questions, please contact Sonjala Williams @ sonjala@jhu.edu
20171118T120000
20171118T160000
0
JHU Whiting School of Engineering AMS- Financial Math Alumni Reunion @ Glass Pavilion
free
ai1ec-10664@engineering.jhu.edu/ams
20180716T042154Z
TBA
20171129T150000
20171129T160000
0
Data Science Seminar: Yingzhou Li (Duke University) @ Hodson 203
free
ai1ec-10552@engineering.jhu.edu/ams
20180716T042154Z
Title: Distributed Synchronization in Engineering Networks
Abstract:
This talk presents a systematic study of synchronization on distributed (networked) systems that spans from theoretical modeling and stability analysis to distributed controller design, implementation and verification. We first focus on developing a theoretical foundation for synchronization of networked oscillators. We study how the interaction type (coupling) and network configuration (topology) affect the behavior of a population of heterogeneous coupled oscillators. Unlike existing literature that restricts to specific scenarios, we show that phase consensus (common phase value) can be achieved for arbitrary network topologies under very general conditions on the oscillators’ model.
We then focus on more practical aspects of synchronization on computer networks. Unlike existing solutions that tend to rely on expensive hardware to improve accuracy, we provide a novel algorithm that reduces jitter by synchronizing networked computers without estimating the frequency difference between clocks (skew) or introducing offset corrections. We show that a necessary and sufficient condition on the network topology for synchronization (in the presence of noise) is the existence of a unique leader in the communication graph. A Linux-based implementation on a cluster of IBM BladeCenter servers experimentally verifies that the proposed algorithm outperforms well-established solutions and that loops can help reduce jitter.
20171130T133000
20171130T143000
0
AMS Seminar: Enrique Mallada Garcia (JHU) @ Whitehead 304
free
ai1ec-10832@engineering.jhu.edu/ams
20180716T042154Z
The Graduate Career Advisor for Financial Math and Applied Math & Statistics will teach some strategies to make the most of the winter after fall classes end.
Learn how to best kick off or revamp your job or internship search!
*Food will be served; grab it 15 minutes before the event!
RSVP on Handshake @ https://app.joinhandshake.com/events/108603
Walk-ins welcome
20171205T180000
20171205T193000
0
Winter Wonderland: Strategies for Your Winter Job or Internship Search!
free
ai1ec-10444@engineering.jhu.edu/ams
20180716T042154Z
TBA
20171206T150000
20171206T160000
0
Data Science Seminar: Hau-Tieng Wu (Duke University) @ Hodson 203
free
ai1ec-10556@engineering.jhu.edu/ams
20180716T042154Z
Title: The Growing Importance of Satellite Data for Health and Air Quality Applications
Abstract:
Satellite data are growing in importance for health and air quality end users in the U.S. and around the world. From their “God's-eye” view, satellites provide a level of spatial coverage unobtainable by surface monitoring networks. Satellite observations of various pollutants, such as nitrogen dioxide and sulfur dioxide, vividly demonstrate the steady improvement of air quality in the U.S. over the last several decades thanks to environmental regulations, such as the Clean Air Act. However, while better, U.S. air quality is still not at healthy levels, and there are occasionally extreme events (e.g., wildfires, toxic spills in Houston after Hurricane Harvey) that expose Americans to high levels of pollution. Satellite data also show that air quality in many parts of the world is rapidly degrading, and it is likely to continue to do so as the global population is expected to increase by 2 billion by 2050. In this presentation, I will discuss the strengths and limitations of current satellite data for health and air quality applications, as well as the potential that upcoming satellites offer. I will present examples of successful uses of satellite data, discuss potential uses, and highlight ongoing challenges (e.g., data processing and visualization) for satellite data end users.
Biographical Sketch
Dr. Bryan Duncan is an Earth scientist at NASA’s Goddard Space Flight Center and has a keen interest in using NASA satellite data for societal benefit, including for health and air quality applications. He frequently speaks to representatives of various U.S. and international agencies (e.g., World Bank, UNICEF) about how satellite data may benefit their objectives and is a member of the NASA Health and Air Quality Applied Sciences Team (HAQAST). He is also the Project Scientist of the NASA Aura satellite mission, which has observing air quality from space as one of its objectives.
20171207T133000
20171207T143000
0
The John C. & Susan S.G. Wierman Lecture Series: Bryan Duncan (NASA Goddard Space Flight Center) @ Olin 305
free
ai1ec-10963@engineering.jhu.edu/ams
20180716T042154Z
Title: Principled non-convex optimization for deep learning and phase retrieval
Abstract: This talk looks at two classes of non-convex problems. First, we discuss phase retrieval problems, and present a new formulation, called PhaseMax, that reduces this class of non-convex problems into a convex linear program. Then, we turn our attention to more complex non-convex problems that arise in deep learning. We’ll explore the non-convex structure of deep networks using a range of visualization methods. Finally, we discuss a class of principled algorithms for training “binarized” neural networks, and show that these algorithms have theoretical properties that enable them to overcome the non-convexities present in neural loss functions.
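For the real-valued case, the PhaseMax relaxation described above can be written as an ordinary linear program: maximize correlation with an anchor vector subject to the magnitude constraints. A sketch using SciPy, with illustrative problem sizes and an anchor built from the planted signal:

```python
import numpy as np
from scipy.optimize import linprog

def phasemax(A, b, x_anchor):
    """PhaseMax relaxation of real-valued phase retrieval as a linear program:
    maximize <x_anchor, x> subject to |a_i^T x| <= b_i for every measurement."""
    A_ub = np.vstack([A, -A])                 # a_i^T x <= b_i and -a_i^T x <= b_i
    b_ub = np.concatenate([b, b])
    res = linprog(c=-x_anchor, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * A.shape[1])
    return res.x

# tiny demo: recover a planted signal (up to global sign)
rng = np.random.default_rng(1)
x_true = rng.normal(size=5)
A = rng.normal(size=(40, 5))                  # oversampled so recovery succeeds
b = np.abs(A @ x_true)                        # magnitude-only measurements
x_hat = phasemax(A, b, x_anchor=x_true + 0.3 * rng.normal(size=5))
```

With enough random measurements and an anchor correlated with the true signal, the LP optimum aligns with the planted vector up to sign, which is exactly the guarantee the PhaseMax theory formalizes.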
20180131T150000
20180131T160000
0
Data Science Seminar: Tom Goldstein (University of Maryland, College Park) @ Gilman 219
free
ai1ec-10861@engineering.jhu.edu/ams
20180716T042154Z
Title: Approximating Minimal Cut-Generating Functions by Extreme Functions
Abstract:
With applications in scheduling, networks, and generalized assignment problems, integer programs are ubiquitous across engineering disciplines. Often, integer programming algorithms make use of strategically chosen cutting planes in order to trim the region bounded by the linear constraints without removing any feasible points. Recently, there has been a resurgence of interest in the theory of (minimal) cut-generating functions, as such functions can be used to produce high-quality cuts. Moreover, the family of minimal functions forms a convex set; in order to better understand this class of functions, we wish to study the extreme functions of this set. In this talk, we shall see that the set of continuous minimal cut-generating functions contains a dense subset of extreme functions.
20180201T133000
20180201T143000
0
AMS Seminar: Teresa Lebair (JHU) @ Whitehead 304
free
ai1ec-10862@engineering.jhu.edu/ams
20180716T042154Z
Title: Limit theorems for eigenvectors of the normalized Laplacian for random graphs
Abstract:
We prove a central limit theorem for the components of the eigenvectors corresponding to the d largest eigenvalues of the normalized Laplacian matrix of a finite dimensional random dot product graph. As a corollary, we show that for stochastic blockmodel graphs, the rows of the spectral embedding of the normalized Laplacian converge to multivariate normals and furthermore the mean and the covariance matrix of each row are functions of the associated vertex’s block membership. Together with prior results for the eigenvectors of the adjacency matrix, we then compare, via the Chernoff information between multivariate normal distributions, how the choice of embedding method impacts subsequent inference. We demonstrate that neither embedding method dominates with respect to the inference task of recovering the latent block assignments.
20180208T133000
20180208T143000
0
AMS Seminar: Minh Hai Tang (JHU) @ Whitehead 304
free
ai1ec-10997@engineering.jhu.edu/ams
20180716T042154Z
Title: Monotonicity of optimal contracts without the first-order approach
Abstract:
We develop a simple sufficient condition for an optimal contract in a moral hazard problem to be monotone in the output signal. Existing results on monotonicity require conditions on the output distribution (namely, the monotone likelihood ratio property (MLRP)) and additional conditions to guarantee that the agent’s problem is amenable to the first-order approach of replacing that problem with its first-order conditions. We know of no positive monotonicity results in settings where the first-order approach does not apply. Indeed, it is well documented that when there are finitely many possible outputs and the first-order approach does not apply, the MLRP alone is insufficient to guarantee monotonicity. However, we show that when there is an interval of possible output signals, the MLRP does suffice to establish monotonicity under additional technical assumptions that do not guarantee the validity of the first-order approach.
This is joint work with Rongzhu Ke (Hong Kong Baptist University).
20180209T090000
20180209T100000
0
Optimization Seminar: Christopher Ryan (University of Chicago) @ Whitehead 304
free
ai1ec-10863@engineering.jhu.edu/ams
20180716T042154Z
Title: Maximum Likelihood Density Estimation under Total Positivity
Abstract: Nonparametric density estimation is a challenging problem in theoretical statistics—in general the maximum likelihood estimate (MLE) does not even exist! Introducing shape constraints allows a path forward. This talk offers an invitation to non-parametric density estimation under total positivity (i.e. log-supermodularity) and log-concavity. Totally positive random variables are ubiquitous in real world data and possess appealing mathematical properties. Given i.i.d. samples from such a distribution, we prove that the maximum likelihood estimator under these shape constraints exists with probability one. We characterize the domain of the MLE and show that it is in general larger than the convex hull of the observations. If the observations are 2-dimensional or binary, we show that the logarithm of the MLE is a tent function (i.e. a piecewise linear function) with “poles” at the observations, and we show that a certain convex program can find it. In the general case the MLE is more complicated. We give necessary and sufficient conditions for a tent function to be concave and supermodular, which characterizes all the possible candidates for the MLE in the general case.
20180215T133000
20180215T143000
0
AMS Seminar: Dr. Elina Robeva (MIT) @ Whitehead 304
free
ai1ec-11000@engineering.jhu.edu/ams
20180716T042154Z
Title: The Learning Premium
Abstract: We find equilibrium stock prices and interest rates in a representative-agent model with uncertain dividend growth, gradually revealed by the dividends themselves, where asset prices are rational: they reflect current information and anticipate the impact of future knowledge on future prices. In addition to the usual premium for risk, stock returns include a learning premium, which reflects the expected change in prices from new information. In the long run, the learning premium vanishes, as prices and interest rates converge to their counterparts in the standard setting with known growth. The model explains the increase in price-dividend ratios over the past century if both relative risk aversion and the elasticity of intertemporal substitution are above one. This is joint work with Paolo Guasoni.
20180220T133000
20180220T143000
0
Financial Math Seminar: Maxim Bichuch (JHU) @ Whitehead 304
free
ai1ec-11096@engineering.jhu.edu/ams
20180716T042154Z
Title: Data-driven modeling of vector fields and differential forms by spectral exterior calculus
Abstract: We discuss a data-driven framework for exterior calculus on manifolds. This framework is based on representations of vector fields, differential forms, and operators acting on these objects in frames (overcomplete bases) for L^2 and higher-order Sobolev spaces built entirely from the eigenvalues and eigenfunctions of the Laplacian on functions. Using this approach, we represent vector fields either as linear combinations of frame elements, or as operators on functions via matrices. In addition, we construct a Galerkin approximation scheme for the eigenvalue problem for the Laplace-de Rham operator on 1-forms, and establish its spectral convergence. We present applications of this scheme to a variety of examples involving data sampled on smooth manifolds and the Lorenz 63 fractal attractor. This work is in collaboration with Tyrus Berry.
20180221T150000
20180221T160000
0
Data Science Seminar: Dimitris Giannakis (NYU) @ Shaffer 304
free
ai1ec-10630@engineering.jhu.edu/ams
20180716T042154Z
Title: Information, Computation, Optimization: Connecting the dots in the Traveling Salesman Problem
Abstract: Few math models scream impossible as loudly as the traveling salesman problem. Given n cities, the TSP asks for the shortest route to take you to all of them. Easy to state, but if P ≠ NP then no solution method can have good asymptotic performance as n goes off to infinity. The popular interpretation is that we simply cannot solve realistic examples. But this skips over nearly 70 years of intense mathematical study. Indeed, in 1949 Julia Robinson described the TSP challenge in practical terms: “Since there are only a finite number of paths to consider, the problem consists in finding a method for picking out the optimal path when n is moderately large, say n = 50.” She went on to propose a linear programming attack that was adopted by her RAND colleagues Dantzig, Fulkerson, and Johnson several years later.
Following in the footsteps of these giants, we show that a certain tour of 49,603 historic sites in the US is shortest possible, measuring distance with point-to-point walking routes obtained from Google Maps. Along the way, we discuss the history, applications, and computation of this fascinating problem.
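Robinson's "finite number of paths" observation is easy to demonstrate at toy scale. The exhaustive search below (my illustration, unrelated to the 49,603-site computation) also shows why enumeration fails beyond small n: the number of distinct tours grows factorially.

```python
from itertools import permutations
import math

def tour_length(points, order):
    """Length of the closed tour visiting points in the given cyclic order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force_tsp(points):
    """Exact TSP by enumerating Robinson's 'finite number of paths'.
    Fixing city 0 removes rotational symmetry, but there are still
    (n - 1)! orderings, so this is only feasible for very small n."""
    n = len(points)
    best = min(permutations(range(1, n)),
               key=lambda p: tour_length(points, (0,) + p))
    return (0,) + best, tour_length(points, (0,) + best)
```

Modern exact solvers replace this enumeration with the linear programming relaxation and cutting planes pioneered by Dantzig, Fulkerson, and Johnson.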
Biographical Sketch
William Cook is a University Professor in Combinatorics and Optimization at the University of Waterloo, where he received his Ph.D. in 1983. Bill was elected a SIAM Fellow in 2009, an INFORMS Fellow in 2010, a member of the National Academy of Engineering in 2011, and an American Mathematical Society Fellow in 2012. He is the author of the popular book In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation. Bill is a former Editor-in-Chief of the journals Mathematical Programming (Series A and B) and Mathematical Programming Computation. He is the past chair and current vice-chair of the Mathematical Optimization Society and a past chair of the INFORMS Computing Society.
20180222T133000
20180222T143000
0
The Goldman Distinguished Lecture Series: William Cook (University of Waterloo) @ Shaffer 100
free
ai1ec-10858@engineering.jhu.edu/ams
20180716T042154Z
Title: PetuumMed: algorithm and system for EHR-based medical decision-making
Abstract:
With the rapid growth of electronic health records (EHRs) and the advancement of machine learning technologies, the need for AI-enabled clinical decision-making support is emerging. In this talk, I will present recent work toward meeting this need at Petuum Inc., where we are building an integrative system that distills insights from large-scale, heterogeneous patient data, learns and integrates medical knowledge from broader sources such as the literature and domain experts, and empowers medical professionals to make accurate and efficient decisions within the clinical workflow. I will discuss several aspects of practical clinical decision support, such as real-time information extraction from clinical notes and images, diagnosis and treatment recommendation, automatic report generation, and ICD code filling, along with the algorithmic and computational challenges behind production-quality solutions to these problems.
20180301T133000
20180301T143000
0
AMS Seminar: Eric Xing (Carnegie Mellon) @ Whitehead 304
free
ai1ec-11131@engineering.jhu.edu/ams
20180716T042154Z
Title: Variance swap
20180306T133000
20180306T143000
0
Financial Math Seminar: John Miller (JHU) @ Whitehead 304
free
ai1ec-10865@engineering.jhu.edu/ams
20180716T042154Z
Title: Comparing relaxations via volume for nonconvex optimization
Abstract: Practical exact methods for global optimization of mixed-integer nonlinear optimization formulations rely on convex relaxation. Then, one way or another (via refinement and/or disjunction), global optimality is sought. Success of this paradigm depends on balancing tightness and lightness of relaxations. We will investigate this from a mathematical viewpoint, comparing polyhedral relaxations via their volumes. Specifically, I will present some results concerning: fixed charge problems, vertex packing in graphs, Boolean quadratic formulations, and convexification of monomials in the context of “spatial branch-and-bound” for factorable formulations. Our results can be employed by users (at the modeling level) and by algorithm designers/implementers alike.
20180308T133000
20180308T143000
0
AMS Seminar: Jon Lee (University of Michigan) @ Whitehead 304
free
ai1ec-11106@engineering.jhu.edu/ams
20180716T042154Z
TBA
20180314T150000
20180314T160000
0
Data Science Seminar: Edriss Titi (Texas A&M University) @ Shaffer 304
free
ai1ec-10867@engineering.jhu.edu/ams
20180716T042154Z
Title: Bayesian monotone regression: Rates, coverage and tests
Abstract:
Shape restrictions such as monotonicity often arise naturally. In this talk we consider a Bayesian approach to monotone nonparametric regression with normal errors. We assign a prior through piecewise constant functions and impose a conjugate normal prior on the coefficients. Since the resulting functions need not be monotone, we project samples from the posterior onto the allowed parameter space to construct a “projection posterior”. We first obtain contraction rates of the projection posterior distributions under various settings. We next obtain the limiting distribution of a suitably centered and scaled posterior for the function value at a point. The limit distribution has some interesting similarities and differences with the corresponding limit distribution for the maximum likelihood estimator. By comparing the quantiles of these two distributions, we observe an interesting new phenomenon: the coverage of a credible interval may exceed the credibility level, the exact opposite of a phenomenon observed by Cox for smooth regression. We describe a recalibration strategy to modify the credible interval to meet the correct level of coverage. Finally, we discuss asymptotic properties of Bayes tests for monotonicity.
This talk is based on joint work with Moumita Chakraborty, a doctoral student at North Carolina State University.
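The projection step described above (mapping a posterior draw to the nearest nondecreasing function in L2) is computed by the classical Pool Adjacent Violators algorithm. The sketch below is illustrative, not the authors' code:

```python
def pava(y):
    """Project a sequence y onto nondecreasing sequences in least squares
    (isotonic regression) via Pool Adjacent Violators: scan left to right,
    merging adjacent blocks that violate monotonicity into their mean."""
    stack = []                      # blocks of (block_mean, block_size)
    for v in y:
        mean, cnt = float(v), 1
        while stack and stack[-1][0] > mean:
            m2, c2 = stack.pop()    # merge with the previous block
            mean = (mean * cnt + m2 * c2) / (cnt + c2)
            cnt += c2
        stack.append((mean, cnt))
    out = []
    for mean, cnt in stack:
        out.extend([mean] * cnt)    # expand blocks back to a full sequence
    return out
```

For example, `pava([3, 1, 2])` pools all three values into their mean, returning `[2.0, 2.0, 2.0]`, while an already monotone input is returned unchanged.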
20180315T133000
20180315T143000
0
AMS Seminar: Subhashis Ghoshal (North Carolina State University) @ Whitehead 304
free
ai1ec-11155@engineering.jhu.edu/ams
20180716T042154Z
Presentation – Career Opportunities at the NSA Targeting AMS, CS, ECE, JHUISI
When: Wednesday, March 28, 2018 5:30 pm – 8:00 pm EDT
Where: Hodson, 110, Baltimore, MD 21218, United States
The National Security Agency (NSA) currently has opportunities for highly motivated researchers to provide expertise, guidance and support to the development and implementation of mission capabilities that align with mission driven challenges.
The Advanced Computing Systems Research Program (ACS) at NSA is looking to hire talented researchers for a variety of positions. The ACS mission is to collaborate with industry, academia, and the government to drive innovative research that will improve advanced computing systems for a range of mission applications including cybersecurity, cryptanalysis, and complex data analytics. The ACS has significant research projects in neuromorphic and probabilistic computing, novel computer architectures and technologies, advanced modeling and simulation, energy efficiency, productivity, and resilience.
Dr. David J. Mountain will describe opportunities at the NSA, using his 36-year career as an example. He will also provide an overview of the ACS program and highlight specific research positions currently available. This will be followed by an open Q&A.
Dr. Mountain is the Senior Technical Director at the Laboratory for Physical Sciences at Research Park, a Department of Defense research lab in Catonsville, MD. He received a BS in Electrical Engineering from the University of Notre Dame in 1982, an MS in Electrical Engineering from the University of Maryland, College Park, in 1986, and a PhD in Computer Engineering from the University of Maryland, Baltimore County, in 2017. His personal research projects have included radiation effects studies, hot carrier reliability characterization, and chip-on-flex process development utilizing ultra-thin circuits. He has been actively involved with 3D electronics research for 25 years and is presently focused on specialized architectures to support advanced neural networks and tensor analysis. Dr. Mountain is the author of more than two dozen papers, has been awarded eight patents, and is a Senior Member of the IEEE.
Students RSVP @ https://app.joinhandshake.com/events/135261
Faculty and Staff RSVP @ https://www.eventbrite.com/e/career-opportunities-at-the-nsa-targeting-cs-ams-ece-jhuisi-information-session-tickets-43894265931?aff=affiliate1
Note:
*Food will be served at this event during the first 30 minutes. Please arrive early. The program will begin promptly at 6 pm!*
For additional information about this event, please contact Dr. Antwan D. Clark at aclark66@jhu.edu.
20180328T173000
20180328T200000
0
Dr. David J. Mountain will present on Career Opportunities at the NSA in Hodson 110
free
ai1ec-10871@engineering.jhu.edu/ams
20180716T042154Z
Title: Real and Artificial Neural Networks
Abstract: Lately, just about everybody has been thinking about deep neural networks (DNNs). Do they work? If so, how? Do they overfit? If not, why? I will discuss these questions and suggest some uncomplicated answers, at least as a first approximation. Turning to biological learning, I will argue that the stubborn gap between human and machine performance, when it comes to interpretation (as opposed to classification), cannot be substantially closed without architectures that support stronger representations. In particular, how are we to accommodate the rich collection of spatial and abstract relationships (‘on’ or ‘inside’, ‘talking’ or ‘holding hands’, ‘same’ or ‘different’) that bind parts and objects and define context? I will propose that the nonlinearities of dendritic integration in real neurons are the missing ingredient in artificial neurons. I will suggest a mechanism for embedding relationships in a generative network.
20180329T133000
20180329T143000
0
Duncan Lecture Series- AMS Seminar: Stuart Geman (Brown University) @ Shaffer 100
free
ai1ec-10874@engineering.jhu.edu/ams
20180716T042154Z
Title: Random Walks on Secondary Structure and the Folding of RNA
Abstract:
What is the correct characterization of the native structure of a protein or a non-coding RNA molecule? Is it the minimum energy state, a metastable state, or a sample from thermal equilibrium? The problem is unsettled and a topic of enduring debate. The “agnostic” approach is through molecular dynamics—make a proper accounting of the intramolecular forces and interactions with the surrounding liquid (mostly water), write down the corresponding kinetic equation (e.g. a Langevin equation), and simulate folding. But this usually isn’t practical, and approximations need to be made. I will explore an approximation in which the bulk of the folding process is replaced by a random walk, with discrete moves from one secondary structure to another. An analytic result identifies conditions for guaranteed accuracy. Simulation results, running the approximation against an optimized integrator of the Langevin equation, achieve the expected accuracy, but 100 times faster.
20180330T133000
20180330T143000
0
Duncan Lecture Series AMS Seminar: Stuart Geman (Brown University) @ Shaffer 100
free
ai1ec-11203@engineering.jhu.edu/ams
20180716T042154Z
Title: Machine Learning For Health Care
Abstract:
We will present multiple ways in which healthcare data is acquired and machine learning methods are currently being introduced into clinical settings.
This will include:
1) Modeling the prediction of disease, including Sepsis, and ways in which the best treatment decisions for Sepsis patients can be made, from electronic health record (EHR) data using Gaussian processes and deep learning methods.
2) Predicting surgical complications and transfer learning methods for combining databases.
3) Using mobile apps and integrated sensors for improving the granularity of recorded health data for chronic conditions.
Current work in these areas will be presented and the future of machine learning contributions to the field will be discussed.
20180404T120000
20180404T130000
0
AMS & BME Presents Speaker: Katherine Heller @ Shaffer 101
free
ai1ec-10878@engineering.jhu.edu/ams
20180716T042154Z
Title: Adaptive Robust Control Under Model Uncertainty
Abstract: We propose a new methodology, called adaptive robust control, for solving a discrete-time Markovian control problem subject to Knightian uncertainty. We apply the general framework to a financial hedging problem where the uncertainty comes from the fact that the true law of the underlying model is only known to belong to a certain family of probability laws. We provide a learning algorithm that reduces the model uncertainty through progressive learning about the unknown system. One of the pillars of the proposed methodology is the recursive construction of confidence sets for the unknown parameter. This allows us, in particular, to establish the corresponding Bellman system of equations.
20180405T133000
20180405T143000
0
AMS Seminar: Igor Cialenco (Illinois Institute of Technology) @ Whitehead 304
free
ai1ec-11235@engineering.jhu.edu/ams
20180716T042154Z
Mark your calendars for the 5th USA Science & Engineering Festival Expo on April 7-8, 2018! Explore 3,000 hands-on exhibits from the world’s leading scientific and engineering societies, universities, government agencies, high-tech corporations and STEM organizations. The two-day Expo is perfect for children, teens, and families who want to inspire their curious minds.
Where: Walter E. Washington Convention Center
When: Saturday 10 am- 6 pm and Sunday 10 am- 4 pm
Join 350K attendees to celebrate science at the Expo and engage in activities with some of the biggest names in STEM. Hear stories of inspiration and courage, participate in mind-boggling experiments and rock out to science during our incredible stage shows.
20180407T100000
20180408T170000
0
USA Science & Engineering Festival Expo on April 7-8, 2018 in Washington, D.C.
free
thumbnail;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2018/04/USAEXPO_logo_fnl-hp.png;183;144,medium;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2018/04/USAEXPO_logo_fnl-hp.png;183;144,large;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2018/04/USAEXPO_logo_fnl-hp.png;183;144,full;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2018/04/USAEXPO_logo_fnl-hp.png;183;144
ai1ec-11249@engineering.jhu.edu/ams
20180716T042154Z
Date: Monday, April 9th, 2018
Time: 7:00 pm: Enjoy refreshments and snacks with students and faculty.
7:30 pm: The Mathemagics performance begins!
Location: Johns Hopkins University, Homewood Campus; Hodson Hall, Room 110
RSVP: Please fill out the form at: https://tinyurl.com/artbenjhusam
Or E-mail us at husam.jhu@gmail.com
20180409T190000
20180409T210000
0
HUSAM Presents: Mathemagics Night with Dr. Arthur Benjamin @Hodson Hall rm 110
free
thumbnail;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2018/04/ArtBenjaminposter-230x300.jpg;503;656,medium;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2018/04/ArtBenjaminposter-230x300.jpg;503;656,large;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2018/04/ArtBenjaminposter-230x300.jpg;503;656,full;https://engineering.jhu.edu/ams/wp-content/uploads/sites/44/2018/04/ArtBenjaminposter-230x300.jpg;503;656
ai1ec-10983@engineering.jhu.edu/ams
20180716T042154Z
Title: TBA
Abstract: TBA
20180410T133000
20180410T143000
0
Financial Math Seminar: Sebastien Bossu (NYU Courant & JHU Carey School) @ Whitehead Hall 304
free
ai1ec-10879@engineering.jhu.edu/ams
20180716T042154Z
Title: Optimal Portfolio under Fractional Stochastic Environment
Abstract:
Rough stochastic volatility models have attracted a lot of attention recently, in particular for the linear option pricing problem. In this talk, starting with power utilities, we propose to use a martingale distortion representation of the optimal value function for the nonlinear asset allocation problem in a (non-Markovian) fractional stochastic environment (for any Hurst index $H \in (0, 1)$). We rigorously establish a first-order approximation of the optimal value when the return and volatility of the underlying asset are functions of a stationary, slowly varying fractional Ornstein-Uhlenbeck process. We prove that this approximation can also be generated by the zeroth-order trading strategy, providing an explicit strategy which is asymptotically optimal among all admissible controls. Furthermore, we extend the discussion to general utility functions and obtain the asymptotic optimality of this strategy within a specific family of admissible strategies. If time permits, we will also discuss the problem in a fast mean-reverting fractional stochastic environment.
20180412T133000
20180412T143000
0
AMS Seminar: Jean-Pierre Fouque (UCSB) @ Whitehead 304
free
ai1ec-11005@engineering.jhu.edu/ams
20180716T042154Z
Title: On the joint calibration of SPX and VIX options
Abstract: Since VIX options started trading in 2006, many researchers have attempted to build a model for the SPX that jointly calibrates to SPX and VIX options. In 2008, Jim Gatheral showed that a diffusive model could approximately, but not exactly, fit both markets. Later, others have argued that jumps in the SPX were needed to jointly calibrate both markets. We revisit this problem, asking the following questions: Does there exist a continuous model on the SPX that jointly calibrates to SPX options, VIX futures, and VIX options? If so, how to build one such model? If not, why? We present a novel approach based on the SPX smile calibration condition. In the limiting case of instantaneous VIX, the answers are clear and involve the timewise convex ordering of two distributions (local variance and instantaneous variance) and a novel application of martingale transport to finance. The real case of a 30-day VIX is more involved, as time-averaging and projection onto a filtration can undo convex ordering. We observe that in usual market conditions the distribution of VIX^2 in the local volatility model and the market-implied distribution of VIX^2 are not in convex order, and we show that fast mean-reverting volatility models and rough volatility models are able to reproduce this surprising behavior.
20180417T133000
20180417T143000
0
Financial Math Seminar: Dr. Julien Guyon (Columbia & NYU ) @ Whitehead 304
free
ai1ec-10880@engineering.jhu.edu/ams
20180716T042154Z
Title: Characterizing the Worst-Case Performance of Algorithms for Nonconvex Optimization
Abstract: Motivated by various applications, e.g., in data science, there has been increasing interest in numerical methods for minimizing nonconvex functions. Users of such methods often choose one algorithm versus another due to worst-case complexity guarantees, which in contemporary analyses bound the number of iterations required until a first- or second-order stationarity condition is approximately satisfied. In this talk, we question whether this is indeed the best manner in which to compare algorithms, especially since the worst-case behavior of an algorithm is often only seen when it is employed to minimize pedagogical examples which are quite distinct from functions seen in normal practice. We propose a new strategy for characterizing algorithms that attempts to better represent algorithmic behavior in real-world settings.
20180419T133000
20180419T143000
0
AMS Seminar: Frank Curtis (Lehigh University) @ Whitehead 304
free
ai1ec-11294@engineering.jhu.edu/ams
20180716T042154Z
Title: Merchant Options of Energy Trading Network
20180424T133000
20180424T143000
0
Financial Mathematics Seminar: Nicole Secomandi @ Whitehead 304
free
ai1ec-10884@engineering.jhu.edu/ams
20180716T042154Z
Title: Statistical network modeling via exchangeable interaction processes
Abstract:
Many modern network datasets arise from processes of interactions in a population, such as phone calls, e-mail exchanges, co-authorships, and professional collaborations. In such interaction networks, the interactions comprise the fundamental statistical units, making a framework for interaction-labeled networks more appropriate for statistical analysis. In this talk, we present exchangeable interaction network models and explore their basic statistical properties. These models allow for sparsity and power-law degree distributions, both of which are widely observed empirical network properties. I will start by presenting the Hollywood model, which is computationally tractable, admits a clear interpretation, exhibits good theoretical properties, and performs reasonably well in estimation and prediction.
In many settings, the interactions are structured. E-mail exchanges, for example, have a single sender and potentially multiple receivers. I will introduce hierarchical exchangeable interaction models for the study of structured interaction networks. In particular, I will introduce the Enron model as a canonical example, which partially pools information via a latent, shared population-level distribution. A detailed simulation study and supporting theoretical analysis provide a clear model interpretation and establish global power-law degree distributions. A computationally tractable Gibbs sampling algorithm is derived. Inference will be shown on the Enron e-mail dataset. I will end with a discussion of how to perform posterior predictive checks on interaction data. Using these proposed checks, I will show that the model fits the data well.
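A minimal sketch of the kind of sequential urn scheme that underlies the Hollywood model (a simplified reading, with hypothetical parameter names: alpha for the discount, theta for the concentration): each actor slot in a new interaction is filled by an existing vertex with probability proportional to its degree minus alpha, or by a brand-new vertex otherwise, which produces sparse graphs with power-law degrees.

```python
import random

def hollywood_sample(n_interactions, arity=2, alpha=0.5, theta=1.0, seed=0):
    """Sample an interaction sequence from a simplified Hollywood-style urn:
    existing vertex v is chosen with weight deg(v) - alpha; a new vertex
    enters with weight theta + alpha * (number of vertices so far)."""
    rng = random.Random(seed)
    degree = []        # degree[v] = number of slots filled by vertex v
    interactions = []
    for _ in range(n_interactions):
        edge = []
        for _ in range(arity):
            V = len(degree)
            # weights sum to (slots filled so far) + theta
            r = rng.uniform(0, sum(degree) + theta)
            acc, chosen = 0.0, None
            for v, d in enumerate(degree):
                acc += d - alpha
                if r < acc:
                    chosen = v
                    break
            if chosen is None:          # new vertex enters the population
                chosen = V
                degree.append(0)
            degree[chosen] += 1
            edge.append(chosen)
        interactions.append(tuple(edge))
    return interactions, degree

interactions, degree = hollywood_sample(2000)
# Sparsity: the vertex count grows sublinearly in the number of slots filled.
print(len(degree), sum(degree))
```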
20180426T133000
20180426T143000
0
AMS Seminar: Walter Dempsey (Harvard University) @ Whitehead 304
free
ai1ec-10888@engineering.jhu.edu/ams
20180716T042154Z
Title: Computational Anatomy: Structuring and Searching Shape Spaces.
Abstract: One hundred years after D’Arcy Thompson’s celebrated masterpiece “On Growth and Form”, modeling and understanding both the variability and the dynamics of related biological shapes remain particularly challenging from both a modeling and a computational point of view. In the digital era, the luminous idea of his “Theory of Transformations” has been turned into a versatile mathematical and computational framework, coined diffeomorphometry, living in the vicinity of Riemannian geometry, fluid dynamics, optimal control, and statistics. We will discuss the mathematical side of this framework as well as some of the challenges that still need to be faced.
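The diffeomorphometric deformations the abstract refers to are typically built by flowing points along smooth, kernel-parametrized velocity fields. A heavily simplified sketch (forward Euler integration, control points and momenta held fixed, hypothetical parameter choices; full LDDMM also evolves the controls and momenta via Hamiltonian equations):

```python
import numpy as np

def gaussian_kernel(x, c, sigma=1.0):
    """Pairwise Gaussian kernel K(x_i, c_j) = exp(-|x_i - c_j|^2 / (2 sigma^2))."""
    d2 = ((x[:, None, :] - c[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def flow(points, controls, momenta, steps=20, dt=0.05, sigma=1.0):
    """Advect 'points' along the velocity field v(x) = sum_j K(x, c_j) p_j,
    integrating dx/dt = v(x) with forward Euler. The field is smooth, so the
    resulting map is an approximate diffeomorphism for small enough dt."""
    x = points.copy()
    for _ in range(steps):
        x = x + dt * gaussian_kernel(x, controls, sigma) @ momenta
    return x

# A point at the origin, pushed rightward by a single control/momentum pair.
pts = np.array([[0.0, 0.0]])
ctrl = np.array([[0.0, 0.0]])
mom = np.array([[1.0, 0.0]])
print(flow(pts, ctrl, mom))  # moves right; drifts slower as it leaves the control
```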
20180503T133000
20180503T143000
0
AMS Seminar: Alain Trouvé (École Normale Supérieure) @ Whitehead 304
free