**Title:** Shape Spaces of Curves

**Abstract:** The talk will discuss results, old and new, on a class of metrics on length-normalized curves in d dimensions, represented by their unit tangents expressed as functions of arc length, i.e., as functions from the unit interval to the d-dimensional unit sphere. These metrics are derived from the combined action of diffeomorphisms (changes of parameter) and arc-length-dependent rotations acting on the tangent. Minimizing a Riemannian metric balancing a right-invariant metric on diffeomorphisms and an L2 norm on the motion of tangents leads to a special case of “metamorphosis,” for which the computation of geodesic distances can be dramatically simplified after a suitable transformation of the curves into elements of a Hilbert sphere.
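As background, one well-known transformation of this kind is the square-root velocity map (a standard construction sketched here for context; it is an assumption on my part that this matches the transformation used in the talk):

```latex
% Square-root velocity (SRV) map -- background sketch, notation assumed.
q(t) = \frac{\dot c(t)}{\sqrt{\lvert \dot c(t)\rvert}},
\qquad
\lVert q \rVert_{L^2}^2 = \int_0^1 \lvert \dot c(t)\rvert \, dt
  = \operatorname{length}(c) = 1 .
% Hence each length-normalized curve maps to a point on the unit sphere of
% L^2([0,1], R^d), where the geodesic distance has the closed form
d(q_1, q_2) = \arccos\!\left( \int_0^1 q_1(t) \cdot q_2(t)\, dt \right).
```

Under such a map, computing a geodesic distance between curves reduces to an arccosine of an inner product on the Hilbert sphere.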

**Title:** Complexity in Simple Cross-Sectional Data with Binary Disease Outcome

**Abstract:** Cross-sectionally sampled data with a binary disease outcome are commonly collected and analyzed in observational studies to understand how covariates correlate with disease occurrence. At Hopkins SPH and SOM, cross-sectional data analyses are also commonly included in master's and doctoral dissertations. This talk addresses two questions: (1) Which risk can be identified in a commonly adopted model (such as the logistic model)? (2) Are there problems when interpreting the identifiable risk? As the progression of a disease typically involves both disease status and duration, this talk considers how the binary disease outcome is connected to the progression of disease through the birth-illness-death process. In general, we conclude that the distribution of the cross-sectional binary outcome can be very different from the population risk distribution. The cross-sectional risk probability is determined jointly by the population risk probability and the ratio of the duration of the diseased state to the duration of the disease-free state. Using the logistic model as an illustrative example, we examine the bias arising from cross-sectional data and argue that it can almost never be avoided. We present an approach that treats the binary outcome as a specific type of current-status data and offers a compromise model based on an age-specific risk probability (ARP), though the interpretation of the ARP itself can also be questioned. An analysis of Alzheimer's disease data is presented to illustrate the ARP approach and the data's complexity. (This is joint work with Yuchen Yang, Department of Biostatistics, Johns Hopkins University.)
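The duration-ratio effect can be seen in a toy steady-state simulation (illustrative only; the risk and sojourn times below are made-up numbers, not from the talk): sampling person-time cross-sectionally over-represents whichever state lasts longer.

```python
import random

random.seed(42)

# Hypothetical illustration: population risk p, fixed sojourn times.
p, d_free, d_dis = 0.5, 10.0, 10.0     # risk, disease-free years, diseased years
population_risk = p

# Closed form: fraction of person-time spent in the diseased state.
cs_exact = p * d_dis / (d_free + p * d_dis)   # = 1/3 here, not 1/2

# Monte Carlo: sample a uniformly random moment of person-time.
# The rejection step makes longer life courses proportionally more likely.
hits = trials = 0
for _ in range(100_000):
    sick = random.random() < p
    total = d_free + (d_dis if sick else 0.0)
    if random.random() < total / (d_free + d_dis):  # length-biased acceptance
        t = random.uniform(0.0, total)
        trials += 1
        hits += sick and t >= d_free
cs_estimate = hits / trials
```

With these numbers the cross-sectional probability of observing disease is about 1/3 even though the population risk is 1/2, matching the talk's point that the two distributions can differ substantially.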

**Speaker:** Jalaj Upadhyay (Computer Science), Mathematics Seminar, February 11, 2019, Whitehead 304.

**Title:** Towards Robust and Scalable Private Data Analysis

**Abstract:** In the current age of big data, we are constantly creating new data which are analyzed by various platforms to improve services and the user experience. Given the sensitive and confidential nature of these data, there are obvious security and privacy concerns in storing and analyzing them. In this talk, I will discuss the fundamental challenges in providing robust security and privacy guarantees while storing and analyzing large data. I will also give a brief overview of my contributions and future plans towards addressing these challenges.

To give a glimpse of these challenges in providing a robust privacy guarantee known as differential privacy, I will use spectral sparsification of graphs as an example. Given the ubiquitous nature of graphs, differentially private analysis on graphs has gained a lot of interest. However, existing algorithms for these analyses are tailor-made for the task at hand, making them infeasible in practice. In this talk, I will present a novel differentially private algorithm that outputs a spectral sparsification of the input graph. At the core of this algorithm is a method to privately estimate the importance of an edge in the graph. Prior to this work, there was no known privacy-preserving method that provides such an estimate or a spectral sparsification of graphs.

Since many graph properties are defined by the spectrum of the graph, this work has many analytical as well as learning-theoretic applications. To demonstrate some applications, I will show more efficient and accurate analyses of various combinatorial problems on graphs and the first technique to perform privacy-preserving manifold learning on graphs.
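The sparsification algorithm itself is beyond a short sketch, but the basic differential-privacy building block such methods rely on, calibrated noise addition, fits in a few lines (a generic textbook sketch of the Laplace mechanism, not the speaker's algorithm):

```python
import math
import random

random.seed(0)

def laplace_mechanism(true_value, epsilon, sensitivity=1.0):
    """Release true_value + Lap(sensitivity/epsilon) noise.

    Standard epsilon-differentially-private release of a query whose
    output changes by at most `sensitivity` when one record changes.
    """
    b = sensitivity / epsilon
    u = random.random() - 0.5          # uniform on (-0.5, 0.5)
    sgn = 1.0 if u >= 0 else -1.0
    return true_value - b * sgn * math.log(1.0 - 2.0 * abs(u))

# A private count (e.g. of edges): noisy answers are unbiased but never exact.
true_count = 100
answers = [laplace_mechanism(true_count, epsilon=1.0) for _ in range(20_000)]
mean_answer = sum(answers) / len(answers)
```

Averaged over many releases the noise cancels, while any single release hides the contribution of any one record.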

**Speaker:** Bala Krishnamorthy (Washington State University), AMS Seminar, February 14, 2019, Whitehead 304.

**Title:** Optimization and Topology: Two Stories

**Abstract:** Algebraic topology and optimization are typically not considered closely related fields of mathematics. We will present two stories of fruitful interaction between these two fields, with the implications going in opposite directions in the two cases.

In the first result, we consider the question of the existence of certain nice decompositions of generalized surfaces called currents in geometric measure theory. In the finite setting, we could use tools from algebraic topology to pose this question as that of the existence of integer solutions to a certain linear programming (LP) problem. Following classical results on LP that rely on total unimodularity (TU) of matrices, the answer is known in codimension 1. We develop tools to push this result to the infinite case, showing that under certain assumptions the TU result from LP implies the existence result for codimension-1 currents in general.

In the second story, we consider new approaches to characterizing the robustness of solutions to a system of nonlinear equations. This problem arises in many applications such as the power grid and other infrastructure networks. We use techniques from algebraic topology (topological degree theory) to characterize the robustness margin of such systems of equations. We then cast the problem of checking for the specified conditions as a nonlinear optimization problem. Based on this formulation, we develop efficient computational techniques to estimate lower and upper bounds for the robustness margin.


**Speaker:** Vadim Elenev (JHU Carey Business School), AMS Seminar, February 21, 2019, Whitehead 304.

**Title:** Mortgage Credit, Aggregate Demand, and Unconventional Monetary Policy

**Abstract:** I develop a quantitative model of the mortgage market operating in an economy with financial frictions and nominal rigidities. I use this model to study the effectiveness of large-scale asset purchases (LSAPs) by a central bank as a tool of monetary policy. When negative shocks hit, homeowner and financial-sector balance sheets are impaired, borrowing constraints bind, and asset prices and aggregate demand drop, hampering the transmission of conventional monetary policy. LSAPs boost aggregate demand in a crisis by directing additional lending to homeowners, raising house prices, and establishing expectations of future financial stability. However, legacy household debt depresses output and consumption in the recovery. In the long run, a commitment to the ongoing use of LSAPs in crises reduces credit and business-cycle volatility and redistributes resources from borrowers and intermediaries to savers.

**Title:** Big Data is Low Rank

**Abstract:** Matrices of low rank are pervasive in big data, appearing in recommender systems, movie preferences, topic models, medical records, and genomics.

While there is a vast literature on how to exploit low-rank structure in these datasets, less attention has been paid to explaining why low-rank structure appears in the first place.

In this talk, we explain the abundance of low-rank matrices in big data by proving that certain latent variable models associated with piecewise analytic functions are of log-rank. Any large matrix from such a latent variable model can be approximated, up to a small error, by a low-rank matrix.

Armed with this theorem, we show how to use a low-rank modeling framework to exploit low-rank structure even for datasets that are not numeric, with applications in the social sciences, medicine, retail, and machine learning.
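The flavor of the theorem is easy to check numerically (a sketch with a made-up analytic kernel, not one of the talk's datasets): a large matrix whose entries come from a smooth latent variable model X[i, j] = f(a_i, b_j) is numerically low rank.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent variable model: row traits a_i, column traits b_j, analytic link f.
a = rng.uniform(size=300)
b = rng.uniform(size=200)
X = np.exp(-(a[:, None] - b[None, :]) ** 2)   # 300 x 200 matrix, entrywise analytic

# Best rank-k approximation via truncated SVD (Eckart-Young).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 5
X_k = (U[:, :k] * s[:k]) @ Vt[:k]

rel_err = np.linalg.norm(X - X_k) / np.linalg.norm(X)   # tiny despite k << min(m, n)
```

A rank-5 approximation of this 300 x 200 matrix is accurate to several digits, consistent with the log-rank behavior the abstract describes for analytic latent variable models.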

**Speaker:** Albert “Pete” Kyle (University of Maryland), AMS Seminar, March 7, 2019, Whitehead 304.

**Title:** Market Microstructure Invariance: A Dynamic Equilibrium Model

**Abstract:** Invariance relationships are derived in a dynamic, infinite-horizon, equilibrium model of adverse selection with risk-neutral informed traders, noise traders, risk-neutral market makers, and endogenous information production. Scaling laws for bet size and transaction costs require the assumption that the effort required to generate one bet does not vary across securities and time. Scaling laws for pricing accuracy and market resiliency require the additional assumption that private information has the same signal-to-noise ratio across markets. Prices follow a martingale with endogenously derived stochastic volatility. Returns volatility, pricing accuracy, market depth, and market resiliency are closely related to one another. The model solution depends on two state variables: stock price and hard-to-observe pricing accuracy. Invariance makes predictions operational by expressing them in terms of log-linear functions of easily observable variables such as price, volume, and volatility.

**Speaker:** Ann Lee (Carnegie Mellon University), AMS Seminar, March 14, 2019, Whitehead 304.

**Title:** Uncertainty Quantification and Nonparametric Inference for Complex Data and Simulations

**Abstract:** Recent technological advances have led to a rapid growth in not just the amount of scientific data but also their complexity and richness. Simulation models have, at the same time, become increasingly detailed and better at capturing the underlying processes that generate observable data. On the statistical methods front, however, we still lack tools that accurately quantify complex relationships between data and model parameters, as well as adequate tools to validate models of multivariate likelihoods and posteriors. In this talk, I will discuss our current work on addressing some of the multi-faceted challenges encountered in astronomy but more generally applicable to fields involving massive amounts of complex data and simulations; in particular, challenges related to (i) building conditional probability models that can handle inputs of different modalities, e.g., photometric data and correlation functions, (ii) estimating non-Gaussian likelihoods and posteriors via simulations, and (iii) assessing the performance of complex models and simulations when the true distributions are not known. I will draw examples from photometric redshift estimation and from the inference of cosmological parameters. (Part of this work is joint with Rafael Izbicki, Taylor Pospisil, Peter Freeman, Ilmun Kim, and the LSST-DESC PZ working group.)

**Title:** Guiding clinical and preclinical investigations of breast cancer with mathematical modeling and analyses

**Abstract:** One of the great challenges for cancer treatment is the inability to optimize therapy. Without a reasonable mathematical framework, our ability to select treatment regimens for the individual patient is fundamentally limited to trial and error. Presented here are examples of data-driven, integrated experimental-mathematical approaches to studying breast cancer's response to therapy in both preclinical and clinical investigations. The preclinical model, consisting of ODEs, connects various experiments for an *in vivo* mouse system to better understand the interactions of the immune response and targeted therapy for breast cancer. The clinical model is a 3D PDE system for predicting tumor response to neoadjuvant therapy using patient-specific data that lays the groundwork for optimizing chemotherapeutic dosing and scheduling. In both examples, the results of uncertainty and sensitivity analyses are discussed to show how they can be used to generate experimentally testable hypotheses, narrow the scope of experimental investigations, and evolve mathematical models. Additionally, multi-scale models are proposed that bridge the gap between *in vitro* and *in vivo* experiments as a step towards clinical translation.

**Title:** Min-Max Relations for Packing and Covering

**Abstract:** We consider a family M of subsets of a finite set E. A “cover” is a subset of E that intersects every member of the family M. A “packing” is a set of members of M, no two of which intersect. Clearly, the cardinality of a packing is at most that of a cover. We study conditions under which the maximum cardinality of a packing equals the minimum cardinality of a cover. We present recent results obtained jointly with Ahmad Abdi and Dabeen Lee.
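The definitions can be made concrete with a brute-force check on two tiny families, chosen by me as illustrations (edge sets of small graphs): for the edges of a path the min-max relation holds with equality, while for a triangle there is a gap.

```python
from itertools import combinations

def min_cover(E, M):
    """Smallest subset of the ground set E intersecting every member of M."""
    for k in range(len(E) + 1):
        for C in combinations(E, k):
            if all(set(C) & m for m in M):
                return k

def max_packing(M):
    """Largest collection of pairwise-disjoint members of M."""
    for k in range(len(M), 0, -1):
        for P in combinations(M, k):
            if all(not (x & y) for x, y in combinations(P, 2)):
                return k
    return 0

path = [{1, 2}, {2, 3}, {3, 4}]       # edges of a path: min-max relation holds
triangle = [{1, 2}, {2, 3}, {1, 3}]   # edges of a triangle: gap of 1

path_pack, path_cover = max_packing(path), min_cover({1, 2, 3, 4}, path)
tri_pack, tri_cover = max_packing(triangle), min_cover({1, 2, 3}, triangle)
```

Here the path gives packing = cover = 2, while the triangle gives max packing 1 but min cover 2, the classic odd-cycle obstruction to equality.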

**Bio:** Gerard Cornuejols is Professor of Operations Research at Carnegie Mellon University. His research interests are in integer programming and combinatorial optimization. He received the Lanchester Prize twice (1978 and 2015), the Fulkerson Prize (2000), the Dantzig Prize (2009), and the von Neumann Theory Prize (2011).

**Title:** Robust inference with the knockoff filter

**Abstract:** In this talk, I will present ongoing work on the knockoff filter for inference in regression. In a high-dimensional model selection problem, we would like to select relevant features without too many false positives. The knockoff filter provides a tool for model selection by creating knockoff copies of each feature and testing the model selection algorithm for its ability to distinguish true from false covariates, in order to control the false positives. In practice, the modeling assumptions that underlie the construction of the knockoffs may be violated, as we cannot know the exact dependence structure between the various features. Our ongoing work aims to determine and improve the robustness properties of the knockoff framework in this setting. We find that when knockoff features are constructed using estimated feature distributions whose errors are small in a KL-divergence-type measure, the knockoff filter provably controls the false discovery rate at only a slightly higher level. This work is joint with Emmanuel Candès and Richard Samworth.
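For intuition, here is a minimal knockoff-style sketch in the easiest possible setting, independent Gaussian features, where a valid knockoff of each feature is simply an independent copy (my simplification for illustration; the talk concerns the much harder case where the feature distribution must be estimated):

```python
import random

random.seed(7)

n, p, k, q = 500, 40, 10, 0.2            # samples, features, true signals, target FDR
X  = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
Xk = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]  # knockoffs: indep. copies
y  = [sum(0.5 * X[i][j] for j in range(k)) + random.gauss(0, 1) for i in range(n)]

def importance(col):
    """Crude feature importance: |<feature, y>| (a stand-in for a lasso statistic)."""
    return abs(sum(col[i] * y[i] for i in range(n)))

# Knockoff statistic: real importance minus knockoff importance.
W = [importance([X[i][j] for i in range(n)]) - importance([Xk[i][j] for i in range(n)])
     for j in range(p)]

# Knockoff+ threshold: smallest t with (1 + #{W_j <= -t}) / #{W_j >= t} <= q.
threshold = None
for t in sorted(abs(w) for w in W):
    neg = sum(w <= -t for w in W)
    pos = sum(w >= t for w in W)
    if pos > 0 and (1 + neg) / pos <= q:
        threshold = t
        break

selected = [j for j, w in enumerate(W) if threshold is not None and w >= threshold]
```

Null features beat their knockoffs about half the time, so the count of large negative W values estimates the number of false positives among the large positive ones; the threshold rule turns that symmetry into finite-sample FDR control.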


**Title:** Distribution-free prediction: Is conditional inference possible?

**Abstract:** We consider the problem of distribution-free predictive inference, with the goal of producing predictive coverage guarantees that hold conditionally rather than marginally. Existing methods such as conformal prediction offer marginal coverage guarantees, where predictive coverage holds on average over all possible test points, but this is not sufficient for many practical applications where we would like to know that our predictions are valid for a given individual, not merely on average over a population. On the other hand, exact conditional inference guarantees are known to be impossible without imposing assumptions on the underlying distribution. In this work we aim to explore the space in between these two, and examine what types of relaxations of the conditional coverage property would alleviate some of the practical concerns with marginal coverage guarantees while still being possible to achieve in a distribution-free setting.

This is joint work with Emmanuel Candès, Aaditya Ramdas, and Ryan Tibshirani.
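Marginal coverage, the baseline property the talk seeks to strengthen, is easy to demonstrate with split conformal prediction (a generic sketch on synthetic data, not the authors' code):

```python
import math
import random

random.seed(1)

def draw(n):
    """Synthetic regression data: y = x + N(0, 0.1) noise."""
    return [(x, x + random.gauss(0, 0.1)) for x in (random.random() for _ in range(n))]

train, calib, test = draw(200), draw(500), draw(2000)

# Fit any predictor on the training split (here: a simple least-squares line).
mx = sum(x for x, _ in train) / len(train)
my = sum(y for _, y in train) / len(train)
slope = sum((x - mx) * (y - my) for x, y in train) / sum((x - mx) ** 2 for x, _ in train)
predict = lambda x: my + slope * (x - mx)

# Calibrate: (1 - alpha) quantile of absolute residuals, finite-sample corrected.
alpha = 0.1
scores = sorted(abs(y - predict(x)) for x, y in calib)
rank = math.ceil((len(calib) + 1) * (1 - alpha)) - 1   # 0-based index
half_width = scores[rank]

# Marginal coverage on fresh test points: >= 1 - alpha on average,
# but only on average -- not for each individual x, which is the talk's concern.
coverage = sum(abs(y - predict(x)) <= half_width for x, y in test) / len(test)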

**Bio:** TBA

**Title:** “Real-time” optimization under forward rank-dependent processes: time-consistent optimality under probability distortions

**Abstract:** Forward performance processes are defined via time-consistent optimality and incorporate “real-time” incoming information. On the other hand, popular performance criteria – for example, mean-variance optimization, hyperbolic discounting, probability distortions – are by nature time-inconsistent. How to define forward performance criteria in time-inconsistent settings then becomes a challenging problem, both conceptually and technically. In this talk, I will discuss the case of probability distortions and introduce the concept of forward rank-dependent performance processes. Among other results, I will show how forward probability distortions are affected by “real-time” changes in the stochastic environment and also present a striking equivalence between forward rank-dependent criteria and time-monotone forward processes under appropriate measure changes. A byproduct of the work is a novel result on so-called dynamic utilities and on time-inconsistent problems in the classical (backward) setting.
**Speaker:** Lori Brady (JHU Civil Engineering), AMS Seminar, April 18, 2019, Whitehead 304.

**Title:** Uncertainty propagation in mechanics and materials by design based on surrogate model development

**Abstract:** With the onset of advanced manufacturing capabilities and in situ characterization techniques, simultaneous material/structural design is becoming increasingly feasible for maximum structural performance. At the heart of such design processes is the availability of multi-scale mechanics models that incorporate explicit representations of the material (such as microstructural descriptors) and the structure (such as the geometry). A major challenge here is that a full physically based multi-scale model is often computationally infeasible. Surrogate functions that provide a simplified representation of the material offer a much more efficient alternative. Such surrogate functions also enable a quantification of the propagation of uncertainties between scales. While these surrogate functions do increase efficiency, they lead to a number of associated challenges. If the material is represented by a large number of microstructural parameters, then the high dimensionality of the surrogate function requires many samples in order to build an accurate surrogate. Furthermore, some micro-scale behavior, such as sudden damage, can lead to discontinuities in the surrogate function, which makes it difficult to interpolate or collocate the results. This seminar will describe a number of approaches to building surrogates, including cases in which the micro-scale model provides key response values and/or gradients of key response values.
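A minimal version of the surrogate workflow (an illustrative stand-in; the talk's multi-scale mechanics models are far more expensive than the toy function below): fit a cheap polynomial to a few evaluations of an "expensive" model, verify its accuracy, then propagate input uncertainty through the surrogate instead of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

expensive_model = lambda x: np.sin(3 * x)   # stand-in for a costly micro-scale model

# Build the surrogate from a small design of experiments.
x_design = np.linspace(0.0, 1.0, 20)
coeffs = np.polyfit(x_design, expensive_model(x_design), deg=7)
surrogate = lambda x: np.polyval(coeffs, x)

# Accuracy check on a dense grid (smooth response => small surrogate error;
# discontinuous responses, as the abstract notes, would break this step).
x_dense = np.linspace(0.0, 1.0, 200)
max_err = np.max(np.abs(surrogate(x_dense) - expensive_model(x_dense)))

# Uncertainty propagation: push uncertain inputs through the cheap surrogate.
inputs = rng.normal(0.5, 0.05, size=100_000).clip(0.0, 1.0)
out_mean = surrogate(inputs).mean()
true_mean = expensive_model(inputs).mean()   # reference we normally could not afford
```

Because the surrogate is accurate on the input range, statistics of the propagated output (here just the mean) match those of the expensive model at a tiny fraction of the cost.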

**Title:** Modeling Particulate Air Pollution for Inference About Neurodegenerative Effects

**Abstract:** Evidence is accumulating to support a link between chronic air pollution exposures and neurotoxic effects. For instance, EPA's most recent Integrated Science Assessment for particulate matter concluded that the associations between PM2.5 and nervous system effects, including brain inflammation, oxidative stress, reduced cognitive function, and neurodegeneration, are likely causal. We are conducting an epidemiologic cohort study, the Adult Changes in Thought Air Pollution (ACT-AP) study, to determine whether, in an elderly population free of dementia at baseline, long-term air pollution exposure is associated with cognitive decline, incidence of Alzheimer's disease and all-cause dementia, and adverse neuropathological changes in brain tissue. For exposure assessment in this study, we are modeling criteria air pollutants using existing regulatory monitoring data supplemented with measurements from low-cost sensors. One important scientific question we are addressing is whether low-cost sensor data improve our ability to quantify PM2.5 exposure in the Puget Sound. I will discuss our approach and our preliminary conclusions, which suggest that low-cost sensors can improve exposure assessment in epidemiologic cohort studies. I will also describe the innovative mobile monitoring campaign we have just started. We designed this campaign with epidemiologic inference in mind; it will allow us to estimate whether there are adverse effects on the brain associated with infrequently monitored traffic-related pollutants, including ultrafine particles and black carbon.

**Bio:** Dr. Sheppard is Professor and Assistant Chair of Environmental and Occupational Health Sciences and Professor of Biostatistics. Her current research portfolio includes several studies of air pollution exposures and their neurotoxicant effects. She has a Ph.D. in biostatistics. Her methodologic interests center on observational study methods, exposure modeling, and epidemiology, and her applied research focuses on the health effects of occupational and environmental exposures. She is principal investigator of an NIH-funded training grant called Biostatistics, Epidemiologic & Bioinformatics Training in Environmental Health, and of SURE-EH, a project to promote diversity in the environmental health sciences. She leads the biostatistical cores for several projects and collaborates with DEOHS faculty on air pollution cohort studies, identifying the effects of multipollutant exposures, and studying manganese exposures. She is a member of the Epidemiology editorial board, the Health Effects Institute Review Committee, and the EPA Clean Air Scientific Advisory Committee, and has served on several EPA Scientific Advisory Panels, most recently for the Carcinogenic Potential of Glyphosate, and on Science Advisory Board Chemical Assessment Advisory Committees for the Ethylene Oxide Review and for the Toxicological Review of Libby Amphibole Asbestos.

**Title
: **Enter the matrix: interpreting biological systems through matri
x factorization and transfer learning of single cell data

**
Abstract: **Next generation and single cell sequencing have ushered
in an era of big data in biology. These data present an unprecedented op
portunity to learn new mechanisms and ask unasked questions. Matrix facto
rization (MF) techniques can reveal low-dimensional structure from high-di
mensional data to uncover new biological knowledge. The knowledge gained from low-dimensional features in training data can also be transferred
to new datasets to relate disparate model systems and data modalities. We
illustrate the power of these techniques for interpretation of high dimen
sional data through case studies in postmortem tissues from GTEx\, acquire
d therapeutic resistance in cancer\, and developmental biology.
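As a schematic (and unofficial) illustration of the two stages described above\, a basic factorization of a training matrix followed by projection of a new dataset onto the learned factors might look like the following\; `nmf` and `transfer` are hypothetical names\, and the talk's actual methods are more sophisticated:

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Basic multiplicative-update NMF: X (genes x samples) ~= W @ H."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k)) + 0.1
    H = rng.random((k, X.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)   # update sample loadings
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)   # update gene factors
    return W, H

def transfer(W, X_new):
    """Transfer step: project a new dataset onto previously learned
    gene factors W (least squares, clipped to stay nonnegative)."""
    H_new, *_ = np.linalg.lstsq(W, X_new, rcond=None)
    return np.clip(H_new, 0.0, None)
```

The point of the sketch is the division of labor: the factors `W` are learned once on training data\, and new datasets are interpreted in the same low-dimensional coordinate system.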

AMS New Student Picnic will be held in Great Hall at 12pm.

Come out to meet and greet other incoming graduate students and your professors.

AMS Seminar: CANCELLED @ Whitehead 304\, September 12\, 2019

**Title: TBA**

**Abstract: TBA**

**Title
: **Diffeomorphic Learning

**Abstract: **The t
alk introduces a learning paradigm in which the training data is transform
ed by a diffeomorphic transformation before prediction. The learning algor
ithm minimizes a cost function evaluating the prediction error on the tran
sformed training set penalized by the distance between the diffeomorphism
and the identity. The approach borrows ideas from shape analysis\, in the
way diffeomorphisms are estimated for shape and image alignment\, to place
them in a mostly unexplored setting\, estimating\, in particular\, diffeomorphisms in much larger dimensions. After introducing the concept and describing a learning algorithm\, diverse applications will be presented\, mostly with synthetic examples\, demonstrating the potential of the approach.
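In symbols (notation mine\, a sketch of the objective just described rather than the paper's exact formulation):

```latex
\min_{\varphi \in \mathrm{Diff},\; \theta}\;
\sum_{i=1}^{n} \ell\big(f_\theta(\varphi(x_i)),\, y_i\big)
\;+\; \lambda\, d(\mathrm{id}, \varphi)^2
```

where $f_\theta$ is the predictor\, $\ell$ the prediction loss\, and $d(\mathrm{id}, \varphi)$ the distance of the diffeomorphism $\varphi$ from the identity\, e.g. induced by a right-invariant metric as in shape analysis.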

**Title
: **Mobius Registration

**Abstract: **Conforma
l spherical parametrizations of genus-zero surfaces have been explored as
a way to represent surfaces over a canonical domain. This\, in turn\, make
s it possible to establish correspondences between pairs of shapes\, enabl
ing applications like interpolation and detail transfer. However\, one cha
llenge of using these parametrizations is that they are only unique up to
Mobius transformation. As such\, to use these parametrizations for establishing correspondences between shapes\, it is first necessary to register the two spherical parametrizations with respect to Mobius transformations\; that is\, to find the Mobius transformation that best aligns the two spherical maps.

In this talk we will address the problem of Mobius registration by expressing the space of Mobius transformations as the composition of inversions and rotations. We will show that these two classes of transformation are fundamentally different: each spherical parametrization can be canonically normalized to remove inversion ambiguity\, and\, using existing techniques for fast correlation over SO(3)\, pairs of spherical parametrizations can be rotationally aligned. We will consider implications of inversion normalization for the calculation of spherical orbifolds and will conclude by discussing why Mobius registration may not be sufficient.

AMS Seminar: Jim Fill (JHU-AMS) @ Whitehead 304\, October 3\, 2019

**Title
**: Multivariate Pareto Records

**Abstract: **
Consider i.i.d. *d*-dimensional observations with independent coord
inates\, each with (say) the standard Exponential distribution. Say that
the *n*-th observation *sets a (Pareto) record* if it is not
dominated by any of the first *n* – 1 observations. If *k*
is in {1\, …\, *n*}\, say that the *k*-th observation is a
*current record* at time *n* if it sets a record and is not
dominated by any of the next *n* – *k* observations\; and sa
y that the *n*-th observation *breaks the record* set by the
*k*-th observation if the *k*-th observation is a current r
ecord at time *n* – 1 but not at time *n*.
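As a concrete (unofficial) check of these definitions\, a naive quadratic-time scan for record-setting observations could be written as:

```python
def dominates(a, b):
    """True if a dominates b: a >= b in every coordinate and a != b."""
    return all(x >= y for x, y in zip(a, b)) and a != b

def pareto_records(observations):
    """Indices n such that the n-th observation sets a (Pareto) record,
    i.e. is not dominated by any earlier observation. Naive O(n^2) scan;
    the talk's efficient algorithm improves on this."""
    records = []
    for n, obs in enumerate(observations):
        if not any(dominates(prev, obs) for prev in observations[:n]):
            records.append(n)
    return records

# Bivariate toy data: (0, 0) is dominated, (3, 3) dominates everything.
points = [(1, 2), (2, 1), (0, 0), (3, 3)]
print(pareto_records(points))  # -> [0, 1, 3]
```

In the i.i.d. Exponential setting of the abstract\, `observations` would be random draws with independent coordinates.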

We will discuss one or more of the following topics: (i) an efficient algorithm for the simulation of Pareto records\, and its (partial) analysis\; (ii) the location and thickness of the record frontier\; (iii) how the Geometric(1/2) distribution arises in connection with the breaking of bivariate records.

This is joint work with Daniel Q. Naiman.

The Acheson J. Duncan Lecture Series: AMS Seminar: Susan Murphy (Harvard University) @ Maryland 110\, October 10\, 2019

**Title:** Online Experimentation and
Learning Algorithms in a Clinical Trial

**Abstract:**
In this talk we describe two reinforcement learning algorithms we have i
mplemented in a mobile health physical activity trial. These algorithms ar
e designed to tackle two challenges faced by mobile health. The first chal
lenge is that while most treatments delivered by a mobile device have imme
diate nonnegative (hopefully positive) effects\, longer term effects tend
to be negative due to user burden. To address this first challenge\, we add a low-variance proxy for the delay effects to the reward (e.g.\, immediate response) in the learning algorithm. The second challenge is that data on any one individual is very noisy\, making it difficult for the algorithm to learn. To address this challenge\, we pool data across participants.
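A minimal sketch of the two fixes (all names and the weighting are mine\, not the trial's actual algorithm): augment the immediate reward with a burden proxy\, and pool a running estimate across participants:

```python
from collections import defaultdict

def augmented_reward(immediate, burden_proxy, weight=0.5):
    """Immediate response minus a low-variance proxy for the delayed
    negative effects of treatment burden (weight is a tuning choice)."""
    return immediate - weight * burden_proxy

class PooledBandit:
    """Pool observations across participants: one shared running mean
    of the augmented reward per action, rather than one per person."""
    def __init__(self):
        self.n = defaultdict(int)
        self.mean = defaultdict(float)

    def update(self, action, immediate, burden_proxy):
        r = augmented_reward(immediate, burden_proxy)
        self.n[action] += 1
        self.mean[action] += (r - self.mean[action]) / self.n[action]

    def best_action(self, actions):
        return max(actions, key=lambda a: self.mean[a])
```

Pooling trades personalization for variance reduction\, which is exactly the tension the talk addresses.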

**Bio: **Susan Murphy is Professor of Statistics at Harvard U
niversity\, Radcliffe Alumnae Professor at the Radcliffe Institute\, Harva
rd University\, and Professor of Computer Science at the Harvard John A. P
aulson School of Engineering and Applied Sciences. Her lab works on clini
cal trial designs and learning algorithms for developing mobile health pol
icies. She is a 2013 MacArthur Fellow\, a member of the National Academy
of Sciences and the National Academy of Medicine\, both of the US Nationa
l Academies. She is currently President of the Institute of Mathematical
Statistics.

Susan Murphy’s website is http://people.seas.harvard.edu/~samurphy/

AMS Seminar: Zheyu Wang (JHMI) @ Whitehead 304\, October 17\, 2019

**Title
: **Latent variable models for biomarkers of Alzheimer’s disease

**Abstract: **Accumulating evidence suggests that the
initiation of Alzheimer’s disease (AD) pathogenic process precedes the fir
st symptoms by a decade or more. The recognition of this decade-long asymp
tomatic stage has greatly impacted AD research and therapeutic development\, focusing them on the preclinical stage of the AD pathogenic process\, at which time
disease-modifying therapy is more likely to be effective. On the other han
d\, the decade-long preclinical stage imposes a major challenge in investi
gating biomarkers for early AD detection. The challenge is two-fold. First
ly\, the unobservable disease status leads to challenges in evaluating the
potential diagnostic capacity of AD biomarkers. Clinical diagnoses are of
ten used in current evaluation\, but they are known to be error-prone\, es
pecially in the early course of the disease. In addition\, the clinical diagnosis is often based on\, or provides an unfair advantage to\, the current standard tests. Therefore\, clinical diagnoses can mask the prognostic value of a useful bioma
rker\, especially when the biomarker is much more accurate than the standa
rd tests. Since AD pathophysiology has been recognized as a multidimension
al process that involves amyloid deposition\, neurofibrillary tangles\, an
d neurodegeneration among other aspects\, we proposed a latent variable mo
del to study the underlying AD pathophysiology process revealed by multidi
mensional markers. Secondly\, the unobservable disease process also leads
to challenges in understanding the ordering and shape of AD biomarker casc
ade\, which is critical for early detection and therapeutic development ye
t is still under great debate. I will present a work-in-progress that atte
mpts to inform this debate and outline a model that considers continuous l
atent disease progression.
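To make the "unobservable disease status" concrete\, here is a minimal and entirely generic latent-class sketch (not the proposed model\, which is multidimensional and far richer): a two-component Gaussian mixture fit by EM to a single continuous marker\, with the latent component standing in for the unknown status.

```python
import math

def em_two_gaussians(xs, iters=50):
    """Fit a two-component 1D Gaussian mixture by EM; the latent
    component plays the role of the unobserved disease status."""
    mu = [min(xs), max(xs)]      # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            dens = [pi[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: update weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk + 1e-6
    return mu, var, pi
```

The appeal of the latent-variable viewpoint is visible even here: no point is ever labeled\, yet the fitted components recover the two groups.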

**Title
: **Optimal Oil Production and Taxation under Carbon Emission Const
raints.

**Abstract: **We study the optimal extractio
n policy of an oil field as well as the efficient taxation of the revenues
generated in light of various economic restrictions and constraints. Taki
ng into account the fact that the oil price in worldwide commodity markets
fluctuates randomly following global and seasonal macroeconomic parameter
s\, we model the evolution of the oil price as a mean reverting regime-swi
tching jump diffusion process. Moreover\, taking into account the fact tha
t oil producing countries rely on oil sale revenues as well as taxes levie
d on oil companies for a good portion of the revenue side of their budgets
\, we formulate this problem as a differential game where the two players
are the mining company whose aim is to maximize the revenues generated fro
m its extracting activities and the government agency in charge of regulat
ing and taxing natural resources. We prove the existence of a Nash equilib
rium and characterize the value functions of this stochastic differential
game as the unique viscosity solutions of the corresponding Hamilton-Jacobi-Isaacs equations. Furthermore\, optimal extraction and fiscal policies t
hat should be applied when the equilibrium is reached are derived. A numer
ical example is presented to illustrate these results.

**Title
: **Optimal Transport-Based Distances for Metric Space Matching

**Abstract: **I will overview some methods for comparin
g datasets modeled as metric measure spaces (mm-spaces)\, which are compac
t metric spaces endowed with probability measures. The main tool is Gromov
-Wasserstein (GW) distance\, which provides a metric on the collection of
all mm-spaces. The definition of GW distance is inspired by ideas from opt
imal transport and it has fascinating connections to many other areas of m
athematics. I will discuss theoretical results on estimating GW distance u
sing distribution-valued invariants of mm-spaces as well as some work on t
he use of GW distance for practical applications in data science.
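One of the distribution-valued invariants alluded to above is the distribution of pairwise distances\; comparing two such distributions with a one-dimensional Wasserstein distance gives a cheap lower bound on GW. A sketch (mine)\, assuming equal-size point clouds with uniform measure:

```python
import numpy as np

def distance_distribution(X):
    """Sorted pairwise Euclidean distances of a point cloud,
    viewed as a sample from the mm-space's distance distribution."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.sort(D[np.triu_indices(len(X), k=1)])

def flb(X, Y):
    """Lower-bound surrogate for GW: 1D Wasserstein-1 distance between
    the two distance distributions (equal sample counts assumed)."""
    a, b = distance_distribution(X), distance_distribution(Y)
    return np.abs(a - b).mean()
```

Because the invariant depends only on distances\, it is blind to rigid motions\, which is exactly why it lower-bounds a metric defined up to isometry.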

**Title
: **Optimal Singular Value Decomposition for High-dimensional High
-order Data

**Abstract: **High-dimensional high-orde
r data arise in many modern scientific applications including genomics\, b
rain imaging\, and social science. In this talk\, we consider the methods\
, theories\, and computations for tensor singular value decomposition (ten
sor SVD)\, which aims to extract the hidden low-rank structure from high-d
imensional high-order data. First\, comprehensive results are developed on
both the statistical and computational limits for tensor SVD under the ge
neral scenario. This problem exhibits three different phases according to
signal-to-noise ratio (SNR)\, and the minimax-optimal statistical and/or comp
utational results are developed in each of the regimes. In addition\, we c
onsider the sparse tensor singular value decomposition which allows more r
obust estimation under sparsity structural assumptions. A novel sparse ten
sor alternating thresholding algorithm is proposed. Both the optimal theor
etical results and numerical analyses are provided to guarantee the perfor
mance of the proposed procedure.
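As a baseline illustration (the truncated higher-order SVD\, not the optimal procedure developed in the talk)\, tensor SVD extracts one factor per mode by an SVD of each unfolding:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding of a tensor into a (dim_k x rest) matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: leading singular vectors of each unfolding,
    then a core tensor obtained by contracting T with the factors."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors
```

The statistical-versus-computational phase transitions in the talk concern how well such low-rank structure can be recovered from a noisy `T` at a given SNR.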

AMS Seminar: Jonathan Mattingly (Duke University) @ Whitehead 304\, November 7\, 2019

**Title
: **Computational methods for Quantifying Gerrymandering and other
computational statistical mechanics problems.

**Abstract: **I will describe some of the interesting problems which have arisen around the problem of understanding Gerrymandering. It is a high-dimensional sampling problem. I will talk about some basic MCMC schemes and some extensions\, both to interesting global moves and to other generalizations. I will also take a moment to frame the problem and state some open questions.
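For readers unfamiliar with the "basic MCMC schemes" mentioned\, here is a minimal Metropolis sampler on a toy discrete state space (a redistricting chain would propose local or global moves between districting plans instead):

```python
import random

def metropolis(weights, steps, seed=0):
    """Metropolis sampler targeting probabilities proportional to
    `weights` over states 0..len(weights)-1, with uniform proposals."""
    random.seed(seed)
    state = 0
    samples = []
    for _ in range(steps):
        proposal = random.randrange(len(weights))
        # Accept with probability min(1, w(proposal) / w(state)).
        if random.random() < min(1.0, weights[proposal] / weights[state]):
            state = proposal
        samples.append(state)
    return samples
```

In the gerrymandering application the target distribution encodes legal districting criteria\, and the hard part\, as the abstract notes\, is designing moves that mix in such a high-dimensional space.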

The Goldman Distinguished Lecture Series: Jack Edmonds @ Bloomberg 272\, November 12\, 2019

**Title:** Matroids and Optimum Branch
ing Systems

**Abstract:** The Optimum Branching Systems problem (OBS) is: given a directed graph G\, specified root nodes r(i)\, and a cost for each edge of G\, find a least-cost collection of edge-disjoint directed spanning trees in G\, rooted respectively at the nodes r(i)\, i.e.\, r(i)-branchings in G. We describe a polynomial-time algorithm for OBS. The min-cost network flow problem is a special case of OBS\; however\, OBS does not reduce to it. By letting M1 and M2 be certain matroids\, matroid intersection solves OBS\, and a bunch of other combinatorial optimization problems. Matroid intersection is the only approach known for solving OBS. Curiously\, the simplest way known to describe an algorithm for OBS is by matroid intersection for general matroids. Matroid algorithms use subroutine sources of the matroids\, M\, which say when a set is independent in M. The ingredients for solving OBS are in books on combinatorial optimization\, but I’m still trying to get people to do a good computer implementation. It’s clearly possible\, but a bit complicated.

**Bio: **Jack Edmonds is a John von Neumann Theory Prize recipient and one of the creators of the field of combinatorial optimization and polyhedral combinatorics. His 1965 paper “Paths\, Trees and Flowers” was one of the first papers to suggest the possibility of establishing a mathematical theory of efficient combinatorial algorithms. In that paper and in the subsequent paper “Maximum Matching and a Polyhedron with 0-1 Vertices\,” Edmonds gave remarkable polynomial-time algorithms for the construction of maximum matchings. Even more importantly\, these papers showed how a good characterization of the polyhedron associated with a combinatorial optimization problem could lead\, via the duality theory of linear programming\, to the construction of an efficient algorithm for the solution of that problem. In 2014 he was honored as a Distinguished Scientist and inducted into the National Institute of Standards and Technology’s Gallery for his “fundamental contributions in combinatorial optimization\, discrete mathematics\, and the theory of computing.”

edmonds_goldman – **slides**

**Title
: **Matrix Means and a Novel High-Dimensional Shrinkage Phenomenon

**Abstract: **Many statistical settings call for esti
mating a population parameter\, most typically the population mean\, from
a sample of matrices. The most natural estimate of the population mean is
the arithmetic mean\, but there are many other matrix means that may behav
e differently\, especially in high dimensions. Here we consider the matrix
harmonic mean as an alternative to the arithmetic matrix mean. We show th
at in certain high-dimensional regimes\, the harmonic mean yields an impro
vement over the arithmetic mean in estimation error as measured by the ope
rator norm. Counter-intuitively\, studying the asymptotic behavior of thes
e two matrix means in a spiked covariance estimation problem\, we find tha
t this improvement in operator norm error does not imply better recovery o
f the leading eigenvector. We also show that a Rao-Blackwellized version o
f the harmonic mean is equivalent to a linear shrinkage estimator that has
been studied previously in the high-dimensional covariance estimation lit
erature. Simulations complement the theoretical results\, illustrating the
conditions under which the harmonic matrix mean yields an empirically bet
ter estimate.
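The two means under comparison are easy to state (a sketch for invertible\, e.g. positive definite\, inputs\; the Rao-Blackwellized and shrinkage variants in the talk go beyond this):

```python
import numpy as np

def arithmetic_mean(mats):
    """Elementwise average of a list of square matrices."""
    return sum(mats) / len(mats)

def harmonic_mean(mats):
    """Matrix harmonic mean: inverse of the average of the inverses.
    Requires invertible (e.g. positive definite) matrices."""
    inv_avg = sum(np.linalg.inv(m) for m in mats) / len(mats)
    return np.linalg.inv(inv_avg)
```

In the scalar case the harmonic mean never exceeds the arithmetic mean\, and the abstract's point is that the high-dimensional behavior of the two estimators is subtler than this ordering suggests.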


**Title:
**Spatial-temporal modeling of mechanochemistry of cellular process
es

**Abstract: **The central theme of Dr. Jian Liu’s
research is to understand how mechanical actions feed back to biochemical p
athways in cellular processes\, and how such mechanochemical crosstalk amo
ng key cellular players governs spatial-temporal regulation and shapes cel
l functions. He confronts these challenges by bringing together theoretica
l and computational studies\, rooted in statistical mechanics\, with a div
ersity of biological experiments. His seminar will provide an overview of
his current research interest – spanning the fields of cell migration\, ce
ll division\, and membrane trafficking – with a particular emphasis on mem
brane shape-mediated excitability in cellular processes.

**B
iography**: Dr. Jian Liu graduated from Peking University with a B.
S. in chemistry in 2000 and earned his Ph.D. in theoretical chemistry from
the University of California\, Berkeley in 2005. He completed postdoctora
l fellowships at the University of California\, San Diego\, Center for The
oretical Biological Physics from 2005 to 2007 and at the University of Cal
ifornia\, Berkeley\, Department of Molecular and Cell Biology in the labor
atory of George Oster from 2007 to 2009. Dr. Liu joined the NHLBI as a Principal Investigator in 2010. Dr. Liu takes a distinct approach to theoreti
cal biology\, treating cellular systems as discrete functional modules com
prising a set of critical players. This allows both simplification and ret
ention of essential biological features. The larger goal of this modular a
pproach is to allow for processes to be combined at a theoretical level to
reveal the interplay among them in the cell as a whole.

AMS Seminar: Lise-Marie I.G. (University of Maryland) @ Whitehead 304\, December 5\, 2019

**Title
: **Wave propagation in inhomogeneous media: An introduction to Gen
eralized Plane Waves

**Abstract: **Trefftz methods re
ly\, in broad terms\, on the idea of approximating solutions to PDEs using
basis functions which are exact solutions of the Partial Differential Equ
ation (PDE)\, making explicit use of information about the ambient medium.
But wave propagation problems in inhomogeneous media are modeled by PDEs w
ith variable coefficients\, and in general no exact solutions are availabl
e. Generalized Plane Waves (GPWs) are functions that have been introduced\
, in the case of the Helmholtz equation with variable coefficients\, to ad
dress this problem: they are not exact solutions to the PDE but are instea
d constructed locally as high order approximate solutions. We will discuss
the origin\, the construction\, and the properties of GPWs. The construct
ion process introduces a consistency error\, requiring a specific analysis
.
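Schematically (notation mine)\, for the Helmholtz equation with variable coefficient $\kappa^2(x)$\, a GPW is an exponential of a polynomial fitted so that the PDE holds to high order near a point $x_0$:

```latex
\Delta u + \kappa^2(x)\, u = 0, \qquad
\varphi(x) = e^{P(x)}, \qquad
\big(\Delta + \kappa^2(x)\big)\varphi(x) = O\big(|x - x_0|^{q}\big),
```

with $P$ a polynomial and $q$ the desired local approximation order\; the $O(|x - x_0|^q)$ remainder is exactly the consistency error whose analysis the abstract mentions.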

**Title
:** “Measures in geometry: a look at two cases of fruitful interact
ion.”

**Abstract:** “This talk will explore the use o
f measures as a convenient way to represent and analyze the shape of objec
ts\, mathematically and/or numerically. We will specifically focus on two
particular examples of such interactions.

In the first part of the talk\, I will introduce\, in a simple setting\, the so-called length measures associated to planar closed curves. Although the length measure does not fully characterize the underlying shape\, the celebrated Minkowski-Fenchel-Jessen theorem shows that this is the case when restricting to the subclass of convex shapes. This equivalence has been key to the derivation of many important results in the field of convex geometry. We will mention some of these\, such as isoperimetric inequalities\, and discuss several open questions related to length measures for non-convex shapes.

In the second part of the talk\, I will present another class of measures called varifolds\, which allow this time to represent injectively any shape such as curves\, surfaces\, or even submanifolds of any dimension. I will then examine the construction of numerically tractable metrics based on varifolds which can be used to formulate and tackle various problems in shape analysis. We will focus in particular on the problems of compression (or quantization) of varifolds and of diffeomorphic registration between two shapes.”

AMS Weekly Seminar: Andy Feinberg (Bloomberg Distinguished Professor\, Schools of Medicine\, Engineering\, and Public Health\, Johns Hopkins University) @ Whitehead 304\, February 6\, 2020

**Title
:** Cancer is a disease of epigenetic stochasticity

**Abstract:** I proposed in 2006 (Nat Rev Genet) that increased epigenetic stochasticity is a driving force of tumor progression from its origin to metastasis, and would allow rapid selection for tumor cell survival at the expense of the host. This model puts epigenetic instability at the heart of tumor progression and makes it the primary target of cancer mutations. Several recent observations from the laboratory confirm the model and establish mechanisms, including disruption of the epigenome involving blocks of DNA hypomethylation and heterochromatin, and metabolic changes involving the oxidative branch of the pentose phosphate pathway. We have recently developed mathematically rigorous Gibbs-Boltzmann-style epigenetic landscapes incorporating stochasticity and shown their relationship to entropy in information theory. Recent data show that this approach identifies epigenetic *and genetic* drivers of cancer, using acute lymphoblastic leukemia as a model, as well as the close relationship between entropy in cancer and entropy in stem cell reprogramming.
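The link between landscape stochasticity and information-theoretic entropy can be made concrete with a toy Shannon-entropy calculation; the binary per-site methylation model below is a deliberate simplification for illustration, not the laboratory's actual landscape construction:

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Toy model: p_meth is the probability that a CpG site is methylated.
# A site locked at 0 or 1 carries no entropy; maximal epigenetic
# stochasticity occurs at p_meth = 0.5.
def site_entropy(p_meth):
    return shannon_entropy([p_meth, 1.0 - p_meth])

ordered = site_entropy(0.95)     # nearly deterministic methylation
stochastic = site_entropy(0.5)   # maximally stochastic site
```

In this caricature, tumor progression corresponds to sites drifting from the low-entropy regime toward the high-entropy one.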

Optimization Seminar: Prof. Matthias Koeppe (University of California, Davis) @ Whitehead 304
February 10, 2020, 12:00–1:00 PM (Eastern)
https://engineering.jhu.edu/ams/events/optimization-seminar-prof-matthi as-koeppe-university-of-california-davis-whitehead-304/

**Title:** Inverse semigroup theory of cutting planes for integer linear optimization

**Abstract:** MIP practitioners can solve large-scale mixed integer optimization problems to optimality or near-optimality by competent modeling and use of branch-and-cut solvers. This technology was enabled in large part by the revival of Gomory’s classic general-purpose cutting planes, such as the Gomory mixed integer cut. In the theory of such general-purpose cutting planes (valid inequalities), the traditional, finite-dimensional techniques of polyhedral combinatorics are complemented by infinite-dimensional methods: the study of cut-generating functions. In my talk I will introduce the classic Gomory-Johnson model, a universal relaxation of integer programs in the form of a single constraint in infinitely many nonnegative integer variables. The nondominated valid inequalities (cut-generating functions) for this model, “minimal functions”, are characterized by functional inequalities such as subadditivity. Given a minimal function, we are interested in finding improving directions that lead to stronger cuts and eventually to “extreme functions”, which cannot be strengthened further, an analogue of facet-defining inequalities. I will present an inverse semigroup theory for minimal functions, which enables us to obtain a complete description of the space of “improving directions” (perturbations) of a minimal function. This is joint work with Robert Hildebrand and Yuan Zhou, which appeared in IPCO 2019; a full paper is available at https://arxiv.org/abs/1811.06189.
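For concreteness, the classical Gomory mixed-integer (GMI) function is a standard example of a minimal function in the Gomory-Johnson model, and its defining properties can be checked exactly on a rational grid. The choice f = 2/3 and the grid resolution below are arbitrary illustrative values:

```python
from fractions import Fraction

F = Fraction(2, 3)  # fractional part f of the right-hand side (illustrative)

def gmi(x, f=F):
    """Gomory mixed-integer cut-generating function pi for the
    Gomory-Johnson single-row model with right-hand-side fraction f."""
    xb = x % 1  # fractional part of x
    return xb / f if xb <= f else (1 - xb) / (1 - f)

# Minimality conditions (Gomory-Johnson): pi(0) = 0, subadditivity
# pi(a) + pi(b) >= pi(a + b), and the symmetry pi(x) + pi(f - x) = 1.
grid = [Fraction(k, 60) for k in range(60)]
subadditive = all(gmi(a) + gmi(b) >= gmi(a + b) for a in grid for b in grid)
symmetric = all(gmi(x) + gmi(F - x) == 1 for x in grid)
```

Exact rational arithmetic makes the symmetry check an equality rather than a floating-point approximation.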

**Title:** Edge-Selection Priors for Graphical Models and Applications to Complex Biological Data

**Abstract:** There is now a huge literature on Bayesian methods for variable selection in linear models that use spike-and-slab priors. Such methods, in particular, have been quite successful for applications in a variety of different fields. A parallel methodological development has happened in graphical models, where priors are specified on precision matrices. In this talk I will describe priors for edge selection for the estimation of multiple graphs that may share common features, such as presence/absence of edges or strengths of connections. I will also describe modeling frameworks for non-Gaussian data and discuss computational challenges. I will motivate the development of the models using specific applications from neuroimaging and from studies that use biomedical data.
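As a reminder of what a spike-and-slab prior does, here is a minimal sketch of drawing coefficients from one; the inclusion probability `theta` and slab scale `tau` are arbitrary illustrative values, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_and_slab(p, theta=0.2, tau=2.0):
    """Draw p regression coefficients from a spike-and-slab prior:
    with probability theta a 'slab' N(0, tau^2) draw (variable included),
    otherwise a point-mass 'spike' at zero (variable excluded)."""
    included = rng.random(p) < theta
    beta = np.where(included, rng.normal(0.0, tau, size=p), 0.0)
    return beta, included

beta, included = spike_and_slab(p=10_000)
empirical_inclusion = included.mean()  # should hover near theta
```

In the graphical-model setting described above, the analogous binary indicators sit on the edges (off-diagonal entries of the precision matrix) rather than on regression coefficients.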

AMS Seminar: Daniel Dadush (Centrum Wiskunde & Informatica (CWI), Netherlands) @ Whitehead 304
February 27, 2020, 1:30–2:30 PM (Eastern)
https://engineering.jhu.edu/ams/events/ams-seminar-tba-whitehead-304-8/

**Title:** A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix

**Abstract:** Following the breakthrough work of Tardos in the bit-complexity model, Vavasis and Ye gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) max cx, Ax = b, x >= 0, A in R^{m x n}, Vavasis and Ye developed a primal-dual interior point method using a ‘layered least squares’ (LLS) step, and showed that O(n^{3.5} log(chi(A))) iterations suffice to solve (LP) exactly, where chi(A) is a condition measure controlling the size of solutions to linear systems related to A.

Monteiro and Tsuchiya, noting that the central path is invariant under rescalings of the columns of A and c, asked whether there exists an LP algorithm depending instead on the measure chi*(A), defined as the minimum chi(AD) value achievable by a column rescaling AD of A, and gave strong evidence that this should be the case. We resolve this open question affirmatively.

Our first main contribution is an O(m^2 n^2 + n^3) time algorithm which works on the linear matroid of A to compute a nearly optimal diagonal rescaling D satisfying chi(AD) ≤ n (chi*(A))^3. This algorithm also allows us to approximate the value of chi(A) up to a factor n (chi*(A))^2. As our second main contribution, we develop a scaling-invariant LLS algorithm, together with a refined potential-function-based analysis for LLS algorithms in general. With this analysis, we derive an improved O(n^{2.5} log n log(chi*(A))) iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor n / log n improvement on the iteration complexity bound of the original Vavasis-Ye algorithm.
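A sketch of the condition measure itself: one standard characterization in this literature (assumed here) expresses the measure as a maximum of ||A_B^{-1} A|| over nonsingular bases B, which is computable by brute force for tiny matrices. The matrix and the column rescaling below are invented purely to illustrate how rescaling can shrink the measure:

```python
import itertools
import numpy as np

def chi_bar(A):
    """Condition measure via the basis characterization
    chi(A) = max over nonsingular m x m column submatrices A_B
    of ||A_B^{-1} A||_2. Exponential in n; tiny instances only."""
    m, n = A.shape
    best = 0.0
    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:
            continue  # not a basis
        best = max(best, np.linalg.norm(np.linalg.solve(B, A), 2))
    return best

# A badly scaled third column inflates the measure...
A = np.array([[1.0, 0.0, 1000.0],
              [0.0, 1.0, 1000.0]])
c1 = chi_bar(A)
# ...while a diagonal column rescaling D (as in chi*(A)) repairs it.
D = np.diag([1.0, 1.0, 1.0 / 1000.0])
c2 = chi_bar(A @ D)
```

Here c1 is on the order of the bad column norm, while c2 is a small constant, the gap that chi*(A) and the nearly optimal rescaling algorithm are designed to capture.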

The John C. & Susan S.G. Wierman Lecture Series, AMS Seminar: Doug Dockery (Harvard University) @ Whitehead 304
March 5, 2020, 1:30–2:30 PM (Eastern)
https://engineering.jhu.edu/ams/events/ams-seminar-doug-dockery-harvard-university-whitehead-304/

**Title:** Air Pollution Accountability Studies: Lessons Learned and Future Opportunities

**Abstract:** Policy makers seek tools to quantify the net health benefits of improved air quality or of proposed air quality regulations. The most commonly applied approach is risk assessment, that is, estimating health benefits from expected or observed air quality changes by extrapolating exposure-response functions from existing epidemiologic studies. Accountability studies attempt to validate these assessments based on empirical evidence of the effects of regulatory actions, interventions, or “natural” experiments on air pollution and health. Accountability studies are appealing in that they are the closest epidemiologic equivalent to controlled experimental studies, and thus may provide evidence for causal relationships. Nevertheless, accountability studies must disentangle policy-related changes in air pollution and health from other time-varying factors influencing air pollution and/or health. We will examine the range of study designs used in accountability studies and the challenges faced in these studies.

**Bio:** Dr. Douglas W. Dockery is the John L. Loeb and Frances Lehman Research Professor of Environmental Epidemiology in the Departments of Environmental Health and of Epidemiology at the Harvard T.H. Chan School of Public Health. He was Chair of the Department of Environmental Health (2005-2016) and Director of the Harvard-National Institute of Environmental Health Sciences (NIEHS) Center for Environmental Health Sciences (2008-2019). He received a B.S. in physics from the University of Maryland, an M.S. in meteorology from the Massachusetts Institute of Technology, and an Sc.D. in environmental health from the Harvard School of Public Health. Dr. Dockery has been studying air pollution exposures and their health effects for more than four decades. He served as Principal Investigator of the Harvard Six Cities Study of the Respiratory Health Effects of Respirable Particles and Sulfur Oxides. His recent work includes assessment of the health benefits of air pollution controls. Dr. Dockery has published over two hundred peer-reviewed articles. His 1993 New England Journal of Medicine paper on air pollution and mortality in the Harvard Six Cities study is the single most cited air pollution paper. In 1998, the International Society for Environmental Epidemiology honored him with the first John Goldsmith Award for Outstanding Contributions to the field.

**Title:** Emergent Behavior in Collective Dynamics

**Abstract:** A fascinating aspect of collective dynamics is the self-organization of small scales and their emergence as higher-order patterns: clusters, flocks, tissues, parties. The emergence of different patterns can be described in terms of a few fundamental “rules of interactions”. I will discuss recent results on the large-time, large-crowd dynamics driven by anticipation, which tends to align the crowd, while other pairwise interactions keep the crowd together and prevent over-crowding. In particular, I address the question of how short-range interactions lead to the emergence of long-range patterns, comparing different rules of interactions based on geometric vs. topological neighborhoods.
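Alignment dynamics of this flavor can be sketched with a Cucker-Smale-type model. The communication kernel phi(r) = (1 + r^2)^(-beta) and all parameter values below are illustrative assumptions, not the talk's exact system:

```python
import numpy as np

def cucker_smale_step(x, v, dt=0.1, K=1.0, beta=0.25):
    """One explicit Euler step of Cucker-Smale alignment dynamics:
    dv_i/dt = (K/N) * sum_j phi(|x_j - x_i|) (v_j - v_i),
    with communication kernel phi(r) = (1 + r^2)^(-beta)."""
    n = len(x)
    diff_x = x[None, :, :] - x[:, None, :]
    r2 = (diff_x ** 2).sum(-1)
    phi = (1.0 + r2) ** (-beta)
    dv = (K / n) * (phi[:, :, None] * (v[None, :, :] - v[:, None, :])).sum(axis=1)
    return x + dt * v, v + dt * dv

rng = np.random.default_rng(2)
x = rng.normal(size=(50, 2))   # positions of 50 agents in the plane
v = rng.normal(size=(50, 2))   # initial velocities
v_mean0 = v.mean(axis=0).copy()
spread0 = v.std(axis=0).sum()
for _ in range(500):
    x, v = cucker_smale_step(x, v)
spread1 = v.std(axis=0).sum()  # velocity disagreement shrinks: flocking
```

The pairwise terms cancel in the sum, so the mean velocity is conserved while the spread around it decays, a minimal instance of a long-range pattern (a flock) emerging from short-range-weighted interactions.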

Cancelled: AMS Seminar: Kavita Ramanan (Brown University) @ Whitehead 304
March 26, 2020, 1:30–2:30 PM (Eastern)
https://engineering.jhu.edu/ams/events/ams-seminar-kavita-ramanan-brown-university-whitehead-304/

**Title:** TBA

**Abstract:** TBA

**Title:** A Geometric Understanding of Deep Learning

**Abstract:** This work introduces an optimal transportation (OT) view of generative adversarial networks (GANs). Natural datasets have intrinsic patterns, which can be summarized as the manifold distribution principle: the distribution of a class of data is close to a low-dimensional manifold. GANs mainly accomplish two tasks: manifold learning and probability distribution transformation. The latter can be carried out using the classical OT method. From the OT perspective, the generator computes the OT map, while the discriminator computes the Wasserstein distance between the generated data distribution and the real data distribution; both can be reduced to a convex geometric optimization process. Furthermore, OT theory discovers the intrinsic collaborative, instead of competitive, relation between the generator and the discriminator, and the fundamental reason for mode collapse. We also propose a novel generative model, which uses an autoencoder (AE) for manifold learning and the OT map for probability distribution transformation. This AE-OT model improves theoretical rigor and transparency, as well as computational stability and efficiency; in particular, it eliminates mode collapse. The experimental results validate our hypothesis and demonstrate the advantages of our proposed model.
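The two OT objects in this abstract, the OT map and the Wasserstein distance, can be illustrated in one dimension, where for convex costs the optimal map is simply the monotone rearrangement and the distance reduces to comparing sorted samples. The Gaussian toy distributions below stand in for a "latent" and a "data" distribution and are not the paper's actual experiments:

```python
import numpy as np

def ot_map_1d(source, target):
    """In 1-D the optimal transport map for convex costs is the monotone
    rearrangement: the i-th smallest source point maps to the i-th
    smallest target point."""
    t = np.sort(target)
    order = np.argsort(source)
    mapped = np.empty_like(source)
    mapped[order] = t
    return mapped

def wasserstein_p(source, target, p=1):
    """p-Wasserstein distance between equal-size empirical distributions."""
    diff = np.abs(np.sort(source) - np.sort(target))
    return (np.mean(diff ** p)) ** (1.0 / p)

rng = np.random.default_rng(1)
z = rng.normal(0.0, 1.0, size=2000)      # latent "generator input"
data = rng.normal(3.0, 0.5, size=2000)   # observed data distribution
w1 = wasserstein_p(z, data, p=1)         # discriminator's quantity
pushforward = ot_map_1d(z, data)         # generator's quantity
```

Pushing the latent sample through the map reproduces the data sample exactly in this discrete setting; in higher dimensions the same roles are played by convex geometric optimization, as the abstract describes.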

**Title:** Complexity of cutting plane and branch-and-bound algorithms

**Abstract:** We present some results on the theoretical complexity of branch-and-bound (BB) and cutting plane (CP) algorithms for integer programming (linear and nonlinear). We will first give an exposition of connections between these ideas and problems in mathematical logic and proof theory. We will then present recent results that shed some new light on the efficiency of these two methods, with quantitative upper and lower bounds on the power of these methods. The second part of the talk will be based on work done in collaboration with Hongyi Jiang, an AMS Ph.D. student, and Marco Di Summa and Michele Conforti at the University of Padova.
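A minimal sketch of the branch-and-bound idea whose complexity the talk analyzes, on a toy 0/1 knapsack with the LP-relaxation (fractional) bound used for pruning; the instance is a standard textbook example, not one from the talk:

```python
def branch_and_bound_knapsack(values, weights, capacity):
    """Tiny 0/1 knapsack solver illustrating branch-and-bound: branch on
    one variable at a time, prune with the LP-relaxation bound."""
    n = len(values)
    # Sort items by value density so the greedy fractional bound is valid.
    items = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def lp_bound(k, cap):
        """Optimal value of the LP (fractional) relaxation over items[k:]."""
        bound = 0.0
        for i in items[k:]:
            take = min(1.0, cap / weights[i])
            bound += take * values[i]
            cap -= take * weights[i]
            if cap <= 0:
                break
        return bound

    best = {"value": 0, "nodes": 0}

    def recurse(k, cap, value):
        best["nodes"] += 1
        best["value"] = max(best["value"], value)
        if k == n or value + lp_bound(k, cap) <= best["value"]:
            return  # prune: relaxation bound cannot beat the incumbent
        i = items[k]
        if weights[i] <= cap:                       # branch x_i = 1
            recurse(k + 1, cap - weights[i], value + values[i])
        recurse(k + 1, cap, value)                  # branch x_i = 0

    recurse(0, capacity, 0)
    return best["value"], best["nodes"]

opt, nodes = branch_and_bound_knapsack([60, 100, 120], [10, 20, 30], 50)
```

The node counter makes the complexity question tangible: lower bounds of the kind discussed in the talk concern instances where no pruning strategy can keep such counts small.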

Topic: AMS Weekly Seminar
Time: Apr 9, 2020, 01:30 PM Eastern Time (US and Canada)
Join Zoom Meeting: https://wse.zoom.us/j/907100613 (Meeting ID: 907 100 613)

AMS Weekly Seminar w/ Yanxun Xu on Zoom
April 16, 2020, 1:30–2:30 PM (Eastern)
https://engineering.jhu.edu/ams/events/ams-seminar-tba-whitehead-304-21/

**Title:** Decoding When and How to Treat Patients using a Bayesian Probabilistic Reinforcement Learning Approach

**Abstract:** Patients who undergo kidney transplantation are at risk for a number of complications and graft rejection after surgery, which could lead to death. In order to prevent graft rejection, immunosuppressive therapy such as tacrolimus is administered to patients post-surgery. The patients are monitored over time with repeated follow-up records (e.g., tacrolimus blood levels, creatinine levels, BMI) after transplantation, and the dosage levels of the immunosuppressive drugs can be adjusted by the clinician. Based on patients’ baseline information and the follow-up data, we develop a Bayesian probabilistic reinforcement learning framework to construct an optimal longitudinal treatment strategy for each individual by combining a longitudinal model for patients’ creatinine levels, a survival model with the endpoint being patient death or graft failure, and a marked point process for clinical decisions (how often the patient is instructed to follow up, and drug dosage adjustments). Our method shows promising performance on a real kidney transplantation dataset.
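A toy sketch of the Bayesian decision-making flavor of such a framework: Thompson sampling over two hypothetical dose adjustments, with Beta posteriors on binary short-term outcomes. This is a drastic simplification of the longitudinal/survival model in the talk, for illustration only:

```python
import random

random.seed(3)

def thompson_dosing(true_success, n_rounds=5000):
    """Toy Bayesian reinforcement-learning loop (Thompson sampling):
    at each round, sample each option's success probability from its
    Beta posterior, act on the best sample, and update the posterior."""
    alpha = [1.0] * len(true_success)   # Beta posterior: 1 + successes
    beta = [1.0] * len(true_success)    # Beta posterior: 1 + failures
    picks = [0] * len(true_success)
    for _ in range(n_rounds):
        samples = [random.betavariate(a, b) for a, b in zip(alpha, beta)]
        arm = samples.index(max(samples))
        picks[arm] += 1
        if random.random() < true_success[arm]:   # simulated binary outcome
            alpha[arm] += 1
        else:
            beta[arm] += 1
    return picks

picks = thompson_dosing([0.45, 0.65])   # option 1 is truly better
```

The loop concentrates its choices on the genuinely better option while still exploring, the same explore/exploit trade-off the full framework must manage over follow-up schedules and dosages.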

Topic: AMS Weekly Seminar
Time: Apr 16, 2020, 01:30 PM Eastern Time (US and Canada)
Join Zoom Meeting: https://wse.zoom.us/j/907100613 (Meeting ID: 907 100 613)

AMS Weekly Seminar w/ Jesus Arroyo on Zoom
April 23, 2020, 1:30–2:30 PM (Eastern)
https://engineering.jhu.edu/ams/events/ams-seminar-tba-whitehead-304-22/

**Title:** Inference for multiple heterogeneous networks with a common invariant subspace

**Abstract:** The development of models for multiple heterogeneous network data is of critical importance both in statistical network theory and across multiple application domains. Although single-graph inference is well-studied, multiple-graph inference is largely unexplored, in part because of the challenges inherent in appropriately modeling graph differences and yet retaining sufficient model simplicity to render estimation feasible. The common subspace independent-edge (COSIE) multiple random graph model addresses this gap by describing a heterogeneous collection of networks with a shared latent structure on the vertices but potentially different connectivity patterns for each graph. The COSIE model is both flexible enough to account for important graph differences and tractable enough to allow for accurate spectral inference. In both simulated and real data, the model can be deployed for a number of subsequent network inference tasks, including dimensionality reduction, classification, hypothesis testing, and community detection.
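The shared-subspace idea can be sketched in the spirit of multiple adjacency spectral embedding: embed each graph spectrally, concatenate the embeddings, and extract leading singular vectors. The two-block stochastic block models and all parameters below are invented for illustration, not the COSIE paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(4)

def common_subspace(adjacencies, d):
    """Estimate a shared invariant subspace across graphs: spectral
    embedding per graph, then leading left singular vectors of the
    concatenated embeddings."""
    embeddings = []
    for A in adjacencies:
        vals, vecs = np.linalg.eigh(A)
        top = np.argsort(np.abs(vals))[-d:]              # dominant eigenpairs
        embeddings.append(vecs[:, top] * np.sqrt(np.abs(vals[top])))
    V, _, _ = np.linalg.svd(np.hstack(embeddings), full_matrices=False)
    return V[:, :d]

n = 100
z = np.repeat([0, 1], n // 2)  # shared 2-block community structure

def sbm(p_in, p_out):
    """Symmetric 2-block stochastic block model adjacency matrix."""
    P = np.where(z[:, None] == z[None, :], p_in, p_out)
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)
    return A + A.T

# Same subspace, heterogeneous connectivity strengths across the two graphs.
V = common_subspace([sbm(0.6, 0.1), sbm(0.4, 0.05)], d=2)
B = np.stack([(z == 0) / np.sqrt(n // 2), (z == 1) / np.sqrt(n // 2)], axis=1)
alignment = np.linalg.svd(V.T @ B, compute_uv=False)  # cosines of principal angles
```

Singular values of V^T B near 1 mean the estimated subspace aligns with the true block-indicator subspace, even though the two graphs have different edge densities.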

Topic: AMS Weekly Seminar, 4/23
Time: Apr 23, 2020, 01:30 PM Eastern Time (US and Canada)
Join Zoom Meeting: https://wse.zoom.us/j/93141026679 (Meeting ID: 931 4102 6679)

AMS Weekly Zoom Seminar w/ Tom Woolf (School of Medicine, Department of Physiology)
April 30, 2020, 1:30–2:30 PM (Eastern)
https://engineering.jhu.edu/ams/events/ams-weekly-seminar-w-tom-woolf/

**Title:** Circadian Event Streams: Initial Models for Prediction and Control

**Abstract:** Our daily habits can have long-term health consequences. We are evaluating a simple idea for health: eating within a shorter interval of time (for example, 6-8 hours) each day rather than the often-found 10-14 hours of eating. To evaluate this idea, with funding from the AHA, we have developed an app (Daily24) and have collected daily times for meals and sleep from more than 500 individuals over six months. This event data can be interpreted as repeated samples from each individual’s circadian day, as an event stream. If multiple days (like the movie Groundhog Day) are essentially similar realizations of each person’s habits, then we can build up a circadian event model to summarize the data for each individual. We are currently evaluating a simple mixed-component model, Gaussian process models, and state space models. With these models in place, we can imagine building a dynamic Hawkes graph to help evaluate circadian-based decisions for optimizing health by providing predictive feedback on choices for the daily habits involved with eating and sleeping.


**It is a pleasure for me to kick off this semester’s AMS seminar series this coming Thursday\, Sept 3 at 1:30pm**. This first seminar is going to be a meet-and-greet\, with a few announcements about how the seminar is going to run this semester in a fully online manner. We do intend to have speakers virtually visit Hopkins and even meet and discuss with people. It will just be in a different format. More details on Thursday!

The following is the passcode-protected link for you to access the Zoom meeting. This is a recurring meeting\, so the same link should be used every Thursday this Fall. For students\, the following information should also be available from the Blackboard page for the Department seminar EN.553.801.01.F20.

https://wse.zoom.us/j/98200438645?pwd=d3M3WEljc0sxd3BRQldUU3dudzhvdz09

In case it does not work\, please use the following information:

Meeting ID: 982 0043 8645

Passcode: 374212

To avoid instances of zoom-bombing\, please do not share the link above with anyone else.

**Important:** We still have not been able to fill our Sept 10 slot for the seminar. So if one of you can save the day and give a cool scientific talk to kick us off next week\, that would be awesome. Please email me if you are interested and available.

See you all this Thursday at 1:30pm!

\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-28002@engineering.jhu.edu/ams DTSTAMP:20210419T013319Z CATEGORIES: CONTACT: DESCRIPTION:Title: Numerical tolerance for spectral decompositions of rando m matrices\n \nAbstract: The computation of parametric estimates often inv olves iterative numerical approximations\, which introduce numerical error . But when these estimates depend on random observations\, they necessaril y involve statistical error as well. Thus the common approach of minimizin g numerical error without accounting for inherent statistical error can be both costly and wasteful\, since it results in no improvement to the esti mator’s accuracy. We quantify this tradeoff between numerical and statisti cal error in a problem of estimating the eigendecomposition for the mean o f a random matrix from its observed value\, and show that one can save sig nificant computation by terminating the iterative procedure early\, with n o loss of accuracy. We demonstrate this in a setting of estimating the lat ent positions of a random network from the observed adjacency matrix\, on real and simulated data.\n \nYour cloud recording is now available.\nTopic : AMS Department Seminar (Fall 2020)\nDate: Sep 10\, 2020 12:18 PM Eastern Time (US and Canada)\nFor host only\, click here to view your recording ( Viewers cannot access this page):\nhttps://wse.zoom.us/recording/detail?me eting_id=o3QrttwgRpWP7tUxvIeD0g%3D%3D\nShare recording with viewers:\nhttp s://wse.zoom.us/rec/share/4AKeQRT7O46d3cCsr-82-YqVzqfi58sHJ42n-zFBIQscU7jF BSIzNelTMzVA7GXP.IR-GocHrS2lpCmpH Passcode: L+58iB^b DTSTART;TZID=America/New_York:20200910T133000 DTEND;TZID=America/New_York:20200910T143000 SEQUENCE:0 SUMMARY:AMS Weekly Seminar w/ Zachary Lubberts (AMS) on Zoom URL:https://engineering.jhu.edu/ams/events/ams-weekly-seminar-w-zachary-lub berts-ams-on-zoom/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n

**Title**: Numerical tolerance for spectral decompositions of random matrices

**Abstract**: The computation of parametric estimates often involves iterative numerical approximations\, which introduce numerical error. But when these estimates depend on random observations\, they necessarily involve statistical error as well. Thus the common approach of minimizing numerical error without accounting for inherent statistical error can be both costly and wasteful\, since it results in no improvement to the estimator’s accuracy. We quantify this tradeoff between numerical and statistical error in a problem of estimating the eigendecomposition for the mean of a random matrix from its observed value\, and show that one can save significant computation by terminating the iterative procedure early\, with no loss of accuracy. We demonstrate this in a setting of estimating the latent positions of a random network from the observed adjacency matrix\, on real and simulated data.
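As a toy illustration of this tradeoff (not the paper’s algorithm)\, one can stop power iteration on a noisy matrix once the per-step change falls below the scale of the statistical error\; the rank-one signal\, noise model\, and tolerance below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed matrix A = rank-one "mean" P plus symmetric noise.
n = 200
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
lam = 5.0 * np.sqrt(n)                  # signal eigenvalue
P = lam * np.outer(u, u)
G = rng.standard_normal((n, n))
A = P + (G + G.T) / np.sqrt(2)

# Assumed statistical error scale for the leading eigenvector:
# roughly sqrt(n) / lam under this noise model.  Iterating past
# this accuracy cannot improve the estimate of u.
stat_tol = np.sqrt(n) / lam

# Power iteration, terminated early at the statistical scale.
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
for it in range(1000):
    w = A @ v
    w /= np.linalg.norm(w)
    delta = np.linalg.norm(w - v)
    v = w
    if delta < stat_tol:
        break

err = min(np.linalg.norm(v - u), np.linalg.norm(v + u))  # sign ambiguity
print(f"stopped after {it + 1} iterations; error vs. truth {err:.3f}")
```

Further iterations would shrink `delta` but leave `err` essentially unchanged\, which is the point of a statistically informed stopping rule.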


**Title**: Looking Forward to Backward-Looking Rates: A Modeling Framework for Term Rates Replacing LIBOR

**Abstract**: LIBOR and other similar IBOR rates represent the cost of short-term funding among large global banks\, and are the reference rates in millions of financial contracts with a total market exposure worldwide of 400 trillion dollars. Lack of liquidity in the unsecured short-term lending market\, as well as evidence of LIBOR manipulation during the 2007-09 credit crisis\, led regulators to identify new rate benchmarks. In this talk\, we introduce and model the new interest-rate benchmarks and their compounded setting-in-arrears term rates\, which will be replacing IBORs globally. We show that the classic interest-rate modeling framework can be naturally extended to describe the evolution of both the forward-looking (IBOR-like) and backward-looking (setting-in-arrears) term rates using the same stochastic process. We then introduce an extension of the LIBOR Market Model to backward-looking rates. Applications will be presented and numerical examples showcased.
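The compounded setting-in-arrears rates mentioned above follow a standard market convention (as with compounded SOFR): overnight fixings are compounded through the accrual period\, so the term rate is only known at the end\, in arrears. A minimal sketch with hypothetical fixings:

```python
def compounded_in_arrears(fixings, year_basis=360):
    """Backward-looking term rate: compound each overnight fixing over
    the days it applies, then annualize over the whole accrual period.
    The rate is known only 'in arrears', once every fixing is observed."""
    growth, total_days = 1.0, 0
    for rate, days in fixings:          # a Friday fixing applies for 3 days
        growth *= 1.0 + rate * days / year_basis
        total_days += days
    return (growth - 1.0) * year_basis / total_days

# One business week of hypothetical overnight fixings (not real data).
fixings = [(0.0010, 1), (0.0010, 1), (0.0012, 1), (0.0011, 1), (0.0011, 3)]
rate = compounded_in_arrears(fixings)
print(f"compounded setting-in-arrears rate over 7 days: {rate:.4%}")
```

A forward-looking (IBOR-like) rate would instead be fixed at the start of the period\; the compounding formula is what makes the new benchmarks backward-looking.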

Your cloud recording is now available.

Topic: AMS Department Seminar (Fall 2020)
Date: Sep 17\, 2020 01:18 PM Eastern Time (US and Canada)

For host only\, click here to view your recording (Viewers cannot access this page):
https://wse.zoom.us/recording/detail?meeting_id=VX%2FqA9N%2FQ%2BynIoBw1R9Mzg%3D%3D

Share recording with viewers:
https://wse.zoom.us/rec/share/CBAf80Hb_1ZlYLpz8DoKhdOwx7k9F1zOsmr4EUdXV9LTgmF5TNou-ugp9RkERWlP.bTMc0SwGWnbz4dqY
Passcode: uL5&+@!1

**Title:** Ingredients matter: Quick and easy recipes for estimating clusters\, manifolds\, and epidemics

**Abstract:** Data science resembles the culinary arts in the sense that better ingredients allow for better results. We consider three instances of this phenomenon. First\, we estimate clusters in graphs\, and we find that more signal allows for faster estimation. Here\, “signal” refers to having more edges within planted communities than across communities. Next\, in the context of manifolds\, we find that an informative prior allows for estimates of lower error. In particular\, we apply the prior that the unknown manifold enjoys a large\, unknown symmetry group. Finally\, we consider the problem of estimating parameters in epidemiological models\, where we find that a certain diversity of data allows one to design estimation algorithms with provable guarantees. In this case\, data diversity refers to certain combinatorial features of the social network. Joint work with Jameson Cahill\, Charles Clum\, Hans Parshall\, and Kaiying Xie.
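The planted-community setting in the first part can be sketched with a standard two-block stochastic block model\, where the gap p - q is the “signal” that lets a spectral method recover the communities\; the sizes and probabilities below are illustrative assumptions\, not the talk’s regime:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two planted communities of size n: edge probability p inside a
# community, q across communities (p > q is the "signal").
n, p, q = 150, 0.30, 0.05
labels = np.repeat([0, 1], n)
probs = np.where(labels[:, None] == labels[None, :], p, q)
A = (rng.random((2 * n, 2 * n)) < probs).astype(float)
A = np.triu(A, 1)
A = A + A.T                              # symmetric, no self-loops

# The sign pattern of the eigenvector for the second-largest
# eigenvalue of A recovers the planted split.
vals, vecs = np.linalg.eigh(A)           # eigenvalues in ascending order
guess = (vecs[:, -2] > 0).astype(int)

acc = max(np.mean(guess == labels), np.mean(guess != labels))  # up to label swap
print(f"recovered {acc:.1%} of the planted community labels")
```

Shrinking p - q toward zero degrades recovery\, which is one concrete sense in which “better ingredients allow for better results.”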

Your cloud recording is now available.

Topic: AMS Department Seminar (Fall 2020)
Date: Sep 24\, 2020 12:59 PM Eastern Time (US and Canada)

For host only\, click here to view your recording (Viewers cannot access this page):
https://wse.zoom.us/recording/detail?meeting_id=D3Hbv%2Fe5QXKcE0FUgQFdVg%3D%3D

Share recording with viewers:
https://wse.zoom.us/rec/share/fChPLSraWeF5AhXKbY0jkOOfv0zAhnX4d6qWeWVa9_Goyup0aLcKi0VETt7T2Wan.xDWyUYFDujlhPvqt
Passcode: 79W*iV@G

**Title**: Learning with entropy-regularized optimal transport

**Abstract**: Entropy-regularized OT (EOT) was first introduced by Cuturi in 2013 as a solution to the computational burden of OT for machine learning problems. In this talk\, after studying the properties of EOT\, we will introduce a new family of losses between probability measures called Sinkhorn Divergences. Based on EOT\, this family of losses actually interpolates between OT (no regularization) and MMD (infinite regularization). We will illustrate these theoretical claims on a set of learning problems formulated as minimizations over the space of measures.
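For reference\, Cuturi’s Sinkhorn iteration for EOT is short enough to state in full\; the grids\, ground cost\, and regularization strength below are illustrative choices:

```python
import numpy as np

def sinkhorn(a, b, C, eps, iters=2000):
    """Entropy-regularized OT via Sinkhorn's algorithm (Cuturi, 2013):
    alternately rescale the Gibbs kernel K = exp(-C/eps) so that the
    coupling's marginals match a and b."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]      # entropic optimal coupling
    return P, float(np.sum(P * C))       # plan and its transport cost

# Two uniform discrete measures on shifted grids, quadratic cost.
m = 50
x = np.linspace(0.0, 1.0, m)
y = x + 0.3
a = np.full(m, 1.0 / m)
b = np.full(m, 1.0 / m)
C = (x[:, None] - y[None, :]) ** 2

P, cost = sinkhorn(a, b, C, eps=0.05)
print(f"EOT cost {cost:.4f} (unregularized OT cost is exactly 0.09 here)")
```

As eps shrinks the cost approaches the unregularized OT value\, and as eps grows the coupling blurs toward the product measure\, which is the interpolation the abstract refers to (the Sinkhorn Divergence additionally debiases EOT with self-transport terms).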

Your cloud recording is now available.

Topic: AMS Department Seminar (Fall 2020)
Date: Oct 1\, 2020 01:21 PM Eastern Time (US and Canada)

For host only\, click here to view your recording (Viewers cannot access this page):
https://wse.zoom.us/recording/detail?meeting_id=zXLatYK3QFieUi0kc9N%2BRA%3D%3D

Share recording with viewers:
https://wse.zoom.us/rec/share/cuYXVU99jAdaLuq4FfIew8x7dxjZ40hORkqQyQpfPCAB_B69q1XeDJmLFw5yuZrb.QIj2wn6azpc4V96E
Passcode: *$xMJcX6

**Title:** Subgraph isomorphism via partial differentiation

**Abstract:** In this talk I will discuss a recent approach to the algorithmic problem of

Part of this talk is based on joint work with Cornelius Brand.

Here is the link and the meeting info:

https://wse.zoom.us/j/98200438645?pwd=d3M3WEljc0sxd3BRQldUU3dudzhvdz09

Meeting ID: 982 0043 8645

Passcode: 374212

\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-28053@engineering.jhu.edu/ams DTSTAMP:20210419T013319Z CATEGORIES: CONTACT: DESCRIPTION:Title: A Geometric Understanding of Deep Learning\nAbstract: Th is work introduces an optimal transportation (OT) view of generative adver sarial networks (GANs). Natural datasets have intrinsic patterns\, which c an be summarized as the manifold distribution principle: the distribution of a class of data is close to a low-dimensional manifold. GANs mainly acc omplish two tasks: manifold learning and probability distribution transfor mation. The latter can be carried out using the classical OT method. From the OT perspective\, the generator computes the OT map\, while the discrim inator computes the Wasserstein distance between the generated data distri bution and the real data distribution\; both can be reduced to a convex ge ometric optimization process. Furthermore\, OT theory discovers the intrin sic collaborative—instead of competitive—relation between the generator an d the discriminator\, and the fundamental reason for mode collapse. We als o propose a novel generative model\, which uses an autoencoder (AE) for ma nifold learning and OT map for probability distribution transformation. Th is AE–OT model improves the theoretical rigor and transparency\, as well a s the computational stability and efficiency\; in particular\, it eliminat es the mode collapse. The experimental results validate our hypothesis\, a nd demonstrate the advantages of our proposed model.\n \nMeeting Recording :\nhttps://wse.zoom.us/rec/share/NmcAgaDnXT0YgkEVAa5vX2TaEDXq28gpdwBxve9QX RXfoi9vlqG_9IyqV8d337Fq.4piGXPQfnZi1oDCI\nAccess Passcode: Fc1=nKmE DTSTART;TZID=America/New_York:20201015T133000 DTEND;TZID=America/New_York:20201015T143000 SEQUENCE:0 SUMMARY:AMS Seminar w/ David Gu (Stony Brook University) on Zoom URL:https://engineering.jhu.edu/ams/events/ams-seminar-w-david-gu-stony-bro ok-on-zoom/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n

**Title:** A Geometric Understanding of Deep Learning

**Abstract:** This work introduces an optimal transportation (OT) view of generative adversarial networks (GANs). Natural datasets have intrinsic patterns\, which can be summarized as the manifold distribution principle: the distribution of a class of data is close to a low-dimensional manifold. GANs mainly accomplish two tasks: manifold learning and probability distribution transformation. The latter can be carried out using the classical OT method. From the OT perspective\, the generator computes the OT map\, while the discriminator computes the Wasserstein distance between the generated data distribution and the real data distribution\; both can be reduced to a convex geometric optimization process. Furthermore\, OT theory discovers the intrinsic collaborative—instead of competitive—relation between the generator and the discriminator\, and the fundamental reason for mode collapse. We also propose a novel generative model\, which uses an autoencoder (AE) for manifold learning and an OT map for probability distribution transformation. This AE–OT model improves the theoretical rigor and transparency\, as well as the computational stability and efficiency\; in particular\, it eliminates mode collapse. The experimental results validate our hypothesis\, and demonstrate the advantages of our proposed model.

Meeting Recording:

https://wse.zoom.us/rec/share/NmcAgaDnXT0YgkEVAa5vX2TaEDXq28gpdwBxve9QXRXfoi9vlqG_9IyqV8d337Fq.4piGXPQfnZi1oDCI

Access Passcode: Fc1=nKmE

\n END:VEVENT BEGIN:VEVENT UID:ai1ec-28057@engineering.jhu.edu/ams DTSTAMP:20210419T013319Z CATEGORIES: CONTACT: DESCRIPTION:Title: The Importance of Being Correlated: Implications of Depe ndence in Joint Spectral Inference across Multiple Networks\nAbstract: Spe ctral inference on multiple networks is a rapidly-developing subfield of g raph statistics. Recent work has demonstrated that joint\, or simultaneous \, spectral embedding of multiple independent network realizations can del iver more accurate estimation than individual spectral decompositions of t hose same networks. Little attention has been paid\, however\, to the netw ork correlation that such joint embedding procedures necessarily induce. I n this paper\, we present a detailed analysis of induced correlation in a {\\em generalized omnibus} embedding for multiple networks. We show that o ur embedding procedure is flexible and robust\, and\, moreover\, we prove a central limit theorem for this embedding and explicitly compute the limi ting covariance. We examine how this covariance can impact inference in a network time series\, and we construct an appropriately calibrated omnibus embedding that can detect changes in real biological networks that previo us embedding procedures could not discern. Our analysis confirms that the effect of induced correlation can be both subtle and transformative\, with import in theory and practice.\n \nYour cloud recording is now available. 
\nTopic: AMS Department Seminar (Fall 2020)\nDate: Oct 29\, 2020 01:18 PM Eastern Time (US and Canada)\nFor host only\, click here to view your reco rding (Viewers cannot access this page):\nhttps://wse.zoom.us/recording/de tail?meeting_id=%2FQgtuDojRnaeVqoi0IWcuw%3D%3D\nShare recording with viewe rs:\nhttps://wse.zoom.us/rec/share/1fETcswYJGsGY6HgXmvs4Xd1EaAI1ThzZceI3Ah mxD6c1g-0dkyxc1QLJ5BJFUd4.bXWPzF3wH4zpoKm6 Passcode: ?a9s6%xR DTSTART;TZID=America/New_York:20201029T133000 DTEND;TZID=America/New_York:20201029T143000 SEQUENCE:0 SUMMARY:AMS Seminar w/ Vince Lyzinski (University of Maryland\, College Pa rk) on Zoom URL:https://engineering.jhu.edu/ams/events/ams-seminar-w-vince-lyzinski-uni versity-of-maryland-on-zoom/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n**Title
: **The Importance of Being Correlated: Implications of Dependence in Joint Spectral Inference across Multiple Networks

**Abstract:** Spectral inference on multiple networks is a rapidly-developing subfield of graph statistics. Recent work has demonstrated that joint\, or simultaneous\, spectral embedding of multiple independent network realizations can deliver more accurate estimation than individual spectral decompositions of those same networks. Little attention has been paid\, however\, to the network correlation that such joint embedding procedures necessarily induce. In this paper\, we present a detailed analysis of induced correlation in a *generalized omnibus* embedding for multiple networks. We show that our embedding procedure is flexible and robust\, and\, moreover\, we prove a central limit theorem for this embedding and explicitly compute the limiting covariance. We examine how this covariance can impact inference in a network time series\, and we construct an appropriately calibrated omnibus embedding that can detect changes in real biological networks that previous embedding procedures could not discern. Our analysis confirms that the effect of induced correlation can be both subtle and transformative\, with import in theory and practice.


**Title:** Trainability and accuracy of artificial neural networks

**Abstract:** The methods and models of machine learning (ML) are rapidly becoming de facto tools for the analysis and interpretation of large data sets. Complex classification tasks such as speech and image recognition\, automatic translation\, decision making\, etc. that were out of reach a decade ago are now routinely performed by computers with a high degree of reliability using (deep) neural networks. These performances suggest that DNNs may approximate high-dimensional functions with controllably small errors\, potentially outperforming standard interpolation methods based e.g. on Galerkin truncation or finite elements that have been the workhorses of scientific computing. In support of this prospect\, in this talk I will present results about the trainability and accuracy of neural networks\, obtained by mapping the parameters of the network to a system of interacting particles relaxing on a potential determined by the loss function. This mapping can be used to prove a dynamical variant of the universal approximation theorem showing that the optimal neural network representation can be attained by (stochastic) gradient descent\, with an approximation error scaling as the inverse of the network size. I will also show how these findings can be used to accelerate the training of networks and optimize their architecture\, using e.g. nonlocal transport involving birth/death processes in parameter space.
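The particle picture can be sketched on a toy problem: each neuron of a wide two-layer network is a particle\, and full-batch gradient descent moves the particle system on a potential set by the loss. The architecture\, target\, and scaling below are illustrative assumptions\, not the speaker’s construction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-layer network f(x) = (1/N) * sum_i a_i * tanh(w_i * x + b_i).
# In the particle picture, neuron i is a particle (a_i, w_i, b_i), and
# gradient descent moves the N-particle system on a potential
# determined by the squared loss.
N = 64                                    # number of neurons ("particles")
X = np.linspace(-2.0, 2.0, 64)            # training grid
Y = np.sin(X)                             # target function
a, w, b = (rng.standard_normal(N) for _ in range(3))

def mse():
    H = np.tanh(w[:, None] * X[None, :] + b[:, None])
    return np.mean((a @ H / N - Y) ** 2)

loss_before = mse()
lr = 0.5                                  # O(1) per-particle step size
for _ in range(3000):
    H = np.tanh(w[:, None] * X[None, :] + b[:, None])
    r = a @ H / N - Y                     # residual on the grid
    # Per-particle updates: the 1/N in f is cancelled here, so each
    # particle moves at an O(1) rate as N grows (mean-field scaling).
    g = a[:, None] * (1.0 - H ** 2) * r[None, :] / len(X)
    a -= lr * (H @ r) / len(X)
    w -= lr * (g @ X)
    b -= lr * g.sum(axis=1)

loss_after = mse()
print(f"mean-squared error: {loss_before:.3f} -> {loss_after:.4f}")
```

With the 1/N output scaling the empirical distribution of particles\, rather than any individual neuron\, is the natural state variable\, which is what makes the mean-field analysis of training possible.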

Meeting recording link:

https://wse.zoom.us/rec/share/WO_nf9zgmnKfPniZsBSzECdAdNBp5wiyMP34tsMNAbb1jgVtgqQAV4YtrJjCGPY7.S1xykYLxepdibbZQ

Passcode: MH!7JDN2

\n END:VEVENT BEGIN:VEVENT UID:ai1ec-28101@engineering.jhu.edu/ams DTSTAMP:20210419T013319Z CATEGORIES: CONTACT: DESCRIPTION:Title: Monte-Carlo methods for high-dimensional problems in qua ntitative finance\nAbstract: Stochastic optimal control has been an effect ive tool for many problems in quantitative finance and financial economics . Although it provides much needed quantitative modeling for such problems \, until recently it has been intractable in high-dimensional settings. Ho wever\, several recent studies report impressive numerical results: Chered ito et al. studied the optimal stopping problem (a problem closely connect ed to pricing American-type options in quantitative finance finale) provid ing tight error bounds and an efficient algorithm in problems in up to 100 dimensions. Buehler et al.\, on the other hand\, consider the problem of hedging and again report results for high-dimensional problems that were intractable. These papers use a Monte Carlo type algorithm combined with d eep neural networks proposed by E. Han and Jentzen. In this talk I will o utline this approach and discuss its properties. Numerical results\, whil e validating the power of the method in high dimensions\, also show the de pendence on the dimension and the size of the training data. This is join t work with Max Reppen of Boston University.\n \nHere is the link and the meeting info:\nhttps://wse.zoom.us/j/98200438645?pwd=d3M3WEljc0sxd3BRQldUU 3dudzhvdz09\nMeeting ID: 982 0043 8645\nPasscode: 374212\nEnjoy. DTSTART;TZID=America/New_York:20201112T133000 DTEND;TZID=America/New_York:20201112T143000 SEQUENCE:0 SUMMARY:AMS Seminar w/ Mete Soner (Princeton University) on Zoom URL:https://engineering.jhu.edu/ams/events/ams-seminar-w-mete-soner-princet on-on-zoom/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n**Title
:** Monte-Carlo methods for high-dimensional problems in quantitative finance

**Abstract:** Stochastic optimal control has been an effective tool for many problems in quantitative finance and financial economics. Although it provides much needed quantitative modeling for such problems\, until recently it has been intractable in high-dimensional settings. However\, several recent studies report impressive numerical results: Cheredito et al. studied the optimal stopping problem (a problem closely connected to pricing American-type options in quantitative finance) providing tight error bounds and an efficient algorithm in problems in up to 100 dimensions. Buehler et al.\, on the other hand\, consider the problem of hedging and again report results for high-dimensional problems that were intractable. These papers use a Monte Carlo type algorithm combined with deep neural networks proposed by E\, Han\, and Jentzen. In this talk I will outline this approach and discuss its properties. Numerical results\, while validating the power of the method in high dimensions\, also show the dependence on the dimension and the size of the training data. This is joint work with Max Reppen of Boston University.

Here is the link and the meeting info:

https://wse.zoom.us/j/98200438645?pwd=d3M3WEljc0sxd3BRQldUU3dudzhvdz09

Meeting ID: 982 0043 8645

Passcode: 374212

Enjoy.

\n END:VEVENT BEGIN:VEVENT UID:ai1ec-28111@engineering.jhu.edu/ams DTSTAMP:20210419T013319Z CATEGORIES: CONTACT: DESCRIPTION:Goldman Lecture 11-19-2020(pdf)\nTitle: Lifting for Simplicity: Concise Descriptions of Convex Sets \nAbstract: A common theme in many ar eas of mathematics is to find a simpler representation of an object indire ctly by expressing it as the projection of an object in some higher-dimens ional space. In 1991 Yannakakis proved a remarkable connection between a lifted representation of a polytope and the nonnegative rank of a matrix a ssociated with the polytope. In recent years\, this idea has been generali zed to cone lifts of convex sets\, with applications in\, and tools coming from\, many areas of mathematics and theoretical computer science. This t alk will survey the central ideas\, results\, and questions in this field. \n\nBio: Rekha Thomas is the Walker Family Endowed Professor of Mathematic s at\nthe University of Washington. She received her Ph.D. in Operations R esearch from Cornell University in 1994 followed by postdoctoral work at Y ale and Berlin. Her research interests are in Optimization and Applied Alg ebraic Geometry.\nCloud recording is now available.\nTopic: AMS Department Seminar (Fall 2020)\nDate: Nov 19\, 2020 01:09 PM Eastern Time (US and Ca nada)\nhttps://wse.zoom.us/rec/share/TOtVoSQbpp6QITONuy0Mqg6bVsfxYrN6BGJxf X2tw_Dho0NPzqBzcMRhmmM4V0hu.mTMckbQU6k0nBYFv\nPasscode: +$0sH0iT \n DTSTART;TZID=America/New_York:20201119T133000 DTEND;TZID=America/New_York:20201119T143000 SEQUENCE:0 SUMMARY:The Goldman Distinguished Lecture Series: Rekha Thomas (University of Washington\, Seattle) on Zoom URL:https://engineering.jhu.edu/ams/events/the-goldman-distinguished-lectur e-series-rekha-thomas-university-of-washington-on-zoom/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://engineering.jhu.edu/ams/wp-content/uploa ds/2020/09/rekhathomas-300x200.jpg\;245\;163\,medium\;https://engineering. 
jhu.edu/ams/wp-content/uploads/2020/09/rekhathomas-300x200.jpg\;245\;163\, large\;https://engineering.jhu.edu/ams/wp-content/uploads/2020/09/rekhatho mas-300x200.jpg\;245\;163\,full\;https://engineering.jhu.edu/ams/wp-conten t/uploads/2020/09/rekhathomas-300x200.jpg\;245\;163 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n**Goldman Lecture 11-19-2020(pdf)**

**Title:** Lifting for Simplicity: Concise Descriptions of Convex Sets

**Abstract:** A common theme in many areas of mathematics is to find a simpler representation of an object indirectly by expressing it as the projection of an object in some higher-dimensional space. In 1991 Yannakakis proved a remarkable connection between a lifted representation of a polytope and the nonnegative rank of a matrix associated with the polytope. In recent years\, this idea has been generalized to cone lifts of convex sets\, with applications in\, and tools coming from\, many areas of mathematics and theoretical computer science. This talk will survey the central ideas\, results\, and questions in this field.
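A classical small example of a lift (a textbook warm-up\, not taken from the lecture) is the ℓ1 ball: in n variables it has 2^n facets\, yet it is the projection of a polytope in 2n variables cut out by only 2n + 1 inequalities\, so linear optimization over it stays cheap in the lifted description. A sketch using scipy:

```python
import numpy as np
from scipy.optimize import linprog

# The cross-polytope {x in R^n : ||x||_1 <= 1} has 2^n facets, but it
# is the projection, via x = u - v, of the lifted polytope
#     {(u, v) : u >= 0, v >= 0, sum(u) + sum(v) <= 1},
# which needs only 2n + 1 inequalities.
n = 20
rng = np.random.default_rng(3)
c = rng.standard_normal(n)

# minimize c . (u - v)  subject to the lifted constraints above
res = linprog(np.concatenate([c, -c]),
              A_ub=np.ones((1, 2 * n)), b_ub=[1.0],
              bounds=[(0, None)] * (2 * n))
x = res.x[:n] - res.x[n:]

# Over the l1 ball the optimum puts all weight on the entry of c with
# the largest magnitude, so the value should equal -max_j |c_j|.
j = int(np.argmax(np.abs(c)))
print(f"optimal value {res.fun:.4f} = -|c_{j}|")
```

An explicit description in the original n variables would need all 2^20 facet inequalities\, which is the gap between a set and its lift that the nonnegative-rank machinery quantifies.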

**Bio:** Rekha Thomas is the Walker Family Endowed Professor of Mathematics at the University of Washington. She received her Ph.D. in Operations Research from Cornell University in 1994 followed by postdoctoral work at Yale and Berlin. Her research interests are in Optimization and Applied Algebraic Geometry.

**Cloud recording is now available.**

Topic: AMS Department Seminar (Fall 2020)
Date: Nov 19\, 2020 01:09 PM Eastern Time (US and Canada)

https://wse.zoom.us/rec/share/TOtVoSQbpp6QITONuy0Mqg6bVsfxYrN6BGJxfX2tw_Dho0NPzqBzcMRhmmM4V0hu.mTMckbQU6k0nBYFv

**Passcode: +$0sH0iT**

< /p>\n END:VEVENT BEGIN:VEVENT UID:ai1ec-28110@engineering.jhu.edu/ams DTSTAMP:20210419T013319Z CATEGORIES: CONTACT: DESCRIPTION:Title: Beyond Mean-field Limits for Large-scale Stochastic Syst ems\nAbstract: Many large-scale stochastic systems that arise as models in a variety of fields including neuroscience\, epidemiology\, physics\, eng ineering and computer science\, can be described in terms of a large colle ction of “locally” interacting Markov chains\, where each particle’s trans ition rates depend only on the states of neighboring particles with respec t to an underlying (possibly random) graph. Since these dynamics are typic ally not amenable to exact analysis\, a common paradigm is to instead stud y a more tractable approximation that is asymptotically exact as the numbe r of particles goes to infinity in order to gain qualitative insight into the system. A frequently used approximation is the mean-field approximatio n\, which works provably well when the interaction graph is sufficiently d ense. However\, it performs quite poorly when the interaction graph is spa rse\, which is the case in many applications. We describe new asymptotical ly accurate approximations that can be developed in the latter setting\, a nd show how they perform in various applications. This is joint work wit h A. Ganguly.\nBio: Kavita Ramanan is the Roland George Dwight Richardson University Professor and Associate Chair at the Division of Applied Mathem atics\, Brown University. Her field of research is probability theory\, st ochastic processes and their applications. 
She has received several honors in recognition of her research\, including a Guggenheim Fellowship\, a Di stinguished Alumni Award from IIT-Bombay\, and the Newton Award from the D epartment of Defense (DoD)\, all in 2020\, a Simons Fellowship in 2018\, a n IMS Medallion in 2015 and the Erlang Prize from the INFORMS Applied Prob ability Society in 2006 for “outstanding contributions to applied probabil ity.” She serves on multiple editorial boards and is an elected fellow o f several societies\, including AAAS\, AMS\, INFORMS\, IMS and SIAM.\nMore information about her can be found at her website:\nhttps://www.brown.edu /academics/applied-mathematics/faculty/kavita-ramanan/home\n \nYour cloud recording is now available.\nhttps://wse.zoom.us/rec/share/Sm8YAbi3gBRLub6 b3VD189QIkJRpo3LCrjCpoF0U-IGJ-jj2qatcKEtlwybSftiQ.MolOusPltsRpYQR8\nPassco de: b#mJ2P+@ DTSTART;TZID=America/New_York:20201203T133000 DTEND;TZID=America/New_York:20201203T143000 SEQUENCE:0 SUMMARY:The Acheson J. Duncan Lecture Series: AMS Seminar: Kavita Ramanan ( Brown University) on Zoom URL:https://engineering.jhu.edu/ams/events/the-acheson-j-duncan-lecture-ser ies-ams-seminar-kavita-ramanan-brown-university-on-zoom/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n

**Title:** Beyond Mean-field Limits for Large-scale Stochastic Systems

**Abstract:** Many large-scale stochastic systems that arise as models in a variety of fields including neuroscience\, epidemiology\, physics\, engineering and computer science\, can be described in terms of a large collection of “locally” interacting Markov chains\, where each particle’s transition rates depend only on the states of neighboring particles with respect to an underlying (possibly random) graph. Since these dynamics are typically not amenable to exact analysis\, a common paradigm is to instead study a more tractable approximation that is asymptotically exact as the number of particles goes to infinity in order to gain qualitative insight into the system. A frequently used approximation is the mean-field approximation\, which works provably well when the interaction graph is sufficiently dense. However\, it performs quite poorly when the interaction graph is sparse\, which is the case in many applications. We describe new asymptotically accurate approximations that can be developed in the latter setting\, and show how they perform in various applications. This is joint work with A. Ganguly.

**Bio:** Kavita Ramanan is the Roland George Dwight Richardson University Professor and Associate Chair at the Division of Applied Mathematics\, Brown University. Her field of research is probability theory\, stochastic processes and their applications. She has received several honors in recognition of her research\, including a Guggenheim Fellowship\, a Distinguished Alumni Award from IIT-Bombay\, and the Newton Award from the Department of Defense (DoD)\, all in 2020\, a Simons Fellowship in 2018\, an IMS Medallion in 2015 and the Erlang Prize from the INFORMS Applied Probability Society in 2006 for “outstanding contributions to applied probability.” She serves on multiple editorial boards and is an elected fellow of several societies\, including AAAS\, AMS\, INFORMS\, IMS and SIAM.

More information about her can be found at her website:

https://www.brown.edu/academics/applied-mathematics/faculty/kavita-ramanan/home

Your cloud recording is now available.

https://wse.zoom.us/rec/share/Sm8YAbi3gBRLub6b3VD189QIkJRpo3LCrjCpoF0U-IGJ-jj2qatcKEtlwybSftiQ.MolOusPltsRpYQR8

Passcode: b#mJ2P+@

Special Seminar – Faculty Candidate Diego Cifuentes
January 8, 2021, 11:00 AM–12:00 PM Eastern
https://engineering.jhu.edu/ams/events/special-seminar-faculty-candidate-diego-cifuentes/

**Title** – Advancing scalable, provable optimization methods in semidefinite & polynomial programs

**Abstract**

Optimization is a broad area with ramifications in many disciplines, including machine learning, control theory, signal processing, robotics, computer vision, power systems, and quantum information. I will talk about some novel algorithmic and theoretical results in two broad classes of optimization problems. The first class of problems are semidefinite programs (SDP). I will present the first polynomial time guarantees for the Burer-Monteiro method, which is widely used for solving large scale SDPs. I will also discuss some general guarantees on the quality of SDP solutions for parameter estimation problems. The second class of problems I will consider are polynomial systems. I will introduce a novel technique for solving polynomial systems that, by taking advantage of graphical structure, is able to outperform existing techniques by orders of magnitude.
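As a toy illustration of the Burer-Monteiro idea (a minimal sketch under my own assumptions, not the speaker's analysis): in the max-cut SDP relaxation, min ⟨C, X⟩ subject to diag(X) = 1 and X ⪰ 0, the PSD matrix X is replaced by a low-rank factorization X = YYᵀ with unit-norm rows of Y, so a large PSD-constrained program becomes an unconstrained problem over a much smaller factor. The function and parameter names below are made up:

```python
import numpy as np

def burer_monteiro_maxcut(C, p=3, steps=500, lr=0.1, seed=0):
    """Toy Burer-Monteiro sketch: substitute X = Y Y^T (Y is n-by-p with
    unit-norm rows, so diag(Y Y^T) = 1) into  min <C, X>  and run projected
    gradient descent on Y instead of solving the full SDP."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    Y = rng.standard_normal((n, p))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)      # feasible start
    for _ in range(steps):
        G = 2 * C @ Y                                  # gradient of <C, YY^T> for symmetric C
        Y = Y - lr * G
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # project rows back to the sphere
    return Y, np.trace(C @ Y @ Y.T)
```

The appeal is that Y has only n·p entries instead of n², which is why the method scales to large SDPs; the talk's contribution concerns when this nonconvex surrogate provably works.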

Special Seminar – Faculty Candidate Ben Grimmer
January 11, 2021, 10:30–11:30 AM Eastern
https://engineering.jhu.edu/ams/events/special-seminar-faculty-candidate-ben-grimmer/

**Title** – The Landscape of the Proximal Point Method for Nonconvex-Nonconcave Minimax Optimization

**Abstract**

Minimax optimization has become a central tool for modern machine learning with applications in generative adversarial networks, robust training, reinforcement learning, etc. These applications are often nonconvex-nonconcave, but the existing theory is unable to identify and deal with the fundamental difficulties this poses. In this talk, we will overcome these limitations, describing the convergence landscape of the classic proximal point method on nonconvex-nonconcave minimax problems. Our key theoretical insight lies in identifying a modified objective, generalizing the Moreau envelope, that smoothes the original objective and convexifies and concavifies it based on the interaction between the minimizing and maximizing variables. When interaction is sufficiently strong, we derive global linear convergence guarantees. When interaction is weak, we derive local linear convergence guarantees under proper initialization. Between these two settings, we show undesirable behaviors like divergence and cycling can occur.
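A standard worked example of why the proximal point method is the natural object of study here (this is the textbook bilinear illustration, not the speaker's modified-objective analysis): on f(x, y) = λxy, simultaneous gradient descent-ascent spirals outward, while the exact proximal point step, which is an implicit update but solvable in closed form for this f, contracts toward the saddle point (0, 0):

```python
import numpy as np

def prox_point_step(x, y, lam=1.0, eta=0.5):
    """Exact proximal point step for min_x max_y f(x, y) = lam * x * y.
    The implicit system  x+ = x - eta*lam*y+,  y+ = y + eta*lam*x+  is
    linear here, so it can be solved in closed form; each step shrinks
    the distance to the saddle point by 1/sqrt(1 + (eta*lam)^2)."""
    det = 1.0 + (eta * lam) ** 2
    x_new = (x - eta * lam * y) / det
    y_new = (eta * lam * x + y) / det
    return x_new, y_new

def gda_step(x, y, lam=1.0, eta=0.5):
    """Simultaneous gradient descent-ascent on the same objective:
    each step *grows* the distance to the saddle by sqrt(1 + (eta*lam)^2)."""
    return x - eta * lam * y, y + eta * lam * x
```

Iterating both from (1, 1) makes the contrast concrete: the proximal point iterates converge linearly while the explicit iterates diverge, which is the convex-concave baseline the talk's nonconvex-nonconcave landscape generalizes.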

**Bio**: Benjamin Grimmer is a PhD student in Operations Research at Cornell University. He received his BS and MS degrees in Computer Science from Illinois Institute of Technology. His research focuses on theoretical foundations of optimization.

Please email Meg Tully – mtully4@jhu.edu for more information

\n END:VEVENT BEGIN:VEVENT UID:ai1ec-28689@engineering.jhu.edu/ams DTSTAMP:20210419T013319Z CATEGORIES: CONTACT: DESCRIPTION:Title – Decentralized stochastic gradient descent and beyond \n Abstract\nStochastic gradient descent (SGD) methods have recently found wi de applications in large-scale data analysis\, especially in machine learn ing. These methods are very attractive to process online streaming data as they scan through the dataset only once but still generate solutions with acceptable accuracy. However\, it is known that classical SGD methods are ineffective in processing streaming data distributed over multi-agent net work systems (e.g.\, sensor and social networks)\, mainly due to the high communication costs incurred by these methods. In this talk\, we present a new class of SGD methods\, referred to as stochastic decentralized commun ication sliding methods\, which can significantly reduce the aforementione d communication costs for decentralized stochastic optimization and machin e learning. We show that these methods can skip inter-node communications while performing SGD iterations. As a result\, they require a substantiall y smaller number of communication rounds than existing decentralized SGD\, while the total number of required stochastic subgradient computations ar e comparable to those optimal bounds achieved by classical centralized SGD type methods. We also develop new variants of these methods that can achi eve graph topology invariant gradient/sampling complexity when the problem is smooth and samples can be stored locally. \nBIO: Guanghui (George) Lan is an associate professor in the H. Milton Stewart School of Industrial a nd Systems Engineering at Georgia Institute of Technology since January 20 16. Dr. Lan was on the faculty of the Department of Industrial and Systems Engineering at the University of Florida from 2009 to 2015\, after earnin g his Ph.D. degree from Georgia Institute of Technology in August 2009. 
Hi s main research interests lie in optimization and machine learning. The ac ademic honors he received include the Mathematical Optimization Society Tu cker Prize Finalist (2012)\, INFORMS Junior Faculty Interest Group Paper C ompetition First Place (2012) and the National Science Foundation CAREER A ward (2013). Dr. Lan serves as an associate editor for Mathematical Progra mming\, SIAM Journal on Optimization and Computational Optimization and Ap plications. He is also an associate director of the Center for Machine Lea rning at Georgia Tech. \nFor Zoom information email Meg Tully – mtully 4@jhu.edu DTSTART;TZID=America/New_York:20210113T100000 DTEND;TZID=America/New_York:20210113T110000 SEQUENCE:0 SUMMARY:Special Seminar – Guanghui (George) Lan URL:https://engineering.jhu.edu/ams/events/special-seminar-guanghui-george- lan/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n**Title**
– Decentralized stochastic gradient descent and beyond

**Abstract**
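The communication-skipping idea can be sketched with a simplified stand-in (local SGD with gossip averaging; this is NOT the speaker's communication-sliding scheme, and every name and parameter here is illustrative): each agent runs several communication-free SGD steps on its own local loss, and only then does a single averaging round with its neighbors, so communication rounds are decoupled from gradient computations.

```python
import numpy as np

def decentralized_local_sgd(As, bs, W, rounds=200, local_steps=5, lr=0.01, seed=0):
    """Simplified illustration of skipping communication: each agent i holds
    local least-squares data (As[i], bs[i]) and a parameter vector X[i].
    Per round: `local_steps` SGD steps with no communication, then one gossip
    averaging step with the doubly stochastic mixing matrix W."""
    rng = np.random.default_rng(seed)
    m = len(As)                        # number of agents
    d = As[0].shape[1]
    X = np.zeros((m, d))               # one parameter vector per agent
    for _ in range(rounds):
        for i in range(m):
            for _ in range(local_steps):            # communication-free SGD steps
                j = rng.integers(As[i].shape[0])    # sample one local data point
                g = (As[i][j] @ X[i] - bs[i][j]) * As[i][j]
                X[i] -= lr * g
        X = W @ X                      # the only communication: one gossip round
    return X
```

With `local_steps` gradient steps per communication, the number of communication rounds drops by that factor; the talk's methods make this trade-off with provable complexity bounds.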

**BIO**: Guanghui (George) Lan has been an associate professor in the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology since January 2016. Dr. Lan was on the faculty of the Department of Industrial and Systems Engineering at the University of Florida from 2009 to 2015, after earning his Ph.D. degree from the Georgia Institute of Technology in August 2009. His main research interests lie in optimization and machine learning. The academic honors he received include Mathematical Optimization Society Tucker Prize Finalist (2012), INFORMS Junior Faculty Interest Group Paper Competition First Place (2012), and the National Science Foundation CAREER Award (2013). Dr. Lan serves as an associate editor for Mathematical Programming, the SIAM Journal on Optimization, and Computational Optimization and Applications. He is also an associate director of the Center for Machine Learning at Georgia Tech.

For Zoom information email Meg Tully – mtully4@jhu.edu

\n END:VEVENT BEGIN:VEVENT UID:ai1ec-28690@engineering.jhu.edu/ams DTSTAMP:20210419T013319Z CATEGORIES: CONTACT: DESCRIPTION:Title – On Complexity of Constrained Nonconvex Optimization\nAb stract\nDeriving complexity guarantees for nonconvex optimization problems are driven by long standing theoretical interests and by their relevance to machine learning and data science. This talk discusses complexity of al gorithms for two important types of constrained nonconvex optimization pro blems: bound-constrained and nonlinear equality constrained optimization. Applications include nonnegative matrix factorization (NMF) and dictionary learning.\n \nFor nonconvex optimization with bound constraints\, we obse rve from the past work that pursuit of the state-of-art complexity guarant ees can compromise the practicality of an algorithm. Therefore\, we propos e two practical projected Newton types of methods with complexity guarante es matching the best known. The first method is a scaled variant of Bertse kas’ two-metric projection method\, with the best complexity guarantee to find an approximate first-order point. The second is a projected Newton-Co njugate Gradient method\, equipped with a competitive complexity guarantee to locate an approximate second-order point with high probability. Prelim inary numerical experiments on NMF indicate practicality of the latter alg orithm.\n \nFor nonconvex optimization with nonlinear equality constraints \, we analyze complexity of the proximal augmented Lagrangian (AL) framewo rk\, in which a Newton-Conjugate-Gradient scheme is used to find approxima te solutions of the subproblems. This scheme has three levels of iteration s\, and we obtain bounds on the number of iterations at each level.\n \nTh ese are joint works with Stephen J. 
Wright.\n \nFor zoom information email Meg Tully – mtully4@jhu.edu DTSTART;TZID=America/New_York:20210114T103000 DTEND;TZID=America/New_York:20210114T113000 SEQUENCE:0 SUMMARY:Special Seminar – Faculty Candidate Yue Xie URL:https://engineering.jhu.edu/ams/events/special-seminar-faculty-candidat e-yue-xie/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n**Title
** – On Complexity of Constrained
Nonconvex Optimization

**Abstract**

Deriving complexity guarantees for nonconvex optimization problems is driven by longstanding theoretical interest and by their relevance to machine learning and data science. This talk discusses the complexity of algorithms for two important types of constrained nonconvex optimization problems: bound-constrained and nonlinear-equality-constrained optimization. Applications include nonnegative matrix factorization (NMF) and dictionary learning.


For nonconvex optimization with bound constraints, we observe from past work that pursuit of the state-of-the-art complexity guarantees can compromise the practicality of an algorithm. Therefore, we propose two practical projected-Newton-type methods with complexity guarantees matching the best known. The first method is a scaled variant of Bertsekas’ two-metric projection method, with the best complexity guarantee to find an approximate first-order point. The second is a projected Newton-Conjugate Gradient method, equipped with a competitive complexity guarantee to locate an approximate second-order point with high probability. Preliminary numerical experiments on NMF indicate the practicality of the latter algorithm.
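The common backbone of both bound-constrained methods is the projection onto the box. A plain projected-gradient baseline (my own minimal sketch of that backbone, not either of the talk's algorithms, which add Newton-type scaling) looks like:

```python
import numpy as np

def projected_gradient_box(grad, x0, lo, hi, lr=0.1, steps=500):
    """Basic projected gradient method for  min f(x)  s.t.  lo <= x <= hi:
    take a gradient step, then clip each coordinate back into the box.
    Two-metric projection and projected Newton-CG refine this same loop
    by replacing the plain gradient step with a (partially) scaled one."""
    x = np.clip(x0, lo, hi)
    for _ in range(steps):
        x = np.clip(x - lr * grad(x), lo, hi)   # gradient step, then project
    return x
```

On a separable quadratic the fixed point is the clipped unconstrained minimizer, which makes the behavior easy to verify by hand.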


For nonconvex optimization with nonlinear equality constraints, we analyze the complexity of the proximal augmented Lagrangian (AL) framework, in which a Newton-Conjugate-Gradient scheme is used to find approximate solutions of the subproblems. This scheme has three levels of iterations, and we obtain bounds on the number of iterations at each level.

These are joint works with Stephen J. Wright.

For Zoom information email Meg Tully – mtully4@jhu.edu

Special Seminar – Faculty Candidate Yunan Yang
January 19, 2021, 10:00–11:00 AM Eastern
https://engineering.jhu.edu/ams/events/special-seminar-faculty-candidate-yunan-yang/

**Title** – Optimal Transport for Inverse Problems and the Implicit Regularization

**Abstract**

Optimal transport has been an interesting topic of mathematical analysis since Monge (1781). The problem’s close connections with differential geometry and kinetic descriptions were discovered within the past century, and the seminal work of Kantorovich (1942) showed its power to solve real-world problems. Recently, we proposed the quadratic Wasserstein distance from optimal transport theory for inverse problems, tackling the classical least-squares method’s longstanding difficulties such as nonconvexity and noise sensitivity. The work was soon adopted in the oil industry. As we advance, we discover that the advantage of changing the data misfit is more general in a broader class of data-fitting problems, by examining the preconditioning and “implicit” regularization effects of different mathematical metrics as the objective function in optimization, as the likelihood function in Bayesian inference, and as the measure of residual in numerical solutions to PDEs.
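For intuition about the metric itself (not the speaker's inverse-problem formulation): in one dimension, the quadratic Wasserstein distance has a closed form through quantile functions, W2(μ, ν)² = ∫₀¹ |F⁻¹(t) − G⁻¹(t)|² dt, which for two equal-size empirical samples reduces to matching sorted values. A sketch:

```python
import numpy as np

def w2_distance_1d(samples_p, samples_q):
    """Quadratic Wasserstein (W2) distance between two 1-D empirical
    distributions with equally many samples: sort both sample sets to
    match quantiles, then W2^2 is the mean squared difference of the
    sorted values."""
    p = np.sort(np.asarray(samples_p, dtype=float))
    q = np.sort(np.asarray(samples_q, dtype=float))
    assert p.shape == q.shape, "equal sample sizes assumed in this sketch"
    return np.sqrt(np.mean((p - q) ** 2))
```

One consequence visible even in this toy: translating a distribution by c changes W2 by exactly |c|, so the misfit varies smoothly with shifts in the data, in contrast with the least-squares misfit, which is insensitive to how far mass has moved.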

Special Seminar – Faculty Candidate Debeen Lee
January 20, 2021, 9:00–10:00 AM Eastern
https://engineering.jhu.edu/ams/events/special-seminar-faculty-candidate-debeen-lee/

**Title** – Data-driven optimization: test score algorithms and distributionally robust approach

**Abstract**

The ever-increasing availability of data has motivated novel optimization models that can incorporate uncertainty in problem parameters. Data-driven optimization frameworks integrate sophisticated statistical estimation parts into optimization frameworks. This often leads to computationally challenging formulations that mandate new technologies to address scalability issues. For combinatorial optimization, algorithms need to be adapted based on parameter estimation schemes and, at the same time, need to provide near-optimal performance guarantees even under uncertain parameters. For mathematical programming models, efficient solution methods are required to deal with the added complexity from applying data-driven frameworks. In this talk, we discuss test score algorithms for stochastic utility maximization and talk about distributionally robust chance-constrained programming.

Test score algorithms are based on carefully designed score metrics to evaluate individual items, called test scores, defined as a statistic of observed individual item performance data. Algorithms based on individual item scores are practical when evaluating different combinations of items is difficult. We show that a natural greedy algorithm that selects items solely based on their test scores outputs solutions within a constant factor of the optimum for a broad class of utility functions. Our algorithms and approximation guarantees assume that test scores are noisy estimates of certain expected values with respect to marginal distributions of individual item values, thus making our algorithms practical.

For the second part of the talk, we consider distributionally robust optimization (DRO) frameworks, which allow interpolating between traditional robust optimization and stochastic optimization, thereby providing a systematic way of hedging against the ambiguity in underlying probability distributions. In particular, we apply the DRO framework defined with the Wasserstein distance to chance-constrained programming (CCP), an optimization paradigm that involves constraints that have to be satisfied with high probability. We develop formulations by revealing hidden connections between the Wasserstein DRO framework and its nominal counterpart (the sample average approximation), and propose integer-programming-based solution methods. Our formulations significantly scale up the problem sizes that can be handled, reducing solution times from hours to seconds compared to the existing formulations.

This talk is based on joint works with Nam Ho-Nguyen, Fatma Kılınç-Karzan, Simge Küçükyavuz, Milan Vojnovic, and Se-young Yun.
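The greedy-by-test-scores idea fits in a few lines. In this sketch the score statistic is a plain sample mean, which is only a placeholder: the talk's test scores are carefully designed statistics tied to the utility function, and the helper name is hypothetical.

```python
import numpy as np

def greedy_by_test_scores(item_samples, k, score=np.mean):
    """Select k items using only per-item test scores (a statistic of each
    item's observed performance samples), never evaluating any combination
    of items. `score=np.mean` is a stand-in for the designed statistics
    that carry the talk's constant-factor guarantee."""
    scores = [score(s) for s in item_samples]
    return sorted(np.argsort(scores)[-k:].tolist())
```

The practical point is the access model: the algorithm touches each item's data once and never queries the (hard-to-evaluate) utility of a set.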

For Zoom information email Meg Tully – mtully4@jhu.edu

Special Seminar – Ethan Fang
January 21, 2021, 10:30–11:30 AM Eastern
https://engineering.jhu.edu/ams/events/special-seminar-ethan-fang/

**Title** – Novel Optimization for Data-Driven Decision Making

**Abstract**

We present several exciting works on data-driven decision making. First, during the crisis of COVID-19, we have seen the importance of clinical trial designs. Improving clinical trial designs is important for the wellness of all human beings. We first present a novel optimization framework for adaptive trial design in the context of personalized medicine. Adaptive enrichment designs involve preplanned rules for modifying enrollment criteria based on accruing data in a randomized trial. We focus on designs where the overall population is partitioned into two predefined subpopulations, e.g., based on a biomarker or risk score measured at baseline for personalized medicine. The goal is to learn which populations benefit from an experimental treatment. Two critical components of adaptive enrichment designs are the decision rule for modifying enrollment and multiple testing procedures. We provide a general framework for simultaneously optimizing these components for two-stage, adaptive enrichment designs through Bayesian optimization. We minimize the expected sample size under constraints on power and the familywise Type I error rate. It is computationally infeasible to directly solve this optimization problem due to its nonconvexity and infinite dimensionality. The key to our approach is a novel, discrete representation of this optimization problem as a sparse linear program, which is large-scale but computationally feasible to solve using modern optimization techniques. Applications of our approach produce new, approximately optimal designs. We then present some other optimal data-driven decision making works on high-dimensional linear contextual bandit and reinforcement learning problems.

Special Seminar – Michael Friedlander
January 22, 2021, 11:00 AM–12:00 PM Eastern
https://engineering.jhu.edu/ams/events/special-seminar-michael-friedlander/

**Title** – Polar deconvolution of mixed signals

**Abstract**

The signal demixing problem seeks to separate multiple signals from their superposition. I will describe a geometric view of the superposition process, and how the duality of convex cones allows us to develop an efficient algorithm for recovering the components with sublinear iteration complexity and linear storage. Under a random measurement model, this process stably recovers low-complexity and incoherent signals with high probability and with optimal sample complexity. This is joint work with my students and postdocs Zhenan Fan, Halyun Jeong, and Babhru Joshi.

AMS Seminar w/ Carla Michini (University of Wisconsin-Madison) on Zoom
February 4, 2021, 1:30–2:30 PM Eastern
https://engineering.jhu.edu/ams/events/28897-2/

**TITLE:** Short simplex paths in lattice polytopes

**ABSTRACT:** We design a simplex algorithm for linear programs on lattice polytopes that traces “short” simplex paths from any given vertex to an optimal one. We consider a lattice polytope P contained in [0, k]^n and defined via ‘m’ linear inequalities. Our first contribution is a simplex algorithm that reaches an optimal vertex by tracing a path along the edges of P of length in O(n^4 k log(n k)). The length of this path is independent of ‘m’ and is only polynomially far from the worst-case diameter, which roughly grows as nk.

Motivated by the fact that most known lattice polytopes are defined via 0,+1,-1 constraint matrices, our second contribution is a more sophisticated simplex algorithm which exploits the largest absolute value of the entries in the constraint matrix, denoted by ‘a’. We show that the length of the simplex path generated by this algorithm is in O(n^2 k log(n k a)). In particular, if the parameter ‘a’ is bounded by a polynomial in n, k, then the length of the simplex path is in O(n^2 k log(n k)). This is a joint work with Alberto Del Pia.


The cloud recording is now available.

Topic: AMS Department Seminar (Spring 2021)
Date: Feb 4, 2021 01:19 PM Eastern Time (US and Canada)

Recording for viewers:

https://wse.zoom.us/rec/share/mr4m196sKhWTcUV4TYKMWcT1MUxgUI7KdFoUhRxxrPBqew_OVKX0X7kG_Lee4jKN.kCmCor9FOsMzTuL5
Passcode: qB4+9D6q

Enjoy.

AMS Seminar w/ Liza Rebrova (University of California, Los Angeles) on Zoom
February 11, 2021, 1:30–2:30 PM Eastern
https://engineering.jhu.edu/ams/events/ams-seminar-w-liza-rebrova-university-of-california-los-angeles-on-zoom/

**Title:** Quantile-based Iterative Methods for Corrupted Systems of Linear Equations

**Abstract:** One of the most ubiquitous problems arising across the sciences is that of solving large-scale systems of linear equations Ax = b. When it is infeasible to solve the system directly by inversion, scalable and efficient projection-based iterative methods can be used instead, such as the Randomized Kaczmarz (RK) algorithm or SGD (optimizing ||Ax − b|| in some norm).

The main goal of my talk is to present versions of these two algorithms, QuantileRK and QuantileSGD, aimed at linear systems with an adversarially corrupted vector b. While the classical approach for noisy (inconsistent) systems is to show that the iterates approach the least-squares solution up to a convergence horizon that depends on the noise size, handling large, sparse, potentially adversarial corruptions requires modifying the algorithm to avoid corruptions rather than tolerate them, and quantiles of the residual provide a natural way to do so. Our methods work with up to 50% incoherent corruptions, and up to 20% adversarial corruptions (those that consistently create an “alternative” solution of the system). Theoretically, under some standard assumptions on the measurement model, both methods converge to the true solution despite corruptions of any size, at exactly the same rate as RK on an uncorrupted system up to an absolute constant. Our theoretical analysis is based on probabilistic concentration-of-measure results, and as an auxiliary random matrix theory result, we prove a non-trivial uniform bound on the smallest singular values of all submatrices of a given matrix. Based on joint work with Jamie Haddock, Deanna Needell, and Will Swartworth.
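As a rough illustration of the quantile idea (a simplified sketch, not the authors' QuantileRK implementation; the toy system, corruption pattern, and all parameter choices below are assumptions made for demonstration):

```python
import numpy as np

def quantile_rk(A, b, iters=2000, q=0.5, seed=0):
    """Simplified quantile-based randomized Kaczmarz sketch: sample a row,
    but only project onto it when its residual lies below the q-quantile
    of all current residuals, so rows whose b-entries look corrupted are
    avoided rather than tolerated."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        res = np.abs(A @ x - b)
        i = rng.integers(m)
        if res[i] <= np.quantile(res, q):      # skip suspicious rows
            a = A[i]
            x -= (a @ x - b[i]) / (a @ a) * a  # usual Kaczmarz projection
    return x

# Toy consistent system with 10% of the entries of b adversarially shifted.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
b[:20] += 10.0
x_hat = quantile_rk(A, b)
# x_hat lands close to x_true despite the corruptions.
```

Recomputing the full residual every step costs O(mn); the point here is only the accept/reject rule, not efficiency.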

The cloud recording is now available (AMS Department Seminar, Spring 2021, Feb 11, 2021):
https://wse.zoom.us/rec/share/Dn4mMX4n4CMPKWUQfXql6Trt4yurYBxPe37OkFowp0Fcxp3MfNEsnczBxr68roig.90xVqiC1mKI-98wl
Passcode: Jx*#6Lko

AMS Seminar w/ Genevera Allen (Rice University) on Zoom, February 18, 2021, 1:30–2:30 PM Eastern

**Title:** Data Integration: Data-Driven Discovery from Diverse Data Sources

**Abstract:** Data integration, or the strategic analysis of multiple sources of data simultaneously, can often lead to discoveries that may be hidden in individual analyses of a single data source. In this talk, we present several new techniques for data integration of mixed, multi-view data in which multiple sets of features, possibly each from a different domain, are measured for the same set of samples. This type of data is common in healthcare, biomedicine, national security, multi-sensor recordings, multi-modal imaging, and online advertising, among others. We specifically highlight how mixed graphical models and new feature selection techniques for mixed, multi-view data allow us to explore relationships amongst features from different domains. Next, we present new frameworks for integrated principal components analysis and integrated generalized convex clustering that leverage diverse data sources to discover joint patterns amongst the samples. We apply these techniques to integrative genomic studies in cancer and neurodegenerative diseases to make scientific discoveries that would not be possible from analysis of a single data set.
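To make the multi-view setup concrete, here is a naive "concatenate and decompose" baseline (a stand-in for, not an implementation of, the integrated PCA framework mentioned in the talk; the data, dimensions, and noise level are made up):

```python
import numpy as np

# Two "views" of the same 50 samples, e.g. features from different domains,
# generated to share a low-dimensional sample pattern.
rng = np.random.default_rng(0)
shared = rng.standard_normal((50, 2))   # joint structure across views
view1 = shared @ rng.standard_normal((2, 20)) + 0.1 * rng.standard_normal((50, 20))
view2 = shared @ rng.standard_normal((2, 8)) + 0.1 * rng.standard_normal((50, 8))

def naive_joint_pca(views, k=2):
    """Standardize each view (so no domain dominates by scale), stack the
    features, and take the top-k principal components as joint sample
    patterns. A crude baseline for the integrated-PCA idea."""
    stacked = np.hstack([(v - v.mean(0)) / v.std(0) for v in views])
    u, s, _ = np.linalg.svd(stacked, full_matrices=False)
    return u[:, :k] * s[:k]

joint_scores = naive_joint_pca([view1, view2])   # one row per sample
```

The integrated methods in the talk go further, modeling per-view structure and mixed data types rather than simply stacking features.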

Here is the new link and meeting ID + passcode:
https://wse.zoom.us/j/91467375713?pwd=VjN3ekZTRFZIWS80NnpwZUFRUzRWUT09
Meeting ID: 914 6737 5713
Passcode: 272254

The John C. & Susan S.G. Wierman Lecture Series, AMS Seminar w/ Dr. Roger Peng (JHU Biostatistics) on Zoom, February 25, 2021, 1:30–2:30 PM Eastern

**Title:** Statistical Approaches to Studying Air Pollution Mixtures and Health

**Abstract:** The control of ambient air quality in the United States has been a major public health success since the passing of the Clean Air Act, with particulate matter (PM) reductions resulting in an estimated 160,000 premature deaths prevented in 2010 alone. Currently, public policy is oriented around lowering the levels of individual pollutants, and this focus has driven the nature of much epidemiological research. Recently, attention has been given to viewing air pollution as a complex mixture and to developing a multi-pollutant approach to controlling ambient concentrations. We discuss current approaches to studying air pollution mixtures and detail their strengths and weaknesses. We also present a new statistical method for estimating the health effects of environmental mixtures using a mixture-altering contrast, which is any comparison, intervention, policy, or natural experiment that changes a mixture’s composition. As a demonstration, we apply this approach to assess the health effects of wildfire particulate matter air pollution in the Western United States.

**Bio:** Dr. Roger D. Peng is a Professor of Biostatistics at the Johns Hopkins Bloomberg School of Public Health, where his research focuses on the development of statistical methods for addressing environmental health problems. He has led some of the largest national studies on the health effects of ambient air pollution in the United States. Dr. Peng is the author of the popular book R Programming for Data Science and 10 other books on data science and statistics. He is also the co-creator of the Johns Hopkins Data Science Specialization, the Simply Statistics blog where he writes about statistics for the public, the Not So Standard Deviations podcast with Hilary Parker, and The Effort Report podcast with Elizabeth Matsui. Dr. Peng is a Fellow of the American Statistical Association and is the recipient of the Mortimer Spiegelman Award from the American Public Health Association, which honors a statistician who has made outstanding contributions to public health.


AMS Seminar w/ Peyman Milanfar (Google) on Zoom, March 4, 2021, 1:30–2:30 PM Eastern

**Title:** Denoising as a Building Block: Form, function, and regularization of inverse problems

**Abstract:** Denoising of images has reached impressive levels of quality: almost as good as we can ever hope. There are thousands of papers on this topic, and their scope is so vast and their approaches so diverse that putting them in some order is both useful and challenging. I will speak about why we should still care deeply about this topic, what we can say about this general class of operators on images, and what makes them so special. Of particular interest is how we can use denoisers as building blocks for broader image processing tasks, including as regularizers for general inverse problems.
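A minimal sketch of the "denoiser as regularizer" idea in plug-and-play style (the moving-average denoiser, step sizes, and toy 1-D problem are all illustrative assumptions, not the talk's method):

```python
import numpy as np

def denoise(x, strength=0.1):
    """Stand-in denoiser: blend with a 3-tap moving average. Any black-box
    denoiser (BM3D, a learned network, ...) could be dropped in here."""
    smoothed = np.convolve(x, np.ones(3) / 3.0, mode="same")
    return (1 - strength) * x + strength * smoothed

def plug_and_play(y, H, steps=200, lr=0.5):
    """Alternate a gradient step on the data term ||Hx - y||^2 with a
    denoising step that acts as the regularizer (an implicit signal prior)."""
    x = np.zeros(H.shape[1])
    for _ in range(steps):
        x = x - lr * H.T @ (H @ x - y)   # data-fidelity step
        x = denoise(x)                   # denoiser as prior
    return x

# Recover a piecewise-constant signal from noisy random measurements.
rng = np.random.default_rng(0)
x_true = np.concatenate([np.zeros(32), np.ones(32)])
H = rng.standard_normal((256, 64)) / np.sqrt(256.0)
y = H @ x_true + 0.01 * rng.standard_normal(256)
x_hat = plug_and_play(y, H)
```

The denoiser never appears as an explicit penalty; its fixed point trades data fit against smoothness, which is exactly the "denoiser as implicit prior" viewpoint.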

**Bio:** Peyman is a Principal Scientist / Director at Google Research, where he leads the Computational Imaging team. Prior to this, he was a Professor of Electrical Engineering at UC Santa Cruz from 1999 to 2014, and Associate Dean for Research at the School of Engineering from 2010 to 2012. From 2012 to 2014 he was on leave at Google-x, where he helped develop the imaging pipeline for Google Glass. Most recently, Peyman’s team at Google developed the digital zoom pipeline for the Pixel phones, which includes the multi-frame super-resolution (“Super Res Zoom”) pipeline and the RAISR upscaling algorithm. In addition, the Night Sight mode on Pixel 3 uses our Super Res Zoom technology to merge images (whether you zoom or not) for vivid shots in low light.

Peyman received his undergraduate education in electrical engineering and mathematics from the University of California, Berkeley, and the MS and PhD degrees in electrical engineering from the Massachusetts Institute of Technology. He holds 15 patents, several of which are commercially licensed. He founded MotionDSP, which was acquired by Cubic Inc. (NYSE:CUB).

Peyman has been a keynote speaker at numerous technical conferences, including the Picture Coding Symposium (PCS), SIAM Imaging Sciences, SPIE, and the International Conference on Multimedia (ICME). Along with his students, he has won several best paper awards from the IEEE Signal Processing Society.

He is a Distinguished Lecturer of the IEEE Signal Processing Society, and a Fellow of the IEEE “for contributions to inverse problems and super-resolution in imaging.”

http://www.milanfar.org

AMS Seminar w/ Sara Del Valle (Los Alamos National Labs) on Zoom, March 11, 2021, 1:30–2:30 PM Eastern

**Title:** Real-time Data Fusion to Guide Disease Forecasting Models

**Abstract:** Globalization has created complex problems that can no longer be adequately understood and mitigated using traditional data analysis techniques and data sources. As such, there is a need to integrate nontraditional data streams and approaches, such as social media and machine learning, to address these new challenges. In this talk, I will discuss how our team is applying approaches from the weather forecasting community, including data collection, assimilating heterogeneous data streams into models, and quantifying uncertainty, to forecast infectious diseases like COVID-19. In addition, I will demonstrate that although epidemic forecasting is still in its infancy, it is a growing field with great potential, and mathematical modeling will play a key role in making this happen.
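A cartoon of the "assimilate observations into a mechanistic model" loop (a toy discrete-time SIR model with a coarse grid search standing in for real data assimilation; all parameter values here are invented):

```python
def sir_trajectory(beta, gamma=0.1, i0=0.01, steps=50):
    """Forward-simulate the infected fraction under a discrete-time SIR model."""
    s, i, r = 1.0 - i0, i0, 0.0
    traj = []
    for _ in range(steps):
        new_inf = beta * s * i                     # new infections this step
        new_rec = gamma * i                        # recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        traj.append(i)
    return traj

# "Observed" infection curve, then recover the transmission rate by
# minimizing squared error over a coarse grid of candidate betas.
observed = sir_trajectory(beta=0.3)
loss = lambda b: sum((x - y) ** 2 for x, y in zip(sir_trajectory(b), observed))
beta_hat = min([0.1, 0.2, 0.3, 0.4, 0.5], key=loss)
```

Real assimilation schemes (e.g. ensemble filters) update model state sequentially as data arrive; the grid search above only conveys the fit-model-to-stream idea.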


AMS Seminar w/ Jelani Nelson (University of California, Berkeley) on Zoom, March 18, 2021, 1:30–2:30 PM Eastern

**Title:** Memory lower bounds for sampling

**Abstract:** Suppose we would like to maintain a (multi)subset S of {1,…,n} dynamically, subject to items being inserted into and deleted from S. Then when a user says “sample()”, we should return a (uniformly) random element of S or, as an easier task, return just some (any) element of S. How much memory is required to accomplish this task? We answer this question by giving an asymptotically optimal lower bound on the memory required.

Joint work with Michael Kapralov, Jakub Pachocki, Zhengyu Wang, David P. Woodruff, and Mobin Yahyazadeh.
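For concreteness, the task being lower-bounded admits this naive, memory-hungry baseline, which simply stores the whole multiset (the lower bound in the talk says how far below this trivial upper bound any data structure can go):

```python
import random

class SampleSet:
    """Maintain a multiset of items under insert/delete and answer sample()
    by storing full counts -- the trivial solution whose memory usage the
    talk's lower bound is measured against."""

    def __init__(self):
        self.counts = {}

    def insert(self, x):
        self.counts[x] = self.counts.get(x, 0) + 1

    def delete(self, x):
        if self.counts[x] == 1:
            del self.counts[x]
        else:
            self.counts[x] -= 1

    def sample(self):
        """Return a uniformly random element of the multiset."""
        population = list(self.counts)
        weights = [self.counts[x] for x in population]
        return random.choices(population, weights=weights)[0]

s = SampleSet()
for item in [3, 3, 7]:
    s.insert(item)
s.delete(3)
```

The interesting question is how much of this bookkeeping can be compressed away while still answering sample() correctly.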

Here is the recording from the seminar:
https://wse.zoom.us/rec/share/XXuuP3BmXqWAExrgjkyYorfi0dYfSS9q1ldhiI_5gGGZAFHnBqyyDqLOO0IOrWwt.PHPHN5296_mER36h
Passcode: T&^!!4=C

AMS Seminar w/ Houman Owhadi (Caltech) on Zoom, March 25, 2021, 1:30–2:30 PM Eastern

**Title:** On learning kernels for numerical approximation and learning

**Abstract:** There is a growing interest in solving numerical approximation problems as learning problems. Popular approaches can be divided into (1) kernel methods, and (2) methods based on variants of Artificial Neural Networks. We illustrate the importance of using adapted kernels in kernel methods and discuss strategies for learning kernels from data. We show how ANN methods can be formulated and analyzed as (1) kernel methods with warping kernels learned from data, and (2) discretized solvers for a generalization of image registration algorithms in which images are replaced by high-dimensional shapes.
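A tiny illustration of why adapted kernels matter (a hypothetical setup, not a method from the talk: selecting a Gaussian-kernel lengthscale for kernel ridge regression by held-out error; the target function and candidate lengthscales are made up):

```python
import numpy as np

def gauss_kernel(x, y, ls):
    """Gaussian (RBF) kernel matrix between two 1-D point sets."""
    return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2.0 * ls ** 2))

def krr_predict(x_tr, y_tr, x_te, ls, reg=1e-8):
    """Kernel ridge regression fit on (x_tr, y_tr), evaluated at x_te."""
    K = gauss_kernel(x_tr, x_tr, ls) + reg * np.eye(len(x_tr))
    return gauss_kernel(x_te, x_tr, ls) @ np.linalg.solve(K, y_tr)

# "Learn" the kernel by scoring candidate lengthscales on held-out points.
x_tr = np.linspace(0.0, 1.0, 20)
y_tr = np.sin(2.0 * np.pi * x_tr)
x_val = np.linspace(0.02, 0.98, 17)
y_val = np.sin(2.0 * np.pi * x_val)
errors = {ls: float(np.mean((krr_predict(x_tr, y_tr, x_val, ls) - y_val) ** 2))
          for ls in (0.005, 0.1, 0.3)}
best_ls = min(errors, key=errors.get)   # the far-too-small lengthscale loses
```

With lengthscale 0.005 the kernel sees every training point as isolated and predicts near zero off the data, while an adapted lengthscale interpolates the target accurately; the talk's strategies learn far richer kernel families than this one-parameter grid.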

Here is the recording for the meeting:
https://wse.zoom.us/rec/share/icMPFsgd_Rsz0AK_w2SyxmXmtZ1LXnbJ7btTerUEERXVsMzqRIyMJ2_KqhD2IMWf.hvESTcAWBjAAwdg6
Passcode: ?tW.6T+%

AMS Seminar w/ Daniel Stein (New York University) on Zoom, April 1, 2021, 1:30–2:30 PM Eastern

**Title:** Nature vs. Nurture in Complex (and Not-So-Complex) Systems

**Abstract:** Understanding the dynamical behavior of many-particle systems following a deep quench is a central issue in both statistical mechanics and complex systems theory. One of the basic questions centers on the issue of predictability: given a system with a random initial state evolving through a well-defined stochastic dynamics, how much of the information contained in the state at future times depends on the initial condition (“nature”) and how much on the dynamical realization (“nurture”)? We discuss this question and present both old and new results for both homogeneous and random systems in low and high dimension.
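As a toy version of the question (zero-temperature single-spin-flip dynamics on an Ising ring; running the same initial state under two independent noise realizations and measuring the overlap separates "nature" from "nurture"; the system size and step count are arbitrary choices):

```python
import random

def zero_temp_quench(spins, steps, rng):
    """Zero-temperature dynamics on a ring: a randomly chosen spin aligns
    with its two neighbors when they agree, and is set by a coin flip
    (the dynamical noise, i.e. 'nurture') when they disagree."""
    s = list(spins)
    n = len(s)
    for _ in range(steps):
        i = rng.randrange(n)
        left, right = s[i - 1], s[(i + 1) % n]   # periodic neighbors
        s[i] = left if left == right else rng.choice((-1, 1))
    return s

n = 200
nature = random.Random(0)
init = [nature.choice((-1, 1)) for _ in range(n)]        # shared "nature"
run_a = zero_temp_quench(init, 20000, random.Random(1))  # two independent
run_b = zero_temp_quench(init, 20000, random.Random(2))  # "nurtures"
overlap = sum(a * b for a, b in zip(run_a, run_b)) / n
```

An overlap near 1 would mean the initial condition largely determines the final state; an overlap near 0 would mean the dynamical realization dominates, which is the quantitative form of the talk's question.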

Starting from next week, I’ll be taking over the seminar hosting duties from Amitabh, who is going on paternity leave. We’ll keep the Zoom link and all other procedures exactly the same as they are now. Amitabh has created a well-oiled machine!

Here is the recording from the seminar above:
https://wse.zoom.us/rec/share/t0mGsIgM5fFxKqOSN-pR4b8YHGfVikbJ7DYP8NUkspDaSo4d3XPFE0gF7RxxtRib.-aaQTeF6uRB2tbQL
Passcode: ^R+Q1=r3

AMS Seminar w/ Davar Khoshnevisan (University of Utah) on Zoom, April 8, 2021, 1:30–2:30 PM Eastern

**Title:** Phase Analysis of a Family of Reaction-Diffusion Equations

**Abstract:** We consider a reaction-diffusion equation driven by multiplicative space-time white noise, for a large class of reaction terms that includes well-known examples such as the Fisher-KPP and Allen-Cahn equations. We prove that, in the “intermittent regime”: (1) if the equation is sufficiently noisy, then the resulting stochastic PDE has a unique invariant measure; and (2) if the equation is in a low-noise regime, then there are infinitely many invariant measures, and the collection of all invariant measures is a line segment in path space. This proves earlier predictions of Zimmerman et al. (2000), discovered first through experiments and computer simulations.
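To see what such an equation looks like numerically, here is an illustrative explicit finite-difference / Euler–Maruyama sweep for a stochastic Fisher-KPP-type equation du = (u_xx + u(1 − u)) dt + σ u dW (a crude sketch, not from the talk; the noise scaling and all parameters are simplified assumptions):

```python
import numpy as np

# Explicit scheme on a periodic grid; dt/dx^2 < 1/2 keeps diffusion stable.
rng = np.random.default_rng(0)
n, dx, dt, sigma, steps = 64, 1.0 / 64, 1e-4, 0.5, 500
u = np.where(np.arange(n) < n // 2, 1.0, 0.0)   # front-like initial data

for _ in range(steps):
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx ** 2
    drift = lap + u * (1.0 - u)                               # KPP reaction
    noise = sigma * u * rng.standard_normal(n) * np.sqrt(dt)  # multiplicative
    u = u + dt * drift + noise
```

Because the noise is multiplicative (proportional to u), regions where u is near zero stay quiet, which is the mechanism behind the intermittent behavior the talk analyzes.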

This is joint work with Carl Mueller (University of Rochester) and Kunwoo Kim (POSTECH).


AMS Seminar w/ Ed Scheinerman (JHU-AMS) on Zoom, April 15, 2021, 1:30–2:30 PM Eastern

**Title:** Finding a Compositional Square Root of Sine

**Abstract:** We consider the following type of problem: given a function g: A → A, find a function f such that g = f ∘ f. We are especially interested in the case sin: ℝ → ℝ, but we consider the problem more broadly, with results for other functions g defined on other sets A. This is joint work with JHU undergraduate Tongtong Chen. And, despite appearances to the contrary, this is a graph theory talk.
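One can get a feel for the problem with formal power series near 0 (a classical approach; the talk's graph-theoretic method is different). Matching f(f(x)) = sin(x) = x − x³/6 + … for f(x) = x + a₂x² + a₃x³ + … forces a₂ = 0 and 2a₃ = −1/6, i.e. a₃ = −1/12:

```python
import math

# Truncated candidate half-iterate of sine near 0: f(x) = x - x^3/12.
# Composing it with itself reproduces sin(x) up to O(x^5) terms.
half_sin = lambda x: x - x ** 3 / 12.0

x = 0.1
err = abs(half_sin(half_sin(x)) - math.sin(x))   # tiny for small x
```

Matching further coefficients refines the local approximation; the harder question the talk addresses is whether such an f exists globally.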


AMS Seminar w/ Jim Gatheral (Baruch College) on Zoom, April 29, 2021, 1:30–2:30 PM Eastern

**Title:** TBA


**Abstract:** TBA
