
Unless stated otherwise, colloquia are scheduled for Thursdays 4:15-5:15pm in WH-100E with refreshments served from 4:00-4:25 pm in WH-102.

Organizers: Vladislav Kargin, Cary Malkiewich, Anton Schick, and Adrian Vasiu

Coming up: Guifang Fu, February 13. Title: A New Statistical Framework to Identify Influential Genetic and Environmental Variables Associated with Shape Variation.

**February 4, 4:40 pm (This is a Monday and a different starting time)**

*Speaker*: **Fangfang Wang** (University of Wisconsin at Madison)

*Topic*: Statistical Modelling of Multivariate Time Series of Counts

*Abstract*:
In this presentation, I will talk about a new parameter-driven model for non-stationary multivariate time series of counts. The mean process is formulated as the product of modulating factors and unobserved stationary processes. The former characterizes the long-run movement in the data, while the latter is responsible for rapid fluctuations and other unknown or unavailable covariates. The unobserved stationary vector process is expressed as a linear combination of possibly low-dimensional factors that govern the contemporaneous and serial correlation within and across the count series. Regression coefficients in the modulating factors are estimated via pseudo maximum likelihood estimation, and identification of the common factor(s) is carried out through eigen-analysis of a positive definite matrix that aggregates the autocovariances of the count series at nonzero lags. The two-step procedure is fast to compute and easy to implement. Appropriateness of the estimation procedure is theoretically justified, and simulation results corroborate the theoretical findings in finite samples. The model is applied to time series data consisting of the numbers of National Science Foundation awards granted to seven research universities from January 2001 to December 2012. The estimated parsimonious and easy-to-interpret factor model provides a useful framework for analyzing the interdependencies across the seven institutions.
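The mean structure described above can be illustrated with a toy simulation. Everything below (the time trend as the sole covariate, the AR(1) latent factor, the parameter values) is an illustrative guess at the model class, not the speaker's actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 144  # monthly observations, e.g. Jan 2001 - Dec 2012

# Modulating factor: long-run movement driven by a covariate (here, time).
beta0, beta1 = 1.0, 0.005
trend = np.exp(beta0 + beta1 * np.arange(T))

# Latent stationary factor: a Gaussian AR(1), exponentiated to stay positive.
phi, sigma = 0.6, 0.3
u = np.zeros(T)
for t in range(1, T):
    u[t] = phi * u[t - 1] + sigma * rng.normal()
latent = np.exp(u - sigma**2 / (2 * (1 - phi**2)))  # roughly unit mean

# Counts: Poisson draws whose mean is the product of the two components.
counts = rng.poisson(trend * latent)
print(counts[:12])
```

The product form makes the trend and the rapidly fluctuating factor separately visible, which is what the two-step estimation exploits.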

**February 7, 4:00 pm (Please note the earlier time)**

*Speaker*: **Yue Zhao** (University of Leuven)

*Topic*: The normal scores estimator for the high-dimensional Gaussian copula model

*Abstract*:
The (semiparametric) Gaussian copula model consists of
distributions that have dependence structure described by Gaussian copulas
but that have arbitrary marginals. A Gaussian copula is in turn
determined by a Euclidean parameter $R$ called the copula correlation
matrix. In this talk we study the normal scores (rank correlation
coefficient) estimator, also known as the van der Waerden coefficient, of
$R$ in high dimensions. It is well known that in fixed dimensions, the
normal scores estimator is the optimal estimator of $R$, i.e., it has the
smallest asymptotic covariance. Curiously though, in high dimensions,
nowadays the preferred estimators of $R$ are usually based on Kendall's
tau or Spearman's rho. We show that the normal scores estimator in fact
remains the optimal estimator of $R$ in high dimensions. More
specifically, we show that the approximate linearity of the normal scores
estimator in the efficient influence function, which in fixed dimensions
implies the optimality of this estimator, holds in high dimensions as
well.
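For readers who want to experiment, here is a minimal sketch of the normal scores estimator. The rank convention $\Phi^{-1}(r/(n+1))$ is the standard one; the function name and the demo (a Gaussian copula with one lognormal margin) are illustrative:

```python
import numpy as np
from statistics import NormalDist

def normal_scores_corr(x):
    """Normal scores (van der Waerden) estimate of the copula correlation.

    Each column of x is replaced by Phi^{-1}(rank / (n + 1)); the sample
    correlation of these scores is invariant to the arbitrary marginals.
    """
    n = x.shape[0]
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1  # 1..n per column
    inv_cdf = np.vectorize(NormalDist().inv_cdf)
    scores = inv_cdf(ranks / (n + 1))
    return np.corrcoef(scores, rowvar=False)

# Demo: Gaussian copula with correlation 0.8, second margin made lognormal.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=2000)
x = np.column_stack([z[:, 0], np.exp(z[:, 1])])
print(normal_scores_corr(x)[0, 1])  # close to 0.8
```

Because only ranks enter the computation, the lognormal distortion of the second margin has no effect on the estimate of $R$.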

**February 8, 4:15pm (This is a Friday)**

*Speaker*: **Hai Shu** (University of Texas MD Anderson Cancer Center)

*Topic*: Extracting Common and Distinctive Signals from High-dimensional Datasets

*Abstract*:
Modern biomedical studies often collect large-scale multi-source/-modal datasets on a common set of objects. A typical approach to the joint analysis of such high-dimensional datasets is to decompose each data matrix into three parts: a low-rank common matrix that captures the shared information across datasets, a low-rank distinctive matrix that characterizes the individual information within the single dataset, and an additive noise matrix. Existing decomposition methods often focus on the orthogonality between the common and distinctive matrices, but inadequately consider a more necessary orthogonal relationship among the distinctive matrices. The latter guarantees that no more shared information is extractable from the distinctive matrices. We propose decomposition-based canonical correlation analysis (D-CCA), a novel decomposition method that defines the common and distinctive matrices in the L2 space of random variables rather than the conventionally used Euclidean space, with a carefully designed orthogonal relationship among the distinctive matrices. The associated estimators of common and distinctive signal matrices are asymptotically consistent and perform better than state-of-the-art methods in both simulated data and the analyses of breast cancer genomic datasets from The Cancer Genome Atlas and motor-task functional MRI data from the Human Connectome Project.

**February 13, 4:15 pm (This is a Wednesday)**

*Speaker*: **Guifang Fu** (Utah State University)

*Topic*: A New Statistical Framework to Identify Influential Genetic and Environmental Variables Associated with Shape Variation

*Abstract*:
The tremendous diversity of shape is widespread in nature and embodies both a response to and a source of evolution and natural selection. Genes are reported to have an important role in controlling phenotypic variation in shape, and many species exhibit morphological plasticity which allows their shape to adapt to environmental cues. In this talk, I will introduce a new statistical framework to quantify the relative importance of all explanatory variables in terms of the strength of their association with shape variation. The shape is inputted as an image and then described as a multivariate vector or a high dimensional curve. There are unique challenges in variable selection for high dimensional data. Additional challenges arise when modeling multiple correlated components as one unit rather than isolating them one by one, which greatly decreases the prediction error. I will introduce a novel Bayesian multivariate variable selection (BMVS) approach that investigates a large-scale candidate pool to identify influential variables associated with the multivariate shape vector. We integrate the estimation of covariance-related parameters and all regression parameters into one framework through a rapidly updating MCMC procedure. The BMVS approach has been proven to satisfy the strong selection consistency property under certain conditions. We use three simulations to demonstrate that the BMVS approach is empirically accurate, robust, and computationally viable. Numerical comparison indicates that BMVS outperforms some existing approaches such as canonical correlation analysis and multivariate Lasso. We apply the BMVS approach to two rice-related GWAS datasets: the first with 3,254 SNPs related to rice shape, and the second with 36,901 SNPs related to three flowering-time phenotypes. The presented BMVS approach is flexible and can be employed in a wide variety of applications. 
At the conclusion of the presentation, I will list several future collaboration opportunities that extend from the shape research.

**February 14, 4:15 pm**

*Speaker*: **Steve Ferry** (Binghamton University)

*Topic*: The “lost tribes” of manifolds (Gromov's joke)

*Abstract*: This work is joint with John Bryant. In his 1994 ICM talk Shmuel Weinberger, inspired by work of Edwards, Quinn, Cannon, and Bryant-Ferry-Mio-Weinberger, conjectured the existence of a new collection of spaces with many of the properties of topological manifolds. The authors have constructed spaces in dimensions $n \ge 6$ satisfying many parts of Weinberger's conjecture. Our spaces are finite dimensional and locally contractible. They have the local and global separation properties of topological manifolds, satisfying Alexander duality both locally and globally. They are homogeneous, meaning that for every $x$ and $y$ in a component of one of these spaces there is a homeomorphism carrying $x$ to $y$. In dimensions $\ge 6$, the h- and s-cobordism theorems hold for these topologically exotic manifolds.

**February 28, 4:15 pm, in the Atrium of Old Champlain Hall**

*Speaker*: **Hermann Nicolai** (Max Planck Institute for Gravitational Physics, Potsdam, Germany)

*Topic*: Symmetry and Unification – can physics be unified into a single formula?

*Abstract*: Attempts to unify the known laws of Nature have a long history. Since the mid-seventies, these attempts have been reinforced with ongoing efforts to reconcile Quantum Mechanics and Einstein's theory of General Relativity into a single unified theory of quantum gravity. In this talk I will review the motivation as well as some more recent developments at an introductory level and discuss prospects for the future.

**April 4, 3:00 pm, LH 9 - Peter Hilton Memorial Lecture and Dean's Speaker Series in Geometry, Geometric Analysis, and Topology**

*Speaker*: **Shmuel Weinberger** (University of Chicago)

*Topic*: How hard is algebraic topology? Between the constructive and the non.

*Abstract*: In algebraic topology one studies geometric problems, and problems of constructing and deforming highly nonlinear functions, by means of algebra. If one knows that two maps are homotopic (i.e. can be deformed to one another) because a certain calculation says they both lie in the trivial group, then what has one learned? (A striking example of this is Smale's turning the sphere inside out, which, after much highly nontrivial effort, can now be seen on YouTube.) The question I shall discuss is how hard it is to understand what the algebraic topologists tell us.

**April 9, 1:15 pm (SPECIAL DAY, PLACE AND TIME), WH-100E, joint with Combinatorics Seminar**

*Speaker*: **Gregory Warrington** (Vermont)

----
==== Fall 2018 ====
**October 18, 4:15 pm**

*Speaker*:

*Abstract*: At liberal arts colleges, with smaller numbers of students taking statistics, offering advanced-level courses can be difficult. Under the Liberal Arts Consortium for Online Learning (LACOL) Upper Level Math Project, I taught an elective course (Bayesian Statistics) through a shared/hybrid model in Fall 2017. Lectures were given in a classroom at Vassar with Vassar students present. Each lecture was recorded and shared with both the Vassar students and students from other campuses taking the course as an independent study with a local faculty liaison. I would love to share my experience and thoughts, focusing on 1) what material to move online and how to do so, and 2) how to build up a cross-campus learning community.

**December 6, 4:15 pm**

*Speaker*:

*Abstract*: This will be a non-technical talk on the life of Harish-Chandra, mainly focusing on his singular evolution into one of the most impactful mathematicians of the last century. The talk should be accessible to a wide audience.

----
==== Spring 2018 ====
**February 13, 4:15 pm**

*Speaker*:

*Abstract*: A *boundary connected sum* $Q_1\natural Q_2$ of $n$-manifolds is
obtained by gluing $Q_1$ to $Q_2$ along $\left( n-1\right) $-balls in
their respective boundaries. Under mild hypotheses, this gives a well-defined
operation that is commutative, associative, and has an identity element. In
particular (under those hypotheses) the boundary connected sum $\natural
_{i=1}^{k}Q_{i}$ of a finite collection of $n$-manifolds is topologically
well-defined. This observation fails spectacularly when we attempt to
generalize it to countable collections. In this talk I will discuss a pair of
reasonable (and useful) substitutes for a well-definedness theorem for
infinite boundary connected sums. An application of interest in both manifold
topology and geometric group theory examines aspherical manifolds with exotic,
i.e., not homeomorphic to $\mathbb{R}^{n}$, universal covers. We will describe examples different from those found
in the classical papers by Davis and Davis-Januszkiewicz. Much of this work is
joint with Ric Ancel and Pete Sparks.

**February 15, 4:15 pm**

*Speaker*:

*Abstract*: We consider nonparametric measurement error density deconvolution subject to heteroscedastic measurement errors as well as symmetry about zero and shape constraints, in particular unimodality. The problem is motivated by genomics applications, where the observed data are
estimated effect sizes from a regression on multiple genetic factors, as occurs in genome-wide association studies and in microarray applications. We exploit the fact that any symmetric and unimodal density can be expressed as a mixture of symmetric uniform densities, and model the
mixing density using a Dirichlet process location-mixture of Gamma distributions. We do the computations within a Bayesian context, describe a simple scalable implementation that is linear in the sample size, and show that the estimate of the unknown target density is consistent. Within
our application context of regression effect sizes, the target density is likely to have a large probability near zero (the near null effects) coupled with a heavy-tailed distribution (the actual effects). Simulations show that unlike standard deconvolution methods, our Constrained Bayesian
method does a much better job of reconstructing the target density. An application to a genome-wide association study to predict height shows similar results.

**February 20, 4:15 pm**

*Speaker*:

*Abstract*: Supergeometry is, roughly, the geometry associated with $\mathbb{Z}_2$-graded algebra. In particular, for an odd element $Q$ of a Lie superalgebra, the two options,
$Q^2\neq 0$ and $Q^2=0$, lead to “supersymmetry” and to “homological vector fields”, respectively.

The “super” notions were originally discovered as a language for describing fermions and bosons in quantum theory on an equal footing. They received their name from supersymmetric models where bosons and fermions are allowed to mix. Their mathematical roots can be traced in classical differential geometry, algebraic topology and homological algebra.

In the talk, I will introduce the basic ideas and describe some interesting results and links with other areas of mathematics. Among them: super de Rham theory and its connection with the Radon transform and Gelfand's general hypergeometric equations; universal recurrence relations for super exterior powers and an application to the Buchstaber-Rees theory of (Frobenius) “n-homomorphisms”; an analytic proof of the Atiyah-Singer index theorem; and homological vector fields as a universal language for deformation theory and bracket structures (such as homotopy Lie algebras, Lie algebroids, etc.) in mathematics and gauge systems in physics.

An intriguing recent result (which started from a counterexample to a conjecture by Witten) concerns volumes of classical supermanifolds such as superspheres, super Stiefel manifolds, projective superspaces, etc. Upon a universal normalization, formulas for these “super” volumes turn out to be analytic continuations of formulas for ordinary manifolds. Another recent development is “microformal geometry”. This, roughly, is a theory that replaces ordinary maps between manifolds by certain “thick morphisms”, which induce non-linear pullbacks on functions, with remarkable properties. It is motivated by applications to homotopy Poisson structures, but more generally it suggests a non-linear extension of the fundamental “algebra/geometry duality”. I hope to say something about that as well.

**March 15, 4:15 pm**

*Speaker*:

*Abstract*: The field of Object Oriented Data Analysis has made a lot of progress on the statistical analysis of the variation in populations of complex objects. A particularly challenging example of this type is populations of tree-structured objects. Deep challenges arise, whose solutions involve a marriage of ideas from statistics, geometry, and numerical analysis, because the space of trees is strongly non-Euclidean in nature. Here these challenges are addressed using the approach of persistent homology from topological data analysis. The benefits of this data object representation are illustrated using a real data set, where each data point is the tree of blood arteries in one person's brain. Persistent homology gives much better results than those obtained in previous studies.

**March 22, 4:15 pm**

*Speaker*:

*Abstract*: CANCELLED

**March 26, 4:15 pm**

*Speaker*:

*Abstract*: Quantifying the uncertainty of wind energy potential from climate models is a very time-consuming task and requires a considerable amount of computational resources. A statistical model trained on a small set of runs can act as a stochastic approximation of the original climate model, and be used to assess the uncertainty considerably faster than by resorting to the original climate model for additional runs. While Gaussian models have been widely employed as means to approximate climate simulations, the Gaussianity assumption is not suitable for winds at policy-relevant time scales, i.e., sub-annual. We propose a trans-Gaussian model for monthly wind speed that relies on an autoregressive structure with Tukey g-and-h transformation, a flexible new class that can separately model skewness and tail behavior. This temporal structure is integrated into a multi-step spectral framework that is able to account for global nonstationarities across land/ocean boundaries, as well as across mountain ranges. Inference can be achieved by balancing memory storage and distributed computation for a data set of 220 million points. Once fitted with as few as five runs, the statistical model can generate surrogates fast and efficiently on a simple laptop, and provide uncertainty assessments very close to those obtained from all the available climate simulations on a monthly scale. This is joint work with Yuan Yan, Stefano Castruccio, and Marc G. Genton.
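The Tukey g-and-h transformation at the heart of the model has a simple closed form, $\tau_{g,h}(z) = g^{-1}(e^{gz}-1)\,e^{hz^2/2}$ applied to a standard normal $z$; the parameter values below are illustrative, not those fitted in the talk:

```python
import numpy as np

def tukey_gh(z, g=0.5, h=0.1):
    """Tukey g-and-h transform of standard normal draws.

    g controls skewness and h controls tail heaviness;
    g = h = 0 recovers the standard normal itself.
    """
    z = np.asarray(z, dtype=float)
    core = z if g == 0 else (np.exp(g * z) - 1.0) / g
    return core * np.exp(h * z**2 / 2.0)

rng = np.random.default_rng(3)
sample = tukey_gh(rng.standard_normal(100_000), g=0.5, h=0.1)
# Positive g produces right skew: the mean sits above the median.
print(sample.mean() > np.median(sample))
```

Because $g$ and $h$ act separately on skewness and tails, the transform can match the non-Gaussian behavior of monthly wind speeds that a plain Gaussian model misses.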

----
==== Fall 2017 ====
**September 28, 4:30 pm**

*Speaker*:

*Abstract*: The traditional introductory statistics course generally proceeds smoothly until the point where we have to admit to our students that the statistics they’ve been finding in their homework problems aren’t really the answer; they are only an answer. They can believe that. Then we tell them that those answers may be random, but they aren’t haphazard. In particular, if we gather the answers for all possible samples we can model them. They might accept that even though they can’t see why it should be true. Then we claim to be able to estimate the parameters of those models and propose to use them for inference. Then, to top it all off, we admit that we were lying when we said the model for the mean was Normal, and that when the standard deviation is estimated (that is, almost always) or we’re doing a regression, the model isn’t Normal at all but only similar to the Normal.

Many students find all those results uncomfortable. They are not used to thinking that way. The White Queen encountered by Alice may have been able to believe six impossible things before breakfast, but it is challenging to ask that of our students.

With computer technology, we can spread out these results across the first several weeks of the course to make it easier for students to understand and accept them. And then, by introducing bootstrap methods, we can carry these ideas into the discussion of inference.

I will discuss a syllabus that does just that and demonstrate some free software that supports the approach.
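A bootstrap confidence interval of the kind the talk advocates can be demonstrated in a few lines; the data and the statistic here are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=50)  # a skewed sample

# Bootstrap: resample with replacement, recompute the statistic each time.
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5000)
])

# Percentile interval for the population mean.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```

The pedagogical point is that the sampling distribution is made visible by simulation rather than asserted from a Normal model.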

**October 5, 4:15 pm**

*Speaker*:

This speaker's visit is part of the Dean's Speaker Series in Statistics and Data Science.

*Abstract*: Over the many years of reading random matrix papers, it has become
increasingly clear that the phenomena of random matrix theory can be difficult
to understand in the absence of numerical codes to illustrate the phenomena.
(We wish we could require that all random matrix papers that lend themselves
to computing include a numerical simulation with publicly available code.)
Of course mathematics exists without numerical experiments, and all too often a numerical experiment can be seen as an unnecessary bother. On a number of occasions, however, the numerical simulations themselves have an interesting
twist of their own. This talk will present a few of those simulations and explain why the Julia computing language is particularly well suited to them.
Some topics we may discuss:

- “Free” Dice
- Tracy Widom
- Smallest Singular Value
- Jacobians of Matrix Factorizations

(joint work with Bernie Wang)
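As a flavor of the kind of simulation the speaker describes (sketched here in Python rather than Julia, and not taken from the talk), one can watch Wigner's semicircle law emerge from a symmetrized Gaussian matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# GOE-like matrix: symmetrize an iid Gaussian matrix, then scale so that
# off-diagonal entries have variance 1/n.
a = rng.normal(size=(n, n))
h = (a + a.T) / np.sqrt(2 * n)
eigs = np.linalg.eigvalsh(h)

# Wigner's semicircle law: the spectrum fills [-2, 2] as n grows.
print(eigs.min(), eigs.max())
```

A histogram of `eigs` against the density $\frac{1}{2\pi}\sqrt{4-x^2}$ makes the phenomenon visible in a few seconds on a laptop, which is exactly the speaker's point about numerical experiments.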

**October 19, 4:15 pm**

*Speaker*:

*Abstract*: I will discuss three different topics at the intersection of Analysis and Number Theory:

- improved versions of classical inequalities for functions on the torus whose proofs require Number Theory,
- mysterious interactions between the Hardy-Littlewood maximal function and transcendental number theory (I have a proof but I still don't understand what's going on) and
- a complete mystery in an old integer sequence of Stanislaw Ulam ($300 prize for an explanation).

**November 16, 4:15 pm**

*Speaker*:

*Abstract*: Fraïssé theory is a method of classical Model Theory for producing canonical limits of certain families of finite structures. For example, the random graph is the Fraïssé limit of the family of finite graphs. It turns out that this method can be dualized, with the dualization producing projective Fraïssé theory, and applied to the study of compact metric spaces. The pseudoarc is a remarkable compact connected space; it is the generic, in a precise sense, compact connected subset of the plane or the Hilbert cube. I will explain the connection between the pseudoarc and projective Fraïssé limits.

----
**November 30, 4:15 pm**

*Speaker*:

*Abstract*: Density matrices are positive semi-definite Hermitian matrices of unit trace that describe the state of a quantum system. Quantum state tomography (QST) refers to the estimation of an unknown density matrix through specifically designed measurements on identically prepared copies of quantum systems. The dimension of the associated density matrix grows exponentially with the size of the quantum system. This talk is on efficient QST when the underlying density matrix possesses structural constraints.

The first part is on the low rank structure, which has been popular in the community of quantum physicists. We develop minimax lower bounds on error rates of estimation of low rank density matrices, and introduce several estimators showing that these minimax lower bounds can be attained up to logarithmic terms. These bounds are established over all the Schatten norms and quantum Kullback-Leibler divergence. This is based on a series of work with Vladimir Koltchinskii.

The second part is built upon decomposable graphical models for multi-qubit quantum systems. The goal is to reduce the sample complexity required for quantum state tomography, one of the central obstacles in large-scale quantum computing and quantum communication. By considering decomposable graphical models, we show that the sample complexity is allowed to grow linearly with the system size and exponentially with only the maximum clique size. This is based on joint work with Ming Yuan.

----
**December 4, 4:40 pm**

*Speaker*:

*Abstract*: We consider here a large-scale social network with a continuous response observed for each node at equally spaced time points. The responses from different nodes constitute an ultra-high dimensional vector, whose time series dynamics are to be investigated. In addition, the network structure is also taken into consideration, for which we propose a network vector autoregressive (NAR) model. The NAR model assumes each node's response at a given time point is a linear combination of (a) its previous value, (b) the average of its connected neighbors, (c) a set of node-specific covariates, and (d) an independent noise. The corresponding coefficients are referred to as the momentum effect, the network effect, and the nodal effect, respectively. Conditions for strict stationarity of the NAR models are obtained. In order to estimate the NAR model, an ordinary least squares type estimator is developed, and its asymptotic properties are investigated. We further illustrate the usefulness of the NAR model through a number of interesting potential applications. Simulation studies and an empirical example are presented.
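Written out in notation of my own choosing (the speaker's indexing may differ), the NAR model posits, for node $i$ at time $t$,

```latex
Y_{i,t} \;=\; \beta_0 \;+\; \beta_1\, Y_{i,t-1}
\;+\; \beta_2\, \frac{1}{n_i}\sum_{j=1}^{N} a_{ij}\, Y_{j,t-1}
\;+\; Z_i^{\top}\gamma \;+\; \varepsilon_{i,t},
```

where $a_{ij} = 1$ if node $i$ is connected to node $j$ (and $0$ otherwise), $n_i = \sum_j a_{ij}$ is the number of neighbors, and $Z_i$ collects node-specific covariates. Here $\beta_1$ is the momentum effect, $\beta_2$ the network effect, and $\gamma$ the nodal effect.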

----
**December 7, 4:15 pm**

*Speaker*:

*Abstract*: Modern data sets are often decentralized; they are generated and stored in
multiple sources across which communication is constrained by bandwidth or privacy.
Besides, the data quality often suffers from incompleteness. This talk focuses on estimation
of principal eigenspaces of covariance matrices when data are decentralized and incomplete.
We first introduce and analyze a distributed algorithm that aggregates multiple principal
eigenspaces through averaging the corresponding projection matrices. When the number of
data splits is not large, this algorithm is shown to achieve the same statistical efficiency
as the full-sample oracle. We then consider the presence of missing values.
We show that the minimax optimal rate of estimating the principal eigenspace has a phase
transition with respect to the observation probability, and this rate can be achieved by
the principal eigenspace of an entry-wise weighted covariance matrix.
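A minimal sketch of the one-round averaging scheme described above, under the assumption (mine, not the speaker's) that each machine simply returns the projection matrix of its local top-$k$ eigenspace:

```python
import numpy as np

def top_eigenspace(cov, k):
    """Orthonormal basis of the top-k principal eigenspace."""
    _, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return vecs[:, -k:]

def distributed_pca(splits, k):
    """Average the local projection matrices V V^T, then take the
    top-k eigenspace of the average (one round of communication)."""
    d = splits[0].shape[1]
    avg_proj = np.zeros((d, d))
    for x in splits:
        v = top_eigenspace(np.cov(x, rowvar=False), k)
        avg_proj += v @ v.T
    return top_eigenspace(avg_proj / len(splits), k)

# Demo: a spiked covariance whose leading direction is the first axis.
rng = np.random.default_rng(5)
cov = np.diag([10.0, 1.0, 1.0, 1.0, 1.0])
data = rng.multivariate_normal(np.zeros(5), cov, size=1000)
splits = np.array_split(data, 5)  # five "machines"
v_dist = distributed_pca(splits, k=1)
v_full = top_eigenspace(np.cov(data, rowvar=False), k=1)
print(abs((v_dist.T @ v_full).item()))  # alignment close to 1
```

Averaging projection matrices, rather than the eigenvectors themselves, sidesteps the sign and rotation ambiguity of each local eigenbasis.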

----
==== Spring 2017 ====
**February 23, 4:30 pm**

*Speaker*:

*Topic*: Branching random walks, Gaussian free fields, cover times and random matrices.

*Abstract*: TBA

**March 16, 4:30 pm**

*Speaker*:

*Abstract*: A simple symmetric random walk jumps up or down with equal probability. What happens if its jump probabilities are instead taken themselves to be random in space and time (e.g. uniformly distributed on 0% to 100%)? In this talk (based on joint work with Guillaume Barraquand) I will describe the effect of this random environment on a random walk, and elucidate a new connection to the world of quantum integrable systems and the Kardar-Parisi-Zhang universality class and stochastic PDE. No prior knowledge of any of these areas will be expected.
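The setup in the first two sentences is easy to simulate. The environment convention below (independent Uniform(0, 1) up-probabilities for each site-time pair) follows the parenthetical example in the abstract; everything else is illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
T, walkers = 200, 2000

# One environment: an independent Uniform(0, 1) up-probability for each
# (site, time) pair, shared by every walker (site indices offset by T).
p_up = rng.uniform(size=(2 * T + 1, T))

pos = np.zeros(walkers, dtype=int)
for t in range(T):
    up = rng.uniform(size=walkers) < p_up[pos + T, t]
    pos += np.where(up, 1, -1)

# In a fixed environment the empirical mean of the walkers typically
# drifts away from zero, unlike the annealed (environment-averaged) walk.
print(pos.mean(), pos.std())
```

Rerunning with a fresh environment gives a different drift each time; the talk concerns the precise fluctuation behavior behind this effect.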

**May 4, 4:30 pm, Dean's Lecture**

*Speaker*:

*Abstract*: Sharp geometric and functional inequalities play an important role in applications to geometry and PDEs. In this talk, we will discuss some important geometric inequalities such as Sobolev inequalities, Hardy inequalities, Hardy-Sobolev inequalities, Trudinger-Moser and Adams inequalities, Gagliardo-Nirenberg inequalities, and Caffarelli-Kohn-Nirenberg inequalities. We will also talk briefly about their applications in geometry and nonlinear PDEs. Some recent results will also be reported.

This talk is intended to be for the general audience.

----
==== Fall 2016 ====
**December 13, 4:40 pm**

*Speaker*:

*Abstract*: The Birch and Swinnerton-Dyer conjecture, one of the Millennium Prize Problems, is a bridge between algebraic invariants of an elliptic curve and its (complex analytic) L-function. In the case of low ranks, we prove this conjecture up to the finitely many bad primes and the prime 2, by proving the Iwasawa main conjecture in full generality. The ideas in the proof and formulation also lead us to new and mysterious phenomena. This talk assumes no specialized background in number theory.

**December 12, 4:40 pm**

*Speaker*:

*Abstract*: Let S be a hyperbolic surface. We will give a history of counting results for geodesics on S. In particular, we will give estimates that fill the gap between the classical results of Margulis and the more recent results of Mirzakhani. We will then give some applications of these results to the geometry of curves. In the process we highlight how combinatorial properties of curves, such as self-intersection number, influence their geometry.

**December 12, 3:00 pm**

*Speaker*:

*Abstract*: Suppose that $M$ is a closed connected manifold of dimension at least three, and that $f$ is a continuous map from $M$ to itself. Can $f$ be deformed to a map without fixed points? When $f$ is the identity, the Euler characteristic $\chi(M)$ is the complete obstruction, meaning the fixed points can be removed precisely when $\chi(M) = 0$. When $f$ is not the identity, the complete obstruction is instead a more sophisticated invariant called the Reidemeister trace of $f$.

In this talk we will consider the Reidemeister trace, not just for manifolds, but for a very general class of cell complexes. At this level of generality, it becomes highly nontrivial to relate the Reidemeister trace back to the Euler characteristic, but such a relationship would have far-reaching consequences. We will give a precise conjecture about the two, generalizing an earlier conjecture of Geoghegan. Finally, we will outline two recent results that provide new evidence for this conjecture.

**December 9, 4:30 pm**

*Speaker*:

*Abstract*: Group theory is the study of symmetry, and group representation theory is the study of a group via its actions on various vector spaces over the field of complex numbers. Group representations carry lots of information about the groups, and so group representation theory has numerous applications in other areas of mathematics such as probability, cryptography, and number theory, as well as in chemistry and physics. Group representation theory, founded by F.G. Frobenius $120$ years ago, is still an active research area with many interesting and long-standing open conjectures.
Simple groups, a concept introduced by Galois in 1832, are the building blocks of all finite groups. With the completion of the classification of finite simple groups, many important conjectures in group representation theory have been solved or reduced to simple groups.

Bertram Huppert conjectured in $2000$ that all nonabelian simple groups are determined, up to an abelian direct factor, by the set of the degrees of their complex irreducible representations. This conjecture is the best possible and is related to Brauer's famous Problem $2$, which asks: which finite groups are uniquely determined up to isomorphism by the structure of their group algebras? In this talk, I will review some basic concepts in group representation theory, discuss some interesting results on Brauer's Problem $2$, and outline the proof of Huppert's conjecture for alternating groups, one of the most important families of simple groups. This is joint work with Christine Bessenrodt and Jiping Zhang.

**December 8, 4:30 pm**

*Speaker*:

*Abstract*: I will describe the role played by an Apollonian circle packing in the scaling limit of the abelian sandpile on the square grid $\mathbb{Z}^2$. The sandpile solves a certain integer optimization problem. Associated to each circle in the packing is a locally optimal solution to that problem. Each locally optimal solution can be described by an infinite periodic pattern of sand, and the patterns associated to any four mutually tangent circles obey an analogue of the Descartes Circle Theorem. Joint work with Wesley Pegden and Charles Smart.
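A hedged sketch of the abelian sandpile dynamics mentioned above (the grid size and grain count are arbitrary; the talk concerns the scaling limit of much larger piles):

```python
import numpy as np

def stabilize(sand):
    """Topple the abelian sandpile until every site holds < 4 grains.

    Each unstable site gives one grain to each of its four neighbors;
    grains pushed off the edge of the grid are lost. By the abelian
    property, the final configuration is independent of toppling order.
    """
    sand = sand.copy()
    while True:
        unstable = sand >= 4
        if not unstable.any():
            return sand
        sand -= 4 * unstable
        sand[1:, :] += unstable[:-1, :]
        sand[:-1, :] += unstable[1:, :]
        sand[:, 1:] += unstable[:, :-1]
        sand[:, :-1] += unstable[:, 1:]

# Drop 100 grains on the center of a grid and stabilize.
grid = np.zeros((21, 21), dtype=int)
grid[10, 10] = 100
final = stabilize(grid)
print(final[8:13, 8:13])
```

Starting from millions of grains instead of 100 produces the intricate patterned regions whose structure the Apollonian packing describes.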

**December 7, 4:40 pm**

*Speaker*:

*Abstract*: Stochastic heat equation (SHE) with multiplicative noise is an important model. When the diffusion coefficient is linear, this model is also called the parabolic Anderson model, the solution of which traditionally gives the Hopf-Cole solution to the famous KPZ equation. Obtaining various fine properties of its solution will certainly deepen our understanding of these important models. In this talk, I will highlight several interesting properties of SHE and then focus on the probability densities of the solution. In a recent joint work with Y. Hu and D. Nualart, we establish a necessary and sufficient condition for the existence and regularity of the density of the solution to SHE with measure-valued initial conditions. Under a mild cone condition for the diffusion coefficient, we establish the smooth joint density at multiple points. The tool we use is Malliavin
calculus. The main ingredient is to prove that the solutions to a related stochastic partial differential equation have negative moments of all orders.
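
As a rough illustration of the model class (my own sketch using a naive explicit scheme, not the speaker's analysis or proof technique), one can simulate a discretized 1-d parabolic Anderson model, du/dt = (1/2) u_xx + u dW:

```python
import numpy as np

# Explicit finite-difference / Euler-Maruyama sketch of the 1-d parabolic
# Anderson model on [0, 1] with flat initial data. Space-time white noise
# is approximated by independent Gaussians scaled by sqrt(dt / dx).
rng = np.random.default_rng(0)

nx = 100
dx = 1.0 / nx
dt = 0.5 * dx**2            # explicit-scheme stability constraint: dt <= dx^2
T = 0.01
u = np.ones(nx)

t = 0.0
while t < T:
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2   # discrete Laplacian
    noise = rng.standard_normal(nx) * np.sqrt(dt / dx)   # white-noise increment
    u = u + 0.5 * dt * lap + u * noise                   # multiplicative noise
    t += dt

print(round(float(u.mean()), 3))
```

This only illustrates the equation being discussed; the talk's results concern fine properties (existence and smoothness of densities) that such a crude scheme does not capture.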

**December 6, 4:40 pm**

*Speaker*:

*Abstract*: A dynamical system, such as a flow or transformation, is called hyperbolic if it tends to stretch and contract the underlying space in different directions. This behavior often appears in systems that are sufficiently mixing – think of the striations that appear when stirring a drop of milk into coffee – and it lends these systems a kind of rigidity that can be useful in understanding their long-term behavior.

We will look at a number of hyperbolic dynamical systems, including Anosov and pseudo-Anosov maps and transformations, and illustrate some of the uses and consequences of their hyperbolic behavior. In addition, we will see that the dynamics of a hyperbolic system can often be understood in terms of a simpler, lower-dimensional dynamical system that lies “at infinity,” the universal circle. This plays an important role in the proof of Calegari’s conjecture, which relates the dynamical hyperbolicity of a flow with the geometric hyperbolicity of its underlying space: It says that any flow on a closed hyperbolic 3-manifold whose orbits are coarsely comparable to geodesics is equivalent, on the large scale, to a pseudo-Anosov flow.
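
The stretching-and-contracting behavior described above can be seen concretely in Arnold's cat map, the standard example of an Anosov diffeomorphism of the torus (my own illustration; the talk's examples may differ):

```python
import numpy as np

# Arnold's cat map: (x, y) -> (2x + y, x + y) (mod 1) on the 2-torus.
# Its derivative is the constant matrix M, which stretches one direction
# and contracts another: the eigenvalues are (3 +/- sqrt(5)) / 2.
M = np.array([[2, 1], [1, 1]])

lam = (3 + np.sqrt(5)) / 2          # expanding eigenvalue, approx 2.618
evals = np.linalg.eigvalsh(M)       # ascending: approx [0.382, 2.618]
print(evals)

# Hyperbolicity in action: a tiny tangent vector grows like lam**n.
v = np.array([1e-9, 0.0])
for _ in range(10):
    v = M @ v
print(np.linalg.norm(v) / 1e-9)     # approx 1.3e4, growth on the order of lam**10
```

The exponent log((3 + sqrt(5))/2) of this stretching is exactly the map's topological entropy, tying the picture here to the complexity discussed in other talks.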

**December 5, 4:40 pm**

*Speaker*:

*Abstract*: In this talk I present our recent works, joint with D. Knopf and I. M. Sigal, on singularity formation under mean curvature flow. By very different techniques, we proved the uniqueness of the collapsing cylinder for a generic class of initial surfaces. Some key new elements will be discussed, and a few problems that might be tackled by our techniques will be formulated.

**December 2, 4:40 pm**

*Speaker*:

*Abstract*: Mean curvature flow may be regarded as a geometric version of the heat equation. However, in contrast to the classical heat equation, mean curvature flow is described by a quasilinear evolution system of partial differential equations, and in general the solution only exists on a finite time interval. Therefore, it's very typical that the flow develops singularities.

Translating solitons arise as parabolic rescaling of type II singularities. In this talk, we shall outline a program on the classification of translating solitons. We shall also report on some recent progress we have made in the joint work with Joel Spruck.
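
For orientation, the standard equations (not specific to the speaker's results; sign conventions vary by author) are:

```latex
% Mean curvature flow: a one-parameter family of immersions
% F(\cdot, t)\colon M^n \to \mathbb{R}^{n+1} evolving by
\partial_t F \;=\; -H\,\nu \;=\; \Delta_{g(t)} F ,
% where H is the mean curvature, \nu a choice of unit normal, and
% \Delta_{g(t)} the Laplace--Beltrami operator of the induced metric
% (the "geometric heat equation" analogy in the abstract).
%
% A translating soliton with velocity V moves by rigid translation,
% F(x, t) = F_0(x) + tV, which reduces the flow to the elliptic equation
H \;=\; \langle V, \nu \rangle
% (up to the same choice of sign convention).
```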

**December 1, 4:30 pm**

*Speaker*:

*Abstract*: The study of randomness of fixed objects is an area of active
research with many exciting developments in the last few years. We will
discuss recent results about sequences in the unit interval specializing to
directions in affine lattices, $\sqrt{n}$ modulo 1, and directions in hyperbolic lattices.
Theorems about these sequences address convergence of moments as well as rates of convergence, and their proofs
showcase a beautiful interplay between dynamical systems and number theory.
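
As a quick numerical illustration of the second example (my own sketch, not from the talk), one can check empirically that the fractional parts of $\sqrt{n}$ equidistribute in the unit interval:

```python
import math

# Empirical equidistribution check for sqrt(n) mod 1: the proportion of
# fractional parts landing in [0, 1/2) should approach 1/2 as N grows.
N = 100_000
fracs = [math.sqrt(n) % 1.0 for n in range(1, N + 1)]
prop = sum(f < 0.5 for f in fracs) / N
print(round(prop, 3))  # close to 0.5
```

Equidistribution is the classical, coarse statement; the talk's finer results (convergence of moments, rates) concern statistics of such sequences well beyond this simple count.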

**November 17, 4:30 pm**

*Speaker*:

*Abstract*: Dynamical systems studies the long-term behavior of systems that evolve in time. It is well known that, even given an initial state, the future behavior of a system can be unpredictable, and in many cases impossible to describe. The entropy of a system is a number that quantifies the complexity of the system. In studying entropy, the nicest classes of smooth systems are those that do not undergo bifurcations under small perturbations; in this case, the entropy remains constant under perturbation. Outside of this class of systems, a perturbation of the original system may undergo bifurcations. However, this is a local phenomenon, and it is unclear when and how the local changes in the system lead to global changes in the complexity of the system. We will state recent results describing how the entropy (complexity) of the system may change under perturbation for certain classes of systems.
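
As a toy quantitative example (mine, not from the talk): for a hyperbolic toral automorphism, the topological entropy is a single computable number, the sum of log |λ| over eigenvalues of modulus greater than 1, and structural stability of such systems is what keeps this number constant under small smooth perturbations.

```python
import numpy as np

# Topological entropy of a hyperbolic automorphism of the 2-torus:
# sum of log |lambda| over eigenvalues with |lambda| > 1.
M = np.array([[3, 2], [1, 1]])             # integer matrix with det = 1
lams = np.abs(np.linalg.eigvals(M))        # moduli: approx 3.732 and 0.268
h = float(np.sum(np.log(lams[lams > 1])))
print(round(h, 4))                          # log(2 + sqrt(3)) approx 1.3170
```
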

**November 3, 4:30 pm**

*Speaker*:

*Abstract*: This talk is about an unexpected connection between cryptography and the theory of electrostatics. RSA cryptography is based on the presumed difficulty of factoring a given large integer N. In the 1990's, Coppersmith showed how one could quickly determine whether there is a factor of N which is within N^{1/4} of a given number. Capacity theory originated in studying how charged particles distribute themselves on an object. I will discuss how an arithmetic form of capacity theory can be used to show that one cannot increase the exponent 1/4 in Coppersmith's method. This is joint work with Brett Hemenway, Nadia Heninger and Zach Scherr.
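
To make the search problem concrete, here is a toy brute-force version (emphatically not Coppersmith's lattice-based method, which solves this window in polynomial time in log N; `factor_near` is my own hypothetical helper, usable only for tiny N):

```python
# Given N = p * q and an approximation a of a factor, scan the window
# |candidate - a| <= N**(1/4) for a divisor of N. Coppersmith's method
# handles the same window for cryptographic-size N; this loop cannot.
def factor_near(N, a):
    bound = int(round(N ** 0.25)) + 1
    for d in range(bound + 1):
        for cand in (a - d, a + d):
            if 1 < cand < N and N % cand == 0:
                return cand
    return None

p, q = 10007, 10501          # small primes for the toy example
N = p * q
a = 10000                    # approximation of p, within N**(1/4) of it
print(factor_near(N, a))     # recovers 10007
```

The talk's result says the exponent 1/4 in the window size is a hard barrier for Coppersmith-type methods, which this toy search of course says nothing about.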

seminars/colloquium.txt · Last modified: 2019/10/17 22:51 by mazur

Except where otherwise noted, content on this wiki is licensed under the following license: CC Attribution-Noncommercial-Share Alike 3.0 Unported