Activities
Student Organizations
Math Club
BingAWM
Actuarial Association
Unless stated otherwise, colloquia are scheduled for Thursdays 4:30-5:30pm in WH-100E with refreshments served from 4:00-4:25 pm in WH-102.
Organizers: Thomas Farrell and Andrey Gogolev
Spring 2015
Abstract:
Some consider smooth 4-manifolds to be a mature field, which
typically means that its approachable yet nontrivial problems have
become scarce. This is mainly due to a lack of tools. In this talk I will
present a new way to depict any smooth, closed oriented 4-manifold
that opens the doors to two of the most successful tools from
3-manifolds: pseudoholomorphic curves and discrete groups.
Abstract:
For geometric nonlinear PDEs, where no easy superposition principle holds, examples of (global, geometrically/topologically interesting) solutions can be hard to come by. In certain situations, for example for 2-surfaces satisfying an equation of mean curvature type, one can generally “fuse” two or more such surfaces satisfying the PDE, as long as certain global obstructions are respected, at the cost (or benefit) of increasing the genus significantly. The key to success in such a gluing procedure is to understand the obstructions from a more local perspective, and to allow sufficiently large geometric deformations to take place. In the talk I will introduce some of the basic ideas and techniques (and pictures) in the gluing of minimal 2-surfaces in a 3-manifold. Then I will explain two recent applications, one to the study of
solitons with genus in the singularity theory for mean curvature flow (rigorous construction of Ilmanen's conjectured “planosphere” self-shrinkers), and another to the non-compactness of moduli spaces of finite total curvature minimal surfaces (a problem posed by Ros & Hoffman-Meeks). Some of this work is joint with Steve Kleene and/or Nicos Kapouleas.
Abstract:
The Śulbasūtras, generally dated to 800-200 BCE, are a group of texts that provide the mathematical methods necessary for carrying out various rituals of ancient India. The texts do not seek to convince the reader that a particular formula is correct, but rather focus on providing the reader with working methods. As such, it is often not clear exactly how the authors of the texts arrived at their mathematical results. We will explore some of the mathematical statements of the texts and modern attempts at reconstructing the rationale behind them.
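A famous concrete example of such a working method, from the Baudhāyana Śulbasūtra, is the approximation
\[
\sqrt{2} \;\approx\; 1 + \frac{1}{3} + \frac{1}{3\cdot 4} - \frac{1}{3\cdot 4\cdot 34} \;=\; \frac{577}{408} \;\approx\; 1.414216,
\]
accurate to about five decimal places, yet stated in the text with no indication of how it was derived.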
Abstract: The level of difficulty at which students are assessed on exams can vary considerably across course sections, raising issues for both academic equity and student proficiency. This may arise when instructors vary from one another in the level of difficulty at which they intend to test. Even when instructors intend to test at the same level, however, judgments regarding what constitutes a “basic” or “difficult” question can vary widely.
This discussion will present a framework that enables instructors to rate problem difficulty in an objective manner. Used a priori, it facilitates improved control over exam design, helping instructors make purposeful and accurate choices about the difficulty profile they wish to construct. Used post hoc, it provides insight into the factors driving exam outcomes and, by implication, student learning. The tool has been validated empirically over several years of use in introductory-level Physics and Calculus courses.
This discussion may be of interest to:
Abstract: Although constituting a vast extension of ancient Spherical Geometry, the beautiful class of positively curved (Riemannian) spaces is like the “Tip of the Iceberg” among all (Riemannian) spaces. Accordingly, non-symmetric positively curved spaces are known only in a few sporadic dimensions, and yet only a few obstructions to their existence are known.
In this talk, we will describe the current state of affairs of the subject, including tools and methods, with emphasis on the impact symmetries have had on its development during the last few decades.
Abstract:
We consider a hyperbolic diffeomorphism f of a manifold M.
A linear cocycle over f is an automorphism of a vector bundle
over M that projects to f. An important example comes from
the differential of f or its restriction to an invariant sub-bundle
of the tangent bundle. For a trivial bundle, a linear cocycle can
be viewed as a GL(d,R)-valued function on the manifold.
We discuss what conclusions can be drawn about cocycles
based on their behavior at the periodic points of f.
In particular, we consider the questions of when two cocycles
are cohomologous and when a cocycle is conformal or isometric.
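In concrete terms (a standard formulation, stated here for orientation): for a trivial bundle, the cocycle generated by a map $A : M \to GL(d,\mathbb{R})$ over $f$ takes the form
\[
\mathcal{A}(x,n) = A(f^{n-1}x)\cdots A(fx)\,A(x),
\]
and two cocycles $A$ and $B$ are cohomologous if $A(x) = C(fx)\,B(x)\,C(x)^{-1}$ for some $C : M \to GL(d,\mathbb{R})$. At a periodic point $p$ with $f^n p = p$, this forces $\mathcal{A}(p,n)$ and $\mathcal{B}(p,n)$ to be conjugate, which is exactly the kind of periodic-point data compared in the talk.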
Fall 2014
Abstract:
In this talk I will give the classification of 5-manifolds with fundamental group Z whose second homotopy group is a finitely generated abelian group. As an application we obtain a criterion for such 5-manifolds to be fiber bundles over the circle. The classification is also applied to classify certain knotted 3-spheres in the 5-sphere. This is joint work with M. Kreck.
Abstract:
While the majority of gene histories found in a clade of organisms are expected to be generated by a common process (e.g. the coalescent process), it is well-known that numerous other coexisting processes (e.g. horizontal gene transfers, gene duplication and subsequent neofunctionalization) will cause some genes to exhibit a history quite distinct from those of the majority of genes. Such “outlying” gene trees are considered to be biologically interesting and identifying these genes has become an important problem in phylogenetics.
In this talk we propose a nonparametric method of estimating distributions of phylogenetic trees, with the goal of identifying trees which are significantly different from the rest of the trees in the sample. Our method compares favorably with a similar recently-published method, featuring an improvement of one polynomial order of computational complexity (to quadratic in the number of trees analyzed), with simulation studies suggesting only a small penalty to classification accuracy. Application of our implemented software KDETrees to a set of Apicomplexa genes identified several unreliable sequence alignments which had escaped previous detection, as well as a gene independently reported as a possible case of horizontal gene transfer.
This is joint work with G. Weyenberg, P. Huggins, C. Schardl, and D. Howe.
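The core idea of density-based outlier detection can be sketched with a toy computation: estimate each item's density from its distances to the other items, and flag the lowest-density item. This is only an illustration of the general technique on synthetic numbers, not the KDETrees implementation; the distance matrix, bandwidth, and function names are made up for the example.

```python
import numpy as np

def kde_outlier_scores(dist, bandwidth=1.0):
    """Score each item by a Gaussian-kernel density estimate built from
    its distances to all other items; low scores suggest outliers."""
    n = dist.shape[0]
    K = np.exp(-(dist / bandwidth) ** 2 / 2.0)
    np.fill_diagonal(K, 0.0)          # leave-one-out: exclude self
    return K.sum(axis=1) / (n - 1)

# Toy "tree distance" matrix: items 0-3 are mutually close, item 4 is far.
d = np.array([
    [0, 1, 1, 1, 9],
    [1, 0, 1, 1, 9],
    [1, 1, 0, 1, 9],
    [1, 1, 1, 0, 9],
    [9, 9, 9, 9, 0],
], dtype=float)

scores = kde_outlier_scores(d)
outlier = int(np.argmin(scores))      # item with lowest estimated density
print(outlier)  # 4
```

With pairwise distances in hand, the whole scoring step is quadratic in the number of items, which matches the complexity improvement described above.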
Abstract:
Non-Archimedean analytic geometry, as developed by Berkovich, is a variation of classical complex analytic geometry for non-Archimedean fields such as p-adic numbers. Solutions to a system of polynomial equations over these fields form a totally disconnected space in their natural topology. The process of analytification adds just enough points to make them locally connected and Hausdorff. The resulting spaces are technically difficult to study but, notably, their heart is combinatorial: they can be examined through the lens of tropical and polyhedral geometry.
I will illustrate this powerful philosophy through complete examples, including elliptic curves, the tropical Grassmannian of planes of Speyer-Sturmfels, and a compactification of the well-known space of phylogenetic trees of Billera-Holmes-Vogtmann.
This talk is based on joint works with M. Haebich, H. Markwig and A. Werner.
Abstract:
Discrete time Markov chains are often used in statistical models to fit the observed data from a random physical process. Sometimes, in order to simplify the model, it is convenient to consider time-homogeneous Markov chains, where the transition probabilities do not depend on the time $T$. While under the time-homogeneous Markov chain model it is assumed that the row sums of the transition probabilities are equal to one, under the toric homogeneous Markov chain (THMC) model the parameters are free and the row sums of the transition probabilities are not restricted.
In order for a statistical model to reflect the observed data, a goodness-of-fit test is applied. For instance, for the time-homogeneous Markov chain model, it is necessary to test whether the assumption of time-homogeneity fits the observed data. In 1998, Diaconis-Sturmfels developed a Markov chain Monte Carlo (MCMC) method for goodness-of-fit testing using Markov bases. A Markov basis is a set of moves between elements in the conditional sample space with the same sufficient statistics, so that the transition graph for the MCMC is guaranteed to be connected for any observed value of the sufficient statistics. In algebraic terms, a Markov basis is a generating set of a toric ideal defined as the kernel of a monomial map between two polynomial rings. In algebraic statistics, the monomial map comes from the design matrix (configuration) associated with a statistical model.
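The Diaconis-Sturmfels walk is easiest to see in the textbook case of 2x2 contingency tables with fixed margins, whose Markov basis consists of a single move. This is a minimal illustrative sketch of that classical case, not the THMC setting of the talk; the table values are invented.

```python
import random

# The unique (up to sign) Markov-basis move for 2x2 tables with fixed margins.
MOVE = [[1, -1], [-1, 1]]

def mcmc_step(table, rng):
    """One step of the Diaconis-Sturmfels walk: add +/- the basic move,
    rejecting proposals that would make an entry negative."""
    sign = rng.choice([1, -1])
    prop = [[table[i][j] + sign * MOVE[i][j] for j in range(2)] for i in range(2)]
    if min(min(row) for row in prop) < 0:
        return table                   # stay put on an invalid proposal
    return prop

rng = random.Random(0)
t = [[3, 1], [2, 4]]                   # observed 2x2 table
row0 = sum(t[0])
col0 = t[0][0] + t[1][0]
for _ in range(100):
    t = mcmc_step(t, rng)
# The margins (the sufficient statistics) are preserved at every step.
print(sum(t[0]) == row0 and t[0][0] + t[1][0] == col0)  # True
```

Because the move has zero row and column sums, every table visited by the walk shares the sufficient statistics of the observed table, which is precisely what the goodness-of-fit test requires.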
In this talk we will consider a Markov basis and a Groebner basis for the toric ideal associated with the design matrix defined by the THMC model with $S \geq 2$ states, without initial parameters, for any time $T \geq 3$. First we will give an upper bound on the Markov degree, the degree of a minimal Markov basis, of the THMC model with $S = 3$ for $T \geq 3$. In order to compute the upper bound, we use the model polytope — the convex hull of the columns of the design matrix. We will show that the model polytope has only 24 facets for $T \geq 5$, and give a complete description of the facets for $T \geq 3$. Finally, we will give a condition under which the THMC with any $S \geq 2$ states for $T \geq 3$ has a square-free quadratic Groebner basis and Markov basis. One such example is the embedded discrete Markov chain (jump chain) of the Kimura three-parameter model.
This is joint work with Davis Haws (IBM), Abraham Martin del Campo (IST Austria), and Akimichi Takemura (University of Tokyo).
Abstract:
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method, named QUADRO, for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tailed distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating nonpolynomially many parameters, even though only the fourth moments are assumed. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of the Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
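The linear case mentioned above has a closed-form answer that may help orient the reader: the Rayleigh quotient $w^\top A w / w^\top B w$ is maximized by the leading generalized eigenvector. This is a minimal sketch of that classical fact, not the QUADRO method; the matrices A and B are illustrative stand-ins for signal and noise matrices.

```python
import numpy as np

def max_rayleigh(A, B):
    """Maximize w^T A w / w^T B w over nonzero w via the leading
    eigenvector of B^{-1} A (B symmetric positive definite)."""
    vals, vecs = np.linalg.eig(np.linalg.inv(B) @ A)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / np.linalg.norm(w)

A = np.array([[4.0, 0.0], [0.0, 1.0]])   # illustrative "signal" matrix
B = np.eye(2)                             # illustrative "noise" matrix
w = max_rayleigh(A, B)
q = (w @ A @ w) / (w @ B @ w)
print(round(q, 6))  # 4.0
```

In the quadratic setting of the talk, no such eigenvector shortcut exists, which is why the constrained optimization machinery described in the abstract is needed.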
Abstract:
A family of hypersurfaces $M_t\subset R^{n+1}$ evolves by mean curvature flow (MCF) if the velocity at each point is given by the mean curvature vector. MCF can be viewed as a geometric heat equation, deforming surfaces towards optimal ones. If the initial surface $M_0$ is convex, then the evolving surfaces $M_t$ become rounder and rounder and converge (after rescaling) to the standard sphere $S^n$. The central task in studying MCF for more general initial surfaces is to analyze the formation of singularities. For example, if $M_0$ looks like a dumbbell, then the neck will pinch off, preventing one from continuing the flow in a smooth way. To resolve this issue, one can either try to continue the flow as a generalized weak solution or try to perform surgery (i.e. cut along necks and replace them by caps). These ideas have been implemented in the last 15 years in the deep work of White and Huisken-Sinestrari, and recently Kleiner and I found a streamlined and unified approach (arXiv: 1304.0926, 1404.2332). In this lecture, I will survey these developments for a general audience.
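A minimal worked example of the convex case: a round sphere of radius $R_0$ shrinks self-similarly. The mean curvature of $S^n_R$ is $n/R$, so the radius solves $R'(t) = -n/R$, giving
\[
R(t) = \sqrt{R_0^2 - 2nt},
\]
and the sphere disappears at the finite time $t = R_0^2/(2n)$, the simplest instance of singularity formation under MCF.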
Abstract:
There is a deep interplay between the combinatorics (matroid), algebra (cohomology or rational model), and geometry (complement) of a subspace arrangement (a finite collection of subspaces in a vector space). For example, if the subspaces are complex and of complex codimension 1 (hyperplanes), then the Betti numbers are exactly the (unsigned) Whitney numbers of the first kind of the intersection lattice. Subspace arrangements of the braid arrangement can be enumerated by partitions. It turns out that the Whitney numbers of these subspace arrangements can be found by looking at a generalized chromatic polynomial of the associated partitions. Unfortunately, these Whitney numbers do not give the Betti numbers of the complement, and no closed formula for these Betti numbers is known. However, using tools from rational homotopy theory, we can show that certain classes of these arrangements are rationally formal while others are non-formal. At the end we will construct a new differential graded algebra which serves as a kind of model for the collection of all k-equal arrangements (configuration spaces where k-1 points can collide)
and which hints at a nice presentation for the cohomology and the Betti numbers.
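To fix ideas with the simplest case: for the full braid arrangement in $\mathbb{C}^n$ (all hyperplanes $x_i = x_j$), the intersection lattice is the partition lattice, and the characteristic polynomial coincides with the chromatic polynomial of the complete graph $K_n$,
\[
\chi(t) = t(t-1)(t-2)\cdots(t-n+1),
\]
whose coefficients are, up to sign, the Whitney numbers of the first kind (the Stirling numbers of the first kind).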
Abstract:
Data subject to heavy-tailed errors are commonly encountered in various scientific fields, especially in the modern era with its explosion of massive data. To address this problem, procedures based on quantile regression and Least Absolute Deviation (LAD) regression have been developed in recent years. These methods essentially estimate the conditional median (or quantile) function, which can be very different from the conditional mean function when distributions are asymmetric and heteroscedastic. How can we efficiently estimate the mean regression function in the ultra-high dimensional setting when only the second moment exists? To solve this problem, we propose a penalized Huber loss with a diverging parameter to reduce the biases created by the traditional Huber loss. Such a penalized robust approximate quadratic (RA-quadratic) loss will be called RA-Lasso. In the ultra-high dimensional setting, where the dimensionality can grow exponentially with the sample size, our results reveal that the RA-Lasso estimator is consistent at the same rate as the optimal rate under the light-tail situation. We further study the computational convergence of RA-Lasso and show that the composite gradient descent algorithm indeed produces a solution that admits the same optimal rate after sufficiently many iterations. As a byproduct, we also establish a concentration inequality for estimating the population mean when only the second moment exists. We compare RA-Lasso with other regularized robust estimators based on quantile regression and LAD regression. Extensive simulation studies demonstrate the satisfactory finite-sample performance of RA-Lasso.
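The Huber loss at the heart of the RA-quadratic construction is easy to write down: quadratic near zero, linear in the tails, with the transition point c playing the role of the (diverging) parameter. This is a sketch of the loss itself under that standard definition, not of the authors' RA-Lasso estimator.

```python
def huber(r, c):
    """Huber loss: quadratic for |r| <= c, linear beyond, so gross
    outliers contribute linearly rather than quadratically."""
    a = abs(r)
    return 0.5 * r * r if a <= c else c * a - 0.5 * c * c

def huber_grad(r, c):
    """Derivative of the Huber loss in r; bounded by c in absolute value."""
    return r if abs(r) <= c else (c if r > 0 else -c)

# Inside the Huber zone the loss matches squared error ...
print(huber(0.5, 1.0))   # 0.125
# ... while a large residual grows only linearly.
print(huber(10.0, 1.0))  # 9.5
```

Letting c grow with the sample size shrinks the region where the loss deviates from squared error, which is how the bias relative to mean regression is driven down.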
Abstract:
Dynamic networks are used in a variety of fields to represent the structure and evolution of the relationships between entities. We present a model which embeds longitudinal network data as trajectories in a latent Euclidean space. A Markov chain Monte Carlo algorithm is proposed to estimate the model parameters and latent positions of the actors in the network. The model yields meaningful visualization of dynamic networks, giving the researcher insight into the evolution and the structure, both local and global, of the network. The model handles directed or undirected edges, easily handles missing edges, and lends itself well to predicting future edges. Further, a novel approach is given to detect and visualize an attracting influence between actors using only the edge information. We use the case-control likelihood approximation to speed up the estimation algorithm, modifying it slightly to account for missing data. We apply the latent space model to data collected from a Dutch classroom and to a cosponsorship network of members of the U.S. House of Representatives, illustrating the usefulness of the model through the insights it yields into these networks.
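The static building block behind such models, in the spirit of standard latent space network models, is an edge probability that decreases with latent Euclidean distance. A minimal sketch under that assumption, with an illustrative intercept beta (the talk's trajectory model is a dynamic extension of this idea):

```python
import math

def edge_prob(z_i, z_j, beta=2.0):
    """Probability of an edge, decreasing in latent distance:
    logit P(edge) = beta - |z_i - z_j|."""
    d = math.dist(z_i, z_j)
    return 1.0 / (1.0 + math.exp(-(beta - d)))

# Actors near each other in latent space connect with high probability.
close = edge_prob((0.0, 0.0), (0.5, 0.0))
far = edge_prob((0.0, 0.0), (5.0, 0.0))
print(close > far)  # True
```

Embedding each actor's positions over time as a trajectory then lets a single picture show how the network's local and global structure evolves.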
Abstract:
Statistical inference on infinite-dimensional parameters in the Bayesian framework is investigated. The main contribution of our work is to demonstrate that a nonparametric Bernstein-von Mises theorem can be established in a very general class of nonparametric regression models under the novel tuning priors. Surprisingly, this type of prior connects two important classes of statistical methods, nonparametric Bayes and smoothing splines, at a fundamental level. The association with smoothing splines facilitates both theoretical analysis and applications for nonparametric Bayesian inference. For example, the selection of a proper tuning prior can be easily done through generalized cross validation, which is well implemented by existing R packages.
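For reference, generalized cross validation in the smoothing spline setting (stated here in its standard form, not necessarily the authors' exact procedure) selects the smoothing parameter $\lambda$ minimizing
\[
\mathrm{GCV}(\lambda) = \frac{n^{-1}\,\|(I - A(\lambda))\,y\|^2}{\big(n^{-1}\,\mathrm{tr}(I - A(\lambda))\big)^2},
\]
where $A(\lambda)$ is the smoother ("hat") matrix mapping the response vector $y$ to the fitted values.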
This is a joint work with Guang Cheng (Purdue).
Abstract:
Geometric group theory is partly the study of groups arising naturally in geometry and topology: this includes fundamental groups of interesting spaces (e.g. 3-manifold groups) or groups of symmetries (isometries, homeomorphisms, etc.) of interesting spaces (e.g. mapping class groups). Geometric group theory is also the study of groups as geometric objects in their own right.
This talk deals with three viewpoints from which a group can be analyzed, and the interplay between these. First, the source of many questions in geometric group theory is the topological viewpoint, in which spaces are distinguished up to homeomorphism, homotopy equivalence, etc. Once one has isolated the fundamental group of one's space, and found generators, the natural geometry becomes “coarse”, and things are generally true up to a relation called “quasi-isometry”. Often, it is desirable to realize the group as a group of automorphisms of some very specific, rigid combinatorial structure; this is the third viewpoint. I will discuss examples of how each of these approaches can naturally lead to and interact with the others.
I will conclude with a brief discussion of very recent joint work with J. Behrstock and A. Sisto, in which we define a class of spaces that includes mapping class groups, many cubical groups (which will have been defined), and most 3-manifold groups, and build tools to study the coarse geometry of such spaces from a common perspective.
Abstract:
In this talk I show that it is always possible to find $n$ points in the $d$-dimensional faces of an $nd$-dimensional convex polytope $P$ so that their center of mass is a target point in $P$. Equivalently, the $n$-fold Minkowski sum of the polytope's $d$-skeleton is the polytope scaled by $n$. This verifies a conjecture by Takeshi Tokuyama.
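The smallest instance may help fix the statement: for the unit square $P = [0,1]^2$ (so $n = 2$, $d = 1$, $nd = 2$), every target point in $P$ is the midpoint of two points lying on edges; for example, $(1/2, 1/2)$ is the average of $(0, 1/2)$ and $(1, 1/2)$. Equivalently, the 2-fold Minkowski sum of the square's boundary is the square scaled by 2.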
Abstract: For over a century, Einstein metrics have remained of core interest in modern geometry. On a homogeneous space the Einstein condition reduces to a collection of polynomials and so, in principle, such spaces should be easy to understand and classify. However, the reality is much more complicated and no classification exists in either the compact or non-compact settings. In this talk, we present the current state of knowledge on the classification of non-compact, homogeneous Einstein spaces.