<?xml version="1.0" encoding="utf-8"?>
<!-- generator="FeedCreator 1.7.2-ppt DokuWiki" -->
<?xml-stylesheet href="https://www2.math.binghamton.edu/lib/exe/css.php?s=feed" type="text/css"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>Department of Mathematics and Statistics, Binghamton University seminars:sml</title>
    <subtitle></subtitle>
    <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/"/>
    <id>https://www2.math.binghamton.edu/</id>
    <updated>2026-04-11T12:49:39-04:00</updated>
    <generator>FeedCreator 1.7.2-ppt DokuWiki</generator>
<link rel="self" type="application/atom+xml" href="https://www2.math.binghamton.edu/feed.php" />
    <entry>
        <title>September 9, 2014</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/140909"/>
        <published>2017-06-12T11:43:59-04:00</published>
        <updated>2017-06-12T11:43:59-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/140909</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, September 9, 2014&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: OW-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Ganggang Xu (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Asymptotic optimality and efficient computation of the leave-subject-out cross-validation&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
Although the leave-subject-out cross-validation (CV) has been widely used in practice for tuning parameter selection in various nonparametric and semiparametric models of longitudinal data, its theoretical properties are unknown and solving the associated optimization problem is computationally expensive, especially when there are multiple tuning parameters. In this paper, focusing on the penalized spline method, we show that the leave-subject-out CV is optimal in the sense that it is asymptotically equivalent to minimizing the empirical squared error loss. An efficient Newton-type algorithm is developed to compute the penalty parameters that optimize the CV criterion. Simulated and real data are used to demonstrate the effectiveness of the leave-subject-out CV in selecting both the penalty parameters and the working correlation matrix.
&lt;/p&gt;
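&lt;p&gt;
A minimal sketch of the leave-subject-out idea, in Python, assuming a generic fit/predict smoother (ridge regression stands in for the paper's penalized spline, and the grid search below is illustrative, not the paper's Newton-type algorithm):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
import numpy as np
from sklearn.linear_model import Ridge

def leave_subject_out_cv(X, y, subject_ids, lambdas):
    # Hold out one whole subject at a time, matching longitudinal data
    # where observations within a subject are correlated.
    subjects = np.unique(subject_ids)
    cv_score = {}
    for lam in lambdas:
        sse = 0.0
        for s in subjects:
            test = subject_ids == s
            model = Ridge(alpha=lam).fit(X[~test], y[~test])
            resid = y[test] - model.predict(X[test])
            sse += np.sum(resid ** 2)
        cv_score[lam] = sse / len(y)
    return min(cv_score, key=cv_score.get)  # penalty minimizing CV error
&lt;/pre&gt;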
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>September 23, 2014</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/140923"/>
        <published>2017-06-12T11:45:00-04:00</published>
        <updated>2017-06-12T11:45:00-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/140923</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, September 23, 2014&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: OW-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Wei Sun (Purdue University)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Causal Inference Framework for Complex Advertising Effect Measurement&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
As the online advertising industry has evolved into an age of diverse ad formats and delivery channels, users are exposed to complex ad treatments involving various ad characteristics. The diversity and generality of ad treatments call for accurate and causal measurement of ad effectiveness. In this talk, I will present a new causal inference framework for assessing the impact of general advertising treatments. It enables analysis of multi-dimensional ad treatments, where each ad treatment can be discrete or continuous. Our approach is unbiased and computationally efficient, employing a tree structure that specifies the relationship between user characteristics and the corresponding ad treatment. The framework is robust to model misspecification and highly flexible, with minimal manual tuning. The efficacy of our approach is demonstrated in several advertising campaigns:
&lt;/p&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Investigating the best ad frequency in a campaign. We show that ad frequency usually has a treatment effect cap and is often over-estimated by naive estimation. This investigation suggests adjusting the cap on ad frequency to avoid wasting ad inventory.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Investigating the best ad size on different mobile devices. Our model suggests that the ad size 300*250 produces consistently high success rates across various mobile devices.&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
Bio: Wei Sun is a graduate student in the Department of Statistics at Purdue University. His research interests span various aspects of statistical machine learning and high-dimensional data analysis. He has published in the Journal of Machine Learning Research and the Electronic Journal of Statistics. Wei Sun received an Institute of Mathematical Statistics Travel Award in 2013.
&lt;/p&gt;
&lt;!-- EDIT7 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center plugin_wrap&quot; style=&quot;width:40%;&quot;&gt;&lt;div class=&quot;table sectionedit9&quot;&gt;&lt;table class=&quot;inline&quot;&gt;
	&lt;tr class=&quot;row0&quot;&gt;
		&lt;th class=&quot;col0 centeralign&quot; colspan=&quot;2&quot;&gt;  Itinerary  &lt;/th&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row1&quot;&gt;
		&lt;td class=&quot;col0 rightalign&quot;&gt;  9:30 - 10:00&lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Xingye Qiao &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row2&quot;&gt;
		&lt;td class=&quot;col0 rightalign&quot;&gt;  10:00 - 10:30&lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Graduate student(s) &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row3&quot;&gt;
		&lt;td class=&quot;col0 rightalign&quot;&gt;  10:30 - 11:00&lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Qiqing Yu &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row4&quot;&gt;
		&lt;td class=&quot;col0 rightalign&quot;&gt;  11:00 - 11:30&lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Ganggang Xu &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row5&quot;&gt;
		&lt;td class=&quot;col0 rightalign&quot;&gt;  12:00 - 1:00&lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Talk &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row6&quot;&gt;
		&lt;td class=&quot;col0 rightalign&quot;&gt;  1:00 - 2:00&lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Lunch with graduate students &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row7&quot;&gt;
		&lt;td class=&quot;col0 rightalign&quot;&gt;  2:00&lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Departure &lt;/td&gt;
	&lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;!-- EDIT9 TABLE [2151-2392] --&gt;&lt;/div&gt;&lt;!-- EDIT8 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>November 18, 2014</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/141118"/>
        <published>2014-11-06T22:39:43-04:00</published>
        <updated>2014-11-06T22:39:43-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/141118</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, November 18, 2014&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: OW-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Ming Yang (Computer Science)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Multi-task Learning with Gaussian Matrix Generalized Inverse Gaussian Model&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
In this talk, we first introduce the multi-task learning problem and the application of the Matrix Generalized Inverse Gaussian (MGIG) distribution to the problem. We then propose the GMGIG regression model for multi-task learning. To make the computation tractable, we use variational inference and sampling techniques together. In particular, we propose two sampling strategies for computing the statistics of the MGIG distribution.
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>December 2, 2014</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/141202"/>
        <published>2014-11-05T23:20:29-04:00</published>
        <updated>2014-11-05T23:20:29-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/141202</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, December 2, 2014&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: OW-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Yingming Li (Computer Science)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Scientific Articles Recommendation&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
Advances in social network sites enable researchers to access large on-line archives of scientific articles. These sites allow researchers to create personal libraries of the on-line articles that interest them and to share those libraries with other researchers. This makes recommender systems helpful for finding interesting articles. In this paper, we propose topic regression matrix factorization models to recommend scientific articles to the on-line community. The main idea of topic regression matrix factorization lies in extending matrix factorization with probabilistic topic modeling. Further, we demonstrate the efficacy of these models on a large subset of data from CiteULike, a bibliography sharing service.
&lt;/p&gt;
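&lt;p&gt;
For orientation, a minimal matrix factorization sketch in Python via alternating least squares. This is only the baseline the talk builds on: the topic regression models additionally tie each article's latent factor to its topic proportions, and a real recommender would handle missing entries rather than a dense rating matrix (both simplifications are assumptions here):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
import numpy as np

def als_mf(R, k=10, lam=0.1, iters=20, seed=0):
    # R: (n_users x n_items) rating matrix, treated as fully observed.
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((R.shape[0], k))
    V = 0.1 * rng.standard_normal((R.shape[1], k))
    I = lam * np.eye(k)
    for _ in range(iters):
        U = R @ V @ np.linalg.inv(V.T @ V + I)    # update user factors
        V = R.T @ U @ np.linalg.inv(U.T @ U + I)  # update item factors
    return U, V  # predicted ratings: U @ V.T
&lt;/pre&gt;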
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>September 29, 2015</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/150929"/>
        <published>2015-09-11T14:55:45-04:00</published>
        <updated>2015-09-11T14:55:45-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/150929</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, September 29, 2015&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Xu Chu (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Lasso, AdaLasso and Sure Independence Screening&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
This is the first of a series of three talks focusing on variable selection for high-dimensional data. I will introduce the following three papers on variable selection; a brief lasso illustration follows the reference list.
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Tibshirani, R. (1996) Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B, 58, 267–288.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level2&quot;&gt;&lt;div class=&quot;li&quot;&gt; Zou, H. (2006) The adaptive lasso and its oracle properties. J. Am. Statist. Ass., 101, 1418–1429.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level2&quot;&gt;&lt;div class=&quot;li&quot;&gt; Fan, J. and Lv, J. (2008) Sure independence screening for ultrahigh dimensional feature space (with discussion). J. R. Statist. Soc. B,70, 849–911.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
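&lt;p&gt;
A small illustration of the lasso's shrinkage-and-selection behavior from Tibshirani (1996), using scikit-learn (the data are simulated and the penalty values are arbitrary):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0                         # only five true signals
y = X @ beta + rng.standard_normal(n)

for alpha in (0.01, 0.1, 0.5):
    fit = Lasso(alpha=alpha).fit(X, y)
    # larger penalties drive more coefficients exactly to zero
    print(alpha, np.sum(fit.coef_ != 0))
&lt;/pre&gt;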
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>October 06, 2015</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/151006"/>
        <published>2015-09-11T14:57:22-04:00</published>
        <updated>2015-09-11T14:57:22-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/151006</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, October 06, 2015&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Wenbo Wang (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Selection consistency of lasso and Stability Selection&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
This is the second of a series of three talks focusing on variable selection for high-dimensional data. I will introduce the following three papers on variable selection, with particular attention to selection consistency; a short stability-selection sketch follows the reference list.
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Meinshausen, N. and Bühlmann, P. (2006) High dimensional graphs and variable selection with the lasso. Ann. Statist., 34, 1436–1462.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level2&quot;&gt;&lt;div class=&quot;li&quot;&gt; Zhao, P. and Yu, B. (2006) On model selection consistency of lasso. J. Mach. Learn. Res., 7, 2541–2563. &lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level2&quot;&gt;&lt;div class=&quot;li&quot;&gt; Meinshausen, N., &amp;amp; Bühlmann, P. (2010). Stability selection. J. R. Statist. Soc. B, 72(4), 417-473.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
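&lt;p&gt;
A minimal sketch of stability selection in the spirit of Meinshausen &amp;amp; Bühlmann (2010): run the lasso on many random half-samples and keep the variables whose selection frequency exceeds a threshold (the penalty, number of subsamples and threshold below are illustrative):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.1, n_sub=100, threshold=0.6, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    freq = np.zeros(p)
    for _ in range(n_sub):
        idx = rng.choice(n, size=n // 2, replace=False)  # half-sample
        fit = Lasso(alpha=alpha).fit(X[idx], y[idx])
        freq += (fit.coef_ != 0)
    freq /= n_sub
    return np.where(freq &gt;= threshold)[0]  # indices of stable variables
&lt;/pre&gt;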
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>October 13, 2015</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/151013"/>
        <published>2015-09-11T15:00:55-04:00</published>
        <updated>2015-09-11T15:00:55-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/151013</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, October 13, 2015&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Wenming Deng (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: The Bolasso, the Dantzig selector and more on Stability Selection&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
This is the third and final talk of a series of three talks focusing on variable selection for high-dimensional data. I will introduce the two related works below, and then discuss Meinshausen &amp;amp; Bühlmann (2010) further, following last week's talk.
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Bach, F. (2008) Bolasso: model consistent Lasso estimation through the bootstrap. In Proc. 25th Int. Conf. Machine Learning, pp.33–40. New York: Association for Computing Machinery.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level2&quot;&gt;&lt;div class=&quot;li&quot;&gt; Candes, E. and Tao, T. (2007) The Dantzig selector: statistical estimation when p is much larger than n. Ann. Statist., 35, 2312–2351.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>October 20, 2015</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/151020"/>
        <published>2015-10-07T22:21:49-04:00</published>
        <updated>2015-10-07T22:21:49-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/151020</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, October 20, 2015&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Xin Qi (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: On False Discovery Rate (I)&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
This is the first of three talks on the false discovery rate in multiple testing. We will discuss the seminal papers of Benjamini and Hochberg (1995) and Storey (2002); a sketch of the Benjamini-Hochberg step-up procedure follows the reference list.
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Benjamini, Y. and Hochberg, Y. (1995). Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society, Series B, 57, 289-300.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Storey, J.D. (2002). A Direct Approach to False Discovery Rates. Journal of the Royal Statistical Society, Series B, 64, 479-498.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
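&lt;p&gt;
A direct implementation of the Benjamini-Hochberg (1995) step-up procedure: reject the hypotheses with the k smallest p-values, where k is the largest index with p_(k) &lt;= qk/m:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m   # the BH line qk/m
    below = p[order] &lt;= thresh
    if not below.any():
        return np.array([], dtype=int)     # no rejections
    k = np.nonzero(below)[0].max()         # largest index on or under the line
    return order[:k + 1]                   # indices of rejected hypotheses
&lt;/pre&gt;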
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>October 27, 2015</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/151027"/>
        <published>2015-10-07T22:21:32-04:00</published>
        <updated>2015-10-07T22:21:32-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/151027</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, October 27, 2015&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Lin Yao (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: On False Discovery Rate (II)&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
This is the second of three talks on the false discovery rate in multiple testing. We will discuss the following two papers; a sketch of Storey's FDR estimate follows the reference list.
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Storey, J.D., Taylor, J.E. and Siegmund, D. (2004). Strong Control, Conservative Point Estimation and Simultaneous Conservative Consistency of False Discovery Rates: A Unified Approach. Journal of the Royal Statistical Society, Series B, 66, 187-205.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Fan, J., Han, X., &amp;amp; Gu, W. (2012). Estimating false discovery proportion under arbitrary covariance dependence. Journal of the American Statistical Association, 107(499), 1019-1035.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
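&lt;p&gt;
A sketch of the direct approach of Storey (2002): estimate the proportion pi0 of true nulls from the (roughly uniform) large p-values, then estimate the FDR at a fixed rejection threshold t (lambda = 0.5 is a conventional tuning choice):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
import numpy as np

def storey_fdr(pvals, t, lam=0.5):
    p = np.asarray(pvals)
    m = len(p)
    pi0 = np.mean(p &gt; lam) / (1.0 - lam)  # null p-values are uniform on (0,1)
    r = max(np.sum(p &lt;= t), 1)            # number of rejections at threshold t
    return pi0 * m * t / r                # estimated FDR
&lt;/pre&gt;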
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>November 3, 2015</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/151103"/>
        <published>2015-11-01T22:57:44-04:00</published>
        <updated>2015-11-01T22:57:44-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/151103</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, November 3, 2015&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Xiaojie Du (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Inference About the Slope in Linear Regression with Missing Responses: An Empirical Likelihood Approach.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
This article considers linear regression models with responses that are allowed to be missing at random, which covers the model with fully observed data as a special case. We assume that the covariates and the errors are independent, without specifying their distributions. The main result of this article is two efficient maximum empirical likelihood estimators for the regression parameter, which are easy to obtain numerically. This fills a gap in the literature, which so far lacks a parameter estimator that is both simple and efficient: the usual efficient approaches require an estimator of the influence function, which can be quite involved. We also present an asymptotic confidence interval for the slope.
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>November 10, 2015</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/151110"/>
        <published>2015-09-11T15:02:56-04:00</published>
        <updated>2015-09-11T15:02:56-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/151110</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, November 10, 2015&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Junyi Dong (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: The Family of Distributions With Proportional Hazards&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
The proportional hazards (PH) model specifies distributions for regression data. Given a covariate value, the PH model specifies a family of distributions for a random variable, called the PH family. I will talk about methods of generating pseudo-random numbers in various special cases, including when the random variable may take the value infinity with positive probability. I will also talk about the identifiability condition for the parameters in the PH family and the semi-parametric estimation of the parameters under that condition.
&lt;/p&gt;
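&lt;p&gt;
A minimal sketch of one standard way to generate pseudo-random numbers from a PH family by inverse-transform sampling: under the PH model $S(t|z)=S_0(t)^{\exp(\beta z)}$, and with an exponential baseline $S_0(t)=e^{-t}$ (an assumption made here to get a closed form), the inversion is explicit:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
import numpy as np

def rph_exponential(n, beta, z, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    theta = np.exp(beta * z)        # relative risk exp(beta*z)
    return -np.log(u) / theta       # solves S0(t)**theta = u for t
&lt;/pre&gt;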
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>December 1, 2015</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/151201"/>
        <published>2015-11-02T17:07:04-04:00</published>
        <updated>2015-11-02T17:07:04-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/151201</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, December 1, 2015&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Chen Liang (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: On Information Criterion&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
I will present two papers, both on related concepts of information criteria; a small EBIC computation follows the reference list.
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Chen, J., &amp;amp; Chen, Z. (2008). Extended Bayesian information criteria for model selection with large model spaces. Biometrika, 95(3), 759-771.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Zhang, Y., Li, R., &amp;amp; Tsai, C. L. (2010). Regularization parameter selections via generalized information criterion. Journal of the American Statistical Association, 105(489), 312-323.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
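&lt;p&gt;
For reference, the extended BIC of Chen &amp;amp; Chen (2008) for a model with df nonzero coefficients among p candidates, fitted to n observations, in a commonly used form (the last term approximates 2*gamma times the log of the number of models of size df):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
import numpy as np

def ebic(loglik, df, n, p, gamma=0.5):
    # EBIC = -2*loglik + df*log(n) + 2*gamma*df*log(p)
    return -2.0 * loglik + df * np.log(n) + 2.0 * gamma * df * np.log(p)
&lt;/pre&gt;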
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>December 8, 2015</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/151208"/>
        <published>2015-11-23T16:55:51-04:00</published>
        <updated>2015-11-23T16:55:51-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/151208</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, December 8, 2015&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-G02&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Baiyang Qi (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Building a model for scoring 20 or more runs in a baseball game&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
I will present a paper on sports statistics. The abstract of that paper is as follows.
&lt;/p&gt;

&lt;p&gt;
How often can we expect a Major League Baseball team to score at least 20 runs in a single game? Considered a rare event in baseball, the outcome of scoring at least 20 runs in a game has occurred 224 times during regular season games since 1901 in the American and National Leagues. Each outcome is modeled as a Poisson process; the time of occurrence of one of these events does not affect the next future occurrence. Using various distributions, probabilities of events are generated, goodness-of-fit tests are conducted, and predictions of future events are offered. The statistical package R is employed for analysis.
&lt;/p&gt;
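&lt;p&gt;
The Poisson reasoning can be reproduced in a few lines of Python (the 1901-2015 season span is an assumption based on the abstract; the paper itself uses R):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
from scipy.stats import poisson

events, seasons = 224, 2015 - 1901 + 1        # counts from the abstract
rate = events / seasons                       # events per season
p_at_least_one = 1.0 - poisson.pmf(0, rate)   # P(one or more 20-run games)
print(round(rate, 2), round(p_at_least_one, 3))
&lt;/pre&gt;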
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>February 23, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/160223"/>
        <published>2016-02-22T10:26:31-04:00</published>
        <updated>2016-02-22T10:26:31-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/160223</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, February 23, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Xin Qi (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: RE: Joint estimation of multiple graphical models&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
Gaussian graphical models explore dependence relationships between random variables, through the estimation of the corresponding inverse covariance matrices. In this paper we develop an estimator for such models appropriate for data from several graphical models that share the same variables and some of the dependence structure. In this setting, estimating a single graphical model would mask the underlying heterogeneity, while estimating separate models for each category does not take advantage of the common structure. We propose a method that jointly estimates the graphical models corresponding to the different categories present in the data, aiming to preserve the common structure, while allowing for differences between the categories. This is achieved through a hierarchical penalty that targets the removal of common zeros in the inverse covariance matrices across categories. We establish the asymptotic consistency and sparsity of the proposed estimator in the high-dimensional case, and illustrate its performance on a number of simulated networks. An application to learning semantic connections between terms from webpages collected from computer science departments is included.
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>March 1, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/160301"/>
        <published>2016-03-07T09:54:26-04:00</published>
        <updated>2016-03-07T09:54:26-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/160301</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, March 1, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Wenbo Wang (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: RE: Multicategory Angle-based Large-margin Classification and Reinforced SVM&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
Large-margin classifiers are popular methods for classification. Among existing simultaneous multicategory large-margin classifiers, a common approach is to learn k different functions for a k-class problem with a sum-to-zero constraint. Such a formulation can be inefficient. We propose a new multicategory angle-based large-margin classification framework. The proposed angle-based classifiers consider a simplex-based prediction rule without the sum-to-zero constraint, and enjoy more efficient computation. Many binary large-margin classifiers can be naturally generalized for multicategory problems through the angle-based framework. Theoretical and numerical studies demonstrate the usefulness of the angle-based methods.
&lt;/p&gt;

&lt;p&gt;
The Support Vector Machine (SVM) is a very popular classification tool with many successful applications. It was originally designed for binary problems with desirable theoretical properties. Although there exist various Multicategory SVM (MSVM) extensions in the literature, some challenges remain. In particular, most existing MSVMs make use of k classification functions for a k-class problem, and the corresponding optimization problems are typically handled by existing quadratic programming solvers. In this paper, we propose a new group of MSVMs, namely the Reinforced Angle-based MSVMs (RAMSVMs), using an angle-based prediction rule with k − 1 functions directly. We prove that RAMSVMs can enjoy Fisher consistency. Moreover, we show that the RAMSVM can be implemented using the very efficient coordinate descent algorithm on its dual problem. Numerical experiments demonstrate that our method is highly competitive in terms of computational speed, as well as classification prediction performance. Supplemental materials for the article are available online.
&lt;/p&gt;
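&lt;p&gt;
A sketch of the simplex construction underlying the angle-based framework: k unit-length class vertices in R^(k-1) with equal pairwise angles, so a (k-1)-dimensional decision function f(x) predicts the class whose vertex forms the smallest angle with it (the construction follows standard simplex coding; the learning of f itself is omitted):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
import numpy as np

def simplex_vertices(k):
    # Row j is the vertex for class j; rows are unit vectors with
    # identical pairwise inner products -1/(k-1).
    W = np.zeros((k, k - 1))
    W[0] = np.ones(k - 1) / np.sqrt(k - 1)
    for j in range(1, k):
        W[j] = -(1 + np.sqrt(k)) / (k - 1) ** 1.5 * np.ones(k - 1)
        W[j, j - 1] += np.sqrt(k / (k - 1))
    return W

def predict(f_x, W):
    return int(np.argmax(W @ f_x))   # angle-based prediction rule
&lt;/pre&gt;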
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>March 8, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/160308"/>
        <published>2016-03-07T09:56:31-04:00</published>
        <updated>2016-03-07T09:56:31-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/160308</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, March 8, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Lin Yao (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: RE: Sparse Regression Incorporating Graphical Structure among Predictors&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
With the abundance of high-dimensional data in various disciplines, sparse regularized techniques are very popular these days. In this paper, we make use of structure information among the predictors to improve sparse regression models. Typically, such structure information can be modeled by the connectivity of an undirected graph with all predictors as nodes. Most existing methods use this undirected graph edge-by-edge to encourage the regression coefficients of connected predictors to be similar. However, such methods do not directly utilize the neighborhood information of the graph. Furthermore, the more edges the predictor graph has, the more complicated the corresponding regularization term becomes. In this paper, we incorporate the graph information node-by-node, instead of edge-by-edge as in most existing methods. Our proposed method is very general: it includes adaptive Lasso, group Lasso, and ridge regression as special cases. Both theoretical and numerical studies demonstrate the effectiveness of the proposed method for simultaneous estimation, prediction and model selection.
&lt;/p&gt;

&lt;p&gt;
&lt;a href=&quot;http://www.tandfonline.com/doi/abs/10.1080/01621459.2015.1034319&quot; class=&quot;urlextern&quot; title=&quot;http://www.tandfonline.com/doi/abs/10.1080/01621459.2015.1034319&quot;&gt;http://www.tandfonline.com/doi/abs/10.1080/01621459.2015.1034319&lt;/a&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>March 15, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/160315"/>
        <published>2016-03-07T10:07:06-04:00</published>
        <updated>2016-03-07T10:07:06-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/160315</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, March 15, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Junyi Dong (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Diagnostic Plotting Methods for Proportional Hazards Models  With Time-dependent Covariates or Time-varying Regression Coefficients&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
This is the first of a series of two talks; the second will be on Thursday.
&lt;/p&gt;

&lt;p&gt;
Given a sample of regression data from $(Y,Z)$, we propose several new diagnostic plotting methods for Cox models with general external time-dependent covariates $Z$ (which can be continuous) or with time-varying regression coefficients. The main approach compares the non-parametric MLE of the survival function of $Y$ against its expected (correct) expression under the given Cox model, which may be mis-specified. This differs from existing methods in the literature, such as log-log plots and residual plots.
Simulation studies and the analysis of our breast cancer data suggest that the combination of the proposed diagnostic plotting methods performs quite satisfactorily. These new plotting methods naturally yield various tests for checking the validity of Cox models. The main advantage of the new tests over the residual tests arises when the data do not fit any PH model: the new tests remain valid, whereas the residual tests do not.
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>March 22, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/160322"/>
        <published>2016-02-28T21:26:30-04:00</published>
        <updated>2016-02-28T21:26:30-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/160322</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, March 22, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Grace Wang (Syracuse University)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: ConceFT: Concentration of Frequency and Time via a multitapered synchrosqueezed transform&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
Time-frequency representations provide a powerful tool for the analysis of time series signals. Techniques that decompose time-dependent signals into multiple oscillatory components with time-varying amplitudes and instantaneous frequencies are very appealing, and have been shown to be useful in a wide range of applications including geophysics, biology, medicine, finance and social dynamics. In this talk, I’ll give an introduction to time-frequency representations and review existing methods for the decomposition described above. Then I’ll present a new method that combines multitapering with the synchrosqueezed transform. Numerical experiments as well as a theoretical analysis will be presented to assess its effectiveness.
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>April 12, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/160412"/>
        <published>2016-04-08T12:56:34-04:00</published>
        <updated>2016-04-08T12:56:34-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/160412</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, April 12, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Ruiqi Liu (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: TBA&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
Suppose we observe iid copies $(X_i, Y_i)_{i=1}^n$ of a random vector $(X, Y)$. According to historical information, the marginal distributions of $X$ and $Y$ are known, but the joint distribution is unknown. A problem of interest is to estimate $E[h(X,Y)]$ for some measurable function $h$. This has practical value. For example, in the insurance industry, some life insurance policies cover both husband and wife. Let $X$ and $Y$ be the remaining lifetimes of the husband and wife after the policy is signed; $X$ and $Y$ are usually dependent. The company can obtain the marginal distributions of $X$ and $Y$ from historical records. Often, the values of interest are $\min(X, Y)$, $\max(X, Y)$ or their distributions. This paper provides an empirical likelihood estimator to solve this problem. Several nice properties of our estimator are supported by theoretical analysis and simulation results.
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>April 19, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/160419"/>
        <published>2016-02-22T12:58:14-04:00</published>
        <updated>2016-02-22T12:58:14-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/160419</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, April 19, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Rachael Kline (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: RE: Joint Estimation of Multiple Precision Matrices with Common Structures&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
Estimation of inverse covariance matrices, known as precision matrices, is important in various areas of statistical analysis. In this article, we consider estimation of multiple precision matrices sharing some common structures. In this setting, estimating each precision matrix separately can be suboptimal as it ignores potential common structures. This article proposes a new approach to parameterize each precision matrix as a sum of common and unique components and estimate multiple precision matrices in a constrained L1 minimization framework. We establish both estimation and selection consistency of the proposed estimator in the high dimensional setting. The proposed estimator achieves a faster convergence rate for the common structure in certain cases. Our numerical examples demonstrate that our new estimator can perform better than several existing methods in terms of the entropy loss and Frobenius loss. An application to a glioblastoma cancer data set reveals some interesting gene networks across multiple cancer subtypes.
&lt;/p&gt;
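
&lt;p&gt;
As a point of reference only, the short simulation below illustrates why ignoring shared structure can be suboptimal: when two groups share a common precision matrix, a pooled graphical lasso estimate typically beats separate per-group estimates. sklearn's GraphicalLasso stands in for the constrained L1 framework; the article's decomposition into common and unique components is not implemented here, and all simulation settings are our assumptions.
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Baseline illustration (not the article's estimator): when two populations
# share a common precision structure, pooling can beat separate estimation.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
p, n = 10, 80

# Common tridiagonal precision matrix shared by both groups.
Omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
Sigma = np.linalg.inv(Omega)

X1 = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
X2 = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

sep = GraphicalLasso(alpha=0.1).fit(X1).precision_
pooled = GraphicalLasso(alpha=0.1).fit(np.vstack([X1, X2])).precision_

frob = lambda A: np.linalg.norm(A - Omega, 'fro')
print('separate (group 1) Frobenius loss:', frob(sep))
print('pooled             Frobenius loss:', frob(pooled))
&lt;/code&gt;&lt;/pre&gt;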
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>April 26, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/160426"/>
        <published>2016-04-20T22:11:00-04:00</published>
        <updated>2016-04-20T22:11:00-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/160426</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, April 26, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Wolfgang Wefelmeyer (Universität zu Köln)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Density estimators in regression models with errors in covariates&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
In regression models $Y=r(X)+\varepsilon$
with $X$ and $\varepsilon$ independent, the density
of the response $Y$ can be estimated by a convolution of (kernel)
estimators for the densities of $r(X)$ and $\varepsilon$.
The rate of this convolution estimator depends on the smoothness
of the densities of $X$ and $\varepsilon$ and on the smoothness
and local flatness of the regression function $r$.
When we observe the covariates $X$ with measurement errors,
$Z=X+\eta$, we need deconvolution estimators for the densities of
$X$ and $\varepsilon$ and for $r$.
This is joint work with Anton Schick and Ursula U. Müller.
&lt;/p&gt;
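
&lt;p&gt;
The following is a minimal sketch of the convolution estimator in the error-free covariate case: estimate $r$ with a kernel smoother, form kernel density estimates of the fitted values $\hat r(X_i)$ and of the residuals, and convolve the two estimates numerically. The Nadaraya-Watson smoother and all bandwidths are illustrative choices, not those studied in the talk.
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch of the convolution density estimator for Y = r(X) + eps:
# KDE of fitted values r_hat(X_i) convolved with KDE of residuals.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
n = 400
X = rng.uniform(-2, 2, size=n)
eps = rng.normal(scale=0.5, size=n)
r = lambda x: np.sin(x)
Y = r(X) + eps

# Nadaraya-Watson estimate of r at the sample points.
h = 0.3
K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / h) ** 2)
r_hat = (K @ Y) / K.sum(axis=1)
resid = Y - r_hat

# Convolve the two kernel density estimates on a common grid.
grid = np.linspace(-4, 4, 801)
dx = grid[1] - grid[0]
f_rX = gaussian_kde(r_hat)(grid)
f_eps = gaussian_kde(resid)(grid)
f_Y = np.convolve(f_rX, f_eps, mode='same') * dx  # density of Y = r(X)+eps

# Compare with a direct KDE of Y at a few points.
for y0 in (-1.0, 0.0, 1.0):
    i = np.argmin(np.abs(grid - y0))
    print(y0, round(f_Y[i], 3), round(gaussian_kde(Y)(np.array([y0]))[0], 3))
&lt;/code&gt;&lt;/pre&gt;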
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>May 10, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/160510"/>
        <published>2016-05-09T15:06:51-04:00</published>
        <updated>2016-05-09T15:06:51-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/160510</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, May 10, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-2:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Qinggang Diao (Mathematical Sciences)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Cox proportional hazards model with time-dependent covariates&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
This PhD dissertation is divided into four chapters, in which right-censored (RC) data and interval-censored (IC) data under several different types of time-dependent covariate assumptions are discussed.
&lt;/p&gt;

&lt;p&gt;
In Chapter 0, we will introduce some basic concepts and notations about survival analysis.
&lt;/p&gt;

&lt;p&gt;
Chapter 1 reproduces the paper of Yu et al. (2015). In this chapter, piecewise Cox models with right-censored data will be discussed. Piecewise Cox models are regression models that follow different Cox models when restricted to different time intervals. We study a general class of piecewise Cox models that involve a single cut point, so that there are two separate Cox models corresponding to the two time intervals created. We discuss the computation of the semi-parametric maximum likelihood estimates (SMLE) of the parameters with right-censored data, and a simplified algorithm for the maximum partial likelihood estimates (MPLE). Simulation studies suggest that the MPLE compares favorably with its SMLE counterpart, even though the SMLE is more efficient. To assess the appropriateness of the model assumption, we propose a simple diagnostic plotting method, which also enables us to determine an appropriate cut point. We show that the results for the case of a single cut point can be extended to models involving more than one cut point. Finally, we apply the methodology developed for piecewise Cox models to the survival analysis of a long-term breast cancer follow-up study on the prognostic significance of bone marrow micrometastasis. Our diagnostic plots suggest that it is appropriate to apply the piecewise Cox model to our data.
&lt;/p&gt;
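
&lt;p&gt;
As a hedged illustration of the single-cut-point model, the sketch below fits a piecewise Cox model by splitting each subject's follow-up at an assumed cut point $c$ and giving the covariate separate early and late copies. The lifelines package is used as a stand-in; the chapter's SMLE and MPLE algorithms are not reproduced.
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch: single-cut-point piecewise Cox model via episode splitting.
# lifelines' CoxTimeVaryingFitter and the cut point c=2.0 are our choices.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(3)
n, c = 300, 2.0
z = rng.binomial(1, 0.5, size=n).astype(float)
# Hazard effect of z changes at t=c: beta=0.5 before, beta=1.5 after.
u = rng.exponential(size=n)
T = np.where(u / np.exp(0.5 * z) &lt;= c,
             u / np.exp(0.5 * z),
             c + (u - c * np.exp(0.5 * z)) / np.exp(1.5 * z))
E = (T &lt;= 6.0).astype(int)
T = np.minimum(T, 6.0)  # administrative censoring at t=6

rows = []
for i in range(n):
    if T[i] &lt;= c:
        rows.append((i, 0.0, T[i], E[i], z[i], 0.0))
    else:  # split follow-up at the cut point
        rows.append((i, 0.0, c, 0, z[i], 0.0))
        rows.append((i, c, T[i], E[i], 0.0, z[i]))
df = pd.DataFrame(rows, columns=['id', 'start', 'stop', 'event',
                                 'z_early', 'z_late'])

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col='id', event_col='event',
        start_col='start', stop_col='stop')
ctv.print_summary()   # separate estimates for z before and after t=c
&lt;/code&gt;&lt;/pre&gt;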

&lt;p&gt;
In Chapter 2, we consider the time-dependent covariates proportional hazards (TDCPH) model with interval-censored (IC) relapse times under the distribution-free set-up. The partial likelihood approach is not applicable to IC data, so we use the full likelihood approach. It turns out that under the TDCPH model with IC data, the semi-parametric MLE (SMLE) of the covariate effect under the standard generalized likelihood is neither unique nor consistent. In fact, the parameter under the TDCPH model with IC data is not identifiable unless some stronger assumptions are imposed. We propose a modification of the likelihood function so that its SMLE is unique, and we show that the parameter is identifiable under certain regularity conditions. Under these regularity assumptions, our simulation studies suggest that the SMLE is consistent, and we give a rigorous proof of the consistency. We apply the method to our cancer relapse time data and conclude that bone marrow micrometastasis is not a significant prognostic factor.
&lt;/p&gt;

&lt;p&gt;
In Chapter 3, we consider the semi-parametric estimation problem under the proportional hazards (PH) model with continuous time-dependent covariates and interval-censored data. We show that, unlike the PH model with time-independent covariates, if the observable random vector takes on finitely many values, then the parameters in the model are not identifiable and there exist no consistent estimators of the parameters. We establish an identifiability condition that resolves this issue; it provides a guideline for carrying out simulation studies and for the proof of consistency of certain semi-parametric estimators. Moreover, the naive extension of the generalized likelihood function does not lead to a consistent estimator. We propose two proper modifications of the generalized likelihood function, and both yield consistent estimators. We also carry out simulation studies for these estimators. The covariate $z(t) = u\,1(t &gt; c)(t - c)$ will also be discussed.
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>September 27, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/160927"/>
        <published>2016-09-13T16:52:55-04:00</published>
        <updated>2016-09-13T16:52:55-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/160927</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, September 27, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Wei Qian (Rochester Institute of Technology)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Sufficient Dimension Reduction in High Dimension&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
Sufficient dimension reduction (SDR) is known to be a powerful tool for achieving data reduction and data visualization in regression and classification problems. In this work, we study high-dimensional SDR problems and propose a unified solution with regularization. Under the setting $p \gg n$, consistency results are established for several important SDR methods. Special sparse structures of the large predictor and error covariance matrices are considered for potentially improved performance. In addition, the proposed approach is equipped with a new algorithm that efficiently solves the regularized objective functions and a data-driven procedure that determines the structure dimension, neither of which requires inverting a large covariance matrix. Simulations and real data analysis demonstrate promising applications of our proposal in high-dimensional settings.
&lt;/p&gt;
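
&lt;p&gt;
For background, the sketch below implements classical sliced inverse regression (SIR), the unregularized ancestor of the methods in the talk; note that it inverts the sample covariance, which is precisely the step that fails when $p \gg n$ and that a regularized proposal must avoid. All simulation settings are illustrative.
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Background sketch: classical sliced inverse regression (SIR).
import numpy as np

rng = np.random.default_rng(4)
n, p, H = 1000, 6, 10          # H slices
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[0] = 1.0   # true single-index direction
y = (X @ beta) ** 3 + 0.5 * rng.normal(size=n)

Xc = X - X.mean(axis=0)
Sigma = Xc.T @ Xc / n

# Slice y, average X within each slice, form Cov(E[X|y]).
order = np.argsort(y)
slices = np.array_split(order, H)
M = np.zeros((p, p))
for idx in slices:
    m = Xc[idx].mean(axis=0)
    M += (len(idx) / n) * np.outer(m, m)

# Leading eigenvectors of Sigma^{-1} M span the central subspace estimate.
vals, vecs = np.linalg.eig(np.linalg.solve(Sigma, M))
d = np.argmax(vals.real)
b_hat = vecs[:, d].real
print('estimated direction:', np.round(b_hat / np.linalg.norm(b_hat), 2))
&lt;/code&gt;&lt;/pre&gt;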
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>October 18, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/161018"/>
        <published>2016-10-06T23:36:51-04:00</published>
        <updated>2016-10-06T23:36:51-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/161018</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, October 18, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Yang Feng (Columbia University)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Community detection with nodal information&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
Discovering community structure is one of the fundamental
issues in the study of networked data. Most existing community
detection approaches take merely edge information as inputs, and
deliver suboptimal results for networks with nodal covariates
available. Regarding those networks,  it is desirable to leverage
covariates information for the improvement of detection accuracy.
Towards this goal, we propose a flexible network model incorporating
nodal signals, and develop likelihood-based inference methods. We will
present a systematic study from both theoretical and practical
aspects. Our theoretical analysis demonstrates favorable asymptotic
properties of the proposed approach. We then derive practical
algorithms for the search of the theoretical estimators. Numerical
experiments show the effectiveness of our method in utilizing nodal
information across a variety of simulated and real networked datasets.
&lt;/p&gt;
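
&lt;p&gt;
The talk's likelihood-based method is not reproduced here, but the simple sketch below conveys the underlying idea in a covariate-assisted spectral form: add a scaled covariate similarity matrix to the adjacency matrix before spectral clustering. The weight alpha and the simulation settings are our assumptions.
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hedged sketch of the general idea (not the talk's likelihood method):
# covariate-assisted spectral clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n, k = 200, 2
labels = np.repeat(np.arange(k), n // k)

# Sparse two-block stochastic block model + informative nodal covariate.
P = np.where(labels[:, None] == labels[None, :], 0.10, 0.05)
A = (rng.random((n, n)) &lt; P).astype(float)
A = np.triu(A, 1); A = A + A.T
cov = labels + rng.normal(scale=1.5, size=n)        # noisy nodal signal

Kcov = np.exp(-0.5 * (cov[:, None] - cov[None, :]) ** 2)
alpha = 0.05
S = A + alpha * Kcov                                # combined similarity

# Spectral clustering on the combined matrix.
vals, vecs = np.linalg.eigh(S)
U = vecs[:, -k:]                                    # top-k eigenvectors
pred = KMeans(n_clusters=k, n_init=10).fit_predict(U)
acc = max(np.mean(pred == labels), np.mean(pred == 1 - labels))
print('clustering accuracy with covariates:', acc)
&lt;/code&gt;&lt;/pre&gt;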
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>October 25, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/161025"/>
        <published>2016-09-13T17:15:56-04:00</published>
        <updated>2016-09-13T17:15:56-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/161025</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, October 25, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Guan Yu (University at Buffalo)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Supervised Learning Incorporating Graphical Structure among Predictors&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
With the abundance of high-dimensional data in various disciplines, regularization techniques are very popular these days. Despite the success of these techniques, some challenges remain. One challenge is the development of efficient methods that incorporate structure information among the predictors. Typically, this structure information can be modeled by the connectivity of an undirected graph with all predictors as nodes. In this talk, I will introduce an efficient regularization technique that incorporates such graphical structure information. Specifically, according to the undirected graph, we use a latent group lasso penalty that works node by node, so that predictors connected in the graph are encouraged to be selected jointly. This new regularization technique can be used for many supervised learning problems. For sparse regression, our new method includes the adaptive lasso, the group lasso, and ridge regression as special cases. Theoretical studies show that it enjoys model selection consistency and attains tight finite-sample bounds for estimation and prediction. For the multi-task learning problem, our proposed graph-guided multi-task method includes the popular l2,1-norm regularized multi-task learning method as a special case. Numerical studies using simulated datasets and the Alzheimer&amp;#039;s Disease Neuroimaging Initiative (ADNI) dataset also demonstrate the effectiveness of the proposed methods.
&lt;/p&gt;
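
&lt;p&gt;
The sketch below illustrates the latent group lasso construction with node-by-node groups on an assumed chain graph: the coefficient vector is decomposed into per-group copies, and the duplicated design is fit by proximal gradient with group soft-thresholding. The graph, penalty level, and iteration count are illustrative assumptions, not the talk's setup.
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch of a latent group lasso with node-by-node groups on a predictor
# graph, solved by proximal gradient on a duplicated design.
import numpy as np

rng = np.random.default_rng(6)
n, p = 200, 8
# Chain graph over predictors: group of node j is {j} and its neighbors.
groups = [sorted({j} | ({j - 1} if j &gt; 0 else set())
                     | ({j + 1} if j &lt; p - 1 else set())) for j in range(p)]

X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 2.0, 0, 0, 0, 0, 0, 0])  # signal on nodes 0,1
y = X @ beta_true + 0.5 * rng.normal(size=n)

# Duplicated design: one copy of each column per group that contains it.
cols = [(g, j) for g, mem in enumerate(groups) for j in mem]
Xd = X[:, [j for _, j in cols]]
lam, L = 0.3, np.linalg.norm(Xd, 2) ** 2 / n
v = np.zeros(len(cols))
for _ in range(2000):  # proximal gradient iterations
    v -= (Xd.T @ (Xd @ v - y) / n) / L
    for g in range(p):  # group soft-thresholding, one group at a time
        idx = [i for i, (gg, _) in enumerate(cols) if gg == g]
        norm = np.linalg.norm(v[idx])
        v[idx] *= max(0.0, 1.0 - lam / (L * norm)) if norm &gt; 0 else 0.0

beta_hat = np.zeros(p)   # latent decomposition: beta = sum of group copies
for i, (_, j) in enumerate(cols):
    beta_hat[j] += v[i]
print(np.round(beta_hat, 2))  # support concentrates on connected nodes 0,1
&lt;/code&gt;&lt;/pre&gt;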
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>November 1st, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/161101"/>
        <published>2016-10-22T13:22:58-04:00</published>
        <updated>2016-10-22T13:22:58-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/161101</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, November 1, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Yu Chen (ECE at Binghamton University)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Enabling Smart Urban Surveillance at The Edge&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
The unprecedented urbanization and the staggering development of modern information technologies (IT) make the Smart City attractive and achievable. Beyond the scope of traditional city services and applications, Smart Cities provide urban planners and policy makers with proactive and timely information, enabling a dynamic and comprehensive understanding of the rhythm of our cities. Although Cloud Computing is considered the ideal platform for storing and processing vast volumes of urban data, the sustainability of Smart Cities requires computation and data analysis at the edge of the network, especially for mission-critical applications requiring real-time information fusion and on-site decision making. Fog Computing, an extension of Cloud Computing, enables heterogeneous mobile and smart computing devices at the edge to collaborate for instant decision making. In this talk, a smart urban surveillance platform will be introduced that leverages the underlying fog network to accomplish real-time multi-target tracking. The experimental results, compared against a Cloud Computing baseline, are very encouraging and validate the feasibility of smart urban surveillance with instant decision making using Fog Computing at the network edge.
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>November 8th, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/161108"/>
        <published>2016-11-06T21:17:45-04:00</published>
        <updated>2016-11-06T21:17:45-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/161108</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, November 8, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Chengbin Deng (Geography at Binghamton University)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Use of Big Geospatial Data for Better Mapping and Understanding Urban Environment&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
Understanding urban environments and their spatio-temporal changes is essential for regional and local planning and environmental management. With the emergence of large volumes of various earth observation data, it is important to take advantage of such datasets to improve land cover monitoring, especially detection of the urbanization process. To reach this goal, a new image processing approach that employs all available historical Landsat images is proposed and tested in Broome County, NY. Not only can this method support sub-pixel land cover mapping (as in traditional studies), but it can also derive the magnitude, timing, and duration of urbanization (which traditional studies cannot provide). The Broome County experiment shows that the performance of this processing technique is comparable with, and even slightly better than, the existing data product. The results derived from machine learning methods are also compared, and the land cover information extracted from the various satellite images is further utilized in support of urban sustainability studies.
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>November 15, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/161115"/>
        <published>2016-11-09T20:21:57-04:00</published>
        <updated>2016-11-09T20:21:57-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/161115</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, November 15, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Ziang Zhang (Electrical and Computer Engineering)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: Asymptotical Frequency Synchronization of Kuramoto Oscillators by Topology Evolution&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
From the beats generated by cardiac pacemakers to the movement of planetary systems, the idea of synchronization of self-organizing oscillators coupled over networks links mathematics with natural phenomena. Although this general idea was recognized decades ago, whether a particular system can synchronize, and how to design controllers that help it synchronize, remain unknown in many cases. Since thousands of independent generators across the nation, coupled by transmission lines, oscillate at the same 60 Hz frequency, numerous studies suggest that a power system is a perfect example of coupled oscillators. However, several major barriers need to be removed in order to bridge the gap between classical phase-coupled oscillators and a realistic power system. This talk will discuss some recent findings from Dr. Zhang’s group.
&lt;/p&gt;
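
&lt;p&gt;
For readers unfamiliar with the model, the following is a minimal simulation of the classical Kuramoto system underlying the talk, with the usual order parameter used to gauge synchronization. The coupling strength and integration settings are illustrative choices.
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch of the classical Kuramoto model behind the talk:
# d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
import numpy as np

rng = np.random.default_rng(7)
N, K, dt, steps = 100, 2.0, 0.01, 5000
omega = rng.normal(size=N)          # natural frequencies
theta = rng.uniform(0, 2 * np.pi, size=N)

for _ in range(steps):              # forward Euler integration
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + (K / N) * coupling)

# Order parameter r = |mean(exp(i*theta))|: r near 1 means the phases
# have locked, i.e. the oscillators are frequency synchronized.
r = np.abs(np.mean(np.exp(1j * theta)))
print('order parameter r =', round(r, 3))
&lt;/code&gt;&lt;/pre&gt;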
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
    <entry>
        <title>December 6, 2016</title>
        <link rel="alternate" type="text/html" href="https://www2.math.binghamton.edu/p/seminars/sml/161206"/>
        <published>2016-11-29T08:07:18-04:00</published>
        <updated>2016-11-29T08:07:18-04:00</updated>
        <id>https://www2.math.binghamton.edu/p/seminars/sml/161206</id>
        <summary>&lt;!-- EDIT1 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;span style='font-size:120%;'&gt;Statistical Machine Learning Seminar&lt;/span&gt;&lt;br/&gt;
Hosted by Department of Mathematical Sciences
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT2 PLUGIN_WRAP_END [0-] --&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Date: Tuesday, December 6, 2016&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Time: 12:00-1:00&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Room: WH-100E&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Speaker: Yang Ning (Cornell University)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Title: A General Framework for High-Dimensional Inference and Multiple Testing&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- EDIT3 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_center wrap_box plugin_wrap&quot; style=&quot;width:80%;&quot;&gt;&lt;!-- EDIT5 PLUGIN_WRAP_START [0-] --&gt;&lt;div class=&quot;wrap_centeralign plugin_wrap&quot;&gt;
&lt;p&gt;
&lt;strong&gt;&lt;em&gt;Abstract&lt;/em&gt;&lt;/strong&gt;
&lt;/p&gt;
&lt;/div&gt;&lt;!-- EDIT6 PLUGIN_WRAP_END [0-] --&gt;
&lt;p&gt;
We consider the problem of controlling the false scientific discovery rate in high-dimensional models. Towards this goal, we focus on uncertainty assessment for low-dimensional components in high-dimensional models. Specifically, we propose a novel decorrelated likelihood based framework to obtain valid p-values for generic penalized M-estimators. Unlike most existing inferential methods, which are tailored for individual models, our method provides a general framework for high-dimensional inference and is applicable to a wide variety of applications, including generalized linear models, graphical models, classification, and survival analysis. The proposed method provides optimal tests and confidence intervals. Extensions to general estimating equations are discussed. Finally, we show that the p-values can be combined to control the false discovery rate in multiple hypothesis testing.
&lt;/p&gt;
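
&lt;p&gt;
As a hedged illustration, the sketch below implements one common variant of a decorrelated score test for a single coefficient in a high-dimensional linear model. The talk's framework is far more general (generic penalized M-estimators); the lasso tuning and the noise estimate here are crude assumptions.
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hedged sketch of a decorrelated-score-type test for H0: beta_1 = 0 in a
# high-dimensional linear model (one common variant, not the talk's method).
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(8)
n, p = 200, 300
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[1:4] = 1.0       # beta_1 = 0, so H0 is true
y = X @ beta + rng.normal(size=n)

x1, X_rest = X[:, 0], X[:, 1:]

# Step 1: lasso fit of y on the nuisance predictors under H0.
fit_y = LassoCV(cv=5).fit(X_rest, y)
resid_y = y - fit_y.predict(X_rest)

# Step 2: nodewise lasso decorrelates x1 from the nuisance predictors.
fit_x = LassoCV(cv=5).fit(X_rest, x1)
x1_tilde = x1 - fit_x.predict(X_rest)

# Step 3: studentized decorrelated score; approximately N(0,1) under H0.
sigma_hat = np.sqrt(np.mean(resid_y ** 2))      # crude noise estimate
T = x1_tilde @ resid_y / (sigma_hat * np.linalg.norm(x1_tilde))
print('statistic:', round(T, 3), ' p-value:', round(2 * norm.sf(abs(T)), 3))
&lt;/code&gt;&lt;/pre&gt;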
&lt;/div&gt;&lt;!-- EDIT4 PLUGIN_WRAP_END [0-] --&gt;</summary>
    </entry>
</feed>
