Data Science Seminar
Hosted by Department of Mathematical Sciences
Inference for Gaussian graphical models concerns pairwise conditional dependencies among Gaussian random variables. In this setting, regularization of some form is often employed to treat an overparameterized model, posing challenges for inference. Common practice uses either a regularized model, as in inference after model selection, or a bias-reduction technique known as “de-biasing”. The first ignores the statistical uncertainty inherent in regularization, while the second reduces the bias induced by regularization at the expense of increased variance. In this paper, we propose a constrained maximum likelihood method for inference, with a focus on alleviating the impact of regularization on inference. In particular, for composite hypotheses, we leave the hypothesized parameters unregularized while regularizing the nuisance parameters through an $L_0$-constraint that controls their degree of sparseness. This approach is analogous to semiparametric likelihood inference in a high-dimensional setting. On this ground, we derive conditions under which the asymptotic distributions of the constrained likelihood ratio and the constrained maximum likelihood estimate are established, permitting the graph’s dimension to increase with the sample size. Interestingly, the corresponding distribution of the likelihood ratio is chi-square or normal, depending on whether the co-dimension of the test is fixed or grows with the sample size. This goes beyond the classical Wilks phenomenon. Numerically, we demonstrate that the proposed method performs well for various types of graphs. Finally, we apply the proposed method to infer linkages in brain network analysis based on MRI data, contrasting Alzheimer’s disease patients with healthy subjects.
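As a point of reference for the chi-square limit mentioned in the abstract, the following is a minimal sketch of the classical (unregularized, low-dimensional) likelihood-ratio test for the absence of a single edge in a two-variable Gaussian graphical model; it is not the paper's constrained method, only the textbook baseline that the Wilks phenomenon describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 2
# Data generated under the null: the two coordinates are independent,
# i.e., the off-diagonal entry of the precision matrix is zero (no edge).
X = rng.standard_normal((n, p))
S = X.T @ X / n  # sample covariance (mean known to be zero)

def gauss_loglik(Omega, S, n):
    """Gaussian log-likelihood (up to an additive constant) at precision Omega."""
    _, logdet = np.linalg.slogdet(Omega)
    return 0.5 * n * (logdet - np.trace(S @ Omega))

# Unconstrained MLE of the precision matrix is S^{-1};
# under the null (no edge), the MLE is the diagonal precision diag(1/S_ii).
lr = 2.0 * (gauss_loglik(np.linalg.inv(S), S, n)
            - gauss_loglik(np.diag(1.0 / np.diag(S)), S, n))
# Under the null, lr is asymptotically chi-square with 1 degree of freedom
# (the co-dimension of the test); algebraically, lr = -n * log(1 - r^2),
# where r is the sample correlation of the two coordinates.
```

The abstract's point is that this fixed-co-dimension chi-square limit no longer tells the whole story when the hypothesis involves a number of parameters growing with the sample size, in which case a normal limit arises instead.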