Simultaneous penalized M-estimation of covariance matrices using geodesically convex optimization
arXiv: Methodology (2016)
Abstract
A common assumption when sampling $p$-dimensional observations from $K$ distinct groups is the equality of the covariance matrices. In this paper, we propose two penalized $M$-estimation approaches for the estimation of the covariance or scatter matrices under the broader assumption that they may simply be close to each other, and hence roughly deviate from some positive definite center. The first approach begins by generating a pooled $M$-estimator of scatter based on all the data, followed by a penalized $M$-estimator of scatter for each group, with the penalty term chosen so that the individual scatter matrices are shrunk towards the pooled scatter matrix. In the second approach, we minimize the sum of the individual group $M$-estimation cost functions together with an additive joint penalty term which enforces some similarity between the individual scatter estimators, i.e., shrinkage towards a mutual center. In both approaches, we utilize the concept of geodesic convexity to prove the existence and uniqueness of the penalized solution under general conditions. We consider three specific penalty functions based on the Euclidean, the Riemannian, and the Kullback-Leibler distances. In the second approach, the distance-based penalties are shown to lead to estimators of the mutual center that are related to the arithmetic, the Riemannian, and the harmonic means of positive definite matrices, respectively. A penalty based on an ellipticity measure is also considered, which is particularly useful for shape matrix estimators. Fixed-point equations are derived for each penalty function, and the benefits of the estimators are illustrated in a regularized discriminant analysis problem.
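To give a concrete feel for the first approach, the following is a minimal sketch of a shrinkage-type fixed-point iteration for a Tyler-style $M$-estimator of scatter, shrunk towards a fixed target matrix (e.g. a pooled scatter estimate). The function name `shrunk_tyler`, the shrinkage weight `alpha`, and the trace normalization are illustrative choices; the exact fixed-point equations derived in the paper depend on the chosen cost and penalty functions and may differ.

```python
import numpy as np

def shrunk_tyler(X, target, alpha=0.3, n_iter=200, tol=1e-9):
    """Sketch of a shrinkage fixed-point iteration for a Tyler-type
    M-estimator of scatter, shrunk towards `target` (e.g. a pooled
    scatter matrix). Illustrative only; not the paper's exact update.

    X      : (n, p) array of centered observations.
    target : (p, p) positive definite shrinkage target.
    alpha  : shrinkage weight in [0, 1); alpha=0 is plain Tyler.
    """
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        # Quadratic-form weights x_i' Sigma^{-1} x_i for each row x_i.
        w = np.einsum('ij,jk,ik->i', X, inv, X)
        # Weighted sample scatter: (p/n) * sum_i x_i x_i' / w_i.
        S = (p / n) * (X.T / w) @ X
        new = (1 - alpha) * S + alpha * target
        new *= p / np.trace(new)  # fix the scale: trace(Sigma) = p
        if np.linalg.norm(new - sigma, 'fro') < tol:
            sigma = new
            break
        sigma = new
    return sigma
```

Usage is straightforward: with `X` the centered observations of one group and `target` the pooled estimate, `shrunk_tyler(X, target, alpha)` interpolates between the group's own robust scatter (`alpha` near 0) and the pooled matrix (`alpha` near 1).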
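The connection between the three distance-based penalties and matrix means can be illustrated directly: the Euclidean penalty is linked to the arithmetic mean and the Kullback-Leibler penalty to the harmonic mean, both of which have closed forms, while the Riemannian mean minimizes the sum of squared affine-invariant Riemannian distances and in general requires an iterative solver (omitted here). The snippet below, a sketch using plain NumPy, computes that Riemannian distance and the two closed-form means; all function names are illustrative.

```python
import numpy as np

def spd_logm(A):
    """Matrix logarithm of a symmetric positive definite matrix,
    computed via its eigendecomposition A = V diag(vals) V'."""
    vals, vecs = np.linalg.eigh(A)
    return (vecs * np.log(vals)) @ vecs.T

def riemannian_dist(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    || log(A^{-1/2} B A^{-1/2}) ||_F."""
    vals, vecs = np.linalg.eigh(A)
    A_inv_sqrt = (vecs * vals**-0.5) @ vecs.T
    return np.linalg.norm(spd_logm(A_inv_sqrt @ B @ A_inv_sqrt), 'fro')

def arithmetic_mean(mats):
    """Arithmetic mean of a list of matrices (Euclidean penalty)."""
    return sum(mats) / len(mats)

def harmonic_mean(mats):
    """Harmonic mean: inverse of the average inverse (KL-type penalty)."""
    return np.linalg.inv(sum(np.linalg.inv(M) for M in mats) / len(mats))
```

For diagonal inputs the distance reduces to the Euclidean distance between the logarithms of the diagonals, which makes the function easy to sanity-check by hand.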