Acceleration of the EM algorithm

Wiley Interdisciplinary Reviews: Computational Statistics (2023)

Cited 5 | Viewed 8
Abstract
The expectation-maximization (EM) algorithm is a well-known iterative algorithm for computing maximum likelihood estimates from incomplete data and is used in many statistical models with latent variables and missing data. The algorithm monotonically increases the likelihood function and keeps its iterates within the parameter constraints as it converges. Its popularity can be attributed to its stable convergence, simple implementation, and flexibility in modeling data incompleteness. Despite these computational advantages, the algorithm converges only linearly and can be very slow when a statistical model has many parameters or a high proportion of missing data. Various algorithms have been proposed to accelerate its convergence. We introduce the acceleration of the EM algorithm using root-finding and vector extrapolation algorithms. The root-finding algorithms include Aitken's method and the Newton-Raphson, quasi-Newton, and conjugate gradient algorithms; their faster convergence rates allow the EM algorithm to be sped up. The vector extrapolation algorithms transform the sequence of EM estimates into a faster-converging sequence and can accelerate convergence without modifying the EM algorithm itself. We describe the derivation of these acceleration algorithms and apply them to two examples. This article is categorized under: Statistical and Graphical Methods of Data Analysis > EM Algorithm
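As a concrete illustration of the extrapolation idea, the sketch below applies a SQUAREM-style squared-extrapolation scheme to the classic genetic-linkage multinomial example often used to demonstrate EM. The data, the step-length formula, and the safeguards are illustrative assumptions for this sketch, not the article's own implementation; the point is only that the accelerated scheme reuses the unmodified EM map while reaching the fixed point in fewer cycles.

```python
# Illustrative sketch (not the article's implementation): SQUAREM-style
# acceleration of EM on the genetic-linkage multinomial example with cell
# probabilities ((2+theta)/4, (1-theta)/4, (1-theta)/4, theta/4).

def em_step(theta, y=(125, 18, 20, 34)):
    """One plain EM update: split the first cell (E-step), re-estimate (M-step)."""
    y1, y2, y3, y4 = y
    x = y1 * theta / (2.0 + theta)          # E-step: expected latent count
    return (x + y4) / (x + y2 + y3 + y4)    # M-step: closed-form maximizer

def em(theta, tol=1e-10, max_iter=1000):
    """Plain EM iteration; returns (estimate, number of EM updates)."""
    for it in range(1, max_iter + 1):
        new = em_step(theta)
        if abs(new - theta) < tol:
            return new, it
        theta = new
    return theta, max_iter

def squarem(theta, tol=1e-10, max_iter=1000):
    """SQUAREM-style cycle: two EM steps, an extrapolated point, then one
    stabilizing EM step. Returns (estimate, number of cycles)."""
    for it in range(1, max_iter + 1):
        t1 = em_step(theta)
        t2 = em_step(t1)
        r = t1 - theta                      # first difference
        v = (t2 - t1) - r                   # second difference
        if abs(v) < 1e-14:                  # already (almost) at the fixed point
            return t2, it
        alpha = -abs(r) / abs(v)            # an illustrative step-length choice
        accel = theta - 2.0 * alpha * r + alpha * alpha * v
        accel = min(max(accel, 1e-8), 1.0 - 1e-8)  # keep theta inside (0, 1)
        new = em_step(accel)                # stabilizing EM step
        if abs(new - theta) < tol:
            return new, it
        theta = new
    return theta, max_iter
```

For these data both iterations converge to the MLE (about 0.6268), but the extrapolated scheme needs markedly fewer cycles than plain EM needs updates, without ever touching the inside of `em_step` — which is the practical appeal of extrapolation-based acceleration noted in the abstract.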
Keywords
acceleration of convergence, conjugate gradient algorithm, EM algorithm, Newton-Raphson algorithm, quasi-Newton algorithm, squared extrapolation algorithm, vector ε algorithm, vector extrapolation