On the convergence of Monte Carlo maximum likelihood calculations. (English)
J. R. Stat. Soc., Ser. B 56, No. 1, 261-274 (1994).
Summary: Monte Carlo maximum likelihood for normalized families of distributions can be used for an extremely broad class of models. Given any family $\{h_\theta : \theta \in \Theta\}$ of nonnegative integrable functions, maximum likelihood estimates in the family obtained by normalizing the functions to integrate to 1 can be approximated by Monte Carlo simulation, the only regularity condition being a compactification of the parameter space such that the evaluation maps $\theta \mapsto h_\theta(x)$ remain continuous. Then, with probability 1, the Monte Carlo approximant to the log-likelihood hypoconverges to the exact log-likelihood, its maximizer converges to the exact maximum likelihood estimate, approximations to profile likelihoods hypoconverge to the exact profiles, and level sets of the approximate likelihood (support regions) converge to the exact sets (in Painlevé-Kuratowski set convergence). The same results hold when there are missing data, provided a Wald-type integrability condition is satisfied. Asymptotic normality of the Monte Carlo error and convergence of the Monte Carlo approximation to the observed Fisher information are also shown.
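The scheme summarized above can be sketched in a few lines: draw samples from the normalized density at a fixed reference parameter $\psi$, estimate the ratio of normalizing constants $c(\theta)/c(\psi)$ by an importance-sampling average, and maximize the resulting Monte Carlo log-likelihood. The family, observed value, and sample size below are illustrative choices, not taken from the paper; here $h_\theta(x) = \exp(\theta x - x^2/2)$, whose normalized form is the $N(\theta, 1)$ density, so the exact MLE is the observed point itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized family h_theta(x) = exp(theta*x - x^2/2); normalized, it is N(theta, 1).
def h(theta, x):
    return np.exp(theta * x - 0.5 * x**2)

psi = 0.0                                     # reference parameter; normalized h_psi is N(0, 1)
samples = rng.normal(psi, 1.0, size=100_000)  # draws from the normalized h_psi

x_obs = 1.3                                   # observed data; exact MLE here is theta = x_obs

def mc_loglik(theta):
    # log h_theta(x_obs) minus the log of the Monte Carlo estimate of c(theta)/c(psi),
    # where c(theta)/c(psi) is approximated by the mean of h_theta(X_i)/h_psi(X_i).
    ratio = h(theta, samples) / h(psi, samples)
    return np.log(h(theta, x_obs)) - np.log(ratio.mean())

# Maximize the Monte Carlo log-likelihood over a grid of parameter values.
grid = np.linspace(-2.0, 4.0, 601)
theta_hat = grid[np.argmax([mc_loglik(t) for t in grid])]
```

As the summary states, the maximizer of this approximant converges to the exact MLE as the Monte Carlo sample size grows; with $10^5$ draws, `theta_hat` lands close to `x_obs` up to grid resolution.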