The aim of the article is to analyze a new algorithm for producing samples from a distribution $π$ when direct sampling is not possible. The authors propose an adaptive Metropolis-Hastings algorithm. At iteration $i$, it uses a proposal function $q_i$ that depends on all previously proposed states except the current one, together with a Metropolis-Hastings-like acceptance step. It requires that the supremum of $π/q_i$ be finite (Doeblin condition). In contrast, the adaptive independent Metropolis-Hastings algorithm of {\it J. Gåsemyr} [Scand. J. Stat. 30, 159‒173 (2003; Zbl 1038.65005)] requires that the supremum of the ratio $f/q_i$ be known, where $f \propto π$, and this supremum is used in the algorithm. The authors establish that the limiting density $π$, conditioned on the history, is invariant for the algorithm. They prove a bound on the convergence that depends on the supremum of $π/q_i$: the convergence is geometric provided a strong Doeblin condition is satisfied (all the proposal distributions have uniformly heavier tails than the target distribution $π$). In this case, the algorithm gives exact samples within a finite number of iterations with probability arbitrarily close to $1$. The bound is similar to the convergence bounds for the Metropolis-Hastings algorithm given by {\it L. Holden} [Stat. Probab. Lett. 39, No.~4, 371‒377 (1998; Zbl 0914.60043)]. The algorithm is tested on four examples that are difficult for standard MCMC methods (three with several modes and one with a heavy-tailed distribution). Three examples are nonparametric, and one is parametric. The proposed algorithm performs better than the standard MCMC alternatives used for comparison, even when the Doeblin condition is not satisfied.
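An adaptive independence sampler of this general kind can be sketched as follows. The bimodal target, the Gaussian proposal family, and the moment-matching adaptation rule below are illustrative assumptions for the sketch, not the authors' exact construction; only the independence-sampler acceptance ratio $\min\{1,\,[f(y)/q_i(y)]\,/\,[f(x)/q_i(x)]\}$ and the use of the history of proposed states to update $q_i$ reflect the scheme described above.

```python
import math
import random

def target(x):
    # Unnormalized bimodal density f ∝ π (illustrative choice, not from the paper).
    return math.exp(-0.5 * (x - 3.0) ** 2) + math.exp(-0.5 * (x + 3.0) ** 2)

def adaptive_independent_mh(n_iter, seed=0):
    rng = random.Random(seed)
    # Gaussian proposal whose parameters adapt to all earlier *proposed* states.
    mean, std = 0.0, 5.0          # broad initial proposal
    proposed = []                 # history of proposed (not accepted) states
    x = 0.0                       # current chain state
    samples = []
    for _ in range(n_iter):
        y = rng.gauss(mean, std)
        proposed.append(y)

        def q(z):                 # current proposal density q_i
            return math.exp(-0.5 * ((z - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

        # Independence-sampler acceptance: min(1, [f(y)/q(y)] / [f(x)/q(x)]).
        alpha = min(1.0, (target(y) / q(y)) / (target(x) / q(x)))
        if rng.random() < alpha:
            x = y
        samples.append(x)

        # Adapt the proposal from the history of proposed states; flooring the
        # standard deviation keeps the proposal tails heavy relative to the
        # target (a crude Doeblin-style safeguard, assumed for this sketch).
        if len(proposed) >= 10:
            m = sum(proposed) / len(proposed)
            v = sum((z - m) ** 2 for z in proposed) / len(proposed)
            mean, std = m, max(math.sqrt(v), 1.0)
    return samples
```

With a proposal broad enough that $\sup π/q_i < ∞$, the ratio $f/q$ entering the acceptance step stays bounded, which is the mechanism behind the geometric convergence bound discussed above.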