Bandyopadhyay, Antar; Aldous, David J.
How to combine fast heuristic Markov chain Monte Carlo with slow exact sampling. (English)
Zbl 0987.60084
Electron. Commun. Probab. 6, Paper No. 8, 79--89 (2001).

Summary: Given a probability law \(\pi\) on a set \(S\) and a function \(g : S \rightarrow \mathbb{R}\), suppose one wants to estimate the mean \(\bar{g} = \int g \, d\pi\). The Markov chain Monte Carlo method consists of inventing and simulating a Markov chain with stationary distribution \(\pi\). Typically one has no a priori bounds on the chain's mixing time, so even if simulations suggest rapid mixing one cannot infer rigorous confidence intervals for \(\bar{g}\). But suppose there is also a separate method which (slowly) gives samples exactly from \(\pi\). Using \(n\) exact samples, one could immediately get a confidence interval of length \(O(n^{-1/2})\). But one can do better. Use each exact sample as the initial state of a Markov chain, and run each of these \(n\) chains for \(m\) steps. We show how to construct confidence intervals which are always valid, and which, if the (unknown) relaxation time of the chain is sufficiently small relative to \(m/n\), have length \(O(n^{-1}\log n)\) with high probability.

Cited in 1 Document

MSC:
60J10 Markov chains (discrete-time Markov processes on discrete state spaces)
62M05 Markov processes: estimation; hidden Markov models
68W20 Randomized algorithms

Keywords: confidence interval; exact sampling; Markov chain Monte Carlo method

Cite: \textit{A. Bandyopadhyay} and \textit{D. J. Aldous}, Electron. Commun. Probab. 6, Paper No. 8, 79--89 (2001; Zbl 0987.60084)
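The setup in the summary can be illustrated with a minimal sketch. Assumptions not in the original: a toy state space (a lazy random walk on a cycle of size \(K\), whose stationary law \(\pi\) is uniform), \(g(x) = x\), and a plain normal-approximation confidence interval over the \(n\) per-chain averages. Because each chain starts from an exact draw of \(\pi\), each chain is stationary and the \(n\) per-chain averages are i.i.d. with mean \(\bar{g}\), so this naive interval is valid; it is not the paper's refined construction, which achieves length \(O(n^{-1}\log n)\) when mixing is fast.

```python
# Sketch only: n chains, each started from an exact sample of pi and run
# for m steps.  The toy chain, g, and the normal-approximation CI are
# illustrative assumptions, not the paper's construction.
import math
import random

random.seed(0)

K = 10                          # toy state space S = {0, ..., K-1}

def exact_sample():
    # Stand-in for the "slow" exact sampler: pi is uniform on S here.
    return random.randrange(K)

def step(x):
    # One step of a lazy random walk on the cycle Z_K; its stationary
    # distribution is the uniform law pi.
    u = random.random()
    if u < 0.5:
        return x
    return (x + 1) % K if u < 0.75 else (x - 1) % K

def g(x):
    return float(x)             # estimand: gbar = mean of g under pi

n, m = 50, 200                  # n exact samples, m MCMC steps per chain
chain_means = []
for _ in range(n):
    x = exact_sample()          # exact initial state => chain is stationary
    total = 0.0
    for _ in range(m):
        total += g(x)
        x = step(x)
    chain_means.append(total / m)

# The per-chain averages are i.i.d. with mean gbar, so a standard
# normal-approximation interval over them is valid.
est = sum(chain_means) / n
se = math.sqrt(sum((v - est) ** 2 for v in chain_means) / (n * (n - 1)))
lo, hi = est - 1.96 * se, est + 1.96 * se
print(f"estimate {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

For this toy chain \(\bar{g} = 4.5\), and averaging within each chain shrinks the interval well below what the \(n\) exact samples alone would give, which is the gain the paper quantifies.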