Empirical Bayes methods for combining likelihoods. (With discussion). (English) Zbl 0868.62018

Summary: Suppose that several independent experiments are observed, each one yielding a likelihood \(L_k(\theta_k)\) for a real-valued parameter of interest \(\theta_k\). For example, \(\theta_k\) might be the log-odds ratio for a \(2\times 2\) table relating to the \(k\)th population in a series of medical experiments. This article concerns the following empirical Bayes question: How can we combine all of the likelihoods \(L_k\) to get an interval estimate for any one of the \(\theta_k\)’s, say \(\theta_1\)?
The results are presented in the form of a realistic computational scheme that allows model building and model checking in the spirit of a regression analysis. No special mathematical forms are required for the priors or the likelihoods. This scheme is designed to take advantage of recent methods that produce approximate numerical likelihoods \(L_k(\theta_k)\) even in very complicated situations, with all nuisance parameters eliminated. The empirical Bayes likelihood theory is extended to situations where the \(\theta_k\)’s have a regression structure as well as an empirical Bayes relationship. Most of the discussion is presented in terms of a hierarchical Bayes model and concerns how such a model can be implemented without requiring large amounts of Bayesian input. Frequentist approaches, such as bias correction and robustness, play a central role in the methodology.
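The paper's scheme is general and does not assume any particular form for the priors or likelihoods. Purely as a rough illustration of the combine-likelihoods idea, the following Python sketch uses a normal-normal approximation: each study's likelihood \(L_k(\theta_k)\) for the log-odds ratio is approximated by a normal likelihood, a \(N(\mu,\tau^2)\) prior is fitted by maximizing the marginal likelihood, and an interval for \(\theta_1\) is read off the resulting posterior. The table counts, the normal approximations, and the parametric prior are all assumptions of this sketch, not part of the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical 2x2 tables (treatment successes/failures, control successes/failures)
# for k = 1..K independent studies; the counts are illustrative only.
tables = np.array([
    [12, 18,  8, 22],
    [ 9, 21, 14, 16],
    [20, 10, 15, 15],
    [ 7, 23, 10, 20],
])

# Approximate each study's likelihood for the log-odds ratio theta_k by a
# normal likelihood centered at the empirical log-odds with its usual SE.
a, b, c, d = tables.T
theta_hat = np.log(a * d / (b * c))             # per-study log-odds ratios
se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)     # delta-method standard errors

# Empirical Bayes step: assume theta_k ~ N(mu, tau^2) and estimate (mu, tau)
# by maximizing the marginal likelihood of the theta_hat's.
def neg_marginal_loglik(params):
    mu, log_tau = params
    tau2 = np.exp(log_tau) ** 2
    return -np.sum(norm.logpdf(theta_hat, loc=mu, scale=np.sqrt(se**2 + tau2)))

mu_hat, log_tau_hat = minimize(neg_marginal_loglik, x0=[0.0, 0.0]).x
tau2_hat = np.exp(log_tau_hat) ** 2

# Combine study 1's own likelihood with the estimated prior: under the
# normal-normal model the posterior for theta_1 is again normal, with a
# precision-weighted mean shrinking theta_hat[0] toward mu_hat.
w = tau2_hat / (tau2_hat + se[0] ** 2)
post_mean = w * theta_hat[0] + (1 - w) * mu_hat
post_sd = np.sqrt(w * se[0] ** 2)

lo, hi = post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd
print(f"Empirical Bayes 95% interval for theta_1: ({lo:.3f}, {hi:.3f})")
```

The shrinkage weight `w` is what "borrowing strength" amounts to in this toy version: when the fitted prior variance is small relative to study 1's standard error, the interval for \(\theta_1\) is pulled strongly toward the ensemble; the paper develops this idea without the normal or parametric restrictions and with frequentist bias corrections.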

MSC:

62C12 Empirical decision procedures; empirical Bayes procedures
62F25 Parametric tolerance and confidence regions
62A01 Foundations and philosophical topics in statistics
62F15 Bayesian inference