
Outperforming the Gibbs sampler empirical estimator for nearest neighbor random fields



Priscilla E. Greenwood (University of British Columbia), Ian W. McKeague (Florida State University), Wolfgang Wefelmeyer (University of Siegen)

Abstract

Given a Markov chain sampling scheme, does the standard empirical estimator make best use of the data? We show that this is not so, and construct better estimators. We restrict attention to nearest neighbor random fields and to Gibbs samplers with deterministic sweep, but our approach applies to any sampler that uses reversible variable-at-a-time updating with deterministic sweep. The structure of the transition distribution of the sampler is exploited to construct further empirical estimators that are combined with the standard empirical estimator to reduce asymptotic variance. The extra computational cost is negligible. When the random field is spatially homogeneous, symmetrizations of our estimator lead to further variance reduction. The performance of the estimators is evaluated in a simulation study of the Ising model.

1 Introduction

Suppose we want to calculate the expectation of a bounded function f under a distribution $\pi$ on some space D. If D is of high dimension, or if $\pi$ is defined indirectly, it may be difficult to calculate the expectation $\pi f = \int f(x)\,\pi(dx)$ analytically or even by numerical integration. The classical Monte Carlo method generates i.i.d. realizations $X_0, \ldots, X_{n-1}$ from $\pi$, and approximates $\pi f$ by the empirical estimator

$$E_n f = \frac{1}{n} \sum_{i=0}^{n-1} f(X_i).$$

Research partially supported by NSF Grant ATM-9417528. All three authors were partially supported by NSERC, Canada.
AMS 1991 subject classifications. Primary: 62M40, 65U05; secondary: 60J05, 62G20, 62M05.
Key words and phrases. Markov chain Monte Carlo, Metropolis-Hastings algorithm, asymptotic relative efficiency, variance reduction, Ising model, parallel updating.
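As a concrete illustration of the empirical estimator above (a sketch of our own, not code from the paper; the target distribution and test function are arbitrary choices), one can estimate $\pi f$ for $f(x) = x^2$ under a standard normal $\pi$, where the true value is 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_estimator(f, draws):
    """Classical Monte Carlo: average f over i.i.d. draws X_0, ..., X_{n-1}."""
    return float(np.mean([f(x) for x in draws]))

# Toy example: estimate E[X^2] = 1 for X ~ N(0, 1) from n = 100000 draws.
draws = rng.standard_normal(100_000)
est = empirical_estimator(lambda x: x * x, draws)  # close to 1
```

By the strong law of large numbers the estimator is strongly consistent, and by the central limit theorem its error here is of order $n^{-1/2}$, roughly 0.004 for this sample size.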


The estimator is strongly consistent and asymptotically normal. Often, however, this Monte Carlo method is difficult to implement. One reason is that high dimensional distributions are hard to simulate. Additional difficulties arise when $\pi$ is defined indirectly, as in many Bayesian modeling situations, or is only known up to a normalizing constant, as is usually the case for random fields. The Markov chain Monte Carlo method (MCMC) generates a Markov chain $X_0, X_1, \ldots$ with $\pi$ as invariant law. Again, the empirical estimator $E_n f$ is used to approximate $\pi f$. If the chain is ergodic, the estimator is consistent; if the chain is geometrically ergodic, the estimator is asymptotically normal.

Over the last ten years, the special MCMC scheme known as the Gibbs sampler has become an important tool for estimating features of high dimensional distributions. The method originated with the study of interacting particle systems, such as the Ising model in statistical physics, where it is known as the heat bath algorithm. The Gibbs sampler is also used in image analysis (Grenander, 1983, and Geman and Geman, 1984), Bayesian statistics (Smith and Roberts, 1993), spatial statistics (Besag and Green, 1993, and Graham, 1994), expert systems (Pearl, 1987, Spiegelhalter et al., 1993), incomplete data problems (Tanner and Wong, 1987), and hierarchical models (Gelfand et al., 1990).

There is a trade-off between the speed of convergence of the Markov chain to stationarity and the asymptotic variance of the empirical estimator. The asymptotic variance depends only on the stationary law of the chain. It is common to calculate the empirical estimator after a 'burn-in' has reached approximate stationarity, and one may switch at that point from a sampler with a good convergence rate to a sampler giving small variance. Speed of convergence of various MCMC schemes has been studied by Schervish and Carlin (1992), Chan (1993), Tierney (1994), Ingrassia (1994) and Athreya et al. (1995). For general Markov chains, see Meyn and Tweedie (1993). Some comparisons of the rates of different MCMC schemes may be found, e.g., in Frigessi et al. (1993) and Amit and Grenander (1993). Grenander (1993, Ch. 7) compares random and deterministic sweep strategies in terms of rates. He notes (p. 394) that estimator variance can be more relevant than convergence rate as an optimality criterion. The question of which Markov chain sampling scheme minimizes the asymptotic variance of the empirical estimator is studied by Peskun (1973), Frigessi et al. (1992) and Green and Han (1992), among others.

Here we consider a complementary question: given a Markov chain sampling scheme, does the empirical estimator make best use of the sample? We will see that this is not so, and will construct considerably better estimators in the case of the Gibbs sampler with deterministic sweep. Our approach applies to any MCMC scheme with deterministic sweep and reversible local updating, in particular to local Metropolis-Hastings samplers with deterministic sweep.
Specifically, let $D = V^S$ with S a finite lattice and V a state space that may be discrete or continuous. The Gibbs sampler is described in terms of the one-dimensional conditional distributions $p_s(x_{-s}, dx_s)$ of $\pi(dx)$, where $x_{-s}$ is obtained from x by omitting $x_s$. A deterministic sweep through the lattice is fixed by ordering the sites $s_1, \ldots, s_d$.
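To make the deterministic-sweep Gibbs sampler concrete, here is a minimal heat-bath sketch for the two-dimensional Ising model mentioned in the abstract (our own illustration, not the paper's code; the lattice size, inverse temperature $\beta$, and periodic boundary conditions are arbitrary choices). Each sweep visits the sites in a fixed order and redraws the spin at each site from its conditional law $p_s(x_{-s}, \cdot)$ given the current neighbors:

```python
import numpy as np

def gibbs_sweep(x, beta, rng):
    """One deterministic sweep of the heat-bath (Gibbs) sampler for the Ising
    model pi(x) proportional to exp(beta * sum of x_s * x_t over neighbor pairs),
    visiting sites in a fixed row-major order s_1, ..., s_d."""
    L = x.shape[0]
    for i in range(L):
        for j in range(L):
            # Sum of the four nearest-neighbor spins (periodic boundaries).
            nb = (x[(i - 1) % L, j] + x[(i + 1) % L, j]
                  + x[i, (j - 1) % L] + x[i, (j + 1) % L])
            # Conditional law of the spin at site (i, j) given the rest:
            # P(x_s = +1 | x_{-s}) = e^{beta*nb} / (e^{beta*nb} + e^{-beta*nb}).
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
            x[i, j] = 1 if rng.random() < p_plus else -1
    return x

rng = np.random.default_rng(1)
x = rng.choice([-1, 1], size=(8, 8))   # random +/-1 initial configuration
for _ in range(100):                   # burn-in plus sampling sweeps
    x = gibbs_sweep(x, beta=0.3, rng=rng)
```

After burn-in, the empirical estimator averages a function f of the configuration over successive sweeps; the estimators constructed in this paper reuse the same sweep output at negligible extra cost.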
