Similar Documents
20 similar documents retrieved
1.
This paper proposes new two-sided monitoring algorithms for detecting the presence of first-order residual autocorrelations in Dynamic Normal Models. The methodology uses a Bayesian decision approach with a loss function that takes the run-length of the process into account. The power and mean run-length of the proposed algorithms are analysed by Monte Carlo methods. The results improve on those of the monitoring algorithm for residual autocorrelations proposed in Gargallo and Salvador [2003. Monitoring residual autocorrelations in dynamic linear models. Comm. Statist. Simulation Comput. 32(4), 1079–1104] with respect to run-length, and also exhibit more homogeneous behaviour.

2.
Motivated by the need to sequentially design experiments for the collection of data in batches or blocks, a new pseudo-marginal sequential Monte Carlo algorithm is proposed for random effects models where the likelihood is not analytic and has to be approximated. This new algorithm extends the idealised sequential Monte Carlo algorithm: the likelihood is approximated unbiasedly, yielding an efficient exact-approximate algorithm for performing inference and making decisions within Bayesian sequential design. We propose four approaches to unbiasedly approximating the likelihood: standard Monte Carlo integration, randomised quasi-Monte Carlo integration, Laplace importance sampling, and a combination of Laplace importance sampling and randomised quasi-Monte Carlo. These four methods are compared in terms of the estimates of likelihood weights and in the selection of optimal sequential designs in an important pharmacological study related to the treatment of critically ill patients. As the approaches considered to approximate the likelihood can be computationally expensive, we exploit parallel computational architectures to ensure designs are derived in a timely manner.
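As a concrete illustration of the first of these approaches, the sketch below gives an unbiased standard Monte Carlo estimate of a random-effects likelihood for a toy logistic model. The model, the function name mc_likelihood, and all parameter values are hypothetical stand-ins, not the pharmacological models used in the paper.

```python
import numpy as np
from scipy.stats import binom

def mc_likelihood(y, n, theta, n_draws=1000, rng=None):
    """Unbiased standard Monte Carlo estimate of a random-effects
    likelihood p(y | theta) = E_b[ p(y | b, theta) ], where the random
    effect b ~ N(0, sigma_b^2) enters a toy logistic model.
    theta = (beta0, sigma_b)."""
    rng = np.random.default_rng(rng)
    beta0, sigma_b = theta
    b = rng.normal(0.0, sigma_b, size=n_draws)   # draws of the random effect
    p = 1.0 / (1.0 + np.exp(-(beta0 + b)))       # success probability per draw
    # averaging the conditional likelihoods gives an unbiased estimate
    return binom.pmf(y, n, p).mean()

# usage: an unbiased weight for a pseudo-marginal particle update
w = mc_likelihood(y=7, n=10, theta=(0.5, 1.0), n_draws=2000, rng=1)
```

Because the estimate is unbiased, plugging it in for the exact likelihood weight leaves the target of the sequential Monte Carlo algorithm unchanged, which is the exact-approximate property the abstract refers to.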

3.
The computational demand required to perform inference using Markov chain Monte Carlo methods often obstructs a Bayesian analysis. This may be a result of large datasets, complex dependence structures, or expensive computer models. In these instances, the posterior distribution is replaced by a computationally tractable approximation, and inference is based on this working model. However, the error that is introduced by this practice is not well studied. In this paper, we propose a methodology that allows one to examine the impact on statistical inference by quantifying the discrepancy between the intractable and working posterior distributions. This work provides a structure to analyse model approximations with regard to the reliability of inference and computational efficiency. We illustrate our approach through a spatial analysis of yearly total precipitation anomalies where covariance tapering approximations are used to alleviate the computational demand associated with inverting a large, dense covariance matrix.
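A minimal sketch of the covariance tapering idea used in the illustration: the exact covariance is multiplied elementwise by a compactly supported correlation (here the spherical taper), so the working covariance is sparse yet remains positive definite by the Schur product theorem. The exponential covariance, the function names, and the parameter values are illustrative assumptions.

```python
import numpy as np

def spherical_taper(d, gamma):
    """Compactly supported spherical correlation: exactly zero beyond gamma."""
    x = np.clip(d / gamma, 0.0, 1.0)
    return (1.0 - x) ** 2 * (1.0 + x / 2.0) * (d < gamma)

def tapered_cov(coords, sill, range_, gamma):
    """Exponential covariance multiplied elementwise by a taper, giving a
    sparse working covariance that is still positive definite."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = sill * np.exp(-d / range_)        # exact (dense) covariance
    return K * spherical_taper(d, gamma)  # Schur product preserves PD

# usage: the taper introduces exact zeros, enabling sparse solvers
coords = np.random.default_rng(0).uniform(size=(500, 2))
K_tap = tapered_cov(coords, sill=1.0, range_=0.3, gamma=0.15)
sparsity = np.mean(K_tap == 0.0)          # fraction of zero entries
```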

4.
Degradation tests are especially difficult to conduct for items with high reliability. Test costs, driven mainly by prolonged test duration and item destruction, establish the necessity of sequential degradation test designs. We propose a methodology that sequentially selects the optimal observation times at which to measure the degradation, using a convenient rule that maximizes inference precision and minimizes test costs. In particular, our objective is to estimate a quantile of the time-to-failure distribution, where the degradation process is modelled as a linear model using Bayesian inference. The proposed sequential analysis is based on an index that measures the expected discrepancy between the estimated quantile and its corresponding prediction, computed using Monte Carlo methods. The procedure was successfully implemented for simulated and real data.

5.
Riemann manifold Hamiltonian Monte Carlo (RMHMC) has the potential to produce high-quality Markov chain Monte Carlo output even for very challenging target distributions. To this end, a symmetric positive definite scaling matrix for RMHMC is proposed. The scaling matrix is obtained by applying a modified Cholesky factorization to the potentially indefinite negative Hessian of the target log-density. The methodology exploits the sparsity of the Hessian, stemming from conditional independence modeling assumptions, and thus admits fast implementation of RMHMC even for high-dimensional target distributions. Moreover, the methodology can exploit log-concave conditional target densities, often encountered in Bayesian hierarchical models, for faster sampling and more straightforward tuning. The proposed methodology is compared to alternatives for some challenging targets and is illustrated by applying a state-space model to real data.
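The following sketch shows one way to build a symmetric positive definite scaling matrix from a possibly indefinite Hessian. The paper uses a modified Cholesky factorization that exploits sparsity; the eigenvalue-flooring variant below is a simpler, dense alternative shown only to convey the idea.

```python
import numpy as np

def make_positive_definite(H, floor=1e-6):
    """Turn a possibly indefinite Hessian H of the negative log-density
    into a symmetric positive definite metric for RMHMC.  This is the
    simple eigenvalue-flooring alternative, not the sparsity-exploiting
    modified Cholesky factorization proposed in the paper."""
    A = (H + H.T) / 2.0                   # enforce exact symmetry
    w, V = np.linalg.eigh(A)
    w = np.maximum(w, floor)              # floor eigenvalues away from zero
    return (V * w) @ V.T                  # reassemble V diag(w) V^T
```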

6.
Suppose there are k (≥ 2) treatments and each treatment is a Bernoulli process with binomial sampling. The problem of selecting a random-sized subset that contains the treatment with the largest survival probability (reliability, or probability of success) is considered. Based on ideas from both classical approaches and the general Bayesian statistical decision approach, a new subset selection procedure is proposed to solve this kind of problem in both balanced and unbalanced designs. Compared with the classical procedures, the proposed procedure selects a significantly smaller subset. Its optimality properties and performance are examined. Methods for selecting and fitting the priors and the results of Monte Carlo simulations on selected important cases are also studied.

7.
In this article, we propose to evaluate and compare Markov chain Monte Carlo (MCMC) methods to estimate the parameters in a generalized extreme value model. We employed the Bayesian approach using traditional Metropolis-Hastings methods, Hamiltonian Monte Carlo (HMC), and Riemann manifold HMC (RMHMC) methods to obtain the approximations to the posterior marginal distributions of interest. Applications to real datasets and simulation studies provide evidence that the extra analytical work involved in Hamiltonian Monte Carlo algorithms is compensated by a more efficient exploration of the parameter space.
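For reference, below is a minimal random-walk Metropolis baseline of the kind such comparisons start from, targeting a GEV posterior with flat priors. The step size, starting values, and parametrization (scipy's genextreme convention) are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.stats import genextreme

def mh_gev(x, n_iter=5000, step=0.05, rng=None):
    """Random-walk Metropolis for a GEV posterior with flat priors,
    parametrized as (shape c, location, log-scale); scipy's genextreme
    convention is used for the shape parameter."""
    rng = np.random.default_rng(rng)

    def logpost(th):
        c, loc, log_s = th
        return genextreme.logpdf(x, c, loc=loc, scale=np.exp(log_s)).sum()

    # start at a Gumbel (c = 0) fit so all data lie in the support
    th = np.array([0.0, np.mean(x), np.log(np.std(x))])
    lp, chain = logpost(th), []
    for _ in range(n_iter):
        prop = th + step * rng.standard_normal(3)   # Gaussian random walk
        lp_prop = logpost(prop)                     # -inf outside support
        if np.log(rng.uniform()) < lp_prop - lp:    # MH accept/reject
            th, lp = prop, lp_prop
        chain.append(th.copy())
    return np.array(chain)
```

HMC and RMHMC replace the blind random walk with gradient-guided proposals, which is where the "extra analytical work" (derivatives of the log-posterior) buys the more efficient exploration the abstract reports.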

8.
Two new implementations of the EM algorithm are proposed for maximum likelihood fitting of generalized linear mixed models. Both methods use random (independent and identically distributed) sampling to construct Monte Carlo approximations at the E-step. One approach involves generating random samples from the exact conditional distribution of the random effects (given the data) by rejection sampling, using the marginal distribution as a candidate. The second method uses a multivariate t importance sampling approximation. In many applications the two methods are complementary. Rejection sampling is more efficient when sample sizes are small, whereas importance sampling is better with larger sample sizes. Monte Carlo approximation using random samples allows the Monte Carlo error at each iteration to be assessed by using standard central limit theory combined with Taylor series methods. Specifically, we construct a sandwich variance estimate for the maximizer at each approximate E-step. This suggests a rule for automatically increasing the Monte Carlo sample size after iterations in which the true EM step is swamped by Monte Carlo error. In contrast, techniques for assessing Monte Carlo error have not been developed for use with alternative implementations of Monte Carlo EM algorithms utilizing Markov chain Monte Carlo E-step approximations. Three different data sets, including the infamous salamander data of McCullagh and Nelder, are used to illustrate the techniques and to compare them with the alternatives. The results show that the methods proposed can be considerably more efficient than those based on Markov chain Monte Carlo algorithms. However, the methods proposed may break down when the intractable integrals in the likelihood function are of high dimension.
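A sketch of the second approach, the importance-sampling E-step with a multivariate t candidate. The user-supplied logjoint function, the Laplace-style location and spread, and the effective-sample-size diagnostic for growing the Monte Carlo sample size are assumptions standing in for the paper's sandwich-variance construction.

```python
import numpy as np
from scipy.stats import multivariate_t

def is_e_step(logjoint, mu, Sigma, df=4, m=500, rng=None):
    """Monte Carlo E-step by multivariate-t importance sampling.
    logjoint(b) must return log p(y | b) + log p(b) for each row of b;
    the target is the conditional p(b | y).  mu/Sigma locate the t
    candidate (e.g., from a Laplace fit) -- hypothetical stand-ins."""
    rng = np.random.default_rng(rng)
    prop = multivariate_t(loc=mu, shape=Sigma, df=df)
    b = prop.rvs(size=m, random_state=rng)        # i.i.d. candidate draws
    logw = logjoint(b) - prop.logpdf(b)           # log importance weights
    w = np.exp(logw - logw.max())
    w /= w.sum()                                  # self-normalized weights
    ess = 1.0 / np.sum(w ** 2)                    # simple MC-error diagnostic:
    return b, w, ess                              # increase m when ess is small
```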

9.
A Bayesian approach to modeling a rich class of nonconjugate problems is presented. An adaptive Monte Carlo integration technique known as the Gibbs sampler is proposed as a mechanism for implementing a conceptually and computationally simple solution in such a framework. The result is a general strategy for obtaining marginal posterior densities under changing specification of the model error densities and related prior densities. We illustrate the approach in a nonlinear regression setting, comparing the merits of three candidate error distributions.
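A minimal Gibbs sampler of the kind described, shown for the simplest nontrivial case of normal data with semi-conjugate priors; nonconjugate error densities would replace the exact conditional draws with, for example, adaptive rejection steps. All prior hyperparameters here are illustrative.

```python
import numpy as np

def gibbs_normal(y, n_iter=2000, mu0=0.0, tau2=100.0, a=2.0, b=2.0, rng=None):
    """Gibbs sampler for y_i ~ N(mu, s2) with mu ~ N(mu0, tau2) and
    s2 ~ InvGamma(a, b): alternate exact draws from each full conditional."""
    rng = np.random.default_rng(rng)
    n, ybar = len(y), np.mean(y)
    mu, s2, out = ybar, np.var(y), []
    for _ in range(n_iter):
        # mu | s2, y  ~  Normal (conjugate update)
        v = 1.0 / (n / s2 + 1.0 / tau2)
        m = v * (n * ybar / s2 + mu0 / tau2)
        mu = rng.normal(m, np.sqrt(v))
        # s2 | mu, y  ~  Inverse-Gamma, drawn as 1 / Gamma
        sse = np.sum((y - mu) ** 2)
        s2 = 1.0 / rng.gamma(a + n / 2.0, 1.0 / (b + sse / 2.0))
        out.append((mu, s2))
    return np.array(out)   # each row is one (mu, s2) posterior draw
```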

10.
Data with censored initiating and terminating times arise quite frequently in acquired immunodeficiency syndrome (AIDS) epidemiologic studies. Analysis of such data involves a complicated bivariate likelihood, which is difficult to deal with computationally. Bayesian analysis, on the other hand, presents added complexities that have yet to be resolved. By exploiting the simple form of the complete-data likelihood and utilizing the power of a Markov chain Monte Carlo (MCMC) algorithm, this paper presents a methodology for fitting Bayesian regression models to such data. The proposed methods extend the work of Sinha (1997), who considered non-parametric Bayesian analysis of this type of data. The methodology is illustrated with an application to a cohort of HIV-infected hemophiliac patients.

11.
Fang Y, Wu H, Zhu LX. Statistica Sinica 2011, 21(3): 1145–1170
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach, and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs and is computationally efficient, although at the price of some loss of estimation efficiency. However, the method offers an alternative when the exact likelihood approach fails due to model complexity and a high-dimensional parameter space, and it can also serve as a way to obtain starting estimates for more accurate estimation methods. In addition, the proposed method does not need the initial values of the state variables to be specified and preserves all the advantages of the mixed-effects modeling approach. The finite-sample properties of the proposed estimator are studied via Monte Carlo simulations, and the methodology is also illustrated with an application to an AIDS clinical dataset.

12.
We develop Mean Field Variational Bayes methodology for fast approximate inference in Bayesian Generalized Extreme Value additive model analysis. Such models are useful for flexibly assessing the impact of continuous predictor variables on sample extremes. The new methodology allows large Bayesian models to be fitted and assessed without the significant computing costs of Markov Chain Monte Carlo methods. We illustrate our new methodology with maximum rainfall data from the Sydney, Australia, hinterland. Comparisons are made between the Mean Field Variational Bayes and Markov Chain Monte Carlo approaches.

13.
A Bayesian approach based on the Markov Chain Monte Carlo technique is proposed for the non-homogeneous gamma process with power-law shape function. Vague and informative priors, formalized on quantities having a “physical” meaning, are provided. Point and interval estimation of the process parameters and some functions thereof are developed, as is prediction of observable quantities that are useful in defining the maintenance strategy. Some useful approximations of the conditional and unconditional mean and median of the residual life are derived to reduce computational time. Finally, the proposed approach is applied to a real dataset.

14.
The problem of selecting good populations out of k normal populations is considered in a Bayesian framework under exchangeable normal priors and additive loss functions. Some basic approximations to the Bayes rules are discussed. These approximations suggest that some well-known classical rules are "approximate" Bayes rules. In particular, it is shown that Gupta-type rules are extended Bayes with respect to a family of exchangeable normal priors for any bounded and additive loss function. Furthermore, for a simple loss function, the results of a Monte Carlo comparison of Gupta-type rules and Seal-type rules are presented. They indicate that, in general, Gupta-type rules perform better than Seal-type rules.
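A sketch of a Gupta-type rule for normal means with common known variance: retain every population whose sample mean falls within a calibrated distance of the largest. The constant d and the data below are illustrative; in practice d is chosen to meet the probability-of-correct-selection requirement.

```python
import numpy as np

def gupta_subset(xbar, sigma, n, d):
    """Gupta-type subset selection: keep population i iff its sample mean
    is within d * sigma / sqrt(n) of the largest sample mean."""
    cutoff = xbar.max() - d * sigma / np.sqrt(n)
    return np.where(xbar >= cutoff)[0]    # indices of the selected subset

# usage: 5 populations, common known sigma = 1, n = 20 observations each
subset = gupta_subset(np.array([1.9, 2.3, 2.5, 1.1, 2.45]), 1.0, 20, d=2.0)
```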

15.
Several approximations to the exact distribution of the Kruskal-Wallis test statistic presently exist. These approximations can roughly be grouped into two classes: (i) computationally difficult with good accuracy, and (ii) easy to compute but not as accurate as the first class. The purpose of this paper is to introduce two new approximations (one in the latter class and one that is computationally more involved), and to compare these with other popular approximations. These comparisons use exact probabilities where available and Monte Carlo simulation otherwise.
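A sketch of the Monte Carlo benchmark such comparisons rely on: permute the pooled observations across the observed group sizes to approximate the exact null distribution of the Kruskal-Wallis statistic, and set the result beside the usual chi-square approximation. Function names and simulation sizes are illustrative.

```python
import numpy as np
from scipy.stats import kruskal, chi2

def kw_exact_pvalue(samples, n_sim=10000, rng=None):
    """Monte Carlo approximation of the exact Kruskal-Wallis p-value via
    permutation, alongside the chi-square approximation with k-1 df."""
    rng = np.random.default_rng(rng)
    h_obs = kruskal(*samples).statistic
    sizes = [len(s) for s in samples]
    pooled = np.concatenate(samples)
    count = 0
    for _ in range(n_sim):
        perm = rng.permutation(pooled)
        parts = np.split(perm, np.cumsum(sizes)[:-1])   # regroup at random
        count += kruskal(*parts).statistic >= h_obs
    p_mc = count / n_sim                                # "exact" MC p-value
    p_chi2 = chi2.sf(h_obs, df=len(samples) - 1)        # classical approximation
    return p_mc, p_chi2
```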

16.
This paper presents a Bayesian-hypothesis-testing-based methodology for model validation and confidence extrapolation under uncertainty, using limited test data. An explicit expression of the Bayes factor is derived for the interval hypothesis testing. The interval method is compared with the Bayesian point null hypothesis testing approach. The Bayesian network with Markov Chain Monte Carlo simulation and Gibbs sampling is explored for extrapolating the inference from the validated domain at the component level to the untested domain at the system level. The effect of the number of experiments on the confidence in the model validation decision is investigated. The probabilities of Type I and Type II errors in decision-making during the model validation and confidence extrapolation are quantified. The proposed methodologies are applied to a structural mechanics problem. Numerical results demonstrate that the Bayesian methodology provides a quantitative approach to facilitate rational decisions in model validation and confidence extrapolation under uncertainty.
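A minimal Monte Carlo sketch of an interval-hypothesis Bayes factor: average the likelihood over prior draws falling inside and outside the interval and take the ratio. The normal likelihood, the prior, and the interval below are generic assumptions, not the paper's structural-mechanics formulation or its explicit Bayes factor expression.

```python
import numpy as np
from scipy.stats import norm

def interval_bayes_factor(y, lo, hi, prior_draws, sd=1.0):
    """Monte Carlo Bayes factor for H0: theta in [lo, hi] versus
    H1: theta outside it, BF = m(y | H0) / m(y | H1); each conditional
    marginal is the likelihood averaged over the prior draws in its region."""
    inside = (prior_draws >= lo) & (prior_draws <= hi)
    L = norm.pdf(y, loc=prior_draws, scale=sd)   # likelihood at each draw
    m0 = L[inside].mean()                        # marginal under H0
    m1 = L[~inside].mean()                       # marginal under H1
    return m0 / m1

theta = np.random.default_rng(0).normal(0.0, 2.0, 100000)   # prior draws
bf = interval_bayes_factor(y=0.3, lo=-0.5, hi=0.5, prior_draws=theta)
```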

17.
We develop a novel computational methodology for Bayesian optimal sequential design for nonparametric regression. This computational methodology, which we call inhomogeneous evolutionary Markov chain Monte Carlo, combines ideas from simulated annealing, genetic or evolutionary algorithms, and Markov chain Monte Carlo. Our framework allows optimality criteria with general utility functions and general classes of priors for the underlying regression function. We illustrate the usefulness of our novel methodology with applications to experimental design for nonparametric function estimation using Gaussian process priors and free-knot cubic spline priors.

18.
We develop a new class of reference priors for linear models with general covariance structures. A general Markov chain Monte Carlo algorithm is also proposed for implementing the computation. We present several examples to demonstrate the results: Bayesian penalized spline smoothing, a Bayesian approach to bivariate smoothing for a spatial model, and prior specification for structural equation models.

19.
In this paper, we develop Bayesian methodology and computational algorithms for variable subset selection in Cox proportional hazards models with missing covariate data. A new joint semi-conjugate prior for the piecewise exponential model is proposed in the presence of missing covariates, and its properties are examined. The covariates are assumed to be missing at random (MAR). Under this new prior, a version of the Deviance Information Criterion (DIC) is proposed for Bayesian variable subset selection in the presence of missing covariates. Monte Carlo methods are developed for computing the DICs for all possible subset models in the model space. A Bone Marrow Transplant (BMT) dataset is used to illustrate the proposed methodology.
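For orientation, the standard DIC computation from MCMC output is sketched below; the paper's version is adapted to missing covariates, which this generic sketch does not address.

```python
import numpy as np

def dic(loglik_draws, loglik_at_mean):
    """Deviance Information Criterion from MCMC output.
    loglik_draws: log-likelihood of the data at each posterior draw;
    loglik_at_mean: log-likelihood at the posterior mean of the parameters.
    DIC = Dbar + pD with pD = Dbar - D(theta_bar); smaller DIC is better."""
    dbar = -2.0 * np.mean(loglik_draws)   # posterior mean deviance
    dhat = -2.0 * loglik_at_mean          # deviance at the posterior mean
    pd = dbar - dhat                      # effective number of parameters
    return dbar + pd
```

Computing this quantity for every candidate covariate subset, each from its own MCMC run, is exactly the Monte Carlo exercise the abstract describes.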

20.
Powerful entropy-based tests for normality, uniformity, and exponentiality have been well addressed in the statistical literature. The density-based empirical likelihood approach improves the performance of these goodness-of-fit tests by forming them into approximate likelihood ratios. This method is extended to develop two-sample empirical likelihood approximations to optimal parametric likelihood ratios, resulting in an efficient test based on sample entropy. The proposed distribution-free two-sample test is shown to be very competitive with well-known nonparametric tests. For example, the new test has high and stable power for detecting a nonconstant shift in the two-sample problem, where Wilcoxon’s test may break down completely. This is partly due to the inherent structure developed within Neyman-Pearson-type lemmas. The outputs of an extensive Monte Carlo analysis and a real data example support our theoretical results. The Monte Carlo simulation study indicates that the proposed test compares favorably with standard procedures for a wide range of null and alternative distributions.
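The entropy estimates underlying such tests are typically spacing-based; below is a sketch of Vasicek's estimator, the usual building block, under the assumption of a continuous sample with no ties. The window choice m ≈ √n is a common heuristic, not the paper's prescription.

```python
import numpy as np

def vasicek_entropy(x, m=None):
    """Vasicek's spacing-based estimate of differential entropy:
    H = mean of log( n * (X_(i+m) - X_(i-m)) / (2m) ), with order
    statistics clamped at the sample boundaries.  Assumes no ties."""
    x = np.sort(x)
    n = len(x)
    m = m or max(1, int(np.sqrt(n)))          # common default window size
    hi = np.minimum(np.arange(n) + m, n - 1)  # clamp the upper index
    lo = np.maximum(np.arange(n) - m, 0)      # clamp the lower index
    spacings = x[hi] - x[lo]
    return np.mean(np.log(n * spacings / (2 * m)))
```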
