Similar Articles
20 similar articles retrieved.
1.
We consider conditional exact tests of factor effects in designed experiments for discrete response variables. As in the analysis of contingency tables, a Markov chain Monte Carlo method can be used to perform exact tests when large-sample approximations are poor and enumeration of the conditional sample space is infeasible. For designed experiments with a single observation for each run, we formulate log-linear or logistic models and consider a connected Markov chain over an appropriate sample space. In particular, we investigate fractional factorial designs with 2^{p-q} runs, noting correspondences to the models for 2^{p-q} contingency tables.
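For the contingency-table analogue mentioned above, the flavour of such an exact test can be sketched in a few lines. The following is a minimal illustration, not the authors' algorithm or models: a Metropolis chain over tables with fixed row and column margins, using the basic ±1 moves on 2×2 sub-tables, with the Pearson chi-square statistic as test statistic; the table is invented for the example.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)

def mcmc_exact_test(table, n_steps=50_000):
    """Monte Carlo exact test of independence for an I x J contingency table.

    Uses the basic +/-1 moves on randomly chosen 2x2 sub-tables, which keep all
    row and column margins fixed and connect the conditional sample space.
    (Burn-in and the +1 correction for the observed table are omitted for brevity.)
    """
    T = np.asarray(table, dtype=float)
    n = T.sum()
    expected = np.outer(T.sum(axis=1), T.sum(axis=0)) / n   # margins are fixed by the moves

    def chi2(X):
        return ((X - expected) ** 2 / expected).sum()

    obs_stat = chi2(T)
    X = T.copy()
    exceed = 0
    for _ in range(n_steps):
        i1, i2 = rng.choice(X.shape[0], size=2, replace=False)
        j1, j2 = rng.choice(X.shape[1], size=2, replace=False)
        eps = rng.choice([-1, 1])
        prop = X.copy()
        prop[i1, j1] += eps; prop[i2, j2] += eps
        prop[i1, j2] -= eps; prop[i2, j1] -= eps
        if (prop >= 0).all():
            # Conditional (multiple hypergeometric) distribution is proportional
            # to 1 / prod(x_ij!), so the Metropolis log-ratio only involves the
            # four changed cells.
            log_r = sum(lgamma(X[a, b] + 1) - lgamma(prop[a, b] + 1)
                        for a in (i1, i2) for b in (j1, j2))
            if np.log(rng.uniform()) < log_r:
                X = prop
        exceed += chi2(X) >= obs_stat
    return exceed / n_steps          # Monte Carlo p-value

# Example on a small, made-up 3x3 table.
print(mcmc_exact_test([[5, 1, 0], [2, 4, 1], [0, 2, 6]]))
```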

2.
Summary.  The structural theoretical framework for the analysis of duration of unemployment has been the optimal job search model. Recent advances in computational techniques in Bayesian inference now facilitate the analysis of incomplete data sets and the recovery of structural model parameters. The paper uses these methods on a UK data set of the long-term unemployed to illustrate how the optimal job search model can be adapted to model the effects of an active labour market policy. Without such an adaptation our conclusion is that the simple optimal job search model may not fit empirical unemployment data and could thus lead to a misspecified econometric model and incorrect parameter estimates.

3.
Summary. The classical approach to statistical analysis is usually based upon finding values for model parameters that maximize the likelihood function. Model choice in this context is often also based on the likelihood function, but with the addition of a penalty term for the number of parameters. Though models may be compared pairwise by using likelihood ratio tests for example, various criteria such as the Akaike information criterion have been proposed as alternatives when multiple models need to be compared. In practical terms, the classical approach to model selection usually involves maximizing the likelihood function associated with each competing model and then calculating the corresponding criteria value(s). However, when large numbers of models are possible, this quickly becomes infeasible unless a method that simultaneously maximizes over both parameter and model space is available. We propose an extension to the traditional simulated annealing algorithm that allows for moves that not only change parameter values but also move between competing models. This transdimensional simulated annealing algorithm can therefore be used to locate models and parameters that minimize criteria such as the Akaike information criterion, but within a single algorithm, removing the need for large numbers of simulations to be run. We discuss the implementation of the transdimensional simulated annealing algorithm and use simulation studies to examine its performance in realistically complex modelling situations. We illustrate our ideas with a pedagogic example based on the analysis of an autoregressive time series and two more detailed examples: one on variable selection for logistic regression and the other on model selection for the analysis of integrated recapture–recovery data.
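The idea of annealing over models and parameters in a single run can be illustrated with a toy variable-selection problem. This sketch assumes Gaussian linear regression with AIC (error variance profiled out), made-up move types and cooling schedule; it is not the authors' implementation. It proposes either a coefficient perturbation, the addition of a variable, or the deletion of one, and accepts according to a temperature-controlled AIC difference.

```python
import numpy as np

rng = np.random.default_rng(1)

def aic(y, X, included, beta):
    """AIC of a Gaussian linear model at the given coefficients, with the error
    variance profiled out (sigma^2 = RSS / n)."""
    n = len(y)
    resid = y - X[:, included] @ beta
    rss = float(resid @ resid)
    k = len(included) + 1                 # coefficients plus the error variance
    return n * np.log(rss / n) + 2 * k

def transdimensional_sa(y, X, n_iter=20_000, t0=5.0, cooling=0.9995):
    p = X.shape[1]
    included = [0]                        # start with the first column only
    beta = np.zeros(1)
    current = aic(y, X, included, beta)
    temp = t0
    best = (current, list(included), beta.copy())
    for _ in range(n_iter):
        move = rng.choice(["perturb", "add", "drop"])
        new_inc, new_beta = list(included), beta.copy()
        if move == "perturb" and included:
            j = rng.integers(len(included))
            new_beta[j] += rng.normal(scale=0.1)
        elif move == "add" and len(included) < p:
            cand = rng.choice([j for j in range(p) if j not in included])
            new_inc = included + [int(cand)]
            new_beta = np.append(beta, rng.normal(scale=0.1))
        elif move == "drop" and len(included) > 1:
            j = rng.integers(len(included))
            new_inc = included[:j] + included[j + 1:]
            new_beta = np.delete(beta, j)
        proposal = aic(y, X, new_inc, new_beta)
        # Accept improvements always, worse moves with temperature-controlled probability.
        if proposal <= current or rng.uniform() < np.exp(-(proposal - current) / temp):
            included, beta, current = new_inc, new_beta, proposal
            if current < best[0]:
                best = (current, list(included), beta.copy())
        temp *= cooling
    return best

# Toy data: only columns 0 and 2 matter.
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.5, size=n)
print(transdimensional_sa(y, X))
```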

4.
In this paper we present a review of population-based simulation for static inference problems. Such methods can be described as generating a collection of random variables {X_n}_{n=1,…,N} in parallel in order to simulate from some target density π (or potentially a sequence of target densities). Population-based simulation is important as many challenging sampling problems in applied statistics cannot be dealt with successfully by conventional Markov chain Monte Carlo (MCMC) methods. We summarize population-based MCMC (Geyer, Computing Science and Statistics: The 23rd Symposium on the Interface, pp. 156–163, 1991; Liang and Wong, J. Am. Stat. Assoc. 96, 653–666, 2001) and sequential Monte Carlo (SMC) samplers (Del Moral, Doucet and Jasra, J. Roy. Stat. Soc. Ser. B 68, 411–436, 2006a), providing a comparison of the approaches. We give numerical examples from Bayesian mixture modelling (Richardson and Green, J. Roy. Stat. Soc. Ser. B 59, 731–792, 1997).
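Population-based MCMC in its simplest form is parallel tempering: several chains target increasingly flattened versions of π and occasionally exchange states. Below is a minimal sketch on a deliberately bimodal one-dimensional target; the temperatures, step size and target are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    """A well-separated two-component normal mixture: a standard hard case
    for a single random-walk chain."""
    return np.logaddexp(-0.5 * (x + 4.0) ** 2, -0.5 * (x - 4.0) ** 2)

def parallel_tempering(n_iter=20_000, temps=(1.0, 2.0, 4.0, 8.0), step=1.0):
    K = len(temps)
    x = np.zeros(K)                       # one chain per temperature
    samples = []
    for _ in range(n_iter):
        # Within-chain random-walk Metropolis updates on the tempered targets.
        for k in range(K):
            prop = x[k] + rng.normal(scale=step)
            log_a = (log_target(prop) - log_target(x[k])) / temps[k]
            if np.log(rng.uniform()) < log_a:
                x[k] = prop
        # Exchange move between a randomly chosen pair of adjacent temperatures.
        k = rng.integers(K - 1)
        log_a = (log_target(x[k + 1]) - log_target(x[k])) * (1 / temps[k] - 1 / temps[k + 1])
        if np.log(rng.uniform()) < log_a:
            x[k], x[k + 1] = x[k + 1], x[k]
        samples.append(x[0])              # keep only the "cold" chain
    return np.array(samples)

draws = parallel_tempering()
print(draws.mean(), (draws > 0).mean())   # should be near 0 and near 0.5
```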

5.
This paper presents the Bayesian analysis of a semiparametric regression model that consists of parametric and nonparametric components. The nonparametric component is represented with a Fourier series where the Fourier coefficients are assumed a priori to have zero means and to decay to 0 in probability at either algebraic or geometric rates. The rate of decay controls the smoothness of the response function. The posterior analysis automatically selects the amount of smoothing that is coherent with the model and data. Posterior probabilities of the parametric and semiparametric models provide a method for testing the parametric model against a non-specific alternative. The Bayes estimator's mean integrated squared error compares favourably with that of the theoretically optimal kernel regression estimator.
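The role of the decaying prior variances can be seen in a small sketch. Assuming a single parametric covariate, a known error variance, and a geometric decay rate rho (all invented for illustration; the paper's prior and posterior computations are richer), the posterior mean of the coefficients is a generalized ridge estimator in which high-frequency Fourier terms are shrunk most strongly.

```python
import numpy as np

rng = np.random.default_rng(3)

def fourier_basis(t, n_freq):
    """Columns cos(2*pi*j*t), sin(2*pi*j*t) for j = 1..n_freq, with t in [0, 1]."""
    cols = []
    for j in range(1, n_freq + 1):
        cols.append(np.cos(2 * np.pi * j * t))
        cols.append(np.sin(2 * np.pi * j * t))
    return np.column_stack(cols)

def posterior_mean(y, x, t, n_freq=15, sigma2=0.25, tau2=1.0, rho=0.6):
    """Posterior mean under a N(0, tau2 * rho**j) prior on the frequency-j Fourier
    coefficients (geometric decay => smoother fits) and a vague N(0, 100) prior
    on the parametric coefficient."""
    B = fourier_basis(t, n_freq)
    Z = np.column_stack([x, B])                 # parametric + nonparametric parts
    prior_var = np.concatenate([[100.0],
                                np.repeat(tau2 * rho ** np.arange(1, n_freq + 1), 2)])
    A = Z.T @ Z / sigma2 + np.diag(1.0 / prior_var)
    mean = np.linalg.solve(A, Z.T @ y / sigma2)
    return mean, Z

# Toy data: a linear effect of x plus a smooth periodic signal in t.
n = 300
t = rng.uniform(size=n)
x = rng.normal(size=n)
y = 2.0 * x + np.sin(2 * np.pi * t) + 0.3 * np.cos(6 * np.pi * t) + rng.normal(0, 0.5, n)
coef, Z = posterior_mean(y, x, t)
print("parametric coefficient ~", coef[0])      # should be close to 2
```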

6.
The article considers a Gaussian model with the mean and the variance modeled flexibly as functions of the independent variables. The estimation is carried out using a Bayesian approach that allows the identification of significant variables in the variance function, as well as averaging over all possible models in both the mean and the variance functions. The computation is carried out by a simulation method that is carefully constructed to ensure that it converges quickly and produces iterates from the posterior distribution that have low correlation. Real and simulated examples demonstrate that the proposed method works well. The method in this paper is important because (a) it produces more realistic prediction intervals than nonparametric regression estimators that assume a constant variance; (b) variable selection identifies the variables in the variance function that are important; (c) variable selection and model averaging produce more efficient prediction intervals than those obtained by regular nonparametric regression.

7.
We develop a Markov chain Monte Carlo algorithm, based on ‘stochastic search variable selection’ (George and McCulloch, 1993), for identifying promising log-linear models. The method may be used in the analysis of multi-way contingency tables where the set of plausible models is very large.
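To convey the flavour of stochastic search variable selection, here is a minimal Gibbs sampler for ordinary linear regression rather than for log-linear models (the contingency-table case needs additional steps for the cell means); the mixture-of-normals prior scales, the prior inclusion probability and the data are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def ssvs(y, X, n_iter=5000, tau=0.1, c=10.0, p_incl=0.5, a0=1.0, b0=1.0):
    """Gibbs sampler in the spirit of stochastic search variable selection:
    each coefficient has a mixture-of-normals prior, N(0, tau^2) if 'excluded'
    and N(0, (c*tau)^2) if 'included', and the inclusion indicators are drawn
    from their conditional Bernoulli distributions."""
    n, p = X.shape
    gamma = np.ones(p, dtype=int)
    sigma2 = 1.0
    gammas = np.zeros((n_iter, p))
    XtX, Xty = X.T @ X, X.T @ y
    for it in range(n_iter):
        # 1. beta | gamma, sigma2, y  (multivariate normal)
        prior_var = np.where(gamma == 1, (c * tau) ** 2, tau ** 2)
        cov = np.linalg.inv(XtX / sigma2 + np.diag(1.0 / prior_var))
        beta = rng.multivariate_normal(cov @ Xty / sigma2, cov)
        # 2. gamma_j | beta_j  (Bernoulli, comparing slab and spike densities)
        a = p_incl * stats.norm.pdf(beta, scale=c * tau)
        b = (1 - p_incl) * stats.norm.pdf(beta, scale=tau)
        gamma = rng.binomial(1, a / (a + b))
        # 3. sigma2 | beta, y  (inverse gamma)
        rss = float(np.sum((y - X @ beta) ** 2))
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + rss / 2))
        gammas[it] = gamma
    return gammas.mean(axis=0)             # posterior inclusion frequencies

# Toy example: only the first two of five covariates matter.
n, p = 150, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)
print(ssvs(y, X))
```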

8.
We propose a Bayesian nonparametric instrumental variable approach under additive separability that allows us to correct for endogeneity bias in regression models where the covariate effects enter with unknown functional form. Bias correction relies on a simultaneous equations specification with flexible modeling of the joint error distribution implemented via a Dirichlet process mixture prior. Both the structural and the instrumental variable equations are specified in terms of additive predictors comprising penalized splines for nonlinear effects of continuous covariates. Inference is fully Bayesian, employing efficient Markov chain Monte Carlo simulation techniques. The resulting posterior samples not only provide point estimates but also allow us to construct simultaneous credible bands for the nonparametric effects, including data-driven smoothing parameter selection. In addition, improved robustness properties are achieved due to the flexible error distribution specification. Both features are challenging in the classical framework, making the Bayesian approach advantageous. A simulation study investigates small-sample properties, and an investigation of the effect of class size on student performance in Israel illustrates the proposed approach, which is implemented in the R package bayesIV. Supplementary materials for this article are available online.

9.
Hidden Markov models form an extension of mixture models which provides a flexible class of models exhibiting dependence and a possibly large degree of variability. We show how reversible jump Markov chain Monte Carlo techniques can be used to estimate the parameters as well as the number of components of a hidden Markov model in a Bayesian framework. We employ a mixture of zero-mean normal distributions as our main example and apply this model to three sets of data from finance, meteorology and geomagnetism.

10.
Summary.  The method of Bayesian model selection for join point regression models is developed. Given a set of K+1 join point models M_0, M_1, …, M_K with 0, 1, …, K join points respectively, the posterior distributions of the parameters and competing models M_k are computed by Markov chain Monte Carlo simulations. The Bayes information criterion BIC is used to select the model M_k with the smallest value of BIC as the best model. Another approach based on the Bayes factor selects the model M_k with the largest posterior probability as the best model when the prior distribution of M_k is discrete uniform. Both methods are applied to analyse the observed US cancer incidence rates for some selected cancer sites. The graphs of the join point models fitted to the data are produced by using the methods proposed and compared with the method of Kim and co-workers that is based on a series of permutation tests. The analyses show that the Bayes factor is sensitive to the prior specification of the variance σ^2, and that the model which is selected by BIC fits the data as well as the model that is selected by the permutation test and has the advantage of producing the posterior distribution for the join points. The Bayesian join point model and model selection method that are presented here will be integrated in the National Cancer Institute's join point software ( http://www.srab.cancer.gov/joinpoint/ ) and will be available to the public.
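The BIC comparison underlying this kind of selection can be illustrated with a deliberately simplified, non-Bayesian stand-in: a grid search over a single join point in a piecewise-linear trend, counting the join point location as one extra parameter. The data and the parameter-counting convention are assumptions made for the example, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

def bic_linear_fit(y, X, extra_params=0):
    """BIC of an ordinary least-squares fit (Gaussian errors, variance profiled out)."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    k = X.shape[1] + 1 + extra_params       # coefficients + error variance (+ knot)
    return n * np.log(rss / n) + k * np.log(n)

def best_joinpoint_model(x, y, candidate_knots):
    """Compare the no-join-point model with every single-join-point model on a
    grid of candidate knots; return the model with the smallest BIC."""
    results = {"no join point": bic_linear_fit(y, np.column_stack([np.ones_like(x), x]))}
    for k in candidate_knots:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - k, 0, None)])
        results[f"join point at {k:.1f}"] = bic_linear_fit(y, X, extra_params=1)
    best = min(results, key=results.get)
    return best, results[best]

# Toy incidence-like trend with a change in slope at x = 10.
x = np.arange(25, dtype=float)
y = 5 + 0.4 * x + 0.8 * np.clip(x - 10, 0, None) + rng.normal(0, 0.5, size=x.size)
print(best_joinpoint_model(x, y, candidate_knots=x[2:-2]))
```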

11.
In this article, we perform Bayesian estimation of stochastic volatility models with heavy-tailed distributions using Metropolis-adjusted Langevin (MALA) and Riemann manifold Langevin (MMALA) methods. We provide analytical expressions for the application of these methods, assess the performance of these methodologies on simulated data, and illustrate their use on two financial time series datasets.
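A generic MALA kernel (not the authors' stochastic volatility implementation) shows the two ingredients involved: a Langevin drift proposal built from the gradient of the log-posterior, and a Metropolis–Hastings correction that accounts for the asymmetry of that proposal. The correlated Gaussian target below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def mala(log_post, grad_log_post, x0, n_iter=10_000, eps=0.3):
    """Metropolis-adjusted Langevin algorithm: propose
    x' = x + (eps^2 / 2) * grad log pi(x) + eps * N(0, I)
    and accept/reject using the asymmetric proposal density."""
    def log_q(to, frm):                     # log density of the Langevin proposal
        mean = frm + 0.5 * eps ** 2 * grad_log_post(frm)
        return -0.5 * np.sum((to - mean) ** 2) / eps ** 2
    x = np.asarray(x0, dtype=float)
    out = np.empty((n_iter, x.size))
    for i in range(n_iter):
        prop = x + 0.5 * eps ** 2 * grad_log_post(x) + eps * rng.normal(size=x.size)
        log_a = (log_post(prop) - log_post(x)) + (log_q(x, prop) - log_q(prop, x))
        if np.log(rng.uniform()) < log_a:
            x = prop
        out[i] = x
    return out

# Illustration on a correlated bivariate normal target.
S = np.array([[1.0, 0.9], [0.9, 1.0]])
P = np.linalg.inv(S)
log_post = lambda x: -0.5 * x @ P @ x
grad_log_post = lambda x: -P @ x
draws = mala(log_post, grad_log_post, x0=[3.0, -3.0])
print(draws[2000:].mean(axis=0))            # should be close to (0, 0)
print(np.cov(draws[2000:].T))               # should be close to S
```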

12.
Linear models are a primary statistical technique in the experimental sciences. A major topic in this area is the detection of influential subsets of data, that is, of observations that are influential through their effect on the estimation of the parameters of a linear regression or of total population parameters. Numerous studies on radiocarbon dating propose a consensus value and remove possible outliers after the corresponding testing. This article develops an influence analysis for the consensus value from a Bayesian perspective.

13.
Abstract. We investigate simulation methodology for Bayesian inference in Lévy-driven stochastic volatility (SV) models. Typically, Bayesian inference for such models is performed using Markov chain Monte Carlo (MCMC); this is often a challenging task. Sequential Monte Carlo (SMC) samplers are methods that can improve over MCMC; however, there are many user-set parameters to specify. We develop a fully automated SMC algorithm, which substantially improves over the standard MCMC methods in the literature. To illustrate our methodology, we consider a model comprising a Heston model with an independent, additive variance gamma process in the returns equation. The driving gamma process can capture the stylized behaviour of many financial time series, and a discretized version, fitted in a Bayesian manner, has been found to be very useful for modelling equity data. We demonstrate that it is possible to draw exact inference, in the sense of no time-discretization error, from the Bayesian SV model.

14.
Common loss functions used for the restoration of grey scale images include the zero–one loss and the sum of squared errors. The corresponding estimators, the posterior mode and the posterior marginal mean, are optimal Bayes estimators with respect to their way of measuring the loss for different error configurations. However, both these loss functions have a fundamental weakness: the loss does not depend on the spatial structure of the errors. This is important because a systematic structure in the errors can lead to misinterpretation of the estimated image. We propose a new loss function that also penalizes strong local sample covariance in the error and we discuss how the optimal Bayes estimator can be estimated using a two-step Markov chain Monte Carlo and simulated annealing algorithm. We present simulation results for some artificial data which show improvement with respect to small structures in the image.
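A loss of this flavour can be written down directly. The sketch below uses a hypothetical penalty, not the paper's exact loss: it adds to the sum of squared errors a term proportional to the average product of neighbouring error pixels, so that clustered errors cost more than scattered errors of the same total magnitude.

```python
import numpy as np

def structured_loss(true_img, est_img, lam=4.0):
    """Sum of squared errors plus a penalty on local error covariance: the average
    product of horizontally and vertically adjacent error pixels, so spatially
    clustered errors cost more than scattered ones of the same magnitude."""
    e = est_img.astype(float) - true_img.astype(float)
    sse = float(np.sum(e ** 2))
    horiz = e[:, :-1] * e[:, 1:]          # products of horizontally adjacent errors
    vert = e[:-1, :] * e[1:, :]           # products of vertically adjacent errors
    local_cov = float(np.sum(horiz) + np.sum(vert)) / (horiz.size + vert.size)
    # Only positive local covariance is penalized in this illustrative choice.
    return sse + lam * e.size * max(local_cov, 0.0)

rng = np.random.default_rng(7)
truth = np.zeros((32, 32))
scattered = truth.copy(); clustered = truth.copy()
idx = rng.choice(32 * 32, size=16, replace=False)
scattered.flat[idx] = 1.0                 # 16 isolated wrong pixels
clustered[10:14, 10:14] = 1.0             # the same number of errors in one block
print(structured_loss(truth, scattered), structured_loss(truth, clustered))
# The clustered errors incur the larger loss even though the SSE is identical.
```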

15.
We propose a simulation-based approach to decision theoretic Bayesian optimal design. The underlying probability model is a population pharmacokinetic model which allows for correlated responses (drug concentrations) and patient-to-patient heterogeneity. We consider the problem of choosing sampling times for the anticancer agent paclitaxel, using criteria related to the total area under the curve, the time above a critical threshold and the sampling cost.

16.
In this article, we develop a Bayesian variable selection method for choosing covariates in the Poisson change-point regression model with both discrete and continuous candidate covariates. Ranging from a null model with no selected covariates to a full model including all covariates, the method searches the entire model space, estimates posterior inclusion probabilities of the covariates, and obtains model-averaged estimates of the covariate coefficients, while simultaneously estimating a time-varying baseline rate arising from the change-points. For posterior computation, a Metropolis-Hastings within partially collapsed Gibbs sampler is developed to fit the Poisson change-point regression model with variable selection efficiently. We illustrate the proposed method using simulated and real datasets.

17.
Often the dependence in multivariate survival data is modeled through an individual-level effect called the frailty. Because of its mathematical simplicity, the gamma distribution is often used as the frailty distribution for hazard modeling. However, it is well known that the gamma frailty distribution has many drawbacks; for example, it weakens the effect of covariates. In addition, in the presence of a multilevel model, the overall frailty comes from several levels. To overcome such drawbacks, heavier-tailed distributions are needed for the frailty in order to incorporate extra variability. In this article, we develop a class of log-skew-t distributions for the frailty. This class includes many heavy-tailed distributions, e.g., the log-Cauchy, log-normal, and log-t, as special cases.

Conditional on the frailty, the survival times are assumed to be independent with a proportional hazards structure. The modeling process is then completed by assuming multilevel frailty effects. Rather than imposing a strict parameterization of the baseline hazard function, we take the partial likelihood approach and leave the baseline function unspecified. Eliminating the baseline hazard simplifies both specification and computation considerably.

18.
This article considers experimental costs, in addition to power, in determining the sample size of an experiment. We focus on the use of standard tools of decision theory in the context of sample size determination. The loss function is defined from the perspective of an experimenter who adopts the classical frequentist approach, and the risk function is computed. We then examine the behavior of the risk function for the two-sample t-test in small, medium-sized, and large sample settings. Moreover, an objective criterion for a convenient sample size choice is introduced. Finally, a practical example of sample size determination, which also includes the risk computation, is shown.
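The kind of risk computation described can be sketched as follows, assuming a linear sampling cost plus a loss proportional to the type II error probability of the two-sided two-sample t-test. The cost and loss constants and the effect size are invented for the example; the paper's loss function is not reproduced here.

```python
import numpy as np
from scipy import stats

def power_two_sample_t(n_per_group, delta, alpha=0.05):
    """Power of the two-sided two-sample t-test for a standardized effect size
    delta with n_per_group observations in each arm."""
    df = 2 * n_per_group - 2
    nc = delta * np.sqrt(n_per_group / 2.0)          # noncentrality parameter
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    return 1 - stats.nct.cdf(tcrit, df, nc) + stats.nct.cdf(-tcrit, df, nc)

def risk(n_per_group, delta, cost_per_obs=1.0, loss_missed=500.0):
    """A simple frequentist risk: sampling cost plus a fixed loss weighted by
    the probability of failing to detect the effect."""
    return (cost_per_obs * 2 * n_per_group
            + loss_missed * (1 - power_two_sample_t(n_per_group, delta)))

ns = np.arange(5, 200)
risks = [risk(n, delta=0.5) for n in ns]
n_star = int(ns[int(np.argmin(risks))])
print("risk-minimizing n per group:", n_star,
      "power:", round(float(power_two_sample_t(n_star, 0.5)), 3))
```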

19.
We consider the problem of how to design dose finding studies efficiently and safely. Both current and novel utility functions are explored using Bayesian adaptive design methodology for the estimation of a maximum tolerated dose (MTD). In particular, we explore widely adopted approaches such as the continual reassessment method and minimizing the variance of the estimate of an MTD. New utility functions are constructed in the Bayesian framework and are evaluated against current approaches. To reduce computing time, importance sampling is implemented to re-weight posterior samples, thus avoiding the need to draw samples using Markov chain Monte Carlo techniques. Further, as such studies are generally first-in-man, the safety of patients is paramount. We therefore explore methods for incorporating safety considerations into utility functions to ensure that only safe and well-predicted doses are administered. The amalgamation of Bayesian methodology, adaptive design and compound utility functions is termed adaptive Bayesian compound design (ABCD). The performance of this combined methodology is investigated via the simulation of dose finding studies. The paper concludes with a discussion of results and extensions that could be included in our approach.
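The importance sampling re-weighting step can be illustrated with a one-parameter power ("CRM-style") dose-toxicity model. The skeleton, prior, and trial data below are invented, and the decision rule is simply "dose whose posterior mean toxicity is closest to target" rather than the compound utilities studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

def is_reweighted_mtd(skeleton, doses_given, tox_observed, target=0.3, n_draws=50_000):
    """Importance-sampling approximation to the posterior of a one-parameter
    power dose-toxicity model: p_tox(d) = skeleton[d] ** exp(a), a ~ N(0, 1).
    Prior draws are re-weighted by the likelihood of the observed outcomes,
    avoiding MCMC."""
    skeleton = np.asarray(skeleton)
    a = rng.normal(size=n_draws)                       # draws from the prior
    p = skeleton[None, :] ** np.exp(a)[:, None]        # toxicity curve for each draw
    p_obs = p[:, doses_given]                          # probabilities at administered doses
    loglik = np.sum(np.where(np.asarray(tox_observed, dtype=bool),
                             np.log(p_obs), np.log1p(-p_obs)), axis=1)
    w = np.exp(loglik - loglik.max())
    w /= w.sum()                                       # self-normalized importance weights
    post_tox = w @ p                                   # posterior mean toxicity per dose
    mtd = int(np.argmin(np.abs(post_tox - target)))
    return mtd, post_tox

skeleton = [0.05, 0.12, 0.25, 0.40, 0.55]              # prior guesses of toxicity by dose
doses = [0, 0, 1, 1, 2, 2, 2]                          # dose levels given so far
tox   = [0, 0, 0, 1, 0, 1, 1]                          # observed toxicity outcomes
print(is_reweighted_mtd(skeleton, doses, tox))
```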

20.
Prediction of possible cliff erosion at some future date is fundamental to coastal planning and shoreline management, for example to avoid development in vulnerable areas. Historically, deterministic methods were used to predict cliff recession rates. More recently, recession predictions have been expressed in probabilistic terms; however, to date, only simplistic models have been developed. We consider the cliff erosion along the Holderness Coast. A monitoring program, started in 1951, covers 118 stations along the coast, providing an invaluable, but often incomplete, source of information. We build hierarchical random-effects models, taking account of the known dynamics of the process and handling the missing information.
