Similar Articles
1.
Markov chain Monte Carlo (MCMC) algorithms have been shown to be useful for estimation of complex item response theory (IRT) models. Although an MCMC algorithm can be very useful, it also requires care in use and interpretation of results. In particular, MCMC algorithms generally make extensive use of priors on model parameters. In this paper, MCMC estimation is illustrated using a simple mixture IRT model, a mixture Rasch model (MRM), to demonstrate how the algorithm operates and how results may be affected by some commonly used priors. Priors on the probabilities of mixtures, label switching, model selection, metric anchoring, and implementation of the MCMC algorithm using WinBUGS are described, and their effects on parameter recovery in practical testing situations are illustrated. In addition, an example is presented in which an MRM is fitted to a set of educational test data using the MCMC algorithm, and the results are compared with those from three existing maximum likelihood estimation methods.
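As a rough illustration of the model being estimated, the sketch below evaluates the marginal log-likelihood of a two-class mixture Rasch model in Python, conditional on person abilities. All names (`theta`, `b`, `pi`) and the fixed two-class setup are assumptions made for this example, not the authors' code.

```python
import numpy as np

def mrm_loglik(x, theta, b, pi):
    # x: (N, J) binary responses; theta: (N,) person abilities;
    # b: (G, J) class-specific item difficulties; pi: (G,) mixture weights.
    ll = 0.0
    for i in range(x.shape[0]):
        class_lik = np.empty(len(pi))
        for g in range(len(pi)):
            p = 1.0 / (1.0 + np.exp(-(theta[i] - b[g])))  # Rasch item response function
            class_lik[g] = pi[g] * np.prod(p ** x[i] * (1 - p) ** (1 - x[i]))
        ll += np.log(class_lik.sum())  # marginalize over latent class membership
    return ll
```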

2.
In recent years, there has been considerable interest in regression models based on zero-inflated distributions. These models are commonly encountered in many disciplines, such as medicine, public health, and environmental sciences, among others. The zero-inflated Poisson (ZIP) model has typically been considered for these types of problems. However, the ZIP model can fail if the non-zero counts are overdispersed in relation to the Poisson distribution, in which case the zero-inflated negative binomial (ZINB) model may be more appropriate. In this paper, we present a Bayesian approach for fitting the ZINB regression model. This model considers that an observed zero may come from a point mass distribution at zero or from the negative binomial model. The likelihood function is utilized not only to compute some Bayesian model selection measures, but also to develop Bayesian case-deletion influence diagnostics based on q-divergence measures. The approach can be easily implemented using standard Bayesian software, such as WinBUGS. The performance of the proposed method is evaluated with a simulation study. Further, a real data set is analyzed, where we show that the ZINB regression model seems to fit the data better than its Poisson counterpart.
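As a hedged sketch of the distribution such models are built on, the function below evaluates the ZINB log-pmf in Python; the parameterisation (mean `mu`, dispersion `k`, zero-inflation probability `p0`) is one common convention assumed for illustration, not necessarily the paper's.

```python
import numpy as np
from scipy.stats import nbinom

def zinb_logpmf(y, mu, k, p0):
    # NB parameterised by mean mu and size k, so that Var = mu + mu**2 / k;
    # p0 is the probability of the point mass at zero (assumed names).
    prob = k / (k + mu)                        # scipy's "success probability"
    nb = nbinom.pmf(y, k, prob)
    pmf = np.where(y == 0, p0 + (1 - p0) * nb, (1 - p0) * nb)
    return np.log(pmf)
```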

3.
This article discusses optimal confidence estimation for the geometric parameter and shows how different criteria can be used for evaluating confidence sets within the framework of tail functions theory. The confidence interval obtained using a particular tail function is studied and shown to outperform others, in the sense of having smaller width or expected width under a specified weight function. It is also shown that it may not be possible to find the most powerful test regarding the parameter using the Neyman-Pearson lemma. The theory is illustrated by application to a fecundability study.

4.
A meta-analysis of a continuous outcome measure may involve missing standard errors. Whether this is a problem depends on the assumptions made about the population standard deviations. Multiple imputation can be used to impute missing values while allowing for uncertainty in the imputation. Markov chain Monte Carlo simulation is a multiple imputation technique for generating posterior predictive distributions for missing data. We present an example of imputing missing variances using WinBUGS. The example highlights the importance of checking model assumptions, whether for missing or observed data.
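A minimal sketch of the idea in Python, under the strong assumption that the observed log-variances are exchangeable draws from a normal distribution with a flat prior; the function name and setup are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def impute_log_variances(obs_logvar, n_missing, n_imputations=20):
    # Draw imputations for missing log-variances from a normal model fitted
    # to the observed ones, propagating parameter uncertainty by drawing
    # (mu, sigma^2) from their posterior under a flat prior.
    n = len(obs_logvar)
    m, s2 = np.mean(obs_logvar), np.var(obs_logvar, ddof=1)
    draws = []
    for _ in range(n_imputations):
        sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)   # sigma^2 | data
        mu = rng.normal(m, np.sqrt(sigma2 / n))        # mu | sigma^2, data
        draws.append(rng.normal(mu, np.sqrt(sigma2), size=n_missing))
    return np.array(draws)   # (n_imputations, n_missing) imputed values
```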

5.
The paper describes how to use hierarchical models to assess the reliability of, and agreement between, two or more types of measurement device. The idea is illustrated by fitting a linear model with nested random effects to a set of data obtained from the calibration of two samples of extremely low frequency magnetic field meters. The paper focuses on the formulation of a suitable model that accounts for the various aspects of the calibration protocol and on the subsequent interpretation of the parameter estimates. The approach is very flexible and can easily be tuned to the various needs arising in the measurement agreement framework. It can be seen as an extension of the common practice of using a one-way random-effects model to retrieve a measure of agreement.
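As a toy stand-in for the nested hierarchical model, the sketch below computes the one-way random-effects intraclass correlation that the paper describes as common practice; balanced data and the variable names are assumptions of the example.

```python
import numpy as np

def one_way_icc(y):
    # y: (n_subjects, n_replicates) measurements from one device type.
    n, k = y.shape
    grand = y.mean()
    msb = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)              # between-subject MS
    msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)   # ICC(1): agreement of replicates
```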

6.
Markov chain Monte Carlo techniques have revolutionized the field of Bayesian statistics. Their power is so great that they can even accommodate situations in which the structure of the statistical model itself is uncertain. However, the analysis of such trans-dimensional (TD) models is not easy and available software may lack the flexibility required for dealing with the complexities of real data, often because it does not allow the TD model to be simply part of some bigger model. In this paper we describe a class of widely applicable TD models that can be represented by a generic graphical model, which may be incorporated into arbitrary other graphical structures without significantly affecting the mechanism of inference. We also present a decomposition of the reversible jump algorithm into abstract and problem-specific components, which provides infrastructure for applying the method to all models in the class considered. These developments represent a first step towards a context-free method for implementing TD models that will facilitate their use by applied scientists for the practical exploration of model uncertainty. Our approach makes use of the popular WinBUGS framework as a sampling engine and we illustrate its use via two simple examples in which model uncertainty is a key feature.

7.
This paper surveys various shrinkage, smoothing and selection priors from a unifying perspective and shows how to combine them for Bayesian regularisation in the general class of structured additive regression models. As a common feature, all regularisation priors are conditionally Gaussian, given further parameters regularising model complexity. Hyperpriors for these parameters encourage shrinkage, smoothness or selection. It is shown that these regularisation (log-) priors can be interpreted as Bayesian analogues of several well-known frequentist penalty terms. Inference can be carried out with unified and computationally efficient MCMC schemes, estimating regularised regression coefficients and basis function coefficients simultaneously with complexity parameters and measuring uncertainty via corresponding marginal posteriors. For variable and function selection we discuss several variants of spike and slab priors which can also be cast into the framework of conditionally Gaussian priors. The performance of the Bayesian regularisation approaches is demonstrated in a hazard regression model and a high-dimensional geoadditive regression model.
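One concrete, well-known instance of such a conditionally Gaussian prior is the Bayesian lasso, whose marginal log-prior reproduces the frequentist L1 penalty; the display below is a generic textbook formulation given for orientation, not a formula taken from the paper.

```latex
\begin{aligned}
\beta_j \mid \tau_j^2 &\sim \mathcal{N}\bigl(0, \tau_j^2\bigr), \qquad
\tau_j^2 \sim \operatorname{Exp}\!\bigl(\tfrac{\lambda^2}{2}\bigr),\\
\Rightarrow\quad p(\beta_j) &= \tfrac{\lambda}{2}\, e^{-\lambda |\beta_j|},
\qquad -\log p(\beta_j) = \lambda\,|\beta_j| + \text{const},
\end{aligned}
```

so integrating out the conditional variance recovers the lasso penalty on the coefficient.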

8.
In recent years, survival analysis of radio-tagged animals has developed using methods based on the Kaplan-Meier estimator used in medical and engineering applications (Pollock et al., 1989a, b). An important assumption of this approach is that all tagged animals with a functioning radio can be relocated at each sampling time with probability 1. This assumption may not always be reasonable in practice. In this paper, we show how a general capture-recapture model can be derived which allows animals to be relocated with some probability less than one. This model is not simply a Jolly-Seber model, because it is possible to relocate both dead and live animals, unlike when traditional tagging is used. The model can also be viewed as a generalization of the Kaplan-Meier procedure, thus linking the Jolly-Seber and Kaplan-Meier approaches to survival estimation. We present maximum likelihood estimators and discuss testing between submodels. We also discuss model assumptions and their validity in practice. An example is presented based on canvasback data collected by G. M. Haramis of Patuxent Wildlife Research Center, Laurel, Maryland, USA.
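For reference, a minimal Kaplan-Meier estimator in Python, i.e., the probability-1 relocation special case that the model above generalizes; the implementation details are an assumption of this sketch.

```python
import numpy as np

def kaplan_meier(times, events):
    # times: event/censoring times; events: 1 if a death was observed,
    # 0 if the animal was censored. Censoring only shrinks the risk set.
    order = np.argsort(times)
    t, e = times[order], events[order]
    n_at_risk = len(t)
    surv, s = [], 1.0
    for i in range(len(t)):
        if e[i] == 1:
            s *= 1.0 - 1.0 / n_at_risk   # survival drops at each observed death
        surv.append(s)
        n_at_risk -= 1
    return t, np.array(surv)
```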

9.
We present a Bayesian evidence synthesis model combining data on seroprevalence, seroconversion and tests of recent infection, to produce estimates of current incidence of toxoplasmosis in the UK. The motivation for the study was the need for an estimate of current average incidence in the UK, with a realistic assessment of its uncertainty, to inform a decision model for a national screening programme to prevent congenital toxoplasmosis. The model has a hierarchical structure over geographic region, a random-walk model for temporal effects and a fixed age effect, with one or more types of data informing the regional estimates of incidence. Inference is obtained by using Markov chain Monte Carlo simulations. A key issue in the synthesis of evidence from multiple sources is model selection and the consistency of different types of evidence. Alternative models of incidence are compared by using the deviance information criterion, and we find that temporal effects are region specific. We assess the consistency of the various forms of evidence by using cross-validation where practical, and posterior and mixed prediction otherwise, and we discuss how these measures can be used to assess different aspects of consistency in a complex evidence structure. We discuss the contribution of the various forms of evidence to estimated current average incidence.

10.
The results of analyzing experimental data using a parametric model may depend heavily on the chosen regression and variance models, and also on any preliminary transformation of the variables. In this paper we propose and discuss a procedure that simultaneously selects parametric regression and variance models from a relatively rich model class, together with Box-Cox variable transformations, by minimizing a cross-validation criterion. For this it is essential to introduce modifications of the standard cross-validation criterion adapted to each of the following objectives: (1) estimation of the unknown regression function; (2) prediction of future values of the response variable; (3) calibration; or (4) estimation of some parameter with a particular meaning in the corresponding field of application. This criterion-oriented combination of procedures (which are usually applied independently or sequentially) is expected to lead to more accurate results. We show how the accuracy of the parameter estimators can be assessed by a "moment-oriented bootstrap procedure", an essential modification of the "wild bootstrap" of Härdle and Mammen that uses more accurate variance estimates. This new procedure, and its refinement by a bootstrap-based pivot ("double bootstrap"), is also used for the construction of confidence, prediction and calibration intervals. Programs written in Splus that implement our strategy for nonlinear regression modelling and parameter estimation are described as well. The performance of the selected model is discussed, and the behaviour of the procedures is illustrated, e.g., by an application in radioimmunological assay.
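A toy Python version of the combined selection idea: leave-one-out cross-validation of a polynomial fit after a Box-Cox transformation of the response. Note that comparing raw CV scores across different transformations is not valid without adjustment, which is precisely the kind of criterion modification the paper addresses; all names here are illustrative, not the Splus programs described above.

```python
import numpy as np

def boxcox(y, lam):
    # Box-Cox transform, assuming y > 0.
    return np.log(y) if lam == 0 else (y ** lam - 1) / lam

def cv_score(x, y, lam, degree):
    # Leave-one-out squared error of a degree-`degree` polynomial fit
    # to the Box-Cox-transformed response (scores compared on the
    # transformed scale only -- see the caveat in the lead-in).
    z = boxcox(y, lam)
    n = len(y)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        coef = np.polyfit(x[mask], z[mask], degree)
        errs[i] = (z[i] - np.polyval(coef, x[i])) ** 2
    return errs.mean()
```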

11.
In some clinical trials and epidemiologic studies, investigators are interested in knowing whether the variability of a biomarker is independently predictive of clinical outcomes. This question is often addressed via a naïve approach in which a sample-based estimate (e.g., the standard deviation) is calculated as a surrogate for the "true" variability and then used in regression models as a covariate assumed to be free of measurement error. However, it is well known that measurement error in covariates causes underestimation of the true association. The underestimation can be substantial when the precision is low because of a limited number of measures per subject. The joint analysis of survival and longitudinal data enables one to account for the measurement error in longitudinal data and has received substantial attention in recent years. In this paper we propose a joint model to assess the predictive effect of biomarker variability. The joint model consists of two linked sub-models: a linear mixed model with patient-specific variance for the longitudinal data and a fully parametric Weibull model for the survival data, with the association between the two sub-models induced by a latent Gaussian process. Parameters in the joint model are estimated in a Bayesian framework, implemented using Markov chain Monte Carlo (MCMC) methods with the WinBUGS software. The method is illustrated in the Ocular Hypertension Treatment Study to assess whether the variability of intraocular pressure is an independent risk factor for primary open-angle glaucoma. The performance of the method is also assessed by simulation studies.
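Schematically, and with notation assumed for this sketch rather than taken from the paper, the two linked sub-models can be written as:

```latex
\begin{aligned}
y_{ij} &= x_{ij}^{\top}\beta + W_i(t_{ij}) + \varepsilon_{ij},
&\qquad \varepsilon_{ij} &\sim \mathcal{N}\bigl(0, \sigma_i^{2}\bigr),\\
h_i(t) &= \lambda\,\kappa\, t^{\kappa-1}
\exp\bigl\{ z_i^{\top}\gamma + \alpha\, W_i(t) + \eta \log \sigma_i \bigr\},
\end{aligned}
```

where \(\sigma_i^2\) is the patient-specific residual variance, \(W_i(t)\) is the shared latent Gaussian process linking the sub-models, and \(\eta\) captures the predictive effect of biomarker variability on the Weibull hazard.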

12.
Ring-recovery methodology has been widely used to estimate survival rates in multi-year ringing studies of wildlife and fish populations (Youngs & Robson, 1975; Brownie et al., 1985). The Brownie et al. (1985) methodology is often used, but its formulation does not account for the fact that rings may be returned in two ways. Sometimes hunters are solicited by a wildlife management officer or scientist and asked whether they shot any ringed birds. Alternatively, a hunter may voluntarily report the ring to the Bird Banding Laboratory (US Fish and Wildlife Service, Laurel, MD, USA), as requested on the ring. Because the Brownie et al. (1985) models only consider reported rings, Conroy (1985) and Conroy et al. (1989) generalized their models to permit solicited rings. Pollock et al. (1991) considered a very similar model for fish tagging studies, which might be combined with angler surveys. Pollock et al. (1994) showed how to apply their generalized formulation, with some modification to allow for crippling losses, to wildlife ringing studies. Provided an estimate of the ring reporting rate is available, hunting and natural mortality estimates can be separated, which provides important management information. Here we review this material and then discuss possible methods of estimating the reporting rate, which include: (1) reward ring studies; (2) use of planted rings; (3) hunter surveys; and (4) pre- and post-hunting-season ringings. We compare and contrast the four methods in terms of their model assumptions and practicality. We also discuss the estimation of crippling loss using pre- and post-season ringing in combination with a reward ringing study to estimate the reporting rate.
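As an example of method (1), the reward ring estimator of the reporting rate reduces to a ratio of recovery rates, assuming reward rings are always reported; a minimal sketch with assumed function and argument names:

```python
def reporting_rate(std_recovered, std_released, rwd_recovered, rwd_released):
    # Assumes reward rings are reported with probability 1 and that both
    # ring types are otherwise subject to the same recovery process.
    f_std = std_recovered / std_released   # recovery rate, standard rings
    f_rwd = rwd_recovered / rwd_released   # recovery rate, reward rings
    return f_std / f_rwd                   # estimated reporting rate
```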

13.
The purpose of this article is to discuss the application of nonlinear models to price decisions in the framework of rating-based product preference models. As revealed by a comparative simulation study, when a nonlinear model is the true model, the traditional linear model fails to properly describe the true pattern. It is unsatisfactory in comparison with nonlinear models, such as the logistic and the natural spline, which offer some advantages, the most important being the ability to capture more than just linear and/or monotonic effects. Consequently, when we model product preference with a nonlinear model, we are potentially able to detect its 'best' price level, i.e., the price at which consumer preference towards a given attribute is at its maximum. From an applied point of view, this approach is very flexible for price decisions and may produce original managerial suggestions that might not be revealed by traditional methods.
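A hedged sketch of the idea with hypothetical data: fit a simple nonlinear curve (a cubic polynomial, standing in for the paper's logistic and natural-spline models) to preference ratings and read off the interior price at which predicted preference peaks.

```python
import numpy as np

# Hypothetical preference ratings that peak at an interior price level.
price = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
rating = np.array([3.1, 4.4, 5.6, 6.0, 5.7, 4.9, 3.8])

coef = np.polyfit(price, rating, 3)            # cubic as a simple nonlinear stand-in
grid = np.linspace(price.min(), price.max(), 401)
best_price = grid[np.argmax(np.polyval(coef, grid))]   # 'best' price level
```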

14.
This paper considers quantile regression models using an asymmetric Laplace distribution from a Bayesian point of view. We develop a simple and efficient Gibbs sampling algorithm for fitting the quantile regression model based on a location-scale mixture representation of the asymmetric Laplace distribution. It is shown that the resulting Gibbs sampler can be accomplished by sampling from either a normal or a generalized inverse Gaussian distribution. We also discuss some possible extensions of our approach, including the incorporation of a scale parameter, the use of a double exponential prior, and a Bayesian analysis of Tobit quantile regression. The proposed methods are illustrated by both simulated and real data.
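A minimal sketch of such a Gibbs sampler in Python, using the standard location-scale mixture representation with the scale fixed at one: the latent weights have generalized inverse Gaussian full conditionals and the coefficients a normal one. The function, prior choice and defaults are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np
from scipy.stats import geninvgauss

rng = np.random.default_rng(0)

def bayes_qreg_gibbs(y, X, p=0.5, n_iter=2000, prior_var=100.0):
    # Quantile level p; mixture representation y = X b + theta*z + tau*sqrt(z)*u.
    n, d = X.shape
    theta = (1 - 2 * p) / (p * (1 - p))
    tau2 = 2.0 / (p * (1 - p))
    beta, z = np.zeros(d), np.ones(n)
    draws = np.empty((n_iter, d))
    for it in range(n_iter):
        # z_i | beta ~ GIG(1/2, chi_i, psi), sampled via scipy's geninvgauss.
        resid = y - X @ beta
        chi = resid ** 2 / tau2 + 1e-12
        psi = theta ** 2 / tau2 + 2.0
        z = geninvgauss.rvs(0.5, np.sqrt(chi * psi),
                            scale=np.sqrt(chi / psi), random_state=rng)
        # beta | z ~ Normal: conjugate update with a N(0, prior_var*I) prior.
        w = 1.0 / (tau2 * z)
        prec = (X * w[:, None]).T @ X + np.eye(d) / prior_var
        cov = np.linalg.inv(prec)
        mean = cov @ ((X * w[:, None]).T @ (y - theta * z))
        beta = rng.multivariate_normal(mean, cov)
        draws[it] = beta
    return draws
```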

15.
Noninferiority testing in clinical trials is commonly understood in a Neyman-Pearson framework, and has been discussed in a Bayesian framework as well. In this paper, we discuss noninferiority testing in a Fisherian framework, in which the only assumption necessary for inference is the randomization of treatments to study subjects. Randomization plays an important role in not only the design but also the analysis of clinical trials, regardless of the underlying inferential framework. The ability to utilize permutation tests depends on exchangeability assumptions, and we discuss the possible uses of permutation tests in active control noninferiority analyses. The other practical implications of this paper are admittedly minor, but they lead to a better understanding of the historical and philosophical development of active control noninferiority testing. The conclusion may also frame discussion of other complicated issues in noninferiority testing, such as the role of an intention-to-treat analysis.
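A minimal sketch of one such permutation approach: shift the treatment arm by the noninferiority margin so that the boundary null corresponds to exchangeable groups, then permute labels. The shift construction and the exchangeability caveat are exactly the kind of assumptions the paper discusses; names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def noninferiority_perm_test(treat, control, margin, n_perm=10_000):
    # One-sided p-value for H0: mean(treat) - mean(control) <= -margin.
    # Shifting the treatment arm by the margin makes the boundary null a
    # plain exchangeability null (an assumption, as discussed above).
    shifted = np.concatenate([treat + margin, control])
    n_t = len(treat)
    obs = shifted[:n_t].mean() - shifted[n_t:].mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(shifted)
        if perm[:n_t].mean() - perm[n_t:].mean() >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)
```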

16.
Probabilistic sensitivity analysis of complex models: a Bayesian approach
In many areas of science and technology, mathematical models are built to simulate complex real world phenomena. Such models are typically implemented in large computer programs and are also very complex, such that the way the model responds to changes in its inputs is not transparent. Sensitivity analysis is concerned with understanding how changes in the model inputs influence the outputs. This may be motivated simply by a wish to understand the implications of a complex model, but it often arises because there is uncertainty about the true values of the inputs that should be used for a particular application. A broad range of measures has been advocated in the literature to quantify and describe the sensitivity of a model's output to variation in its inputs. In practice the most commonly used measures are those based on formulating uncertainty in the model inputs by a joint probability distribution and then analysing the induced uncertainty in the outputs, an approach known as probabilistic sensitivity analysis. We present a Bayesian framework which unifies the various tools of probabilistic sensitivity analysis. The Bayesian approach is computationally highly efficient and allows effective sensitivity analysis to be achieved using far smaller numbers of model runs than standard Monte Carlo methods. Furthermore, all measures of interest may be computed from a single set of runs.
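For concreteness, a plain Monte Carlo estimator of the most common such measure, the first-order sensitivity index S_i = V(E[Y|X_i]) / V(Y), is sketched below. Independent uniform inputs and the pick-and-freeze estimator are assumptions of the sketch; the point of the paper's Bayesian emulator approach is that it needs far fewer model runs than this.

```python
import numpy as np

rng = np.random.default_rng(3)

def first_order_sobol(model, d, n=100_000):
    # model: maps an (n, d) array of inputs to an (n,) array of outputs.
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = model(A), model(B)
    var_y = yA.var()
    s = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # agrees with B in input i, with A elsewhere
        s[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return s

# Toy usage: Y = X1 + 2*X2, so X2's index should be about four times X1's.
print(first_order_sobol(lambda x: x[:, 0] + 2 * x[:, 1], d=2))
```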

17.
This paper uses Bayesian methods, via WinBUGS, to model round robin play in the 2004 Super 12 Rugby Union competition, in order to explore home advantage and how it impacts the outcome of the competition. The scores from the games are decomposed into counts of converted and unconverted tries, penalties and drop goals, which are modelled as Poisson random variables with a log link. The explanatory terms are the offensive and defensive capabilities of the teams, along with terms for home advantage. The model is used to ascertain the effects of home advantage on the standings of the teams in the competition and, from that, how fairness in the competition could be improved.
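A sketch of the resulting rate structure for one scoring type (team and parameter names assumed; the actual model fits separate effects for each scoring category):

```python
import numpy as np

def score_rates(attack, defense, home_adv, home_team, away_team):
    # Log-linear Poisson rates for one scoring type (e.g., converted tries):
    # each side's expected count depends on its attack, the opponent's
    # defense, and a home-advantage term for the hosting side.
    lam_home = np.exp(attack[home_team] - defense[away_team] + home_adv)
    lam_away = np.exp(attack[away_team] - defense[home_team])
    return lam_home, lam_away
```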

18.
This article is addressed to those interested in how Bayesian approaches can be brought to bear on research and development planning and management issues. It provides a conceptual framework for estimating the value of information to environmental policy decisions. The methodology is applied to assess the expected value of research concerning the effects of acidic deposition on forests. To calculate the expected value of research requires modeling the possible actions of policymakers under conditions of uncertainty. Information is potentially valuable only if it leads to actions that differ from the actions that would be taken without the information. The relevant issue is how research on forest effects would change choices of emissions controls from those that would be made in the absence of such research. The approach taken is to model information with a likelihood function embedded in a decision tree describing possible policy options. The value of information is then calculated as a function of information accuracy. The results illustrate how accurate the information must be to have an impact on the choice of policy options. The results also illustrate situations in which additional research can have a negative value.
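The decision-tree calculation can be made concrete with a two-action, two-state toy version in Python: information has value only when some study result flips the optimal action, and the value depends on the accuracy of the likelihood. Everything here (names, payoff structure, numbers) is a schematic assumption, not the article's model.

```python
import numpy as np

def expected_value_of_information(prior, payoff, accuracy):
    # prior: P(state = 1); payoff[a, s]: payoff of action a in state s;
    # accuracy: P(study result = s | state = s), the information accuracy.
    p = np.array([1 - prior, prior])
    # Best expected payoff without the study.
    ev_no_info = max((payoff[a] * p).sum() for a in range(payoff.shape[0]))
    # Average over study results, acting optimally on each posterior.
    ev_with_info = 0.0
    for r in (0, 1):
        like = np.array([accuracy if r == s else 1 - accuracy for s in (0, 1)])
        joint = like * p
        p_r = joint.sum()
        post = joint / p_r
        ev_with_info += p_r * max((payoff[a] * post).sum()
                                  for a in range(payoff.shape[0]))
    return ev_with_info - ev_no_info

# Example: action 0 = no controls, action 1 = controls; state 1 = forests harmed.
payoff = np.array([[0.0, -10.0],   # no controls: large loss if forests harmed
                   [-3.0, -3.0]])  # controls: fixed cost either way
print(expected_value_of_information(prior=0.3, payoff=payoff, accuracy=0.9))
```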

