Similar Documents
 Found 20 similar documents (search time: 359 ms)
1.
Nowadays, Bayesian methods are routinely used for estimating parameters of item response theory (IRT) models. However, the marginal likelihoods are still rarely used for comparing IRT models due to their complexity and a relatively high dimension of the model parameters. In this paper, we review Monte Carlo (MC) methods developed in the literature in recent years and provide a detailed development of how these methods are applied to the IRT models. In particular, we focus on the “best possible” implementation of these MC methods for the IRT models. These MC methods are used to compute the marginal likelihoods under the one-parameter IRT model with the logistic link (1PL model) and the two-parameter logistic IRT model (2PL model) for a real English Examination dataset. We further use the widely applicable information criterion (WAIC) and deviance information criterion (DIC) to compare the 1PL model and the 2PL model. The 2PL model is favored by all three of these Bayesian model comparison criteria for the English Examination data.
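The kind of Monte Carlo marginal-likelihood computation the abstract refers to can be shown in miniature. The sketch below is a hypothetical stand-in, not the paper's IRT setting: it estimates the marginal likelihood of a Beta-Bernoulli model by naive sampling from the prior and checks the estimate against the closed form.

```python
import math
import random

random.seed(0)

# Data: 7 successes in 10 Bernoulli trials.
n, k = 10, 7

def likelihood(theta):
    return theta**k * (1 - theta)**(n - k)

# Naive Monte Carlo estimate of the marginal likelihood
# m = integral of L(theta) p(theta) dtheta, with a Uniform(0,1) prior:
draws = [random.random() for _ in range(200_000)]
m_hat = sum(likelihood(t) for t in draws) / len(draws)

# Closed form under the uniform (Beta(1,1)) prior: Beta(k+1, n-k+1).
m_exact = math.gamma(k + 1) * math.gamma(n - k + 1) / math.gamma(n + 2)

print(m_hat, m_exact)
```

For IRT models the parameter is high-dimensional, which is exactly why such naive prior sampling breaks down and the more refined MC methods reviewed in the paper are needed.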

2.
Recently Jammalamadaka and Mangalam [2003. Non-parametric estimation for middle censored data. J. Nonparametric Statist. 15, 253–265] introduced a general censoring scheme called the “middle-censoring” scheme in a non-parametric set-up. In this paper we consider this middle-censoring scheme when the lifetimes of the items are exponentially distributed and the censoring mechanism is independent and non-informative. In this set-up, we derive the maximum likelihood estimator and study its consistency and asymptotic normality. We also derive the Bayes estimate of the exponential parameter under a gamma prior. Since a theoretical construction of the credible interval is quite difficult, we propose and implement a Gibbs sampling technique to construct the credible intervals. Monte Carlo simulations are performed to evaluate the small-sample behavior of the proposed techniques. A real data set is analyzed to illustrate the practical application of the proposed methods.
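The gamma-exponential conjugacy behind the Bayes estimate can be sketched as follows. This toy uses complete (uncensored) data; the paper's middle-censoring likelihood is more involved, but the conjugate update below is the building block of its Gibbs sampler.

```python
import random

random.seed(1)

# Simulate complete exponential lifetimes with rate theta_true.
theta_true = 2.0
data = [random.expovariate(theta_true) for _ in range(500)]

# Gamma(a, b) prior on the rate; with exponential data the posterior
# is Gamma(a + n, b + sum(x)), so the Bayes estimate under squared
# error loss is the posterior mean.
a, b = 1.0, 1.0
n, s = len(data), sum(data)
bayes_estimate = (a + n) / (b + s)   # posterior mean of the rate

print(bayes_estimate)
```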

3.
We consider importance sampling as well as other properly weighted samples with respect to a target distribution π from a different point of view. By considering the associated weights as sojourn times until the next jump, we define appropriate jump processes. When the original sample sequence forms an ergodic Markov chain, the associated jump process is an ergodic semi-Markov process with stationary distribution π. In this respect, properly weighted samples behave very similarly to standard Markov chain Monte Carlo (MCMC) schemes in that they exhibit convergence to the target distribution as well. Indeed, some standard MCMC procedures like the Metropolis–Hastings algorithm are included in this context. Moreover, when the samples are independent and the mean weight is bounded above, we describe a slight modification in order to achieve exact (weighted) samples from the target distribution.
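The "weights as sojourn times" view can be illustrated with a plain self-normalized importance sampler: each draw x_i is "occupied" for w_i units of time, and time averages converge to expectations under the target. The densities below are toy choices for the sketch, not from the paper.

```python
import math
import random

random.seed(2)

def norm_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# Target pi: N(0, 1).  Proposal q: N(1, 2^2).  In general pi is known
# only up to a constant, hence the self-normalized estimator.
N = 100_000
xs = [random.gauss(1.0, 2.0) for _ in range(N)]
ws = [norm_pdf(x, 0.0, 1.0) / norm_pdf(x, 1.0, 2.0) for x in xs]

# Treating w_i as the sojourn time at x_i, the time average of h
# estimates E_pi[h(X)]:
w_sum = sum(ws)
est_mean = sum(w * x for w, x in zip(ws, xs)) / w_sum        # about 0
est_second = sum(w * x * x for w, x in zip(ws, xs)) / w_sum  # about 1

print(est_mean, est_second)
```

Here the proposal has heavier tails than the target, so the weights are bounded, which is the situation in which the paper's exact-sampling modification applies.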

4.
The use of bivariate distributions plays a fundamental role in survival and reliability studies. In this paper, we consider a location-scale model for bivariate survival times, using a copula to model the dependence of bivariate survival data. For the proposed model, we consider inferential procedures based on maximum likelihood. Gains in efficiency from bivariate models are also examined in the censored-data setting. Simulation studies over different parameter settings, sample sizes and censoring percentages are performed to assess the performance of the bivariate regression model for matched paired survival data. Sensitivity analysis methods such as local and total influence are presented and derived under three perturbation schemes. The marginal martingale and marginal deviance residuals are used to check the adequacy of the model. Furthermore, we propose a new measure, which we call the modified deviance component residual. The methodology is illustrated on a lifetime data set for kidney patients.

5.
In this paper, we deal with bias reduction techniques for heavy tails, trying to improve mainly upon the performance of classical high quantile estimators. High quantiles depend strongly on the tail index γ, for which new classes of reduced-bias estimators have recently been introduced, where the second-order parameters in the bias are estimated at a level k1 of a larger order than the level k at which the tail index is estimated. Doing this, it was seen that the asymptotic variance of the new estimators could be kept equal to the one of the popular Hill estimators. In a similar way, we now introduce new classes of tail index and associated high quantile estimators, with an asymptotic mean squared error smaller than that of the classical ones for all k in a large class of heavy-tailed models. We derive their asymptotic distributional properties and compare them with those of alternative estimators. Next to that, an illustration of the finite sample behavior of the estimators is also provided through a Monte Carlo simulation study and the application to a set of real data in the field of insurance.  
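For context, the classical Hill estimator that the reduced-bias classes compete with fits in a few lines. On exact Pareto data it is unbiased for γ; the bias that the paper reduces appears in more general heavy-tailed models.

```python
import math
import random

random.seed(3)

# Simulate Pareto data with tail index gamma_true = 0.5:
# P(X > x) = x^(-1/gamma) for x >= 1, so X = U^(-gamma), U uniform.
gamma_true = 0.5
n = 5000
sample = sorted((random.random() ** -gamma_true for _ in range(n)), reverse=True)

def hill(x_desc, k):
    """Hill estimator of the tail index gamma from the k largest order
    statistics of a sample sorted in descending order."""
    return sum(math.log(x_desc[i] / x_desc[k]) for i in range(k)) / k

print(hill(sample, 200))   # close to gamma_true = 0.5
```

The choice of k trades bias against variance; the abstract's point is that estimating the second-order bias parameters at a larger level k1 lets one correct the bias without inflating the variance.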

6.
We consider inference in randomized longitudinal studies with missing data that is generated by skipped clinic visits and loss to follow-up. In this setting, it is well known that full data estimands are not identified unless unverified assumptions are imposed. We assume a non-future dependence model for the drop-out mechanism and partial ignorability for the intermittent missingness. We posit an exponential tilt model that links non-identifiable distributions and distributions identified under partial ignorability. This exponential tilt model is indexed by non-identified parameters, which are assumed to have an informative prior distribution, elicited from subject-matter experts. Under this model, full data estimands are shown to be expressed as functionals of the distribution of the observed data. To avoid the curse of dimensionality, we model the distribution of the observed data using a Bayesian shrinkage model. In a simulation study, we compare our approach to a fully parametric and a fully saturated model for the distribution of the observed data. Our methodology is motivated by, and applied to, data from the Breast Cancer Prevention Trial.

7.
In this paper, we propose a frailty model for statistical inference in the case where we are faced with arbitrarily censored and truncated data. Our results extend those of Alioum and Commenges (1996), who developed a method of fitting a proportional hazards model to data of this kind. We discuss the identifiability of the regression coefficients involved in the model which are the parameters of interest, as well as the identifiability of the baseline cumulative hazard function of the model which plays the role of the infinite dimensional nuisance parameter. We illustrate our method with the use of simulated data as well as with a set of real data on transfusion-related AIDS.

8.
The broken stick model is a model of the abundance of species in a habitat, and it has been widely extended. In this paper, we present results from an exploratory data analysis of this model. To obtain some of the statistics, we formulate the broken stick model as a probability distribution and provide an expression for its cumulative distribution function, which is needed for the exploratory analysis. The inequalities we present are useful in ecological studies that apply broken stick models. These results are also useful for testing the goodness of fit of the broken stick model, as an alternative to the chi-square test, which has often been the main test used. They may therefore be used in several alternative and complementary ways for testing the goodness of fit of the broken stick model.
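The broken stick expectations and the resulting distribution function are easy to compute directly. A minimal sketch, using MacArthur's standard formula for the expected relative abundance of the i-th ranked species among S species, p_i = (1/S) * sum_{j=i..S} 1/j:

```python
# Expected relative abundance of the i-th ranked species under the
# broken stick model with S species.
S = 10
p = [(1.0 / S) * sum(1.0 / j for j in range(i, S + 1)) for i in range(1, S + 1)]

# The p_i sum to one, so they define a distribution over ranks; the
# cumulative sums give the distribution function used when comparing
# the model against observed rank-abundance data.
cdf = []
total = 0.0
for pi in p:
    total += pi
    cdf.append(total)

print(p[0], cdf[-1])   # share of the most abundant species; CDF(S) = 1
```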

9.
The topic of heterogeneity in the analysis of recurrent event data has received considerable attention in recent times. Frailty models are widely employed in such situations, as they allow us to model the heterogeneity through a common random effect. In this paper, we introduce a shared frailty model for the gap time distributions of recurrent events with multiple causes. The parameters of the model are estimated using the EM algorithm. An extensive simulation study is used to assess the performance of the method. Finally, we apply the proposed model to a real-life data set.

10.
In this paper we discuss some distributional properties of quadratic functionals of the ordinary and fractional Brownian motions (fBms). For the ordinary Brownian motion (Bm), those properties have been established extensively. The transition from the Bm to the fBm is not straightforward. We explain some of the difficulties associated with the fBm, indicate a way of solving the problem, and offer some conjectures.

11.
The negative binomial (NB) model and the generalized Poisson (GP) model are common alternatives to Poisson models when overdispersion is present in the data. Having accounted for initial overdispersion, we may require further investigation as to whether there is evidence for zero-inflation in the data. Two score statistics are derived from the GP model for testing zero-inflation. These statistics, unlike Wald-type test statistics, do not require that we fit the more complex zero-inflated overdispersed models to evaluate zero-inflation. A simulation study illustrates that the developed score statistics reasonably follow a χ2 distribution and maintain the nominal level. Extensive simulation results also indicate the power behavior is different for including a continuous variable than a binary variable in the zero-inflation (ZI) part of the model. These differences are the basis from which suggestions are provided for real data analysis. Two practical examples are presented in this article. Results from these examples along with practical experience lead us to suggest performing the developed score test before fitting a zero-inflated NB model to the data.
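As a simpler, hypothetical stand-in for the GP-based score statistics the paper derives, van den Broek's (1995) score test for zero-inflation in an ordinary Poisson model shows the general shape of such a statistic: it compares the observed number of zeros with the number the fitted model implies, and no zero-inflated model ever has to be fitted.

```python
import math
import random

random.seed(4)

def rpois(lam):
    """Poisson sampler (Knuth's multiplication method; fine for small lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def zi_score_statistic(y):
    """Score test for zero-inflation against a plain Poisson model.
    Asymptotically chi-square(1) under the null of no inflation."""
    n = len(y)
    lam = sum(y) / n                  # Poisson MLE of the mean
    p0 = math.exp(-lam)               # model-implied P(Y = 0)
    n0 = sum(1 for v in y if v == 0)  # observed zeros
    num = (n0 - n * p0) ** 2
    den = n * p0 * (1 - p0) - n * lam * p0 ** 2
    return num / den

y_h0 = [rpois(1.5) for _ in range(2000)]                      # plain Poisson
y_zi = [0 if random.random() < 0.3 else rpois(1.5)
        for _ in range(2000)]                                 # 30% extra zeros

print(zi_score_statistic(y_h0), zi_score_statistic(y_zi))
```

Under the null the statistic behaves like a chi-square(1) draw; with genuine zero-inflation it is orders of magnitude larger.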

12.
In this paper, we discuss the bivariate Birnbaum-Saunders accelerated lifetime model, in which the dependence structure of bivariate survival data is modeled through frailties. Specifically, we propose the bivariate Birnbaum-Saunders model with the following frailty distributions: gamma, positive stable, and logarithmic series. We present inference and diagnostic analysis for the proposed model; more specifically, we propose a diagnostic analysis based on local influence and residual analysis to assess model fit and to detect influential observations. In this regard, we derive the normal curvatures of local influence under different perturbation schemes, and we perform simulation studies to assess the potential of the residuals to detect misspecification in the systematic component, misspecification in the stochastic component of the model, and outliers. Finally, we apply the methodology to a real data set on recurrent infection times of 38 kidney patients using a portable dialysis machine. We analyze these data both assuming independence within pairs and using the bivariate Birnbaum-Saunders accelerated lifetime model, so as to compare the two approaches and verify the importance of modeling the dependence between infection times of the same patient.

13.
This paper explores the utility of different approaches for modeling longitudinal count data with dropouts arising from a clinical study for the treatment of actinic keratosis lesions on the face and balding scalp. A feature of these data is that as the disease improves for subjects on the active arm, their data show larger dispersion compared with those on the vehicle, exhibiting an over‐dispersion relative to the Poisson distribution. After fitting the marginal (or population averaged) model using the generalized estimating equation (GEE), we note that inferences from such a model might be biased as dropouts are treatment related. Then, we consider using a weighted GEE (WGEE) where each subject's contribution to the analysis is weighted inversely by the subject's probability of dropout. Based on the model findings, we argue that the WGEE might not address the concerns about the impact of dropouts on the efficacy findings when dropouts are treatment related. As an alternative, we consider likelihood‐based inference where random effects are added to the model to allow for heterogeneity across subjects. Finally, we consider a transition model where, unlike the previous approaches that model the log‐link function of the mean response, we model the subject's actual lesion counts. This model is an extension of the Poisson autoregressive model of order 1, where the autoregressive parameter is taken to be a function of treatment as well as other covariates to induce different dispersions and correlations for the two treatment arms. We conclude with a discussion about model selection. Published in 2009 by John Wiley & Sons, Ltd.
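The Poisson AR(1)-style transition idea can be illustrated with binomial thinning. This is a generic INAR(1) sketch, not the paper's exact specification (which lets the autoregressive parameter depend on treatment and covariates): each count is a thinned version of the previous count plus fresh Poisson innovations.

```python
import math
import random

random.seed(5)

def rpois(lam):
    """Poisson sampler via Knuth's multiplication method."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_inar1(alpha, lam, T):
    """INAR(1): Y_t = thin(Y_{t-1}, alpha) + Poisson(lam * (1 - alpha)).
    The stationary distribution is Poisson(lam)."""
    y = [rpois(lam)]
    for _ in range(T - 1):
        survivors = sum(1 for _ in range(y[-1]) if random.random() < alpha)
        y.append(survivors + rpois(lam * (1 - alpha)))
    return y

path = simulate_inar1(alpha=0.6, lam=4.0, T=20_000)
print(sum(path) / len(path))   # close to the stationary mean, 4.0
```

Making alpha a function of covariates, as the paper does, changes the autocorrelation and dispersion of the simulated counts by treatment arm.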

14.
In this paper we present a parsimonious model for the analysis of underreported Poisson count data. In contrast to previously developed methods, we are able to derive analytic expressions for the key marginal posterior distributions that are of interest. The usefulness of this model is explored via a re-examination of previously analysed data covering the purchasing of port wine (Ramos, 1999).
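The core difficulty with underreported counts is easy to see by simulation: if the true counts are Poisson(lam) and each event is recorded independently with probability p, the observed counts are marginally Poisson(lam * p), so lam and p are confounded without prior information. That is what motivates a Bayesian treatment. A minimal sketch (parameter values are illustrative):

```python
import math
import random

random.seed(6)

def rpois(lam):
    """Poisson sampler via Knuth's multiplication method."""
    L = math.exp(-lam)
    k, q = 0, 1.0
    while True:
        q *= random.random()
        if q <= L:
            return k
        k += 1

lam, p = 10.0, 0.6
obs = []
for _ in range(50_000):
    n_true = rpois(lam)                                       # true count
    obs.append(sum(1 for _ in range(n_true)
                   if random.random() < p))                   # reported count

print(sum(obs) / len(obs))   # close to lam * p = 6.0
```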

15.
We introduce an extension to the mixture of linear regressions model where changepoints are present. Such a model provides greater flexibility over a standard changepoint regression model if the data are believed not only to have changepoints present, but also to belong to two or more unobservable categories. This model can provide additional insight into data that are already modeled using mixtures of regressions, but where the presence of changepoints has not yet been investigated. After discussing the mixture of regressions with changepoints model, we then develop an Expectation/Conditional Maximization (ECM) algorithm for maximum likelihood estimation. Two simulation studies illustrate the performance of our ECM algorithm and we analyze a real dataset.

16.
Functional regression models that relate functional covariates to a scalar response are becoming more common due to the availability of functional data and computational advances. We introduce a functional nonlinear model with a scalar response where the true parameter curve is monotone. Using the Newton-Raphson method within a backfitting procedure, we discuss a penalized least squares criterion for fitting the functional nonlinear model with the smoothing parameter selected using generalized cross validation. Connections between a nonlinear mixed effects model and our functional nonlinear model are discussed, thereby providing an additional model fitting procedure using restricted maximum likelihood for smoothing parameter selection. Simulated relative efficiency gains provided by a monotone parameter curve estimator relative to an unconstrained parameter curve estimator are presented. In addition, we provide an application of our model with data from ozonesonde measurements of stratospheric ozone in which the measurements are biased as a function of altitude.

17.
Classical techniques for modeling numerical data associated to a regular grid have been widely developed in the literature. When a trigonometric model for the data is considered, it is possible to use the corresponding least squares (classical) estimators, but when the data are not observed on a regular grid, these estimators do not show appropriate properties. In this article we propose a novel way to model data that is not observed on a regular grid, and we establish a practical criterion, based on the mean squared error (MSE), to objectively decide which estimator should be used in each case: the inappropriate classical or the new unbiased estimator, which has greater variance. Jackknife and cross-validation techniques are used to follow a similar criterion in practice, when the MSE is not known. Finally, we present an application of the methodology to univariate and bivariate data.
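Fitting a trigonometric model by least squares on an irregular grid amounts to ordinary least squares with a sine/cosine design matrix. A minimal sketch with a single known frequency (an assumption of this toy, not the article's full methodology):

```python
import math
import random

random.seed(7)

# One-harmonic trigonometric model: y(t) = a0 + a1*cos(w t) + b1*sin(w t),
# observed on an irregular grid with Gaussian noise.
w = 2 * math.pi
t = sorted(random.random() * 3 for _ in range(200))   # irregular design points
y = [1.0 + 2.0 * math.cos(w * ti) - 0.5 * math.sin(w * ti)
     + random.gauss(0, 0.1) for ti in t]

X = [[1.0, math.cos(w * ti), math.sin(w * ti)] for ti in t]

# Normal equations (X'X) beta = X'y, solved by Gauss-Jordan elimination.
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]

A = [row[:] + [b] for row, b in zip(XtX, Xty)]        # augmented matrix
for i in range(3):
    piv = A[i][i]
    A[i] = [v / piv for v in A[i]]
    for k in range(3):
        if k != i:
            A[k] = [vk - A[k][i] * vi for vk, vi in zip(A[k], A[i])]
beta = [A[i][3] for i in range(3)]

print(beta)   # close to the true coefficients [1.0, 2.0, -0.5]
```

On a regular grid the design columns would be exactly orthogonal and the normal equations would collapse to the classical Fourier-coefficient formulas; the irregular grid is what breaks that orthogonality and the classical estimator's properties.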

18.
In this article, we develop a model to study treatment, period, carryover, and other applicable effects in a crossover design with a time-to-event response variable. Because time-to-event outcomes on different treatment regimens within the crossover design are correlated for an individual, we adopt a proportional hazards frailty model. If the frailty is assumed to have a gamma distribution, and the hazard rates are piecewise constant, then the likelihood function can be determined via closed-form expressions. We illustrate the methodology via an application to a data set from an asthma clinical trial and run simulations that investigate sensitivity of the model to data generated from different distributions.

19.
We propose a method for estimating parameters in generalized linear models with missing covariates and a non-ignorable missing data mechanism. We use a multinomial model for the missing data indicators and propose a joint distribution for them which can be written as a sequence of one-dimensional conditional distributions, with each one-dimensional conditional distribution consisting of a logistic regression. We allow the covariates to be either categorical or continuous. The joint covariate distribution is also modelled via a sequence of one-dimensional conditional distributions, and the response variable is assumed to be completely observed. We derive the E- and M-steps of the EM algorithm with non-ignorable missing covariate data. For categorical covariates, we derive a closed form expression for the E- and M-steps of the EM algorithm for obtaining the maximum likelihood estimates (MLEs). For continuous covariates, we use a Monte Carlo version of the EM algorithm to obtain the MLEs via the Gibbs sampler. Computational techniques for Gibbs sampling are proposed and implemented. The parametric form of the assumed missing data mechanism itself is not "testable" from the data, and thus the non-ignorable modelling considered here can be viewed as a sensitivity analysis concerning a more complicated model. Therefore, although a model may have "passed" the tests for a certain missing data mechanism, this does not mean that we have captured, even approximately, the correct missing data mechanism. Hence, model checking for the missing data mechanism and sensitivity analyses play an important role in this problem and are discussed in detail. Several simulations are given to demonstrate the methodology. In addition, a real data set from a melanoma cancer clinical trial is presented to illustrate the methods proposed.

20.
Zero-inflated data are more frequent when the data represent counts. However, there are practical situations in which continuous data contain an excess of zeros. In these cases, the zero-inflated Poisson, binomial or negative binomial models are not suitable. To fill this gap, we propose the zero-spiked gamma-Weibull (ZSGW) model, obtained by mixing a distribution which is degenerate at zero with the gamma-Weibull distribution, which has positive support. The model simultaneously estimates the effects of explanatory variables on the response variable and on the zero spike. We consider a frequentist analysis and a non-parametric bootstrap for estimating the parameters of the ZSGW regression model, and derive the appropriate matrices for assessing local influence on the model parameters. We illustrate the performance of the proposed regression model by means of a real data set (copaiba oil-resin production) from a study carried out at the Department of Forest Science of the Luiz de Queiroz School of Agriculture, University of São Paulo. Based on the ZSGW regression model, we determine the explanatory variables that influence the excess of zeros in the resin oil production and identify influential observations. We also show empirically that the proposed regression model can be superior to the zero-adjusted inverse Gaussian regression model for fitting zero-inflated positive continuous data.
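The two-part structure of a zero-spiked model can be sketched with an exponential positive part standing in for the gamma-Weibull (a hypothetical simplification, chosen so the positive-part MLE is closed-form). Because the likelihood factorizes, the spike probability and the positive-part parameter can be estimated separately.

```python
import math
import random

random.seed(8)

# Simulate from a zero-spiked exponential: with probability pi_true the
# response is exactly zero, otherwise it is Exponential(rate_true).
pi_true, rate_true = 0.3, 1.5
y = [0.0 if random.random() < pi_true else random.expovariate(rate_true)
     for _ in range(10_000)]

n = len(y)
n0 = sum(1 for v in y if v == 0.0)
pos = [v for v in y if v > 0.0]

pi_hat = n0 / n                      # MLE of the zero spike probability
rate_hat = len(pos) / sum(pos)       # MLE of the exponential rate

# Maximized log-likelihood of the two-part model:
loglik = (n0 * math.log(pi_hat)
          + (n - n0) * math.log(1 - pi_hat)
          + sum(math.log(rate_hat) - rate_hat * v for v in pos))

print(pi_hat, rate_hat)
```

In the ZSGW regression model both parts additionally depend on covariates through link functions, which is what allows covariate effects on the spike and on the positive response to be estimated simultaneously.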


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号