Similar Articles
 20 similar articles found (search time: 126 ms)
1.
Crossover designs are popular in early phases of clinical trials and in bioavailability and bioequivalence studies. Assessment of carryover effects, in addition to treatment effects, is a critical issue in crossover trials. The observed data from a crossover trial can be incomplete because of potential dropouts. A joint model for analyzing incomplete data from crossover trials is proposed in this article; the model includes a measurement model and an outcome-dependent informative model for the dropout process. The informative-dropout model is compared with the ignorable-dropout model, as both are nested special cases of the proposed joint model. Markov chain sampling methods are used for Bayesian analysis of this model. The joint model is used to analyze depression score data from a clinical trial in women with late luteal phase dysphoric disorder. Interestingly, the carryover effect is found to be strong in the informative-dropout model, but it is less significant when dropout is considered ignorable.

2.
For the analysis of binary data, various deterministic models have been proposed, which are generally simpler to fit and easier to understand than probabilistic models. We claim that corresponding to any deterministic model is an implicit stochastic model in which the deterministic model fits imperfectly, with errors occurring at random. In the context of binary data, we consider a model in which the probability of error depends on the model prediction. We show how to fit this model using a stochastic modification of deterministic optimization schemes. The advantages of fitting the stochastic model explicitly (rather than implicitly, by simply fitting a deterministic model and accepting the occurrence of errors) include quantification of uncertainty in the deterministic model's parameter estimates, better estimation of the true model error rate, and the ability to check the fit of the model nontrivially. We illustrate this with a simple theoretical example of item response data and with empirical examples from archeology and the psychology of choice.

3.
Frequently in the analysis of survival data, survival times within the same group are correlated due to unobserved covariates. One way these covariates can be included in the model is as frailties. These frailty random block effects generate dependency between the survival times of individuals, which are conditionally independent given the frailty. Using a conditional proportional hazards model in conjunction with the frailty, a whole new family of models is introduced. When a gamma frailty model is considered, the issue is often to find an appropriate model for the baseline hazard function. In this paper a flexible baseline hazard model based on a correlated prior process is proposed and compared with a standard Weibull model. Several model diagnostic methods are developed, and model comparison is made using recently developed Bayesian model selection criteria. These methodologies are applied to the McGilchrist and Aisbett (1991) kidney infection data, and the analysis is performed using Markov chain Monte Carlo methods. This revised version was published online in July 2006 with corrections to the Cover Date.
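The dependence that a shared frailty induces can be seen in a small simulation. The sketch below is illustrative only and is not the authors' correlated-prior model: it assumes a Weibull baseline hazard and a mean-one gamma frailty with variance `theta`, both hypothetical choices, and checks that survival times within a group come out positively correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_gamma_frailty(n_groups=200, group_size=2, theta=1.0,
                           shape=1.5, scale=1.0):
    """Simulate clustered survival times under a shared gamma frailty.

    Each group shares a frailty w ~ Gamma(1/theta, scale=theta), so
    E[w] = 1 and Var[w] = theta.  Conditional on w, the hazard is
    w * h0(t) with a Weibull(shape, scale) baseline, which gives
    T = scale * (E / w)**(1/shape) with E ~ Exponential(1).
    """
    w = rng.gamma(1.0 / theta, theta, size=n_groups)
    e = rng.exponential(size=(n_groups, group_size))
    return scale * (e / w[:, None]) ** (1.0 / shape)

times = simulate_gamma_frailty()
# The shared frailty makes times within a group positively correlated;
# the log scale keeps the heavy tails from dominating the estimate.
r = np.corrcoef(np.log(times[:, 0]), np.log(times[:, 1]))[0, 1]
```

With `theta = 1` the within-group correlation of log-times is 0.5 in theory, so the simulated estimate should land well above zero.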

4.
The two-part model and Heckman's sample selection model are often used in economic studies that involve analyzing the demand for limited dependent variables. This study proposes a simultaneous equation model (SEM) and uses the expectation-maximization algorithm to obtain the maximum likelihood estimates. We then constructed a simulation to compare the performance of price elasticity estimates from the SEM with those from the two-part model and the sample selection model. The simulation shows that the SEM estimates of price elasticity are more precise than those of the sample selection model and the two-part model when the model includes limited dependent variables. Finally, we analyzed a real example of cigarette consumption as an application. We found that an increase in cigarette price is associated with a decrease in both the propensity to consume cigarettes and the amount actually consumed.
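As background, the two-part model mentioned above can be sketched as follows. This is a generic illustration, not the proposed SEM; the logistic participation equation, the log-amount equation, and all parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
price = rng.uniform(1.0, 3.0, n)

# Part 1: whether any amount is consumed -- a logistic model in price.
p_any = 1.0 / (1.0 + np.exp(-(2.0 - 1.0 * price)))
any_use = rng.binomial(1, p_any)

# Part 2: log amount among consumers -- a linear model in price.
log_amt = 3.0 - 0.8 * price + rng.normal(0.0, 0.5, n)
amount = np.where(any_use == 1, np.exp(log_amt), 0.0)

# The two-part convention: fit part 2 by OLS on the consumers only.
users = any_use == 1
X = np.column_stack([np.ones(users.sum()), price[users]])
beta_hat, *_ = np.linalg.lstsq(X, np.log(amount[users]), rcond=None)
```

Because the error in the amount equation is independent of the participation decision here, OLS on the users recovers the amount-equation coefficients; the selection model relaxes exactly that independence.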

5.
In longitudinal data studies, the random errors of mixed-effects models are usually assumed to be normally distributed. However, virological data such as viral load and CD4 cell counts are typically skewed, so the normality assumption may distort the results or even lead to erroneous conclusions. In HIV dynamics studies, the viral response is often related to covariates, and the covariate measurements usually contain errors. This paper therefore builds a joint nonlinear mixed-effects model with a skew-normal distribution that incorporates the covariate process, and estimates the model parameters by Bayesian inference. Since the covariates explain part of the within-individual variation, the choice of model for the covariate process has an important influence on how well the viral load is fitted. The paper proposes a first-order moving-average model as an improved model for the covariate process; a comparison shows that the viral load model fits better when the covariate follows the moving-average model. This result provides useful guidance for modeling covariate processes.

6.
This paper presents a modified skew-normal (SN) model that contains the normal model as a special case. Unlike the usual SN model, the Fisher information matrix of the proposed model is always non-singular. Despite this desirable property for regular asymptotic inference, as with the SN model, the maximum likelihood estimator (MLE) of the skewness parameter in the considered model may diverge with positive probability in samples of moderate size. As a solution to this problem, a modified score function is used for estimating the skewness parameter. It is proved that the modified MLE is always finite. A quasi-likelihood approach is considered to build confidence intervals. When the model includes location and scale parameters, the proposed method is combined with the unmodified maximum likelihood estimates of these parameters.
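For reference, the standard skew-normal density underlying these models is 2·φ(x)·Φ(αx). The snippet below is a numerical sanity check of that density, not the paper's modified model: it verifies that the function integrates to one and has a positive mean for α > 0 (right skew).

```python
import numpy as np
from math import erf

def sn_pdf(x, alpha):
    """Standard skew-normal density: 2 * phi(x) * Phi(alpha * x)."""
    phi = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
    big_phi = 0.5 * (1.0 + np.vectorize(erf)(alpha * x / np.sqrt(2.0)))
    return 2.0 * phi * big_phi

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
pdf = sn_pdf(x, alpha=3.0)
area = np.sum(pdf) * dx          # should be ~1: a valid density
mean = np.sum(x * pdf) * dx      # positive for alpha > 0 (right skew)
```

The exact mean for α = 3 is √(2/π)·α/√(1+α²) ≈ 0.757, which the Riemann sum reproduces closely.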

7.
This article investigates the asymptotic properties of coefficient estimators in the panel cointegration model with a time trend. We find that the bias of OLS estimator for the slope coefficient in the panel cointegration model with a time trend is distinct from that in the panel cointegration model without a time trend. Meanwhile, the variance of the limiting distribution for the slope coefficient is larger in the panel cointegration model with a time trend than without a time trend.

8.
We analyse a flexible parametric estimation technique for a competing risks (CR) model with unobserved heterogeneity, by extending a local mixed proportional hazard single-risk model for continuous duration time to a local mixture CR (LMCR) model for discrete duration time. The state-specific local hazard function of the LMCR model is by definition a valid density function if we have either one or two destination states. We conduct Monte Carlo experiments to compare the estimated parameters of the LMCR model, and those of a CR model based on a Heckman–Singer-type (HS-type) technique, with the data-generating-process parameters. The Monte Carlo results show that the LMCR model performs at least as well as the HS-type model with respect to the estimated structural parameters in most cases, but relatively poorly with respect to the estimated duration-dependence parameters.

9.
A multi‐level model allows the possibility of marginalization across levels in different ways, yielding more than one possible marginal likelihood. Since log‐likelihoods are often used in classical model comparison, the question to ask is which likelihood should be chosen for a given model. The authors employ a Bayesian framework to shed some light on qualitative comparison of the likelihoods associated with a given model. They connect these results to related issues of the effective number of parameters, penalty function, and consistent definition of a likelihood‐based model choice criterion. In particular, with a two‐stage model they show that, very generally, regardless of hyperprior specification or how much data is collected or what the realized values are, a priori, the first‐stage likelihood is expected to be smaller than the marginal likelihood. A posteriori, these expectations are reversed and the disparities worsen with increasing sample size and with increasing number of model levels.

10.
In this article, we develop a Bayesian variable selection method for selecting covariates in the Poisson change-point regression model with both discrete and continuous candidate covariates. Ranging from a null model with no selected covariates to a full model including all covariates, the Bayesian variable selection method searches the entire model space, estimates posterior inclusion probabilities of covariates, and obtains model-averaged estimates of the covariate coefficients, while simultaneously estimating a time-varying baseline rate due to change-points. For posterior computation, a Metropolis–Hastings-within-partially-collapsed-Gibbs sampler is developed to efficiently fit the Poisson change-point regression model with variable selection. We illustrate the proposed method using simulated and real datasets.
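A stripped-down version of Bayesian change-point inference for Poisson counts can illustrate the core idea: a single change-point, no covariates, and conjugate gamma priors on the two rates integrated out analytically, leaving a discrete posterior over the change-point location. The method in the abstract (variable selection, multiple change-points, Metropolis–Hastings within partially collapsed Gibbs) is far more general; the simulated rates and prior hyperparameters below are hypothetical.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(1)
# Simulated counts with a change after t = 40: rate 2.0, then rate 6.0.
y = np.concatenate([rng.poisson(2.0, 40), rng.poisson(6.0, 60)])

def log_marginal(counts, a=1.0, b=1.0):
    """log p(counts) for iid Poisson(rate), rate ~ Gamma(a, b) integrated out."""
    s, m = int(counts.sum()), len(counts)
    return (a * np.log(b) - lgamma(a) + lgamma(a + s)
            - (a + s) * np.log(b + m)
            - sum(lgamma(int(c) + 1) for c in counts))

n = len(y)
# Discrete posterior over the change-point k (change after index k), flat prior.
logpost = np.array([log_marginal(y[:k]) + log_marginal(y[k:])
                    for k in range(1, n)])
post = np.exp(logpost - logpost.max())
post /= post.sum()
k_hat = int(np.argmax(post)) + 1    # MAP change-point, should be near 40
```

Because the rate jump (2 to 6) is large relative to Poisson noise, the posterior concentrates sharply around the true change-point.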

11.
This paper proposes a linear mixed model (LMM) with spatial effects, trend, seasonality and outliers for spatio-temporal time series data. A linear trend, dummy variables for seasonality, a binary method for outliers and a multivariate conditional autoregressive (MCAR) model for spatial effects are adopted. A Bayesian method using Gibbs sampling in Markov Chain Monte Carlo is used for parameter estimation. The proposed model is applied to forecast rice and cassava yields, a spatio-temporal data type, in Thailand. The data have been extracted from the Office of Agricultural Economics, Ministry of Agriculture and Cooperatives of Thailand. The proposed model is compared with our previous model, an LMM with MCAR, and a log transformed LMM with MCAR. We found that the proposed model is the most appropriate, using the mean absolute error criterion. It fits the data very well in both the fitting part and the validation part for both rice and cassava. Therefore, it is recommended to be a primary model for forecasting these types of spatio-temporal time series data.

12.
A Comparative Study of Multilevel Models and Static Panel Data Models
This paper compares the two-level model with the static panel data model. Multilevel models are mainly used to analyze statistical data with a hierarchical structure, while the panel data model is a widely used econometric model designed for panel data. Panel data can be viewed as two-level data with a cross-sectional level and a time level, so the two-level model can also be used to analyze panel data, and under certain conditions the two approaches are quite similar. The paper therefore proposes a multilevel static panel data model, providing a tool for analyzing panel data with multiple hierarchical levels.

13.
We consider the problem of model selection based on quantile analysis, with unknown parameters estimated using quantile least squares. We propose a model selection test for the null hypothesis that the competing models are equivalent, against the alternative hypothesis that one model is closer to the true model. We follow with two applications of the proposed model selection test. The first application is model selection for time series with non-normal innovations. The second application is model selection in forecasting with the NoVaS (normalizing and variance-stabilizing transformation) method. A set of simulation results also lends strong support to the results presented in the paper.
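Quantile-based estimation rests on minimizing the check (pinball) loss. The brute-force sketch below is a generic illustration of that loss, not the paper's test: restricted to the sample points, the minimizer at τ = 0.5 coincides with the sample median.

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check (pinball) loss at quantile level tau."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def sample_quantile(y, tau):
    """Estimate the tau-quantile by brute-force minimization of the
    average check loss over the sample points themselves."""
    grid = np.sort(y)
    losses = [check_loss(y - q, tau).mean() for q in grid]
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(0)
y = rng.normal(size=2001)
med = sample_quantile(y, 0.5)   # coincides with the sample median
```

In practice one would use linear programming or a dedicated quantile regression routine rather than this O(n²) grid search; the point here is only the loss function.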

14.
This paper considers model averaging for the ordered probit and nested logit models, which are widely used in empirical research. Within the frameworks of these models, we examine a range of model averaging methods, including the jackknife method, which is proved to have an optimal asymptotic property in this paper. We conduct a large-scale simulation study to examine the behaviour of these model averaging estimators in finite samples, and draw comparisons with model selection estimators. Our results show that while neither averaging nor selection is a consistently better strategy, model selection results in the poorest estimates far more frequently than averaging, and more often than not, averaging yields superior estimates. Among the averaging methods considered, the one based on a smoothed version of the Bayesian information criterion frequently produces the most accurate estimates. In three real data applications, we demonstrate the usefulness of model averaging in mitigating problems associated with the ‘replication crisis’ that commonly arises with model selection.
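Smoothed-BIC averaging weights are typically of the form w_m ∝ exp(−BIC_m/2); the minimal sketch below assumes this standard form, which the paper may refine, and shows how the best-scoring model dominates the average.

```python
import numpy as np

def sbic_weights(bics):
    """Smoothed-BIC model averaging weights: w_m proportional to
    exp(-BIC_m / 2), shifted by the minimum for numerical stability."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-(b - b.min()) / 2.0)
    return w / w.sum()

w = sbic_weights([100.0, 102.0, 110.0])   # best model gets the largest weight
```

Subtracting the minimum BIC before exponentiating avoids underflow when the criterion values are large, without changing the normalized weights.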

15.
This paper introduces the Dogit ordered generalized extreme value (DOGEV) model, for handling discrete variables that are ordered and heterogeneous. In particular, the DOGEV model can be applied to questionnaire responses on questions allowing a discrete set of ordered possible responses, where there is a preference for particular responses and possibly multiple modes in the data. The DOGEV model combines a model for choice set generation with the ordered generalized extreme value model. The paper illustrates the model using two empirical examples: a model of inflationary expectations and a model for students' evaluations of teaching.

16.
This article describes how a frequentist model averaging approach can be used for concentration–QT analyses in the context of thorough QTc studies. Based on simulations, we conclude that, starting from three candidate model families (linear, exponential, and Emax), the model averaging approach leads to treatment effect estimates that are quite robust with respect to control of the type I error in nearly all simulated scenarios; in particular, with the model averaging approach, the type I error appears less sensitive to model misspecification than with the widely used linear model. We also noticed few differences in performance between the model averaging approach and the more classical model selection approach, but we believe that, although both can be recommended in practice, the model averaging approach can be more appealing because of some deficiencies of the model selection approach pointed out in the literature. We think that a model averaging or model selection approach should be systematically considered for conducting concentration–QT analyses. Copyright © 2016 John Wiley & Sons, Ltd.

17.
This paper describes the various stages in building a statistical model to predict temperatures in the core of a reactor, and compares the benefits of this model with those of a physical model. We give a brief background to this study and the applications of the model to rapid online monitoring and safe operation of the reactor. We describe the methods of correlation and two-dimensional spectral analysis that we use to identify the effects incorporated in a spatial regression model for the measured temperatures. These effects are related to the age of the reactor fuel and the spatial geometry of the reactor. A remaining component of the temperature variation is a slowly varying temperature surface modelled by smooth functions with constrained coefficients. We assess the accuracy of the model for interpolating temperatures throughout the reactor when measurements are available only at a reduced set of spatial locations, as is the case in most reactors. Further possible improvements to the model are discussed.

18.
The fundamental difficulty with inference in nontrivial extrapolation, when model selection from a rich space of models is involved, is that any model estimated in one regime and used for decision making in another is fundamentally confounded with disruptive alternatives. These are alternative models which, if true, would support a diametrically opposed action from the one the estimated model supports. One strategy to support extrapolation and reduce arbitrary fitting and confounding is to force the model to derive from the same mathematical structure that underlies the substantive science appropriate for the phenomena. Then statistical model fitting follows the form of theory generation in artificial intelligence, with statistical model selection tools and the statistician taking the place of the inference engine.

19.
In nonignorable missing-response problems, we study a semiparametric model with an unspecified missingness mechanism model and an exponential family model for the conditional density of the response. Although existing methods are available to estimate the parameters of the exponential family, nonparametric estimation or testing of the missingness mechanism model remains an open problem. By defining a "synthesis" density involving the unknown missingness mechanism model and the known baseline "carrier" density in the exponential family model, we treat this "synthesis" density as a legitimate one with a biased-sampling version. We develop maximum pseudo-likelihood estimation procedures, and the resulting estimators are consistent and asymptotically normal. Since the "synthesis" cumulative distribution is a functional of the missingness mechanism model and the known carrier density, the proposed method can be used to test the correctness of the missingness mechanism model nonparametrically and indirectly. Simulation studies and a real example demonstrate that the proposed methods perform very well.

20.
We propose a new cure model for survival data with a surviving or cured fraction. The new model is a mixture cure model in which the covariate effects on the proportion cured and on the failure time distribution of uncured patients are modeled separately. Unlike existing mixture cure models, the new model allows covariate effects on the failure time distribution of uncured patients to be negligible at time zero and to increase as time goes by. Such a model is particularly useful for cancer treatments whose effect increases gradually from zero, a situation that existing models usually cannot handle properly. We develop a rank-based semiparametric estimation method to obtain the maximum likelihood estimates of the model parameters. We compare it with existing models and methods via a simulation study, and apply the model to a breast cancer data set. The numerical studies show that the new model provides a useful addition to the cure model literature.
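The mixture-cure structure S(t|x) = π(x) + (1 − π(x))·S_u(t|x) can be sketched directly. The logistic cure link, Weibull latency, and all parameter values below are illustrative stand-ins, not the authors' semiparametric specification; the point is only that the population survival curve starts at one and plateaus at the cure probability.

```python
import numpy as np

def mixture_cure_survival(t, x, beta=(-0.5, 1.0), shape=1.2, scale=2.0):
    """Population survival under a mixture cure model:
        S(t | x) = pi(x) + (1 - pi(x)) * S_u(t | x),
    with a logistic model for the cure probability pi(x) and a
    Weibull survival S_u for the uncured patients."""
    eta = beta[0] + beta[1] * x
    pi_cure = 1.0 / (1.0 + np.exp(-eta))
    s_uncured = np.exp(-(np.asarray(t) / scale) ** shape)
    return pi_cure + (1.0 - pi_cure) * s_uncured

t = np.linspace(0.0, 50.0, 501)
s = mixture_cure_survival(t, x=0.0)   # starts at 1, plateaus at pi(x)
```

The plateau is what distinguishes a cure model from an ordinary survival model, whose survival function decays to zero.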

