Similar Literature
20 similar documents found.
1.
Crossover designs are popular in early phases of clinical trials and in bioavailability and bioequivalence studies. Assessment of carryover effects, in addition to the treatment effects, is a critical issue in crossover trials. The observed data from a crossover trial can be incomplete because of potential dropouts. A joint model for analyzing incomplete data from crossover trials is proposed in this article; the model comprises a measurement model and an outcome-dependent informative model for the dropout process. The informative-dropout model is compared with the ignorable-dropout model, specific cases of which are nested subcases of the proposed joint model. Markov chain sampling methods are used for Bayesian analysis of this model. The joint model is used to analyze depression score data from a clinical trial in women with late luteal phase dysphoric disorder. Interestingly, the carryover effect is found to be strong in the informative-dropout model, but it is less significant when dropout is considered ignorable.
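
A minimal simulation sketch of the outcome-dependent dropout mechanism described above may help fix ideas: the probability of dropping out at a visit depends on the current (possibly unobserved) response, and ignorable dropout is the special case psi1 = 0. All names and numbers below are illustrative, not taken from the paper.

```python
import numpy as np

# Outcome-dependent (informative) dropout: the logit of dropping out at a
# visit depends on the current response y. psi1 = 0 gives ignorable dropout.
def dropout_prob(y, psi0=-2.0, psi1=0.15):
    return 1.0 / (1.0 + np.exp(-(psi0 + psi1 * y)))

rng = np.random.default_rng(0)
y = rng.normal(20.0, 5.0, size=(100, 4))   # hypothetical depression scores, 4 periods
observed = np.ones(y.shape, dtype=bool)
for j in range(1, 4):                      # monotone dropout process
    stay = rng.random(100) >= dropout_prob(y[:, j])
    observed[:, j] = observed[:, j - 1] & stay
print(observed.mean(axis=0).round(2))      # retention by period
```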

2.
The Cox proportional hazards model has become the standard model for survival analysis. It is often seen as the null model in that "... explicit excuses are now needed to use different models" (Keiding, Proceedings of the XIXth International Biometric Conference, Cape Town, 1998). However, converging hazards also occur frequently in survival analysis. The Burr model, which may be derived as the marginal of a gamma frailty model, is a commonly used tool for modelling converging hazards. We outline this approach and introduce a mixed model which extends the Burr model and allows for both proportional and converging hazards. Although it is a semi-parametric model in its own right, we demonstrate how the mixed model can be derived via a gamma frailty interpretation, suggesting an EM fitting procedure. We illustrate the modelling techniques using data on survival of hospice patients.
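
The gamma-frailty route to the Burr model has a simple closed form: integrating a mean-one gamma frailty with variance theta out of the conditional hazard Z h0(t) gives the marginal hazard h(t) = h0(t) / (1 + theta H0(t)). A brief numerical sketch, assuming a Weibull baseline purely for illustration:

```python
import numpy as np

def burr_hazard(t, lam, k, theta):
    """Marginal (Burr) hazard from a gamma frailty with variance theta."""
    h0 = lam * k * t ** (k - 1)        # Weibull baseline hazard
    H0 = lam * t ** k                  # cumulative baseline hazard
    return h0 / (1.0 + theta * H0)

t = np.linspace(0.1, 10.0, 50)
h_control = burr_hazard(t, lam=1.0, k=1.2, theta=0.5)
h_treated = burr_hazard(t, lam=2.0, k=1.2, theta=0.5)   # HR = 2 on the baseline
print((h_treated / h_control)[[0, -1]].round(3))         # ratio decays toward 1
```

A proportional effect on the baseline thus yields marginal hazards that converge, which is exactly the behaviour the mixed model is designed to accommodate alongside proportionality.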

3.
To remedy the mismatch between the grey differential equation and the whitening equation in the GM(1,1) power model, an unbiased GM(1,1) power model is established by reconstructing the grey differential equation. The method yields better consistency between the parameters of the difference equation and the corresponding parameters of the differential equation. The unbiased GM(1,1) power model is applied to forecasting tourist arrivals; the application shows that its prediction accuracy is higher than that of the GM(1,1) model.
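
For orientation, a sketch of the baseline GM(1,1) recursion that the unbiased power variant builds on (this is the standard model, not the paper's reconstruction; the input series is made up):

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Plain GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                            # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    return np.diff(np.concatenate([[0.0], x1_hat]))     # restore to original scale

print(gm11_forecast([120, 135, 149, 168, 185], horizon=3).round(1))
```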

4.
This paper introduces the Dogit ordered generalized extreme value (DOGEV) model for handling discrete variables that are ordered and heterogeneous. In particular, the DOGEV model can be applied to questionnaire items that allow a discrete set of ordered responses, where respondents favour particular responses and the data may be multimodal. The DOGEV model combines a model for choice set generation with the ordered generalized extreme value model. The paper illustrates the model using two empirical examples: a model of inflationary expectations and a model for students' evaluations of teaching.
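
The choice-set-generation ("dogit") layer admits a compact sketch: with captivity weights theta_i, the response probabilities become P_i = (P*_i + theta_i) / (1 + sum_j theta_j), where P*_i comes from the underlying ordered model (an ordered GEV in the DOGEV; below just an illustrative base vector):

```python
import numpy as np

def dogit_probs(base_probs, theta):
    """Dogit layer: extra mass theta_i is 'captive' to response i."""
    base_probs = np.asarray(base_probs, dtype=float)
    theta = np.asarray(theta, dtype=float)
    return (base_probs + theta) / (1.0 + theta.sum())

base = np.array([0.05, 0.15, 0.40, 0.30, 0.10])   # hypothetical ordered-model probs
print(dogit_probs(base, [0.0, 0.0, 0.0, 0.5, 0.0]).round(3))  # favoured response gains mass
```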

5.
In process characterization, the quality of the information obtained depends directly on the quality of the process model. The current quality revolution is providing a strong stimulus for rethinking and re-evaluating many statistical ideas, among them the role of theoretic knowledge and data in statistical inference and some issues in theoretic–empirical modelling. With this concern, the paper takes a broad, pragmatic view of statistical inference that includes all aspects of model formulation. The estimation of model parameters traditionally assumes that a model has a prespecified known form and takes no account of possible uncertainty regarding model structure. In practice, however, model structural uncertainty is a fact of life, and it is likely to be more serious than other sources of uncertainty that have received far more attention. This is true whether the model is specified on subject-matter grounds or is formulated, fitted and checked on the same data set in an iterative, interactive way. For that reason, novel modelling techniques have been fashioned for reducing model uncertainty: using available knowledge for theoretic model elaboration, they approximate the exact unknown process model concurrently by accessible theoretic and polynomial empirical functions. The paper examines the effects of uncertainty for hybrid theoretic–empirical models, and additive and multiplicative methods of model formulation are fashioned for reducing that uncertainty. These techniques have been successfully applied to perfect a steady-flow model for an air gauge sensor. Validation of the elaborated models reveals that the multiplicative modelling approach attains a satisfactory model with small discrepancy from the empirical evidence.
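
A toy sketch of the additive versus multiplicative hybrid formulations, with a hypothetical theoretic function f_theory and a quadratic empirical correction (all data simulated):

```python
import numpy as np

def f_theory(x):                      # assumed (approximate) theoretic model
    return 2.0 * np.sqrt(x)

rng = np.random.default_rng(0)
x = np.linspace(0.5, 5.0, 40)
y = 2.1 * np.sqrt(x) * (1 + 0.05 * x) + rng.normal(0, 0.02, x.size)

add_coef = np.polyfit(x, y - f_theory(x), deg=2)   # additive:       y ~ f + poly
mul_coef = np.polyfit(x, y / f_theory(x), deg=2)   # multiplicative: y ~ f * poly

y_add = f_theory(x) + np.polyval(add_coef, x)
y_mul = f_theory(x) * np.polyval(mul_coef, x)
print(np.std(y - y_add).round(4), np.std(y - y_mul).round(4))  # residual spread
```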

6.
This article describes how a frequentist model averaging approach can be used for concentration–QT analyses in the context of thorough QTc studies. Based on simulations, we conclude that, starting from three candidate model families (linear, exponential, and Emax), the model averaging approach leads to treatment-effect estimates that are quite robust and controls the type I error in nearly all simulated scenarios; in particular, the type I error appears less sensitive to model misspecification than under the widely used linear model. We also noticed few differences in performance between the model averaging approach and the more classical model selection approach; although both can be recommended in practice, the model averaging approach may be more appealing because of deficiencies of the model selection approach pointed out in the literature. We think that a model averaging or model selection approach should be systematically considered for conducting concentration–QT analyses.
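
A condensed sketch of the averaging step on simulated concentration–QT data: each candidate family is fitted by least squares, Akaike weights are formed from the AICs, and the averaged prediction is the weighted sum. Function names, starting values, and data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

linear = lambda c, e0, s: e0 + s * c
emax   = lambda c, e0, em, ec50: e0 + em * c / (ec50 + c)
expo   = lambda c, e0, em, k: e0 + em * (1.0 - np.exp(-k * c))

rng = np.random.default_rng(1)
conc = np.linspace(0.0, 10.0, 60)
dqtc = 1.0 + 8.0 * conc / (3.0 + conc) + rng.normal(0, 1.5, conc.size)  # simulated

aics, preds = [], []
for f, p0 in [(linear, [0, 1]), (emax, [0, 8, 3]), (expo, [0, 8, 0.3])]:
    par, _ = curve_fit(f, conc, dqtc, p0=p0, maxfev=10000)
    rss = np.sum((dqtc - f(conc, *par)) ** 2)
    aics.append(conc.size * np.log(rss / conc.size) + 2 * (len(par) + 1))
    preds.append(f(conc, *par))

w = np.exp(-0.5 * (np.array(aics) - min(aics)))
w /= w.sum()                                        # Akaike weights
dqtc_avg = w @ np.array(preds)                      # model-averaged prediction
print(w.round(3))
```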

7.
For the analysis of binary data, various deterministic models have been proposed; these are generally simpler to fit and easier to understand than probabilistic models. We claim that corresponding to any deterministic model is an implicit stochastic model in which the deterministic model fits imperfectly, with errors occurring at random. In the context of binary data, we consider a model in which the probability of error depends on the model prediction. We show how to fit this model using a stochastic modification of deterministic optimization schemes. The advantages of fitting the stochastic model explicitly (rather than implicitly, by simply fitting a deterministic model and accepting the occurrence of errors) include quantification of uncertainty in the deterministic model's parameter estimates, better estimation of the true model error rate, and the ability to check the fit of the model nontrivially. We illustrate this with a simple theoretical example of item response data and with empirical examples from archeology and the psychology of choice.
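
As a concrete illustration of the implicit stochastic model, suppose the deterministic classifier outputs yhat in {0, 1} and errors occur at a rate that depends on the prediction, P(y != yhat | yhat = c) = eps_c. Under this likelihood the maximum likelihood estimates are simply the per-prediction error rates; the data below are made up.

```python
import numpy as np

def fit_error_rates(y, yhat):
    """MLE of prediction-dependent error rates and the resulting log-likelihood."""
    y, yhat = np.asarray(y), np.asarray(yhat)
    eps = {c: np.mean(y[yhat == c] != c) for c in (0, 1)}
    loglik = sum(
        np.sum((yhat == c) & (y != c)) * np.log(eps[c])
        + np.sum((yhat == c) & (y == c)) * np.log(1.0 - eps[c])
        for c in (0, 1)
    )
    return eps, loglik

y    = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1])   # observed binary outcomes
yhat = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 1])   # deterministic predictions
print(fit_error_rates(y, yhat))
```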

8.
We wish to model pulse wave velocity (PWV) as a function of longitudinal measurements of pulse pressure (PP) taken at the same and prior visits at which the PWV is measured. A number of approaches are compared. First, the PP at the same visit as the PWV is used as the explanatory variable in a linear regression model. Second, the average of all available PPs is used instead. Next, a two-stage process is applied: the longitudinal PP is modeled using a linear mixed-effects model, and this modeled PP is used in the regression model for PWV. Another approach is to summarize the longitudinal PP data by a measure of cumulative burden, the area under the PP curve, and to use this area as the explanatory variable. Finally, a joint Bayesian model is constructed, similar to the two-stage model.
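
A sketch of the cumulative-burden variant, computing the area under each subject's PP trajectory by the trapezoidal rule and regressing PWV on it (the subject data are invented for illustration):

```python
import numpy as np

def pp_auc(times, pp):
    """Trapezoidal area under the longitudinal PP curve."""
    times, pp = np.asarray(times, float), np.asarray(pp, float)
    return float(np.sum(0.5 * (pp[1:] + pp[:-1]) * np.diff(times)))

# (visit times in years, PP readings in mmHg, observed PWV in m/s)
subjects = [
    ([0, 2, 4, 6], [48, 52, 55, 60], 7.9),
    ([0, 3, 6],    [40, 42, 47],     6.8),
    ([0, 2, 5, 6], [55, 60, 66, 70], 9.1),
]
auc = np.array([pp_auc(t, pp) for t, pp, _ in subjects])
pwv = np.array([p for _, _, p in subjects])
slope, intercept = np.polyfit(auc, pwv, 1)          # PWV ~ intercept + slope * AUC
print(round(slope, 4), round(intercept, 3))
```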

9.
In longitudinal data studies, the random errors in mixed-effects models are usually assumed to be normally distributed. However, virologic data such as viral load and CD4 cell counts typically exhibit skewness, so the normality assumption may affect the results or even lead to erroneous conclusions. In HIV dynamics studies, the viral response is often related to covariates, and the covariates are usually measured with error. We therefore build a joint nonlinear mixed-effects model with a skew-normal distribution that links the covariate process to the response, and estimate the model parameters by Bayesian inference. Because the covariates explain part of the within-individual variation, the choice of model for the covariate process strongly influences how well the viral load is fitted. We propose a first-order moving-average model as an improved model for the covariate process; a comparison shows that the viral load model fits better when the covariate process follows the moving-average model. This result provides useful guidance for modeling covariate processes.

10.
We consider the problem of selecting a regression model from a large class of possible models when no true model is believed to exist. In practice few statisticians, or scientists who employ statistical methods, believe that a "true" model exists; nonetheless they seek to select a model as a proxy from which to predict. Unlike much of the recent work in this area, we address this problem explicitly. We develop Bayesian predictive model selection techniques for the case where proper conjugate priors are used and obtain an easily computed expression for the model selection criterion. We also derive expressions for updating the value of the statistic when a predictor is dropped from the model, and we apply this approach to a large well-known data set.

11.
Frequently in the analysis of survival data, survival times within the same group are correlated due to unobserved covariates. One way these covariates can be included in the model is as frailties. These frailty random block effects generate dependency between the survival times of the individuals, which are conditionally independent given the frailty. Using a conditional proportional hazards model in conjunction with the frailty, a whole new family of models is introduced. When a gamma frailty model is considered, the issue is often to find an appropriate model for the baseline hazard function. In this paper a flexible baseline hazard model based on a correlated prior process is proposed and compared with a standard Weibull model. Several model diagnostic methods are developed, and model comparison is made using recently developed Bayesian model selection criteria. These methodologies are applied to the McGilchrist and Aisbett (1991) kidney infection data, and the analysis is performed using Markov chain Monte Carlo methods.

12.
This paper describes the various stages in building a statistical model to predict temperatures in the core of a reactor and compares the benefits of this model with those of a physical model. We give a brief background to this study and the applications of the model to rapid online monitoring and safe operation of the reactor. We describe the methods of correlation and two-dimensional spectral analysis that we use to identify the effects incorporated in a spatial regression model for the measured temperatures. These effects are related to the age of the reactor fuel and the spatial geometry of the reactor. A remaining component of the temperature variation is a slowly varying temperature surface, modelled by smooth functions with constrained coefficients. We assess the accuracy of the model for interpolating temperatures throughout the reactor when measurements are available only at a reduced set of spatial locations, as is the case in most reactors. Further possible improvements to the model are discussed.

13.
Social network data represent the interactions between a group of social actors; interactions between colleagues and friendship networks are typical examples. The latent space model for social network data locates each actor in a network in a latent (social) space and models the probability of an interaction between two actors as a function of their locations. The latent position cluster model extends the latent space model to deal with network data in which clusters of actors exist: actor locations are drawn from a finite mixture model, each component of which represents a cluster of actors. A mixture of experts model builds on the structure of a mixture model by taking account of both observations and associated covariates when modeling a heterogeneous population. Herein, a mixture of experts extension of the latent position cluster model is developed. The mixture of experts framework allows covariates to enter the latent position cluster model in a number of ways, yielding different model interpretations. Estimates of the model parameters are derived in a Bayesian framework using a Markov chain Monte Carlo algorithm. Because the algorithm is generally computationally expensive, surrogate proposal distributions which shadow the target distributions are derived, reducing the computational burden. The methodology is demonstrated through an illustrative example detailing relationships between a group of lawyers in the USA.
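
The tie probability at the heart of the latent space model is easy to state: the log-odds of an interaction fall off with the Euclidean distance between the actors' latent positions, logit P(y_ij = 1) = alpha - d(z_i, z_j). A small sketch with made-up positions:

```python
import numpy as np

def edge_prob(z_i, z_j, alpha=1.0):
    """Latent space tie probability: the logit falls with latent distance."""
    d = np.linalg.norm(np.asarray(z_i, float) - np.asarray(z_j, float))
    return 1.0 / (1.0 + np.exp(-(alpha - d)))

z = {"a": [0.0, 0.0], "b": [0.3, 0.1], "c": [3.0, 2.5]}   # hypothetical 2-D positions
print(round(edge_prob(z["a"], z["b"]), 3))   # same cluster: tie likely
print(round(edge_prob(z["a"], z["c"]), 3))   # different cluster: tie unlikely
```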

14.
The two-part model and Heckman's sample selection model are often used in economic studies that involve analyzing the demand for limited dependent variables. This study proposes a simultaneous equation model (SEM) and uses the expectation-maximization algorithm to obtain the maximum likelihood estimate. We then construct a simulation to compare the performance of estimates of price elasticity using the SEM with estimates from the two-part model and the sample selection model. The simulation shows that the estimates of price elasticity from the SEM are more precise than those from the sample selection model and the two-part model when the model includes limited dependent variables. Finally, we analyze a real example of cigarette consumption as an application. We found that an increase in cigarette price was associated with a decrease in both the propensity to consume cigarettes and the amount actually consumed.

15.
We consider a Bayesian forecasting system to predict the dispersal of contamination on a large-scale grid in the event of an accidental release of radioactivity. The statistical model is built on a physical model for atmospheric dispersion and transport called MATCH. Our spatiotemporal model is a dynamic linear model in which the state parameters are the (essentially deterministic) predictions of MATCH; the distributions of these are updated sequentially in the light of monitoring data. One of the distinguishing features of the model is that the number of these parameters is very large (typically several hundreds of thousands), and we discuss practical issues arising in its implementation as a real-time model. Our procedures have been checked against a variational approach which is used widely in the atmospheric sciences. The results of the model are applied to test data from a tracer experiment.
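
The sequential updating step is a standard Kalman recursion: the state prior is centred on the MATCH predictions and revised by the monitoring data. A tiny-dimensional sketch (the paper's state has hundreds of thousands of entries, which is what forces the practical compromises discussed):

```python
import numpy as np

def kalman_update(m, C, y, F, V):
    """One DLM observation update: prior N(m, C), data y = F m + noise, Var V."""
    S = F @ C @ F.T + V                  # predictive variance of y
    K = C @ F.T @ np.linalg.inv(S)       # Kalman gain
    return m + K @ (y - F @ m), C - K @ F @ C

m = np.array([2.0, 1.5])                 # MATCH predictions as prior state mean
C = np.diag([0.5, 0.8])                  # prior covariance
F = np.array([[1.0, 0.0]])               # only the first grid cell is monitored
y, V = np.array([2.6]), np.array([[0.1]])
m_post, C_post = kalman_update(m, C, y, F, V)
print(m_post.round(3))                   # with diagonal C only the monitored cell moves
```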

16.
In this paper, we propose a model with a Dirichlet process mixture of gamma densities in the bulk part below a threshold and a generalized Pareto density in the tail for extreme value estimation. The proposed model is simple and flexible for posterior density estimation and posterior inference on high quantiles. The model works well even for small sample sizes and in the absence of prior information. We evaluate the performance of the proposed model through a simulation study. Finally, the proposed model is applied to a real environmental data set.
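
A sketch of the tail component alone: fit a generalized Pareto density to exceedances over a high threshold and read off an extreme quantile, with the bulk entering only through the empirical exceedance rate (in the paper, the Dirichlet process mixture of gammas plays that role instead; the data here are simulated):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
x = rng.gamma(shape=2.0, scale=1.0, size=2000)        # stand-in environmental data
u = np.quantile(x, 0.95)                              # threshold
xi, _, sigma = genpareto.fit(x[x > u] - u, floc=0.0)  # GPD fit to exceedances

p = 0.999                                             # target quantile level
zeta = np.mean(x > u)                                 # exceedance probability
q_hat = u + genpareto.ppf(1.0 - (1.0 - p) / zeta, xi, loc=0.0, scale=sigma)
print(round(float(q_hat), 3))                         # estimated 99.9% quantile
```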

17.
We introduce a duration model that allows for unobserved cumulative individual-specific shocks, which are likely to be important in explaining variations in duration outcomes, such as length of life and time spent unemployed. The model is also a useful tool in situations where researchers observe a great deal of information about individuals when first interviewed in surveys but little thereafter. We call this model the “increasingly mixed proportional hazard” (IMPH) model. We compare and contrast this model with the mixed proportional hazard (MPH) model, which continues to be the workhorse of applied single-spell duration analysis in economics and the other social sciences. We apply the IMPH model to study the relationships among socioeconomic status, health shocks, and mortality, using 19 waves of data drawn from the German Socio-Economic Panel (SOEP). The IMPH model is found to fit the data statistically better than the MPH model, and unobserved health shocks and socioeconomic status are shown to play powerful roles in predicting longevity.

18.
Model selection methods are important for identifying the best approximating model. To identify the best meaningful model, the purpose of the model should be clearly stated in advance. The focus of this paper is model selection when the modelling purpose is classification. We propose a new model selection approach designed for logistic regression when the main modelling purpose is classification. The method is based on the distance between two clustering trees. We also question and evaluate the performance of conventional model selection methods, based on information-theoretic concepts, in determining the best logistic regression classifier. An extensive simulation study is used to assess the finite-sample performance of the cluster-tree based and the information-theoretic model selection methods. Simulations are adjusted for whether the true model is in the candidate set or not. Results show that the new approach is highly promising. Finally, the methods are applied to a real data set to select a binary model for classifying subjects with respect to their risk of breast cancer.

19.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3278-3300
Under complex survey sampling, in particular when selection probabilities depend on the response variable (informative sampling), the sample and population distributions differ, possibly resulting in selection bias. This article addresses the problem by fitting two statistical models for one-way analysis of variance under a complex survey design (for example, two-stage sampling, stratification, and unequal probabilities of selection): the variance components model (a two-stage model) and the fixed effects model (a single-stage model). Classical theory underlying the use of the two-stage model assumes simple random sampling at each of the two stages; in such cases the model holding in the sample, after sample selection, is the same as the model for the population before sample selection. When the selection probabilities are related to the values of the response variable, standard estimates of the population model parameters may be severely biased, possibly leading to false inference. The idea behind the approach is to extract the model holding for the sample data as a function of the population model and of the first-order inclusion probabilities, and then to fit the sample model using analysis of variance, maximum likelihood, and pseudo maximum likelihood methods of estimation. The main feature of the proposed techniques is their behavior in terms of the informativeness parameter. We also show that using the population model while ignoring the informative sampling design yields biased model fitting.
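
The pseudo maximum likelihood idea can be sketched in a few lines: weight each sampled unit by the inverse of its first-order inclusion probability so that the weighted fit targets the population-model parameters. Below, a simulated informative design (inclusion probability grows with the response) illustrates the bias of the naive estimator and its correction; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
y_pop = rng.normal(10.0, 2.0, 5000)                 # population responses
pi = 0.02 + 0.01 * (y_pop - y_pop.min())            # inclusion prob rises with y
pi = pi / pi.mean() * 0.05                          # informative design, mean 5%
sampled = rng.random(y_pop.size) < pi
y, w = y_pop[sampled], 1.0 / pi[sampled]            # inverse-probability weights

print(round(y_pop.mean(), 3))                       # census (target) mean
print(round(y.mean(), 3))                           # naive sample mean: biased up
print(round(float(np.sum(w * y) / np.sum(w)), 3))   # pseudo-ML (Hajek) estimate
```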

20.
The Weibull model is the most natural parametric distribution to consider because it accommodates both the proportional hazards and the accelerated failure time formulations. In this paper, we propose a new bivariate Weibull regression model based on censored samples with common covariates. Some interesting biometrical applications motivate the study of the bivariate Weibull regression model in this particular situation. We obtain maximum likelihood estimators for the model parameters and test the significance of the regression parameters. We present a simulation study based on 1000 samples and also obtain the power of the test statistics.
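
The likelihood machinery is easiest to see in one margin: below is a sketch of maximum likelihood for a univariate Weibull regression with right-censored data and a log-link for the scale (the paper's model couples two such margins with common covariates; the simulated design here is illustrative only).

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(par, x, t, delta):
    """Censored Weibull regression: scale exp(x @ beta), shape exp(log_k)."""
    beta, log_k = par[:-1], par[-1]
    k, lam = np.exp(log_k), np.exp(x @ beta)
    z = (t / lam) ** k                      # cumulative hazard
    # events (delta=1) contribute the log density; censored cases only -z
    ll = delta * (np.log(k) + (k - 1) * np.log(t) - k * np.log(lam)) - z
    return -np.sum(ll)

rng = np.random.default_rng(4)
n = 300
x = np.column_stack([np.ones(n), rng.normal(size=n)])
t_event = rng.weibull(1.5, n) * np.exp(x @ np.array([1.0, 0.5]))
c = rng.exponential(6.0, n)                 # independent censoring times
t, delta = np.minimum(t_event, c), (t_event <= c).astype(float)

res = minimize(negloglik, x0=np.zeros(3), args=(x, t, delta))
print(res.x[:2].round(2), round(float(np.exp(res.x[2])), 2))  # beta_hat, shape_hat
```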
