Similar Documents
1.
A meta-analysis of a continuous outcome measure may involve missing standard errors. This need not be a problem, depending on the assumptions made about the population standard deviation. Multiple imputation can be used to impute the missing values while allowing for uncertainty in the imputation. Markov chain Monte Carlo simulation is a multiple imputation technique that generates posterior predictive distributions for the missing data. We present an example of imputing missing variances using WinBUGS. The example highlights the importance of checking model assumptions, whether for missing or observed data.
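
The abstract describes drawing missing variances from their posterior predictive distribution in WinBUGS. As a rough illustration of the same idea outside WinBUGS, the sketch below (Python; the data, priors and normal model for the log standard deviations are all illustrative assumptions, not the authors' exact model) runs a small Gibbs sampler and produces multiple imputations for the missing values from the posterior predictive distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

# Observed study standard deviations (NaN marks a missing value) -- toy data.
sds = np.array([4.1, 3.8, np.nan, 5.0, 4.4, np.nan, 3.9])
obs = ~np.isnan(sds)
y = np.log(sds[obs])          # model log-SDs as approximately normal
n_obs = y.size

# Gibbs sampler for (mu, sigma2) with a flat prior on mu and
# an inverse-gamma(0.001, 0.001) prior on sigma2.
n_iter, burn_in, n_imputations = 4000, 1000, 20
mu, sigma2 = y.mean(), y.var(ddof=1)
imputations = []

for it in range(n_iter):
    # mu | sigma2, y  ~  Normal(ybar, sigma2 / n)
    mu = rng.normal(y.mean(), np.sqrt(sigma2 / n_obs))
    # sigma2 | mu, y  ~  Inverse-Gamma(a + n/2, b + sum((y - mu)^2)/2)
    a_post = 0.001 + n_obs / 2
    b_post = 0.001 + 0.5 * np.sum((y - mu) ** 2)
    sigma2 = 1.0 / rng.gamma(a_post, 1.0 / b_post)
    # Every 150th post-burn-in draw: posterior predictive draw for each missing log-SD.
    if it >= burn_in and (it - burn_in) % ((n_iter - burn_in) // n_imputations) == 0:
        miss_draw = rng.normal(mu, np.sqrt(sigma2), size=(~obs).sum())
        completed = sds.copy()
        completed[~obs] = np.exp(miss_draw)   # back-transform to the SD scale
        imputations.append(completed)

print(f"{len(imputations)} completed data sets; first one:\n{imputations[0]}")
```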

2.
In this paper, we consider the problem of estimating a single changepoint in a parameter-driven model. The model, an extension of the Poisson regression model, accounts for serial correlation through a latent process incorporated in its mean function. Emphasis is placed on characterizing the changepoint through changes in the parameters of the model. The model is fully implemented within the Bayesian framework. We develop an RJMCMC (reversible jump Markov chain Monte Carlo) algorithm for parameter estimation and model determination. The algorithm embeds well-devised Metropolis-Hastings procedures for estimating, through data augmentation, the missing values of the latent process as well as the changepoint. The methodology is illustrated using data on monthly counts of claimants collecting wage loss benefit for injuries in the workplace and an analysis of presidential uses of force in the USA.
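
The full RJMCMC algorithm with a latent process is too involved for a short example, but the basic idea of sampling a changepoint can be illustrated with a toy Gibbs sampler for a single changepoint in a plain Poisson count series with conjugate Gamma priors (Python; the simulated data and prior settings are illustrative and this is not the authors' parameter-driven model).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a count series whose rate jumps from 2 to 6 at t = 60.
y = np.concatenate([rng.poisson(2.0, 60), rng.poisson(6.0, 40)])
n = y.size

a, b = 1.0, 1.0                       # Gamma(a, b) priors on both rates
n_iter, burn_in = 5000, 1000
k, lam1, lam2 = n // 2, 1.0, 1.0      # k = number of observations in regime 1
k_draws = []

for it in range(n_iter):
    # Conjugate updates of the two Poisson rates given the changepoint k.
    lam1 = rng.gamma(a + y[:k].sum(), 1.0 / (b + k))
    lam2 = rng.gamma(a + y[k:].sum(), 1.0 / (b + n - k))
    # Discrete full conditional of k (log scale for numerical stability).
    ks = np.arange(1, n)
    cum = np.concatenate([[0], np.cumsum(y)])
    loglik = (cum[ks] * np.log(lam1) - ks * lam1
              + (cum[n] - cum[ks]) * np.log(lam2) - (n - ks) * lam2)
    probs = np.exp(loglik - loglik.max())
    k = rng.choice(ks, p=probs / probs.sum())
    if it >= burn_in:
        k_draws.append(k)

print("posterior mode of the changepoint:", np.bincount(k_draws).argmax())
```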

3.
In this paper we study the cure rate survival model involving a competing risks structure with missing categorical covariates. A parametric distribution that can be written as a sequence of one-dimensional conditional distributions is specified for the missing covariates. We consider the missing at random situation, so that the missingness may depend only on the observed covariates. Parameter estimates are obtained by using the EM algorithm via the method of weights. Extensive simulation studies are conducted and reported to compare the efficiency of the estimates with and without missing data. As expected, the estimation approach that takes the missing covariates into consideration is much more efficient in terms of mean square error than the complete-case analysis. The effects of increasing the cured fraction and the amount of censoring are also reported. We demonstrate the proposed methodology with two real data sets, one involving the length of time to obtain a BS degree in Statistics and the other the time to breast cancer recurrence.

4.
Abrupt changes often occur in environmental and financial time series. Most often, these changes are due to human intervention. Change point analysis is a statistical tool used to analyze sudden changes in observations along a time series. In this paper, we propose a Bayesian model for extreme values for environmental and economic datasets that exhibit typical change point behavior. The proposed model addresses the situation in which more than one change point can occur in a time series. Because maxima are analyzed, the distribution of each regime is a generalized extreme value distribution. In this model, the change points are unknown and are treated as parameters to be estimated. Simulations of extremes with two change points showed that the proposed algorithm can recover the true values of the parameters and detect the true change points under different configurations. The number of change points is itself unknown, and the Bayesian estimation correctly identified it in each application. Environmental and financial data were analyzed; the results showed the importance of accounting for change points and revealed that the change of regime increased the return levels, implying more frequent floods in cities along the rivers. The stock market series required a model with three different regimes.
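
As a much simpler, non-Bayesian counterpart to the two-regime idea in this abstract, one can profile a single changepoint by fitting a separate generalized extreme value distribution to each segment and maximizing the combined log-likelihood. The sketch below uses scipy's genextreme on simulated maxima; the data, the minimum segment length, and the single-changepoint restriction are illustrative assumptions and not the paper's multi-changepoint Bayesian procedure.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)

# Annual maxima with a location shift after year 50 (toy data).
x = np.concatenate([genextreme.rvs(c=-0.1, loc=20, scale=5, size=50, random_state=rng),
                    genextreme.rvs(c=-0.1, loc=35, scale=5, size=30, random_state=rng)])
n = x.size

def segment_loglik(seg):
    """Fit a GEV to one segment and return its maximized log-likelihood."""
    c, loc, scale = genextreme.fit(seg)
    return genextreme.logpdf(seg, c, loc, scale).sum()

# Profile over candidate changepoints, keeping at least 20 observations per regime.
candidates = list(range(20, n - 20))
loglik = [segment_loglik(x[:k]) + segment_loglik(x[k:]) for k in candidates]
k_hat = candidates[int(np.argmax(loglik))]
print("estimated changepoint:", k_hat)
```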

5.
A common occurrence in clinical trials with a survival end point is missing covariate data. With ignorably missing covariate data, Lipsitz and Ibrahim proposed a set of estimating equations to estimate the parameters of Cox's proportional hazards model, with parameter estimates obtained via a Monte Carlo EM algorithm. We extend those results to non-ignorably missing covariate data. We present a clinical trial example with three partially observed laboratory markers that are used as covariates to predict survival.

6.
Latent Markov models (LMMs) are widely used in the analysis of heterogeneous longitudinal data. However, most existing LMMs are developed for fully observed data without missing entries. The main objective of this study is to develop a Bayesian approach for analyzing LMMs with non-ignorable missing data. Bayesian methods for estimation and model comparison are discussed. The empirical performance of the proposed methodology is evaluated through simulation studies. An application to a data set derived from the National Longitudinal Survey of Youth 1997 is presented.

7.
Joint modeling of associated mixed biomarkers in longitudinal studies leads to better clinical decisions by improving the efficiency of parameter estimates. In many clinical studies, the observation times for two biomarkers may not coincide, and one of the longitudinal responses may have been recorded over a longer period than the other. In addition, the response variables may have different missing patterns. In this paper, we propose a new joint model for associated continuous and binary responses that accounts for the different missing patterns of the two longitudinal outcomes. A conditional model for jointly modeling the two responses is used, and two shared random effects models are considered for the intermittent missingness of the two responses. A Bayesian approach using Markov chain Monte Carlo (MCMC) is adopted for parameter estimation and model implementation. The validation and performance of the proposed model are investigated using simulation studies. The proposed model is also applied to a real data set from a bariatric surgery study.

8.
In this paper we consider the impact of both missing data and measurement errors on a longitudinal analysis of participation in higher education in Australia. We develop a general method for handling both discrete and continuous measurement errors that also allows for the incorporation of missing values and random effects in both binary and continuous response multilevel models. Measurement errors are allowed to be mutually dependent and their distribution may depend on further covariates. We show that our methodology works via two simple simulation studies. We then consider the impact of our measurement error assumptions on the analysis of the real data set.

9.
We propose methods for Bayesian inference for missing covariate data with a novel class of semi-parametric survival models with a cure fraction. We allow the missing covariates to be either categorical or continuous and specify a parametric distribution for the covariates that is written as a sequence of one-dimensional conditional distributions. We assume throughout that the missing covariates are missing at random (MAR). We propose an informative class of joint prior distributions for the regression coefficients and the parameters arising from the covariate distributions. The proposed class of priors is shown to be useful in recovering information on the missing covariates, especially in situations where the missing data fraction is large. Properties of the proposed prior and the resulting posterior distributions are examined. Model checking techniques are also proposed for sensitivity analyses and for checking the goodness of fit of a particular model. Specifically, we extend the Conditional Predictive Ordinate (CPO) statistic to assess goodness of fit in the presence of missing covariate data. Computational techniques using the Gibbs sampler are implemented. A real data set from a melanoma cancer clinical trial is examined to demonstrate the methodology.
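
The Conditional Predictive Ordinate mentioned here can be estimated from any set of posterior draws via the harmonic-mean identity CPO_i = (E_post[1/f(y_i | theta)])^{-1}. A generic sketch follows; a normal likelihood and fabricated posterior draws stand in for the paper's survival model with missing covariates, so everything below is illustrative rather than the authors' extension.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.default_rng(2)

# Toy data and fake posterior draws of (mu, sigma) -- stand-ins for real MCMC output.
y = rng.normal(1.0, 2.0, size=50)
mu_draws = rng.normal(y.mean(), 0.3, size=2000)
sigma_draws = np.abs(rng.normal(y.std(), 0.2, size=2000))

def cpo(y, mu_draws, sigma_draws):
    """CPO_i = 1 / mean_s[ 1 / f(y_i | theta_s) ], the harmonic-mean estimator."""
    # log f(y_i | theta_s) for every observation i and posterior draw s
    loglik = norm.logpdf(y[:, None], loc=mu_draws[None, :], scale=sigma_draws[None, :])
    # log CPO_i = log S - logsumexp_s(-loglik_is), computed stably
    S = loglik.shape[1]
    log_cpo = np.log(S) - logsumexp(-loglik, axis=1)
    return np.exp(log_cpo)

cpo_vals = cpo(y, mu_draws, sigma_draws)
print("LPML (sum of log CPO):", np.log(cpo_vals).sum())
print("smallest CPO values (potential outliers):", np.sort(cpo_vals)[:3])
```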

10.
In clinical trials, missing data commonly arise through nonadherence to the randomized treatment or to study procedures. For trials in which recurrent event endpoints are of interest, conventional analyses using the proportional intensity model or a count model assume that the data are missing at random, an assumption that cannot be tested using the observed data alone. Thus, sensitivity analyses are recommended. We implement control-based multiple imputation as a sensitivity analysis for recurrent event data. We model the recurrent events using a piecewise exponential proportional intensity model with frailty and sample the parameters from the posterior distribution. We impute the number of events after dropout and correct the variance estimate using a bootstrap procedure. We apply the method to data from a sitagliptin study.
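
A stripped-down version of control-based imputation for recurrent events might look like the sketch below: draw the control-arm event rate from a conjugate Gamma posterior, impute each patient's post-dropout events from that rate, and pool the completed-data estimates. The toy data, the constant-rate Poisson model (in place of the piecewise exponential model with frailty), and the omission of the bootstrap variance correction are all simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy recurrent-event data over a 1-year study: arm (0 = control, 1 = treated),
# follow-up actually observed (dropout before 1 year), and observed event counts.
arm = np.array([0] * 50 + [1] * 50)
followup = rng.uniform(0.3, 1.0, size=100)
events = rng.poisson(np.where(arm == 0, 2.0, 1.2) * followup)

def control_based_imputation(events, followup, arm, n_imp=20, rng=rng):
    """Impute post-dropout events from the control-arm rate ('jump to reference')."""
    # Gamma(0.1, 0.1) prior on the control event rate -> conjugate Gamma posterior.
    shape = 0.1 + events[arm == 0].sum()
    rate = 0.1 + followup[arm == 0].sum()
    log_rr = []
    for _ in range(n_imp):
        lam_control = rng.gamma(shape, 1.0 / rate)           # posterior draw of control rate
        extra = rng.poisson(lam_control * (1.0 - followup))  # imputed events after dropout
        total = events + extra                               # completed 1-year counts
        log_rr.append(np.log(total[arm == 1].mean() / total[arm == 0].mean()))
    return np.mean(log_rr), np.var(log_rr, ddof=1)

est, between_var = control_based_imputation(events, followup, arm)
print("pooled log rate ratio:", est, " between-imputation variance:", between_var)
```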

11.
In longitudinal studies, nonlinear mixed-effects models have been widely applied to describe intra- and inter-subject variation in the data. The inter-subject variation usually receives great attention, and it may be partially explained by time-dependent covariates. However, some covariates may be measured with substantial error and may contain missing values. We propose a multiple imputation method, implemented by a Markov chain Monte Carlo method with a Gibbs sampler, to address covariate measurement errors and missing data in nonlinear mixed-effects models. The multiple imputation method is illustrated in a real data example. Simulation studies show that the multiple imputation method outperforms the commonly used naive methods.

12.
In this paper, we propose a Bayesian partition model for lifetime data in the presence of a cure fraction, based on a local structure generated by a tessellation that depends on covariates. The model can include information on nominal qualitative variables with more than two categories as well as ordinal qualitative variables. It is built on a promotion time cure model structure but assumes that the number of competing causes follows a geometric distribution. It is an alternative to the conventional survival regression modeling generally used for lifetime data with a cure fraction, which models the cure fraction through a (generalized) linear model of the covariates. An advantage of our approach is its ability to capture the effects of covariates within a local structure. This flexibility is crucial for capturing local effects and features of the data. The model is illustrated on two real melanoma data sets.

13.
In practice, survival data are often collected over geographical regions. Shared spatial frailty models, often implemented using Bayesian Markov chain Monte Carlo methods, have been used to model spatial variation in survival times. However, this approach comes at the price of slow mixing rates and heavy computational cost, which may render it impractical for data-intensive applications. Alternatively, a frailty model assuming an independent and identically distributed (iid) random effect can be implemented easily and efficiently. We therefore used simulations to assess the bias and efficiency loss in the estimated parameters when residual spatial correlation is present but an iid random effect is used. Our simulations indicate that a shared frailty model with an iid random effect can estimate the regression coefficients reasonably well, even with residual spatial correlation present, when the percentage of censoring is not too high and the number of clusters and the cluster size are not too low. Therefore, if the primary goal is to assess the covariate effects, one may choose the frailty model with an iid random effect; whereas if the goal is to predict the hazard, additional care is needed because of the efficiency loss in the parameter(s) of the baseline hazard.

14.
Inference for epidemic parameters can be challenging, in part because the data are intrinsically stochastic, tend to be observed by means of discrete-time sampling, and are limited in their completeness. The problem is particularly acute when the likelihood of the data is computationally intractable, so that standard statistical techniques become too complicated to implement effectively. In this work, we develop a method, within the Bayesian paradigm, for susceptible-infected-removed stochastic epidemic models via data-augmented Markov chain Monte Carlo. This technique samples all missing values as well as the model parameters, treating both as random variables. The routines are based on approximating the discrete-time epidemic by a diffusion process. We illustrate our techniques using simulated epidemics and finally apply them to real data from the Eyam plague.
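
The data-augmented MCMC and diffusion approximation described here are beyond a short example, but the basic Bayesian treatment of an SIR-type epidemic can be conveyed with a toy random-walk Metropolis sampler for the escape probability of a discrete-time chain-binomial (Reed-Frost) model with fully observed counts. This is only a sketch under those simplifying assumptions; the paper's method additionally augments the unobserved parts of the epidemic.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(4)

def simulate(q, S0=200, I0=3, T=15):
    """Simulate a discrete-time Reed-Frost (chain-binomial) epidemic."""
    S, I = [S0], [I0]
    for _ in range(T):
        p_inf = 1.0 - q ** I[-1]            # prob. a susceptible is infected this step
        new_I = rng.binomial(S[-1], p_inf)
        S.append(S[-1] - new_I)
        I.append(new_I)
    return np.array(S), np.array(I)

S, I = simulate(q=0.98)

def loglik(q, S, I):
    p_inf = 1.0 - q ** I[:-1]
    return binom.logpmf(I[1:], S[:-1], p_inf).sum()

# Random-walk Metropolis on the escape probability q, with a uniform(0, 1) prior.
q, draws = 0.9, []
for it in range(20000):
    prop = q + rng.normal(0, 0.005)
    if 0 < prop < 1 and np.log(rng.uniform()) < loglik(prop, S, I) - loglik(q, S, I):
        q = prop
    if it >= 5000:
        draws.append(q)

print("posterior mean of q:", np.mean(draws), "(true value 0.98)")
```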

15.
Recurrent events involve the occurrence of the same type of event repeatedly over time and are commonly encountered in longitudinal studies. Examples include seizures in epilepsy studies or the occurrence of cancer tumors. In such studies, interest lies in the number of events that occur over a fixed period of time. One considerable challenge in analyzing such data arises when a large proportion of patients discontinues before the end of the study, for example because of adverse events, leading to partially observed data. In this situation, the data are often modeled using a negative binomial distribution with time-in-study as an offset. Such an analysis assumes that the data are missing at random (MAR). As the adequacy of MAR cannot be tested, sensitivity analyses that assess the robustness of conclusions across a range of different assumptions need to be performed. Sophisticated sensitivity analyses are frequently performed for continuous data, but this is less often the case for recurrent event or count data. We present a flexible approach for performing clinically interpretable sensitivity analyses for recurrent event data. Our approach fits into the framework of reference-based imputation, where information from reference arms is borrowed to impute post-discontinuation data. Different assumptions can be made about the future behavior of dropouts, depending on the reason for dropout and the treatment received. The imputation model is flexible and allows for time-varying baseline intensities. We assess the performance in a simulation study and provide an illustration with a clinical trial in patients who suffer from bladder cancer.
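
The conventional MAR analysis that this sensitivity analysis departs from, a negative binomial regression with log time-in-study as an offset, is easy to sketch. Below is a minimal version using statsmodels on simulated data; the dispersion parameter is fixed rather than estimated for brevity, and all data and settings are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Toy recurrent-event data: treatment arm, time in study (years), event count.
n = 200
treat = rng.integers(0, 2, n)
time_in_study = rng.uniform(0.2, 1.0, n)       # early dropout -> shorter exposure
rate = np.exp(0.8 - 0.5 * treat)               # true event rates per year
counts = rng.negative_binomial(n=2, p=2 / (2 + rate * time_in_study))

# Negative binomial regression with log(time in study) as offset (MAR analysis).
X = sm.add_constant(treat.astype(float))
model = sm.GLM(counts, X,
               family=sm.families.NegativeBinomial(alpha=0.5),  # dispersion fixed for simplicity
               offset=np.log(time_in_study))
fit = model.fit()
print(fit.summary())
print("estimated treatment rate ratio:", np.exp(fit.params[1]))
```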

16.
Several survival regression models have been developed to assess the effects of covariates on failure times. In various settings, including surveys, clinical trials and epidemiological studies, missing data often arise from incomplete covariate information. Most existing methods for lifetime data are based on the assumption of missing at random (MAR) covariates. However, in many substantive applications, it is important to assess the sensitivity of key model inferences to the MAR assumption. The index of sensitivity to non-ignorability (ISNI) is a local sensitivity tool that measures the potential sensitivity of key model parameters to small departures from the ignorability assumption, without the need to estimate a complicated non-ignorable model. We extend this sensitivity index to evaluate, using parametric survival models, the impact of a covariate that is potentially missing not at random in survival analysis. The approach is applied to investigate the impact of missing tumor grade on post-surgical mortality outcomes in individuals with pancreas-head cancer in the Surveillance, Epidemiology, and End Results data set. For patients suffering from cancer, tumor grade is an important risk factor, and many individuals with pancreas-head cancer in these data have missing tumor grade information. Our ISNI analysis shows that the estimated effects of most covariates with a significant effect on the survival time distribution, in particular surgery and tumor grade, two important risk factors in cancer studies, depend strongly on the assumed missingness mechanism for tumor grade. A simulation study is also conducted to evaluate the performance of the proposed index in detecting the sensitivity of key model parameters.

17.
Missing data, a common but challenging issue in most studies, may lead to biased and inefficient inferences if handled inappropriately. As a natural and powerful way of dealing with missing data, the Bayesian approach has received much attention in the literature. This paper reviews recent developments and applications of Bayesian methods for dealing with ignorable and non-ignorable missing data. We first introduce missing data mechanisms and the Bayesian framework for handling missing data, and then describe missing data models under ignorable and non-ignorable missingness, based on the literature. After that, important issues of Bayesian inference, including prior construction, posterior computation, model comparison and sensitivity analysis, are discussed. Finally, several open issues that deserve further research are summarized.

18.
The multiple imputation technique has proven to be a useful tool in missing data analysis. We propose a Markov chain Monte Carlo method to conduct multiple imputation for incomplete correlated ordinal data using the multivariate probit model. We conduct a thorough simulation study to compare the performance of our proposed method with two available imputation methods, the multivariate normal-based and chained equations methods, under various missing data scenarios. For illustration, we present an application using data from a smoking cessation treatment study for low-income community corrections smokers.

19.
The main statistical problem in many epidemiological studies which involve repeated measurements of surrogate markers is the frequent occurrence of missing data. Standard likelihood-based approaches like the linear random-effects model fail to give unbiased estimates when data are non-ignorably missing. In human immunodeficiency virus (HIV) type 1 infection, two markers which have been widely used to track progression of the disease are CD4 cell counts and HIV-ribonucleic acid (RNA) viral load levels. Repeated measurements of these markers tend to be informatively censored, which is a special case of non-ignorable missingness. In such cases, we need to apply methods that jointly model the observed data and the missingness process. Despite their high correlation, longitudinal data on these markers have mostly been analysed independently, using mainly random-effects models. Touloumi and co-workers have proposed a model, termed the joint multivariate random-effects model, which combines a linear random-effects model for the underlying pattern of the marker with a log-normal survival model for the drop-out process. We extend the joint multivariate random-effects model to model simultaneously the CD4 cell and viral load data while adjusting for informative drop-outs due to disease progression or death. Estimates of all the model's parameters are obtained by using the restricted iterative generalized least squares method or, in the case of censored survival data, a modified version of it that uses the EM algorithm as a nested algorithm and also takes into account non-linearity in the HIV-RNA trend. The method proposed is evaluated and compared with simpler approaches in a simulation study. Finally, the method is applied to a subset of the data from the 'Concerted action on seroconversion to AIDS and death in Europe' study.

20.
Cell counts in contingency tables can be smoothed using log-linear models. Recently, sampling-based methods such as Markov chain Monte Carlo (MCMC) have been introduced, making it possible to sample from posterior distributions. The novelty of the approach presented here is that all conditional distributions can be specified directly, so that straightforward Gibbs sampling is possible. Thus, the model is constructed in a way that makes burn-in and checking convergence a relatively minor issue. The emphasis of this paper is on smoothing cell counts in contingency tables, and not so much on estimation of regression parameters. Therefore, the prior distribution consists of two stages: a normal nonconjugate prior at the first stage, and a vague prior for the hyperparameters at the second stage. The smoothed counts tend to compromise between the observed data and a log-linear model. The methods are demonstrated with a sparse data table taken from a multi-center clinical trial.
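
The paper's Gibbs sampler uses normal non-conjugate priors on log-linear parameters; a far simpler way to see the same "compromise between the observed data and a log-linear model" is conjugate Poisson-Gamma shrinkage of the cell counts toward an independence fit, sketched below. This is a stand-in for the authors' method, and the toy table and tuning constant k are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Sparse two-way table of observed counts (toy data).
counts = np.array([[12, 0, 3],
                   [ 1, 5, 0],
                   [ 0, 2, 7]], dtype=float)

# Expected counts under the independence log-linear model.
total = counts.sum()
expected = np.outer(counts.sum(axis=1), counts.sum(axis=0)) / total

# Conjugate smoothing: cell mean ~ Gamma(k * expected, k), count ~ Poisson(mean).
# Larger k pulls the smoothed counts harder toward the log-linear (independence) fit.
k = 1.0
post_shape = counts + k * expected
post_rate = 1.0 + k
smoothed = post_shape / post_rate          # posterior mean of each cell mean
# Posterior draws are equally easy, e.g. for interval estimates:
draws = rng.gamma(post_shape, 1.0 / post_rate, size=(1000,) + counts.shape)

print(np.round(smoothed, 2))
```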
