Similar Literature
Found 20 similar articles (search time: 46 ms)
1.
The analysis of recurrent event data in clinical trials presents a number of difficulties. The statistician is faced with issues of event dependency, composite endpoints, unbalanced follow-up times and informative dropout. It is not unusual, therefore, for statisticians charged with providing reliable and valid analyses to need to derive new methods specific to the clinical indication under investigation. A method is proposed that appears to offer advantages over those often used in the analysis of recurrent event data in clinical trials. Based on an approach that counts periods of time with events instead of single event counts, the proposed method adjusts for patient time on study and incorporates heterogeneity by estimating an individual per-patient risk of experiencing a morbid event. Monte Carlo simulations based on real clinical study data demonstrate that the proposed method consistently outperforms other measures of morbidity. Copyright © 2003 John Wiley & Sons, Ltd.
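The abstract does not spell out the estimator, but the core idea — count periods containing at least one event and normalize by the number of periods the patient was actually observed — can be sketched as below. The 30-day period length and the helper name `period_event_rate` are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def period_event_rate(event_times, followup, period=30.0):
    """Fraction of observed periods (e.g. 30-day blocks) containing
    at least one event -- a per-patient risk that adjusts for time
    on study. Hypothetical helper; the paper's exact estimator is
    not given in the abstract."""
    n_periods = int(np.ceil(followup / period))
    hit = np.zeros(n_periods, dtype=bool)
    for t in event_times:
        if t < followup:
            hit[int(t // period)] = True  # a period counts once, however many events
    return hit.sum() / n_periods

# toy usage: unequal follow-up is handled by the denominator
print(period_event_rate([5, 12, 200], followup=365))   # 2 event-periods / 13
print(period_event_rate([40], followup=120))           # 1 event-period / 4
```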

2.
Recurrent events involve the occurrences of the same type of event repeatedly over time and are commonly encountered in longitudinal studies. Examples include seizures in epileptic studies or occurrence of cancer tumors. In such studies, interest lies in the number of events that occur over a fixed period of time. One considerable challenge in analyzing such data arises when a large proportion of patients discontinues before the end of the study, for example, because of adverse events, leading to partially observed data. In this situation, data are often modeled using a negative binomial distribution with time-in-study as offset. Such an analysis assumes that data are missing at random (MAR). As we cannot test the adequacy of MAR, sensitivity analyses that assess the robustness of conclusions across a range of different assumptions need to be performed. Sophisticated sensitivity analyses are frequently performed for continuous data, but less often for recurrent event or count data. We present a flexible approach to perform clinically interpretable sensitivity analyses for recurrent event data. Our approach fits into the framework of reference-based imputations, where information from reference arms can be borrowed to impute post-discontinuation data. Different assumptions can be made about the future behavior of dropouts depending on the reasons for dropout and the treatment received. The imputation model is based on a flexible model that allows for time-varying baseline intensities. We assess the performance in a simulation study and provide an illustration with a clinical trial in patients who suffer from bladder cancer. Copyright © 2015 John Wiley & Sons, Ltd.
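For context, the primary analysis the abstract describes — a negative binomial model for event counts with log time-in-study as offset — can be sketched with statsmodels. The simulated data, dispersion value, and effect sizes below are illustrative assumptions; the paper's reference-based imputation machinery is not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
treat = rng.integers(0, 2, n)                 # 0 = control, 1 = active
followup = rng.uniform(0.3, 1.0, n)           # time in study (years), shortened by dropout
rate = np.exp(1.0 - 0.5 * treat)              # true per-year event rates
# negative binomial counts via a gamma-Poisson mixture (dispersion alpha = 1)
y = rng.poisson(rate * followup * rng.gamma(1.0, 1.0, n))

X = sm.add_constant(treat.astype(float))
fit = sm.GLM(y, X,
             family=sm.families.NegativeBinomial(alpha=1.0),
             offset=np.log(followup)).fit()
print(fit.params)                             # [intercept, treatment log rate ratio]
```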

3.
In many medical studies, patients are followed longitudinally and interest is on assessing the relationship between longitudinal measurements and time to an event. Recently, various authors have proposed joint modeling approaches for longitudinal and time-to-event data for a single longitudinal variable. These joint modeling approaches become intractable with even a few longitudinal variables. In this paper we propose a regression calibration approach for jointly modeling multiple longitudinal measurements and discrete time-to-event data. Ideally, a two-stage modeling approach could be applied in which the multiple longitudinal measurements are modeled in the first stage and the longitudinal model is related to the time-to-event data in the second stage. Biased parameter estimation due to informative dropout makes this direct two-stage modeling approach problematic. We propose a regression calibration approach which appropriately accounts for informative dropout. We approximate the conditional distribution of the multiple longitudinal measurements given the event time by modeling all pairwise combinations of the longitudinal measurements using a bivariate linear mixed model which conditions on the event time. Complete data are then simulated based on estimates from these pairwise conditional models, and regression calibration is used to estimate the relationship between longitudinal data and time-to-event data using the complete data. We show that this approach performs well in estimating the relationship between multivariate longitudinal measurements and the time-to-event data and in estimating the parameters of the multiple longitudinal process subject to informative dropout. We illustrate this methodology with simulations and with an analysis of primary biliary cirrhosis (PBC) data.

4.
Time-to-event data have been extensively studied in many areas. Although multiple time scales are often observed, commonly used methods are based on a single time scale. Analysing time-to-event data on two time scales can offer a more extensive insight into the phenomenon. We introduce a non-parametric Bayesian intensity model to analyse a two-dimensional point process on Lexis diagrams. After a simple discretization of the two-dimensional process, we model the intensity by one-dimensional piecewise constant hazard functions parametrized by the change points and corresponding hazard levels. Our prior distribution incorporates a built-in smoothing feature in two dimensions. We implement posterior simulation using the reversible jump Metropolis–Hastings algorithm and demonstrate the applicability of the method using both simulated and empirical survival data. Our approach outperforms commonly applied models by borrowing strength in two dimensions.
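The reversible-jump sampler itself is beyond a short sketch, but the discretization step the abstract starts from — binning events and exposure on a Lexis grid and forming cell-wise occurrence/exposure rates — can be illustrated as follows. The grid widths and the crude exposure attribution are simplifying assumptions (a proper Lexis split apportions exposure along each 45-degree life line).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
birth = rng.uniform(1950, 1990, n)            # calendar time at birth/entry
age_end = rng.weibull(1.5, n) * 60            # age at event or censoring
event = rng.random(n) < 0.7                   # True = event, False = censored

age_grid = np.arange(0, 91, 10)               # 10-year age bands
per_grid = np.arange(1950, 2051, 10)          # 10-year calendar-period bands
deaths = np.zeros((len(age_grid) - 1, len(per_grid) - 1))
expo = np.zeros_like(deaths)

for b, a, d in zip(birth, age_end, event):
    i = np.searchsorted(age_grid, a, side="right") - 1
    j = np.searchsorted(per_grid, b + a, side="right") - 1
    if 0 <= i < deaths.shape[0] and 0 <= j < deaths.shape[1]:
        deaths[i, j] += d
        # crude: all follow-up attributed to the terminal cell; a real
        # Lexis split apportions exposure along the 45-degree life line
        expo[i, j] += a

with np.errstate(invalid="ignore", divide="ignore"):
    hazard = deaths / expo                    # occurrence/exposure rate per cell
print(deaths.sum() / expo.sum())              # overall crude rate
```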

5.
The statistical properties of control charts are usually evaluated under the assumption that the observations from the process are independent. For many processes, however, observations that are closely spaced in time will be correlated. This paper considers EWMA and CUSUM control charts for the process mean when the observations are from an AR(1) process with additional random error. This simple model may be reasonable for many processes encountered in practice. The ARL and steady-state ARL of the EWMA and CUSUM charts are evaluated numerically using an integral equation approach and a Markov chain approach. The numerical results show that correlation can have a significant effect on the properties of these charts. Tables are given to aid in the design of these charts when the observations follow the assumed model.
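A Monte Carlo stand-in for the paper's integral-equation/Markov-chain evaluation: simulate the AR(1)-plus-error model and record how long a nominally designed EWMA chart runs before signalling. All parameter values below are illustrative assumptions.

```python
import numpy as np

def arl_ewma_ar1(phi=0.5, sigma_e=0.5, lam=0.2, L=3.0, reps=2000, seed=0):
    """In-control ARL of an EWMA chart when observations are AR(1)
    plus independent measurement error (unit innovation variance)."""
    rng = np.random.default_rng(seed)
    sd_x = np.sqrt(1.0 / (1.0 - phi**2) + sigma_e**2)   # stationary sd of x_t
    h = L * sd_x * np.sqrt(lam / (2.0 - lam))           # asymptotic limits
    run_lengths = []
    for _ in range(reps):
        w = rng.normal(0.0, np.sqrt(1.0 / (1.0 - phi**2)))  # stationary start
        z, t = 0.0, 0
        while True:
            t += 1
            w = phi * w + rng.normal()
            x = w + sigma_e * rng.normal()
            z = (1 - lam) * z + lam * x
            if abs(z) > h or t > 50_000:
                break
        run_lengths.append(t)
    return np.mean(run_lengths)

# autocorrelation changes the in-control ARL relative to the iid design
print(arl_ewma_ar1(phi=0.0), arl_ewma_ar1(phi=0.8))
```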

6.
Informative dropout is a vexing problem for any biomedical study. Most existing statistical methods attempt to correct the estimation bias related to this phenomenon by specifying unverifiable assumptions about the dropout mechanism. We consider a cohort study in Africa that uses an outreach programme to ascertain the vital status of dropout subjects. These data can be used to identify a number of relevant distributions. However, as only a subset of dropout subjects was followed, vital status ascertainment was incomplete. We use semi-competing risks methods as our analysis framework to address this specific case where the terminal event is incompletely ascertained, and consider various procedures for estimating the marginal distribution of dropout and the marginal and conditional distributions of survival. We also consider model selection and estimation efficiency in our setting. Performance of the proposed methods is demonstrated via simulations, an asymptotic study, and an analysis of the study data.

7.
One difficulty in developing multivariate attribute control charts is the lack of the related joint distribution. If the joint distribution of two (or more) attribute characteristics can be generated, then a bivariate (or multivariate) attribute control chart can be developed based on Type I and II errors. The copula function provides a solution. In this article, applying the copula function approach, we obtain the joint distribution of two correlated zero-inflated Poisson (ZIP) distributions. Using this joint distribution, we develop a bivariate control chart for monitoring correlated rare events. This copula-based bivariate ZIP control chart is compared with the simultaneous use of two separate univariate ZIP control charts. Based on the average run length (ARL) measure, the proposed control chart is shown to be much better than the simultaneous use of two separate univariate charts. In addition, a real case study related to the environmental air in a sterilization process is investigated to show the applicability of the developed control chart.
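The abstract does not name the copula family, so the sketch below assumes a Gaussian copula to generate correlated ZIP counts — the kind of data such a chart monitors. The marginal parameters and correlation value are illustrative.

```python
import numpy as np
from scipy.stats import norm, poisson

def correlated_zip(n, lam1=2.0, lam2=3.0, p1=0.3, p2=0.3, rho=0.6, seed=0):
    """Correlated zero-inflated Poisson pairs via a Gaussian copula:
    correlated normals -> uniforms -> ZIP quantiles."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = norm.cdf(z)

    def zip_ppf(q, p, lam):
        # ZIP cdf is F(x) = p + (1 - p) * PoissonCDF(x); invert it
        out = np.zeros_like(q, dtype=int)
        m = q > p
        out[m] = poisson.ppf((q[m] - p) / (1 - p), lam).astype(int)
        return out

    return zip_ppf(u[:, 0], p1, lam1), zip_ppf(u[:, 1], p2, lam2)

x1, x2 = correlated_zip(5000)
print(np.mean(x1 == 0), np.mean(x2 == 0))   # excess zeros from inflation
print(np.corrcoef(x1, x2)[0, 1])            # induced positive correlation
```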

8.
This paper discusses regression analysis of panel count data with dependent observation and dropout processes. For the problem, a general mean model is presented that can allow both additive and multiplicative effects of covariates on the underlying point process. In addition, the proportional rates model and the accelerated failure time model are employed to describe possible covariate effects on the observation process and the dropout or follow-up process, respectively. For estimation of regression parameters, some estimating equation-based procedures are developed and the asymptotic properties of the proposed estimators are established. In addition, a resampling approach is proposed for estimating a covariance matrix of the proposed estimator and a model checking procedure is also provided. Results from an extensive simulation study indicate that the proposed methodology works well for practical situations, and it is applied to a motivating set of real data.

9.
The last observation carried forward (LOCF) approach is commonly utilized to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood-based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood-based MAR approach – mixed-model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SEΔ from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SEΔ from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SEΔ from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of MNAR missingness. No universally best approach to analysis of longitudinal data exists. However, likelihood-based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. MNAR methods can be used within a sensitivity analysis framework to test the potential presence and impact of MNAR data, thereby assessing robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
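A toy version of such a simulation — MAR dropout, then an LOCF endpoint summary versus a likelihood-based analysis of all available data — is sketched below. A random-intercept mixed model stands in for a full MMRM with unstructured covariance, and all simulation settings are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for i in range(200):
    trt = i % 2
    b = rng.normal(0, 1.5)                         # subject random intercept
    for t in range(4):                             # four scheduled visits
        y = 10.0 - 1.0 * t + b + rng.normal(0, 1)  # no true treatment effect
        rows.append((i, t, trt, y))
        if rng.random() < 0.15 * (y > 10):         # MAR: drop if doing poorly
            break
df = pd.DataFrame(rows, columns=["subj", "time", "treat", "y"])

# LOCF: each subject's last observed value becomes the endpoint
locf = df.groupby("subj").last().reset_index()
print(locf.groupby("treat")["y"].mean())           # endpoint means by arm

# likelihood-based analysis of all available data (random-intercept
# model as a simple stand-in for a full MMRM)
mmrm = smf.mixedlm("y ~ time * treat", df, groups=df["subj"]).fit()
print(mmrm.params["time:treat"], mmrm.bse["time:treat"])
```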

10.
An alternative approach for analyzing the performance of one-sided CUSUM charts with variable sampling intervals (VSI) is proposed. In this approach, a Markov chain with some dummy states is used. The approach yields dynamic performance measures of VSI CUSUM charts, such as the distribution of the time to signal and the average time to signal against a change-point. Some numerical results are shown, and from these results the dynamic performance of VSI CUSUM charts is discussed.
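For reference, the fixed-interval version of the Markov-chain method (Brook and Evans) can be sketched as below: discretize the CUSUM statistic into m transient states and obtain the ARL from (I - Q)^(-1) applied to a vector of ones. The dummy-state VSI extension in the paper is not reproduced; k, h, and m are illustrative.

```python
import numpy as np
from scipy.stats import norm

def cusum_arl_markov(k=0.5, h=5.0, shift=0.0, m=200):
    """Zero-start ARL of a one-sided CUSUM, S_t = max(0, S_{t-1} + X_t - k),
    via the Brook-Evans Markov-chain approximation: m transient states on
    [0, h), an absorbing signal state, ARL = first entry of (I - Q)^(-1) 1."""
    w = h / m
    centers = (np.arange(m) + 0.5) * w
    lower, upper = np.arange(m) * w, np.arange(1, m + 1) * w
    Q = np.empty((m, m))
    for i, c in enumerate(centers):
        Q[i] = norm.cdf(upper - c + k, loc=shift) - norm.cdf(lower - c + k, loc=shift)
        Q[i, 0] = norm.cdf(w - c + k, loc=shift)   # state 0 absorbs the reset at zero
    return np.linalg.solve(np.eye(m) - Q, np.ones(m))[0]

print(cusum_arl_markov())            # in-control ARL, roughly 465 for k=0.5, h=5
print(cusum_arl_markov(shift=1.0))   # much shorter ARL after a 1-sigma shift
```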

11.
A composite endpoint consists of multiple endpoints combined in one outcome. It is frequently used as the primary endpoint in randomized clinical trials. There are two main disadvantages associated with the use of composite endpoints: a) in conventional analyses, all components are treated as equally important; and b) in time-to-event analyses, the first event considered may not be the most important component. Recently, Pocock et al. (2012) introduced the win ratio method to address these disadvantages. This method has two alternative approaches: the matched pair approach and the unmatched pair approach. In the unmatched pair approach, the confidence interval is constructed based on bootstrap resampling, and the hypothesis testing is based on the non-parametric method by Finkelstein and Schoenfeld (1999). Luo et al. (2015) developed a closed-form variance estimator of the win ratio for the unmatched pair approach, based on a composite endpoint with two components and a specific algorithm determining winners, losers and ties. We extend the unmatched pair approach to provide a generalized analytical solution to both hypothesis testing and confidence interval construction for the win ratio, based on its logarithmic asymptotic distribution. This asymptotic distribution is derived via U-statistics following Wei and Johnson (1985). We perform simulations assessing the confidence intervals constructed based on our approach versus those per the bootstrap resampling and per Luo et al. We have also applied our approach to a liver transplant Phase III study. This application and the simulation studies show that the win ratio can be a better statistical measure than the odds ratio when the importance order among components matters, and that our method and that of Luo et al., although derived from large-sample theory, are not limited to large samples but also perform well for relatively small sample sizes. Unlike Pocock et al. and Luo et al., our approach is a generalized analytical method valid for any algorithm determining winners, losers and ties. Copyright © 2016 John Wiley & Sons, Ltd.
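The point estimate itself is simple to compute. Below is a minimal sketch of the unmatched-pairs win ratio for a two-component composite (death first, then hospitalization), using one common winner/loser rule; the paper's U-statistic variance is not reproduced, and the toy data are illustrative.

```python
def compare(a_time, a_event, b_time, b_event):
    """+1 if a wins on this component (b's event is observed first while
    a is still at risk), -1 if a loses, 0 if the pair is undecided."""
    if b_event and b_time < a_time:
        return +1
    if a_event and a_time < b_time:
        return -1
    return 0

def win_ratio(trt, ctl):
    """Unmatched-pairs win ratio: every treatment-control pair is
    compared on death first, then on hospitalization."""
    wins = losses = 0
    for dt, de, ht, he in trt:
        for du, de2, hu, he2 in ctl:
            r = compare(dt, de, du, de2)       # death decides first
            if r == 0:
                r = compare(ht, he, hu, he2)   # then hospitalization
            wins += (r == +1)
            losses += (r == -1)
    return wins / losses

# toy records: (death_time, death_observed, hosp_time, hosp_observed)
trt = [(10, 0, 4, 1), (8, 1, 3, 1), (12, 0, 12, 0)]
ctl = [(6, 1, 2, 1), (9, 0, 5, 1), (7, 1, 7, 0)]
print(win_ratio(trt, ctl))
```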

12.
Survival studies usually collect, on each participant, both the duration until some terminal event and repeated measures of a time-dependent covariate. Such a covariate is referred to as an internal time-dependent covariate. Usually, some subjects drop out of the study before the occurrence of the terminal event of interest. One may then wish to evaluate the relationship between time to dropout and the internal covariate. The Cox model is a standard framework for that purpose. Here, we address this problem in situations where the value of the covariate at dropout is unobserved. We suggest a joint model which combines a first-order Markov model for the longitudinally measured covariate with a time-dependent Cox model for the dropout process. We consider maximum likelihood estimation in this model and show how estimation can be carried out via the EM algorithm. The suggested joint model may have applications in the context of longitudinal data with nonignorable dropout. Indeed, it can be viewed as generalizing Diggle and Kenward's model (1994) to situations where dropout may occur at any point in time and may be censored. Hence we apply both models and compare their results on a data set concerning longitudinal measurements among patients in a cancer clinical trial.

13.
The authors address the problem of estimating an inter-event distribution on the basis of count data. They derive a nonparametric maximum likelihood estimate of the inter-event distribution utilizing the EM algorithm, both in the case of an ordinary renewal process and in the case of an equilibrium renewal process. In the latter case, the iterative estimation procedure follows the basic scheme proposed by Vardi for estimating an inter-event distribution on the basis of time-interval data; it combines the outputs of the E-step corresponding to the inter-event distribution and to the length-biased distribution. The authors also investigate a penalized likelihood approach to provide the proposed estimation procedure with regularization capabilities. They evaluate the practical estimation procedure using simulated count data and apply it to real count data representing the elongation of coffee-tree leafy axes.

14.
Traditional multivariate control charts are based upon the assumption that the observations follow a multivariate normal distribution. In many practical applications, however, this supposition may be difficult to verify. In this paper, we use control charts based on robust estimators of location and scale to improve the capability of detecting out-of-control observations under non-normality in the presence of multiple outliers. Concretely, we use a simulation study to analyse the behaviour of robust alternatives to Hotelling's T2 based on the minimum volume ellipsoid (MVE) and minimum covariance determinant (MCD) estimators in the presence of observations from a Student's t-distribution. The results show that these robust control charts are good alternatives under small deviations from normality, as the percentage of out-of-control observations detected by these charts in Phase II is higher.
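A minimal Phase I sketch of the MCD-based variant: estimate robust location and scatter with scikit-learn's MinCovDet and flag points whose robust squared Mahalanobis distances exceed an approximate chi-squared limit. The planted outliers, dimensions, and the chi-squared limit are illustrative simplifications; the paper's charts use their own control limits.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(3)
X = rng.standard_t(df=5, size=(100, 3))        # heavy-tailed Phase I data
X[:5] += 6.0                                   # five planted outliers

mcd = MinCovDet(random_state=0).fit(X)         # robust location and scatter
d2 = mcd.mahalanobis(X)                        # robust squared distances (T2-like)
ucl = chi2.ppf(0.995, df=X.shape[1])           # approximate control limit
print(np.where(d2 > ucl)[0])                   # indices of flagged observations
```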

15.
Exponentially weighted moving average (EWMA) control charts are widely used in the chemical and process industries because of their excellent speed in catching small to moderate shifts in the process target. In practice, data often come from a process where the monitoring statistic is non-normally distributed or follows an unknown probability distribution, which necessitates distribution-free (nonparametric) control charts for monitoring deviations from the process target. In this paper, we integrate the existing EWMA sign chart with the conforming run length chart to propose a new synthetic EWMA (SynEWMA) sign chart for monitoring the process mean. The SynEWMA sign chart encompasses the synthetic sign and EWMA sign charts. Monte Carlo simulations are used to compute the run length profiles of the SynEWMA sign chart. A comprehensive comparison shows that the SynEWMA sign chart performs substantially better than the existing EWMA sign chart. Both real and simulated data sets are used to illustrate the implementation of the existing and proposed control charts.
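The building block — an EWMA applied to the sign statistic — is easy to simulate. Below is an illustrative run-length simulation of the plain EWMA sign chart; the conforming-run-length (synthetic) layer of the SynEWMA chart is not reproduced, and all design constants are assumptions.

```python
import numpy as np

def ewma_sign_rl(n=10, lam=0.1, L=2.7, shift=0.0, reps=2000, seed=0):
    """Average run length of an EWMA chart on the sign statistic
    S_t = #(X_i > target) - #(X_i < target) for samples of size n.
    In control with a continuous distribution, E[S] = 0, Var[S] = n."""
    rng = np.random.default_rng(seed)
    h = L * np.sqrt(n * lam / (2 - lam))      # asymptotic limits
    rls = []
    for _ in range(reps):
        z, t = 0.0, 0
        while True:
            t += 1
            x = rng.normal(shift, 1.0, n)     # any continuous dist would do
            s = np.sum(np.sign(x))            # sign statistic vs target 0
            z = (1 - lam) * z + lam * s
            if abs(z) > h or t > 100_000:
                break
        rls.append(t)
    return np.mean(rls)

print(ewma_sign_rl(), ewma_sign_rl(shift=0.5))
```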

16.
For clinical trials with multiple endpoints, the primary interest is usually to evaluate the relationship between these endpoints and treatment interventions. Studying the correlation of two clinical trial endpoints can also be of interest. For example, the association between a patient-reported outcome and a clinically assessed endpoint could answer important research questions and also generate interesting hypotheses for future research. However, it is not straightforward to quantify such an association. In this article, we propose a multiple-event approach to profile such an association with a temporal correlation function, visualized by a correlation function plot over time with a confidence band. We developed this approach by extending existing methodology in the recurrent event literature. The approach is shown to be generally unbiased and can be a useful tool for data visualization and inference. We demonstrate the use of this method with data from a real clinical trial. Although this approach was developed to evaluate the association between patient-reported outcomes and adverse events, it can also be used to evaluate the association of any two endpoints that can be translated to time-to-event endpoints. Copyright © 2015 John Wiley & Sons, Ltd.

17.
Control charts are widely used as a user-friendly yet technically sophisticated tool to monitor whether a process is in statistical control. These charts are typically constructed under a normality assumption, which may be violated in many practical situations. One such situation is monitoring process variability from a skewed parent distribution, for which we propose a Maxwell control chart. We introduce a pivotal quantity for the scale parameter of the Maxwell distribution which follows a gamma distribution. Probability limits and L-sigma limits are studied, along with performance measures based on the average run length and the power curve. To spare practitioners complex calculations, factors for constructing the control chart for monitoring the Maxwell parameter are tabulated for different sample sizes and false alarm rates. We also provide simulated data to illustrate the Maxwell control chart. Finally, a real-life example is given to show the importance of such a control chart.
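A sketch of probability limits from the gamma (chi-squared) pivot the abstract mentions: for Maxwell data with scale sigma, sum(X_i^2)/sigma^2 ~ chi2(3n), so limits for V = sum(X_i^2)/(3n) follow directly. The sample size and false-alarm rate are illustrative, and this may differ in detail from the paper's tabulated factors.

```python
import numpy as np
from scipy.stats import chi2, maxwell

def maxwell_v_limits(sigma0=1.0, n=5, alpha=0.0027):
    """Probability limits for V = sum(X_i^2) / (3n), an estimator of
    sigma^2 for Maxwell data: sum(X_i^2) / sigma0^2 ~ chi2(3n)."""
    lcl = sigma0**2 * chi2.ppf(alpha / 2, 3 * n) / (3 * n)
    ucl = sigma0**2 * chi2.ppf(1 - alpha / 2, 3 * n) / (3 * n)
    return lcl, ucl

lcl, ucl = maxwell_v_limits()
sample = maxwell.rvs(scale=1.0, size=5, random_state=0)   # in-control sample
v = np.sum(sample**2) / (3 * len(sample))
print(lcl, v, ucl)                                        # v should fall inside
```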

18.
Many process characteristics follow an exponential distribution, and control charts based on such a distribution have attracted a lot of attention. However, traditional control limits may not be appropriate because of the lack of symmetry. In this paper, process monitoring through a normalizing power transformation is studied. The traditional individual measurement control charts can be used on the transformed data. The properties of this control chart are investigated. A comparison with the chart using probability limits is also carried out for cases of known and estimated parameters. Even compared with the exact probability limits, the power transformation approach loses little accuracy and can easily produce charts that can be interpreted whenever the normality assumption is valid.
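A sketch of the transformation idea using the classical normalizing power X^(1/3.6) for exponential data, followed by a plain individuals chart on the transformed scale. Using the sample standard deviation (rather than the moving range) and three-sigma limits are simplifying assumptions; the paper's exact transformation and limits may differ.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=200)     # Phase I exponential data

y = x ** (1 / 3.6)                           # near-normal after transformation
ybar, s = y.mean(), y.std(ddof=1)
lcl, ucl = ybar - 3 * s, ybar + 3 * s        # individuals chart on y
print(lcl, ucl)

x_new = rng.exponential(scale=6.0, size=10)  # shifted process
y_new = x_new ** (1 / 3.6)
print((y_new < lcl) | (y_new > ucl))         # signals on the transformed scale
```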

19.
Clinical trials with event-time outcomes as co-primary contrasts are common in many areas such as infectious disease, oncology, and cardiovascular disease. We discuss methods for calculating the sample size for randomized superiority clinical trials with two correlated time-to-event outcomes as co-primary contrasts when the time-to-event outcomes are exponentially distributed. The approach is simple and easily applied in practice. Copyright © 2012 John Wiley & Sons, Ltd.
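A crude version of such a calculation can be sketched with a bivariate normal approximation: both log hazard-ratio tests must be significant, var(log HR) ≈ 4/d with d total expected events, and the two test statistics are taken as bivariate normal with correlation rho. The event probability, hazard ratios, and correlation below are illustrative assumptions; the paper's formulas handle the exponential event-time structure more carefully.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def joint_power(n_per_arm, hr1, hr2, rho, p_event=0.7, alpha=0.025):
    """Power to win on BOTH endpoints at one-sided alpha each.
    var(log HR) ~= 4/d with d total expected events (1:1 allocation);
    the two log-rank-type statistics are taken as bivariate normal."""
    d = 2 * n_per_arm * p_event
    mu = np.array([-np.log(hr1), -np.log(hr2)]) * np.sqrt(d / 4.0)
    z = norm.ppf(1 - alpha)
    cov = [[1.0, rho], [rho, 1.0]]
    # P(Z1 > z, Z2 > z) for Z ~ N(mu, cov): negate both coordinates
    return multivariate_normal(mean=-mu, cov=cov).cdf([-z, -z])

n = 50
while joint_power(n, hr1=0.7, hr2=0.75, rho=0.5) < 0.80:
    n += 5
print(n)   # smallest per-arm n reaching 80% joint power under these assumptions
```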

20.
In this paper various types of EWMA control charts are introduced for the simultaneous monitoring of the mean and the autocovariances. The target process is assumed to be a stationary process up to fourth order or an ARMA process with heavy-tailed innovations; the case of a Gaussian process is included in our results as well. The charts are compared within a simulation study, with the average run length taken as the performance measure. The target process is an ARMA(1,1) process with Student-t distributed innovations. The behavior of the charts is analyzed with respect to several out-of-control models, and the best design parameters are determined for each chart. Our comparisons show that the multivariate EWMA chart applied to the residuals has the best overall performance.
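A univariate simplification of the residual-based idea: fit an ARMA(1,1) to in-control data with heavy-tailed innovations and run an EWMA on the one-step-ahead residuals. The paper's best performer is a multivariate EWMA on residuals that monitors mean and autocovariances jointly; the constants and the Phase I/II split below are assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)
n = 600
eps = rng.standard_t(df=5, size=n)              # heavy-tailed innovations
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + eps[t] + 0.3 * eps[t - 1]   # ARMA(1,1)

res = ARIMA(x, order=(1, 0, 1)).fit().resid     # one-step-ahead residuals
mu, sd = res[:300].mean(), res[:300].std(ddof=1)        # Phase I estimates

lam, L = 0.1, 2.8
h = L * sd * np.sqrt(lam / (2 - lam))           # asymptotic EWMA limits
z, n_signals = mu, 0
for t in range(300, n):                         # Phase II monitoring
    z = (1 - lam) * z + lam * res[t]
    n_signals += abs(z - mu) > h
print(n_signals)
```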
