Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
2.
In clinical practice, each subject's CD4 response profile from a longitudinal study may follow a 'broken-stick'-like trajectory, indicating multiple phases of increase and/or decline in response. Such phases (changepoints) may be important indicators that help quantify treatment effects and improve the management of patient care. Although it is common practice to analyze complex AIDS longitudinal data using nonlinear mixed-effects (NLME) or nonparametric mixed-effects (NPME) models, estimating changepoints within these models is challenging because of the complicated structure of their formulations. In this paper, we propose a changepoint mixed-effects model with random subject-specific parameters, including the changepoint, for the analysis of longitudinal CD4 cell counts in HIV-infected subjects receiving highly active antiretroviral treatment. The longitudinal CD4 data in this study may exhibit departures from symmetry; may contain missing observations for various reasons, which are likely to be non-ignorable in the sense that missingness may be related to the missing values; and may be censored when a subject goes off study treatment, a potentially informative dropout mechanism. Inferential procedures become dramatically more complicated when longitudinal CD4 data with asymmetry (skewness), incompleteness, and informative dropout are observed in conjunction with an unknown changepoint. Our objective is to address the simultaneous impact of skewness, missingness, and informative censoring by jointly modeling the CD4 response and dropout time processes under a Bayesian framework. The method is illustrated using a real AIDS data set to compare potential models under various scenarios, and some results of interest are presented.
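A minimal sketch of the 'broken-stick' idea described above, assuming a simple piecewise-linear mean with a single changepoint fitted by nonlinear least squares to simulated CD4 counts for one subject; the paper's actual model adds random effects, skew error distributions, and an informative-dropout process, none of which appear here.

```python
# Broken-stick changepoint trajectory for one subject (simulated data only).
import numpy as np
from scipy.optimize import curve_fit

def broken_stick(t, b0, b1, b2, tau):
    """Piecewise-linear mean: slope b1 before the changepoint tau, b1+b2 after."""
    return b0 + b1 * t + b2 * np.maximum(t - tau, 0.0)

rng = np.random.default_rng(1)
t = np.linspace(0, 48, 25)                        # weeks on treatment
true = broken_stick(t, 250.0, 12.0, -10.0, 20.0)  # CD4 rises, then plateaus
y = true + rng.normal(0, 20, t.size)              # noisy CD4 counts

# Fit all four parameters, including the changepoint tau.
p0 = [200.0, 5.0, -5.0, 24.0]                     # rough starting values
est, _ = curve_fit(broken_stick, t, y, p0=p0)
print("estimated (b0, b1, b2, tau):", np.round(est, 2))
```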

3.
In longitudinal data, missing observations commonly occur in both responses and covariates. The missingness can follow a 'missing not at random' mechanism and a non‐monotone pattern, and the response and covariates need not be missing simultaneously. To avoid complexities in both modelling and computation, a two‐stage estimation method and a pairwise‐likelihood method have been proposed. The two‐stage method is computationally simple but incurs a more severe efficiency loss; the pairwise approach yields more efficient estimators but can be computationally cumbersome. In this paper, we develop a compromise using a hybrid pairwise‐likelihood framework. Our proposed approach is more efficient than the two‐stage method, yet its computational cost remains reasonable compared with the pairwise approach. The performance of the methods is evaluated empirically by means of simulation studies, and they are used to analyse longitudinal data obtained from the National Population Health Study.
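To make the pairwise-likelihood idea concrete, here is a minimal sketch under strong simplifying assumptions (exchangeable normal responses with unit variance, missing-completely-at-random holes): the composite likelihood sums bivariate normal log-densities over all observed within-subject pairs. The paper's hybrid method and its missing-not-at-random machinery are not reproduced.

```python
# Pairwise (composite) likelihood over observed within-subject pairs.
import numpy as np
from itertools import combinations
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
n, m, rho_true = 100, 4, 0.5
cov = rho_true + (1 - rho_true) * np.eye(m)       # exchangeable correlation
Y = rng.multivariate_normal(np.full(m, 1.0), cov, size=n)
Y[rng.random(Y.shape) < 0.2] = np.nan             # MCAR holes for illustration

# Stack every observed within-subject pair once, up front.
pairs = np.array([[row[j], row[k]]
                  for row in Y
                  for j, k in combinations(np.flatnonzero(~np.isnan(row)), 2)])

def neg_pairwise_loglik(theta):
    mu, rho = theta
    return -multivariate_normal.logpdf(
        pairs, mean=[mu, mu], cov=[[1.0, rho], [rho, 1.0]]).sum()

fit = minimize(neg_pairwise_loglik, x0=[0.0, 0.1],
               bounds=[(None, None), (-0.99, 0.99)])
print("pairwise estimates (mu, rho):", np.round(fit.x, 3))
```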

4.
We focus on regression analysis of irregularly observed longitudinal data, which often arise in medical follow-up studies and observational investigations. The model for such data involves two processes: a longitudinal response process of interest and an observation process controlling the observation times. Previous work on such data imposed restrictive models and questionable assumptions, such as the Poisson assumption and the assumption of independent censoring times. In this paper, we propose a more general model, together with a robust estimation approach, for longitudinal data with informative observation times and censoring times, and we establish the asymptotic normality of the proposed estimators. Both simulation studies and a real data application indicate that the proposed method is promising.

5.
Crossover designs are popular in early phases of clinical trials and in bioavailability and bioequivalence studies. Assessment of carryover effects, in addition to treatment effects, is a critical issue in crossover trials. The observed data from a crossover trial can be incomplete because of potential dropouts. This article proposes a joint model for analyzing incomplete data from crossover trials; the model includes a measurement model and an outcome-dependent informative model for the dropout process. The informative-dropout model is compared with the ignorable-dropout model, specific cases of which are nested subcases of the proposed joint model. Markov chain sampling methods are used for Bayesian analysis of the model. The joint model is used to analyze depression score data from a clinical trial in women with late luteal phase dysphoric disorder. Interestingly, the carryover effect is strong in the informative-dropout model, but it is less significant when dropout is considered ignorable.

6.
Early phase 2 tuberculosis (TB) trials are conducted to characterize the early bactericidal activity (EBA) of anti‐TB drugs. The EBA of anti‐TB drugs has conventionally been calculated as the rate of decline in colony forming unit (CFU) count during the first 14 days of treatment. The measurement of CFU count, however, is expensive and prone to contamination. As an alternative to CFU count, time to positivity (TTP), a potential biomarker for the long‐term efficacy of anti‐TB drugs, can be used to characterize EBA. The current Bayesian nonlinear mixed‐effects (NLME) regression model for TTP data, however, lacks robustness to the gross outliers that are often present in such data. The conventional way of handling these outliers involves identifying them by visual inspection and excluding them from the analysis, a process that is questionable because of its subjective nature. For this reason, we fitted robust versions of the Bayesian NLME regression model to a wide range of TTP datasets. The performance of the explored models was assessed through model comparison statistics and a simulation study. We conclude that fitting a robust model to TTP data obviates the need for explicit identification and subsequent "deletion" of outliers while ensuring that gross outliers exert no undue influence on model fits. We recommend that the current practice of fitting conventional normal-theory models be abandoned in favor of fitting robust models to TTP data.
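The robustness argument can be illustrated with a deliberately simplified sketch: fit a linear TTP trend by maximum likelihood under normal errors and under heavy-tailed Student-t errors (df = 4, an assumption) and observe how a single gross outlier pulls the normal fit but not the t fit. The paper's model is a Bayesian nonlinear mixed-effects model; none of the data or parameter values below are from it.

```python
# Normal vs. Student-t maximum likelihood for a linear TTP trend with one
# gross outlier (all data simulated).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, t as student_t

rng = np.random.default_rng(3)
day = np.arange(0, 15, dtype=float)
ttp = 100 + 8 * day + rng.normal(0, 5, day.size)   # TTP rises under treatment
ttp[7] = 400.0                                      # one gross outlier

def nll(theta, dist):
    a, b, log_s = theta
    resid = ttp - (a + b * day)
    if dist == "normal":
        return -norm.logpdf(resid, scale=np.exp(log_s)).sum()
    return -student_t.logpdf(resid, df=4, scale=np.exp(log_s)).sum()

for dist in ("normal", "t"):
    fit = minimize(nll, x0=[100.0, 5.0, np.log(5.0)], args=(dist,))
    print(f"{dist:>6}: slope = {fit.x[1]:.2f}")     # t fit resists the outlier
```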

7.
Waterfall plots are used to describe changes in tumor size observed in clinical studies. They are frequently used to illustrate overall drug response in oncology clinical trials because of their simple representation of results. Unfortunately, this visual display suffers from a number of limitations, including (1) potential misguidance by masking the time dynamics of tumor size, (2) ambiguous labelling of the y‐axis, and (3) a low data‐to‐ink ratio. We offer some alternatives to address these shortcomings and recommend moving away from waterfall plots toward plots showing the individual time profiles of the sum of lesion diameters (according to RECIST). The spider plot presents individual changes in tumor measurements over time relative to baseline tumor burden. Baseline tumor size is a well‐known confounder of drug effect that has to be accounted for when analyzing data in early clinical trials. While spider plots conveniently correct for baseline tumor size, they cannot be presented in isolation: percentage change from baseline has suboptimal statistical properties (including a skewed distribution) and can be overly optimistic in favor of drug efficacy. We argue that plots of raw data (referred to as spaghetti plots) should always accompany spider plots to provide an equipoised illustration of the drug effect on lesion diameters.
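A minimal sketch of the recommended pairing, using simulated tumor-burden trajectories (all numbers invented): a spider plot of percent change from baseline alongside a spaghetti plot of the raw sums of lesion diameters.

```python
# Spider plot (percent change from baseline) next to a spaghetti plot (raw
# sums of lesion diameters); trajectories are simulated for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
weeks = np.array([0, 6, 12, 18, 24])
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
for _ in range(12):
    baseline = rng.uniform(30, 120)                 # mm, sum of diameters
    slope = rng.normal(-0.8, 1.0)
    y = baseline * np.exp(slope * weeks / 24)
    pct = 100 * (y - baseline) / baseline
    ax1.plot(weeks, pct, marker="o", alpha=0.7)
    ax2.plot(weeks, y, marker="o", alpha=0.7)
ax1.axhline(0, color="grey", lw=0.5)
ax1.set(title="Spider plot", xlabel="Week", ylabel="% change from baseline")
ax2.set(title="Spaghetti plot (raw data)", xlabel="Week",
        ylabel="Sum of lesion diameters (mm)")
plt.tight_layout()
plt.show()
```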

8.
In many medical studies, patients are followed longitudinally and interest lies in assessing the relationship between longitudinal measurements and time to an event. Recently, various authors have proposed joint modeling approaches for longitudinal and time-to-event data with a single longitudinal variable; these approaches become intractable with even a few longitudinal variables. In this paper, we propose a regression calibration approach for jointly modeling multiple longitudinal measurements and discrete time-to-event data. Ideally, a two-stage approach could be applied in which the multiple longitudinal measurements are modeled in the first stage and the longitudinal model is related to the time-to-event data in the second stage; however, biased parameter estimation due to informative dropout makes this direct two-stage approach problematic. We propose a regression calibration approach that appropriately accounts for informative dropout. We approximate the conditional distribution of the multiple longitudinal measurements given the event time by modeling all pairwise combinations of the longitudinal measurements using a bivariate linear mixed model that conditions on the event time. Complete data are then simulated based on estimates from these pairwise conditional models, and regression calibration is used to estimate the relationship between the longitudinal and time-to-event data using the complete data. We show that this approach performs well in estimating the relationship between multivariate longitudinal measurements and the time-to-event data and in estimating the parameters of the multiple longitudinal process subject to informative dropout. We illustrate the methodology with simulations and with an analysis of primary biliary cirrhosis (PBC) data.

9.
To demonstrate treatment effects on structural damage in rheumatoid arthritis (RA) and psoriatic arthritis (PsA), radiographic images of hands and feet are scored according to Sharp scoring systems in randomized clinical trials. Quantifying such an effect is challenging, however, because the overall mean progression lacks clinical interpretation. This article sheds light on the statistical challenges resulting from the scoring methods and the heterogeneity of the study population and proposes a mixture distribution model for radiographic progression data. With such a model, the drug effect is fully captured by the mean progression among those patients who would progress during the study period under the control treatment. The resulting regression model also provides a tool for examining prognostic factors for radiographic progression. Simulations were carried out to evaluate the precision of the parameter estimation procedure. Using data examples from RA and PsA, we show that the mixture distribution approach provides a better goodness of fit and leads to causal inference about the study drug, and hence a clinically meaningful interpretation.
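As an illustration of the mixture idea, the sketch below fits a two-component normal mixture (non-progressors near zero, progressors with a positive mean) to simulated progression scores by EM; the paper's model additionally accommodates the scoring scale and covariates, which are omitted here.

```python
# Two-component normal mixture fitted by EM to simulated progression scores.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0.0, 0.3, 300),    # non-progressors
                    rng.normal(4.0, 2.0, 100)])   # progressors

pi, mu0, s0, mu1, s1 = 0.5, 0.0, 1.0, 3.0, 1.0    # starting values
for _ in range(200):                              # EM iterations
    # E-step: posterior probability of being a progressor
    d1 = pi * norm.pdf(x, mu1, s1)
    d0 = (1 - pi) * norm.pdf(x, mu0, s0)
    w = d1 / (d0 + d1)
    # M-step: weighted parameter updates
    pi = w.mean()
    mu1 = np.average(x, weights=w)
    s1 = np.sqrt(np.average((x - mu1) ** 2, weights=w))
    mu0 = np.average(x, weights=1 - w)
    s0 = np.sqrt(np.average((x - mu0) ** 2, weights=1 - w))

print(f"progressor fraction = {pi:.2f}, mean progression = {mu1:.2f}")
```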

10.
We consider a regression analysis of longitudinal data in the presence of outcome‐dependent observation times and informative censoring. Existing approaches commonly require a correct specification of the joint distribution of the longitudinal measurements, the observation time process, and the informative censoring time under the joint modeling framework, and they can be computationally cumbersome owing to the complex form of the likelihood function. In view of these issues, we propose a semiparametric joint regression model and construct a composite likelihood function based on a conditional order statistics argument. As a major feature of our proposed methods, the aforementioned joint distribution need not be specified, and the random effect in the proposed joint model is treated as a nuisance parameter. Consequently, the derived composite likelihood bypasses the need to integrate over the random effect and offers the advantage of easy computation. We show that the resulting estimators are consistent and asymptotically normal. We use simulation studies to evaluate the finite‐sample performance of the proposed method and apply it to a study of weight loss data that motivated our investigation.

11.
A variety of primary endpoints are used in clinical trials treating patients with severe infectious diseases, and existing guidelines do not provide a consistent recommendation. We propose studying two primary endpoints, cure and death, simultaneously in a comprehensive multistate cure‐death model as the starting point for a treatment comparison. This technique enables us to study the temporal dynamics of the patient‐relevant probability of being cured and alive. We describe and compare traditional and innovative methods suitable for a treatment comparison based on this model. Traditional analyses using risk differences focus on a single prespecified timepoint. A restricted logrank‐based test of treatment effect is sensitive to ordered categories of response and integrates information on the duration of response. Pseudo‐value regression provides a direct regression model for examining the treatment effect via differences in transition probabilities. Applied to a topical real data example and simulation scenarios, we demonstrate the advantages and limitations of these methods and provide insight into how they handle different kinds of treatment imbalances. The cure‐death model provides a suitable framework for a better understanding of how a new treatment influences the time‐dynamic cure and death process, which may help the future planning of randomised clinical trials, sample size calculations, and data analyses.
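A minimal sketch of the cure-death dynamics, simulating a discrete-time three-state model (infected, cured, dead; cured patients may still die) and tracking the patient-relevant probability of being cured and alive over time. All transition probabilities are hypothetical, and the paper's logrank-based test and pseudo-value regression are not implemented.

```python
# Discrete-time three-state cure-death simulation with invented probabilities.
import numpy as np

rng = np.random.default_rng(6)
INFECTED, CURED, DEAD = 0, 1, 2

def simulate(p_cure, p_die_inf, p_die_cured, n=5000, horizon=28):
    """Daily transitions; returns P(cured and alive) for each day."""
    state = np.full(n, INFECTED)
    traj = []
    for _ in range(horizon):
        u = rng.random(n)
        new = state.copy()
        inf = state == INFECTED
        new[inf & (u < p_die_inf)] = DEAD
        new[inf & (u >= p_die_inf) & (u < p_die_inf + p_cure)] = CURED
        cured = state == CURED
        new[cured & (u < p_die_cured)] = DEAD
        state = new
        traj.append((state == CURED).mean())
    return np.array(traj)

control = simulate(p_cure=0.05, p_die_inf=0.02, p_die_cured=0.005)
treated = simulate(p_cure=0.09, p_die_inf=0.02, p_die_cured=0.005)
print("day-28 difference in P(cured and alive): "
      f"{treated[-1] - control[-1]:.3f}")
```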

12.
A longitudinal mixture model for classifying patients into responders and non‐responders is established using both likelihood‐based and Bayesian approaches. The model takes into consideration responders in the control group and is therefore especially useful in situations where the placebo response is strong, or in equivalence trials where the drug in development is compared with a standard treatment. Under our model, a treatment shows evidence of being effective if it increases the proportion of responders or increases the response rate among responders in the treated group compared with the control group; the model thus has the flexibility to accommodate different situations. The proposed method is illustrated using simulation and a depression clinical trial dataset for the likelihood‐based approach, and the same dataset for the Bayesian approach. The two approaches generated consistent results for the depression trial data: in both the placebo group and the treated group, patients are classified into two components with distinct response rates, and the proportion of responders is significantly higher in the treated group than in the control group, suggesting that the treatment, paroxetine, is effective.
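To illustrate the responder/non-responder mixture, the sketch below simulates several binary visit-level responses per patient and fits a two-component binomial mixture per arm by EM; the invented rates mimic a higher responder proportion in the treated arm. The paper's longitudinal likelihood and Bayesian machinery are not reproduced.

```python
# Two-component binomial mixture per arm, fitted by EM (simulated data).
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(7)
K = 6                                           # visits per patient

def simulate_arm(n, prop_resp, p_resp, p_non):
    is_resp = rng.random(n) < prop_resp
    p = np.where(is_resp, p_resp, p_non)
    return rng.binomial(K, p)                   # successes out of K visits

def fit_mixture(counts, iters=300):
    pi, p1, p0 = 0.5, 0.7, 0.2                  # starting values
    for _ in range(iters):
        d1 = pi * binom.pmf(counts, K, p1)
        d0 = (1 - pi) * binom.pmf(counts, K, p0)
        w = d1 / (d1 + d0)                      # E-step
        pi = w.mean()                           # M-step
        p1 = np.average(counts / K, weights=w)
        p0 = np.average(counts / K, weights=1 - w)
    return pi, p1, p0

placebo = simulate_arm(150, prop_resp=0.30, p_resp=0.8, p_non=0.15)
treated = simulate_arm(150, prop_resp=0.55, p_resp=0.8, p_non=0.15)
for name, arm in (("placebo", placebo), ("treated", treated)):
    pi, p1, _ = fit_mixture(arm)
    print(f"{name}: responder proportion = {pi:.2f}, responder rate = {p1:.2f}")
```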

13.
In this paper, we investigate Bayesian generalized nonlinear mixed‐effects (NLME) regression models for zero‐inflated longitudinal count data. The methodology is motivated by, and applied to, colony forming unit (CFU) counts in extended bactericidal activity tuberculosis (TB) trials. For model comparison, we also present a generalized method for calculating the marginal likelihoods required to determine Bayes factors. A simulation study shows that the proposed zero‐inflated negative binomial regression model has good accuracy, precision, and credibility interval coverage. In contrast, conventional normal NLME regression models applied to log‐transformed count data, which handle zero counts as left-censored values, may yield credibility intervals that undercover the true bactericidal activity of anti‐TB drugs. We therefore recommend that zero‐inflated NLME regression models be fitted to CFU counts on the original scale, as an alternative to conventional normal NLME regression models on the logarithmic scale.
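A minimal sketch of the zero-inflated negative binomial (ZINB) likelihood for count data, fitted by maximum likelihood to simulated counts (all parameter values invented); the paper embeds this distribution in a Bayesian nonlinear mixed-effects regression, which is beyond this sketch.

```python
# ZINB maximum likelihood on simulated counts.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

rng = np.random.default_rng(8)
n, pi_true, mu_true, size_true = 500, 0.3, 20.0, 2.0
zeros = rng.random(n) < pi_true
p_true = size_true / (size_true + mu_true)          # scipy's (n, p) form
y = np.where(zeros, 0, rng.negative_binomial(size_true, p_true, n))

def neg_loglik(theta):
    logit_pi, log_mu, log_size = theta              # unconstrained scale
    pi = 1 / (1 + np.exp(-logit_pi))
    mu, size = np.exp(log_mu), np.exp(log_size)
    p = size / (size + mu)
    ll_pos = np.log(1 - pi) + nbinom.logpmf(y, size, p)
    ll_zero = np.log(pi + (1 - pi) * nbinom.pmf(0, size, p))
    return -np.where(y == 0, ll_zero, ll_pos).sum()

fit = minimize(neg_loglik, x0=[0.0, np.log(10.0), np.log(1.0)])
pi_hat = 1 / (1 + np.exp(-fit.x[0]))
print(f"pi = {pi_hat:.2f}, mu = {np.exp(fit.x[1]):.1f}, "
      f"size = {np.exp(fit.x[2]):.2f}")
```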

14.
Traditionally, noninferiority hypotheses have been tested using a frequentist method with a fixed margin. Given that information on the control group is often available from previous studies, it is attractive to consider a Bayesian approach in which information is "borrowed" for the control group to improve efficiency. However, constructing an appropriate informative prior can be challenging. In this paper, we consider a hybrid Bayesian approach for testing noninferiority hypotheses in studies with a binary endpoint. To account for heterogeneity between the historical information and the current trial for the control group, a dynamic P value–based power prior parameter is proposed to adjust the amount of information borrowed from the historical data. This approach extends the simple test‐then‐pool method by allowing a continuous discounting power parameter. An adjusted α level is also proposed to better control the type I error. Simulations are conducted to investigate the performance of the proposed method and to compare it with other methods, including test‐then‐pool and hierarchical modeling. The methods are illustrated with data from vaccine clinical trials.
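The power-prior mechanics can be sketched for a conjugate beta-binomial setup: historical control successes enter the posterior with a discount weight delta that shrinks when the historical and current control rates disagree. Mapping delta directly to the two-proportion p-value below is an invented illustration, not the paper's calibrated dynamic rule, and all counts are hypothetical.

```python
# Beta-binomial power prior with a p-value-based discount (illustrative only).
import numpy as np
from scipy.stats import beta, norm

x_h, n_h = 180, 600          # historical control: responders / total
x_c, n_c = 45, 200           # current control arm

# Two-proportion z-test comparing historical and current control rates.
p_pool = (x_h + x_c) / (n_h + n_c)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_h + 1 / n_c))
z = (x_h / n_h - x_c / n_c) / se
p_val = 2 * norm.sf(abs(z))

delta = p_val                # invented rule: borrow more when p is large
post = beta(1 + delta * x_h + x_c,
            1 + delta * (n_h - x_h) + (n_c - x_c))
print(f"p-value = {p_val:.3f}, discount delta = {delta:.3f}")
print("95% credible interval for the control rate:",
      np.round(post.interval(0.95), 3))
```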

15.
Bayesian methods are increasingly used in proof‐of‐concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms for which a substantial amount of historical information exists, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the prior and the observed data, referred to as prior‐data conflict. We focus on two methods for dealing with this: a testing approach and a mixture-prior approach. The testing approach assesses prior‐data conflict by comparing the observed data to the prior predictive distribution, resorting to a non‐informative prior if prior‐data conflict is declared. The mixture-prior approach uses a prior with a precise component and a diffuse component. We assess these approaches for the normal case via simulation and show that they have some attractive features compared with the standard one‐component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked, and intuitively one feels less certain about the results, both approaches typically yield wider posterior credible intervals than when there is no discrepancy; when there is no discrepancy, their results are typically similar to those of the standard approach. While the operating characteristics of any selected approach should be assessed and agreed at the design stage of a specific study, we believe both approaches are worthy of consideration.
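For the normal case discussed above, the mixture-prior computation is fully conjugate and can be sketched in a few lines: the posterior is again a two-component mixture whose weights are updated by each component's prior predictive density at the observed mean. All numerical inputs are hypothetical.

```python
# Two-component (informative + diffuse) mixture prior for a normal mean with
# known sampling variance; all numbers are hypothetical.
import numpy as np
from scipy.stats import norm

m = (0.0, 0.0)               # component prior means (informative, diffuse)
s = (0.5, 5.0)               # component prior SDs
w = (0.8, 0.2)               # prior component weights
ybar, se = 2.0, 0.4          # observed mean and its standard error

# Prior predictive density of ybar under each component drives the weights.
marg = [norm.pdf(ybar, m[k], np.sqrt(s[k] ** 2 + se ** 2)) for k in (0, 1)]
post_w = np.array([w[k] * marg[k] for k in (0, 1)])
post_w /= post_w.sum()

# Conjugate normal posterior within each component.
post_var = [1 / (1 / s[k] ** 2 + 1 / se ** 2) for k in (0, 1)]
post_mean = [post_var[k] * (m[k] / s[k] ** 2 + ybar / se ** 2) for k in (0, 1)]
print("posterior weights:", np.round(post_w, 3))   # conflict favors diffuse
print("component posterior means:", np.round(post_mean, 3))
```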

16.
In some exceptional circumstances, as in very rare diseases, nonrandomized one‐arm trials are the sole source of evidence for demonstrating the efficacy and safety of a new treatment. The design of such studies needs a sound methodological approach in order to provide reliable information, and the determination of an appropriate sample size is a critical step in this planning process. As, to our knowledge, no method exists for sample size calculation in one‐arm trials with a recurrent event endpoint, we propose a closed-form sample size formula. It is derived assuming a mixed Poisson process and is based on the asymptotic distribution of the one‐sample robust nonparametric test recently developed for the analysis of recurrent events data. The validity of this formula under heterogeneity of event rates, both over time and between patients, and under a time‐varying treatment effect was demonstrated through extensive simulation studies. Moreover, although the method requires the specification of a process for event generation, it appears robust to misspecification of this process, provided that the number of events at the end of the study is similar to the one assumed in the planning phase. The motivating clinical context is a nonrandomized one‐arm study of gene therapy in a very rare immunodeficiency in children (ADA‐SCID), where a major endpoint is the recurrence of severe infections.
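A minimal sketch of the underlying mixed (gamma-frailty) Poisson model: simulate per-patient event counts with between-patient heterogeneity and estimate power by Monte Carlo for a simple one-sided Wald test of the mean event rate. The test and all rates are illustrative stand-ins; the paper's robust nonparametric test and closed-form sample size formula are not reproduced.

```python
# Monte Carlo power for a one-arm recurrent-event study under a
# gamma-frailty (mixed) Poisson model; all inputs are invented.
import numpy as np

rng = np.random.default_rng(11)

def simulated_power(n, rate_null=2.0, rate_alt=1.4, shape=2.0, n_sim=2000):
    """Power of a one-sided Wald test of the mean event count."""
    rejections = 0
    for _ in range(n_sim):
        frailty = rng.gamma(shape, 1.0 / shape, n)   # mean-1 heterogeneity
        counts = rng.poisson(frailty * rate_alt)     # one year of follow-up
        se = counts.std(ddof=1) / np.sqrt(n)
        z = (counts.mean() - rate_null) / se
        rejections += z < -1.645                     # one-sided 5% level
    return rejections / n_sim

for n in (30, 50, 80):
    print(f"n = {n}: simulated power = {simulated_power(n):.2f}")
```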

17.
Recently, there has been great interest in the analysis of longitudinal data in which the observation process is related to the longitudinal process. In the literature, the observation process has commonly been regarded as a recurrent event process. Sometimes each observation has a duration, in which case the process is referred to as a recurrent episode process; medical costs related to hospitalization are an example. We propose a conditional modeling approach that takes into account both an informative observation process and observation duration. We conducted simulation studies to assess the performance of the method and applied it to a dataset of medical costs.

18.
In oncology drug development, phase II proof‐of‐concept studies have played a key role in determining whether or not to advance to a confirmatory phase III trial. With the increasing number of immunotherapies, efficient design strategies are crucial for moving successful drugs quickly to market. Our research examines drug development decision making under a framework of optimizing resource investment, characterized by benefit-cost ratios (BCRs). In general, the benefit represents the likelihood that a drug is successful, and the cost is characterized by the risk-adjusted total sample size of the phase II and III studies. Phase III studies often include a futility interim analysis; this sequential component can also be incorporated into BCRs. Under this framework, multiple scenarios can be considered. For example, for a given drug and cancer indication, BCRs can yield insight into whether to use a randomized controlled trial or a single‐arm study. Importantly, any uncertainty in the historical control estimates used to benchmark single‐arm studies can be explicitly incorporated into BCRs. More complex scenarios, such as restricted resources or multiple potential cancer indications, can also be examined. Overall, BCR analyses indicate that single‐arm trials are favored for proof‐of‐concept when there is low uncertainty in the historical control data and the phase III sample size is smaller; otherwise, especially if the tumor indication most likely to succeed can be identified, randomized controlled trials may be a better option. While these findings are consistent with intuition, we provide a more objective approach.
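A minimal sketch of a benefit-cost ratio under stated assumptions: benefit as the joint probability of phase II and phase III success, cost as the risk-adjusted total sample size with the phase III cost discounted for a possible futility stop at the interim. The function and all inputs are hypothetical, not the authors' exact definition.

```python
# Illustrative benefit-cost ratio (BCR) with a futility interim; all
# probabilities and sample sizes are invented.
def bcr(p2_success, p3_success, n2, n3, p_pass_interim=1.0, frac_interim=0.5):
    benefit = p2_success * p3_success
    # Expected phase III cost: a trial that stops at the interim only
    # spends the pre-interim fraction of its sample size.
    exp_n3 = n3 * (p_pass_interim + (1 - p_pass_interim) * frac_interim)
    cost = n2 + p2_success * exp_n3      # phase III is run only after a win
    return benefit / cost

single_arm = bcr(p2_success=0.55, p3_success=0.50, n2=40, n3=500,
                 p_pass_interim=0.8)
randomized = bcr(p2_success=0.45, p3_success=0.60, n2=120, n3=500,
                 p_pass_interim=0.8)
print(f"single-arm BCR: {single_arm:.5f}, randomized BCR: {randomized:.5f}")
```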

19.
Despite advances in clinical trial design, failure rates near 80% in phase 2 and 50% in phase 3 have recently been reported. The challenges to successful drug development are particularly acute in central nervous system trials, such as those for pain, schizophrenia, mania, and depression, because high placebo response rates lessen assay sensitivity, diminish estimated treatment effect sizes, and thereby decrease statistical power. This paper addresses the importance of rigorous patient selection in major depressive disorder trials through an enhanced enrichment paradigm. This approach led to a redefinition of the patient-inclusion algorithm of an ongoing, blinded phase 3 trial (1) to eliminate further randomization of transient placebo responders and (2) to exclude previously randomized transient responders from the primary analysis of the double-blind phase of the trial. It is illustrated with a case study comparing brexpiprazole + antidepressant therapy with placebo + antidepressant therapy. Analysis of the primary endpoint showed that the efficacy of brexpiprazole versus placebo could not be established statistically with the original algorithm for identifying placebo responders, whereas the enhanced enrichment approach did statistically demonstrate efficacy and, additionally, identified a target population with a clinically meaningful treatment effect. Through its successful identification of a target population, the innovative enhanced enrichment approach enabled the demonstration of a positive treatment effect in a very challenging area of depression research.

20.
In longitudinal studies, missing data are the rule, not the exception. We consider the analysis of longitudinal binary data with non-monotone missingness that is thought to be non-ignorable. In this setting a full likelihood approach is algebraically complicated and can be computationally prohibitive when there are many measurement occasions. We propose a 'protective' estimator that assumes that the probability that a response is missing at any occasion depends, in a completely unspecified way, on the value of that variable alone. Relying on this 'protectiveness' assumption, we describe a pseudolikelihood estimator of the regression parameters under non-ignorable missingness, without having to model the missing data mechanism directly. The proposed method is applied to CD4 cell count data from two longitudinal clinical trials of patients infected with the human immunodeficiency virus.
