Similar Articles
 20 similar articles found
1.
The tumor burden (TB) process is postulated to be the primary mechanism through which most anticancer treatments provide benefit. In phase II oncology trials, the biologic effects of a therapeutic agent are often analyzed using conventional endpoints for best response, such as objective response rate and progression-free survival, both of which cause loss of information. On the other hand, graphical methods, including the spider plot and waterfall plot, lack any statistical inference when there is more than one treatment arm. Therefore, longitudinal analysis of TB data is well recognized as a better approach for treatment evaluation. However, the longitudinal TB process suffers from informative missingness because of progression or death. We propose to analyze the treatment effect on tumor growth kinetics using a joint modeling framework accounting for the informative missing-data mechanism. Our approach is illustrated by multi-setting simulation studies and an application to a non-small-cell lung cancer data set. The proposed analyses can be performed in early-phase clinical trials to better characterize treatment effect and thereby inform decision-making. Copyright © 2014 John Wiley & Sons, Ltd.

2.
Survival models involving frailties are commonly applied in studies where correlated event-time data arise due to natural or artificial clustering. In this paper we present an application of such models in the animal breeding field. Specifically, a mixed survival model with a multivariate correlated frailty term is proposed for the analysis of data from over 3611 Brazilian Nellore cattle. The primary aim is to evaluate parental genetic effects on the trait, defined as the number of days their progeny need to achieve a commercially specified standard weight gain. This trait is not measured directly but can be estimated from growth data. Results point to the importance of genetic effects and suggest that these models constitute a valuable data-analysis tool for beef cattle breeding.

3.
In current industry practice, it is difficult to assess QT effects at potential therapeutic doses based on Phase I dose-escalation trials in oncology due to data scarcity, particularly in combination trials. In this paper, we propose to use dose-concentration and concentration-QT models jointly to model the exposures and effects of multiple drugs in combination. The fitted models can then be used to make early predictions of QT prolongation to aid in choosing recommended dose combinations for further investigation. The models consider potential correlation between concentrations of the test drugs and potential drug–drug interactions at the PK and QT levels. In addition, this approach allows for the assessment of the probability of QT prolongation exceeding given thresholds of clinical significance. The performance of this approach was examined via simulation under practical scenarios for dose-escalation trials of a two-drug combination. The simulation results show that valuable information on QT effects at therapeutic dose combinations can be gained by the proposed approach. Early detection of dose combinations with substantial QT prolongation is evaluated effectively through the confidence intervals of the predicted peak QT prolongation at each dose combination. Furthermore, the probability of QT prolongation exceeding a certain threshold is also computed to support early detection of safety signals while accounting for the uncertainty associated with data from Phase I studies. Because the prediction of QT effects is sensitive to the dose-escalation process, this sensitivity and the limited sample size should be considered when supporting decisions about further development of particular dose combinations. Copyright © 2016 John Wiley & Sons, Ltd.
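A minimal Python sketch of the prediction step this abstract describes: assumed log-linear dose-concentration models with correlated residuals for the two drugs, an assumed linear concentration-QT model with an interaction term, and Monte Carlo evaluation of the probability that QTc prolongation exceeds 10 ms. All parameter values, function names, and the threshold are illustrative, not the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed dose-concentration models (log Cmax linear in log dose),
# with correlated between-subject residuals for the two drugs.
def simulate_cmax(dose_a, dose_b, n=5000):
    mu = np.array([0.2 + 0.9 * np.log(dose_a), -0.1 + 0.8 * np.log(dose_b)])
    cov = np.array([[0.09, 0.03], [0.03, 0.09]])   # residual correlation
    return np.exp(rng.multivariate_normal(mu, cov, size=n))

# Assumed linear concentration-QT model with a PD interaction term.
def delta_qtc(c, slope_a=1.5, slope_b=2.0, interaction=0.1, sd=4.0):
    drift = slope_a * c[:, 0] + slope_b * c[:, 1] + interaction * c[:, 0] * c[:, 1]
    return drift + rng.normal(0.0, sd, size=len(c))

for dose_a, dose_b in [(1, 1), (2, 2), (4, 4)]:
    dq = delta_qtc(simulate_cmax(dose_a, dose_b))
    print(f"A={dose_a}, B={dose_b}: mean dQTc={dq.mean():5.1f} ms, "
          f"P(dQTc > 10 ms)={np.mean(dq > 10):.3f}")
```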

4.
Non-mixture cure models (NMCMs) are derived from a simplified representation of the biological process that takes place after treatment for cancer. These models are intended to represent the time from the end of treatment to the time of first recurrence of cancer in studies when a proportion of those treated are completely cured. However, for many studies overall survival is also of interest. A two-stage NMCM that estimates the overall survival from a combination of two cure models, one from end of treatment to first recurrence and one from first recurrence to death, is proposed. The model is applied to two studies of Ewing's tumor in young patients. Caution needs to be exercised when extrapolating from cure models fitted to short follow-up times, but these data and associated simulations show how, when follow-up is limited, a two-stage model can give more stable estimates of the cure fraction than a one-stage model applied directly to overall survival.
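A small simulation of the two-stage composition, assuming exponential latency distributions inside each non-mixture cure model S(t) = exp(-theta * F(t)): the stage-specific cure fractions pi1 = exp(-theta1) and pi2 = exp(-theta2) combine into an overall-survival cure fraction pi1 + (1 - pi1) * pi2. The distributions and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def rnmcm(n, theta, lam):
    """Sample from a non-mixture cure model S(t) = exp(-theta * F(t)),
    with F exponential(lam); returns +inf for cured subjects."""
    u = rng.uniform(size=n)
    t = np.full(n, np.inf)                      # cured: no event, ever
    ev = u < 1.0 - np.exp(-theta)               # event occurs iff u below 1 - pi
    f = -np.log(1.0 - u[ev]) / theta            # value of F at the event time
    t[ev] = -np.log(1.0 - f) / lam              # invert the exponential CDF
    return t

theta1, theta2 = 1.2, 0.8                       # stage-specific cure parameters
t_rec = rnmcm(200_000, theta1, lam=0.5)         # end of treatment -> recurrence
t_dth = rnmcm(200_000, theta2, lam=0.3)         # recurrence -> death
t_os = t_rec + t_dth                            # overall survival composition

pi1, pi2 = np.exp(-theta1), np.exp(-theta2)
print("theoretical OS cure fraction:", pi1 + (1 - pi1) * pi2)
print("simulated   OS cure fraction:", np.mean(np.isinf(t_os)))
```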

5.
We investigate longitudinal models having Brownian-motion covariance structure. We show that any such model can be viewed as arising from a related "timeless" classical linear model where sample sizes correspond to longitudinal observation times. This relationship is of practical impact when there are closed-form ANOVA tables for the related classical model. Such tables can be directly transformed into the analogous tables for the original longitudinal model. In particular, we provide complete results for one-way fixed and random effects ANOVA on the drift parameter in Brownian motion, and illustrate their use in estimating heterogeneity in tumor growth rates.
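The correspondence can be sketched directly: under Brownian-motion covariance, the endpoint estimate Y(T)/T of a subject's drift behaves like a sample mean with "sample size" T, so when all subjects share the same final time a classical one-way ANOVA applies to the per-subject drift estimates. A hedged Python illustration with assumed growth rates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulate tumor growth curves Y_i(t) = beta_g * t + sigma * B_i(t)
# for two treatment groups with different mean drifts (illustrative values).
def simulate_subject(beta, sigma, times):
    dt = np.diff(times, prepend=0.0)
    return np.cumsum(rng.normal(beta * dt, sigma * np.sqrt(dt)))

times = np.linspace(0.5, 6.0, 12)
groups = {"control": 0.8, "treated": 0.5}      # assumed drifts (growth rates)
sigma, n_per_group = 0.4, 15

drift_hat = {}
for name, beta in groups.items():
    y = np.array([simulate_subject(beta, sigma, times) for _ in range(n_per_group)])
    # With Brownian-motion covariance, Y(T)/T is the efficient drift estimate,
    # playing the role of a sample mean with "sample size" T.
    drift_hat[name] = y[:, -1] / times[-1]

# Classical one-way ANOVA applied to the per-subject drift estimates.
f_stat, p_val = stats.f_oneway(*drift_hat.values())
print({k: v.mean().round(3) for k, v in drift_hat.items()})
print(f"one-way ANOVA on drift: F={f_stat:.2f}, p={p_val:.4f}")
```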

6.
Abstract.  A simple and standard approach for analysing multistate model data is to model all transition intensities and then compute a summary measure, such as the transition probabilities, based on this. This approach is relatively simple to implement, but it is difficult to see what the covariate effects are on the scale of interest. In this paper, we consider an alternative approach that directly models the covariate effects on transition probabilities in multistate models. Our new approach is based on binomial modelling and inverse probability of censoring weighting techniques and is very simple to implement with standard software. We show how to fit flexible regression models with possibly time-varying covariate effects.
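A minimal two-state (alive/dead) sketch of the binomial-plus-IPCW idea, assuming independent censoring; the paper's multistate, time-varying-effect machinery is more general. The sketch uses lifelines for the censoring Kaplan-Meier and statsmodels for the weighted binomial fit:

```python
import numpy as np
import statsmodels.api as sm
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(5)
n, t0 = 2000, 2.0                                  # estimate P(event by t0 | x)

x = rng.binomial(1, 0.5, n)                        # binary covariate
T = rng.exponential(1.0 / (0.3 + 0.4 * x))         # true event times
C = rng.exponential(4.0, n)                        # independent censoring
time, event = np.minimum(T, C), (T <= C).astype(int)

# Kaplan-Meier estimate of the censoring survival curve G(t) = P(C > t).
km_cens = KaplanMeierFitter().fit(time, 1 - event)

def G(t):
    return np.clip(km_cens.survival_function_at_times(t).to_numpy(), 1e-6, None)

# Status at t0 is known for subjects with an event by t0 or follow-up beyond
# t0; weight them by the inverse probability of remaining uncensored.
known = ((event == 1) & (time <= t0)) | (time > t0)
y = ((time <= t0) & (event == 1)).astype(float)[known]
w = 1.0 / G(np.minimum(time[known], t0))

X = sm.add_constant(x[known].astype(float))
fit = sm.GLM(y, X, family=sm.families.Binomial(), var_weights=w).fit()
print("IPCW estimates of P(event by t0 | x=0,1):",
      fit.predict(sm.add_constant(np.array([0.0, 1.0]))).round(3))
print("true values:",
      (1 - np.exp(-(0.3 + 0.4 * np.array([0, 1])) * t0)).round(3))
```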

7.
A virologic marker, the number of HIV RNA copies (viral load), is currently used to evaluate antiretroviral (ARV) therapies in AIDS clinical trials. This marker can be used to assess the antiviral potency of therapies, but it may be easily affected by clinical factors such as drug exposure and drug resistance, as well as by baseline characteristics, during the long-term treatment evaluation process. HIV dynamic studies have significantly contributed to the understanding of HIV pathogenesis and ARV treatment strategies. Viral dynamic models can be formulated through differential equations, but there has been only limited development of statistical methodologies for estimating such models or assessing their agreement with observed data. This paper develops mechanism-based nonlinear differential-equation models for characterizing long-term viral dynamics with ARV therapy. In this model we incorporate not only clinical factors (drug exposure and susceptibility) but also baseline covariates (baseline viral load, CD4 count, weight, or age) into a function of treatment efficacy. A Bayesian nonlinear mixed-effects modeling approach is investigated, with application to an AIDS clinical trial. The effects of confounding interactions of clinical factors with covariate-based models are compared using the deviance information criterion (DIC), a Bayesian version of the classical deviance for model assessment designed for complex hierarchical model settings. Relationships between baseline covariates, combined with confounding clinical factors, and drug efficacy are explored. In addition, we compare models incorporating each of four baseline covariates via DIC, and some interesting findings are presented. Our results suggest that modeling HIV dynamics and virologic responses with consideration of time-varying clinical factors, as well as baseline characteristics, may play an important role in understanding HIV pathogenesis and in designing new treatment strategies for the long-term care of AIDS patients.
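A hedged sketch of the kind of mechanism the paper builds on: the standard target-cell-limited HIV model with a time-varying treatment-efficacy function. The ODE system, parameter values, and the logistic efficacy decline (standing in for waning drug exposure or rising resistance) are illustrative, not the paper's fitted model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard target-cell-limited HIV model; lam, d, k, delta, N, c are
# illustrative values. epsilon(t) is a time-varying treatment efficacy,
# in the spirit of the paper's efficacy function.
lam, d, k, delta, N, c = 1e4, 0.01, 2.4e-8, 0.7, 1000.0, 13.0

def efficacy(t):
    # Assumed logistic decline in inhibition as resistance accumulates.
    return 0.9 / (1.0 + np.exp(0.02 * (t - 300.0)))

def rhs(t, y):
    T, I, V = y                                # target cells, infected cells, virions
    eps = efficacy(t)
    dT = lam - d * T - (1 - eps) * k * T * V
    dI = (1 - eps) * k * T * V - delta * I
    dV = N * delta * I - c * V
    return [dT, dI, dV]

sol = solve_ivp(rhs, (0.0, 500.0), [1e6, 1e4, 1e5], method="LSODA",
                t_eval=np.linspace(0, 500, 11), rtol=1e-8, atol=1e-6)
for t, v in zip(sol.t, sol.y[2]):
    print(f"day {t:5.0f}: log10 viral load = {np.log10(max(v, 1.0)):.2f}")
```

With these assumed values the viral load is suppressed early and rebounds as the efficacy declines, the long-term pattern the paper's covariate-dependent efficacy function is built to capture.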

8.
Many phase I drug-combination designs have been proposed to find the maximum tolerated combination (MTC). Due to the two-dimensional nature of drug-combination trials, these designs typically require complicated statistical modeling and estimation, which limits their use in practice. In this article, we propose an easy-to-implement Bayesian phase I combination design, called the Bayesian adaptive linearization method (BALM), to simplify dose finding for drug-combination trials. BALM takes a dimension-reduction approach. It selects a subset of combinations, through a procedure called linearization, to convert the two-dimensional dose matrix into a string of combinations that are fully ordered in toxicity. As a result, existing single-agent dose-finding methods can be used directly to find the MTC. In case the selected linear path does not contain the MTC, a dose-insertion procedure is performed to add new doses whose expected toxicity rate equals the target toxicity rate. Our simulation studies show that the proposed BALM design performs better than competing, more complicated combination designs.
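The linearization idea can be sketched as a greedy walk through an assumed toxicity matrix; BALM's actual procedure is model-based and adaptive, so treat this purely as an illustration of the dimension reduction:

```python
import numpy as np

# Assumed true toxicity probabilities for a 3 x 4 dose matrix of agents A, B;
# toxicity is nondecreasing along each row and column (the partial order).
tox = np.array([[0.05, 0.10, 0.16, 0.24],
                [0.10, 0.18, 0.26, 0.36],
                [0.16, 0.28, 0.40, 0.52]])

def linearize(tox):
    """Greedy linearization: from each combination, step to the less toxic
    of its two admissible neighbours (escalate A or escalate B), producing
    a path that is fully ordered in toxicity."""
    j, k = 0, 0
    path = [(j, k)]
    while (j, k) != (tox.shape[0] - 1, tox.shape[1] - 1):
        options = []
        if j + 1 < tox.shape[0]:
            options.append((tox[j + 1, k], (j + 1, k)))
        if k + 1 < tox.shape[1]:
            options.append((tox[j, k + 1], (j, k + 1)))
        _, (j, k) = min(options)
        path.append((j, k))
    return path

path = linearize(tox)
print("selected path:", path)
print("toxicity along path:", [round(tox[j, k], 2) for j, k in path])
# The path is fully ordered in toxicity, so any single-agent dose-finding
# design (e.g. a CRM) can be run along it to locate the MTC.
```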

9.
Many new anticancer agents can be combined with existing drugs, as combining several drugs may be expected to have a better therapeutic effect than monotherapy owing to synergistic effects. Furthermore, to drive drug development and to reduce the associated cost, there has been a growing tendency to run these studies as combined phase I/II trials. In existing methodologies for phase I/II oncology trials assessing dose combinations, where efficacy (based on tumor response) and safety (based on toxicity) are modeled as binary outcomes, the next cohort of patients cannot be enrolled and treated until the best overall response has been determined in the current cohort; the trial duration might thus be extended to an unacceptable degree. In this study, we propose a method that randomizes the next cohort of patients in the phase II part to a dose combination based on response rates estimated from all available observed data once the overall response in the current cohort has been determined. We compared the proposed method with the existing method using simulation studies. These demonstrated that the proposed method selects the optimal dose combination at least as often as the existing method and shortens the trial duration. The proposed method meets both ethical and financial requirements, and we believe it has the potential to help expedite drug development.
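A generic sketch of the randomization step, assuming Beta-Binomial posteriors for the response rate at each admissible combination and allocation proportional to the posterior probability of being best; the paper's exact model and admissibility rules may differ:

```python
import numpy as np

rng = np.random.default_rng(11)

# Running tallies (responders, treated) at three admissible dose combinations.
responders = np.array([2, 5, 4])
treated = np.array([10, 12, 9])

# Beta(1,1) priors; draw from each posterior and randomize the next cohort
# proportionally to the posterior probability that each combination has the
# highest response rate.
draws = rng.beta(1 + responders[:, None],
                 1 + treated[:, None] - responders[:, None],
                 size=(3, 10_000))
p_best = np.bincount(np.argmax(draws, axis=0), minlength=3) / 10_000
print("posterior P(best):", p_best.round(3))

cohort = rng.choice(3, size=6, p=p_best)       # next cohort of 6 patients
print("next cohort assignments:", cohort)
```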

10.
Summary.  Longitudinal modelling of lung function in Duchenne's muscular dystrophy is complicated by a mixture of both growth and decline in lung function within each subject, an unknown point of separation between these phases and significant heterogeneity between individual trajectories. Linear mixed effects models can be used, assuming a single changepoint for all cases; however, this assumption may be incorrect. The paper describes an extension of linear mixed effects modelling in which random changepoints are integrated into the model as parameters and estimated by using a stochastic EM algorithm. We find that use of this 'mixture modelling' approach improves the fit significantly.
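The changepoint estimation can be illustrated for a single subject by profile least squares over a grid of candidate changepoints; this is a simpler device than the paper's stochastic EM for random changepoints, but it shows the broken-stick structure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate one subject's lung function: rise before an individual changepoint,
# decline after it, plus noise (an illustrative broken-stick trajectory).
age = np.linspace(6.0, 20.0, 15)
true_cp = 12.5
y = 40 + 3.0 * np.minimum(age - true_cp, 0) - 2.0 * np.maximum(age - true_cp, 0)
y += rng.normal(0, 1.5, age.size)

def profile_changepoint(age, y, grid):
    """Profile least squares: for each candidate changepoint c, fit the
    broken-stick model y = b0 + b1*min(age-c,0) + b2*max(age-c,0)."""
    best = (np.inf, None)
    for c in grid:
        X = np.column_stack([np.ones_like(age),
                             np.minimum(age - c, 0),
                             np.maximum(age - c, 0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        best = min(best, (rss, c))
    return best[1]

cp_hat = profile_changepoint(age, y, np.linspace(8, 18, 101))
print(f"true changepoint {true_cp}, estimated {cp_hat:.2f}")
```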

11.
Observational drug safety studies may be susceptible to confounding or protopathic bias. This bias may cause a spurious relationship between drug exposure and an adverse side effect when none exists and may lead to unwarranted safety alerts. The spurious relationship may manifest itself through substantially different risk levels between exposure groups at the start of follow-up, when exposure is deemed too short to have any plausible biological effect of the drug. The restrictive proportional hazards assumption, with its arbitrary choice of baseline hazard function, renders the commonly used Cox proportional hazards model of limited use for revealing such potential bias. We demonstrate a fully parametric approach using accelerated failure time models with an illustrative safety study of glucose-lowering therapies and show that its results are comparable with those of other methods that allow time-varying exposure effects. Our approach includes a wide variety of models based on the flexible generalized gamma distribution and allows direct comparisons of estimated hazard functions following different exposure-specific distributions of survival times. This approach lends itself to two alternative metrics, namely relative times and differences in times to event, allowing physicians more ways to communicate a patient's prognosis without invoking the concept of risks, which some may find hard to grasp. In our illustrative case study, substantial differences in cancer risks at drug initiation, followed by a gradual reduction towards the null, were found. This evidence is compatible with the presence of protopathic bias, in which undiagnosed symptoms of cancer lead to switches in diabetes medication. Copyright © 2015 John Wiley & Sons, Ltd.
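A hedged sketch of the comparison the paper draws: fit generalized gamma distributions to the event times of two exposure groups (here without censoring, to keep the sketch short) and inspect the hazard ratio over time, where an early spike decaying towards 1 is the protopathic-bias signature. The data-generating mechanisms are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Illustrative data: the "switched" exposure group has many early events
# (undiagnosed disease drives both the medication switch and the event),
# the comparator group does not.
t_ref = stats.gengamma.rvs(a=2.0, c=1.0, scale=3.0, size=800, random_state=rng)
t_exp = np.concatenate([
    stats.weibull_min.rvs(0.6, scale=1.0, size=300, random_state=rng),  # early excess
    stats.gengamma.rvs(a=2.0, c=1.0, scale=3.0, size=500, random_state=rng),
])

def gengamma_hazard(t_obs, grid):
    # Fit a generalized gamma and return its hazard h(t) = f(t) / S(t).
    a, c, loc, scale = stats.gengamma.fit(t_obs, floc=0.0)
    return (stats.gengamma.pdf(grid, a, c, loc=loc, scale=scale)
            / stats.gengamma.sf(grid, a, c, loc=loc, scale=scale))

grid = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
hr = gengamma_hazard(t_exp, grid) / gengamma_hazard(t_ref, grid)
print("hazard ratio over time:", hr.round(2))
# A large early ratio decaying towards 1 is exactly the time-varying pattern
# that a proportional hazards fit would average into a single number.
```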

12.
Drug combinations in preclinical tumor xenograft studies are often assessed using fixed doses. Assessing the joint action of drug combinations with fixed doses has not been well developed in the literature. Here, an interaction index is proposed for fixed-dose drug combinations in a subcutaneous tumor xenograft model. Furthermore, a bootstrap percentile interval of the interaction index is also developed. The joint action of two drugs can be assessed on the basis of confidence limits of the interaction index. Tumor xenograft data from actual two-drug combination studies are analyzed to illustrate the proposed method. Copyright © 2013 John Wiley & Sons, Ltd.
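The bootstrap percentile mechanics can be sketched with a simpler Bliss-style index on end-of-study tumor volumes; the paper's interaction index for fixed-dose xenograft combinations is defined differently, so the index below is only an assumed stand-in, as are the data:

```python
import numpy as np

rng = np.random.default_rng(4)

# End-of-study tumor volumes (mm^3) by arm; illustrative made-up numbers.
vol = {
    "control": np.array([980, 1100, 870, 1240, 1010, 950]),
    "A":       np.array([610, 540, 700, 580, 660, 630]),
    "B":       np.array([700, 760, 650, 810, 690, 720]),
    "AB":      np.array([290, 340, 260, 380, 310, 270]),
}

def index(v):
    # Fractional volumes vs control; Bliss-style ratio: < 1 suggests synergy.
    fa, fb, fab = (v[k].mean() / v["control"].mean() for k in ("A", "B", "AB"))
    return fab / (fa * fb)

# Bootstrap: resample animals within each arm and recompute the index.
boot = np.array([
    index({k: rng.choice(x, size=x.size, replace=True) for k, x in vol.items()})
    for _ in range(4000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"interaction index {index(vol):.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
# A CI entirely below 1 supports synergy; entirely above 1, antagonism.
```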

13.
Dynamic Prediction by Landmarking in Event History Analysis
Abstract.  This article advocates the landmarking approach, which dynamically adjusts predictive models for survival data during follow-up. The updating is achieved by directly fitting models to the individuals still at risk at the landmark point. Using this approach, simple proportional hazards models are able to capture the development over time in models with time-varying effects of covariates or data with time-dependent covariates (biomarkers). To smooth the effect of the landmarking, sequences of models are considered with parametric effects of the landmark time point, fitted by maximizing appropriate pseudo log-likelihoods that extend the partial log-likelihood to the landmarking approach.
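A minimal sketch of the landmark refitting loop using lifelines, on simulated data where the covariate effect switches off at t = 2; the sliding window of simple Cox fits recovers the declining effect that a single proportional hazards fit would average away. All parameter values are illustrative:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(8)
n, lam = 4000, 0.2

# Simulate survival with a time-varying covariate effect: x doubles the
# hazard before t = 2 and has no effect afterwards (piecewise inversion).
x = rng.binomial(1, 0.5, n).astype(float)
E = rng.exponential(1.0, n)
early = E <= 2 * lam * np.exp(np.log(2) * x)
T = np.where(early, E / (lam * np.exp(np.log(2) * x)),
             2 + (E - 2 * lam * np.exp(np.log(2) * x)) / lam)
df = pd.DataFrame({"time": T, "event": 1, "x": x})

# Landmarking: at each landmark s, refit a simple Cox model on the subjects
# still at risk, administratively censored at the horizon s + w.
w = 2.0
for s in [0.0, 1.0, 2.0, 3.0]:
    at_risk = df[df["time"] > s].copy()
    at_risk["t_lm"] = np.minimum(at_risk["time"], s + w) - s
    at_risk["e_lm"] = ((at_risk["time"] <= s + w) & (at_risk["event"] == 1)).astype(int)
    cph = CoxPHFitter().fit(at_risk[["t_lm", "e_lm", "x"]],
                            duration_col="t_lm", event_col="e_lm")
    print(f"landmark s={s}: log HR for x = {cph.params_['x']:+.2f}")
```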

14.
A Locally Weighted Least Squares Solution for the Variable-Weight Combination Forecasting Model
With continual advances in science and technology, forecasting methods have developed considerably, and dozens of methods are now in common use. Combination forecasting combines different forecasting methods so as to exploit the information each provides, and it often outperforms any single method, which has led to its wide application. Building on the idea of varying-coefficient models, we study the combination forecasting model by converting the estimation of the varying weights into the estimation of the coefficient functions of a varying-coefficient model. The weights can then be obtained by locally weighted least squares, with the smoothing parameter selected by cross-validation. The results show that the proposed method achieves high forecasting accuracy and outperforms the alternative methods.
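A sketch of the core computation, assuming a Gaussian kernel and leaving the weights unconstrained: at each time point the combination weights are the coefficients of a locally weighted least-squares regression of the series on the component forecasts. In practice the bandwidth would be chosen by cross-validation, as in the paper; the data below are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two component forecasts of a series whose best combination drifts over
# time: forecast 1 is accurate early, forecast 2 late (illustrative data).
n = 120
t = np.arange(n)
truth = np.sin(t / 8.0) + 0.02 * t
f1 = truth + rng.normal(0, 0.1 + 0.004 * t)      # error grows over time
f2 = truth + rng.normal(0, 0.6 - 0.004 * t)      # error shrinks over time

def varying_weights(y, F, t, t0, h):
    """Locally weighted least squares: the combination weights at time t0
    are the coefficients of y on the component forecasts F, with Gaussian
    kernel weights centred at t0 (bandwidth h)."""
    k = np.exp(-0.5 * ((t - t0) / h) ** 2)
    beta, *_ = np.linalg.lstsq(np.sqrt(k)[:, None] * F, np.sqrt(k) * y,
                               rcond=None)
    return beta

F = np.column_stack([f1, f2])
for t0 in [10, 60, 110]:
    w = varying_weights(truth, F, t, t0, h=15.0)
    print(f"t0={t0:3d}: weights = {w.round(2)}")   # shifts from f1 towards f2
```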

15.
Theoretical models of contagion and spillovers allow for asset-specific shocks that can be directly transmitted from one asset to another, as well as indirectly transmitted across uncorrelated assets through some intermediary mechanism. Standard multivariate Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models, however, provide estimates of volatilities and correlations based only on the direct transmission of shocks across assets. As such, spillover effects via an intermediary asset or market are not considered. In this article, a multivariate GARCH model is constructed that provides estimates of volatilities and correlations based on both directly and indirectly transmitted shocks. The model is applied to exchange rate and equity returns data. The results suggest that if a spillover component is observed in the data, the spillover augmented models provide significantly different volatility estimates compared to standard multivariate GARCH models.

16.
Adaptive clinical trials typically involve several independent stages. The p-values from each stage are synthesized through a so-called combination function, which ensures that the overall test will be valid if the stagewise tests are valid. In practice, however, approximate and possibly invalid stagewise tests are used. This paper studies how imperfections of the stagewise tests feed through into the combination test. Several general results are proven, including some for discrete models. An approximation formula that directly links the size accuracy of the combined test to the size accuracy of its components is given. In the wider context of adaptive clinical trials, the main conclusion is that the basic tests used should be size-accurate at nominal sizes both much larger and much smaller than the desired size. For binary outcomes, the implication is that the parametric bootstrap should be used.
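The feed-through effect is easy to see with the weighted inverse-normal combination function: if one stagewise test is anti-conservative, the combined test inherits the size distortion. A small Python check under an assumed distortion of the stage-1 p-values:

```python
import numpy as np
from scipy import stats

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combine independent stagewise p-values with the weighted
    inverse-normal combination function (w1 + w2 = 1); valid whenever
    each stagewise p-value is valid under the null."""
    z = np.sqrt(w1) * stats.norm.isf(p1) + np.sqrt(w2) * stats.norm.isf(p2)
    return stats.norm.sf(z)

# If a stagewise test is anti-conservative (its p-values run small under
# the null), the distortion feeds directly through to the combined test:
rng = np.random.default_rng(0)
p1 = rng.uniform(size=100_000) ** 1.1          # anti-conservative stage-1 p-values
p2 = rng.uniform(size=100_000)                 # exact stage-2 p-values
p_comb = inverse_normal_combination(p1, p2)
print("combined size at nominal 0.05:", np.mean(p_comb < 0.05).round(4))
```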

17.
Under Loewe additivity, constant relative potency between two drugs is a sufficient condition for the two drugs to be additive. Implicit in this condition is that one drug acts like a dilution of the other. Geometrically, it means that the dose-response curve of one drug is a copy of the other, shifted horizontally by a constant along the log-dose axis. This phenomenon is often referred to as parallelism. Thus, testing drug additivity is equivalent to demonstrating parallelism between two dose-response curves. Current methods for testing parallelism are usually based on significance tests for differences between parameters in the dose-response curves of the monotherapies, with a p-value of less than 0.05 indicating non-parallelism. The p-value-based methods, however, may be fundamentally flawed, because an increase in either sample size or the precision of the assay used to measure drug effect may result in more frequent rejection of parallel lines for a trivial difference. Moreover, similarity (difference) between model parameters does not necessarily translate into similarity (difference) between the two response curves. As a result, a test may conclude that the model parameters are similar (different), yet provide little assurance about the similarity of the two dose-response curves. In this paper, we introduce a Bayesian approach to directly test the hypothesis that the two drugs have a constant relative potency. An important utility of our proposed method is in aiding go/no-go decisions concerning two-drug combination studies. It is illustrated with both a simulated example and a real-life example. Copyright © 2015 John Wiley & Sons, Ltd.
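The constant-relative-potency model itself is easy to write down: one Hill curve with shared parameters and a single horizontal-shift parameter log(rho) for the second drug. The sketch below fits it by ordinary least squares on simulated parallel data; the paper's approach is Bayesian, so this only illustrates the parameterization being tested:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(12)

def hill(logd, bottom, top, loghalf, slope):
    # Four-parameter logistic (Hill) curve on the log-dose scale.
    return bottom + (top - bottom) / (1 + np.exp(-slope * (logd - loghalf)))

# Simulated monotherapy data: drug B behaves as a 4-fold dilution of drug A,
# so the curves are parallel with log relative potency log(4).
logd = np.log(np.tile([0.1, 0.3, 1, 3, 10, 30], 3))
ya = hill(logd, 2, 98, np.log(1.0), 1.2) + rng.normal(0, 3, logd.size)
yb = hill(logd, 2, 98, np.log(4.0), 1.2) + rng.normal(0, 3, logd.size)

def resid(theta):
    # Shared bottom, top, slope; drug B's EC50 is shifted by logrho.
    bottom, top, loghalf, slope, logrho = theta
    ra = ya - hill(logd, bottom, top, loghalf, slope)
    rb = yb - hill(logd, bottom, top, loghalf + logrho, slope)
    return np.concatenate([ra, rb])

fit = least_squares(resid, x0=[0.0, 100.0, 0.0, 1.0, 0.0])
print(f"estimated log relative potency: {fit.x[-1]:.2f} (truth {np.log(4):.2f})")
```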

18.
Our paper proposes a methodological strategy for selecting optimal sampling designs for phenotyping studies that include a cocktail of drugs. The cocktail approach is of high interest for determining the simultaneous activity of enzymes responsible for drug metabolism and pharmacokinetics, and it is therefore useful in anticipating drug–drug interactions and in personalized medicine. Phenotyping indexes, which are areas under the concentration-time curve, can be derived from a few samples using nonlinear mixed-effects models and maximum a posteriori estimation. Because of clinical constraints in phenotyping studies, the number of samples that can be collected from each individual is limited, and the sampling times must be as flexible as possible. Therefore, to optimize a joint design for several drugs (i.e., to determine a compromise between the informative times that best characterize each drug's kinetics), we propose a compound optimality criterion based on the expected population Fisher information matrix in nonlinear mixed-effects models. This criterion allows different models to be weighted, which can be useful to reflect the importance accorded to each target in a phenotyping test. We also computed windows around the optimal times, based on recursive random sampling and Monte Carlo simulation, while maintaining a reasonable level of efficiency for parameter estimation. We illustrate this strategy for two drugs often included in phenotyping cocktails, midazolam (a probe for CYP3A) and digoxin (P-glycoprotein), based on the data of a previous study, and were able to find a sparse and flexible design. The obtained design was evaluated by clinical trial simulations and shown to be efficient for the estimation of population and individual parameters. Copyright © 2015 John Wiley & Sons, Ltd.

19.
The transformed likelihood approach to the estimation of fixed-effects dynamic panel data models is known to have very good inferential properties, but it is not directly implemented in the most widely used statistical software. The present paper shows how a simple model reformulation can be adopted to describe the problem in terms of classical linear mixed models. The transformed likelihood approach is based on the first-difference data transformation; the results that follow derive from a convenient reformulation in terms of deviations from the first observations. Given the invariance to data transformation, the likelihood functions defined in the two cases coincide. Because it results in a classical random-effects linear model form, the proposed approach significantly increases the number of available estimation procedures and provides a straightforward interpretation of the parameters. Moreover, the proposed model specification makes available all the estimation improvements typical of the random-effects model literature. Simulation studies are conducted to assess the robustness of the estimation method to violations of mean stationarity.

20.
The identification of synergistic interactions between combinations of drugs is an important area within drug discovery and development. Preclinically, large numbers of screening studies to identify synergistic pairs of compounds can often be run, necessitating efficient and robust experimental designs. We consider experimental designs for detecting interaction between two drugs in a preclinical in vitro assay in the presence of uncertainty about the monotherapy response. The monotherapies are assumed to follow the Hill equation with common lower and upper asymptotes and a common variance. The optimality criterion used is the variance of the interaction parameter. We focus on ray designs and investigate two algorithms for selecting the optimum set of dose combinations. The first is a forward algorithm in which design points are added sequentially. This gives useful solutions in simple cases but can lack robustness when knowledge about the monotherapy parameters is insufficient. The second algorithm is a more pragmatic approach in which the design points are constrained to be distributed log-normally along the rays and monotherapy doses. We find that the pragmatic algorithm is more stable than the forward algorithm, and even when the forward algorithm has converged, the pragmatic algorithm can still outperform it. Practically, we find that good designs for detecting an interaction have equal numbers of points on monotherapies and combination therapies, with those points typically placed where a 50% response is expected. More uncertainty in the monotherapy parameters leads to an optimal design with design points that are more spread out. Copyright © 2015 John Wiley & Sons, Ltd.
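The criterion can be illustrated numerically. For an assumed Finney-style response surface with interaction parameter kappa (a stand-in for the paper's model), the variance of the interaction estimate for a candidate design is sigma^2 times the kappa entry of (J'J)^(-1), with J the Jacobian of the mean responses in the parameters. The sketch compares a design with combination points near the 50% response level against one spread widely; all parameter guesses are illustrative:

```python
import numpy as np

# Assumed Finney-type surface: the combination (da, db) acts like the single
# A-dose  d = da + rho*db + kappa*sqrt(rho*da*db)  fed through a shared Hill
# curve; kappa = 0 corresponds to Loewe additivity.
def mean_response(theta, da, db):
    bottom, top, logec50, slope, rho, kappa = theta
    d = np.maximum(da + rho * db + kappa * np.sqrt(rho * da * db), 1e-12)
    return bottom + (top - bottom) / (1 + np.exp(-slope * (np.log(d) - logec50)))

def var_kappa(theta, da, db, sigma=3.0, eps=1e-5):
    """Optimality criterion: sigma^2 * [(J'J)^{-1}] at the kappa coordinate,
    with J the numerical Jacobian of the mean responses."""
    J = np.empty((da.size, len(theta)))
    for i in range(len(theta)):
        tp = np.array(theta, float); tp[i] += eps
        tm = np.array(theta, float); tm[i] -= eps
        J[:, i] = (mean_response(tp, da, db) - mean_response(tm, da, db)) / (2 * eps)
    return sigma**2 * np.linalg.inv(J.T @ J)[-1, -1]

theta = [0.0, 100.0, np.log(1.0), 1.0, 0.5, 0.0]   # illustrative prior guesses
mono = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
mono_da = np.concatenate([np.repeat(mono, 2), np.zeros(10)])
mono_db = np.concatenate([np.zeros(10), np.repeat(mono, 2)])

for label, combo in [("combos near 50% response", np.full(10, 0.7)),
                     ("combos spread widely", np.repeat([0.01, 0.1, 1, 10, 100], 2))]:
    da = np.concatenate([mono_da, combo])
    db = np.concatenate([mono_db, combo])
    print(f"{label}: Var(kappa) = {var_kappa(theta, da, db):.4f}")
# Points near the 50% response level are typically the most informative,
# matching the abstract's practical finding.
```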
