Similar Articles
20 similar articles found.
1.
Phase II trials evaluate whether a new drug or therapy is worth pursuing further, or whether certain treatments are feasible. A typical phase II trial is a single-arm (open-label) trial with a binary clinical endpoint (response to therapy). Although many oncology phase II clinical trials are designed with a two-stage procedure, multi-stage designs for phase II cancer clinical trials are now feasible owing to the increased capability of data capture. Such designs adjust for multiple analyses and variations in analysis time, and provide greater flexibility, such as minimizing the number of patients treated on an ineffective therapy and identifying the minimum number of patients needed to evaluate whether the trial warrants further development. In most NIH-sponsored studies, the early stopping rule is determined so that the number of patients treated on an ineffective therapy is minimized. In pharmaceutical trials, it is also important to know as early as possible whether the trial is highly promising and how likely an early conclusion is to be sustained. Although various methods are available to address these issues, practitioners often use disparate methods for different issues without realizing that a single unified method exists. This article shows how to use a unified approach via a fully sequential procedure, the sequential conditional probability ratio test, to address the multiple needs of a phase II trial. We show that the fully sequential program can be used to derive an optimized, efficient multi-stage design for either low or high activity, to identify the minimum number of patients required to assess whether a new drug warrants further study, and to adjust for unplanned interim analyses. In addition, we calculate a probability of discordance, that is, the probability that the statistical test would conclude otherwise should the trial continue to its planned end, usually the sample size of a fixed-sample design. This probability can aid decision making in a drug development program. All computations are based on the exact binomial distribution.
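Since the computations above rest on the exact binomial distribution, here is a minimal sketch of how the operating characteristics of a simple two-stage stopping rule can be evaluated exactly. The design parameters (n1, r1, n, r) and response rates are illustrative placeholders, not values from the paper, and this is not the authors' sequential conditional probability ratio test program.

```python
# Exact binomial evaluation of a two-stage single-arm design:
# stop after stage 1 if responses <= r1 among n1 patients;
# otherwise enroll to n and declare the drug inactive if total responses <= r.
# Design parameters below are illustrative, not taken from the paper.
from scipy.stats import binom

def two_stage_operating_chars(p, n1, r1, n, r):
    """Probability of early termination (PET), expected sample size, and
    overall probability of declaring the drug active, at true response
    rate p, all computed from the exact binomial distribution."""
    pet = binom.cdf(r1, n1, p)                 # stop early: X1 <= r1
    expected_n = n1 + (1 - pet) * (n - n1)
    # P(declare active) = P(X1 > r1 and X1 + X2 > r)
    p_active = sum(
        binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
        for x1 in range(r1 + 1, n1 + 1)
    )
    return pet, expected_n, p_active

# Operating characteristics under a null (p=0.10) and alternative (p=0.30) rate
for p in (0.10, 0.30):
    pet, en, pa = two_stage_operating_chars(p, n1=12, r1=1, n=35, r=5)
    print(f"p={p:.2f}: PET={pet:.3f}, E[N]={en:.1f}, P(active)={pa:.3f}")
```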

2.
We consider the study of censored survival times in the situation where the available data consist of both eligible and ineligible subjects, and information distinguishing the two groups is sometimes missing. A complete-case analysis in this context would use only subjects known to be eligible, resulting in inefficient and potentially biased estimators. We propose a two-step procedure which resembles the EM algorithm but is computationally much faster. In the first step, one estimates the conditional expectation of the missing eligibility indicators given the observed data using a logistic regression based on the complete cases (i.e., subjects with a non-missing eligibility indicator). In the second step, maximum likelihood estimators are obtained from a weighted Cox proportional hazards model, with the weights being either observed eligibility indicators or estimated conditional expectations thereof. Under ignorable missingness, the estimators from the second step are proven to be consistent and asymptotically normal, with explicit variance estimators. We demonstrate through simulation that the proposed methods perform well for moderate-sized samples and are robust in the presence of eligibility indicators that are missing not at random. The proposed procedure is more efficient and more robust than the complete-case analysis and, unlike the EM algorithm, does not require time-consuming iteration. Although the proposed methods are applicable generally, they would be most useful for large data sets (e.g., administrative data), for which the computational savings outweigh the price one has to pay for making various approximations in avoiding iteration. We apply the proposed methods to national kidney transplant registry data.
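A minimal sketch of the two-step idea, under stated assumptions: the column names (eligible, time, event), the covariate list, and the use of scikit-learn and lifelines are all stand-ins for illustration, not the authors' implementation.

```python
# Sketch of the two-step procedure (hypothetical column names; df is a
# pandas DataFrame with NaN in "eligible" where the indicator is missing):
# step 1: logistic regression on complete cases estimates
#         E[eligibility | observed data] for subjects with it missing;
# step 2: Cox model weighted by observed or estimated eligibility.
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def two_step_weighted_cox(df, covariates):
    obs = df["eligible"].notna()                       # complete cases
    step1 = LogisticRegression().fit(df.loc[obs, covariates],
                                     df.loc[obs, "eligible"].astype(int))
    w = df["eligible"].astype(float).copy()
    w[~obs] = step1.predict_proba(df.loc[~obs, covariates])[:, 1]
    fit_df = df[covariates + ["time", "event"]].assign(weight=w)
    fit_df = fit_df[fit_df["weight"] > 0]              # drop known-ineligible
    cph = CoxPHFitter()
    # robust=True gives sandwich standard errors, appropriate with weights
    cph.fit(fit_df, duration_col="time", event_col="event",
            weights_col="weight", robust=True)
    return cph
```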

3.
Sample sizes of Phase 2 dose-finding studies, usually determined based on a power requirement to detect a significant dose–response relationship, will generally not provide adequate precision for Phase 3 target dose selection. We propose to calculate the sample size of a dose-finding study based on the probability of successfully identifying the target dose within an acceptable range (e.g., 80%–120% of the target) using the multiple comparison and modeling procedure (MCP-Mod). With the proposed approach, different design options for the Phase 2 dose-finding study can also be compared. Due to inherent uncertainty around an assumed true dose–response relationship, sensitivity analyses to assess the robustness of the sample size calculations to deviations from modeling assumptions are recommended. Planning for a hypothetical Phase 2 dose-finding study is used to illustrate the main points. Code for the proposed approach is available at https://github.com/happysundae/posMCPMod.
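MCP-Mod itself is implemented elsewhere (the linked repository; the procedure is also available in R). The sketch below is a deliberately simplified stand-in: it estimates, by simulation under an assumed true Emax curve, the probability that the target dose read off a fitted model lands within 80%–120% of the truth, which is the kind of success criterion the abstract proposes. All design constants are hypothetical.

```python
# Simplified simulation sketch (not full MCP-Mod): estimate the probability
# that the dose achieving a target effect, read off a fitted Emax curve,
# lands within 80%-120% of the true target dose, per candidate sample size.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
doses = np.array([0.0, 10.0, 25.0, 50.0, 100.0])   # assumed design
e0, emax, ed50, sigma = 0.0, 1.0, 20.0, 1.2        # assumed truth
target_effect = 0.6                                 # effect defining the target dose

def emax_model(d, e0, emax, ed50):
    return e0 + emax * d / (ed50 + d)

def target_dose(e0_, emax_, ed50_, delta=target_effect):
    # invert the Emax curve: dose giving effect delta (inf if unattainable)
    return ed50_ * delta / (emax_ - delta) if emax_ > delta else np.inf

true_td = target_dose(e0, emax, ed50)

def prob_success(n_per_dose, n_sim=500):
    hits = 0
    for _ in range(n_sim):
        d = np.repeat(doses, n_per_dose)
        y = emax_model(d, e0, emax, ed50) + rng.normal(0, sigma, d.size)
        try:
            est, _ = curve_fit(emax_model, d, y, p0=[0, 1, 20], maxfev=5000)
        except RuntimeError:
            continue                        # non-convergence counts as failure
        td_hat = target_dose(*est)
        hits += 0.8 * true_td <= td_hat <= 1.2 * true_td
    return hits / n_sim

for n in (20, 40, 80):                      # compare design options by sample size
    print(n, prob_success(n))
```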

4.
5.
Traditionally, in a clinical development plan, phase II trials are relatively small and can be expected to yield a large degree of uncertainty in the estimates on which phase III trials are planned. Phase II trials are also used to explore appropriate primary efficacy endpoint(s) or patient populations. When the biology of the disease and the pathogenesis of disease progression are well understood, the phase II and phase III studies may be performed in the same patient population with the same primary endpoint, e.g. efficacy measured by HbA1c in non-insulin-dependent diabetes mellitus trials with a treatment duration of at least three months. In disease areas where molecular pathways are not well established, or where the clinical outcome endpoint cannot be observed in a short-term study (e.g. mortality in cancer or AIDS trials), the treatment effect may be postulated through use of an intermediate surrogate endpoint in phase II trials; in many cases, however, the appropriate clinical endpoint is still being explored in the phase II trials. An important question is how much of the effect observed on the surrogate endpoint in the phase II study can be translated into the clinical effect in the phase III trial. Another question is how much uncertainty remains in phase III trials. In this work, we study the utility of adaptation by design (not by statistical test), in the sense of adapting the phase II information for planning the phase III trials. That is, we investigate the impact of using various phase II effect-size estimates on the sample size planning for phase III trials. In general, if the point estimate from the phase II trial is used for planning, it is advisable to size the phase III trial by choosing a smaller alpha level or a higher power level. Adaptation via the lower limit of the one-standard-deviation confidence interval from the phase II trial appears to be a reasonable choice, since it balances well between the empirical power of the launched trials and the proportion of trials not launched, provided a threshold lower than the true effect size of the phase III trial can be chosen for determining whether the phase III trial is to be launched.
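A minimal sketch of the planning rule discussed above: size the phase III trial from either the phase II point estimate or the lower limit of the one-standard-deviation confidence interval, using the standard two-arm normal-approximation sample-size formula. The phase II numbers are hypothetical.

```python
# Sketch: phase III sample size per arm from a phase II effect estimate,
# planned either at the point estimate or at the lower limit of the
# one-standard-deviation confidence interval (estimate - 1*SE).
# Standard two-sample normal-approximation formula; numbers are illustrative.
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.025, power=0.9):
    """Two-sample comparison of means, one-sided level alpha:
    n = 2 * sd^2 * (z_{1-alpha} + z_{power})^2 / delta^2 per arm."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

delta_hat, se_hat, sd = 0.40, 0.12, 1.0     # hypothetical phase II results
print(n_per_arm(delta_hat, sd))             # plan at the point estimate
print(n_per_arm(delta_hat - se_hat, sd))    # plan at the one-SE lower limit
```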

6.
Conditional (European Medicines Agency) or accelerated (U.S. Food and Drug Administration) approval of drugs allows earlier access to promising new treatments that address unmet medical needs. Certain post-marketing requirements must typically be met in order to obtain full approval, such as conducting a new post-market clinical trial. We study the applicability of the recently developed harmonic mean χ²-test to this conditional or accelerated approval framework. The proposed approach can be used both to support the design of the post-market trial and the analysis of the combined evidence provided by both trials. Other methods considered are the two-trials rule, Fisher's criterion and Stouffer's method. In contrast to some of the traditional methods, the harmonic mean χ²-test always requires a post-market clinical trial. If the p-value from the pre-market clinical trial is 0.025, a smaller sample size for the post-market clinical trial is needed than with the two-trials rule. For illustration, we apply the harmonic mean χ²-test to a drug which received conditional (and later full) market licensing by the EMA. A simulation study is conducted to study the operating characteristics of the harmonic mean χ²-test and the two-trials rule in more detail. We finally investigate the applicability of these two methods to compute the power at interim of an ongoing post-market trial. These results are expected to aid in the design and assessment of the required post-market studies in terms of the level of evidence required for full approval.
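For orientation, here is a sketch of the classical combination rules the abstract compares against; the harmonic mean χ²-test itself is not shown, and the p-values are hypothetical.

```python
# Sketch of the traditional evidence-combination rules mentioned above
# (two-trials rule, Fisher, Stouffer); this is NOT the harmonic mean
# chi-squared test. p-values are illustrative one-sided values.
from scipy.stats import combine_pvalues

p_pre, p_post = 0.01, 0.04    # hypothetical pre-/post-market one-sided p-values

# Two-trials rule: both trials significant at one-sided 0.025
two_trials_pass = (p_pre < 0.025) and (p_post < 0.025)

# Fisher's criterion: -2 * sum(log p_i) ~ chi-squared with 2k df under H0
fisher_stat, fisher_p = combine_pvalues([p_pre, p_post], method="fisher")

# Stouffer's method: sum of z-scores / sqrt(k) ~ N(0,1) under H0
stouffer_stat, stouffer_p = combine_pvalues([p_pre, p_post], method="stouffer")

print(two_trials_pass, fisher_p, stouffer_p)
```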

7.
With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. Data from MRCTs can be accepted by regulatory authorities across regions and countries as the primary source of evidence to support global marketing approval of a drug simultaneously. The MRCT can speed up patient enrollment and drug approval, and it makes effective therapies available to patients all over the world at the same time. However, conducting drug development globally poses many operational and scientific challenges. One important question in the design of a multiregional study is how to partition the sample size among the individual regions. In this paper, two systematic approaches are proposed for sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches.

8.
We consider joint spatial modelling of areal multivariate categorical data assuming a multiway contingency table for the variables, modelled by using a log-linear model, and connected across units by using spatial random effects. With no distinction regarding whether variables are response or explanatory, we do not limit inference to conditional probabilities, as in customary spatial logistic regression. With joint probabilities we can calculate arbitrary marginal and conditional probabilities without having to refit models to investigate different hypotheses. Flexible aggregation allows us to investigate subgroups of interest; flexible conditioning enables not only the study of outcomes given risk factors but also retrospective study of risk factors given outcomes. A benefit of joint spatial modelling is the opportunity to reveal disparities in health in a richer fashion, e.g. across space for any particular group of cells, across groups of cells at a particular location, and, hence, potential space–group interaction. We illustrate with an analysis of birth records for the state of North Carolina and compare with spatial logistic regression.

9.
Population pharmacokinetics (POPPK) has many important uses at various stages of drug development and approval. At the phase III stage, one of the major uses of POPPK is to identify covariate influences on human pharmacokinetics, which is important for potential dose adjustment and drug labeling. One common analysis approach is nonlinear mixed‐effect modeling, which typically involves time‐consuming extensive search for best fits among a large number of possible models. We propose that the analysis goal can be better achieved with a more standard confirmatory statistical analysis approach, which uses a prespecified primary analysis and additional sensitivity analyses. We illustrate this approach using a phase III study data set and compare the result with that calculated using the common exploratory approach. We argue that the confirmatory approach not only substantially reduces analysis time but also yields more accurate and interpretable results. Some aspects of this confirmatory approach may also be extended to data analysis in earlier stages of clinical drug development, i.e. phase II and phase I. Copyright © 2009 John Wiley & Sons, Ltd.

10.
This paper is concerned with conditional feature screening for ultra-high dimensional right-censored data when some important predictors have been identified in advance. A new model-free conditional feature screening approach, conditional correlation rank sure independence screening, is proposed and investigated theoretically. The suggested conditional screening procedure has several desirable merits. First, it is model-free, and thus robust to model misspecification. Second, it is robust to heavy-tailed response distributions and to potential outliers in the response. Third, it is naturally applicable to complete data when there is no censoring. Through simulation studies, we demonstrate that the proposed approach outperforms the CoxCS of Hong et al. under some circumstances. A real dataset is used to illustrate the usefulness of the proposed conditional screening method.

11.
In this paper we propose a quantile survival model to analyze censored data. This approach provides a very effective way to construct a proper model for the survival time conditional on some covariates. Once a quantile survival model for the censored data is established, the survival density, survival function or hazard function of the survival time can be obtained easily. For illustration purposes, we focus on a model based on the generalized lambda distribution (GLD). The GLD and many other quantile-function models are defined only through their quantile functions; no closed-form expressions are available for the other equivalent functions. We also develop a Bayesian Markov chain Monte Carlo (MCMC) method for parameter estimation. Extensive simulation studies have been conducted. Results from both the simulation studies and the application show that the proposed quantile survival models can be very useful in practice.
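A sketch of the central idea, assuming the FMKL parameterization of the GLD: because the distribution is defined through its quantile function, sampling and survival quantities follow directly from Q(u). The parameter values are illustrative, and this is not the authors' Bayesian MCMC estimation.

```python
# Sketch: the generalized lambda distribution (GLD) is defined through its
# quantile function, so sampling and survival summaries come directly from it.
# FMKL parameterization; lambda values are illustrative, not paper estimates.
import numpy as np
from scipy.optimize import brentq

def gld_quantile(u, lam1, lam2, lam3, lam4):
    """FMKL GLD quantile function Q(u)."""
    return lam1 + ((u**lam3 - 1) / lam3 - ((1 - u)**lam4 - 1) / lam4) / lam2

rng = np.random.default_rng(7)
lam = (5.0, 1.0, 0.2, 0.1)                 # hypothetical parameter values

# Inverse-transform sampling: survival times are Q(U), U ~ Uniform(0,1)
t = gld_quantile(rng.uniform(size=10_000), *lam)

def survival(x, lam):
    """S(x) = 1 - u where Q(u) = x, found by root-finding (Q is monotone)."""
    u = brentq(lambda v: gld_quantile(v, *lam) - x, 1e-9, 1 - 1e-9)
    return 1 - u

print(t.mean(), survival(6.0, lam))
```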

12.
A multivariate change-point control chart based on data depth (CPDP) is considered for detecting shifts in the mean vector, the covariance matrix, or both, of a process in Phase I. The proposed chart is preferable from a robustness point of view, has attractive detection performance, and can be especially useful in the Phase I analysis setting, where there is limited information about the underlying process. Comparison results and an illustrative example show that our CPDP chart has great potential for Phase I analysis of multivariate individual observations. The application of the CPDP chart is illustrated with a real data example.

13.
Latent class analysis (LCA) has important applications in the social and behavioural sciences for modelling categorical response variables, and non-response is typical when collecting data. In this study, the non-response mainly comprised 'contingency questions' and real 'missing data'. The primary objective of this study was to evaluate the effects of some potential factors on model selection indices in LCA with non-response data. We simulated missing data with contingency questions and evaluated the accuracy rates of eight information criteria for selecting the correct models. The results showed that the main factors are latent class proportions, conditional probabilities, sample size, the number of items, the missing-data rate and the contingency-data rate. Interactions of the conditional probabilities with class proportions, sample size and the number of items are also significant. Our simulation results suggest that the impact of missing data and contingency questions can be mitigated by increasing the sample size or the number of items.

14.
We propose a new set of test statistics to examine the association between two ordinal categorical variables X and Y after adjusting for continuous and/or categorical covariates Z. Our approach first fits multinomial (e.g., proportional odds) models of X and Y, separately, on Z. For each subject, we then compute the conditional distributions of X and Y given Z. If there is no relationship between X and Y after adjusting for Z, then these conditional distributions will be independent, and the observed value of (X, Y) for a subject is expected to follow the product distribution of these conditional distributions. We consider two simple ways of testing the null of conditional independence, both of which treat X and Y equally, in the sense that they do not require specifying an outcome and a predictor variable. The first approach adds these product distributions across all subjects to obtain the expected distribution of (X, Y) under the null and then contrasts it with the observed unconditional distribution of (X, Y). Our second approach computes "residuals" from the two multinomial models and then tests for correlation between these residuals; we define a new individual-level residual for models with ordinal outcomes. We present methods for computing p-values using either the empirical or asymptotic distributions of our test statistics. Through simulations, we demonstrate that our test statistics perform well in terms of power and Type I error rate when compared to proportional odds models which treat X as either a continuous or categorical predictor. We apply our methods to data from a study of visual impairment in children and to a study of cervical abnormalities in human immunodeficiency virus (HIV)-infected women. Supplemental materials for the article are available online.
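A simplified sketch of the second (residual-based) approach: fit separate proportional-odds models of X and Y on Z, define an individual-level residual as observed score minus model-expected score, and assess the residual correlation by permutation. The residual definition and the permutation scheme here are stand-ins for illustration, not the authors' exact proposals.

```python
# Sketch of the residual-correlation test idea (simplified stand-in).
# x, y: ordinal outcomes coded as integer arrays; Z: covariate matrix
# without an intercept column (statsmodels' OrderedModel forbids a constant).
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

def expected_score(probs):
    """E[category index | Z] under the fitted conditional distribution."""
    k = np.arange(probs.shape[1])
    return probs @ k

def residual_corr_test(x, y, Z, n_perm=2000, seed=0):
    rx = x - expected_score(OrderedModel(x, Z, distr="logit")
                            .fit(method="bfgs", disp=False).predict(Z))
    ry = y - expected_score(OrderedModel(y, Z, distr="logit")
                            .fit(method="bfgs", disp=False).predict(Z))
    obs = np.corrcoef(rx, ry)[0, 1]
    rng = np.random.default_rng(seed)
    # crude permutation null: shuffle one residual vector
    exceed = sum(abs(np.corrcoef(rng.permutation(rx), ry)[0, 1]) >= abs(obs)
                 for _ in range(n_perm))
    return obs, (1 + exceed) / (n_perm + 1)    # permutation p-value
```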

15.
We propose a general framework for regression models with functional response containing a potentially large number of flexible effects of functional and scalar covariates. Special emphasis is put on historical functional effects, where functional response and functional covariate are observed over the same interval and the response is influenced only by covariate values up to the current grid point. Historical functional effects are mostly used when functional response and covariate are observed on a common time interval, as they account for chronology. Our formulation allows for flexible integration limits including, e.g., lead or lag times. The functional responses can be observed on irregular curve-specific grids. Additionally, we introduce different parameterizations for historical effects and discuss identifiability issues. The models are estimated by a component-wise gradient boosting algorithm which is suitable for models with a potentially large number of covariate effects, even more than observations, and inherently performs model selection. By minimizing corresponding loss functions, different features of the conditional response distribution can be modeled, including generalized and quantile regression models as special cases. The methods are implemented in the open-source R package FDboost. The methodological developments are motivated by biotechnological data on Escherichia coli fermentations, but cover a much broader model class.

16.
We present a new estimator of the restricted mean survival time in randomized trials where there is right censoring that may depend on treatment and baseline variables. The proposed estimator leverages prognostic baseline variables to obtain equal or better asymptotic precision compared to traditional estimators. Under regularity conditions and random censoring within strata of treatment and baseline variables, the proposed estimator has the following features: (i) it is interpretable under violations of the proportional hazards assumption; (ii) it is consistent and at least as precise as the Kaplan–Meier and inverse probability weighted estimators, under identifiability conditions; (iii) it remains consistent under violations of independent censoring (unlike the Kaplan–Meier estimator) when either the censoring or survival distributions, conditional on covariates, are estimated consistently; and (iv) it achieves the nonparametric efficiency bound when both of these distributions are consistently estimated. We illustrate the performance of our method using simulations based on resampling data from a completed, phase 3 randomized clinical trial of a new surgical treatment for stroke; the proposed estimator achieves a 12% gain in relative efficiency compared to the Kaplan–Meier estimator. The proposed estimator has potential advantages over existing approaches for randomized trials with time-to-event outcomes, since existing methods either rely on model assumptions that are untenable in many applications, or lack some of the efficiency and consistency properties (i)–(iv). We focus on estimation of the restricted mean survival time, but our methods may be adapted to estimate any treatment effect measure defined as a smooth contrast between the survival curves for each study arm. We provide R code to implement the estimator.
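The proposed covariate-adjusted estimator is provided in the authors' R code; as a baseline for comparison, here is a sketch of the Kaplan–Meier-based restricted mean survival time it is benchmarked against, on simulated data.

```python
# Sketch: Kaplan-Meier-based restricted mean survival time (RMST), the
# baseline estimator the proposed method is compared against. Data are
# simulated; this is not the authors' covariate-adjusted estimator.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.utils import restricted_mean_survival_time

rng = np.random.default_rng(3)
n, tau = 300, 2.0                          # tau = restriction time
t_event = rng.exponential(1.5, n)          # hypothetical event times
t_cens = rng.exponential(2.5, n)           # hypothetical censoring times
time = np.minimum(t_event, t_cens)
event = t_event <= t_cens

kmf = KaplanMeierFitter().fit(time, event_observed=event)
rmst = restricted_mean_survival_time(kmf, t=tau)   # area under KM up to tau
print(rmst)
```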

17.
In recent years, high failure rates in phase III trials have been observed. One of the main reasons is overoptimistic assumptions in the planning of phase III, resulting from limited phase II information and/or unawareness of realistic success probabilities. We present an approach for planning a phase II trial in a time-to-event setting that considers the whole phase II/III clinical development programme. We derive stopping boundaries after phase II that minimise the number of events, under side conditions on the conditional probabilities of a correct go/no-go decision after phase II as well as the conditional success probabilities for phase III. In addition, we give general recommendations for the choice of the phase II sample size. Our simulations show that the unconditional probabilities of a go/no-go decision, as well as the unconditional success probabilities for phase III, are influenced by the number of events observed in phase II. However, choosing more than 150 events in phase II seems unnecessary, as the impact on these probabilities then becomes quite small. We recommend considering aspects like the number of compounds in phase II and the resources available when determining the sample size. The fewer the compounds and the scarcer the resources for phase III, the higher the investment in phase II should be. Copyright © 2015 John Wiley & Sons, Ltd.
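A sketch of the kind of calculation behind these recommendations, under the standard normal approximation log(HR-hat) ~ N(log HR, 4/d) for d events with 1:1 allocation; the go/no-go threshold, effect size, and event counts are illustrative, not the paper's derived boundaries.

```python
# Sketch: unconditional go probability and programme success probability
# as a function of the number of phase II events d2, using the normal
# approximation Var(log HR-hat) ~ 4/d for d events split 1:1.
# Thresholds and effect sizes are illustrative placeholders.
import numpy as np
from scipy.stats import norm

def program_probs(hr_true, d2, d3, go_threshold=0.9,
                  alpha=0.025, n_sim=100_000, seed=0):
    rng = np.random.default_rng(seed)
    log_hr2 = rng.normal(np.log(hr_true), np.sqrt(4 / d2), n_sim)  # phase II
    go = np.exp(log_hr2) < go_threshold        # go if estimated HR < threshold
    log_hr3 = rng.normal(np.log(hr_true), np.sqrt(4 / d3), n_sim)  # phase III
    success = log_hr3 / np.sqrt(4 / d3) < norm.ppf(alpha)          # one-sided test
    return go.mean(), (go & success).mean()

for d2 in (50, 100, 150, 300):     # returns flatten as phase II events grow
    print(d2, program_probs(hr_true=0.75, d2=d2, d3=400))
```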

18.
In this article we consider nonparametric estimation of a structural equation model under full additivity constraint. We propose estimators for both the conditional mean and gradient which are consistent, asymptotically normal, oracle efficient, and free from the curse of dimensionality. Monte Carlo simulations support the asymptotic developments. We employ a partially linear extension of our model to study the relationship between child care and cognitive outcomes. Some of our (average) results are consistent with the literature (e.g., negative returns to child care when mothers have higher levels of education). However, as our estimators allow for heterogeneity both across and within groups, we are able to contradict many findings in the literature (e.g., we do not find any significant differences in returns between boys and girls or for formal versus informal child care). Supplementary materials for this article are available online.

19.
We propose a profile conditional likelihood approach to handle missing covariates in the general semiparametric transformation regression model. The method estimates the marginal survival function by the Kaplan-Meier estimator, and then estimates the parameters of the survival model and the covariate distribution from a conditional likelihood, substituting the Kaplan-Meier estimator for the marginal survival function in the conditional likelihood. This method is simpler than full maximum likelihood approaches, and yields a consistent and asymptotically normally distributed estimator of the regression parameter when censoring is independent of the covariates. The estimator demonstrates very high relative efficiency in simulations. Compared with complete-case analysis, the proposed estimator can be more efficient when the missing data are missing completely at random and can correct bias when the missing data are missing at random. The potential application of the proposed method to the generalized probit model with missing continuous covariates is also outlined.

20.
The expectation-maximization (EM) method facilitates computation of maximum likelihood (ML) and maximum penalized likelihood (MPL) solutions. The procedure requires specification of unobservable complete data which augment the measured or incomplete data. This specification defines a conditional expectation of the complete-data log-likelihood function, which is computed in the E-step. The EM algorithm is most effective when maximizing the function Q(θ) defined in the E-step is easier than maximizing the likelihood function.

The Monte Carlo EM (MCEM) algorithm of Wei & Tanner (1990) was introduced for problems where computation of Q is difficult or intractable. However, Monte Carlo can be computationally expensive, e.g. in signal processing applications involving large numbers of parameters. We provide another approach: a modification of the standard EM algorithm avoiding computation of conditional expectations.
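For concreteness, a minimal sketch of the standard EM iteration described above, for a two-component Gaussian mixture: the E-step computes the responsibilities that define Q(θ), and the M-step maximizes Q in closed form. This illustrates plain EM, not the authors' modification.

```python
# Sketch of standard EM for a two-component Gaussian mixture:
# E-step computes responsibilities (defining Q(theta)); M-step maximizes
# Q(theta) in closed form. Data and starting values are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

pi, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibility of component 1 for each observation
    d0 = (1 - pi) * norm.pdf(x, mu[0], sd[0])
    d1 = pi * norm.pdf(x, mu[1], sd[1])
    r = d1 / (d0 + d1)
    # M-step: closed-form maximizers of Q(theta)
    pi = r.mean()
    mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
    sd = np.sqrt(np.array([np.average((x - mu[0])**2, weights=1 - r),
                           np.average((x - mu[1])**2, weights=r)]))
print(pi, mu, sd)
```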
