Similar Documents
20 similar documents found (search time: 156 ms)
1.
By means of a search design one is able to search for and estimate a small set of non‐zero elements from the set of higher order factorial interactions in addition to estimating the lower order factorial effects. One may be interested in estimating the general mean and main effects, in addition to searching for and estimating a non‐negligible effect in the set of 2‐ and 3‐factor interactions, assuming 4‐ and higher‐order interactions are all zero. Such a search design is called a ‘main effect plus one plan’ and is denoted by MEP.1. Construction of such a plan, for 2^m factorial experiments, has been considered and developed by several authors and leads to MEP.1 plans for an odd number m of factors. These designs are generally determined by two arrays, one specifying a main effect plan and the other specifying a follow‐up. In this paper we develop the construction of search designs for an even number of factors m, m≠6. The new series of MEP.1 plans is a set of single array designs with a well structured form. Such a structure allows for flexibility in arriving at an appropriate design with optimum properties for search and estimation.

2.
International Conference on Harmonization E10 concerns non-inferiority trials and the assessment of comparative efficacy, both of which often involve indirect comparisons. In the non-inferiority setting, there are clinical trial results directly comparing an experimental treatment with an active control, and clinical trial results directly comparing the active control with placebo, and there is an interest in the indirect comparison of the experimental treatment with placebo. In the comparative efficacy setting, there may be separate clinical trial results comparing each of two treatments with placebo, and there is interest in an indirect comparison of the treatments. First, we show that the sample size required for a trial intended to demonstrate superiority through an indirect comparison is always greater than the sample size required for a direct comparison. In addition, by introducing the concept of preservation of effect, we show that the hypothesis addressed in the two settings is identical. Our main result concerns the logical inconsistency between a reasonable criterion for preference of an experimental treatment to a standard treatment and existing regulatory guidance for approval of the experimental treatment on the basis of an indirect comparison. Specifically, the preferred treatment will not always meet the criterion for regulatory approval. This is due to the fact that the experimental treatment bears the burden of overcoming the uncertainty in the effect of the standard treatment. We consider an alternative approval criterion that avoids this logical inconsistency.
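The claim that an indirect comparison always requires a larger sample size follows from the fact that the variances of the two direct contrasts add. A minimal numerical sketch in Python (function name, effect size, and settings are hypothetical, not from the paper):

```python
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.025, power=0.9, var_factor=1.0):
    # Standard two-arm sample size for detecting a mean difference delta
    # with common SD sigma. var_factor inflates the variance to mimic an
    # indirect comparison, where var(E - P) = var(E - C) + var(C - P);
    # with two equally sized trials the variance roughly doubles.
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return 2 * var_factor * (z * sigma / delta) ** 2

direct = n_per_arm(delta=0.3, sigma=1.0)                    # direct E vs P trial
indirect = n_per_arm(delta=0.3, sigma=1.0, var_factor=2.0)  # via active control
```

Under these assumptions the indirect route needs twice the per-arm sample size of the direct trial, in line with the abstract's first result.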

3.
There is debate within the osteoporosis research community about the relationship between the risk of osteoporotic fracture and the surrogate measures of fracture risk. Meta‐regression analyses based on summary data have shown a linear relationship between fracture risk and surrogate measures, whereas analyses based on individual patient data (IPD) have shown a nonlinear relationship. We investigated the association between changes in a surrogate measure of fracture incidence, in this case a bone turnover marker for resorption assessed in the three risedronate phase III clinical programmes, and incident osteoporosis‐related fracture risk using regression models based on patient‐level and trial‐level information. The relationship between osteoporosis‐related fracture risk and changes in bone resorption was different when analysed on the basis of IPD than when analysed on the basis of a meta‐analytic approach (i.e., meta‐regression) using summary data (e.g., treatment effect based on treatment group estimates). This inconsistency in our findings was consistent with those in the published literature. Meta‐regression based on summary statistics at the trial level is not expected to reflect causal relationships between a clinical outcome and surrogate measures. Analyses based on IPD make possible a more comprehensive analysis since all relevant data on a patient level are available. Copyright © 2004 John Wiley & Sons Ltd.
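The divergence between patient-level and trial-level analyses is a general aggregation phenomenon, not specific to bone markers. A hypothetical simulation (the risk curve and all numbers are invented) builds a nonlinear patient-level relationship whose trial-level summaries flatten out the curvature:

```python
import numpy as np

rng = np.random.default_rng(0)

def fracture_risk(x):
    # Hypothetical nonlinear patient-level curve: risk falls steeply for
    # small reductions x in the resorption marker, then plateaus.
    return 1.0 / (1.0 + np.exp(2.0 - np.exp(-x)))

summaries = []
for mu in (0.5, 1.0, 2.0, 4.0):            # trial-level mean marker change
    x = rng.normal(mu, 0.5, size=20_000)   # patients within one trial
    y = rng.binomial(1, fracture_risk(x))  # incident fractures
    summaries.append((x.mean(), y.mean()))

# A meta-regression sees only `summaries`; the within-trial curvature of
# fracture_risk is invisible at this level, which is one route to the
# IPD-versus-summary discrepancy the abstract describes.
```

The trial-level points still decrease smoothly with the mean marker change, so a summary-level fit can look adequately linear even though the patient-level relationship is strongly nonlinear.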

4.
The generalized semiparametric mixed varying‐coefficient effects model for longitudinal data can accommodate a variety of link functions and flexibly model different types of covariate effects, including time‐constant, time‐varying and covariate‐varying effects. The time‐varying effects are unspecified functions of time and the covariate‐varying effects are nonparametric functions of a possibly time‐dependent exposure variable. A semiparametric estimation procedure is developed that uses local linear smoothing and profile weighted least squares, which requires smoothing in the two different and yet connected domains of time and the time‐dependent exposure variable. The asymptotic properties of the estimators of both nonparametric and parametric effects are investigated. In addition, hypothesis testing procedures are developed to examine the covariate effects. The finite‐sample properties of the proposed estimators and testing procedures are examined through simulations, indicating satisfactory performances. The proposed methods are applied to analyze the AIDS Clinical Trial Group 244 clinical trial to investigate the effects of antiretroviral treatment switching in HIV‐infected patients before and after developing the T215Y antiretroviral drug resistance mutation. The Canadian Journal of Statistics 47: 352–373; 2019 © 2019 Statistical Society of Canada

5.
Dynamic treatment strategies are designed to change treatments over time in response to intermediate outcomes. They can be deployed for primary treatment as well as for the introduction of adjuvant treatment or other treatment‐enhancing interventions. When treatment interventions are delayed until needed, more cost‐efficient strategies will result. Sequential multiple assignment randomized (SMAR) trials allow for unbiased estimation of the marginal effects of different sequences of history‐dependent treatment decisions. Because a single SMAR trial enables evaluation of many different dynamic regimes at once, it is naturally thought to require larger sample sizes than the parallel randomized trial. In this paper, we compare power between SMAR trials studying a regime, where treatment boosting enters when triggered by an observed event, versus the parallel design, where a treatment boost is consistently prescribed over the entire study period. In some settings, we found that the dynamic design yields the more efficient trial for the detection of treatment activity. We develop one particular trial to compare a dynamic nursing intervention with telemonitoring for the enhancement of medication adherence in epilepsy patients. To this end, we derive from the SMAR trial data either an average of conditional treatment effects (‘conditional estimator’) or the population‐averaged (‘marginal’) estimator of the dynamic regimes. Analytical sample size calculations for the parallel design and the conditional estimator are compared with simulated results for the population‐averaged estimator. We conclude that in specific settings, well‐chosen SMAR designs may require fewer data for the development of more cost‐efficient treatment strategies than parallel designs. Copyright © 2012 John Wiley & Sons, Ltd.

6.
Subgroup by treatment interaction assessments are routinely performed when analysing clinical trials and are particularly important for phase 3 trials where the results may affect regulatory labelling. Interpretation of such interactions is particularly difficult, as on one hand the subgroup finding can be due to chance, but equally such analyses are known to have a low chance of detecting differential treatment effects across subgroup levels, so may overlook important differences in therapeutic efficacy. EMA have therefore issued draft guidance on the use of subgroup analyses in this setting. Although this guidance provided clear proposals on the importance of pre‐specification of likely subgroup effects and how to use this when interpreting trial results, it is less clear which analysis methods would be reasonable, and how to interpret apparent subgroup effects in terms of whether further evaluation or action is necessary. A PSI/EFSPI Working Group has therefore been investigating a focused set of analysis approaches to assess treatment effect heterogeneity across subgroups in confirmatory clinical trials that take account of the number of subgroups explored and also investigating the ability of each method to detect such subgroup heterogeneity. This evaluation has shown that the plotting of standardised effects, bias‐adjusted bootstrapping method and SIDES method all perform more favourably than traditional approaches such as investigating all subgroup‐by‐treatment interactions individually or applying a global test of interaction. Therefore, these approaches should be considered to aid interpretation and provide context for observed results from subgroup analyses conducted for phase 3 clinical trials.
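The tension the Working Group addresses, chance subgroup findings on one side and low interaction power on the other, is easy to reproduce. A hypothetical simulation (all settings invented) tests ten random binary subgroups per trial when no true interaction exists and records how often at least one "significant" interaction appears:

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_subgroups, n_trials = 400, 10, 500
trials_with_hit = 0
for _ in range(n_trials):
    arm = rng.integers(0, 2, n)
    y = rng.normal(size=n)              # no true effect anywhere
    hit = False
    for _ in range(n_subgroups):
        g = rng.integers(0, 2, n)       # a random binary subgroup
        # interaction: difference of treatment effects across subgroup levels
        eff = [y[(arm == 1) & (g == k)].mean() - y[(arm == 0) & (g == k)].mean()
               for k in (0, 1)]
        var = sum(1.0 / ((arm == a) & (g == k)).sum()
                  for a in (0, 1) for k in (0, 1))
        if abs(eff[1] - eff[0]) / var ** 0.5 > 1.96:  # known unit variance
            hit = True
    trials_with_hit += hit
rate = trials_with_hit / n_trials       # roughly 1 - 0.95**10, far above 0.05
```

With ten unadjusted interaction tests the family-wise false positive rate lands near 40%, which is why methods that account for the number of subgroups explored are needed.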

7.
Network meta‐analysis can be implemented by using arm‐based or contrast‐based models. Here we focus on arm‐based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial‐by‐treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi‐likelihood/pseudo‐likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi‐likelihood/pseudo‐likelihood and h‐likelihood reduce bias and yield satisfactory coverage rates. Sum‐to‐zero restriction and baseline contrasts for random trial‐by‐treatment interaction effects, as well as a residual ML‐like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi‐likelihood/pseudo‐likelihood and h‐likelihood are therefore recommended.

8.
The indirect mechanism of action of immunotherapy causes a delayed treatment effect, producing delayed separation of survival curves between the treatment groups, and violates the proportional hazards assumption. Therefore, using the log‐rank test in immunotherapy trial design could result in a severe loss of efficiency. Few statistical methods are available for immunotherapy trial designs that incorporate a delayed treatment effect; recently, Ye and Yu proposed the use of a maximin efficiency robust test (MERT) for the trial design. The MERT is a weighted log‐rank test that puts less weight on early events and full weight after the delayed period. However, the weight function of the MERT involves an unknown function that has to be estimated from historical data. Here, for simplicity, we propose the use of an approximated maximin test, the V0 test, which is the sum of the log‐rank test for the full data set and the log‐rank test for the data beyond the lag time point. The V0 test fully uses the trial data and is more efficient than the log‐rank test when a lag exists, with relatively little efficiency loss when no lag exists. The sample size formula for the V0 test is derived. Simulations are conducted to compare the performance of the V0 test to existing tests. A real trial is used to illustrate cancer immunotherapy trial design with a delayed treatment effect.
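A minimal numerical sketch of the V0 idea follows. This is not the paper's calibrated test: the naive version below simply adds the two log-rank numerators and variances and ignores their covariance, and the toy data are invented.

```python
import numpy as np

def logrank_stat(time, event, group):
    # Observed-minus-expected deaths in group 1 and hypergeometric variance.
    o_e, var = 0.0, 0.0
    for t in np.unique(time[event]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & event).sum()
        d1 = ((time == t) & event & (group == 1)).sum()
        o_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_e, var

def v0_test(time, event, group, lag):
    # Sum of the full-data log-rank and the log-rank beyond the lag point;
    # the combined variance here ignores their covariance (sketch only).
    oe_full, v_full = logrank_stat(time, event, group)
    late = time > lag
    oe_late, v_late = logrank_stat(time[late], event[late], group[late])
    return (oe_full + oe_late) / np.sqrt(v_full + v_late)

# toy data: every control (group 0) event precedes every treated event
time = np.arange(1, 11, dtype=float)
event = np.ones(10, dtype=bool)
group = np.array([0] * 5 + [1] * 5)
z = v0_test(time, event, group, lag=5)   # strongly negative: group 1 favoured
```

Because all the separation occurs after the lag, the late-data component reinforces the full-data statistic, which is the mechanism behind the V0 test's efficiency gain under a delayed effect.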

9.
The crossover trial design (AB/BA design) is often used to compare the effects of two treatments in medical science because it performs within‐subject comparisons, which increase the precision of a treatment effect (i.e., a between‐treatment difference). However, the AB/BA design cannot be applied in the presence of carryover effects and/or treatment‐by‐period interaction. In such cases, Balaam's design is a more suitable choice. Unlike the AB/BA design, Balaam's design inflates the variance of an estimate of the treatment effect, thereby reducing the statistical power of tests. This is a serious drawback of the design. Although the variance of parameter estimators in Balaam's design has been extensively studied, the estimators of the treatment effect to improve the inference have received little attention. If the estimate of the treatment effect is obtained by solving the mixed model equations, the AA and BB sequences are excluded from the estimation process. In this study, we develop a new estimator of the treatment effect and a new test statistic using the estimator. The aim is to improve the statistical inference in Balaam's design. Simulation studies indicate that the type I error of the proposed test is well controlled, and that the test is more powerful and has more suitable characteristics than other existing tests when interactions are substantial. The proposed test is also applied to analyze a real dataset. Copyright © 2015 John Wiley & Sons, Ltd.

10.
This paper studies the notion of coherence in interval‐based dose‐finding methods. An incoherent decision is either (a) a recommendation to escalate the dose following an observed dose‐limiting toxicity or (b) a recommendation to deescalate the dose following a non–dose‐limiting toxicity. In a simulated example, we illustrate that the Bayesian optimal interval method and the Keyboard method are not coherent. We generated dose‐limiting toxicity outcomes under an assumed set of true probabilities for a trial of n=36 patients in cohorts of size 1, and we counted the number of incoherent dosing decisions that were made throughout this simulated trial. Each of the methods studied resulted in 13/36 (36%) incoherent decisions in the simulated trial. Additionally, for two different target dose‐limiting toxicity rates, 20% and 30%, and a sample size of n=30 patients, we randomly generated 100 dose‐toxicity curves and tabulated the number of incoherent decisions made by each method in 1000 simulated trials under each curve. For each method studied, the probability of incurring at least one incoherent decision during the conduct of a single trial is greater than 75%. Coherency is an important principle in the conduct of dose‐finding trials. Interval‐based methods violate this principle for cohorts of size 1 and require additional modifications to overcome this shortcoming. Researchers need to take a closer look at the dose assignment behavior of interval‐based methods when using them to plan dose‐finding studies.
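The coherence principle can be checked mechanically from a trial's dosing history. A small helper (hypothetical, not from the paper) that counts the two incoherent moves defined above:

```python
def count_incoherent(decisions):
    # decisions: list of (dose_level, dlt_observed) pairs in enrolment order,
    # for cohorts of size 1.
    incoherent = 0
    for (dose, tox), (next_dose, _) in zip(decisions, decisions[1:]):
        if tox and next_dose > dose:          # escalation after a DLT
            incoherent += 1
        elif not tox and next_dose < dose:    # de-escalation after a non-DLT
            incoherent += 1
    return incoherent
```

Running such a counter over the decision path of an interval-based design is essentially the tabulation the authors describe for their simulated trials.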

11.
Selection of treatments to fit the specific needs for a certain patient is one major challenge in modern medicine. Personalized treatments rely on established patient–treatment interactions. In recent years, various statistical methods for the identification and estimation of interactions between relevant covariates and treatment were proposed. In this article, different available methods for detection and estimation of a covariate–treatment interaction for a time-to-event outcome, namely the standard Cox regression model assuming a linear interaction, the fractional polynomials approach for interaction, the modified outcome approach, the local partial-likelihood approach, and STEPP (Subpopulation Treatment Effect Pattern Plots) were applied to data from the SPACE trial, a randomized clinical trial comparing stent-protected angioplasty (CAS) to carotid endarterectomy (CEA) in patients with symptomatic stenosis, with the aim to analyse the interaction between age and treatment. Time from primary intervention to the first relevant event (any stroke or death) was considered as outcome parameter. The analyses suggest a qualitative interaction between patient age and treatment indicating a lower risk after treatment with CAS compared to CEA for younger patients, while for elderly patients a lower risk after CEA was observed. Differences in the statistical methods regarding the observed results, applicability, and interpretation are discussed.

12.
Clinical trials are often designed to compare continuous non‐normal outcomes. The conventional statistical method for such a comparison is a non‐parametric Mann–Whitney test, which provides a P‐value for testing the hypothesis that the distributions of both treatment groups are identical, but does not provide a simple and straightforward estimate of treatment effect. For that, Hodges and Lehmann proposed estimating the shift parameter between two populations and its confidence interval (CI). However, such a shift parameter does not have a straightforward interpretation, and its CI contains zero in some cases when the Mann–Whitney test produces a significant result. To overcome the aforementioned problems, we introduce the use of the win ratio for analysing such data. Patients in the new and control treatment are formed into all possible pairs. For each pair, the new treatment patient is labelled a ‘winner’ or a ‘loser’ if it is known who had the more favourable outcome. The win ratio is the total number of winners divided by the total number of losers. A 95% CI for the win ratio can be obtained using the bootstrap method. Statistical properties of the win ratio statistic are investigated using two real trial data sets and six simulation studies. Results show that the win ratio method has about the same power as the Mann–Whitney method. We recommend the use of the win ratio method for estimating the treatment effect (and CI) and the Mann–Whitney method for calculating the P‐value for comparing continuous non‐normal outcomes when the number of tied pairs is small. Copyright © 2016 John Wiley & Sons, Ltd.
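For two independent samples the win ratio described above is straightforward to compute by brute force over all pairs. A sketch (the bootstrap is a naive percentile version; resamples with zero losers would need special handling):

```python
import numpy as np

def win_ratio(treat, control):
    # All pairwise comparisons; a larger outcome is assumed more favourable.
    t = np.asarray(treat, float)[:, None]
    c = np.asarray(control, float)[None, :]
    wins = int((t > c).sum())
    losses = int((t < c).sum())  # ties count for neither; assumes losses > 0
    return wins / losses

def bootstrap_ci(treat, control, n_boot=2000, seed=1):
    # Naive percentile bootstrap CI for the win ratio.
    rng = np.random.default_rng(seed)
    stats = [win_ratio(rng.choice(treat, len(treat), replace=True),
                       rng.choice(control, len(control), replace=True))
             for _ in range(n_boot)]
    return np.percentile(stats, [2.5, 97.5])
```

For example, `win_ratio([2, 3, 4], [1, 2, 5])` gives 5 wins, 3 losses, and 1 tied pair, hence a win ratio of 5/3.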

13.
We derive inconsistency expressions for dynamic panel data estimators under error cross-sectional dependence generated by an unobserved common factor in both the fixed effect and the incidental trends case. We show that for a temporally dependent factor, the standard within groups (WG) estimator is inconsistent even as both N and T tend to infinity. Next we investigate the properties of the common correlated effects pooled (CCEP) estimator of Pesaran (2006) which eliminates the error cross-sectional dependence using cross-sectional averages of the data. In contrast to the static case, the CCEP estimator is only consistent when next to N also T tends to infinity. It is shown that for the most relevant parameter settings, the inconsistency of the CCEP estimator is larger than that of the infeasible WG estimator, which includes the common factors as regressors. Restricting the CCEP estimator results in a somewhat smaller inconsistency. The small sample properties of the various estimators are analyzed using Monte Carlo experiments. The simulation results suggest that the CCEP estimator can be used to estimate dynamic panel data models provided T is not too small. The size of N is of less importance.

14.
The borrowing of historical control data can be an efficient way to improve the treatment effect estimate of the current control group in a randomized clinical trial. When the historical and current control data are consistent, the borrowing of historical data can increase power and reduce Type I error rate. However, when these 2 sources of data are inconsistent, it may result in a combination of biased estimates, reduced power, and inflation of Type I error rate. In some situations, inconsistency between historical and current control data may be caused by a systematic variation in the measured baseline prognostic factors, which can be appropriately addressed through statistical modeling. In this paper, we propose a Bayesian hierarchical model that can incorporate patient‐level baseline covariates to enhance the appropriateness of the exchangeability assumption between current and historical control data. The performance of the proposed method is shown through simulation studies, and its application to a clinical trial design for amyotrophic lateral sclerosis is described. The proposed method is developed for scenarios involving multiple imbalanced prognostic factors and thus has meaningful implications for clinical trials evaluating new treatments for heterogeneous diseases such as amyotrophic lateral sclerosis.
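The paper's covariate-adjusted hierarchical model is beyond a short snippet, but the basic mechanics of discounted borrowing can be illustrated with a simpler normal-normal power prior (a different, simpler borrowing scheme than the one proposed; all names and numbers are hypothetical):

```python
import numpy as np

def power_prior_posterior(y_cur, y_hist, sigma2, a0):
    # Flat initial prior; the historical likelihood is raised to the power
    # a0 in [0, 1], so a0 = 0 ignores the historical controls and a0 = 1
    # pools them fully with the current controls.
    w_c = len(y_cur) / sigma2
    w_h = a0 * len(y_hist) / sigma2
    mean = (w_c * np.mean(y_cur) + w_h * np.mean(y_hist)) / (w_c + w_h)
    var = 1.0 / (w_c + w_h)
    return mean, var
```

When the two sources conflict, shrinking a0 toward zero trades the variance reduction gained by borrowing against the bias and Type I error inflation the abstract warns about.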

15.
A placebo‐controlled randomized clinical trial is required to demonstrate that an experimental treatment is superior to its corresponding placebo on multiple coprimary endpoints. This is particularly true in the field of neurology. In fact, clinical trials for neurological disorders need to show the superiority of an experimental treatment over a placebo in two coprimary endpoints. Unfortunately, these trials often fail to detect a true treatment effect for the experimental treatment versus the placebo owing to an unexpectedly high placebo response rate. Sequential parallel comparison design (SPCD) can be used to address this problem. However, the SPCD has not yet been discussed in relation to clinical trials with coprimary endpoints. In this article, our aim was to develop a hypothesis‐testing method and a method for calculating the corresponding sample size for the SPCD with two coprimary endpoints. In a simulation, we show that the proposed hypothesis‐testing method achieves the nominal type I error rate and power and that the proposed sample size calculation method has adequate power accuracy. In addition, the usefulness of our methods is confirmed by returning to an SPCD trial with a single primary endpoint of Alzheimer disease‐related agitation.

16.
Many assumptions, including assumptions regarding treatment effects, are made at the design stage of a clinical trial for power and sample size calculations. It is desirable to check these assumptions during the trial by using blinded data. Methods for sample size re‐estimation based on blinded data analyses have been proposed for normal and binary endpoints. However, there is a debate that no reliable estimate of the treatment effect can be obtained in a typical clinical trial situation. In this paper, we consider the case of a survival endpoint and investigate the feasibility of estimating the treatment effect in an ongoing trial without unblinding. We incorporate information of a surrogate endpoint and investigate three estimation procedures, including a classification method and two expectation–maximization (EM) algorithms. Simulations and a clinical trial example are used to assess the performance of the procedures. Our studies show that the expectation–maximization algorithms highly depend on the initial estimates of the model parameters. Despite utilization of a surrogate endpoint, all three methods have large variations in the treatment effect estimates and hence fail to provide a precise conclusion about the treatment effect. Copyright © 2012 John Wiley & Sons, Ltd.

17.
Mixture central polynomial models with qualitative factors are widely applied in many fields of research. In this paper, a method of finding an A-optimal design for a second-degree mixture central polynomial model with qualitative factors is proposed. The variance function is given for obtaining the support points of the design. The A-optimality is confirmed by the equivalence theorem. In addition, this method also works effectively with higher-degree models.

18.
Mixed‐effects models for repeated measures (MMRM) analyses using the Kenward‐Roger method for adjusting standard errors and degrees of freedom in an “unstructured” (UN) covariance structure are increasingly becoming common in primary analyses for group comparisons in longitudinal clinical trials. We evaluate the performance of an MMRM‐UN analysis using the Kenward‐Roger method when the variance of outcome between treatment groups is unequal. In addition, we provide alternative approaches for valid inferences in the MMRM analysis framework. Two simulations are conducted in cases with (1) unequal variance but equal correlation between the treatment groups and (2) unequal variance and unequal correlation between the groups. Our results in the first simulation indicate that MMRM‐UN analysis using the Kenward‐Roger method based on a common covariance matrix for the groups yields notably poor coverage probability (CP) with confidence intervals for the treatment effect when both the variance and the sample size between the groups are disparate. In addition, even when the randomization ratio is 1:1, the CP will fall seriously below the nominal confidence level if a treatment group with a large dropout proportion has a larger variance. Mixed‐effects models for repeated measures analysis with the Mancl and DeRouen covariance estimator shows relatively better performance than the traditional MMRM‐UN analysis method. In the second simulation, the traditional MMRM‐UN analysis leads to bias of the treatment effect and yields notably poor CP. Mixed‐effects models for repeated measures analysis fitting separate UN covariance structures for each group provides an unbiased estimate of the treatment effect and an acceptable CP. We do not recommend MMRM‐UN analysis using the Kenward‐Roger method based on a common covariance matrix for treatment groups, although it is frequently seen in applications, when heteroscedasticity between the groups is apparent in incomplete longitudinal data.

19.
In this paper, we review the adaptive design methodology of Li et al. (Biostatistics 3:277–287) for two‐stage trials with mid‐trial sample size adjustment. We argue that it is closer in principle to a group sequential design, in spite of its obvious adaptive element. Several extensions are proposed that aim to make it an even more attractive and transparent alternative to a standard (fixed sample size) trial for funding bodies to consider. These enable a cap to be put on the maximum sample size and allow the trial data to be analysed using standard methods at its conclusion. The regulatory view of trials incorporating unblinded sample size re‐estimation is also discussed. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons, Ltd.

20.
Intention‐to‐treat (ITT) analysis is widely used to establish efficacy in randomized clinical trials. However, in a long‐term outcomes study where non‐adherence to study drug is substantial, the on‐treatment effect of the study drug may be underestimated using the ITT analysis. The analyses presented herein are from the EVOLVE trial, a double‐blind, placebo‐controlled, event‐driven cardiovascular outcomes study conducted to assess whether a treatment regimen including cinacalcet compared with placebo in addition to other conventional therapies reduces the risk of mortality and major cardiovascular events in patients receiving hemodialysis with secondary hyperparathyroidism. Pre‐specified sensitivity analyses were performed to assess the impact of non‐adherence on the estimated effect of cinacalcet. These analyses included lag‐censoring, inverse probability of censoring weights (IPCW), rank preserving structural failure time model (RPSFTM) and iterative parameter estimation (IPE). The relative hazard (cinacalcet versus placebo) of mortality and major cardiovascular events was 0.93 (95% confidence interval 0.85, 1.02) using the ITT analysis; 0.85 (0.76, 0.95) using lag‐censoring analysis; 0.81 (0.70, 0.92) using IPCW; 0.85 (0.66, 1.04) using RPSFTM and 0.85 (0.75, 0.96) using IPE. These analyses, while not providing definitive evidence, suggest that the intervention may have an effect while subjects are receiving treatment. The ITT method remains the established method to evaluate efficacy of a new treatment; however, additional analyses should be considered to assess the on‐treatment effect when substantial non‐adherence to study drug is expected or observed. Copyright © 2015 John Wiley & Sons, Ltd.
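Of the sensitivity analyses listed, lag-censoring is the simplest to sketch: each subject's follow-up is censored a fixed lag after treatment discontinuation, so events occurring long after drug exposure no longer count against the on-treatment comparison. A hypothetical implementation (variable names invented, not from the EVOLVE analysis code):

```python
import numpy as np

def lag_censor(time, event, disc_time, lag):
    # Censor each subject `lag` time units after treatment discontinuation;
    # subjects who never discontinued (disc_time = NaN) are unaffected.
    time = np.asarray(time, float)
    event = np.asarray(event, bool)
    disc = np.asarray(disc_time, float)
    cutoff = np.where(np.isnan(disc), np.inf, disc + lag)
    return np.minimum(time, cutoff), event & (time <= cutoff)
```

The censored times and event indicators would then be fed into a standard Cox model, which is how a lag-censored hazard ratio such as the 0.85 reported above is typically obtained.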
