Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
For traditional clinical trials, inclusion and exclusion criteria are usually based on clinical endpoints; the genetic or genomic variability of the trial participants is not fully utilized in the criteria. After completion of the human genome project, disease targets at the molecular level can be identified and utilized for the treatment of diseases. However, the accuracy of diagnostic devices for identification of such molecular targets is usually not perfect. Some of the patients enrolled in targeted clinical trials with a positive result for the molecular target might not actually have it. As a result, the treatment effect may be underestimated in the patient population that truly has the molecular target. To resolve this issue, under the exponential distribution, we develop inferential procedures for the treatment effects of the targeted drug based on the censored endpoints in the patients who truly have the molecular targets. Under an enrichment design, we propose using the expectation–maximization algorithm in conjunction with the bootstrap technique to incorporate the inaccuracy of the diagnostic device for detection of the molecular targets into the inference of the treatment effects. A simulation study was conducted to empirically investigate the performance of the proposed methods. Simulation results demonstrate that under the exponential distribution, the proposed estimator is nearly unbiased with adequate precision, and the confidence interval provides adequate coverage probability. In addition, the proposed testing procedure adequately controls the size with sufficient power. On the other hand, when the proportional hazards assumption is violated, additional simulation studies show that the type I error rate is not controlled at the nominal level and is an increasing function of the positive predictive value. A numerical example illustrates the proposed procedures. Copyright © 2013 John Wiley & Sons, Ltd.
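The EM-plus-bootstrap idea above can be sketched in a simplified form. Assuming (hypothetically) that marker-positive patients are a two-component exponential mixture with a known positive predictive value, the E-step computes each patient's posterior probability of truly carrying the target and the M-step updates the two hazards from censored data; the bootstrap layer for interval estimation is omitted here, and all numbers are illustrative.

```python
import math
import random

def em_exponential_mixture(times, events, ppv, n_iter=200):
    """EM for a two-component exponential mixture with right censoring.

    Component 1 (weight ppv): patients who truly carry the molecular
    target, hazard lam1; component 0 (weight 1 - ppv): false positives,
    hazard lam0.  Returns (lam1, lam0) estimates.
    """
    # crude initial values on either side of the overall event rate
    rate = sum(events) / sum(times)
    lam1, lam0 = 0.5 * rate, 2.0 * rate
    for _ in range(n_iter):
        # E-step: posterior probability that each patient is a true target,
        # using density^delta * survival^(1-delta) for each component
        w = []
        for t, d in zip(times, events):
            l1 = (lam1 ** d) * math.exp(-lam1 * t)
            l0 = (lam0 ** d) * math.exp(-lam0 * t)
            w.append(ppv * l1 / (ppv * l1 + (1 - ppv) * l0))
        # M-step: weighted exponential MLEs (events over exposure)
        lam1 = sum(wi * d for wi, d in zip(w, events)) / sum(
            wi * t for wi, t in zip(w, times))
        lam0 = sum((1 - wi) * d for wi, d in zip(w, events)) / sum(
            (1 - wi) * t for wi, t in zip(w, times))
    return lam1, lam0

# simulate marker-positive patients: 80% true targets (hazard 0.5),
# 20% false positives (hazard 2.0), independent exponential censoring
random.seed(1)
times, events = [], []
for _ in range(4000):
    lam = 0.5 if random.random() < 0.8 else 2.0
    t, c = random.expovariate(lam), random.expovariate(0.3)
    times.append(min(t, c))
    events.append(1 if t <= c else 0)

lam1, lam0 = em_exponential_mixture(times, events, ppv=0.8)
```

In the paper's setting the mixture weight would come from the diagnostic device's positive predictive value, and bootstrap resampling of the EM estimates would supply standard errors and confidence intervals.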

2.
When a candidate predictive marker is available, but evidence on its predictive ability is not sufficiently reliable, all‐comers trials with marker stratification are frequently conducted. We propose a framework for planning and evaluating prospective testing strategies in confirmatory, phase III marker‐stratified clinical trials based on a natural assumption on heterogeneity of treatment effects across marker‐defined subpopulations, where weak rather than strong control is permitted for multiple population tests. For phase III marker‐stratified trials, it is expected that treatment efficacy is established in a particular patient population, possibly a marker‐defined subpopulation, and that the marker accuracy is assessed when the marker is used to restrict the indication or labelling of the treatment to a marker‐based subpopulation, i.e., assessment of the clinical validity of the marker. In this paper, we develop statistical testing strategies based on criteria explicitly designed for the marker assessment, including those examining treatment effects in marker‐negative patients. As existing and newly developed statistical testing strategies can assert treatment efficacy for either the overall patient population or the marker‐positive subpopulation, we also develop criteria for evaluating the operating characteristics of the statistical testing strategies based on the probabilities of asserting treatment efficacy across marker subpopulations. Numerical evaluations to compare the statistical testing strategies based on the developed criteria are provided.
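One concrete instance of a testing strategy that spans the marker-positive and overall populations is an alpha-passing (fallback) procedure, sketched below. The test order, weight split, and one-sided alpha are illustrative assumptions, not the paper's specific strategy.

```python
from statistics import NormalDist

def fallback_test(z_pos, z_overall, alpha=0.025, w_pos=0.6):
    """Fallback (alpha-passing) procedure for one-sided tests of the
    marker-positive subpopulation and the overall population.

    The marker-positive hypothesis is tested first at w_pos * alpha;
    if rejected, its alpha is carried forward so the overall test is
    run at the full alpha, otherwise at the remaining (1 - w_pos) * alpha.
    Returns a dict of rejection decisions.
    """
    q = NormalDist().inv_cdf
    reject_pos = z_pos > q(1 - w_pos * alpha)
    alpha_overall = alpha if reject_pos else (1 - w_pos) * alpha
    reject_overall = z_overall > q(1 - alpha_overall)
    return {"marker_positive": reject_pos, "overall": reject_overall}

decision = fallback_test(z_pos=2.8, z_overall=2.1)
```

Reversing the test order, or using reallocation weights other than an even carry-forward, gives the other familiar members of this family of strategies.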

3.
We propose an efficient group sequential monitoring rule for clinical trials. At each interim analysis both efficacy and futility are evaluated through a specified loss structure together with the predicted power. The proposed design is robust to a wide range of priors, and achieves the specified power with a saving of sample size compared to existing adaptive designs. A method is also proposed to obtain a reduced-bias estimator of treatment difference for the proposed design. The new approaches hold great potential for efficiently selecting a more effective treatment in comparative trials. Operating characteristics are evaluated and compared with other group sequential designs in empirical studies. An example is provided to illustrate the application of the method.
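The predicted-power idea can be illustrated with the closely related, simpler quantity of conditional power under the current trend, using the standard Brownian-motion approximation for group sequential statistics; the loss-structure and prior-robustness machinery of the proposed design is not reproduced here.

```python
from math import sqrt
from statistics import NormalDist

def conditional_power(z_interim, info_frac, alpha=0.025):
    """Conditional power under the current trend: the probability that
    the final z-statistic exceeds the one-sided critical value, given
    the interim z at information fraction info_frac, assuming the
    drift estimated at interim continues (Brownian-motion approximation)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha)
    drift = z_interim / sqrt(info_frac)  # estimated drift parameter
    return nd.cdf((drift - z_alpha) / sqrt(1 - info_frac))

# halfway through the trial with an interim z of 2.0
cp = conditional_power(z_interim=2.0, info_frac=0.5)
```

A monitoring rule would compare such a quantity (or its Bayesian predictive analogue, which additionally averages over a prior for the drift) against efficacy and futility thresholds at each interim look.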

4.
In single-arm clinical trials with survival outcomes, the Kaplan–Meier estimator and its confidence interval are widely used to assess survival probability and median survival time. Because the asymptotic normality of the Kaplan–Meier estimator is a standard result, sample size calculation methods have not been studied in depth. An existing sample size calculation method is based on the asymptotic normality of the Kaplan–Meier estimator under the log transformation. However, the small-sample properties of the log-transformed estimator are quite poor at small sample sizes (which are typical in single-arm trials), and the existing method uses an inappropriate standard normal approximation to calculate sample sizes. These issues can seriously affect the accuracy of the results. In this paper, we propose alternative methods to determine sample sizes based on a valid standard normal approximation with several transformations that may give an accurate normal approximation even for small sample sizes. In numerical evaluations via simulations, some of the proposed methods provided more accurate results, and the empirical power of the proposed method with the arcsine square-root transformation tended to be closer to the prescribed power than with the other transformations. These results were supported when the methods were applied to data from three clinical trials.
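A minimal sketch of a sample size formula built on the arcsine square-root transformation, assuming no censoring before the landmark time so that the transformed estimate has variance roughly 1/(4n); the paper's exact variance handling may differ.

```python
from math import asin, ceil, sqrt
from statistics import NormalDist

def sample_size_arcsine(s0, s1, alpha=0.025, power=0.80):
    """Sample size for a one-sided, one-sample test of a survival
    probability at a landmark time, H0: S(t) = s0 vs H1: S(t) = s1,
    using the arcsine square-root transformation of the Kaplan-Meier
    estimate.  Simplified sketch: var(asin(sqrt(S_hat))) ~ 1/(4n),
    i.e. censoring before the landmark is ignored."""
    q = NormalDist().inv_cdf
    effect = asin(sqrt(s1)) - asin(sqrt(s0))
    n = (q(1 - alpha) + q(power)) ** 2 / (4 * effect ** 2)
    return ceil(n)

# e.g. null 1-year survival 60%, alternative 80%
n = sample_size_arcsine(s0=0.6, s1=0.8)
```

The variance-stabilizing property of the arcsine transformation is what makes the 1/(4n) approximation usable at the small sample sizes typical of single-arm trials.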

5.
Crossover designs have some advantages over standard clinical trial designs and they are often used in trials evaluating the efficacy of treatments for infertility. However, clinical trials of infertility treatments violate a fundamental condition of crossover designs, because women who become pregnant in the first treatment period are not treated in the second period. In previous research, to deal with this problem, some new designs, such as re‐randomization designs, and analysis methods including the logistic mixture model and the beta‐binomial mixture model were proposed. Although the performance of these designs and methods has previously been evaluated in large‐scale clinical trials with sample sizes of more than 1000 per group, the actual sample sizes of infertility treatment trials are usually around 100 per group. The most appropriate design and analysis for these moderate‐scale clinical trials are currently unclear. In this study, we conducted simulation studies to determine the appropriate design and analysis method of moderate‐scale clinical trials for irreversible endpoints by evaluating the statistical power and bias in the treatment effect estimates. The Mantel–Haenszel method had similar power and bias to the logistic mixture model. The crossover designs had the highest power and the smallest bias. We recommend using a combination of the crossover design and the Mantel–Haenszel method for two‐period, two‐treatment clinical trials with irreversible endpoints. Copyright © 2015 John Wiley & Sons, Ltd.
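The recommended Mantel–Haenszel analysis can be sketched for period-stratified 2x2 tables; the counts below are hypothetical.

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across 2x2 strata.

    Each stratum is (a, b, c, d):
        a = treatment successes, b = treatment failures,
        c = control  successes, d = control  failures.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# hypothetical pregnancy counts, stratified by treatment period
or_mh = mantel_haenszel_or([(10, 5, 4, 11), (6, 4, 3, 7)])
```

Stratifying by period is what lets the estimator respect the irreversible-endpoint structure: women who conceive in period one simply contribute no period-two table.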

6.
A placebo‐controlled randomized clinical trial is required to demonstrate that an experimental treatment is superior to its corresponding placebo on multiple coprimary endpoints. This is particularly true in the field of neurology. In fact, clinical trials for neurological disorders need to show the superiority of an experimental treatment over a placebo in two coprimary endpoints. Unfortunately, these trials often fail to detect a true treatment effect for the experimental treatment versus the placebo owing to an unexpectedly high placebo response rate. Sequential parallel comparison design (SPCD) can be used to address this problem. However, the SPCD has not yet been discussed in relation to clinical trials with coprimary endpoints. In this article, our aim was to develop a hypothesis‐testing method and a method for calculating the corresponding sample size for the SPCD with two coprimary endpoints. In a simulation, we show that the proposed hypothesis‐testing method achieves the nominal type I error rate and power and that the proposed sample size calculation method has adequate power accuracy. In addition, the usefulness of our methods is confirmed by returning to an SPCD trial with a single primary endpoint of Alzheimer disease‐related agitation.
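A sketch of one possible SPCD decision rule for two coprimary endpoints, under the common working assumptions that the two phase statistics are approximately independent and combined with a prespecified weight; the z-values and weight are hypothetical, and the paper's test statistic may differ in detail.

```python
from math import sqrt
from statistics import NormalDist

def spcd_coprimary_success(phase_z, w=0.6, alpha=0.025):
    """Decision rule sketch for an SPCD trial with coprimary endpoints.

    phase_z maps each endpoint name to its (phase-1 z, phase-2 z).  Per
    endpoint, the two phase statistics are combined with weight w and
    treated as approximately independent (a common SPCD working
    assumption).  With coprimary endpoints, success requires every
    combined statistic to exceed the critical value at the same
    one-sided alpha (intersection-union test, no alpha adjustment).
    """
    z_crit = NormalDist().inv_cdf(1 - alpha)
    combined = {
        k: (w * z1 + (1 - w) * z2) / sqrt(w**2 + (1 - w) ** 2)
        for k, (z1, z2) in phase_z.items()
    }
    return all(z > z_crit for z in combined.values()), combined

ok, combined = spcd_coprimary_success(
    {"endpoint_A": (2.1, 1.8), "endpoint_B": (2.4, 1.1)}
)
```

Requiring all endpoints to clear the same threshold is what makes coprimary testing conservative, which is precisely why sample size formulas for this setting need dedicated treatment.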

7.
In vitro permeation tests (IVPT) offer accurate and cost-effective development pathways for locally acting drugs, such as topical dermatological products. For assessment of bioequivalence, the FDA draft guidance on generic acyclovir 5% cream introduces a new experimental design, namely the single-dose, multiple-replicate per treatment group design, as IVPT pivotal study design. We examine the statistical properties of its hypothesis testing method—namely the mixed scaled average bioequivalence (MSABE). Meanwhile, some adaptive design features in clinical trials can help researchers make a decision earlier with fewer subjects or boost power, saving resources, while controlling the impact on family-wise error rate. Therefore, we incorporate MSABE in an adaptive design combining the group sequential design and sample size re-estimation. Simulation studies are conducted to study the passing rates of the proposed methods—both within and outside the average bioequivalence limits. We further consider modifications to the adaptive designs applied for IVPT BE trials, such as Bonferroni's adjustment and conditional power function. Finally, a case study with real data demonstrates the advantages of such adaptive methods.

8.
In Japan, comparative clinical trials with the same pair of test and control drugs are rarely repeated. The simple approach for a complete randomized block design regarding the trials as blocks therefore cannot be applied to strengthening the evidence of the difference between the two drugs. In this paper a method is discussed to recover information from the collected trials in which those two drugs are respectively compared with a common third drug. The method consists of testing homogeneity of trials, forming a combined estimator of the common odds ratio if the homogeneity is verified, and giving the asymptotic variance of the combined estimator. In particular, an approach based on multiple comparisons is taken so as to give a homogeneous subset of trials when the overall homogeneity is not satisfied. Although the paper is motivated by comparative clinical trials, the resulting method can be applied to general incomplete block experiments if the outcomes are binomial variables and the assumption of no treatment-by-block interaction is in question. On the other hand, in applications to clinical trials the other, non-mathematical conditions for a meta-analysis, such as the coincidence of the protocols, should be satisfied, as discussed by Hedges and Olkin (1983).
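The combining step described above can be sketched with inverse-variance pooling of trial-level log odds ratios plus Woolf's homogeneity statistic; this is an assumed concrete choice for illustration, and the paper's estimator and homogeneity test may differ.

```python
from math import log, sqrt

def pool_log_odds_ratios(tables):
    """Inverse-variance pooling of log odds ratios across trials that
    each compare one of two drugs with a common third drug, plus
    Woolf's homogeneity statistic Q (approximately chi-squared with
    k - 1 degrees of freedom under homogeneity)."""
    logors, weights = [], []
    for a, b, c, d in tables:
        logors.append(log(a * d / (b * c)))
        # variance of a log odds ratio: 1/a + 1/b + 1/c + 1/d
        weights.append(1 / (1 / a + 1 / b + 1 / c + 1 / d))
    pooled = sum(w * y for w, y in zip(weights, logors)) / sum(weights)
    se = 1 / sqrt(sum(weights))
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, logors))
    return pooled, se, q

# two hypothetical, perfectly homogeneous trials
pooled, se, q = pool_log_odds_ratios([(10, 10, 10, 10), (10, 10, 10, 10)])
```

When Q is large relative to the chi-squared reference, the paper's multiple-comparisons step would search for a homogeneous subset of trials before pooling.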

9.
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to attribute newly recruited patients to the different treatment arms more efficiently. In this paper, we consider 2‐arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence of changing the prior distributions on the design. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size.
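The classical frequentist analogue of a minimum-variance randomization rate is Neyman allocation; the Bayesian design would update this with posterior quantities as data accrue, but the core calculation looks like:

```python
def optimal_allocation(sigma1, sigma2):
    """Neyman allocation: the randomization rate to arm 1 that minimizes
    the variance of the difference-in-means statistic for a fixed total
    sample size.  Since var = s1^2/n1 + s2^2/n2, the minimum is attained
    with n1/n proportional to sigma1."""
    return sigma1 / (sigma1 + sigma2)

# an arm with twice the outcome standard deviation gets 2/3 of patients
rate = optimal_allocation(sigma1=2.0, sigma2=1.0)
```

In the adaptive version, sigma1 and sigma2 would be replaced by posterior summaries recomputed at each interim look, so the rate drifts toward the arm whose outcomes are noisier.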

10.
The diagnostic odds ratio is defined as the ratio of the odds of a positive diagnostic test result in the diseased population relative to that in the non-diseased population. It is a function of sensitivity and specificity, and can be seen as an indicator of diagnostic accuracy for the evaluation of a biomarker/test. The naïve estimator of the diagnostic odds ratio fails when either sensitivity or specificity is close to one, which makes the denominator of the diagnostic odds ratio zero. We propose several methods to adjust for this situation. Agresti and Coull's adjustment is a common and straightforward approach for extreme binomial proportions. Alternatively, estimation methods based on a more advanced sampling design can be applied, which systematically selects samples from the underlying population based on judgment ranks. Under such a design, the odds can be estimated by a sum of indicator functions, which avoids division by zero and provides a valid estimate. The asymptotic mean and variance of the proposed estimators are derived. All methods are readily applied to confidence interval estimation and hypothesis testing for the diagnostic odds ratio. A simulation study is conducted to compare the efficiency of the proposed methods. Finally, the proposed methods are illustrated using a real dataset.
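A minimal sketch of one adjustment in the spirit described above: adding 0.5 to each cell (the Haldane-Anscombe correction, a close relative of the Agresti-Coull idea the abstract cites) keeps the diagnostic odds ratio finite when sensitivity or specificity equals one. The counts are hypothetical.

```python
from math import sqrt

def adjusted_dor(tp, fn, fp, tn, add=0.5):
    """Diagnostic odds ratio with a continuity adjustment: add 0.5 to
    every cell so the estimate stays finite when sensitivity or
    specificity is exactly 1.  Returns (DOR, SE of log DOR)."""
    a, b, c, d = tp + add, fn + add, fp + add, tn + add
    dor = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return dor, se_log

# perfect sensitivity (fn = 0): the naive DOR would divide by zero
dor, se_log = adjusted_dor(tp=20, fn=0, fp=5, tn=15)
```

The standard error of the log DOR feeds directly into Wald-type confidence intervals and tests; the ranked-set-sampling estimators in the paper avoid the zero cell by construction instead.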

11.
Dynamic treatment strategies are designed to change treatments over time in response to intermediate outcomes. They can be deployed for primary treatment as well as for the introduction of adjuvant treatment or other treatment‐enhancing interventions. When treatment interventions are delayed until needed, more cost‐efficient strategies will result. Sequential multiple assignment randomized (SMAR) trials allow for unbiased estimation of the marginal effects of different sequences of history‐dependent treatment decisions. Because a single SMAR trial enables evaluation of many different dynamic regimes at once, it is naturally thought to require larger sample sizes than the parallel randomized trial. In this paper, we compare power between SMAR trials studying a regime, where treatment boosting enters when triggered by an observed event, versus the parallel design, where a treatment boost is consistently prescribed over the entire study period. In some settings, we found that the dynamic design yields the more efficient trial for the detection of treatment activity. We develop one particular trial to compare a dynamic nursing intervention with telemonitoring for the enhancement of medication adherence in epilepsy patients. To this end, we derive from the SMAR trial data either an average of conditional treatment effects (‘conditional estimator’) or the population‐averaged (‘marginal’) estimator of the dynamic regimes. Analytical sample size calculations for the parallel design and the conditional estimator are compared with simulated results for the population‐averaged estimator. We conclude that in specific settings, well‐chosen SMAR designs may require fewer data for the development of more cost‐efficient treatment strategies than parallel designs. Copyright © 2012 John Wiley & Sons, Ltd.

12.
Some multicenter randomized controlled trials (e.g. for rare diseases or with slow recruitment) involve many centers with few patients in each. Under within-center randomization, some centers might not assign each treatment to at least one patient; hence, such centers have no within-center treatment effect estimates and the center-stratified treatment effect estimate can be inefficient, perhaps to an extent with statistical and ethical implications. Recently, combining complete and incomplete centers with a priori weights has been suggested. However, a concern is whether using the incomplete centers increases bias. To study this concern, an approach with randomization models for a finite population was used to evaluate bias of the usual complete center estimator, the simple center-ignoring estimator, and the weighted estimator combining complete and incomplete centers. The situation with two treatments and many centers, each with either one or two patients, was evaluated. Various patient accrual mechanisms were considered, including one involving selection bias. The usual complete center estimator and the weighted estimator were unbiased under the overall null hypothesis, even with selection bias. An actual dermatology clinical trial motivates and illustrates these methods.

13.
Analyses of randomised trials are often based on regression models which adjust for baseline covariates, in addition to randomised group. Based on such models, one can obtain estimates of the marginal mean outcome for the population under assignment to each treatment, by averaging the model‐based predictions across the empirical distribution of the baseline covariates in the trial. We identify under what conditions such estimates are consistent, and in particular show that for canonical generalised linear models, the resulting estimates are always consistent. We show that a recently proposed variance estimator underestimates the variance of the estimator around the true marginal population mean when the baseline covariates are not fixed in repeated sampling and provide a simple adjustment to remedy this. We also describe an alternative semiparametric estimator, which is consistent even when the outcome regression model used is misspecified. The different estimators are compared through simulations and application to a recently conducted trial in asthma.
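The standardization (g-computation) step described above, sketched for a linear model with one baseline covariate on simulated data: fit the adjusted regression, then average predictions over the trial's empirical covariate distribution with everyone set to each arm in turn. For the identity link the marginal contrast reproduces the treatment coefficient exactly, while for nonlinear links the two generally differ. The data-generating values are hypothetical.

```python
import random

def solve3(m, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with
    partial pivoting."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(3):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][3] / a[i][i] for i in range(3)]

def standardized_means(y, trt, x):
    """Fit y ~ b0 + b1*trt + b2*x by least squares, then average the
    model predictions over the empirical covariate distribution with
    everyone assigned treatment 1 and treatment 0 (g-computation)."""
    rows = [[1.0, a, xi] for a, xi in zip(trt, x)]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    b0, b1, b2 = solve3(xtx, xty)
    n = len(y)
    m1 = sum(b0 + b1 * 1 + b2 * xi for xi in x) / n
    m0 = sum(b0 + b1 * 0 + b2 * xi for xi in x) / n
    return m1, m0, b1

random.seed(7)
x = [random.gauss(0, 1) for _ in range(200)]
trt = [1.0 if random.random() < 0.5 else 0.0 for _ in range(200)]
y = [2.0 + 1.5 * a + 0.8 * xi + random.gauss(0, 1) for a, xi in zip(trt, x)]
m1, m0, b1 = standardized_means(y, trt, x)
```

The variance subtleties the paper addresses (covariates random rather than fixed in repeated sampling) are exactly about how to attach a standard error to m1 - m0, which this sketch does not attempt.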

14.
In randomized clinical trials, a treatment effect on a time-to-event endpoint is often estimated by the Cox proportional hazards model. The maximum partial likelihood estimator does not make sense if the proportional hazards assumption is violated. Xu and O'Quigley (Biostatistics 1:423-439, 2000) proposed an estimating equation, which provides an interpretable estimator for the treatment effect under model misspecification. Namely, it provides a consistent estimator for the log-hazard ratio among the treatment groups if the model is correctly specified, and it is interpreted as an average log-hazard ratio over time even if misspecified. However, the method requires the assumption that censoring is independent of treatment group, which is more restrictive than that for the maximum partial likelihood estimator and is often violated in practice. In this paper, we propose an alternative estimating equation. Our method provides an estimator with the same property as that of Xu and O'Quigley under the usual assumption for maximum partial likelihood estimation. We show that our estimator is consistent and asymptotically normal, and derive a consistent estimator of the asymptotic variance. If the proportional hazards assumption holds, the efficiency of the estimator can be improved by applying the covariate adjustment method based on the semiparametric theory proposed by Lu and Tsiatis (Biometrika 95:679-694, 2008).

15.
Phase II trials in oncology drug development are usually conducted to perform the initial assessment of treatment activity. The common designs in this setting, for example, Simon 2-stage designs, are often developed based on testing whether a parameter of interest, usually a proportion (e.g. response rate), is less than a certain level. These designs usually consider only one parameter. However, we may sometimes encounter situations where we need to consider not a single parameter but multiple parameters. This paper presents a two-stage design in which both primary and secondary endpoints are utilized in the decision rules. The family-wise Type I error rate and statistical power of the proposed design are investigated under a variety of situations by means of Monte Carlo simulations.
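The operating characteristics of a single-endpoint two-stage design can be computed exactly from binomial sums, which is the building block the proposed two-endpoint design extends. The parameters below are the commonly cited Simon optimal design for p0 = 0.10 versus p1 = 0.30; treat them as illustrative.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def reject_prob(p, n1, r1, n, r):
    """Probability that a Simon-style two-stage design rejects H0 when
    the true response rate is p: continue past stage 1 only if the
    responses x1 among n1 patients exceed r1, then reject H0 at the end
    if total responses among all n patients exceed r."""
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        p1 = binom_pmf(x1, n1, p)
        p2 = sum(binom_pmf(x2, n - n1, p)
                 for x2 in range(max(0, r + 1 - x1), n - n1 + 1))
        total += p1 * p2
    return total

# r1/n1 = 1/10, r/n = 5/29 for p0 = 0.10 vs p1 = 0.30
alpha = reject_prob(0.10, n1=10, r1=1, n=29, r=5)
power = reject_prob(0.30, n1=10, r1=1, n=29, r=5)
```

The two-endpoint design in the paper replaces the single binomial sum with joint probabilities over both endpoints, which is why its error rates are studied by Monte Carlo rather than closed-form enumeration.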

16.
In assessing biosimilarity between two products, the question to ask is always “How similar is similar?” Traditionally, the equivalence of the means between products is the primary consideration in a clinical trial. This study suggests an alternative assessment for testing a certain percentage of the population of differences lying within a prespecified interval. In doing so, the accuracy and precision are assessed simultaneously by judging whether a two-sided tolerance interval falls within a prespecified acceptance range. We further derive an asymptotic distribution of the tolerance limits to determine the sample size for achieving a targeted level of power. Our numerical study shows that the proposed two-sided tolerance interval test controls the type I error rate and provides sufficient power. A real example is presented to illustrate our proposed approach.
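A sketch of the tolerance-interval test described above, using Howe's approximation to the two-sided normal tolerance factor and a Wilson-Hilferty approximation to the chi-squared quantile so that everything runs on the standard library; the paper derives the limits' asymptotic distribution rather than relying on these textbook approximations. The differences data are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-squared p-quantile."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * sqrt(2 / (9 * df))) ** 3

def tolerance_factor(n, coverage=0.90, conf=0.95):
    """Howe's approximation to the two-sided normal tolerance factor k:
    xbar +/- k*s contains at least `coverage` of the population with
    confidence `conf`."""
    z = NormalDist().inv_cdf((1 + coverage) / 2)
    df = n - 1
    return z * sqrt(df * (1 + 1 / n) / chi2_quantile(1 - conf, df))

def similar(diffs, acceptance, coverage=0.90, conf=0.95):
    """Declare products similar if the two-sided tolerance interval for
    the differences lies entirely inside the acceptance range."""
    n = len(diffs)
    mean = sum(diffs) / n
    s = sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    k = tolerance_factor(n, coverage, conf)
    return acceptance[0] <= mean - k * s and mean + k * s <= acceptance[1]

k30 = tolerance_factor(30)  # about 2.14 for 90% coverage, 95% confidence
diffs = [0.01 * i - 0.05 for i in range(11)]  # hypothetical paired differences
ok = similar(diffs, acceptance=(-0.2, 0.2))
```

Because the criterion bounds a percentage of the population of differences rather than just the mean, it penalizes excess variability as well as bias, which is the point of the proposed assessment.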

17.
Progression‐free survival is recognized as an important endpoint in oncology clinical trials. In clinical trials aimed at new drug development, the target population often comprises patients that are refractory to standard therapy with a tumor that shows rapid progression. This situation would increase the bias of the hazard ratio calculated for progression‐free survival, resulting in decreased power for such patients. Therefore, new measures are needed to prevent decreasing the power in advance when estimating the sample size. Here, I propose a novel calculation procedure to assume the hazard ratio for progression‐free survival using the Cox proportional hazards model, which can be applied in sample size calculation. The hazard ratios derived by the proposed procedure were almost identical to those obtained by simulation. The hazard ratio calculated by the proposed procedure is applicable to sample size calculation and coincides with the nominal power. Methods that compensate for the lack of power due to biases in the hazard ratio are also discussed from a practical point of view.

18.
An alternative to the conventional sample quantile is proposed as a nonparametric estimator of a continuous population quantile. The alternative estimator is a "generalized sample quantile" obtained by averaging an appropriate subsample quantile over all subsamples of a fixed size. Since the resulting statistic is a U-statistic with representation also as a linear combination of order statistics, known results are then employed to establish asymptotic normality. The alternative estimator is shown to be asymptotically efficient in the class of nonparametric models specified by Pfanzagl (1975). Analytic results and Monte Carlo studies with a moderate sample size for a variety of distributions indicate that the proposed estimator usually provides mean square error of estimation less than that of the conventional sample quantile.
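The generalized sample quantile can be computed by brute force on small samples, averaging a subsample quantile (here the median) over all size-k subsamples; for large n one would use the equivalent linear-combination-of-order-statistics form instead of enumerating subsets.

```python
from itertools import combinations
from statistics import median

def generalized_quantile(data, k, quantile_fn=median):
    """Generalized sample quantile: the chosen subsample quantile
    averaged over all size-k subsamples of the data (a U-statistic)."""
    subs = list(combinations(data, k))
    return sum(quantile_fn(s) for s in subs) / len(subs)

# average of the medians of all 10 size-3 subsamples of {1,...,5}
g = generalized_quantile([1, 2, 3, 4, 5], k=3)
```

Because every observation enters through many subsamples, the estimator smooths the jumpiness of the ordinary sample quantile, which is the intuition behind its smaller mean square error.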

19.
We introduce a new two-sample inference procedure to assess the relative performance of two groups over time. Our model-free method does not assume proportional hazards, making it suitable for scenarios where nonproportional hazards may exist. Our procedure includes a diagnostic tau plot to identify changes in hazard timing and a formal inference procedure. The tau-based measures we develop are clinically meaningful and provide interpretable estimands to summarize the treatment effect over time. Our proposed statistic is a U-statistic and exhibits a martingale structure, allowing us to construct confidence intervals and perform hypothesis testing. Our approach is robust with respect to the censoring distribution. We also demonstrate how our method can be applied for sensitivity analysis in scenarios with missing tail information due to insufficient follow-up. Without censoring, the Kendall's tau estimator we propose reduces to the Wilcoxon–Mann–Whitney statistic. We evaluate our method using simulations to compare its performance with the restricted mean survival time and log-rank statistics. We also apply our approach to data from several published oncology clinical trials where nonproportional hazards may exist.
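The claimed reduction without censoring is easy to verify: for uncensored two-sample data, a tau-type measure between the group indicator and the outcome is a rescaling of the Wilcoxon-Mann-Whitney statistic. A minimal sketch, with ties counted half:

```python
def mann_whitney_tau(x, y):
    """Two-sample Wilcoxon-Mann-Whitney statistic U and the tau-style
    effect measure it induces in the absence of censoring:
    tau = 2*P(X < Y) - 1, estimated over all pairs (ties count 1/2)."""
    u = sum((xi < yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    p_win = u / (len(x) * len(y))
    return u, 2 * p_win - 1

# complete separation between groups gives tau = 1
u, tau = mann_whitney_tau([1, 2, 3], [4, 5, 6])
```

The paper's contribution is to extend this pairwise-comparison estimand to censored data while retaining the U-statistic and martingale structure needed for inference.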

20.
Two‐stage designs are widely used to determine whether a clinical trial should be terminated early. In such trials, a maximum likelihood estimate is often adopted to describe the difference in efficacy between the experimental and reference treatments; however, this method is known to display conditional bias. To reduce such bias, a conditional mean‐adjusted estimator (CMAE) has been proposed, although the remaining bias may be nonnegligible when a trial is stopped for efficacy at the interim analysis. We propose a new estimator for adjusting the conditional bias of the treatment effect by extending the idea of the CMAE. This estimator is calculated by weighting the maximum likelihood estimate obtained at the interim analysis and the effect size prespecified when calculating the sample size. We evaluate the performance of the proposed estimator through analytical and simulation studies in various settings in which a trial is stopped for efficacy or futility at the interim analysis. We find that the conditional bias of the proposed estimator is smaller than that of the CMAE when the information time at the interim analysis is small. In addition, the mean‐squared error of the proposed estimator is also smaller than that of the CMAE. In conclusion, we recommend the use of the proposed estimator for trials that are terminated early for efficacy or futility.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号