Similar Articles
20 similar articles found (search time: 31 ms)
1.
Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of these graph-based tests. We generalize graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong familywise error rate control. To maintain the familywise error rate, the adaptation rule need not be prespecified in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios, including trials with multiple treatment comparisons, endpoints, or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, at the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. Only if adaptations are actually implemented does an adjusted test need to be applied. The procedure is illustrated with a case study, and its operating characteristics are investigated by simulations. Copyright © 2014 John Wiley & Sons, Ltd.
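The graph-based sequentially rejective algorithm referred to above has a simple mechanical core: each hypothesis holds a local significance level, and whenever one is rejected its level is passed along the graph's outgoing edges and the transition weights are updated. The following is a minimal sketch of that core algorithm only (not the authors' adaptive extension); all names are illustrative.

```python
def graph_test(p, local_alpha, G):
    """Sequentially rejective graph-based multiple test (sketch).

    p           : dict hypothesis -> p-value
    local_alpha : dict hypothesis -> initial local significance level
                  (the levels sum to the overall alpha)
    G           : dict-of-dicts of transition weights; G[j][i] is the
                  fraction of H_j's level passed to H_i when H_j is rejected
    Returns the set of rejected hypotheses.
    """
    a = dict(local_alpha)
    g = {i: dict(G.get(i, {})) for i in p}
    active = set(p)
    rejected = set()
    while True:
        hits = sorted(h for h in active if p[h] <= a[h])
        if not hits:
            return rejected
        j = hits[0]
        rejected.add(j)
        active.discard(j)
        # pass H_j's local level along its outgoing edges
        for i in active:
            a[i] += g[j].get(i, 0.0) * a[j]
        # update transition weights among the remaining hypotheses
        new_g = {}
        for i in active:
            new_g[i] = {}
            denom = 1.0 - g[i].get(j, 0.0) * g[j].get(i, 0.0)
            for k in active - {i}:
                num = g[i].get(k, 0.0) + g[i].get(j, 0.0) * g[j].get(k, 0.0)
                new_g[i][k] = num / denom if denom > 0.0 else 0.0
        g = new_g
```

With two hypotheses, an equal split of alpha, and full transfer along both edges, this reduces to the Holm procedure.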

2.
Statistical approaches for addressing multiplicity in clinical trials range from the very conservative (the Bonferroni method) to the least conservative (the fixed sequence approach). Recently, several authors proposed methods that combine the merits of the two extreme approaches. Wiens [A fixed sequence Bonferroni procedure for testing multiple endpoints. Pharmaceutical Statistics 2003, 2, 211–215], for example, considered an extension of the Bonferroni approach in which the type I error rate (α) is allocated among the endpoints but testing proceeds in a pre-determined order, allowing the type I error rate to be saved for later use as long as the null hypotheses are rejected. This leads to higher power in testing later null hypotheses. In this paper, we consider an extension of Wiens' approach that takes into account correlations among endpoints to achieve higher flexibility in testing. We show strong control of the family-wise type I error rate for this extension, provide critical values and significance levels for testing up to three endpoints with equal correlations, and show how to calculate them for other correlation structures. We also present results of a simulation experiment comparing the power of the proposed method with those of Wiens' and others. The results show that the magnitude of the gain in power of the proposed method depends on the prospective ordering of testing of the endpoints, the magnitude of the treatment effects of the endpoints, and the magnitude of correlation between endpoints. Finally, we consider applications of the proposed method to clinical trials with multiple time points and multiple doses, where correlations among endpoints frequently arise.

3.
In a clinical trial comparing drug with placebo, where there are multiple primary endpoints, we consider testing problems where an efficacious drug effect can be claimed only if statistical significance is demonstrated at the nominal level for all endpoints. Under the assumption that the data are multivariate normal, the multiple endpoint-testing problem is formulated. The usual testing procedure involves testing each endpoint separately at the same significance level using two-sample t-tests, and claiming drug efficacy only if each t-statistic is significant. In this paper we investigate properties of this procedure. We show that it is identical to both an intersection union test and the likelihood ratio test. A simple expression for the p-value is given. The level and power function are studied; it is shown that the test may be conservative and that it is biased. Computable bounds for the power function are established.
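The decision rule in this abstract (claim efficacy only when every endpoint is individually significant) has a compact form: the intersection-union p-value is the maximum of the per-endpoint p-values. A minimal sketch, with illustrative names:

```python
def iut(p_values, alpha=0.05):
    """Intersection-union test for multiple primary endpoints (sketch).

    Efficacy is claimed only if EVERY endpoint is individually
    significant at the unadjusted level alpha; equivalently, the
    overall p-value is the largest of the per-endpoint p-values.
    Returns (overall p-value, reject-or-not).
    """
    p_overall = max(p_values)
    return p_overall, p_overall <= alpha
```

Note that each endpoint is tested at the full, unadjusted level alpha; the conservatism discussed in the abstract arises because all tests must succeed simultaneously.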

4.
We consider a class of closed multiple test procedures indexed by a fixed weight vector. The class includes the Holm weighted step-down procedure, the closed method using the weighted Fisher combination test, and the closed method using the weighted version of Simes’ test. We show how to choose weights to maximize average power, where “average power” is itself weighted by the importance assigned to the various hypotheses. Numerical computations suggest that the optimal weights for the multiple test procedures tend to certain asymptotic configurations. These configurations offer numerical justification for intuitive multiple comparisons methods, such as downweighting variables found insignificant in preliminary studies, giving primary variables more emphasis, gatekeeping test strategies, pre-determined multiple testing sequences, and pre-determined sequences of families of tests. We establish that such methods fall within the envelope of weighted closed testing procedures, thus providing a unified view of fixed sequences, fixed sequences of families, and gatekeepers within the closed testing paradigm. We also establish that the limiting cases control the familywise error rate (FWE), using well-known results about closed tests along with the dominated convergence theorem.
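One member of the weighted class discussed above, the weighted Holm step-down procedure, can be sketched as follows (our own minimal implementation for illustration; function and variable names are not from the paper):

```python
def weighted_holm(p, w, alpha=0.05):
    """Weighted Holm step-down procedure (sketch).

    p, w : dicts mapping hypothesis name -> p-value / positive weight.
    At each step the hypothesis minimizing p_i / w_i among those still
    open is rejected iff p_i <= w_i * alpha / (sum of open weights);
    otherwise testing stops.
    Returns the rejected hypotheses in rejection order.
    """
    open_set = set(p)
    rejected = []
    while open_set:
        total_w = sum(w[i] for i in open_set)
        j = min(open_set, key=lambda i: p[i] / w[i])
        if p[j] <= w[j] * alpha / total_w:
            rejected.append(j)
            open_set.discard(j)
        else:
            break
    return rejected
```

With all weights equal, this is the ordinary Holm procedure; upweighting a primary variable raises its threshold at every step, which is the mechanism behind the weight-choice optimization described in the abstract.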

5.
Partition testing in dose-response studies with multiple endpoints
Dose-response studies with multiple endpoints can be formulated as closed testing or partition testing problems. When the endpoints are primary and secondary, whether the order in which the doses are to be tested is pre-determined or sample-determined leads to different partitionings of the parameter space corresponding to the null hypotheses to be tested. We use the case of two doses and two endpoints to illustrate how to apply the partitioning principle to construct multiple tests that control the appropriate error rate. Graphical representation can be useful in visualizing the decision process.

6.
Multiple binary endpoints often occur in clinical trials and are usually correlated. Many multiple testing adjustment methods have been proposed to control familywise type I error rates. However, most of them disregard the correlation among the endpoints, for example, the commonly used Bonferroni correction, the Bonferroni fixed-sequence (BFS) procedure, and its extension, the alpha-exhaustive fallback (AEF). Extending BFS by taking into account correlations among endpoints, Huque and Alosh proposed a flexible fixed-sequence (FFS) testing method, but the FFS method faces computational difficulty when there are four or more endpoints, and the power of its first hypothesis test does not depend on the correlations among endpoints. To address these issues, Xie proposed a weighted multiple testing correction (WMTC) for correlated continuous endpoints and showed that the proposed method can easily handle hundreds of endpoints using an R package and has higher power for testing the first hypothesis compared with the FFS and AEF methods. Since WMTC depends on the joint distribution of the endpoints, it is not clear whether WMTC retains those advantages when the endpoints are correlated binary variables. In this article, we evaluated the statistical power of the WMTC method for correlated binary endpoints in comparison with the FFS, AEF, prospective alpha allocation scheme (PAAS), and weighted Holm-Bonferroni methods. Furthermore, the WMTC method and the others are illustrated on a real dataset examining the circumstances of homicide in New York City.

7.
A method for controlling the familywise error rate combining the Bonferroni adjustment and fixed testing sequence procedures is proposed. This procedure allots Type I error like the Bonferroni adjustment, but allows the Type I error to accumulate whenever a null hypothesis is rejected. In this manner, power for hypotheses tested later in a prespecified order will be increased. The order of the hypothesis tests needs to be prespecified as in a fixed sequence testing procedure, but unlike the fixed sequence testing procedure all hypotheses can always be tested, allowing for an a priori method of concluding a difference in the various endpoints. An application will be in clinical trials in which mortality is a concern, but it is expected that power to distinguish a difference in mortality will be low. If the effect on mortality is larger than anticipated, this method allows a test with a prespecified method of controlling the Type I error rate. Copyright © 2003 John Wiley & Sons, Ltd.
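The mechanism described, allocating alpha across a prespecified sequence and letting each rejection carry its alpha forward to the next hypothesis, can be sketched in a few lines (an illustrative implementation of the rule as stated in the abstract; names are our own):

```python
def fallback(p_ordered, alpha_alloc):
    """Fixed-sequence Bonferroni ("fallback") procedure, a sketch.

    p_ordered   : p-values in the prespecified testing order
    alpha_alloc : alpha allocated to each hypothesis
                  (allocations sum to the overall alpha)
    Each hypothesis is tested at its own allocation plus any alpha
    carried over from earlier, rejected hypotheses; unlike a pure
    fixed-sequence test, every hypothesis is always tested.
    Returns a list of booleans (rejected or not) in testing order.
    """
    carry = 0.0
    decisions = []
    for p, a in zip(p_ordered, alpha_alloc):
        level = a + carry
        rej = p <= level
        decisions.append(rej)
        carry = level if rej else 0.0  # alpha accumulates only on rejection
    return decisions
```

In the mortality example from the abstract, mortality could receive a small allocation late in the sequence; it is always tested, but at a level inflated by any earlier successes.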

8.
Some multiple comparison procedures are described for multiple-armed studies. The procedures are appropriate for testing all hypotheses for comparing two endpoints and multiple test arms to a single control group, for example, three different fixed doses compared with a placebo. The procedures assume that among the two endpoints, one is designated as a primary endpoint such that, for a given treatment arm, no hypothesis for the secondary endpoint can be rejected unless the hypothesis for the primary endpoint was rejected. The procedures described control the family-wise error rate in the strong sense at a specified level α.

9.
A placebo‐controlled randomized clinical trial is required to demonstrate that an experimental treatment is superior to its corresponding placebo on multiple coprimary endpoints. This is particularly true in the field of neurology. In fact, clinical trials for neurological disorders need to show the superiority of an experimental treatment over a placebo in two coprimary endpoints. Unfortunately, these trials often fail to detect a true treatment effect for the experimental treatment versus the placebo owing to an unexpectedly high placebo response rate. Sequential parallel comparison design (SPCD) can be used to address this problem. However, the SPCD has not yet been discussed in relation to clinical trials with coprimary endpoints. In this article, our aim was to develop a hypothesis‐testing method and a method for calculating the corresponding sample size for the SPCD with two coprimary endpoints. In a simulation, we show that the proposed hypothesis‐testing method achieves the nominal type I error rate and power and that the proposed sample size calculation method has adequate power accuracy. In addition, the usefulness of our methods is confirmed by returning to an SPCD trial with a single primary endpoint of Alzheimer disease‐related agitation.

10.
In this article, we propose a unified sequentially rejective test procedure for testing simultaneously the equality of several independent binomial proportions to a specified standard. The proposed test procedure is general enough to include some well-known multiple testing procedures such as the ordinary Bonferroni, Hochberg, and Rom procedures. It involves multiple tests of significance based on simple binomial tests (exact or approximate) which can easily be found in many standard elementary statistics textbooks. Unlike the traditional chi-square test of the overall hypothesis, the procedure can identify the subset of binomial proportions that differ from the prespecified standard while controlling the familywise type I error rate. Moreover, the power computation of the procedure is provided, and the procedure is illustrated by two real examples from an ecological study and a carcinogenicity study.
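Two of the ingredients named in this abstract, the exact binomial test and the Hochberg step-up adjustment, are standard and easy to sketch together (generic textbook versions, not the paper's unified procedure; names are illustrative):

```python
from math import comb

def binom_pvalue(k, n, p0):
    """Exact one-sided binomial p-value: P(X >= k) for X ~ Bin(n, p0)."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

def hochberg(pvals, alpha=0.05):
    """Hochberg step-up procedure (sketch).

    Scans from the largest p-value down; the first p-value to satisfy
    p <= alpha / rank triggers rejection of it and everything smaller.
    Returns the indices of rejected hypotheses.
    """
    m = len(pvals)
    by_desc = sorted(range(m), key=lambda i: pvals[i], reverse=True)
    for step, i in enumerate(by_desc):  # step 0 compares against alpha/1
        if pvals[i] <= alpha / (step + 1):
            return [j for j in range(m) if pvals[j] <= pvals[i]]
    return []
```

For example, 8 successes out of 10 against a standard of p0 = 0.5 gives an exact p-value of 56/1024 ≈ 0.055; feeding such per-proportion p-values into `hochberg` identifies which proportions differ from the standard with familywise control (under the usual dependence conditions for Hochberg's test).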

11.
Phase II trials in oncology drug development are usually conducted to perform the initial assessment of treatment activity. The common designs in this setting, for example, Simon 2-stage designs, are often developed based on testing whether a parameter of interest, usually a proportion (e.g. response rate), is less than a certain level. These designs usually consider only one parameter. However, sometimes we may encounter situations where we need to consider not a single parameter but multiple parameters. This paper presents a two-stage design in which both primary and secondary endpoints are utilized in the decision rules. The family-wise Type I error rate and statistical power of the proposed design are investigated under a variety of situations by means of Monte Carlo simulations.

12.
The author considers studies with multiple dependent primary endpoints. Testing hypotheses with multiple primary endpoints may require unmanageably large populations. Composite endpoints consisting of several binary events may be used to reduce a trial to a manageable size. The primary difficulties with composite endpoints are that different endpoints may have different clinical importance and that higher‐frequency variables may overwhelm effects of smaller, but equally important, primary outcomes. To compensate for these inconsistencies, we weight each type of event, and the total number of weighted events is counted. To reflect the mutual dependency of primary endpoints and to make the weighting method effective in small clinical trials, we use the Bayesian approach. We assume a multinomial distribution of multiple endpoints with Dirichlet priors and apply the Bayesian test of noninferiority to the calculation of weighting parameters. We use composite endpoints to test hypotheses of superiority in single‐arm and two‐arm clinical trials. The composite endpoints have a beta distribution. We illustrate this technique with an example. The results provide a statistical procedure for creating composite endpoints. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.

13.
We are concerned with the problem of estimating the treatment effects at the effective doses in a dose-finding study. Under monotone dose-response, the effective doses can be identified through the estimation of the minimum effective dose, for which there is an extensive set of statistical tools. In particular, when a fixed-sequence multiple testing procedure is used to estimate the minimum effective dose, Hsu and Berger (1999) show that the confidence lower bounds for the treatment effects can be constructed without the need to adjust for multiplicity. Their method, called the dose-response method, is simple to use, but does not account for the magnitude of the observed treatment effects. As a result, the dose-response method will estimate the treatment effects at effective doses with confidence bounds invariably identical to the hypothesized value. In this paper, we propose an error-splitting method as a variant of the dose-response method to construct confidence bounds at the identified effective doses after a fixed-sequence multiple testing procedure. Our proposed method has the virtue of simplicity as in the dose-response method, preserves the nominal coverage probability, and provides sharper bounds than the dose-response method in most cases.

14.
Multiple-arm dose-response superiority trials are widely studied for continuous and binary endpoints, while non-inferiority designs have been studied recently in two-arm trials. In this paper, a unified asymptotic formulation of sample size calculation for k-arm (k ≥ 1) trials with different endpoints (continuous, binary, and survival) is derived for both superiority and non-inferiority designs. The proposed method covers sample size calculation for single-arm and k-arm (k ≥ 2) designs with survival endpoints, which has not been covered in the statistical literature. A simple, closed form for power and sample size calculations is derived from a contrast test. Application examples are provided. The effect of the contrasts on the power is discussed, and a SAS program for sample size calculation is provided and ready to use.

15.
In the present paper, we introduce a partially sequential sampling procedure to develop a nonparametric method for simultaneous testing. Our work, as in [U. Bandyopadhyay, A. Mukherjee, B. Purkait, Nonparametric partial sequential tests for patterned alternatives in multi-sample problems, Sequential Analysis 26 (4) (2007) 443–466], is motivated by an interesting investigation related to arsenic contamination in ground water. Here we incorporate the idea of multiple hypotheses testing as in [Y. Benjamini, Y. Hochberg, Controlling the false discovery rate: a practical and powerful approach to multiple testing, Journal of the Royal Statistical Society, Series B 57 (1995) 289–300]. We present some Monte Carlo studies related to the proposed procedure. We observe that the proposed sampling design minimizes the expected sample sizes in different situations. The procedure as a whole effectively describes testing under dual pattern alternatives. We briefly indicate some large sample situations. We also present a detailed analysis of data from a geological field survey.

16.
Conditional power calculations are frequently used to guide the decision whether or not to stop a trial for futility or to modify planned sample size. These ignore the information in short‐term endpoints and baseline covariates, and thereby do not make fully efficient use of the information in the data. We therefore propose an interim decision procedure based on the conditional power approach which exploits the information contained in baseline covariates and short‐term endpoints. We will realize this by considering the estimation of the treatment effect at the interim analysis as a missing data problem. This problem is addressed by employing specific prediction models for the long‐term endpoint which enable the incorporation of baseline covariates and multiple short‐term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size, without compromising the Type I error rate of the procedure, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions relative to standard approaches, whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, such that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re‐assessment based on the inverse normal P‐value combination method to control Type I error. We support the proposal by Monte Carlo simulations based on data from a real clinical trial.
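The conditional power quantity underlying such interim decisions has a standard Brownian-motion form: given the interim score statistic, the final z-statistic is normal with a drift over the remaining information. A minimal generic sketch under a one-sided normal testing assumption (background only, not the authors' covariate-adjusted procedure):

```python
from math import sqrt
from statistics import NormalDist

_N = NormalDist()

def conditional_power(z_interim, t, alpha=0.025, theta=None):
    """Conditional power at information fraction t (sketch).

    Given the interim score b = z_interim * sqrt(t), the final
    z-statistic is modeled as N(b + theta * (1 - t), 1 - t), where
    theta is the drift per unit information; theta defaults to the
    "current trend" estimate z_interim / sqrt(t).
    """
    if theta is None:
        theta = z_interim / sqrt(t)  # current-trend assumption
    z_crit = _N.inv_cdf(1.0 - alpha)
    b = z_interim * sqrt(t)
    return 1.0 - _N.cdf((z_crit - b - theta * (1.0 - t)) / sqrt(1.0 - t))
```

A trial trending strongly at the interim (e.g. z = 3 at half information) has conditional power near 1, while a flat trend gives a value near zero, which is the typical trigger for a futility stop.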

17.
In this paper, we propose a design that uses a short‐term endpoint for accelerated approval at interim analysis and a long‐term endpoint for full approval at final analysis with sample size adaptation based on the long‐term endpoint. Two sample size adaptation rules are compared: an adaptation rule to maintain the conditional power at a prespecified level and a step function type adaptation rule to better address the bias issue. Three testing procedures are proposed: alpha splitting between the two endpoints; alpha exhaustive between the endpoints; and alpha exhaustive with improved critical value based on correlation. Family‐wise error rate is proved to be strongly controlled for the two endpoints, sample size adaptation, and two analysis time points with the proposed designs. We show that using alpha exhaustive designs greatly improves the power when both endpoints are effective, and the power difference between the two adaptation rules is minimal. The proposed design can be extended to more general settings. Copyright © 2015 John Wiley & Sons, Ltd.

18.
Confirmatory randomized clinical trials with a stratified design may have ordinal response outcomes, i.e., either ordered categories or continuous determinations that are not compatible with an interval scale. Also, multiple endpoints are often collected when a single endpoint does not represent the overall efficacy of the treatment. In addition, random baseline imbalances and missing values can add another layer of difficulty to the analysis plan. Therefore, an approach that provides a consolidated strategy for all of these issues collectively is essential. For a real case example from a clinical trial comparing a test treatment and a control for pain management in patients with osteoarthritis, which has all the aforementioned issues, multivariate Mann‐Whitney estimators with stratification adjustment are applicable to the strictly ordinal responses with a stratified design. Randomization-based nonparametric analysis of covariance is applied to account for possible baseline imbalances. Several approaches that handle missing values are provided. A global test followed by a closed testing procedure controls the familywise error rate in the strong sense for the analysis of multiple endpoints. Four outcomes indicating joint pain, stiffness, and functional status were analyzed collectively and also individually through these procedures. Treatment efficacy was observed in the combined endpoint as well as in the individual endpoints. The proposed approach is effective in addressing the aforementioned problems simultaneously and is straightforward to implement.

19.
In this paper, we consider nonparametric multiple comparison procedures for unbalanced two-way factorial designs under a pure nonparametric framework. For multiple comparisons of treatments versus a control concerning the main effects or the simple factor effects, the limiting distribution of the associated rank statistics is proven to satisfy the multivariate totally positive of order two condition. Hence, asymptotically the proposed Hochberg procedure strongly controls the familywise type I error rate for the simultaneous testing of the individual hypotheses. In addition, we propose to employ Shaffer's modified version of Holm's stepdown procedure to perform simultaneous tests on all pairwise comparisons regarding the main or simple factor effects and to perform simultaneous tests on all interaction effects. The logical constraints in the corresponding hypothesis families are utilized to sharpen the rejective thresholds and improve the power of the tests.

20.
Multiple endpoints in clinical trials are usually correlated. To control the family-wise type I error rate, both Huque and Alosh's flexible fixed-sequence (FFS) testing method and Li and Mehrotra's adaptive α allocation approach (4A) have taken into account correlations among endpoints. I suggested a weighted multiple testing correction (WMTC) for correlated tests and compared it with FFS. However, the relationship between the 4A method and the FFS method, or between the 4A method and the WMTC method, has not been studied. In this paper, simulations are conducted to investigate these relationships. Tentative guidelines to help choose an appropriate method are provided.
