Similar Documents

20 similar documents found.
1.
In early drug development, especially when studying new mechanisms of action or new disease areas, little is known about the targeted or anticipated treatment effect or variability estimates. Adaptive designs that allow for early stopping, but also use interim data to adapt the sample size, have been proposed as a practical way of dealing with these uncertainties. Predictive power and conditional power are two commonly mentioned techniques that allow predictions of what will happen at the end of the trial based on the interim data. Decisions about stopping or continuing the trial can then be based on these predictions. However, unless the user of these statistics has a deep understanding of their characteristics, important pitfalls may be encountered, especially with the use of predictive power. The aim of this paper is to highlight these potential pitfalls. It is critical that statisticians understand the fundamental differences between predictive power and conditional power, as they can have dramatic effects on decision making at the interim stage, especially if used to re-evaluate the sample size. The use of predictive power can lead to much larger sample sizes than either conditional power or standard sample size calculations. One crucial difference is that predictive power takes account of all uncertainty, parts of which are ignored by standard sample size calculations and by conditional power. By comparing these statistics we highlight important characteristics of predictive power that experimenters need to be aware of when using this approach.
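The distinction is easy to make concrete. Below is a minimal sketch (with invented trial numbers, a known unit variance, and a flat prior; these are illustrative assumptions, not the authors' setup) contrasting conditional power, which plugs in a single effect value, with predictive power, which averages conditional power over the posterior for the effect:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical two-arm trial, known unit variance, one-sided alpha = 0.025.
alpha = 0.025
n_per_arm = 200                      # planned sample size per arm
n_interim = 100                      # per-arm sample size at the interim look
theta_design = 0.3                   # effect the trial was powered for
theta_hat = 0.15                     # assumed interim estimate of the effect

t = n_interim / n_per_arm            # information fraction
info = n_per_arm / 2.0               # Fisher information of the final estimate
z1 = theta_hat * np.sqrt(t * info)   # interim z-statistic
z_crit = norm.ppf(1 - alpha)

def conditional_power(theta):
    """P(final Z > z_crit | interim data, true effect theta)."""
    drift = theta * (1 - t) * np.sqrt(info)
    return norm.sf((z_crit - np.sqrt(t) * z1 - drift) / np.sqrt(1 - t))

print(f"CP at design effect:    {conditional_power(theta_design):.3f}")
print(f"CP at interim estimate: {conditional_power(theta_hat):.3f}")

# Predictive power: average CP over the posterior for theta. Under a flat
# prior the posterior is N(theta_hat, 1 / (t * info)).
rng = np.random.default_rng(1)
theta_draws = rng.normal(theta_hat, 1 / np.sqrt(t * info), 100_000)
print(f"Predictive power:       {conditional_power(theta_draws).mean():.3f}")
```

Because predictive power folds the remaining uncertainty about the effect into the calculation, it can differ markedly from conditional power evaluated at the same interim data, which is the source of the pitfalls discussed above.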

2.
Econometric Reviews, 2012, 31(1): 1-26
Abstract

This paper proposes a nonparametric procedure for testing conditional quantile independence using projections. Relative to existing smoothed nonparametric tests, the resulting test statistic (i) detects high frequency local alternatives that converge to the null hypothesis in probability at a faster rate, and (ii) yields improvements in finite sample power when a large number of variables are included under the alternative. In addition, it allows the researcher to include qualitative information and, if desired, direct the test against specific subsets of alternatives without imposing any functional form on them. We use the weighted Nadaraya-Watson (WNW) estimator of the conditional quantile function, avoiding boundary problems in estimation and testing, and prove weak uniform consistency (with rate) of the WNW estimator for absolutely regular processes. The procedure is applied to a study of risk spillovers among banks. We show that the methodology generalizes some recently proposed measures of systemic risk, and we use the quantile framework to assess the intensity of risk spillovers among individual financial institutions.
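As a rough illustration of one building block (the plain Nadaraya-Watson conditional CDF rather than the weighted variant or the projection-based statistic, both of which are more involved), a conditional quantile can be estimated by inverting a kernel-smoothed conditional distribution function:

```python
import numpy as np

def nw_conditional_cdf(x0, y_grid, X, Y, h):
    """Plain Nadaraya-Watson estimate of F(y | X = x0) on a grid of y values."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)   # Gaussian kernel weights
    w /= w.sum()
    return np.array([np.sum(w * (Y <= y)) for y in y_grid])

def nw_conditional_quantile(tau, x0, X, Y, h):
    """Invert the estimated conditional CDF at level tau."""
    y_grid = np.sort(Y)
    cdf = nw_conditional_cdf(x0, y_grid, X, Y, h)
    return y_grid[np.searchsorted(cdf, tau)]

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, 1000)
Y = np.sin(X) + rng.normal(0, 0.3, 1000)   # toy data: median of Y|X=x is sin(x)

print(nw_conditional_quantile(0.5, 1.0, X, Y, h=0.2))   # near sin(1) = 0.84
```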

3.
Consider a non-homogeneous Poisson process, N(t), with mean value function Λ(t) and intensity function λ(t). A conditional test of the hypothesis that the process is homogeneous, versus alternatives for which Λ(t) is superadditive, was proposed by Hollander and Proschan (1974). A new test for superadditivity of Λ(t), based on a linear combination of the occurrence times of the process N(t), is suggested in this paper. Though this test has the same Pitman efficiency as the Hollander-Proschan test, Monte Carlo simulation shows that it has more power against many important alternatives. Tables of the exact null distribution of the test statistic are provided.
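The classical member of this family of statistics is the Laplace trend test, the unweighted sum of the occurrence times conditioned on the observed count; whether its weights coincide with the paper's particular linear combination is not claimed here. A small simulation sketch:

```python
import numpy as np
from scipy.stats import norm

def laplace_trend_test(times, T):
    """Conditional on N(T) = n, the occurrence times of a homogeneous Poisson
    process are iid Uniform(0, T), so their standardized sum is ~ N(0, 1).
    Large values point to an increasing intensity, i.e. a superadditive Lambda."""
    n = len(times)
    z = (np.sum(times) - n * T / 2) / np.sqrt(n * T ** 2 / 12)
    return z, norm.sf(z)                      # one-sided p-value

# Simulate a NHPP with intensity lambda(t) = t on [0, 10] by thinning.
rng = np.random.default_rng(0)
T, lam_max = 10.0, 10.0
cand = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))
times = cand[rng.uniform(0, lam_max, cand.size) < cand]  # keep w.p. lambda(t)/lam_max

print(laplace_trend_test(times, T))           # small p-value: homogeneity rejected
```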

4.
Chronic disease processes often feature transient recurrent adverse clinical events. Treatment comparisons in clinical trials of such disorders must be based on valid and efficient methods of analysis. We discuss robust strategies for testing treatment effects with recurrent events using methods based on marginal rate functions, partially conditional rate functions, and marginal failure time models. While all three approaches lead to valid tests of the null hypothesis when robust variance estimates are used, they differ in power. Moreover, some approaches lead to estimators of treatment effect that are more easily interpreted than others. To investigate this, we derive the limiting value of estimators of treatment effect from marginal failure time models and illustrate their dependence on features of the underlying point process, as well as the censoring mechanism. Through simulation, we show that methods based on marginal failure time distributions are sensitive to treatment effects that delay the occurrence of the very first recurrences. Methods based on marginal or partially conditional rate functions perform well in situations where treatment effects persist, or in settings where the aim is to summarize long-term data on efficacy.

5.
The concept of causality is naturally defined in terms of conditional distributions; however, almost all empirical work focuses on causality in mean. This paper proposes a nonparametric statistic to test conditional independence and Granger non-causality between two variables conditionally on a third. The test statistic is based on a comparison of conditional distribution functions using an L2 metric. We use the Nadaraya-Watson method to estimate the conditional distribution functions. We establish the asymptotic size and power properties of the test statistic and motivate the validity of the local bootstrap. A simulation experiment investigates the finite-sample properties of the test, and we illustrate its practical relevance by examining Granger non-causality between S&P 500 Index returns and the VIX volatility index. Contrary to the conventional t-test, which is based on a linear mean regression, we find that the VIX index predicts excess returns at both short and long horizons.
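A toy version of the comparison (plain Nadaraya-Watson estimates rather than the paper's estimator, and no bootstrap calibration of the resulting statistic) contrasts the conditional distribution function given (X, Z) with the one given Z alone:

```python
import numpy as np

def nw_cdf(C, c0, Y, y, h):
    """Nadaraya-Watson estimate of P(Y <= y | C = c0); C has one row per case."""
    w = np.exp(-0.5 * np.sum(((C - c0) / h) ** 2, axis=1))
    return np.sum(w * (Y <= y)) / np.sum(w)

rng = np.random.default_rng(2)
n = 500
z = rng.normal(size=n)
x = rng.normal(size=n)
y = 0.8 * z + 0.5 * x + rng.normal(size=n)    # here X does matter given Z

XZ = np.column_stack([x, z])
Z = z[:, None]
h = 0.5

# L2-type contrast between F(y | X, Z) and F(y | Z), averaged over the sample;
# it should be near zero when Y is conditionally independent of X given Z.
T = np.mean([(nw_cdf(XZ, XZ[i], y, y[i], h) - nw_cdf(Z, Z[i], y, y[i], h)) ** 2
             for i in range(n)])
print(f"L2 contrast: {T:.4f}")
```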

6.
Conditional power calculations are frequently used to guide the decision whether to stop a trial for futility or to modify the planned sample size. They ignore the information in short-term endpoints and baseline covariates, and thereby do not make fully efficient use of the data. We therefore propose an interim decision procedure based on the conditional power approach which exploits the information contained in baseline covariates and short-term endpoints. We realize this by treating the estimation of the treatment effect at the interim analysis as a missing data problem, addressed by employing prediction models for the long-term endpoint which enable the incorporation of baseline covariates and multiple short-term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size, without compromising the Type I error rate, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions relative to standard approaches, whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, so that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re-assessment based on the inverse normal P-value combination method to control the Type I error. We support the proposal by Monte Carlo simulations based on data from a real clinical trial.
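A minimal sketch of the underlying idea (the linear working model and all numbers are illustrative assumptions, not the authors' prediction models): at the interim look, long-term endpoints that are still missing are predicted from short-term endpoints and baseline covariates using a model fitted on completers:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
trt = rng.integers(0, 2, n)                    # randomized arm indicator
x = rng.normal(size=n)                         # baseline covariate
s = 0.5 * trt + 0.8 * x + rng.normal(size=n)   # short-term endpoint
y = 0.4 * trt + 0.9 * s + rng.normal(size=n)   # long-term endpoint
has_y = rng.uniform(size=n) < 0.5              # only half reached full follow-up

# Fit a working model Y ~ 1 + trt + x + s on patients with observed Y ...
Z = np.column_stack([np.ones(n), trt, x, s])
beta, *_ = np.linalg.lstsq(Z[has_y], y[has_y], rcond=None)

# ... and fill in the missing long-term endpoints with model predictions.
y_filled = np.where(has_y, y, Z @ beta)

# Interim treatment-effect estimate using every randomized patient.
effect = y_filled[trt == 1].mean() - y_filled[trt == 0].mean()
print(f"interim effect estimate: {effect:.3f}")
```

The completed data can then be fed into a conditional power calculation of the kind sketched under entry 1 above, which is where the earlier and more reliable interim decisions come from.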

7.
With the advances in human genomic/genetic studies, the clinical trial community increasingly recognizes that phenotypically homogeneous patients may be heterogeneous at the genomic level. Genomic technology offers a possible avenue for developing a (composite) genomic biomarker to identify a genomically responsive patient subset that may have a much higher likelihood of benefiting from a treatment. The randomized controlled trial remains the mainstay for providing scientifically convincing evidence of the effect a new treatment may demonstrate. In conventional clinical trials, the primary clinical hypothesis pertains to the therapeutic effect, as measured by the primary efficacy endpoint, in all patients eligible for the study. This one-size-fits-all aspect of the conventional design has been challenged, particularly when diseases may be heterogeneous due to observable clinical characteristics and/or unobservable underlying genomic characteristics. Extending the conventional single-population design objective to one that encompasses two possible patient populations allows a more informative evaluation of patients with different degrees of responsiveness to medication. Building an additional genomic objective into conventional clinical trials creates an appealing conceptual framework, from the patient's perspective, for addressing personalized medicine in well-controlled clinical trials. Many of the perceived benefits of personalized medicine rest on the notion of being genomically proactive in identifying disease and preventing disease or its recurrence. In this paper, we show that an adaptive design approach can be constructed to study a clinical hypothesis of overall treatment effect and a hypothesis of treatment effect in a genomic subset more efficiently than the conventional non-adaptive approach.

8.
ABSTRACT

A Lagrange multiplier test for testing the parametric structure of a constant conditional correlation-generalized autoregressive conditional heteroskedasticity (CCC-GARCH) model is proposed. The test is based on decomposing the CCC-GARCH model multiplicatively into two components, one of which represents the null model, whereas the other one describes the misspecification. A simulation study shows that the test has good finite sample properties. We compare the test with other tests for misspecification of multivariate GARCH models. The test has high power against alternatives where the misspecification is in the GARCH parameters and is superior to other tests. The test is not greatly affected by misspecification in the conditional correlations and is therefore well suited for considering misspecification of GARCH equations.

9.
We provide a consistent specification test for generalized autoregressive conditional heteroscedastic (GARCH(1,1)) models based on a test statistic of Cramér-von Mises type. Because the limit distribution of the test statistic under the null hypothesis depends on unknown quantities in a complicated manner, we propose a model-based (semiparametric) bootstrap method to approximate the critical values of the test, and we verify its asymptotic validity. Finally, we illustrate the finite-sample behaviour of the test in a simulation study.
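To give a flavour of the ingredients (a toy version with the true parameters plugged in, not the authors' statistic or their semiparametric bootstrap): simulate a GARCH(1,1) series, standardize by the reconstructed conditional volatilities, and measure the Cramér-von Mises distance of the probability integral transforms from uniformity:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_garch11(n, omega, alpha, beta):
    """GARCH(1,1) with standard normal innovations."""
    y = np.empty(n)
    h = omega / (1 - alpha - beta)             # start at the stationary variance
    for t in range(n):
        y[t] = np.sqrt(h) * rng.standard_normal()
        h = omega + alpha * y[t] ** 2 + beta * h
    return y

def cvm_uniform(u):
    """Classical Cramer-von Mises distance of a sample from Uniform(0, 1)."""
    u = np.sort(u)
    n = u.size
    return 1 / (12 * n) + np.sum((u - (2 * np.arange(1, n + 1) - 1) / (2 * n)) ** 2)

omega, alpha, beta = 0.1, 0.1, 0.8
y = simulate_garch11(2000, omega, alpha, beta)

# Reconstruct the conditional variances, standardize, and map the residuals
# through the normal CDF; under a correct model these PITs are Uniform(0, 1).
h = np.empty_like(y)
h[0] = omega / (1 - alpha - beta)
for t in range(1, y.size):
    h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
u = norm.cdf(y / np.sqrt(h))

print(f"CvM statistic: {cvm_uniform(u):.4f}")  # small when the model fits
```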

10.
The European Agency for the Evaluation of Medicinal Products has recently completed the consultation on a draft guidance on how to implement conditional approval. This route of application is available for orphan drugs, emergency situations and serious debilitating or life-threatening diseases. Although there has been limited experience in implementing conditional approval to date, PSI (Statisticians in the Pharmaceutical Industry) sponsored a meeting of pharmaceutical statisticians with an interest in the area to discuss potential issues. This article outlines the issues raised and the resulting discussions, based on the group's interpretation of the legislation. Conditional approval seems to fit well with the accepted regulatory strategy in HIV. In oncology, conditional approval may be most likely when (a) compelling phase II data are available using accepted clinical outcomes (e.g. progression/recurrence-free survival or overall survival) and phase III has been planned or started, or (b) data are available using a surrogate endpoint for clinical outcome (e.g. response rate or biochemical measures) from a single-arm study in rare tumours with high response, compared with historical data. The use of interim analyses in phase III to support conditional approval raises some challenging issues regarding dissemination of information, maintenance of blinding, potential introduction of bias, ethics, switching, etc.

11.
We propose and study properties of maximum likelihood estimators in the class of conditional transformation models. Based on a suitable explicit parameterization of the unconditional or conditional transformation function, we establish a cascade of increasingly complex transformation models that can be estimated, compared and analysed in the maximum likelihood framework. Models for the unconditional or conditional distribution function of any univariate response variable can be set up and estimated in the same theoretical and computational framework simply by choosing an appropriate transformation function and parameterization thereof. The ability to evaluate the distribution function directly allows us to estimate models based on the exact likelihood, especially in the presence of random censoring or truncation. For discrete and continuous responses, we establish the asymptotic normality of the proposed estimators. A reference software implementation of maximum likelihood-based estimation for conditional transformation models that allows the same flexibility as the theory developed here was employed to illustrate the wide range of possible applications.

12.
In any study comparing the survival experience of one or more populations, one must choose not only an appropriate class of tests but also an appropriate weight function. As the optimal choice depends on the true shape of the hazard ratio, one often cannot obtain the best results for a specific dataset. For the univariate case, several methods have been proposed to overcome this problem. Nowadays, however, most datasets of interest contain multivariate observations. In this work we propose a multivariate version of a method based on multiple constrained censored empirical likelihood, where the constraints are formulated as linear functionals of the cumulative hazard functions. By considering the conditional hazards, we take the correlation between the components into account, with the goal of obtaining a test with high power irrespective of the shape of the hazard ratio under the alternative hypothesis.

13.
ABSTRACT

This article considers the problem of testing the equality of the parameters of two exponential distributions having a common known coefficient of variation, under both unconditional and conditional setups. Unconditional tests based on BLUEs and the LRT are considered. Using the Conditionality Principle of Fisher, a UMP conditional test for the one-sided alternative is derived by conditioning on an ancillary statistic. This test is seen to be uniformly more powerful than the unconditional tests over certain ranges of the ancillary. Simulation studies of the power functions of the tests are carried out.

14.
We introduce an omnibus goodness-of-fit test for statistical models for the conditional distribution of a random variable. In particular, this test is useful for assessing whether a regression model, together with all of its assumptions, fits a data set. The test is based on a generalization of the Cramér-von Mises statistic and involves a local polynomial estimator of the conditional distribution function. First, the uniform almost sure consistency of this estimator is established. Then, the asymptotic distribution of the test statistic is derived under the null hypothesis and under contiguous alternatives. The extension to the case where unknown parameters appear in the model is developed. A simulation study shows that the test has good power against some common departures encountered in regression models. Moreover, its power is comparable to that of other nonparametric tests designed to examine only specific departures.

15.
Conventional clinical trial design involves considerations of power, and sample size is typically chosen to achieve a desired power conditional on a specified treatment effect. In practice, there is considerable uncertainty about what the true underlying treatment effect may be, and so power does not give a good indication of the probability that the trial will demonstrate a positive outcome. Assurance is the unconditional probability that the trial will yield a 'positive outcome'. A positive outcome usually means a statistically significant result, according to some standard frequentist significance test. The assurance is then the prior expectation of the power, averaged over the prior distribution for the unknown true treatment effect. We argue that assurance is an important measure of the practical utility of a proposed trial, and indeed that it will often be appropriate to choose the size of the sample (and perhaps other aspects of the design) to achieve a desired assurance, rather than to achieve a desired power conditional on an assumed treatment effect. We extend the theory of assurance to two-sided testing and equivalence trials. We also show that assurance is straightforward to compute in some simple problems of normal, binary and gamma distributed data, and that the method is not restricted to simple conjugate prior distributions for parameters. Several illustrations are given.
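The core computation is straightforward to sketch in the normal-data case (all numbers and the normal prior are invented for illustration): power conditions on a single effect value, while assurance averages that power over the prior:

```python
import numpy as np
from scipy.stats import norm

alpha, n = 0.025, 100           # one-sided level; patients per arm (invented)
se = np.sqrt(2 / n)             # SE of the difference in means, unit-SD outcome
z_crit = norm.ppf(1 - alpha)

def power(delta):
    """Power of the two-arm z-test conditional on a true effect delta."""
    return norm.sf(z_crit - delta / se)

# Conventional power: conditions on one assumed effect.
print(f"power at delta = 0.3: {power(0.3):.3f}")

# Assurance: the prior expectation of the power; here the prior (an
# assumption for illustration) is delta ~ N(0.3, 0.2^2).
deltas = np.random.default_rng(7).normal(0.3, 0.2, 200_000)
print(f"assurance:            {power(deltas).mean():.3f}")
```

Note that as n grows, the assurance approaches the prior probability of a beneficial effect rather than 1, since no sample size can rescue effects the prior deems non-positive.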

16.
Results are given of an empirical power study of three statistical procedures for testing for exponentiality of several independent samples. The test procedures are the Tiku (1974) test, a multi-sample Durbin (1975) test, and a multi-sample Shapiro-Wilk (1972) test. The alternative distributions considered in the study were selected from the gamma, Weibull, Lomax, lognormal, inverse Gaussian, and Burr families of positively skewed distributions. The general behavior of the conditional mean exceedance function is used to classify each alternative distribution. Tiku's test is shown to exhibit generally greater overall power than either of the other two procedures. For certain alternative distributions, the Shapiro-Wilk test is superior when the sample sizes are small.

17.
In this paper, we obtain a generalized moment identity for the case when the distributions of the random variables are not necessarily purely discrete or absolutely continuous. The proposed identity is useful for finding the generator used in the approximation of distributions by Stein's method, and it leads to a new approach to such approximations. We bring the characterization based on the relationship between conditional expectations and the hazard measure into our unified framework. As an application, a new lower bound on the mean-squared error is obtained and compared with the Bayesian Cramér-Rao bound.

18.
This article considers the twin problems of testing for autoregressive conditional heteroscedasticity (ARCH) and generalized ARCH disturbances in the linear regression model. A feature of these testing problems, ignored by the standard Lagrange multiplier test, is that they are one-sided in nature. A test that exploits this one-sided aspect is constructed based on the sum of the scores. The small-sample size and power properties of two versions of this test, under both normal and leptokurtic disturbances, are investigated via a Monte Carlo experiment. The results indicate that both versions of the new test typically have superior power to two versions of the Lagrange multiplier test, and possibly also more accurate asymptotic critical values.
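For orientation, here is a sketch of the standard (two-sided) Lagrange multiplier test for ARCH(q) that the article takes as its baseline; the one-sided sum-of-scores statistic itself is not reproduced here:

```python
import numpy as np
from scipy.stats import chi2

def engle_lm_arch_test(resid, q=1):
    """Standard LM test for ARCH(q): n * R^2 from regressing the squared
    residuals on their first q lags; asymptotically chi-squared with q df."""
    e2 = resid ** 2
    Y = e2[q:]
    X = np.column_stack(
        [np.ones(Y.size)] +
        [e2[q - j - 1: e2.size - j - 1] for j in range(q)]   # lags 1..q
    )
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    r2 = 1 - np.var(Y - X @ beta) / np.var(Y)
    lm = Y.size * r2
    return lm, chi2.sf(lm, df=q)

rng = np.random.default_rng(3)
print(engle_lm_arch_test(rng.standard_normal(1000)))   # no ARCH: large p-value
```

The one-sided refinement exploits the fact that ARCH coefficients cannot be negative, which is exactly the structure the nR-squared statistic ignores.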

19.
The Kolassa method implemented in the nQuery Advisor software has been widely used for approximating the power of the Wilcoxon-Mann-Whitney (WMW) test for ordered categorical data, in which an Edgeworth approximation is used to estimate the power of an unconditional test based on the WMW U statistic. When the sample size is small or when the sizes of the two groups are unequal, Kolassa's method may yield a quite poor approximation to the power of the conditional WMW test that is commonly implemented in statistical packages. Two modifications of Kolassa's formula are proposed and assessed by simulation studies.
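When the accuracy of the Edgeworth approximation is in doubt, the power of the WMW test for ordered categorical data can simply be simulated. A minimal sketch with made-up cell probabilities and the small, unequal group sizes the paper warns about:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(11)
cats = np.arange(5)                            # five ordered categories
p_ctrl = [0.3, 0.3, 0.2, 0.1, 0.1]             # hypothetical cell probabilities
p_trt = [0.1, 0.2, 0.2, 0.2, 0.3]              # shifted towards higher categories
n1, n2 = 25, 15                                # small, unequal groups on purpose
alpha, n_sim = 0.05, 5000

rejections = 0
for _ in range(n_sim):
    ctrl = rng.choice(cats, size=n1, p=p_ctrl)
    trt = rng.choice(cats, size=n2, p=p_trt)
    # The asymptotic method applies a tie correction, which matters here
    # because categorical data produce heavy ties.
    _, p = mannwhitneyu(ctrl, trt, alternative="two-sided", method="asymptotic")
    rejections += p < alpha

print(f"simulated power: {rejections / n_sim:.3f}")
```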

20.
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is small. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than on a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials.
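A minimal sketch of the construction (with a t-test and a WMW test standing in for the pre-specified candidate statistics): compute each candidate's p-value, take the minimum, and calibrate that minimum against its own permutation distribution:

```python
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind

rng = np.random.default_rng(5)

def min_p(x, y):
    """Minimum of the p-values of two candidate tests (t-test and WMW)."""
    return min(ttest_ind(x, y).pvalue, mannwhitneyu(x, y).pvalue)

def min_p_permutation_test(x, y, n_perm=2000):
    """Calibrate the minimum p-value against its permutation distribution."""
    observed = min_p(x, y)
    pooled, n1 = np.concatenate([x, y]), len(x)
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)         # re-randomize the group labels
        hits += min_p(perm[:n1], perm[n1:]) <= observed
    return hits / n_perm

x = rng.normal(0.0, 1.0, 40)
y = rng.normal(0.6, 1.0, 40)
print(f"min-p permutation p-value: {min_p_permutation_test(x, y):.4f}")
```

Calibrating the minimum against its own permutation distribution is what keeps the overall type I error at its designated value despite looking at several statistics.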
