Similar Articles
Found 20 similar articles.
1.
One characterization of group sequential methods uses alpha spending functions to allocate the false positive rate throughout a study. We consider and evaluate several such spending functions as well as the time points of the interim analyses at which they apply. In addition, we evaluate the double triangular test as an alternative procedure that allows for early termination of the trial not only due to efficacy differences between treatments, but also due to lack of such differences. We motivate and illustrate our work by reference to the analysis of survival data from a proposed oncology study. Such group sequential procedures with one or two interim analyses are only slightly less powerful than fixed sample trials, but provide for the strong possibility of early stopping. Therefore, in all situations where they can practically be applied, we recommend their routine use in clinical trials. The double triangular test provides a suitable alternative to the group sequential procedures, which do not provide for early stopping with acceptance of the null hypothesis. Again, there is only a modest loss in power relative to fixed sample tests. Copyright © 2004 John Wiley & Sons, Ltd.
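As a concrete illustration of the spending-function idea, the sketch below computes how much type I error two standard Lan-DeMets families (O'Brien-Fleming-type and Pocock-type) release at interim looks placed at one third and two thirds of the information. It is a generic sketch, not the specific design evaluated in the paper.

```python
# Sketch of two standard alpha spending functions (Lan-DeMets forms).
# Illustrative only; not the particular oncology design discussed above.
from math import e, log
from scipy.stats import norm

def obrien_fleming_spend(t, alpha=0.05):
    """O'Brien-Fleming-type spending: very little alpha spent early."""
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / t**0.5))

def pocock_spend(t, alpha=0.05):
    """Pocock-type spending: alpha released roughly evenly over time."""
    return alpha * log(1.0 + (e - 1.0) * t)

# Incremental alpha available at two interim looks and the final analysis.
looks = [1/3, 2/3, 1.0]
for spend in (obrien_fleming_spend, pocock_spend):
    cumulative = [spend(t) for t in looks]
    increments = [cumulative[0]] + [b - a for a, b in zip(cumulative, cumulative[1:])]
    print(spend.__name__, [round(x, 4) for x in increments])
```

Both functions spend exactly alpha at t = 1; they differ only in how aggressively error is released at the interim looks.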

2.
The problem of simple linear calibration is not new and dates back to the late 1930s. In 1982 Brown presented a number of important results for the multivariate case. In this paper we extend Brown's work to cover the situation where one is interested in calibrating for an unknown q-vector X on the basis of an observed p-vector Y given that k ≥ 1 components of X are fixed in advance.

An outline of the theoretical development in the multivariate normal case is given and the procedure is illustrated by application to previously published data.
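For orientation, the sketch below shows the classical point estimate in the univariate (p = q = 1) special case the abstract opens with: regress Y on X over the calibration standards, then invert the fitted line at a new response. Brown's multivariate machinery and the fixed-components extension are not reproduced.

```python
# Classical (inverse) estimator for simple linear calibration: fit Y on X
# from calibration data, then invert the fit at a new observed y0.
# Univariate sketch only; the paper's multivariate extension is not shown.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 25)                    # known standards
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.2, x.size)  # measured responses

b1, b0 = np.polyfit(x, y, 1)       # fitted slope and intercept
y0 = 5.1                           # new response with unknown x
x_hat = (y0 - b0) / b1             # classical calibration estimate
print(f"estimated x: {x_hat:.3f}")
```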

3.
In reliability and life-testing experiments, the researcher is often interested in the effects of extreme or varying stress factors such as temperature, voltage and load on the lifetimes of experimental units. The step-stress test, a special class of accelerated life-test, allows the experimenter to increase the stress levels at fixed times during the experiment in order to obtain information on the parameters of the life distributions more quickly than under normal operating conditions. In this paper, we consider the simple step-stress model under the exponential distribution when the available data are Type-I hybrid censored. We derive the maximum likelihood estimators (MLEs) of the parameters assuming a cumulative exposure model with lifetimes being exponentially distributed. The exact distributions of the MLEs of the parameters are obtained through the use of conditional moment generating functions. We also derive confidence intervals for the parameters using these exact distributions, asymptotic distributions of the MLEs and the parametric bootstrap methods, and assess their performance through a Monte Carlo simulation study. Finally, we present two examples to illustrate all the methods of inference discussed here.
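A minimal sketch of the cumulative exposure MLEs for the simple (two-level) step-stress exponential model with complete data; the Type-I hybrid censoring and the exact conditional distributions treated in the paper are deliberately omitted.

```python
# MLEs for the simple step-stress exponential cumulative exposure model with
# COMPLETE data (no censoring). tau is the stress-change time.
import numpy as np

def step_stress_mle(t, tau):
    t = np.asarray(t)
    early, late = t[t <= tau], t[t > tau]
    n1, n2 = early.size, late.size
    if n1 == 0 or n2 == 0:
        raise ValueError("MLEs need at least one failure at each stress level")
    theta1 = (early.sum() + n2 * tau) / n1   # mean life at stress level 1
    theta2 = (late - tau).sum() / n2         # mean life at stress level 2
    return theta1, theta2

# Simulated illustration: theta1 = 8, theta2 = 2, stress change at tau = 5.
rng = np.random.default_rng(7)
tau, n = 5.0, 200
u = rng.exponential(8.0, n)                             # lifetime at stress 1
t = np.where(u <= tau, u, tau + (u - tau) * 2.0 / 8.0)  # cumulative exposure
print(step_stress_mle(t, tau))
```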

4.
Multiple Window Discrete Scan Statistics
In this article, multiple scan statistics of variable window sizes are derived for independent and identically distributed 0-1 Bernoulli trials. Both one- and two-dimensional, as well as conditional and unconditional, cases are treated. The advantage in using multiple scan statistics, as opposed to single fixed-window scan statistics, is that they are more sensitive in detecting a change in the underlying distribution of the observed data. We show how to derive simple approximations for the significance level of these testing procedures and present numerical results to evaluate their performance.
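A Monte Carlo sketch of the idea for the one-dimensional unconditional case: scan a Bernoulli sequence with several window sizes and calibrate a per-window threshold from simulated null data. The window sizes and the calibration are illustrative, not the paper's analytical approximations.

```python
# Monte Carlo sketch of a multiple-window scan statistic for i.i.d.
# Bernoulli(p0) trials: reject when any window is unusually full.
import numpy as np

def scan_count(x, w):
    """Maximum number of successes in any window of length w."""
    return np.convolve(x, np.ones(w, dtype=int), mode="valid").max()

rng = np.random.default_rng(0)
n, p0, windows, reps = 500, 0.05, (10, 25, 50), 2000
# Null distribution of the vector of scan statistics, one per window size.
null = np.array([[scan_count(rng.random(n) < p0, w) for w in windows]
                 for _ in range(reps)])
# Per-window 99th percentile thresholds (a Bonferroni-style calibration).
thresholds = np.percentile(null, 99, axis=0)
print(dict(zip(windows, thresholds)))
```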

5.
The re-randomization test has been considered as a robust alternative to the traditional population model-based methods for analyzing randomized clinical trials. This is especially so when the clinical trials are randomized according to minimization, which is a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among various re-randomization tests, the fixed-entry-order re-randomization test is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed-entry-order re-randomization test is biased and thus compromised in power. We find that the bias is due to non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated, including the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.
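For reference, a fixed-entry-order re-randomization test in its simplest form, assuming plain 1:1 randomization rather than minimization with unequal allocation, so neither the bias discussed in the abstract nor the proposed weighting appears here:

```python
# Sketch of a fixed-entry-order re-randomization test under simple 1:1
# randomization; the paper's minimization/unequal-allocation setting and
# its weighted correction are not reproduced.
import numpy as np

def rerandomization_pvalue(y, assignment, n_rerand=5000, seed=0):
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    obs = y[assignment == 1].mean() - y[assignment == 0].mean()
    count = 0
    for _ in range(n_rerand):
        a = rng.integers(0, 2, y.size)  # re-allocate in the original entry order
        if a.min() == a.max():
            continue                    # degenerate split; skip
        diff = y[a == 1].mean() - y[a == 0].mean()
        count += abs(diff) >= abs(obs)
    return count / n_rerand

rng = np.random.default_rng(1)
assignment = rng.integers(0, 2, 60)
y = 0.4 * assignment + rng.normal(size=60)  # modest treatment effect
print(rerandomization_pvalue(y, assignment))
```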

6.
In this paper, we propose a new test for the equality of mean vectors between two groups with the same number of observations in high-dimensional data. Existing tests for this problem require a strong condition on the population covariance matrix; the test proposed here does not. The test is obtained under a general model, that is, the data need not be normally distributed.

7.
In this paper, we propose robust randomized quantile regression estimators for the mean and (conditional) variance functions of the popular heteroskedastic nonparametric regression model. Unlike classical approaches, which treat the quantile level as a fixed quantity, our method treats it as a uniformly distributed random variable. Our proposed method can be employed to estimate the error distribution, which could significantly improve prediction results. An automatic bandwidth selection scheme is also discussed. Asymptotic properties and relative efficiencies of the proposed estimators are investigated. Our empirical results show that the proposed estimators work well even for random errors with infinite variances. Various numerical simulations and two real data examples are used to demonstrate our methodologies.
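The lever behind treating the quantile level as random is the identity E[Y|x] = ∫₀¹ Q(τ|x) dτ: averaging quantile-regression fits over τ ~ Uniform(0,1) recovers the mean function. A linear sketch with statsmodels follows; the paper's estimator is nonparametric with automatic bandwidth selection, which is not shown.

```python
# Randomized-quantile idea in a linear setting: average QuantReg fits over
# uniformly drawn tau to estimate the conditional mean.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(0, 2, n)
y = 1.0 + 2.0 * x + (0.5 + 0.5 * x) * rng.standard_normal(n)  # heteroskedastic
X = sm.add_constant(x)

taus = rng.uniform(0.05, 0.95, 50)  # tau random; trimmed for numerical stability
fits = np.array([sm.QuantReg(y, X).fit(q=t).predict(X) for t in taus])
mean_hat = fits.mean(axis=0)        # Monte Carlo average over tau
print(float(np.corrcoef(mean_hat, 1.0 + 2.0 * x)[0, 1]))
```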

8.
This paper presents a new measure of association. It is applicable to polytomies of either categorical or numerical type. It has the desirable property of being 0 if and only if the polytomies are independent. Its properties are studied and compared to those of existing measures. An interpretation of it is given. One situation where it is particularly useful is in measuring the ability to predict one polytomy given knowledge of the other. An example is given where the proposed measure is more relevant in describing the degree of association between two polytomies than are any of the existing measures. The corresponding sample quantity is presented and its asymptotic properties are studied. A discussion of its use in inference is given. The test for independence based on this measure is contrasted with the chi-square test.

9.
For the problem of discriminating between two simple hypotheses concerning a Koopman-Darmois parameter, a modification of the partial sequential probability ratio test is proposed where, instead of drawing only one fixed sample, two fixed samples are drawn and then Wald's SPRT is started. The OC and the ASN functions are derived. Numerical comparisons are made with Wald's and Read's procedures for testing the normal mean with known variance. For some parameter values, the test procedure has a lower ASN than that of Read's procedure.

10.
The issues and dangers involved in testing multiple hypotheses are well recognised within the pharmaceutical industry. In reporting clinical trials, strenuous efforts are made to avoid inflation of the type I error, with procedures such as the Bonferroni adjustment and its many elaborations and refinements being widely employed. Typically, such methods are conservative. They tend to be accurate if the multiple test statistics involved are mutually independent, and achieve less than the specified type I error rate if these statistics are positively correlated. An alternative approach is to estimate the correlations between the test statistics and to perform a test that is conditional on those estimates being the true correlations. In this paper, we begin by assuming that test statistics are normally distributed and that their correlations are known. Under these circumstances, we explore several approaches to multiple testing, adapt them so that type I error is preserved exactly, and then compare their powers over a range of true parameter values. For simplicity, the explorations are confined to the bivariate case. Having described the relative strengths and weaknesses of the approaches under study, we use simulation to assess the accuracy of the approximate theory developed when the correlations are estimated from the study data rather than being known in advance, and when data are binary so that test statistics are only approximately normally distributed.
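In the bivariate normal case with known correlation, preserving type I error exactly amounts to solving for a common critical value c with P(|Z1| ≤ c, |Z2| ≤ c) = 1 - α. A sketch under those assumptions, compared against the conservative Bonferroni cutoff:

```python
# Correlation-aware bivariate critical value vs. Bonferroni, assuming two
# standard normal test statistics with known correlation rho.
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.optimize import brentq

def rectangle_prob(c, rho):
    """P(|Z1| <= c, |Z2| <= c) for bivariate normal with correlation rho."""
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    F = lambda a, b: mvn.cdf([a, b])
    return F(c, c) - F(c, -c) - F(-c, c) + F(-c, -c)

alpha, rho = 0.05, 0.6
c_exact = brentq(lambda c: rectangle_prob(c, rho) - (1.0 - alpha),
                 1.0, 4.0, xtol=1e-6)
c_bonf = norm.ppf(1.0 - alpha / 4.0)  # Bonferroni: alpha/2 per two-sided test
print(f"exact c = {c_exact:.4f}, Bonferroni c = {c_bonf:.4f}")
```

With positive correlation, the exact cutoff sits below the Bonferroni one, which is the power gain the abstract describes.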

11.
For the balanced two-way layout of a count response variable Y classified by fixed or random factors A and B, we address the problems of (i) testing for individual and interactive effects on Y of two fixed factors, and (ii) testing for the effect of a fixed factor in the presence of a random factor and conversely. In case (i), we assume independent Poisson responses with µij = E(Y | A=i, B=j) = αiβjγij or αiβj, corresponding respectively to the multiplicative interactive and non-interactive cases. For case (ii) with factor A random, we derive a multivariate gamma-Poisson model by mixing on the random variable associated with each level of A. In each case Neyman C(α) score tests are derived. We present simulation results, and apply the interaction test to a data set, to evaluate and compare the size and power of the score test for interaction between two fixed factors, the competing Poisson-based likelihood ratio test, and the F-tests based on the assumptions that √(Y+1) or log(Y+1) are approximately normal. Our results provide strong evidence that the normal-theory based F-tests typically are very far from nominal size, and that the likelihood ratio test is somewhat more liberal than the score test.

12.
Sequential analyses in clinical trials have ethical and economic advantages over fixed sample size methods. The sequential probability ratio test (SPRT) is a hypothesis testing procedure which evaluates data as it is collected. The original SPRT was developed by Wald for one-parameter families of distributions and later extended by Bartlett to handle the case of nuisance parameters. However, Bartlett's SPRT requires independent and identically distributed observations. In this paper we show that Bartlett's SPRT can be applied to generalized linear model (GLM) contexts. Then we propose an SPRT analysis methodology for a Poisson generalized linear mixed model (GLMM) that is suitable for our application to the design of a multicenter randomized clinical trial that compares two preventive treatments for surgical site infections. We validate the methodology with a simulation study that includes a comparison to Neyman–Pearson and Bayesian fixed sample size test designs and the Wald SPRT.
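For context, Wald's original SPRT in its textbook Bernoulli form, accumulating a log-likelihood ratio until it crosses one of two boundaries; the paper's GLMM-based extension for clustered Poisson outcomes is considerably more involved and is not sketched here.

```python
# Minimal Wald SPRT for a Bernoulli success probability,
# H0: p = p0 vs H1: p = p1, with Wald's approximate boundaries.
import math
import random

def wald_sprt(stream, p0, p1, alpha=0.05, beta=0.10):
    low, high = math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)
    llr, n = 0.0, 0
    for x in stream:  # evaluate data as collected
        n += 1
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= high:
            return "reject H0", n
        if llr <= low:
            return "accept H0", n
    return "no decision", n

random.seed(3)
data = (random.random() < 0.35 for _ in range(10_000))  # true p = 0.35
print(wald_sprt(data, p0=0.20, p1=0.35))
```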

13.
The typical approach in change-point theory is to perform the statistical analysis based on a sample of fixed size. Alternatively, one observes some random phenomenon sequentially and takes action as soon as one observes some statistically significant deviation from the "normal" behaviour. Based on the perhaps more realistic situation that the process can only be partially observed, we consider the counting process related to the original process observed at equidistant time points, after which action is taken or not depending on the number of observations between those time points. In order for the procedure to stop also when everything is in order, we introduce a fixed time horizon n at which we stop, declaring "no change", if the observed data did not suggest any action until then. We propose some stopping rules and consider their asymptotics under the null hypothesis as well as under alternatives. The proofs are based mainly on strong invariance principles for renewal processes and extreme value asymptotics for Gaussian processes.
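A toy version of the setup, with illustrative in-control rate and threshold: only the event counts between equidistant inspection times are observed, and monitoring stops either at the first extreme standardized count or at the fixed horizon with "no change".

```python
# Sketch of a stopping rule on a partially observed process: counts between
# equidistant inspection times are compared with their in-control mean.
# Rate, threshold and break point are illustrative choices.
import numpy as np

def monitor(counts, rate, dt, c=3.0):
    mu, sd = rate * dt, (rate * dt) ** 0.5
    for k, N in enumerate(counts, start=1):
        if abs(N - mu) / sd > c:
            return "change declared", k
    return "no change", len(counts)

rng = np.random.default_rng(5)
dt, horizon = 1.0, 100
pre = rng.poisson(10.0 * dt, 60)              # in-control rate 10
post = rng.poisson(16.0 * dt, horizon - 60)   # rate change at step 61
print(monitor(np.concatenate([pre, post]), rate=10.0, dt=dt))
```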

14.
Regression analyses are commonly performed with doubly limited continuous dependent variables; for instance, when modeling the behavior of rates, proportions and income concentration indices. Several models are available in the literature for use with such variables, one of them being the unit gamma regression model. In all such models, parameter estimation is typically performed using the maximum likelihood method, and testing inferences on the model's parameters are usually based on the likelihood ratio test. Such a test can, however, deliver quite imprecise inferences when the sample size is small. In this paper, we propose two modified likelihood ratio test statistics for use with unit gamma regressions that deliver much more accurate inferences when the number of data points is small. Numerical (i.e. simulation) evidence is presented for both fixed dispersion and varying dispersion models, and also for tests that involve nonnested models. We also present and discuss two empirical applications.

15.
In a two-treatment trial, a two-sided test is often used to reach a conclusion. Usually we are interested in a two-sided test because there is no prior preference between the two treatments and we want a three-decision framework. When a standard control is just as good as the new experimental treatment (which has the same toxicity and cost), then we will accept both treatments. Only when the standard control is clearly worse or better than the new experimental treatment do we choose only one treatment. In this paper, we extend the concept of a two-sided test to the multiple-treatment trial where three or more treatments are involved. The procedure turns out to be a subset selection procedure; however, the theoretical framework and performance requirement are different from those of existing subset selection procedures. Two procedures (exclusion or inclusion) are developed here for the case of normal data with equal known variance. If the sample size is large, they can be applied with unknown variance and with binomial data or survival data with random censoring.

16.
The analysis of repeated difference tests aims both at significance testing for differences and at estimating the mean discrimination ability of the consumers. In addition to the average success probability, the proportion of consumers that may detect the difference between two products, and therefore account for any increase of this probability, is of interest. While some authors address the first two goals, for the latter only an estimator directly linked to the average probability seems to be used. However, this may lead to unreasonable results. We therefore propose a new approach based on multiple test theory. We define a suitable set of hypotheses that is closed under intersection. From this, we derive a series of hypotheses that may be tested sequentially without violating the overall significance level. By means of this procedure we may determine a minimal number of assessors that must have perceived the difference between the products at least once in a while. From this, we can find a conservative lower bound for the proportion of perceivers among the consumers. In several examples, we give some insight into the properties of this new method and show that knowledge of this lower bound might indeed be valuable for the investigator. Finally, an adaptation of this approach for similarity tests is proposed.
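The baseline the abstract argues against is the estimator tied directly to the average success probability: back out the proportion of discriminators from the overall success rate via the guessing probability (1/3 for a triangle test). Illustrative numbers below; the closed multiple-testing bound proposed in the paper is not reproduced.

```python
# Conventional proportion-of-discriminators estimate from a repeated
# difference test; numbers are hypothetical.
successes, trials, p_guess = 52, 120, 1.0 / 3.0
p_hat = successes / trials
d_hat = max(0.0, (p_hat - p_guess) / (1.0 - p_guess))  # Abbott's formula
print(f"success rate {p_hat:.3f}, naive discriminator proportion {d_hat:.3f}")
```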

17.
Student's t test as well as Wilcoxon's rank-sum test may be inefficient in situations where treatments bring about changes in both location and scale. In order to rectify this situation, O'Brien (1988, Journal of the American Statistical Association 83, 52-61) has proposed two new statistics, the generalized t and generalized rank-sum procedures, which may be much more powerful than their traditional counterparts in such situations. Recently, however, Blair and Morel (1991, Statistics in Medicine, in press) have shown that referencing these new statistics to standard F tables as recommended by O'Brien results in inflation of Type I errors. This paper provides tables of critical values which do not produce such inflation. Use of these new critical values results in Type I error rates near nominal levels for the generalized t statistic and slightly conservative rates for the generalized rank-sum test. In addition to the critical values, some new power results are given for the generalized tests.

18.
We present results that extend an existing test of equality of correlation matrices. A new test statistic is proposed and is shown to be asymptotically distributed as a linear combination of independent χ² random variables. This new formulation allows us to find the power of the existing test and our extensions by deriving the distribution under the alternative using a linear combination of independent non-central χ² random variables. We also investigate the null and the alternative distribution of two related statistics. The first one is a quadratic form in deviations from a control group with which the remaining k-1 groups are to be compared. The second test is designed for comparing adjacent groups. Several approximations for the null and the alternative distribution are considered and two illustrative examples are provided.

19.
The detection of (structural) breaks, the so-called change point problem, has drawn increasing attention in theoretical, applied economic and financial fields. Much of the existing research concentrates on the detection of change points and the asymptotic properties of their estimators in panels when N, the number of panels, as well as T, the number of observations in each panel, are large. In this paper we pursue a different approach, i.e., we consider the asymptotic properties when N→∞ while keeping T fixed. This situation is typically related to large (firm-level) data containing financial information about an immense number of firms/stocks across a limited number of years/quarters/months. We propose a general approach for testing for break(s) in this setup. In particular, we obtain the asymptotic behavior of the test statistics. We also propose a wild bootstrap procedure that can be used to generate the critical values of the test statistics. The theoretical approach is supplemented by numerous simulations and by an empirical illustration. We demonstrate that the testing procedure works well in the framework of the four-factor CAPM model. In particular, we estimate the breaks in the monthly returns of US mutual funds during the period January 2006 to February 2010, which covers the subprime crisis.
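The wild bootstrap ingredient is easy to isolate: resample by flipping residual signs with Rademacher weights, so each series keeps its own variance profile. The sketch below uses a simple pre/post mean-shift statistic at a candidate break, not the paper's test statistic.

```python
# Generic wild bootstrap (Rademacher weights) for critical values of a
# mean-shift statistic across N short series with T fixed.
import numpy as np

def mean_shift_stat(panel, t0):
    """Standardized average pre/post mean difference at candidate break t0."""
    diff = panel[:, t0:].mean(axis=1) - panel[:, :t0].mean(axis=1)
    return np.abs(diff.mean()) * np.sqrt(panel.shape[0]) / diff.std(ddof=1)

rng = np.random.default_rng(4)
N, T, t0 = 300, 12, 6
panel = rng.standard_normal((N, T))  # no break under H0
stat = mean_shift_stat(panel, t0)

resid = panel - panel.mean(axis=1, keepdims=True)
boot = np.array([mean_shift_stat(resid * rng.choice([-1.0, 1.0], (N, 1)), t0)
                 for _ in range(999)])
print(f"stat = {stat:.3f}, 95% wild-bootstrap critical value = "
      f"{np.quantile(boot, 0.95):.3f}")
```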

20.
In planning a study, the choice of sample size may depend on a variance value based on speculation or obtained from an earlier study. Scientists may wish to use an internal pilot design to protect themselves against an incorrect choice of variance. Such a design involves collecting a portion of the originally planned sample and using it to produce a new variance estimate. This leads to a new power analysis and an increased or decreased sample size. For any general linear univariate model with fixed predictors and Gaussian errors, we prove that the uncorrected fixed-sample F-statistic is the likelihood ratio test statistic. However, the statistic does not follow an F distribution. Ignoring the discrepancy may inflate test size. We derive and evaluate properties of the components of the likelihood ratio test statistic in order to characterize and quantify the bias. Most notably, the fixed-sample-size variance estimate becomes biased downward. The bias may inflate test size for any hypothesis test, even if the parameter being tested was not involved in the sample size re-estimation. Furthermore, using fixed sample size methods may create biased confidence intervals for secondary parameters and the variance estimate.
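A sketch of the internal pilot mechanics under a two-sample normal approximation with hypothetical numbers: plan n from a speculative variance, re-estimate the variance from the pilot portion, and recompute n. The bias analysis that is the paper's contribution is exactly what this naive recipe ignores.

```python
# Internal pilot sketch: re-estimate the variance mid-study and redo the
# power analysis. Numbers are illustrative only.
import numpy as np
from scipy.stats import norm

def n_per_group(sigma2, delta, alpha=0.05, power=0.90):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2.0 * sigma2 * (z / delta) ** 2))

delta = 1.0
n_planned = n_per_group(sigma2=4.0, delta=delta)  # speculative variance 4
rng = np.random.default_rng(6)
pilot = rng.normal(0.0, np.sqrt(6.5), n_planned)  # pilot portion; true var 6.5
sigma2_hat = pilot.var(ddof=1)                    # internal pilot estimate
n_new = n_per_group(sigma2_hat, delta)            # sample size may rise or fall
print(f"planned n/group = {n_planned}, re-estimated n/group = {n_new}")
```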
