Similar Documents
20 similar documents found (search time: 46 ms)
1.
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre‐specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre‐specifying multiple test statistics and relying on the minimum p‐value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejecting the null hypothesis when the smallest p‐value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p‐value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p‐value rather than on a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
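As an illustration of the idea in item 1, here is a minimal permutation-based min-p test. The two candidate statistics (difference in means and difference in medians) and the reuse of one permutation sample both for per-statistic p-values and for the reference distribution of the minimum p-value are my own illustrative choices, not the authors' implementation:

```python
import numpy as np

def min_p_permutation_test(x, y, stats, n_perm=2000, seed=0):
    """Permutation test based on the minimum p-value over several statistics.

    x, y  : samples from the two arms
    stats : list of functions (a, b) -> scalar; larger |value| = more evidence
    Returns the overall p-value of the min-p statistic.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n = len(x)

    def all_stats(perm):
        a, b = perm[:n], perm[n:]
        return np.array([abs(s(a, b)) for s in stats])

    obs = all_stats(pooled)
    # Joint permutation null distribution of every candidate statistic.
    null = np.array([all_stats(rng.permutation(pooled)) for _ in range(n_perm)])

    # Per-statistic permutation p-values for the observed data ...
    p_obs = (1 + (null >= obs).sum(axis=0)) / (n_perm + 1)
    # ... and for each permuted data set (rank within its own column).
    ranks = null.argsort(axis=0).argsort(axis=0)   # 0 = smallest statistic
    p_perm = (n_perm - ranks) / n_perm             # larger stat -> smaller p

    # Compare the observed min-p with its own permutation distribution.
    min_p_obs = p_obs.min()
    min_p_null = p_perm.min(axis=1)
    return (1 + (min_p_null <= min_p_obs).sum()) / (n_perm + 1)

mean_diff = lambda a, b: a.mean() - b.mean()
median_diff = lambda a, b: np.median(a) - np.median(b)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 40)
y = rng.normal(0.8, 1.0, 40)
p = min_p_permutation_test(x, y, [mean_diff, median_diff])
print(p)  # small: the arms are clearly separated
```

Because every candidate statistic is recomputed on the same permutations, the dependence among the p-values is automatically reflected in the reference distribution of the minimum.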

2.
The exponential family structure of the joint distribution of generalized order statistics is utilized to establish multivariate tests on the model parameters. For simple and composite null hypotheses, the likelihood ratio test (LR test), Wald's test, and Rao's score test are derived and turn out to have simple representations. The asymptotic distribution of the corresponding test statistics under the null hypothesis is stated, and, in the case of a simple null hypothesis, asymptotic optimality of the LR test is addressed. Applications of the tests are presented; in particular, we discuss their use in reliability and in deciding whether a Poisson process is homogeneous. Finally, a power study is performed to measure and compare the quality of the tests for both simple and composite null hypotheses.

3.
In this article, we propose two test statistics for testing the underlying serial correlation in a partially linear single-index model Y = η(Zᵀα) + Xᵀβ + ε when X is measured with additive error. The proposed test statistics are shown to have asymptotic normal or chi-squared distributions under the null hypothesis of no serial correlation. Monte Carlo experiments are also conducted to illustrate the finite sample performance of the proposed test statistics. The simulation results confirm that these statistics perform satisfactorily in both estimated sizes and powers.

4.
We consider a 2^r factorial experiment with at least two replicates. Our aim is to find a confidence interval for θ, a specified linear combination of the regression parameters (for the model written as a regression, with factor levels coded as −1 and 1). We suppose that preliminary hypothesis tests are carried out sequentially, beginning with the rth‐order interaction. After these preliminary hypothesis tests, a confidence interval for θ with nominal coverage 1 − α is constructed under the assumption that the selected model had been given to us a priori. We describe a new efficient Monte Carlo method, which employs conditioning for variance reduction, for estimating the minimum coverage probability of the resulting confidence interval. The application of this method is demonstrated in the context of a 2^3 factorial experiment with two replicates and a particular contrast θ of interest. The preliminary hypothesis tests consist of the following two‐step procedure. We first test the null hypothesis that the third‐order interaction is zero against the alternative hypothesis that it is non‐zero. If this null hypothesis is accepted, we assume that this interaction is zero and proceed to the second step; otherwise, we stop. In the second step, for each of the second‐order interactions we test the null hypothesis that the interaction is zero against the alternative hypothesis that it is non‐zero. If this null hypothesis is accepted, we assume that this interaction is zero. The resulting confidence interval, with nominal coverage probability 0.95, has a minimum coverage probability that is, to a good approximation, 0.464. This shows that this confidence interval is completely inadequate.

5.
This paper considers the likelihood ratio (LR) tests of stationarity, common trends and cointegration for multivariate time series. As the distribution of these tests is not known, a bootstrap version is proposed via a state-space representation. The bootstrap samples are obtained from the Kalman filter innovations under the null hypothesis. Monte Carlo simulations for the Gaussian univariate random walk plus noise model show that the bootstrap LR test achieves higher power for medium-sized deviations from the null hypothesis than a locally optimal and one-sided Lagrange Multiplier (LM) test that has a known asymptotic distribution. The power gains of the bootstrap LR test are significantly larger for testing the hypothesis of common trends and cointegration in multivariate time series, as the alternative asymptotic procedure (obtained as an extension of the LM test of stationarity) does not possess properties of optimality. Finally, it is shown that the (pseudo-)LR tests maintain good size and power properties also for non-Gaussian series. An empirical illustration is provided.

6.
Through simulation and regression, we study the alternative distribution of the likelihood ratio test in which the null hypothesis postulates that the data are from a normal distribution after a restricted Box–Cox transformation and the alternative hypothesis postulates that they are from a mixture of two normals after a restricted (possibly different) Box–Cox transformation. The number of observations in the sample is called N. The standardized distance between components (after transformation) is D = (μ2 − μ1)/σ, where μ1 and μ2 are the component means and σ2 is their common variance. One component contains a fraction π of the observations and the other 1 − π. The simulation results demonstrate a dependence of power on the mixing proportion, with power decreasing as the mixing proportion moves away from 0.5. The alternative distribution appears to be a non-central chi-squared with approximately 2.48 + 10N^−0.75 degrees of freedom and non-centrality parameter 0.174N(D − 1.4)^2 × [π(1 − π)]. At least 900 observations are needed to have power 95% for a 5% test when D = 2. For fixed values of D, power, and significance level, substantially more observations are necessary when π ≥ 0.90 or π ≤ 0.10. We give the estimated powers for the alternatives studied and a table of sample sizes needed for 50%, 80%, 90%, and 95% power.
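Taking the fitted non-central chi-squared alternative at face value gives a quick power and sample-size calculator. The degrees-of-freedom and non-centrality formulas are copied from the abstract; treating them as exact, searching N on a grid, and the use of SciPy are my own simplifications:

```python
from scipy.stats import chi2, ncx2

def lrt_power(N, D, pi, alpha=0.05):
    """Approximate power of the mixture LRT using the fitted non-central
    chi-squared alternative distribution quoted in the abstract."""
    df = 2.48 + 10 * N ** -0.75                        # fitted degrees of freedom
    nc = 0.174 * N * (D - 1.4) ** 2 * pi * (1 - pi)    # fitted non-centrality
    crit = chi2.ppf(1 - alpha, df)                     # central chi-squared cutoff
    return ncx2.sf(crit, df, nc)                       # P(reject | alternative)

def required_n(target, D, pi, alpha=0.05):
    """Smallest N on a coarse grid with power >= target (None if not reached)."""
    for N in range(50, 5001, 10):
        if lrt_power(N, D, pi, alpha) >= target:
            return N
    return None

print(lrt_power(900, D=2.0, pi=0.5))     # near the 95% quoted for N = 900
print(required_n(0.95, D=2.0, pi=0.5))
print(required_n(0.95, D=2.0, pi=0.9))   # markedly larger when pi is extreme
```

The π(1 − π) factor in the non-centrality makes the last call return a much larger N, matching the abstract's remark about π ≥ 0.90 or π ≤ 0.10.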

7.
A Bayesian test for the point null testing problem in the multivariate case is developed. A procedure to get the mixed distribution using the prior density is suggested. For comparisons between the Bayesian and classical approaches, lower bounds on posterior probabilities of the null hypothesis, over some reasonable classes of prior distributions, are computed and compared with the p-value of the classical test. With our procedure, a better approximation is obtained because the p-value is in the range of the Bayesian measures of evidence.

8.
In multiple hypothesis testing, an important problem is estimating the proportion of true null hypotheses. Existing methods are mainly based on the p-values of the individual tests. In this paper, we propose two new estimators of this proportion. One is a natural extension of the commonly used p-value-based methods; the other is based on a mixed distribution. Simulations show that the first method is comparable with existing methods and performs better in some cases, while the method based on a mixed distribution gives accurate estimates even when the variance of the data is large or the difference between the null and alternative hypotheses is very small.
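The abstract does not spell out which p-value-based estimator is extended; as a representative of that family, here is a sketch of the familiar Storey-type estimator π̂0(λ) = #{pi > λ} / (m(1 − λ)), which relies on null p-values being uniform. The synthetic data below are my own illustration:

```python
import numpy as np
from scipy.stats import norm

def storey_pi0(pvals, lam=0.5):
    """Storey-type estimate of the proportion of true nulls: null p-values
    are Uniform(0,1), so p-values above lam come almost entirely from nulls."""
    pvals = np.asarray(pvals)
    return min(1.0, float(np.mean(pvals > lam)) / (1 - lam))

# Synthetic multiple-testing problem: 800 true nulls, 200 alternatives.
rng = np.random.default_rng(0)
null_p = rng.uniform(size=800)
alt_p = norm.sf(rng.normal(loc=3.0, size=200))  # one-sided z-test p-values
pvals = np.concatenate([null_p, alt_p])
print(storey_pi0(pvals))  # close to the true proportion 0.8
```

The estimator is biased upward when alternatives produce p-values above λ, which is why the choice of λ matters in practice.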

9.
We develop a Bayesian procedure for the homogeneity testing problem of r populations using r × s contingency tables. The posterior probability of the homogeneity null hypothesis is calculated using a mixed prior distribution. The methodology consists of choosing an appropriate value of π0 for the mass assigned to the null and spreading the remainder, 1 − π0, over the alternative according to a density function. With this method, we obtain a theorem showing when the same conclusion is reached from both the frequentist and Bayesian points of view. A sufficient condition under which the p-value is less than a value α and the posterior probability is also less than 0.5 is provided.

10.
It is shown that the limiting distribution of the augmented Dickey–Fuller (ADF) test under the null hypothesis of a unit root is valid under a very general set of assumptions that goes far beyond the linear AR(∞) process assumption typically imposed. In essence, all that is required is that the error process driving the random walk possesses a continuous spectral density that is strictly positive. Furthermore, under the same weak assumptions, the limiting distribution of the ADF test is derived under the alternative of stationarity, and a theoretical explanation is given for the well-known empirical fact that the test's power is a decreasing function of the chosen autoregressive order p. The intuitive reason for the reduced power of the ADF test is that, as p tends to infinity, the p regressors become asymptotically collinear.

11.
12.
In this paper, we study the problem of testing the hypothesis of whether the density f of a random variable on a sphere belongs to a given parametric class of densities. We propose two test statistics based on the L2 and L1 distances between a non‐parametric density estimator adapted to circular data and a smoothed version of the specified density. The asymptotic distribution of the L2 test statistic is provided under the null hypothesis and contiguous alternatives. We also consider a bootstrap method to approximate the distribution of both test statistics. Through a simulation study, we explore the moderate-sample performance of the proposed tests under the null hypothesis and under different alternatives. Finally, the procedure is illustrated by analysing a real data set based on wind direction measurements.

13.
It is shown that the exact null distribution of the likelihood ratio criterion for the sphericity test in the p-variate normal case and the marginal distribution of the first component of a (p − 1)-variate generalized Dirichlet model with a given set of parameters are identical. The exact distribution of the likelihood ratio criterion so obtained has a general format for every p. A novel idea is introduced here through which the complicated exact null distribution of the sphericity test criterion in multivariate statistical analysis is converted into an easily tractable marginal density in a generalized Dirichlet model. It provides a direct and easy method for computing p-values. The computation of p-values and a table of critical points corresponding to p = 3 and 4 are also presented.

14.
The likelihood-ratio test (LRT) is considered as a goodness-of-fit test for the null hypothesis that several distribution functions are uniformly stochastically ordered. Under the null hypothesis, H1 : F1 ⪰ F2 ⪰ ⋯ ⪰ FN, the asymptotic distribution of the LRT statistic is a convolution of several chi-bar-square distributions, each of which depends upon the location parameter. The least-favourable parameter configuration for the LRT is not unique. It can be of two different types and depends on the number of distributions, the number of intervals and the significance level α. This testing method is illustrated with a data set of survival times of five groups of male fruit flies.

15.
Bayesian counterparts of some standard tests concerning the means of a multi-normal distribution are discussed, in particular the hypothesis that the multi-normal mean is equal to a specified value and the hypothesis that the means are equal. Lower bounds on the Bayes factor in favour of the null hypothesis are obtained over the class of conjugate priors. The P-value, or observed significance level, of the standard sampling-theoretic test procedure is compared with the posterior probability. The results correspond closely with those of Good (1967), Berger & Sellke (1987), Pepple (1988) and others, and illustrate the conflict between posterior probabilities and P-values as measures of evidence.
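The conflict mentioned at the end of item 15 is often quantified with the Sellke–Bayarri–Berger calibration B(p) ≥ −e·p·log p (valid for p < 1/e), a lower bound on the Bayes factor in favour of the null; using it here with equal prior odds is my own illustration, not the paper's bound over conjugate priors:

```python
import math

def bayes_factor_lower_bound(p):
    """Sellke-Bayarri-Berger calibration: lower bound on the Bayes factor
    in favour of H0, valid for 0 < p < 1/e."""
    assert 0 < p < 1 / math.e
    return -math.e * p * math.log(p)

def posterior_prob_null(p, prior_odds=1.0):
    """Lower bound on P(H0 | data) implied by the Bayes-factor bound,
    here with prior odds 1 (i.e. P(H0) = 1/2)."""
    b = prior_odds * bayes_factor_lower_bound(p)
    return b / (1 + b)

for p in (0.05, 0.01, 0.001):
    print(p, round(posterior_prob_null(p), 3))
```

At p = 0.05 the implied lower bound on P(H0 | data) is roughly 0.29, far larger than the p-value itself suggests, which is the conflict the abstract describes.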

16.
Often, for a non-regular parametric hypothesis, a tractable test statistic involves a nuisance parameter. A common practice is to replace the unknown nuisance parameter by its estimator. The validity of such a replacement can only be justified asymptotically, in the sense that under appropriate conditions the limiting distribution of the statistic under the null hypothesis is unchanged when the nuisance parameter is replaced by its estimator (Crowder, M.J., 1990, Biometrika 77: 499–506). We propose a bootstrap method to calibrate the error incurred in the significance level, for finite samples, due to the replacement. Further, we prove that the bootstrap method provides a more accurate estimator of the unknown actual significance level than the nominal level. Simulations demonstrate the proposed methodology.
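A toy version of the calibration idea: estimate the actual rejection rate of an asymptotic test, with the nuisance parameter (here σ) replaced by its estimate, by resampling data in which the null has been imposed. The z-test-of-a-mean setting is my own stand-in for the paper's non-regular hypotheses:

```python
import numpy as np
from scipy.stats import norm

def bootstrap_actual_level(x, alpha=0.05, B=2000, seed=0):
    """Estimate the actual significance level of the asymptotic z-test of
    H0: mean = 0 when sigma is replaced by its estimate, by resampling
    centred data (so the null holds in the bootstrap world)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    z_crit = norm.ppf(1 - alpha / 2)
    centred = x - x.mean()               # impose the null hypothesis
    rejects = 0
    for _ in range(B):
        xb = rng.choice(centred, size=n, replace=True)
        t = np.sqrt(n) * xb.mean() / xb.std(ddof=1)
        rejects += abs(t) > z_crit
    return rejects / B

rng = np.random.default_rng(1)
x = rng.exponential(1.0, 20)             # small, skewed sample
print(bootstrap_actual_level(x))         # typically above the nominal 0.05
```

The gap between this bootstrap estimate and the nominal 0.05 is exactly the finite-sample error the paper's method calibrates.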

17.
In this paper we propose test statistics for a general hypothesis concerning the adequacy of multivariate random-effects covariance structures in a multivariate growth curve model with differing numbers of random effects (Lange, N., N.M. Laird, J. Amer. Statist. Assoc. 84 (1989) 241–247). Since the exact likelihood ratio (LR) statistic for the hypothesis is complicated, it is suggested to use a modified LR statistic. An asymptotic expansion of the null distribution of the statistic is obtained. The exact LR statistic is also discussed.

18.
Usual tests for trends are carried out under a null hypothesis. This article presents a test of a non-null hypothesis for linear trends in proportions. A weighted least squares method is used to estimate the regression coefficient of the proportions. The non-null hypothesis is defined by setting the expectation of this coefficient equal to a prescribed regression-coefficient margin, and its variance is used to construct a basic relationship for linear trends in proportions via the asymptotic normal method. Derivations of the sample size formula, the power function, and the test statistic follow. The expected power is obtained from the power function, and the observed power is assessed by the Monte Carlo method; the agreement between the expected and the observed power is excellent. The procedure reduces to the classical test for linear trends in proportions when the margin is set equal to zero. It is the non-null-hypothesis counterpart of the classical test and can be applied to assess the clinical significance of trends among several proportions, whereas the classical test is restricted to testing statistical significance. A set of data from a website is used to illustrate the methodology.

19.
It is well known that in a sequential study the probability that the likelihood ratio for a simple alternative hypothesis H1 versus a simple null hypothesis H0 will ever be greater than a positive constant c will not exceed 1/c under H0. However, for a composite alternative hypothesis, this bound of 1/c will no longer hold when a generalized likelihood ratio statistic is used. We consider a stepwise likelihood ratio statistic which, for each new observation, is updated by cumulatively multiplying the ratio of the conditional likelihood for the composite alternative hypothesis, evaluated at an estimate of the parameter obtained from the preceding observations, versus the simple null hypothesis. We show that, under the null hypothesis, the probability that this stepwise likelihood ratio will ever be greater than c will not exceed 1/c. In contrast, under the composite alternative hypothesis, this ratio will generally converge in probability to ∞. These results suggest that a stepwise likelihood ratio statistic can be useful in a sequential study for testing a composite alternative versus a simple null hypothesis. For illustration, we conduct two simulation studies, one for a normal response and one for an exponential response, to compare the performance of a sequential test based on a stepwise likelihood ratio statistic with a constant boundary versus some existing approaches.
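A minimal sketch of the stepwise statistic for a Gaussian mean (my own toy setting, not the paper's simulations): each factor evaluates the alternative density at the parameter estimate formed from the preceding observations only, so under H0 the product is a non-negative martingale with mean 1 and Ville's inequality gives P(sup LR > c) ≤ 1/c:

```python
import numpy as np

def stepwise_lr_test(x, c=20.0):
    """Stepwise (predictive) likelihood ratio for H1: N(mu, 1), mu unknown,
    versus H0: N(0, 1).  Each factor plugs in the running mean of the
    *preceding* observations, so the product is a martingale under H0.
    Returns (stopping time if LR ever exceeds c else None, final LR)."""
    lr, mu_hat, s = 1.0, 0.0, 0.0
    for i, xi in enumerate(x):
        # density ratio N(xi; mu_hat, 1) / N(xi; 0, 1)
        lr *= np.exp(mu_hat * xi - 0.5 * mu_hat ** 2)
        s += xi
        mu_hat = s / (i + 1)          # estimate from observations 1..i
        if lr > c:
            return i + 1, lr
    return None, lr

rng = np.random.default_rng(0)
# Under the alternative (mu = 1) the ratio diverges and the test stops early;
stop_alt, _ = stepwise_lr_test(rng.normal(1.0, 1.0, 500), c=20)
# under H0 it crosses c = 20 with probability at most 1/20.
stop_null, _ = stepwise_lr_test(rng.normal(0.0, 1.0, 500), c=20)
print(stop_alt, stop_null)
```

Choosing c = 1/α gives a sequential test with type I error at most α, which is the property the abstract establishes for the stepwise statistic.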

20.
We develop testing procedures which detect whether the observed time series is a martingale difference sequence. Furthermore, tests are developed that detect change-points in the conditional expectation of the series given its past. The test statistics are formulated following the approach of Fourier-type conditional expectations first proposed by Bierens (1982) and have the advantage of computational simplicity. The limit behavior of the test statistics is investigated under the null hypothesis as well as under alternatives. Since the asymptotic null distribution contains unknown parameters, a bootstrap procedure is proposed in order to actually perform the test. The finite-sample performance of the bootstrap version of the test is compared with other methods for the same problem. A real-data application is also included.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)