Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper, we investigate the asymptotic theory for U-statistics based on sample spacings, i.e. the gaps between successive observations. The usual asymptotic theory for U-statistics does not apply here because spacings are dependent variables. However, under the null hypothesis, the uniform spacings can be expressed as conditionally independent exponential random variables. We exploit this idea to derive the relevant asymptotic theory both under the null hypothesis and under a sequence of close alternatives. The generalized Gini mean difference of the sample spacings is a prime example of a U-statistic of this type. We show that such a Gini spacings test is analogous to Rao's spacings test. We find the asymptotically locally most powerful test in this class, and it has the same efficacy as the Greenwood statistic.
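As an illustrative sketch (not the paper's exact generalized statistic), the spacings construction can be coded directly: compute the gaps between successive order statistics of a sample on [0, 1], then form the Greenwood statistic (sum of squared spacings) and the Gini mean difference of the spacings. Function names are my own.

```python
import numpy as np

def uniform_spacings(x):
    """Spacings of a sample on [0, 1]: gaps between successive order
    statistics, including the gaps to the boundaries 0 and 1."""
    s = np.sort(np.asarray(x, dtype=float))
    return np.diff(np.concatenate(([0.0], s, [1.0])))

def greenwood_statistic(x):
    """Greenwood statistic: sum of squared spacings. Under uniformity its
    expectation is 2/(n+2) for a sample of size n."""
    d = uniform_spacings(x)
    return np.sum(d ** 2)

def gini_mean_difference(x):
    """Gini mean difference of the spacings: average absolute difference
    over all ordered pairs of distinct spacings (a U-statistic of spacings)."""
    d = uniform_spacings(x)
    m = len(d)
    return np.abs(d[:, None] - d[None, :]).sum() / (m * (m - 1))
```

For perfectly regular data the spacings are all equal, so the Gini mean difference is zero and the Greenwood statistic attains its minimum 1/(n+1).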

2.
A studentized range test is proposed to test the hypothesis of bioequivalence of normal means in terms of a standardized distance among means. A least favourable configuration (LFC) of means to guarantee the maximum level at a null hypothesis and an LFC of means to guarantee the minimum power at an alternative hypothesis are obtained. This level and power of the test are fully independent of the unknown means and variances. For a given level, the critical value of the test under a null hypothesis can be determined. Furthermore, if the power under an alternative is also required at a given level, then both the critical value and the required sample size for an experiment can be simultaneously determined. In situations where the common population variance is unknown and the bioequivalence is the actual distance between means without standardization, a two-stage sampling procedure can be employed to find these solutions.

3.
Abstract. Results are given which provide bounds for controlled direct effects when the no-unmeasured-confounding assumptions required for the identification of these effects do not hold. Previous results concerning bounds for controlled direct effects rely on monotonicity relationships between the treatment, mediator and the outcome themselves; the results presented in this article instead assume that monotonicity relationships hold between the unmeasured confounding variable or variables and the treatment, mediator and outcome. Whereas prior results give bounds that contain the null hypothesis of no direct effect, the results presented here will in many instances yield bounds that do not contain the null hypothesis of no direct effect. For contexts in which a set of variables intercepts all paths between a treatment and an outcome, it is possible to provide a definition for a controlled mediated effect. We discuss the identification of these controlled mediated effects; the bounds for controlled direct effects are applicable also to controlled mediated effects. An example is given to illustrate how the results in the article can be used to draw inferences about direct and mediated effects in the presence of unmeasured confounding variables.

4.
This paper explores testing procedures with response-related incomplete data, with particular attention centered on pseudolikelihood ratio tests. We construct pseudolikelihood functions with the biased observations supplemented by auxiliary information, without specifying the association between the primary variables and the auxiliary variables. The asymptotic distributions of the test statistics under the null hypothesis are derived and finite-sample properties of the testing procedures are examined via simulation. The methodology is illustrated with an example involving evaluation of kindergarten readiness skills in children with sickle cell disease.

5.
Abstract

In this paper, we introduce a version of Hayter and Tsui's statistical test with double sampling for the vector mean of a population under a multivariate normal assumption. A study showed that this new test is at least as efficient as the well-known Hotelling's T2 test with double sampling. Some nice features of Hayter and Tsui's test are its simplicity of implementation and its capability of identifying the errant variables when the null hypothesis is rejected. Taking that into consideration, a new control chart called HTDS is also introduced as a tool to monitor the mean vector of a multivariate process when using double sampling.

6.
Summary. The isolation of DNA markers that are linked to interesting genes helps plant breeders to select parent plants that transmit useful traits to future generations. Such 'marker-assisted breeding and selection' heavily leans on statistical testing of associations between markers and a well-chosen trait. Statistical association analysis is guided by classical p-values or the false discovery rate and thus relies predominantly on the null hypothesis. The main concern of plant breeders, however, is to avoid missing an important alternative. To judge evidence from this perspective, we complement the traditional p-value with a one-sided 'alternative p-value' which summarizes evidence against a target alternative in the direction of the null hypothesis. This p-value measures 'impotence' as opposed to significance: how likely is it to observe an outcome as extreme as or more extreme than the one that was observed when data stem from the alternative? We show how a graphical inspection of both p-values can guide marker selection when the null and the alternative hypotheses have a comparable importance. We derive formal decision tools with balanced properties yielding different rejection regions for different markers. We apply our approach to study rye-grass plants.
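The idea of pairing a classical p-value with an "alternative p-value" can be sketched for a one-sided test of a normal mean: the classical p-value measures surprise under the null, while the alternative p-value measures surprise under a target alternative mean. This is a simplified illustration with hypothetical function names, not the decision tools derived in the paper.

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def classical_p_value(z):
    """P(Z >= z | H0: mu = 0): evidence against the null hypothesis."""
    return 1.0 - norm_cdf(z)

def alternative_p_value(z, mu1):
    """P(Z <= z | H1: mu = mu1): how surprising an outcome this extreme
    (or more extreme, toward the null) would be under the target alternative."""
    return norm_cdf(z - mu1)
```

A marker with a small classical p-value and a large alternative p-value is consistent with the alternative; a small alternative p-value warns that even the target alternative poorly explains so null-like an outcome.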

7.
Some comments are made concerning the possible forms of a correlation coefficient type goodness-of-fit statistic and their relationship with other goodness-of-fit statistics. Critical values for a correlation goodness-of-fit statistic and for the Cramér–von Mises statistic are provided for testing a completely specified null hypothesis under both complete and censored sampling. Critical values for a correlation test statistic are provided for complete and censored sampling for testing the hypotheses of normality, two-parameter exponentiality, the Weibull (or extreme-value) distribution and an exponential-power distribution, respectively. Critical values are also provided for a test of one-parameter exponentiality based on the Cramér–von Mises statistic.
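One common form of such a correlation-type statistic is the Pearson correlation between the ordered sample and the hypothesized quantiles at plotting positions (as in a probability plot). The sketch below is a generic illustration under that convention, not the specific statistics tabulated in the article.

```python
import numpy as np

def correlation_gof(x, quantile_fn):
    """Correlation-type goodness-of-fit statistic: Pearson correlation between
    the ordered sample and the hypothesized quantiles at plotting positions
    p_i = (i - 0.5)/n. Values near 1 support the hypothesized distribution."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    m = quantile_fn((np.arange(1, n + 1) - 0.5) / n)
    return np.corrcoef(x, m)[0, 1]
```

Because Pearson correlation is invariant to location and scale, the same statistic tests a location-scale family (e.g. two-parameter exponentiality) without estimating its parameters.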

8.
The bivariate probability distribution of the random variables [number of inversions] and [number of outstanding variables] in a sequence of n i.i.d. random variables is derived. As an application, the null covariance between the test statistics proposed by Mann and Brunk, respectively, for the ‘trend in location’ problem is obtained. It is shown that these test statistics are asymptotically uncorrelated under the null hypothesis.
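The two counts in question are easy to compute directly; the sketch below uses the common conventions that an inversion is a pair i &lt; j with x[i] &gt; x[j], and that an outstanding variable (upper record) is an observation exceeding all earlier ones, counting the first observation as a record.

```python
def inversions_and_records(x):
    """Count (i) inversions: pairs i < j with x[i] > x[j], and
    (ii) outstanding variables (upper records): x[j] exceeding all earlier
    values, with the first observation counted as a record by convention."""
    n = len(x)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if x[i] > x[j])
    rec = sum(1 for j in range(n) if all(x[j] > x[i] for i in range(j)))
    return inv, rec
```

An increasing sequence has zero inversions and n records; a decreasing one has n(n-1)/2 inversions and a single record.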

9.
The problem of approximating an interval null or imprecise hypothesis test by a point null or precise hypothesis test under a Bayesian framework is considered. In the literature, some of the methods for solving this problem have used the Bayes factor for testing a point null and justified it as an approximation to the interval null. However, many authors recommend evaluating tests through the posterior odds, a Bayesian measure of evidence against the null hypothesis. It is of interest then to determine whether similar results hold when using the posterior odds as the primary measure of evidence. For the prior distributions under which the approximation holds with respect to the Bayes factor, it is shown that the posterior odds for testing the point null hypothesis does not approximate the posterior odds for testing the interval null hypothesis. In fact, in order to obtain convergence of the posterior odds, a number of restrictive conditions need to be placed on the prior structure. Furthermore, under a non-symmetrical prior setup, neither the Bayes factor nor the posterior odds for testing the imprecise hypothesis converges to the Bayes factor or posterior odds, respectively, for testing the precise hypothesis. To rectify this dilemma, it is shown that constraints need to be placed on the priors. In both situations, the class of priors constructed to ensure convergence of the posterior odds is not practically useful, thus questioning, from a Bayesian perspective, the appropriateness of point null testing in a problem better represented by an interval null. The theories developed are also applied to an epidemiological data set from White et al. (Can. Veterinary J. 30 (1989) 147–149) in order to illustrate and study priors for which the point null hypothesis test approximates the interval null hypothesis test. AMS Classification: Primary 62F15; Secondary 62A15

10.
Residual marked empirical process-based tests are commonly used in regression models. However, they suffer from data sparseness in high-dimensional space when there are many covariates. This paper has three purposes. First, we suggest a partial dimension reduction adaptive-to-model testing procedure that can be omnibus against general global alternative models while fully using the dimension reduction structure under the null hypothesis. This is because the procedure can automatically adapt to the null and alternative models, and thus greatly overcomes the dimensionality problem. Second, to achieve the above goal, we propose a ridge-type eigenvalue ratio estimate to automatically determine the number of linear combinations of the covariates under the null and alternative hypotheses. Third, a Monte Carlo approximation to the sampling null distribution is suggested. Unlike existing bootstrap approximation methods, this gives an approximation as close to the sampling null distribution as possible by fully utilising the dimension reduction model structure under the null model. Simulation studies and real data analysis are then conducted to illustrate the performance of the new test and compare it with existing tests.

11.
Standard serial correlation tests are derived assuming that the disturbances are homoscedastic, but this study shows that asymptotic critical values are not accurate when this assumption is violated. Asymptotic critical values for the ARCH(2)-corrected LM, BP and BL tests are valid only when the underlying ARCH process is strictly stationary, whereas Wooldridge's robust LM test has good properties overall. These tests exhibit similar behaviour even when the underlying process is GARCH(1,1). When the regressors include lagged dependent variables, the rejection frequencies under both the null and alternative hypotheses depend on the coefficients of the lagged dependent variables and the other model parameters. They appear to be robust across various disturbance distributions under the null hypothesis.

12.
In this article, a simple algorithm is used to maximize a family of optimal statistics for hypothesis testing with a nuisance parameter not defined under the null hypothesis. This arises in genetic linkage and association studies and other hypothesis testing problems. The maximum of the optimal statistics over the nuisance parameter space can be used as a robust test in this situation. Here, we use the maximum and minimum statistics to examine the sensitivity of testing results with respect to the unknown nuisance parameter. Examples from genetic linkage analysis using affected sib pairs and a candidate-gene association study in a case-parent trio design are studied.
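The maximize-over-a-grid idea can be sketched with a MAX-type genetic association test: a Cochran–Armitage-style trend statistic with genotype scores (0, theta, 1), where theta encodes the unknown genetic model and plays the role of the nuisance parameter. The statistic family and its variance approximation below are illustrative assumptions, not the article's exact optimal statistics.

```python
import numpy as np

def trend_statistic(cases, controls, theta):
    """Cochran–Armitage-type trend statistic for genotype counts with
    scores (0, theta, 1); theta is the nuisance parameter (genetic model).
    Variance is approximated from the pooled genotype frequencies."""
    scores = np.array([0.0, theta, 1.0])
    cases = np.asarray(cases, float)
    controls = np.asarray(controls, float)
    n_case, n_ctrl = cases.sum(), controls.sum()
    p = (cases + controls) / (n_case + n_ctrl)   # pooled genotype frequencies
    diff = scores @ (cases / n_case - controls / n_ctrl)
    var = (1.0 / n_case + 1.0 / n_ctrl) * (scores ** 2 @ p - (scores @ p) ** 2)
    return diff / np.sqrt(var)

def max_min_statistics(cases, controls, grid=np.linspace(0.0, 1.0, 101)):
    """Maximum and minimum of the statistic over the nuisance-parameter grid:
    the max serves as the robust test, and comparing max with min shows the
    sensitivity of the result to the unknown nuisance parameter."""
    vals = np.array([trend_statistic(cases, controls, t) for t in grid])
    return vals.max(), vals.min()
```

When the max and min statistics lead to the same conclusion, the testing result is insensitive to the unknown nuisance parameter.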

13.
Standard serial correlation tests are derived assuming that the disturbances are homoscedastic, but this study shows that asymptotic critical values are not accurate when this assumption is violated. Asymptotic critical values for the ARCH(2)-corrected LM, BP and BL tests are valid only when the underlying ARCH process is strictly stationary, whereas Wooldridge's robust LM test has good properties overall. These tests exhibit similar behaviour even when the underlying process is GARCH(1,1). When the regressors include lagged dependent variables, the rejection frequencies under both the null and alternative hypotheses depend on the coefficients of the lagged dependent variables and the other model parameters. They appear to be robust across various disturbance distributions under the null hypothesis.

14.
We consider testing the significance of the coefficients in the linear model. Unlike in the classical approach, there is no alternative hypothesis to accept when the null hypothesis is rejected. When there is a substantial deviation from the null hypothesis, we reject the null hypothesis and identify, based on the data, alternative hypotheses associated with the independent variables or the levels that contributed most to the deviation from the null hypothesis.

15.
We consider binomially distributed random variables whose parameters are unknown, some of which need to be estimated. We study the maximum likelihood ratio test and the maximally selected χ2 test for detecting whether there is a change in the distributions among the random variables. Their limit distributions under the null hypothesis and their asymptotic distributions under the alternative hypothesis are obtained when the number of observations is fixed. We discuss the properties of the limit distribution and present an efficient way to calculate the probability of multivariate normal random variables. Finally, the results for both tests are applied to examples: the Lindisfarne data and the Talipes data. Our conclusions are consistent with other researchers' findings.
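A maximally selected χ2 statistic for a change point in a sequence of binomial observations can be sketched as follows: at each candidate split k, pool the successes and failures before and after k into a 2×2 table and take the largest Pearson χ2 over all splits. This is a simplified illustration of the statistic's construction, not necessarily the authors' exact formulation.

```python
import numpy as np

def chi2_2x2(a, b, c, d):
    """Pearson chi-square for the 2x2 table [[a, b], [c, d]],
    without continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den > 0 else 0.0

def max_selected_chi2(successes, trials):
    """Maximally selected chi-square over candidate change points k:
    compare pooled successes/failures before and after each split."""
    s = np.asarray(successes, float)
    t = np.asarray(trials, float)
    best, best_k = 0.0, None
    for k in range(1, len(s)):
        a, b = s[:k].sum(), (t[:k] - s[:k]).sum()
        c, d = s[k:].sum(), (t[k:] - s[k:]).sum()
        stat = chi2_2x2(a, b, c, d)
        if stat > best:
            best, best_k = stat, k
    return best, best_k
```

Because the maximum is taken over many correlated tables, its null distribution is not the usual χ2 with one degree of freedom, which is why the limit distributions studied in the paper are needed.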

16.
We extend four tests common in classical regression – Wald, score, likelihood ratio and F tests – to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.

17.
We observe s independent samples from unknown continuous distributions. The problem is to test the hypothesis that all the distributions are identical. The distribution of the numbers of observations from s-1 of the samples which fall in cells whose boundaries are selected order statistics of the remaining sample, with the number of cells increasing gradually with the sample sizes, is investigated. It is shown that under the null hypothesis and nearby alternatives, as the sample sizes increase these numbers of observations can be considered to be slightly rounded-off normal random variables, the amount rounded off decreasing as the sample sizes increase. Using these results, various tests of the hypothesis can be constructed and analyzed.

18.
In this article two-stage hierarchical Bayesian models are used for the observed occurrences of events in a rectangular region. Two Bayesian variable window scan statistics are introduced to test the null hypothesis that the observed events follow a specified two-stage hierarchical model against an alternative that indicates a local increase in the average number of observed events in a subregion (clustering). Both procedures are based on a sequence of Bayes factors and their p-values that have been generated via simulation of posterior samples of the parameters, under the null and alternative hypotheses. The posterior samples of the parameters have been generated by employing Gibbs sampling via introduction of auxiliary variables. Numerical results are presented to evaluate the performance of these variable window scan statistics.

19.
In nonparametric statistics, a hypothesis testing problem based on the ranks of the data gives rise to two separate permutation sets corresponding to the null and to the alternative hypothesis, respectively. A modification of Critchlow's unified approach to hypothesis testing is proposed. By defining the distance between permutation sets to be the average distance between pairs of permutations, one from each set, various test statistics are derived for the multi-sample location problem and the two-way layout. The asymptotic distributions of the test statistics are computed under both the null and alternative hypotheses. Some comparisons are made on the basis of the asymptotic relative efficiency.
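The set-to-set distance defined here is simple to compute once a distance on permutations is fixed; the sketch below uses the Kendall tau distance (number of discordantly ordered pairs) as the permutation metric, which is one common choice in Critchlow's framework, and averages it over all cross pairs.

```python
def kendall_distance(p, q):
    """Kendall tau distance: number of item pairs that p and q order
    differently (p and q are sequences of ranks)."""
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if (p[i] < p[j]) != (q[i] < q[j]))

def set_distance(set_a, set_b):
    """Distance between two permutation sets: the average Kendall distance
    over all pairs with one permutation taken from each set."""
    pairs = [(a, b) for a in set_a for b in set_b]
    return sum(kendall_distance(a, b) for a, b in pairs) / len(pairs)
```

A test statistic can then be built by comparing the distance from the observed ranking to the null permutation set against its distance to the alternative set.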

20.
Summary. In high throughput genomic work, a very large number d of hypotheses are tested based on n ≪ d data samples. The large number of tests necessitates an adjustment for false discoveries in which a true null hypothesis was rejected. The expected number of false discoveries is easy to obtain. Dependences between the hypothesis tests greatly affect the variance of the number of false discoveries. Assuming that the tests are independent gives an inadequate variance formula. The paper presents a variance formula that takes account of the correlations between test statistics. That formula involves O(d^2) correlations, and so a naïve implementation has cost O(nd^2). A method based on sampling pairs of tests allows the variance to be approximated at a cost that is independent of d.
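The pair-sampling idea can be sketched as follows: for a discovery count V = sum of rejection indicators, Var(V) equals the sum of the per-test variances plus the sum over all ordered pairs of covariances; instead of computing all O(d^2) covariance terms, estimate the mean pairwise covariance from a random sample of pairs. The sketch below works on a simulated 0/1 rejection matrix and is an illustration of the principle, not the paper's exact estimator.

```python
import numpy as np

def variance_of_discoveries(reject):
    """Exact variance of the discovery count across simulated replicates:
    reject is an (n_reps, d) 0/1 matrix of rejection indicators."""
    return reject.sum(axis=1).var(ddof=0)

def pair_sampled_variance(reject, n_pairs=2000, seed=0):
    """Approximate the same variance without the O(d^2) covariance sum:
    Var(V) = sum_i Var(I_i) + d*(d-1) * (mean pairwise covariance),
    with the mean covariance estimated from randomly sampled pairs."""
    rng = np.random.default_rng(seed)
    n_reps, d = reject.shape
    var_terms = reject.var(axis=0, ddof=0).sum()
    i = rng.integers(0, d, size=n_pairs)
    j = rng.integers(0, d, size=n_pairs)
    keep = i != j                       # drop self-pairs
    i, j = i[keep], j[keep]
    covs = ((reject[:, i] * reject[:, j]).mean(axis=0)
            - reject[:, i].mean(axis=0) * reject[:, j].mean(axis=0))
    return var_terms + d * (d - 1) * covs.mean()
```

When the tests are perfectly correlated (every replicate rejects all or none), both quantities equal d^2 p(1-p), far above the independence formula d p(1-p), illustrating why ignoring correlations understates the variance.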
