Similar Literature
20 similar documents retrieved (search time: 109 ms)
1.
C. Ittrich, Statistics, 2013, 47(1): 13–42
Nonlinear regression models with spherically symmetric error vectors and a single nonlinear parameter are considered. On the basis of a new geometric approach, exact one- and two-sided tests and confidence regions for the nonlinear parameter are derived in the cases of known and unknown error variances. A geometric measure representation formula is used to determine the power functions of the tests if the error variance is known, and to derive different lower bounds for the power function of a one-sided test in the case of an unknown error variance. The latter can be done quite effectively by constructing and measuring several balls inside the critical region. A numerical study compares the results for different density-generating functions of the error distribution.

2.
The classical change point problem is considered from the invariance point of view. Locally optimal invariant tests are derived for a change in level when the initial level and the common variance are assumed unknown. The tests derived by Chernoff and Zacks (1964) and Gardner (1969) for a change in level when the variance is known are shown to be locally optimal invariant tests.
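As a rough illustration (not taken from the paper), a Chernoff–Zacks-type statistic for a single change in level weights deviations from the overall mean by their position in the sequence. The sketch below assumes a known variance and is a simplified form, not the paper's exact invariant statistic:

```python
import numpy as np

def chernoff_zacks_stat(x, sigma):
    """Chernoff-Zacks-type statistic for a single upward/downward shift
    in level. Deviations from the overall mean are weighted by their
    index, so late shifts contribute heavily. Standardized to be
    approximately N(0,1) under the no-change null when sigma is known
    (a sketch, not the paper's exact statistic)."""
    x = np.asarray(x, dtype=float)
    weights = np.arange(len(x))                     # 0, 1, ..., n-1
    t = np.sum(weights * (x - x.mean()))
    var_t = sigma**2 * np.sum((weights - weights.mean())**2)
    return t / np.sqrt(var_t)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(1, 1, 50)])
print(chernoff_zacks_stat(x, sigma=1.0))  # large positive => upward shift
```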

3.
The problem of bandwidth selection for kernel-based estimation of the distribution function (cdf) at a given point is considered. With an appropriate bandwidth, a kernel-based estimator (kdf) is known to outperform the empirical distribution function. However, such a bandwidth is unknown in practice. In pointwise estimation, the appropriate bandwidth depends on the point where the function is estimated. Existing smoothing methods use one common bandwidth to estimate the cdf, and the accuracy of the resulting estimates varies substantially depending on the cdf and the point where it is estimated. We propose to select the bandwidth by minimizing a bootstrap estimator of the MSE of the kdf. The resulting estimator performs reliably, irrespective of where the cdf is estimated, and is shown to be consistent under i.i.d. as well as strongly mixing dependence assumptions. Two applications of the proposed estimator are shown, in finance and seismology, and we report a dataset of S&P Nifty index values.
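A minimal sketch of the idea, under simplifying assumptions (Gaussian kernel, a plain grid search, and the empirical cdf at the point standing in for the bootstrap "truth"; the paper's actual bootstrap MSE estimator may differ):

```python
import numpy as np
from scipy.stats import norm

def kdf(data, x, h):
    """Kernel distribution function estimate at point x (Gaussian kernel)."""
    return norm.cdf((x - data) / h).mean()

def bootstrap_bandwidth(data, x, grid, n_boot=200, seed=0):
    """Pick the bandwidth h minimizing a bootstrap MSE estimate of the
    kdf at the point x. The full-sample empirical cdf at x serves as the
    bootstrap target (a simplifying assumption for this sketch)."""
    rng = np.random.default_rng(seed)
    target = (data <= x).mean()
    n = len(data)
    mses = []
    for h in grid:
        ests = [kdf(rng.choice(data, n, replace=True), x, h)
                for _ in range(n_boot)]
        mses.append(np.mean((np.array(ests) - target) ** 2))
    return grid[int(np.argmin(mses))]

data = np.random.default_rng(1).normal(size=300)
grid = np.linspace(0.05, 1.0, 20)
print(bootstrap_bandwidth(data, x=0.5, grid=grid))
```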

4.
The asymptotic distribution theory of test statistics that are functions of spacings is studied here. Distribution theory under appropriate close alternatives is also derived and used to find the locally most powerful spacing tests. For the two-sample problem, which is to test whether two independent samples come from the same population, test statistics based on "spacing-frequencies" (i.e., the numbers of observations of one sample that fall between the spacings made by the other sample) are utilized. The general asymptotic distribution theory of such statistics is studied both under the null hypothesis and under a sequence of close alternatives.
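To make "spacing-frequencies" concrete, here is a small sketch (a hypothetical helper, not from the paper) that counts how many observations from one sample fall between consecutive order statistics of the other:

```python
import numpy as np

def spacing_frequencies(x, y):
    """Counts of y-values falling in each spacing made by sorted x.

    Returns len(x) + 1 counts, one per gap, including the two unbounded
    end gaps. Two-sample statistics can then be written as functions of
    these counts.
    """
    edges = np.sort(x)
    # searchsorted assigns each y-value to a gap index in 0..len(x)
    return np.bincount(np.searchsorted(edges, y), minlength=len(x) + 1)

rng = np.random.default_rng(2)
print(spacing_frequencies(rng.normal(size=5), rng.normal(size=12)))
```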

5.
This paper studies the well-known tests of Kim et al. (J Econom 109:389–392, 2002) and Busetti and Taylor (J Econom 123:33–66, 2004) for the null hypothesis of short memory against a change to nonstationarity, I(1). The potential break point is not assumed to be known but is estimated from the data. First, we show that the tests are also applicable to a change from I(0) to a fractional order of integration I(d) with d > 0 (long memory), in that the tests are consistent. The rates of divergence of the test statistics are derived as functions of the sample size T and of d. Second, we compare their finite-sample power experimentally. Third, we consider break point estimation for a change from I(0) to I(d) in finite samples via computer simulations. It turns out that the estimators proposed for the integer case (d = 1) are practically reliable only if d is close enough to 1.
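As a hedged sketch of a ratio-type statistic in the spirit of these tests: compare scaled squared partial sums of the demeaned data after a candidate break with those before it, maximized over candidate break fractions. Details such as the trimming and the exact normalization below are assumptions, not the published definitions:

```python
import numpy as np

def ratio_stat(x, trim=0.2):
    """Max over candidate break points of a post-break to pre-break
    ratio of scaled squared CUSUMs of demeaned data; large values
    suggest a change from I(0) to an integrated regime."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    best = -np.inf
    for k in range(int(trim * T), int((1 - trim) * T)):
        pre, post = x[:k] - x[:k].mean(), x[k:] - x[k:].mean()
        num = np.sum(np.cumsum(post) ** 2) / (T - k) ** 2
        den = np.sum(np.cumsum(pre) ** 2) / k ** 2
        best = max(best, num / den)
    return best

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(size=100),              # I(0) segment
                    np.cumsum(rng.normal(size=100))])  # I(1) segment
print(ratio_stat(x))
```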

6.
For classification problems where the test data are labeled sequentially, the point at which all true positives are first identified is often of critical importance. This article develops hypothesis tests to assess whether all true positives have been labeled in the test data. The tests use a partial receiver operating characteristic (ROC) curve generated from a labeled subset of the test data. The methods are developed in the context of unexploded ordnance (UXO) classification but are applicable to any binary classification problem. First, the likelihood of the observed ROC given binormal model parameters is derived using order statistics, leading to a nonlinear parameter estimation problem. The approximate distribution of the point on the ROC at which all true instances are found is then derived. Using estimated binormal parameters, this distribution can be integrated up to a desired confidence level to define a critical false alarm rate (FAR). If the selected operating point falls before this critical point, then additional labels out to the critical point are required. A second test uses the uncertainty in the binormal parameters to determine the critical FAR. Both tests are demonstrated with UXO classification examples, and both approaches are recommended for testing operating points.
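For context, the binormal model gives the ROC curve in closed form. The sketch below evaluates TPR as a function of FAR under assumed binormal parameters; the threshold-crossing search is a crude stand-in for the paper's critical-FAR construction, and the values of a and b are illustrative:

```python
import numpy as np
from scipy.stats import norm

def binormal_roc(far, a, b):
    """Binormal ROC curve: TPR = Phi(a + b * Phi^{-1}(FAR))."""
    return norm.cdf(a + b * norm.ppf(far))

# Find the smallest FAR at which the modeled TPR exceeds 0.999
# (a rough proxy for "all true positives found").
far_grid = np.linspace(1e-4, 1.0, 10_000)
tpr = binormal_roc(far_grid, a=1.5, b=0.8)
critical_far = far_grid[np.argmax(tpr >= 0.999)]
print(critical_far)
```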

7.
Recent literature in empirical finance has given serious consideration to varying-coefficient specifications and has used sequences of tests to select a particular varying-coefficient model for the data. This paper explores the use of a sequence of point optimal tests in this context and finds power to be the driving factor in correct model selection.

8.
In this paper, we consider testing the location parameter with multilevel (or hierarchical) data. A general family of weighted test statistics is introduced. This family includes extensions to multilevel data of familiar procedures such as the t, sign, and Wilcoxon signed-rank tests. Under mild assumptions, the test statistics have a null limiting normal distribution, which facilitates their use. The relative merits of selected members of the family are investigated theoretically, by deriving their asymptotic relative efficiency (ARE), and empirically, via a simulation study. It is shown that the performance of a test depends on the cluster configurations and on the intracluster correlations. Explicit formulas for optimal weights and a discussion of the impact of omitting a level are provided for two- and three-level data. It is shown that using appropriate weights can greatly improve the performance of the tests. Finally, the use of the new tests is illustrated with a real data example.
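A minimal sketch of a weighted sign test for clustered data (the equal weights and the variance estimate below are simple placeholder choices, not the paper's optimal weights):

```python
import numpy as np

def weighted_sign_test(clusters, theta0=0.0, weights=None):
    """Weighted sign test of H0: location = theta0 for clustered data.

    clusters: list of 1-d arrays, one per cluster. Under H0 the
    cluster-level sign sums are independent with mean zero; their
    observed squares give a simple variance estimate. Returns an
    asymptotic z-statistic.
    """
    s = np.array([np.sign(c - theta0).sum() for c in clusters])
    w = np.ones(len(s)) if weights is None else np.asarray(weights)
    return (w * s).sum() / np.sqrt((w**2 * s**2).sum())

rng = np.random.default_rng(4)
clusters = [rng.normal(0.3, 1, rng.integers(2, 8)) for _ in range(40)]
print(weighted_sign_test(clusters))  # markedly positive under this shift
```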

9.
The problem of approximating an interval null (imprecise) hypothesis test by a point null (precise) hypothesis test under a Bayesian framework is considered. In the literature, some of the methods for solving this problem have used the Bayes factor for testing a point null and justified it as an approximation to the interval null. However, many authors recommend evaluating tests through the posterior odds, a Bayesian measure of evidence against the null hypothesis. It is of interest, then, to determine whether similar results hold when using the posterior odds as the primary measure of evidence. For the prior distributions under which the approximation holds with respect to the Bayes factor, it is shown that the posterior odds for testing the point null hypothesis do not approximate the posterior odds for testing the interval null hypothesis. In fact, in order to obtain convergence of the posterior odds, a number of restrictive conditions need to be placed on the prior structure. Furthermore, under a non-symmetrical prior setup, neither the Bayes factor nor the posterior odds for testing the imprecise hypothesis converges to the Bayes factor or posterior odds, respectively, for testing the precise hypothesis. To rectify this dilemma, it is shown that constraints need to be placed on the priors. In both situations, the class of priors constructed to ensure convergence of the posterior odds is not practically useful, thus questioning, from a Bayesian perspective, the appropriateness of point null testing in a problem better represented by an interval null. The theories developed are also applied to an epidemiological data set from White et al. (Can. Vet. J. 30 (1989) 147–149) in order to illustrate and study priors for which the point null hypothesis test approximates the interval null hypothesis test. AMS Classification: Primary 62F15; Secondary 62A15
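To fix ideas, here is a hedged numerical sketch comparing the Bayes factor for the point null θ = 0 with that for the interval null |θ| ≤ ε, under a normal likelihood with known variance and a single N(0, τ²) prior split across both hypotheses. All constants are illustrative, not from the paper:

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

sigma, tau, x = 1.0, 2.0, 1.5       # illustrative constants

def f(theta):                        # likelihood of the observed x
    return norm.pdf(x, loc=theta, scale=sigma)

g = norm(0, tau).pdf                 # common prior density for theta

def bf_interval(eps):
    """Bayes factor for H0: |theta| <= eps vs H1: |theta| > eps."""
    num, _ = integrate.quad(lambda t: f(t) * g(t), -eps, eps)
    lo, _ = integrate.quad(lambda t: f(t) * g(t), -np.inf, -eps)
    hi, _ = integrate.quad(lambda t: f(t) * g(t), eps, np.inf)
    p0 = norm(0, tau).cdf(eps) - norm(0, tau).cdf(-eps)
    return (num / p0) / ((lo + hi) / (1 - p0))

# Point-null Bayes factor: f(x|0) over the N(0, sigma^2 + tau^2) marginal.
bf_point = f(0.0) / norm.pdf(x, 0, np.sqrt(sigma**2 + tau**2))
for eps in (0.5, 0.1, 0.01):
    print(eps, bf_interval(eps), "point:", bf_point)
```

Shrinking ε shows the interval-null Bayes factor approaching the point-null value in this symmetric setup, the kind of convergence the paper shows can fail for the posterior odds.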

10.
This paper presents a proportional-reduction-in-impurity (PRI) measure of categorical association that employs application-dependent loss functions, making the measure widely applicable. The well-known proportional-reduction-in-error (PRE) measure is shown to be a special case of the new PRI measure. Moreover, the asymptotic variance of the maximum likelihood estimator (MLE) of the measure is derived to facilitate its use for statistical inference. An extension of the PRI measure to compositional association is made to show that it can have a variety of applications. Selected loss functions are treated to illustrate the derivation of the measure.
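For reference, the classical PRE measure that the PRI measure generalizes is exemplified by Goodman and Kruskal's lambda, sketched below for a two-way contingency table (the table is made-up data):

```python
import numpy as np

def goodman_kruskal_lambda(table):
    """Goodman-Kruskal lambda: proportional reduction in prediction
    error for the column variable once the row category is known.
    The error here is 0-1 loss; the PRI measure replaces it with an
    application-dependent loss."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    err_marginal = n - table.sum(axis=0).max()      # best guess ignoring rows
    err_conditional = n - table.max(axis=1).sum()   # best guess per row
    return (err_marginal - err_conditional) / err_marginal

table = [[30, 5, 5], [4, 40, 6], [8, 2, 50]]
print(goodman_kruskal_lambda(table))
```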

11.
The point triserial correlation coefficient is defined and, under appropriate order restrictions, an exact test that this correlation coefficient equals zero is developed. The power function of that test is derived and partially tabulated. The general problem of testing for homogeneity of means under ordered alternatives is discussed. The available procedures for performing such tests are considered, are seen to provide alternative approaches to the test developed herein, and are compared with that test. An exact test for the equality of dependent point triserial correlation coefficients is described through application of a procedure suggested by Wolfe (1976).

12.
Approximations to the power functions of the likelihood ratio tests of homogeneity of normal means against the simple loop ordering at slippage alternatives are considered. If a researcher knows which mean is smallest and which is largest, but does not know how the other means are ordered, then a simple loop ordering is appropriate. The accuracy of several moment approximations is studied for the case of known variances, and it is found that, for powers in the range typically of interest, the two-moment approximation is quite adequate. Approximations based on mixtures of noncentral F variables are developed for the case of unknown variances. Critical values of the test statistics are also tabulated for selected levels of significance.

13.
A simulation comparison is made of Mann–Whitney U test extensions recently proposed for simple cluster samples or for repeated ordinal responses. These are based on two approaches: the permutation approach of Fay and Gennings (four tests, two of them exact) and Edwardes' approach (two asymptotic tests, one new). Edwardes' approach permits confidence interval estimation, unlike the permutation approach. An appropriate parameter for estimation is P(X<Y)−P(X>Y), where X is the rank of a response from group 1 and Y is the rank of a response from group 2. The permutation tests are shown to be unsuitable for some survey data, since they are sensitive to a difference in intracluster correlations when there is no distribution difference between groups at the individual level. The exact permutation tests are of little use for fewer than seven clusters, precisely where they are most needed. Otherwise, the permutation tests perform well.
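The parameter P(X<Y)−P(X>Y) has a direct plug-in estimate from all between-group pairs; a small sketch follows (it ignores clustering, which the estimators compared in the paper account for):

```python
import numpy as np

def delta_hat(x, y):
    """Plug-in estimate of P(X < Y) - P(X > Y) from all cross-pairs.

    Equals 2*AUC - 1 with ties split, ranging from -1 to 1; zero
    indicates no stochastic ordering between the two groups.
    """
    diff = np.subtract.outer(y, x)   # y_j - x_i for every pair
    return np.mean(diff > 0) - np.mean(diff < 0)

rng = np.random.default_rng(5)
print(delta_hat(rng.normal(0, 1, 80), rng.normal(0.5, 1, 70)))
```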

14.
Adaptive designs are widely used in clinical trials. In this paper, we consider the problem of estimating the mean of the selected normal population in two-stage adaptive designs. Under the LINEX and L2 loss functions, admissibility and minimax results are derived for some location-invariant estimators of the selected normal mean. The naive sample mean estimator is shown to be inadmissible under the LINEX loss function and not minimax under either loss function.
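For readers unfamiliar with it, the LINEX (linear-exponential) loss penalizes over- and under-estimation asymmetrically; a standard form (the paper's parametrization may differ) is:

```latex
L(\delta, \theta) = b\left[e^{a(\delta - \theta)} - a(\delta - \theta) - 1\right],
\qquad a \neq 0,\ b > 0,
```

with the loss growing exponentially on one side of θ and roughly linearly on the other; for small |a| it reduces, up to scale, to the squared error (L2) loss, since e^{aΔ} − aΔ − 1 ≈ a²Δ²/2.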

15.
A new family of statistics is proposed to test for the presence of serial correlation in linear regression models. The tests are based on partial sums of lagged cross-products of regression residuals, which define a class of interesting Gaussian processes. These processes are characterized in terms of the regressor functions, the serial-correlation structure, the distribution of the noise process, and the order of the lag of the cross-products of residuals. It is shown that these four factors affect the lagged residual processes independently. Large-sample distributional results are presented for the test statistics under the null hypothesis of no serial correlation and for alternatives from a range of interesting hypotheses. Some indication of the circumstances in which the asymptotic results apply in finite-sample situations, and of those in which they should be applied with caution, is obtained through a simulation study. Tables of selected quantiles of the proposed tests are also given. The tests are illustrated with two examples taken from the empirical literature. It is also proposed that plots of lagged residual processes be used as diagnostic tools to gain insight into the correlation structure of residuals derived from regression fits.
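A sketch of the lag-k partial-sum process of residual cross-products (the normalization here is an assumption; the paper characterizes the limiting Gaussian process precisely):

```python
import numpy as np

def lagged_residual_process(resid, k):
    """Partial sums of lag-k cross-products of residuals, scaled by
    sqrt(T); plotted against t/T, this should resemble a mean-zero
    Gaussian process when there is no serial correlation."""
    resid = np.asarray(resid, dtype=float)
    prods = resid[k:] * resid[:-k]
    return np.cumsum(prods) / np.sqrt(len(resid))

# Residuals from a straight-line fit to simulated data
rng = np.random.default_rng(6)
t = np.linspace(0, 1, 200)
y = 2 + 3 * t + rng.normal(0, 0.5, 200)
coef = np.polyfit(t, y, 1)
resid = y - np.polyval(coef, t)
print(lagged_residual_process(resid, k=1)[-1])  # endpoint of the process
```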

16.
For a sample taken from an i.i.d. sequence of Poisson point processes with a not necessarily finite, unknown intensity measure, the arithmetic mean is shown to be an estimator that is consistent uniformly over certain classes of functions. The method is a reduction to the case of a finite intensity measure, which in turn can be dealt with using empirical process methods. A functional central limit theorem is also established in this context.

17.
For many applications involving compositional data, it is necessary to establish a valid measure of distance, yet when essential zeros are present, traditional distance measures are problematic. In quantitative fatty acid signature analysis (QFASA), compositional diet estimates are produced that often contain many zeros. In order to test for a difference in diet between two populations of predators using QFASA diet estimates, a legitimate measure of distance for the test statistic is necessary. Since ecologists using QFASA must first select the potential prey species in the predator's diet, the chosen measure of distance should be such that the distance between samples does not decrease as the number of species considered increases, a property known in general as subcompositional coherence. In this paper we compare three measures of distance for compositional data that can handle zeros but do not satisfy some of the well-accepted principles of compositional data analysis. For compositional diet estimates, the most relevant of these principles is subcompositional coherence, and we show that this property may be approximately satisfied. Based on the results of a simulation study and an application to real-life QFASA diet estimates for grey seals, we recommend the chi-square measure of distance.
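One common form of the chi-square distance for compositions, borrowed from correspondence analysis (whether it matches the authors' exact definition is an assumption), divides squared component differences by average component proportions, which keeps zeros harmless as long as no component is zero on average:

```python
import numpy as np

def chisq_distance(x, y, col_means):
    """Chi-square distance between two compositions.

    x, y: compositions (closed to sum 1); col_means: average proportion
    of each component over the whole dataset. Zeros in x or y are fine,
    but col_means must be strictly positive.
    """
    x, y, c = (np.asarray(v, dtype=float) for v in (x, y, col_means))
    return np.sqrt(np.sum((x - y) ** 2 / c))

# Made-up diet compositions over four prey species
diets = np.array([[0.5, 0.3, 0.2, 0.0],
                  [0.4, 0.0, 0.4, 0.2],
                  [0.6, 0.2, 0.1, 0.1]])
c = diets.mean(axis=0)
print(chisq_distance(diets[0], diets[1], c))
```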

18.
The paper considers non-parametric maximum likelihood estimation of the failure time distribution for interval-censored data subject to misclassification. Such data can arise from two types of observation scheme: either observations continue until the first positive test result, or tests continue regardless of the test results. In the former case, the misclassification probabilities must be known, whereas in the latter case, joint estimation of the event-time distribution and the misclassification probabilities is possible. The regions on which the maximum likelihood estimate can have support are derived. Algorithms for computing the maximum likelihood estimate are investigated, and it is shown that algorithms appropriate for computing non-parametric mixing distributions outperform an iterative convex minorant algorithm in terms of time to absolute convergence. A profile likelihood approach is proposed for joint estimation. The methods are illustrated on a data set relating to the onset of cardiac allograft vasculopathy in post-heart-transplant patients.

19.
This article considers the problem of testing the equality of the parameters of two exponential distributions having a common known coefficient of variation, under both unconditional and conditional setups. Unconditional tests based on BLUEs and the LRT are considered. Using Fisher's Conditionality Principle, a UMP conditional test for the one-sided alternative is derived by conditioning on an ancillary statistic. This test is seen to be uniformly more powerful than the unconditional tests over certain ranges of the ancillary. Simulation studies of the power functions of the tests are carried out for this comparison.

20.
A common statistical problem involves testing a K-dimensional parameter vector. In both parametric and semiparametric settings, two types of directional tests, linear combination tests and constrained tests, are frequently used instead of omnibus tests in the hope of achieving greater power against specific alternatives. In this paper, we consider the relationship between these directional tests, as well as their relationship to omnibus tests. Every constrained directional test is shown to be asymptotically equivalent to a specific linear combination test under a sequence of contiguous alternatives, and vice versa. Even when the direction of the alternative is known, the constrained test will in general not be optimal unless the objective function used to derive it is efficient. For an arbitrary alternative, insight into the power characteristics of directional tests relative to omnibus tests can be gained from a chi-square partition of the omnibus test.
