Similar Documents
20 similar documents were retrieved (search time: 15 ms).
1.
This article proposes consistent nonparametric methods for testing the null hypothesis of Lorenz dominance. The methods are based on a class of statistical functionals defined over the difference between the Lorenz curves for two samples of welfare-related variables. We present two specific test statistics belonging to the general class and derive their asymptotic properties. As the limiting distributions of the test statistics are nonstandard, we propose and justify bootstrap methods of inference. We provide methods appropriate for the case where the two samples are independent as well as for the case where the two samples represent different measures of welfare for one set of individuals. The small-sample performance of the two tests is examined and compared in the context of a Monte Carlo study and an empirical analysis of income and consumption inequality.
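As a rough illustration of the kind of computation involved, the sketch below compares two empirical Lorenz curves with a sup-norm statistic and approximates its distribution with a recentred bootstrap. This is only a minimal sketch under simplifying assumptions; the functional, recentring scheme, and grid are illustrative and not the authors' exact statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz(sample, grid):
    """Empirical Lorenz curve evaluated at the population shares in `grid`."""
    x = np.sort(np.asarray(sample, dtype=float))
    p = np.arange(1, x.size + 1) / x.size        # cumulative population shares
    L = np.cumsum(x) / x.sum()                   # cumulative welfare shares
    return np.interp(grid, p, L, left=0.0)

def lorenz_dominance_test(a, b, n_boot=999, n_grid=100):
    """Bootstrap p-value for H0: sample `a` Lorenz-dominates sample `b`.

    Statistic: sup_p [L_b(p) - L_a(p)]; the bootstrap recentres at the
    observed difference (least-favourable configuration L_a = L_b)."""
    grid = np.linspace(0.01, 0.99, n_grid)
    diff = lorenz(b, grid) - lorenz(a, grid)
    t_obs = diff.max()
    t_boot = np.empty(n_boot)
    for i in range(n_boot):
        a_star = rng.choice(a, size=len(a), replace=True)
        b_star = rng.choice(b, size=len(b), replace=True)
        diff_star = lorenz(b_star, grid) - lorenz(a_star, grid)
        t_boot[i] = (diff_star - diff).max()     # recentred bootstrap statistic
    return t_obs, (t_boot >= t_obs).mean()

# Illustrative income-like samples from two lognormal populations
a = rng.lognormal(mean=0.0, sigma=0.5, size=400)   # more equal
b = rng.lognormal(mean=0.0, sigma=0.9, size=400)   # less equal
print(lorenz_dominance_test(a, b))
```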

2.
When carrying out data analysis, a practitioner has to decide on a suitable test for hypothesis testing and, as such, would look for a test with high relative power. Tests for paired data are usually conducted using the t-test, the Wilcoxon signed-rank test, or the sign test. Some adaptive tests have also been suggested in the literature by O'Gorman, who found that no single member of that family performed well for all sample sizes and tail weights; hence, he recommended that the choice of a member of that family be made depending on both the sample size and the tail weight. In this paper, we propose a new adaptive test. Simulation studies for n=25 and n=50 show that it works well for nearly all tail weights, ranging from the light-tailed beta and uniform distributions to the t(4) distribution. More precisely, our test has both robustness of level (keeping the empirical levels close to the nominal level) and efficiency of power. The results of our study contribute to the area of statistical inference.
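The new adaptive test and O'Gorman's family are not reproduced here; the sketch below merely illustrates, with invented cut-offs, the general idea of selecting among the paired t-test, the Wilcoxon signed-rank test, and the sign test according to an estimated tail weight of the differences.

```python
import numpy as np
from scipy import stats

def tail_weight(d):
    """Crude tail-weight measure of the paired differences: ratio of an
    outer to an inner interpercentile range (hypothetical selector)."""
    q = np.percentile(d, [2.5, 25, 75, 97.5])
    return (q[3] - q[0]) / (q[2] - q[1])

def adaptive_paired_test(x, y, light=2.0, heavy=3.5):
    """Pick a paired test according to the estimated tail weight.
    The cut-offs `light` and `heavy` are illustrative, not from the paper."""
    d = np.asarray(x) - np.asarray(y)
    w = tail_weight(d)
    if w < light:                       # light tails: t-test is efficient
        return "t-test", stats.ttest_rel(x, y).pvalue
    elif w < heavy:                     # moderate tails: Wilcoxon signed rank
        return "wilcoxon", stats.wilcoxon(x, y).pvalue
    else:                               # heavy tails: sign test
        pos = int(np.sum(d > 0))
        n = int(np.sum(d != 0))
        return "sign", stats.binomtest(pos, n, 0.5).pvalue

rng = np.random.default_rng(1)
x = rng.standard_t(df=4, size=25) + 0.5
y = rng.standard_t(df=4, size=25)
print(adaptive_paired_test(x, y))
```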

3.
Data envelopment analysis (DEA) and free disposal hull (FDH) estimators are widely used to estimate efficiency of production. Practitioners use DEA estimators far more frequently than FDH estimators, implicitly assuming that production sets are convex. Moreover, use of the constant returns to scale (CRS) version of the DEA estimator requires an assumption of CRS. Although bootstrap methods have been developed for making inference about the efficiencies of individual units, until now no methods exist for making consistent inference about differences in mean efficiency across groups of producers or for testing hypotheses about model structure such as returns to scale or convexity of the production set. We use central limit theorem results from our previous work to develop additional theoretical results permitting consistent tests of model structure and provide Monte Carlo evidence on the performance of the tests in terms of size and power. In addition, the variable returns to scale version of the DEA estimator is proved to attain the faster convergence rate of the CRS-DEA estimator under CRS. Using a sample of U.S. commercial banks, we test and reject convexity of the production set, calling into question results from numerous banking studies that have imposed convexity assumptions. Supplementary materials for this article are available online.
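For readers unfamiliar with the estimators being compared, the input-oriented DEA efficiency of a single unit is the value of a small linear program; the sketch below is a textbook variable-returns-to-scale formulation solved with scipy.optimize.linprog (dropping the convexity constraint sum(lambda) = 1 yields the CRS version), not the inference procedures developed in the article.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, o, vrs=True):
    """Input-oriented DEA efficiency of unit `o`.

    X : (n_units, n_inputs) input matrix, Y : (n_units, n_outputs) output matrix.
    Decision variables z = [theta, lambda_1, ..., lambda_n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                      # minimise theta
    # inputs:  sum_j lambda_j x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    # outputs: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    A_eq, b_eq = (np.r_[0.0, np.ones(n)].reshape(1, -1), [1.0]) if vrs else (None, None)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]                                  # efficiency score in (0, 1]

rng = np.random.default_rng(2)
X = rng.uniform(1, 10, size=(30, 2))                 # 30 units, 2 inputs
Y = (X @ np.array([[0.5], [0.7]])) * rng.uniform(0.6, 1.0, size=(30, 1))  # 1 output
print([round(dea_input_efficiency(X, Y, o), 3) for o in range(5)])
```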

4.
In this paper, we investigate the empirical distribution and the statistical properties of maximum likelihood (ML) unit-root t-statistics computed from data sampled from a first-order autoregressive (AR) process with level-dependent conditional heteroskedasticity (LDCH). This issue is of particular importance for applications to interest-rate time series. Unfortunately, the technical complexity associated with LDCH patterns does not permit a feasible theoretical analysis, and there is no formal knowledge about the finite-sample size and power behaviour of the ML test in this context. Our analysis provides valuable guidelines for applied work and directions for future research.
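The ML analysis itself is not reproduced here; the sketch below only illustrates the type of Monte Carlo experiment involved: simulate a unit-root AR(1) whose innovation variance depends on the level of the series (volatility proportional to |y_{t-1}|^gamma, in the spirit of CKLS-type interest-rate models), compute an OLS Dickey–Fuller t-statistic, and tabulate its simulated distribution. The parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ldch_ar1(T, rho=1.0, sigma=0.1, gamma=0.5, y0=1.0):
    """AR(1) with level-dependent conditional heteroskedasticity:
    y_t = rho*y_{t-1} + sigma*|y_{t-1}|**gamma * eps_t."""
    y = np.empty(T + 1)
    y[0] = y0
    eps = rng.standard_normal(T)
    for t in range(1, T + 1):
        y[t] = rho * y[t - 1] + sigma * abs(y[t - 1]) ** gamma * eps[t - 1]
    return y

def df_tstat(y):
    """OLS t-statistic for H0: phi = 0 in  dy_t = alpha + phi*y_{t-1} + u_t."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.c_[np.ones_like(ylag), ylag]
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

# Empirical null distribution of the t-statistic under a unit root with LDCH errors
stats_null = np.array([df_tstat(simulate_ldch_ar1(250)) for _ in range(2000)])
print(np.percentile(stats_null, [1, 5, 10]))   # simulated critical values
```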

5.
In this paper, we have reviewed 25 test procedures that are widely reported in the literature for testing the hypothesis of homogeneity of variances under various experimental conditions. Since a theoretical comparison was not possible, a simulation study has been conducted to compare the performance of the test statistics in terms of robustness and empirical power. Monte Carlo simulation was performed for various symmetric and skewed distributions, number of groups, sample size per group, degree of group size inequalities, and degree of variance heterogeneity. Using simulation results and based on the robustness and power of the tests, some promising test statistics are recommended for practitioners.
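Three of the commonly compared procedures are available directly in scipy; the sketch below shows the shape of such a size/power simulation, using the Bartlett, Brown–Forsythe (Levene with median centering), and Fligner–Killeen tests as stand-ins for the full set of 25 procedures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def rejection_rates(draw_groups, n_rep=2000, alpha=0.05):
    """Empirical rejection rates of three variance-homogeneity tests."""
    tests = {"bartlett": stats.bartlett,
             "brown-forsythe": lambda *g: stats.levene(*g, center="median"),
             "fligner": stats.fligner}
    hits = {name: 0 for name in tests}
    for _ in range(n_rep):
        groups = draw_groups()
        for name, test in tests.items():
            if test(*groups).pvalue < alpha:
                hits[name] += 1
    return {name: h / n_rep for name, h in hits.items()}

# Size: equal variances, skewed (chi-square) populations, 3 groups of 20
size = rejection_rates(lambda: [rng.chisquare(4, 20) for _ in range(3)])
# Power: third group has inflated variance
power = rejection_rates(lambda: [rng.chisquare(4, 20), rng.chisquare(4, 20),
                                 2.0 * rng.chisquare(4, 20)])
print("size:", size)
print("power:", power)
```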

6.
This article considers several test statistics for testing hypotheses about the population signal-to-noise ratio, based on parametric, nonparametric, and modified methods. To compare the performance of the proposed test statistics, a simulation study was conducted under both symmetric and skewed distributions. The performance of the test statistics is compared in terms of empirical size and power. For large samples, some of the proposed test statistics clearly perform better in the sense of higher power, and these are recommended to researchers.

7.
In this paper, we first consider the problem of testing that two unknown distributions are identical against the alternative that one is more IFRA than the other, and we propose a new test that is asymptotically normal and consistent. Next, we prove that the beta family of distributions is ordered according to the more-IFRA ordering. The empirical power of the proposed test is simulated for some specific families of distributions, such as the beta, gamma, and Weibull families, which are ordered with respect to the more-IFRA order. Finally, we apply our test to some real data sets in the context of reliability.

8.
The statistical inference problem for effect size indices is addressed using a series of independent two-armed experiments from k arbitrary populations. An effect size parameter simply quantifies the difference between two groups; it is a meaningful index when data are measured on different scales. In the context of bivariate statistical models, we define estimators of the effect size indices and propose large-sample procedures to test the homogeneity of these indices. The null and non-null distributions of the proposed test statistics are derived and their performance is evaluated via Monte Carlo simulation. Further, three types of interval estimation of the proposed indices are considered for both combined and uncombined data. Lower and upper confidence limits for the actual effect size indices are obtained and compared via bootstrapping. It is found that the length of the intervals based on the combined effect size estimator is almost half that of the intervals based on the uncombined effect size estimators. Finally, we illustrate the proposed procedures for hypothesis testing and interval estimation using a real data set.
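The article's particular indices and bivariate models are not spelled out in the abstract; purely as an illustration of testing homogeneity of effect sizes across k two-armed experiments, the sketch below uses the standardized mean difference (Cohen's d) with its usual large-sample variance and a conventional Q-type chi-square statistic.

```python
import numpy as np
from scipy import stats

def cohens_d(x1, x2):
    """Standardized mean difference with pooled SD and its large-sample variance."""
    n1, n2 = len(x1), len(x2)
    sp2 = ((n1 - 1) * np.var(x1, ddof=1) + (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2)
    d = (np.mean(x1) - np.mean(x2)) / np.sqrt(sp2)
    var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    return d, var_d

def homogeneity_test(experiments):
    """Q statistic for H0: all k effect sizes are equal (chi-square with k-1 df)."""
    d, v = np.array([cohens_d(x1, x2) for x1, x2 in experiments]).T
    w = 1.0 / v
    d_bar = np.sum(w * d) / np.sum(w)      # inverse-variance pooled ('combined') estimate
    Q = np.sum(w * (d - d_bar) ** 2)
    return Q, stats.chi2.sf(Q, df=len(d) - 1)

rng = np.random.default_rng(5)
experiments = [(rng.normal(0.4 * k, 1, 60), rng.normal(0, 1, 60)) for k in range(1, 4)]
print(homogeneity_test(experiments))
```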

9.
Many multivariate statistical procedures are based on the assumption of normality, and different approaches have been proposed for testing this assumption. The vast majority of these tests, however, are designed exclusively for cases where the sample size n is larger than the dimension p of the variable, and the null distributions of their test statistics are usually derived under the asymptotic regime in which p is fixed and n increases. In this article, a test that uses principal components to test for nonnormality is proposed for cases where p/n → c. The power and size of the test are examined through Monte Carlo simulations, and it is argued that the test remains well behaved and consistent against most nonnormal distributions under this type of asymptotics.

10.
Bayes methodology provides posterior distribution functions based on parametric likelihoods adjusted for prior distributions. A distribution-free alternative to the parametric likelihood is the use of empirical likelihood (EL) techniques, well known in the context of nonparametric testing of statistical hypotheses. Empirical likelihoods have been shown to exhibit many of the properties of conventional parametric likelihoods. In this paper, we propose and examine Bayes factor (BF) methods derived via the EL ratio approach. Following Kass and Wasserman (1995), we consider Bayes factor-type decision rules in the context of standard statistical testing techniques. We show that the asymptotic properties of the proposed procedure are similar to the classical BF's asymptotic operating characteristics. Although we focus on hypothesis testing, the proposed approach also yields confidence interval estimators of unknown parameters. Monte Carlo simulations were conducted to evaluate the theoretical results as well as to demonstrate the power of the proposed test.

11.
In this article, tests are developed which can be used to investigate the goodness-of-fit of the skew-normal distribution in the context most relevant to the data analyst, namely that in which the parameter values are unknown and are estimated from the data. We consider five test statistics chosen from the broad Cramér–von Mises and Kolmogorov–Smirnov families, based on measures of disparity between the distribution function of a fitted skew-normal population and the empirical distribution function. The sampling distributions of the proposed test statistics are approximated using Monte Carlo techniques and summarized in easy-to-use tabular form. We also present results obtained from simulation studies designed to explore the true size of the tests and their power against various asymmetric alternative distributions.
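A minimal sketch of a Kolmogorov–Smirnov-type statistic with estimated parameters is shown below, using scipy's skewnorm family and a parametric bootstrap in place of the article's tabulated critical values; the exact five statistics of the article are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def ks_skewnorm(x):
    """KS distance between the empirical CDF and a fitted skew-normal CDF."""
    a, loc, scale = stats.skewnorm.fit(x)
    d = stats.kstest(x, lambda t: stats.skewnorm.cdf(t, a, loc, scale)).statistic
    return d, (a, loc, scale)

def gof_skewnorm(x, n_boot=500):
    """Parametric-bootstrap p-value for H0: the data are skew-normal
    (parameters are re-estimated in every bootstrap sample, as they are unknown)."""
    d_obs, (a, loc, scale) = ks_skewnorm(x)
    d_boot = np.empty(n_boot)
    for i in range(n_boot):
        xb = stats.skewnorm.rvs(a, loc, scale, size=len(x), random_state=rng)
        d_boot[i], _ = ks_skewnorm(xb)
    return d_obs, (d_boot >= d_obs).mean()

x = stats.skewnorm.rvs(4, size=200, random_state=rng)   # data from the null family
print(gof_skewnorm(x))
```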

12.
In this article, we consider a linear signed rank test for non-nested distributions in the context of model selection. We introduce a new test and show that it is asymptotically more efficient than the Vuong test and the test based on the B statistic introduced by Clarke; here, however, we let the magnitude of the data contribute, which improves the performance of the test statistic. We also show that the test is unbiased. Simulation results show that the rank test has greater statistical power than the Vuong test when the underlying distributions are symmetric.

13.
In the context of modern portfolio theory, we compare the out-of-sample performance of eight investment strategies based on statistical methods with the out-of-sample performance of a family of trivial strategies. A wide range of approaches is considered in this work, including the traditional sample-based approach, several minimum-variance techniques, a shrinkage approach, and a minimax approach. In contrast to similar studies in the literature, we also consider short-selling constraints and a risk-free asset. We provide a way to extend the concept of minimum-variance strategies to the setting with short-selling constraints. A main drawback of most empirical studies on this topic is the use of simple testing procedures that do not account for the effects of multiple testing. For that reason we conduct several hypothesis tests proposed in the multiple-testing literature. We test whether it is possible to beat a trivial strategy with at least one of the non-trivial strategies, whether the trivial strategy is better than every non-trivial strategy, and which of the non-trivial strategies are significantly outperformed by naive diversification. The empirical part of our study is conducted using US stock returns from the last four decades, obtained via the CRSP database.
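One of the simpler building blocks, a minimum-variance portfolio under a short-selling constraint, can be written as a small quadratic program; the sketch below (with invented return data) illustrates that ingredient only, not the shrinkage, minimax, or multiple-testing procedures of the study.

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(returns, allow_short=False):
    """Minimum-variance portfolio weights from a T x N matrix of asset returns.
    With allow_short=False, weights are constrained to be non-negative."""
    cov = np.cov(returns, rowvar=False)
    n = cov.shape[0]
    w0 = np.full(n, 1.0 / n)                               # start from naive 1/N
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    bounds = None if allow_short else [(0.0, 1.0)] * n
    res = minimize(lambda w: w @ cov @ w, w0, bounds=bounds,
                   constraints=cons, method="SLSQP")
    return res.x

rng = np.random.default_rng(7)
returns = rng.multivariate_normal(mean=[0.01, 0.008, 0.012],
                                  cov=[[0.04, 0.01, 0.00],
                                       [0.01, 0.03, 0.01],
                                       [0.00, 0.01, 0.05]], size=240)
print(np.round(min_variance_weights(returns), 3))            # long-only weights
print(np.round(min_variance_weights(returns, allow_short=True), 3))
```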

14.
This paper describes a statistical method for estimating data envelopment analysis (DEA) score confidence intervals for individual organizations or other entities. This method applies statistical panel data analysis, which provides proven and powerful methodologies for diagnostic testing and for estimation of confidence intervals. DEA scores are tested for violations of the standard statistical assumptions including contemporaneous correlation, serial correlation, heteroskedasticity and the absence of a normal distribution. Generalized least squares statistical models are used to adjust for violations that are present and to estimate valid confidence intervals within which the true efficiency of each individual decision-making unit occurs. This method is illustrated with two sets of panel data, one from large US urban transit systems and the other from a group of US hospital pharmacies.

15.
Powerful entropy-based tests for normality, uniformity, and exponentiality have been well addressed in the statistical literature. The density-based empirical likelihood approach improves the performance of these goodness-of-fit tests by forming them into approximate likelihood ratios. This method is extended here to develop two-sample empirical likelihood approximations to optimal parametric likelihood ratios, resulting in an efficient test based on sample entropy. The proposed distribution-free two-sample test is shown to be very competitive with well-known nonparametric tests. For example, the new test has high and stable power for detecting a nonconstant shift in the two-sample problem, where Wilcoxon's test may break down completely. This is partly due to the inherent structure developed within Neyman–Pearson-type lemmas. The results of an extensive Monte Carlo analysis and a real-data example support our theoretical results. The Monte Carlo simulation study indicates that the proposed test compares favorably with standard procedures over a wide range of null and alternative distributions.
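The density-based empirical likelihood construction is not reproduced here; the sketch below only shows the spacing-based (Vasicek-type) entropy estimator on which such tests build, wrapped in a generic permutation two-sample test whose statistic, the difference of estimated entropies, is purely illustrative.

```python
import numpy as np

def vasicek_entropy(x, m=None):
    """Spacing-based estimate of differential entropy (Vasicek-type)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    m = m or max(1, int(np.sqrt(n)))
    upper = x[np.minimum(np.arange(n) + m, n - 1)]   # x_(i+m), clipped at the ends
    lower = x[np.maximum(np.arange(n) - m, 0)]       # x_(i-m), clipped at the ends
    return np.mean(np.log(n / (2 * m) * (upper - lower)))

def perm_two_sample(x, y, n_perm=999, seed=0):
    """Permutation p-value for a two-sample comparison using the
    (illustrative) statistic |H(x) - H(y)|."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    t_obs = abs(vasicek_entropy(x) - vasicek_entropy(y))
    t_perm = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        t_perm[i] = abs(vasicek_entropy(perm[:len(x)]) - vasicek_entropy(perm[len(x):]))
    return t_obs, (t_perm >= t_obs).mean()

rng = np.random.default_rng(8)
x = rng.normal(0, 1.0, 100)
y = rng.normal(0, 1.8, 100)           # same mean, different scale
print(perm_two_sample(x, y))
```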

16.
We introduce estimation and test procedures through divergence minimization for models satisfying linear constraints with unknown parameter. These procedures extend the empirical likelihood (EL) method and share common features with generalized empirical likelihood approach. We treat the problems of existence and characterization of the divergence projections of probability distributions on sets of signed finite measures. We give a precise characterization of duality, for the proposed class of estimates and test statistics, which is used to derive their limiting distributions (including the EL estimate and the EL ratio statistic) both under the null hypotheses and under alternatives or misspecification. An approximation to the power function is deduced as well as the sample size which ensures a desired power for a given alternative.

17.
This article makes two contributions. First, we outline a simple simulation-based framework for constructing conditional distributions for multifactor and multidimensional diffusion processes, for the case where the functional form of the conditional density is unknown. The distributions can be used, for example, to form predictive confidence intervals for time period t + τ, given information up to period t. Second, we use the simulation-based approach to construct a test for the correct specification of a diffusion process. The suggested test is in the spirit of the conditional Kolmogorov test of Andrews. However, in the present context the null conditional distribution is unknown and is replaced by its simulated counterpart. The limiting distribution of the test statistic is not nuisance parameter-free. In light of this, asymptotically valid critical values are obtained via appropriate use of the block bootstrap. The suggested test has power against a larger class of alternatives than tests that are constructed using marginal distributions/densities. The findings of a small Monte Carlo experiment underscore the good finite sample properties of the proposed test, and an empirical illustration underscores the ease with which the proposed simulation and testing methodology can be applied.
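A minimal sketch of the simulation step is given below, with a one-factor square-root (CIR-type) diffusion and an Euler scheme standing in for the general multifactor case; the specification test itself and the block bootstrap are not reproduced, and all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate_paths(x_t, tau, n_paths=10000, n_steps=50,
                   kappa=0.5, theta=0.05, sigma=0.1):
    """Euler-Maruyama simulation of a CIR-type diffusion
    dX = kappa*(theta - X) dt + sigma*sqrt(X) dW, started at x_t."""
    dt = tau / n_steps
    x = np.full(n_paths, float(x_t))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + kappa * (theta - x) * dt + sigma * np.sqrt(np.maximum(x, 0.0)) * dw
        x = np.maximum(x, 0.0)                 # keep the simulated rate non-negative
    return x

def predictive_interval(x_t, tau, level=0.90):
    """Simulation-based conditional predictive interval for X_{t+tau} given X_t = x_t."""
    x_T = simulate_paths(x_t, tau)
    lo, hi = np.percentile(x_T, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

print(predictive_interval(x_t=0.04, tau=1.0))  # e.g. a short rate of 4%, one year ahead
```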

18.
In connection with assessing how an ongoing development in fisheries management may change fishing activity, evaluation of Total Factor Productivity (TFP) change over a period, including efficiency, scale, and technology changes, is an important tool. The Malmquist index, based on distance functions evaluated with Data Envelopment Analysis (DEA), is often employed to estimate TFP changes. DEA is generally gaining attention for evaluating efficiency and capacity in fisheries. One main criticism of DEA is that it has no statistical foundation, i.e. that it is not possible to make inference about DEA scores or related parameters. The bootstrap method for estimating confidence intervals of deterministic parameters can, however, be applied to estimate confidence intervals for DEA scores. This method is applied in the present paper to assess TFP changes between 1987 and 1999 for the fleet of Danish seiners operating in the North Sea and the Skagerrak.

19.
The impact of class size on student achievement remains an open question despite hundreds of empirical studies and the perception among parents, teachers, and policymakers that larger classes are a significant detriment to student development. This study sheds new light on this ambiguity by utilizing nonparametric tests for stochastic dominance to analyze unconditional and conditional test score distributions across students facing different class sizes. Analyzing the conditional distributions of test scores (purged of observables using class-size-specific returns), we find little causal effect of marginal reductions in class size on test scores within the range of 20 or more students. However, reductions in class size from above 20 students to below 20 students, as well as marginal reductions in classes with fewer than 20 students, increase test scores for students below the median but decrease test scores above the median. This nonuniform impact of class size suggests that compensatory school policies, whereby lower-performing students are placed in smaller classes and higher-performing students are placed in larger classes, improve the academic achievement of not just the lower-performing students but also the higher-performing students.
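The article's dominance tests and the conditioning on observables are not reproduced; as a rough sketch of the underlying comparison, the code below contrasts two empirical score distributions with a one-sided Kolmogorov–Smirnov-type statistic and a recentred bootstrap, under the convention that first-order dominance of "small-class" scores means F_small(z) <= F_large(z) for all z.

```python
import numpy as np

rng = np.random.default_rng(10)

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated at the points in `grid`."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def fsd_test(small, large, n_boot=999):
    """Bootstrap test of H0: `small` first-order dominates `large`
    (F_small <= F_large everywhere); statistic sup_z [F_small - F_large]."""
    grid = np.union1d(small, large)
    diff = ecdf(small, grid) - ecdf(large, grid)
    t_obs = diff.max()
    t_boot = np.empty(n_boot)
    for i in range(n_boot):
        s = rng.choice(small, len(small), replace=True)
        l = rng.choice(large, len(large), replace=True)
        t_boot[i] = ((ecdf(s, grid) - ecdf(l, grid)) - diff).max()   # recentred
    return t_obs, (t_boot >= t_obs).mean()

# Illustrative test score distributions for small vs. large classes
small = rng.normal(52, 10, 300)
large = rng.normal(50, 10, 300)
print(fsd_test(small, large))
```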

20.
Results are given of an empirical power study of three statistical procedures for testing the exponentiality of several independent samples. The test procedures are the Tiku (1974) test, a multi-sample Durbin (1975) test, and a multi-sample Shapiro–Wilk (1972) test. The alternative distributions considered in the study were selected from the gamma, Weibull, Lomax, lognormal, inverse Gaussian, and Burr families of positively skewed distributions. The general behavior of the conditional mean exceedance function is used to classify each alternative distribution. It is shown that Tiku's test generally exhibits greater overall power than either of the other two test procedures. For certain alternative distributions, the Shapiro–Wilk test is superior when the sample sizes are small.
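The Tiku, Durbin, and Shapiro–Wilk procedures are not reproduced here; the sketch below only illustrates the shape of such a power study, substituting a single generic exponentiality test (a Lilliefors-type KS statistic with the rate estimated and a parametric-bootstrap p-value) evaluated against Weibull and lognormal alternatives.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def exp_ks_pvalue(x, n_boot=200):
    """Parametric-bootstrap p-value of a KS test for exponentiality
    with the rate estimated from the data (Lilliefors-type)."""
    scale = np.mean(x)
    d_obs = stats.kstest(x, stats.expon(scale=scale).cdf).statistic
    d_boot = np.empty(n_boot)
    for i in range(n_boot):
        xb = rng.exponential(scale, size=len(x))
        d_boot[i] = stats.kstest(xb, stats.expon(scale=np.mean(xb)).cdf).statistic
    return (d_boot >= d_obs).mean()

def power(draw, n=20, n_rep=300, alpha=0.05):
    """Empirical power of the test above against a given alternative."""
    return np.mean([exp_ks_pvalue(draw(n)) < alpha for _ in range(n_rep)])

print("Weibull(0.7):", power(lambda n: rng.weibull(0.7, n)))
print("lognormal   :", power(lambda n: rng.lognormal(0.0, 1.0, n)))
print("size (expon):", power(lambda n: rng.exponential(1.0, n)))
```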
