Similar Articles
20 similar articles found (search time: 0 ms)
1.
Comment     
Abstract

A semiparametric estimator for evaluating the parameters of data generated under a sample selection process is developed. This estimator is based on the generalized maximum entropy estimator and performs well for small and ill-posed samples. Theoretical and sampling comparisons with parametric and semiparametric estimators are given. This method and standard ones are applied to three small-sample empirical applications of the wage-participation model for female teenage heads of households, immigrants, and Native Americans.

2.
Hay and Olsen (1984) incorrectly argue that a multi-part model, the two-part model used in Duan et al. (1982, 1983), is nested within the sample-selection model. Their proof relies on an unstated restrictive assumption that cannot be satisfied. We provide a counterexample showing that the propensity to use medical care and the level of expense can be positively associated in the two-part model, contrary to their assertion. The conditional specification in the multi-part model is preferable to the unconditional specification in the selection model for modeling actual (vs. potential) outcomes. The selection model also has poor statistical and numerical properties and relies on untestable assumptions. Empirically, the multi-part estimators perform as well as or better than the sample selection estimator for the data set analyzed in Duan et al. (1982, 1983).

3.
This article assesses the small-sample properties of generalized-method-of-moments-based Wald statistics by using (a) a vector white-noise process and (b) an equilibrium business-cycle model as the data-generating mechanisms. In many cases, the small-sample size of the Wald tests exceeds their asymptotic size and increases sharply with the number of hypotheses being jointly tested. We argue that this is mostly due to difficulty in estimating the spectral-density matrix of the residuals. Estimators of this matrix that impose restrictions implied by the model or the null hypothesis substantially improve the properties of the Wald statistics.

4.
Entropy-based goodness-of-fit test statistics can be constructed by estimating the entropy difference or Kullback–Leibler information, and several such test statistics based on various entropy estimators have been proposed. In this article, we first comment on some problems that arise when the moment constraints are not satisfied. We then study the choice of entropy estimator, noting why a test based on a better entropy estimator does not necessarily provide better power.
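One concrete instance of the entropy estimators discussed in the abstract above is Vasicek's spacing-based estimator, which is widely used in entropy goodness-of-fit tests. The sketch below is illustrative only: the function name, the window choice m, and the boundary handling are common conventions, not details taken from the article.

```python
import math
import random

def vasicek_entropy(x, m):
    # Vasicek's spacing-based entropy estimator: average the log of the
    # normalized m-spacings of the order statistics. Indices are clipped at
    # the sample boundaries, which contributes to the estimator's known
    # downward bias.
    n = len(x)
    s = sorted(x)
    total = 0.0
    for i in range(n):
        lo = s[max(i - m, 0)]
        hi = s[min(i + m, n - 1)]
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

random.seed(3)
sample = [random.gauss(0, 1) for _ in range(2000)]
h = vasicek_entropy(sample, m=30)
print(h)  # true N(0,1) entropy is 0.5*ln(2*pi*e) ≈ 1.419; the estimate sits slightly below it
```

The downward bias visible here is one reason, noted in the abstract, why a "better" entropy estimator does not automatically yield a better-powered test.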

5.
《Econometric Reviews》2013,32(2):175-194
ABSTRACT

Under a sample selection or non-response problem, where a response variable y is observed only when a condition δ = 1 is met, the identified mean E(y|δ = 1) is not equal to the desired mean E(y). But the monotonicity condition E(y|δ = 1) ≤ E(y|δ = 0) yields an informative bound E(y|δ = 1) ≤ E(y), which is enough for certain inferences. For example, in majority voting with δ indicating turnout, it is enough to know whether E(y) > 0.5, for which E(y|δ = 1) > 0.5 is sufficient under monotonicity. The main questions are then whether the monotonicity condition is testable and, if not, when it is plausible. To answer these questions, when there is a 'proxy' variable z related to y but fully observed, we provide a test for monotonicity; when z is not available, we provide primitive conditions and plausible models for monotonicity. Going further, when both y and z are binary, bivariate monotonicities of the type P(y, z|δ = 1) ≤ P(y, z|δ = 0) are considered, which can lead to sharper bounds for P(y). As an empirical example, a data set on the 1996 U.S. presidential election is analyzed to see whether the Republican candidate could have won had everybody voted, i.e., whether P(y) > 0.5, where y = 1 denotes voting for the Republican candidate.
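The arithmetic behind the bound is simple enough to check directly. The sketch below uses made-up numbers (the turnout rate and the two group means are illustrative assumptions, not the paper's election data):

```python
# Illustrative numbers only: turnout rate and group means are assumptions.
p_vote = 0.6   # P(delta = 1): share of the population that votes
m1 = 0.55      # E(y | delta = 1): mean among voters (identified from data)
m0 = 0.70      # E(y | delta = 0): mean among non-voters (unidentified)

# Monotonicity assumption: voters' mean does not exceed non-voters' mean.
assert m1 <= m0

# The population mean is the turnout-weighted mixture of the two group means,
# so monotonicity makes the observed mean a lower bound for E(y).
m = p_vote * m1 + (1 - p_vote) * m0
assert m1 <= m
print(m1, m)  # 0.55 is the identified lower bound; E(y) here is 0.61
```

Since the identified lower bound 0.55 already exceeds 0.5, the majority inference goes through without ever observing the non-voters, which is exactly the point of the bound.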

6.
This article proposes a fast approximation for the small sample bias correction of the iterated bootstrap. The approximation adapts existing fast approximation techniques of the bootstrap p-value and quantile functions to the problem of estimating the bias function. We show an optimality result which holds under general conditions not requiring an asymptotic pivot. Monte Carlo evidence, from the linear instrumental variable model and the nonlinear GMM, suggests that in addition to its computational appeal and its success in reducing the mean and median bias in identified models, the fast approximation provides scope for bias reduction in weakly identified configurations.
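For orientation, the basic bias-estimation step that the iterated bootstrap builds on (and that the fast approximation is designed to accelerate) can be sketched as follows; the plug-in variance example and all settings are illustrative, not the paper's design.

```python
import random

# One level of bootstrap bias correction for a deliberately biased estimator
# (the plug-in variance, biased by the factor (n-1)/n). Iterating this step
# is what the paper's fast approximation makes cheap.
random.seed(4)
data = [random.gauss(0, 1) for _ in range(30)]

def plug_in_var(x):
    m = sum(x) / len(x)
    return sum((xi - m) ** 2 for xi in x) / len(x)

theta_hat = plug_in_var(data)
# Estimate the bias as (mean bootstrap estimate) - (original estimate) ...
boot = [plug_in_var(random.choices(data, k=len(data))) for _ in range(2000)]
bias_hat = sum(boot) / len(boot) - theta_hat
# ... and subtract it from the original estimate.
theta_bc = theta_hat - bias_hat
print(theta_hat, theta_bc)  # the corrected value is pushed back toward the unbiased one
```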

7.
For multivariate regression with a symmetric disturbance distribution, the error in the least absolute residuals estimator is approximately multivariate normally distributed with mean zero and variance matrix λ²(X′X)⁻¹, where X is the matrix of K explanatory variables and T observations, and λ²/T is the variance of the median of a sample of size T from the disturbance distribution. The approximate sampling theory is validated by extensive Monte Carlo studies, and some directions of possible refinement emerge.
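The scalar λ² can be checked by simulation in the simplest case: with an intercept-only design, the least absolute residuals estimator is the sample median, so λ²/T should match the variance of the median. A rough sketch under assumed N(0,1) disturbances, for which the asymptotic variance of the median is π/(2T) (this setup is an illustration, not the paper's experiments):

```python
import math
import random
import statistics

# Simulate the sample median of T standard normal draws many times and
# compare its empirical variance with the asymptotic value pi / (2 T).
random.seed(0)
T, reps = 200, 2000
meds = [statistics.median(random.gauss(0, 1) for _ in range(T)) for _ in range(reps)]
emp_var = statistics.pvariance(meds)
asy_var = math.pi / (2 * T)
print(emp_var, asy_var)  # the two values should be close
```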

8.
ABSTRACT

We develop Markov chain Monte Carlo algorithms for estimating the parameters of the short-term interest rate model. Using Monte Carlo experiments we compare the Bayes estimators with the maximum likelihood and generalized method of moments estimators. We estimate the model using the Japanese overnight call rate data.

9.
This paper argues that Fisher's paradox can be explained away in terms of estimator choice. We analyse by means of Monte Carlo experiments the small sample properties of a large set of estimators (including virtually all available single-equation estimators), and compute the critical values based on the empirical distributions of the t-statistics, for a variety of Data Generation Processes (DGPs), allowing for structural breaks, ARCH effects etc. We show that precisely the estimators most commonly used in the literature, namely OLS, Dynamic OLS (DOLS) and non-prewhitened FMLS, have the worst performance in small samples, and produce rejections of the Fisher hypothesis. If one employs the estimators with the most desirable properties (i.e., the smallest downward bias and the minimum shift in the distribution of the associated t-statistics), or if one uses the empirical critical values, the evidence based on US data is strongly supportive of the Fisher relation, consistent with many theoretical models.

10.
Sample entropy (SaEn) was recently developed to quantify the amount of regularity in data. However, computing this feature in an online application is infeasible. In this work, we examine a heuristic approach that uses a limited number of permuted samples to estimate SaEn, and we discuss estimation variability in this context. We conclude that computing permuted SaEn provides a fair estimate of time-series regularity that can be used in online applications.
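For reference, here is a plain-Python sketch of sample entropy itself, the quantity whose online approximation the abstract discusses. The function name, the defaults m = 2 and tolerance r, and the template-counting convention are common choices rather than anything specified in the abstract.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    # SampEn(m, r): negative log of the ratio between the number of matching
    # template pairs of length m+1 and of length m, using Chebyshev distance
    # and an absolute tolerance r (often set to r * std of the series).
    n = len(x)

    def count_matches(dim):
        # Use the same number of templates (n - m) for both lengths so the
        # two counts are comparable.
        templates = [x[i:i + dim] for i in range(n - m)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c

    b = count_matches(m)
    a = count_matches(m + 1)
    return math.inf if a == 0 or b == 0 else -math.log(a / b)

regular = sample_entropy([1.0] * 50)
print(regular)  # a perfectly regular (constant) series has sample entropy 0.0
```

The O(n²) pairwise comparison in this direct form is precisely what makes exact online computation expensive, motivating the permutation-based estimate.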

11.
There has been significant new work published recently on the subject of model selection. Notably, Rissanen (1986, 1987, 1988) has introduced new criteria based on the notion of stochastic complexity, and Hurvich and Tsai (1989) have introduced a bias-corrected version of Akaike's information criterion. In this paper, a Monte Carlo study is conducted to evaluate the relative performance of these new model selection criteria against the commonly used alternatives. In addition, we compare the performance of all the criteria in a number of situations not considered in earlier studies: robustness to distributional assumptions, collinearity among regressors, and non-stationarity in a time series. The evaluation is based on the number of times the correct model is chosen and the out-of-sample prediction error. The results of this study suggest that Rissanen's criteria are sensitive to the assumptions and choices that need to be made in their application, and so are sometimes unreliable. While many of the criteria often perform satisfactorily, across experiments the Schwarz Bayesian information criterion (and the related Bayesian estimation criterion of Geweke–Meese) seems to consistently outperform the other alternatives considered.
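As a reference point, the closed forms of the main criteria compared above can be sketched for a linear model with n observations, k estimated parameters, and residual sum of squares rss (a sketch under standard formulas; the study's exact implementation details are not given in the abstract):

```python
import math

def aic(rss, n, k):
    # Akaike's information criterion (up to an additive constant).
    return n * math.log(rss / n) + 2 * k

def aicc(rss, n, k):
    # Hurvich and Tsai's (1989) bias-corrected AIC; the correction term
    # grows as k approaches n, strengthening the penalty in small samples.
    return aic(rss, n, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(rss, n, k):
    # Schwarz Bayesian information criterion: a heavier ln(n) penalty per parameter.
    return n * math.log(rss / n) + k * math.log(n)

# Smaller is better; with a fixed fit (rss = 10, n = 30) the penalties alone decide:
for k in (2, 3, 5):
    print(k, aic(10, 30, k), aicc(10, 30, k), bic(10, 30, k))
```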

12.
By means of a Monte Carlo study, it is investigated whether moments of the asymptotic distributions of two estimators for the errors-in-variables model are appropriate for use in small-sample applications.

14.
《Econometric Reviews》2013,32(1):25-52
Abstract

This paper argues that Fisher's paradox can be explained away in terms of estimator choice. We analyse by means of Monte Carlo experiments the small sample properties of a large set of estimators (including virtually all available single-equation estimators), and compute the critical values based on the empirical distributions of the t-statistics, for a variety of Data Generation Processes (DGPs), allowing for structural breaks, ARCH effects etc. We show that precisely the estimators most commonly used in the literature, namely OLS, Dynamic OLS (DOLS) and non-prewhitened FMLS, have the worst performance in small samples, and produce rejections of the Fisher hypothesis. If one employs the estimators with the most desirable properties (i.e., the smallest downward bias and the minimum shift in the distribution of the associated t-statistics), or if one uses the empirical critical values, the evidence based on US data is strongly supportive of the Fisher relation, consistent with many theoretical models.

15.
ABSTRACT

The literature on spurious regressions has found that the t-statistic for testing the null of no relationship between two independent variables diverges asymptotically under a wide variety of nonstationary data-generating processes for the dependent and explanatory variables. This paper introduces a simple method that guarantees convergence of this t-statistic to a pivotal limit distribution, thus allowing asymptotic inference. The method can be used to distinguish a genuine relationship from a spurious one among integrated processes. We apply the proposed procedure to several pairs of apparently independent integrated variables and find that it does not report (spurious) significant relationships.
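The divergence being corrected is easy to reproduce. The sketch below is an assumed illustration of the problem, not the paper's proposed method: it regresses one pure random walk on another, independent one, and counts how often the naive t-test rejects at the nominal 5% level.

```python
import math
import random

random.seed(1)

def random_walk(n):
    # Cumulative sum of i.i.d. standard normal increments.
    s, out = 0.0, []
    for _ in range(n):
        s += random.gauss(0, 1)
        out.append(s)
    return out

def t_stat(y, x):
    # Slope t-statistic from the OLS regression of y on x with an intercept.
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    rss = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(rss / (n - 2) / sxx)
    return b / se

n, reps = 100, 200
rate = sum(abs(t_stat(random_walk(n), random_walk(n))) > 1.96 for _ in range(reps)) / reps
print(rate)  # far above the nominal 5% level, despite the series being independent
```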

16.
汪卢俊 《统计研究》2014,31(7):85-91
Unit root tests for LSTAR models tend to overlook time variation in the conditional variance; in practice, when LSTAR models are fitted to many economic variables, especially financial ones, GARCH effects are frequently found in the conditional variance. For stationarity testing of the LSTAR-GARCH model, this paper constructs a test statistic tNG, derives its asymptotic distribution based on maximum likelihood estimation, obtains the asymptotic critical values by Monte Carlo simulation, and on this basis studies the power of the tNG test. Compared with the tNL test of Liu Xueyan and Zhang Xiaotong (2009), the tLG test of Ling et al. (2003), and the DF unit root test, the tNG test shows clear advantages.

17.
Standard methods for maximum likelihood parameter estimation in latent variable models rely on the expectation-maximization algorithm and its Monte Carlo variants. Our approach is different and motivated by considerations similar to simulated annealing: we build a sequence of artificial distributions whose support concentrates on the set of maximum likelihood estimates. We sample from these distributions using a sequential Monte Carlo approach. We demonstrate state-of-the-art performance for several applications of the proposed approach.

18.
Bayesian dynamic models and their forecasting theory are widely applicable, for example in communications, control, artificial intelligence, economic management, and weather forecasting. This paper briefly introduces Bayesian dynamic models and, for nonlinear Bayesian dynamic models, proposes the SIS (sequential importance sampling) algorithm and its application to handling nonlinear models.
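A minimal SIS sketch for a linear-Gaussian dynamic model follows; the parameter values and the omission of a resampling step are simplifying assumptions for illustration, not details from the paper.

```python
import math
import random

# State equation x_t = 0.9 x_{t-1} + w_t, observation y_t = x_t + v_t,
# with Gaussian noises; illustrative parameter values.
random.seed(2)
phi, sw, sv, N = 0.9, 1.0, 0.5, 500

def normal_pdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Simulate a short observation sequence from the model itself.
x, ys = 0.0, []
for _ in range(20):
    x = phi * x + random.gauss(0, sw)
    ys.append(x + random.gauss(0, sv))

# SIS: propagate particles through the transition density, reweight by the
# observation likelihood, and normalize (no resampling in this bare sketch).
particles = [random.gauss(0, 1) for _ in range(N)]
weights = [1.0 / N] * N
for y in ys:
    particles = [phi * p + random.gauss(0, sw) for p in particles]
    weights = [w * normal_pdf(y, p, sv) for w, p in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]

est = sum(w * p for w, p in zip(weights, particles))
print(est)  # weighted-particle estimate of the final filtered state mean
```

In practice pure SIS degenerates as the weights concentrate on a few particles, which is why a resampling step (SIR) is usually added for longer series.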

19.
In this paper, the finite sample properties of the maximum likelihood and Bayesian estimators of the half-normal stochastic frontier production function are analyzed and compared through a Monte Carlo study. The results show that the Bayesian estimator should be preferred to the maximum likelihood estimator, since its mean squared error performance is substantially better.

20.
The parameters and quantiles of the three-parameter generalized Pareto distribution (GPD3) were estimated using six methods on Monte Carlo generated samples. The estimators were the moment estimator and its two variants, the probability-weighted moment estimator, the maximum likelihood estimator, and the entropy estimator. Parameter combinations were investigated using a factorial experiment. The performance of these estimators was statistically compared, with the objective of identifying the most robust estimator among them.
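To make the moment method concrete, here is a hedged sketch reduced to the two-parameter GPD (location fixed at zero) in the ξ/σ parameterization; the paper's three-parameter setting additionally estimates the location, and the function name is illustrative.

```python
# Moment estimates for the two-parameter generalized Pareto distribution.
def gpd_moment_estimates(sample_mean, sample_var):
    # Invert mean = sigma / (1 - xi) and
    # var = sigma^2 / ((1 - xi)^2 (1 - 2 xi)), valid for xi < 1/2:
    xi = 0.5 * (1.0 - sample_mean ** 2 / sample_var)
    sigma = sample_mean * (1.0 - xi)
    return xi, sigma

# Sanity check against the analytic moments for xi = 0.1, sigma = 1:
mean = 1.0 / 0.9
var = 1.0 / (0.9 ** 2 * 0.8)
print(gpd_moment_estimates(mean, var))  # recovers (0.1, 1.0) up to rounding
```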


Copyright©北京勤云科技发展有限公司  京ICP备09084417号