Similar Documents
20 similar documents found (search time: 968 ms).
1.
This paper reviews and extends the literature on the finite sample behavior of tests for sample selection bias. Monte Carlo results show that, when the “multicollinearity problem” identified by Nawata (1993) is severe, (i) the t-test based on the Heckman-Greene variance estimator can be unreliable, (ii) the Likelihood Ratio test remains powerful, and (iii) nonnormality can be interpreted as severe sample selection bias by Maximum Likelihood methods, leading to negative Wald statistics. We also confirm previous findings (Leung and Yu, 1996) that the standard regression-based t-test (Heckman, 1979) and the asymptotically efficient Lagrange Multiplier test (Melino, 1982) are robust to nonnormality but have very little power.
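A minimal Monte Carlo sketch of the regression-based t-test discussed here: a probit selection equation, the inverse Mills ratio, and a t-test on its coefficient in the outcome regression. The data-generating process, sample sizes, and correlation value are illustrative assumptions, not those of the paper's experiments.

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_sim, rho = 500, 200, 0.5          # sample size, replications, error correlation
rejections = 0
for _ in range(n_sim):
    x = rng.normal(size=n)
    z = rng.normal(size=n)              # exclusion restriction for the selection equation
    u, e = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T
    selected = (0.5 * z + u > 0)        # selection rule
    y = 1.0 + 1.0 * x + e               # outcome equation
    # Step 1: probit of selection on z, then inverse Mills ratio for the selected
    probit = sm.Probit(selected.astype(float), sm.add_constant(z)).fit(disp=0)
    xb = probit.fittedvalues[selected]  # linear predictor of the probit
    imr = norm.pdf(xb) / norm.cdf(xb)
    # Step 2: OLS of y on x and the IMR; t-test on the IMR coefficient
    X = sm.add_constant(np.column_stack([x[selected], imr]))
    ols = sm.OLS(y[selected], X).fit()
    rejections += abs(ols.tvalues[2]) > 1.96
print("rejection rate of H0 (no selection bias):", rejections / n_sim)
```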

2.
A generalized self-consistency approach to maximum likelihood estimation (MLE) and model building was developed in Tsodikov [2003. Semiparametric models: a generalized self-consistency approach. J. Roy. Statist. Soc. Ser. B Statist. Methodology 65(3), 759–774] and applied to a survival analysis problem. We extend the framework to obtain second-order results such as the information matrix and properties of the variance. The multinomial model motivates the paper and is used throughout as an example. Computational challenges with the multinomial likelihood motivated Baker [1994. The Multinomial–Poisson transformation. The Statist. 43, 495–504] to develop the Multinomial–Poisson (MP) transformation for a large variety of regression models with a multinomial likelihood kernel. Multinomial regression is transformed into a Poisson regression at the cost of augmenting model parameters and restricting the problem to discrete covariates. Imposing normalization restrictions by means of Lagrange multipliers [Lang, J., 1996. On the comparison of multinomial and Poisson log-linear models. J. Roy. Statist. Soc. Ser. B Statist. Methodology 58, 253–266] justifies the approach. Using the self-consistency framework we develop an alternative solution to multinomial model fitting that does not require augmenting parameters while allowing for a Poisson likelihood and arbitrary covariate structures. Normalization restrictions are imposed by averaging over artificial “missing data” (a fake mixture). The lack of a probabilistic interpretation at the “complete-data” level makes the use of the generalized self-consistency machinery essential.
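A numerical sketch of the MP transformation on simulated grouped data with a three-level discrete covariate: a Poisson log-linear fit whose free covariate-pattern intercepts absorb the normalization reproduces the multinomial logit probabilities. All data-generating values are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
G, K = 3, 3                                   # covariate levels g = 0,1,2; categories k = 0,1,2
beta = np.array([[0.0, 0.0], [0.3, -0.2], [-0.4, 0.5]])  # per-category [intercept, slope]
counts = np.empty((G, K), dtype=int)
for g in range(G):
    eta = beta[:, 0] + beta[:, 1] * g
    counts[g] = rng.multinomial(300, np.exp(eta) / np.exp(eta).sum())

# Poisson design: one free intercept per covariate pattern (absorbing the
# normalization), category dummies, and category-specific slopes (k = 0 baseline)
gg, kk = np.meshgrid(np.arange(G), np.arange(K), indexing="ij")
gg, kk = gg.ravel(), kk.ravel()
X = np.column_stack([gg == j for j in range(G)] +
                    [kk == k for k in (1, 2)] +
                    [(kk == k) * gg for k in (1, 2)]).astype(float)
pois = sm.GLM(counts.ravel(), X, family=sm.families.Poisson()).fit()
p_pois = pois.fittedvalues.reshape(G, K)
p_pois /= p_pois.sum(axis=1, keepdims=True)   # probabilities within each pattern

# Multinomial logit on the same data (counts expanded to one row per observation)
y = np.repeat(kk, counts.ravel())
x = np.repeat(gg, counts.ravel()).astype(float)
mnl = sm.MNLogit(y, sm.add_constant(x)).fit(disp=0)
p_mnl = mnl.predict(sm.add_constant(np.arange(G, dtype=float)))
print(np.abs(p_pois - p_mnl).max())           # agrees up to optimizer tolerance
```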

3.
Long-run relations and common trends are discussed in terms of the multivariate cointegration model given in the autoregressive and the moving average form. The basic results needed for the analysis of I(1) and I(2) processes are reviewed and the results applied to Danish monetary data. The test procedures reveal that nominal money stock is essentially I(2). Long-run price homogeneity is supported by the data and imposed on the system. It is found that the bond rate is weakly exogenous for the long-run parameters and therefore acts as a driving trend. Using the nonstationarity property of the data, “excess money” is estimated and its effect on the other determinants of the system is investigated. In particular, it is found that “excess money” has no effect on price inflation.
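A minimal sketch of the I(1) part of such an analysis, using the Johansen procedure from statsmodels on simulated data (the Danish monetary data are not reproduced here; the series, lag order, and deterministic terms are illustrative assumptions).

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(2)
T = 400
trend = np.cumsum(rng.normal(size=T))          # one common stochastic trend
m = trend + rng.normal(scale=0.3, size=T)      # "money" driven by the trend
p = trend + rng.normal(scale=0.3, size=T)      # "prices" driven by the same trend
y = np.column_stack([m, p])                    # long-run homogeneity: m - p is stationary

res = coint_johansen(y, det_order=0, k_ar_diff=1)
print("trace statistics:", res.lr1)            # tests for cointegration rank
print("95% critical values:", res.cvt[:, 1])
print("cointegrating vector:", res.evec[:, 0]) # roughly proportional to (1, -1)
```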

4.
This paper considers computation of fitted values and marginal effects in the Box-Cox regression model. Two methods, (1) the “smearing” technique suggested by Duan (see Ref. [10]) and (2) direct numerical integration, are examined and compared with the “naive” method often used in econometrics.
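A sketch of the comparison for the λ → 0 (logarithmic) special case of the Box-Cox transformation, which keeps the code short: the naive back-transformation, Duan's smearing factor, and a normal-theory closed form standing in for direct numerical integration. All data-generating values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.uniform(1, 3, size=n)
y = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.6, size=n))  # log-linear truth

# OLS on the transformed scale
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
resid = np.log(y) - X @ b
xb = X @ b

naive = np.exp(xb)                              # ignores E[exp(eps)] != 1
smear = np.exp(xb) * np.mean(np.exp(resid))     # Duan's smearing factor
normal = np.exp(xb + resid.var() / 2)           # Gaussian closed form, standing in
                                                # for direct numerical integration
print("mean fitted:", naive.mean(), smear.mean(), normal.mean())
print("mean of y:  ", y.mean())                 # naive underestimates; others do not
```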

5.
In a panel data model with fixed individual effects, a number of alternative transformations are available to eliminate these effects such that the slope parameters can be estimated from ordinary least squares on transformed data. In this note we show that each transformation leads to algebraically the same estimator if the transformed data are used efficiently (i.e. if GLS is applied). If OLS is used, however, differences may occur and the routinely computed variances, even after degrees of freedom correction, are incorrect. In addition, it may matter whether “redundant” observations are used or not.
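The algebraic point can be checked numerically. The sketch below (the DGP values are illustrative assumptions) compares the within estimator, OLS on first differences, and GLS on first differences with the implied MA(1) error covariance; the within and GLS estimates coincide to machine precision.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, beta = 200, 5, 1.5
alpha = rng.normal(size=N)                       # fixed individual effects
x = rng.normal(size=(N, T))
y = alpha[:, None] + beta * x + rng.normal(size=(N, T))

# Within estimator: OLS after subtracting individual means
xw = (x - x.mean(axis=1, keepdims=True)).ravel()
yw = (y - y.mean(axis=1, keepdims=True)).ravel()
b_within = xw @ yw / (xw @ xw)

# First differences: OLS vs GLS with the known MA(1) covariance of the
# differenced errors (2 on the diagonal, -1 on the first off-diagonals)
dx, dy = np.diff(x, axis=1), np.diff(y, axis=1)
b_fd_ols = (dx.ravel() @ dy.ravel()) / (dx.ravel() @ dx.ravel())
V = 2 * np.eye(T - 1) - np.eye(T - 1, k=1) - np.eye(T - 1, k=-1)
Vi = np.linalg.inv(V)
num = sum(dx[i] @ Vi @ dy[i] for i in range(N))
den = sum(dx[i] @ Vi @ dx[i] for i in range(N))
b_fd_gls = num / den
print(b_within, b_fd_gls, b_fd_ols)  # first two coincide; plain OLS differs
```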

6.
Four basic strands in the disequilibrium literature are identified. Some examples are discussed, and the canonical econometric disequilibrium model and its estimation are dealt with in detail. Specific criticisms of the canonical model, dealing with price and wage rigidity, with the nature of the min condition and the price-adjustment equation, are considered and a variety of modifications is entertained. Tests of the “equilibrium vs. disequilibrium” hypothesis are discussed, as well as several classes of models that may switch between equilibrium and disequilibrium modes. Finally, consideration is given to multimarket disequilibrium models with particular emphasis on the problems of coherence and estimation.

7.
Estimation of variance based on a ranked set sample
In this paper we examine the problem of the estimation of the variance σ² of a population based on a ranked set sample (RSS) from a nonparametric point of view. It is well known that based on a single cycle RSS, there does not exist an unbiased estimate of σ². We show that for more than one cycle, it is possible to construct a class of quadratic unbiased estimates of σ² in both balanced and unbalanced cases. Moreover, a minimum variance unbiased quadratic nonnegative estimate of σ² within a certain class of quadratic estimates is derived.
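The single-cycle claim is easy to check by simulation; the sketch below (standard normal population and set size k = 4, both illustrative assumptions) shows the upward bias of the usual sample variance for one balanced RSS cycle.

```python
import numpy as np

rng = np.random.default_rng(5)
k = 4  # set size; true variance sigma^2 = 1

def rss_cycle(rng, k):
    """One balanced RSS cycle: the r-th order statistic from the r-th set of size k."""
    sets = np.sort(rng.normal(size=(k, k)), axis=1)
    return sets[np.arange(k), np.arange(k)]     # X_(1), X_(2), ..., X_(k)

reps = 50000
single = np.array([rss_cycle(rng, k).var(ddof=1) for _ in range(reps)])
# The RSS observations are independent but not identically distributed, so the
# usual sample variance picks up the spread of the order-statistic means:
print("E[sample variance | one RSS cycle] ~", single.mean())  # noticeably above 1
```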

8.
A compendium to information theory in economics and econometrics

9.
Hahn [Hahn, J. (1998). On the role of the propensity score in efficient semiparametric estimation of average treatment effects. Econometrica 66:315-331] derived the semiparametric efficiency bounds for estimating the average treatment effect (ATE) and the average treatment effect on the treated (ATET). The variance of ATET depends on whether the propensity score is known or unknown. Hahn attributes this to “dimension reduction.” In this paper, an alternative explanation is given: Knowledge of the propensity score improves upon the estimation of the distribution of the confounding variables.
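A simulation sketch related to this explanation: inverse-probability weighting with an estimated propensity score can be less variable than weighting with the true score, because estimation exploits the observed confounder distribution. The design below is an illustrative assumption, not Hahn's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

def one_rep(n=2000):
    x = rng.normal(size=n)
    e = 1 / (1 + np.exp(-x))               # true propensity score
    d = rng.uniform(size=n) < e
    y = 1.0 * d + x + rng.normal(size=n)   # true ATE = 1
    # IPW with the true score
    ate_true = np.mean(d * y / e - (1 - d) * y / (1 - e))
    # IPW with a logit-estimated score
    ehat = sm.Logit(d.astype(float), sm.add_constant(x)).fit(disp=0).predict()
    ate_est = np.mean(d * y / ehat - (1 - d) * y / (1 - ehat))
    return ate_true, ate_est

draws = np.array([one_rep() for _ in range(500)])
print("sd with true score:     ", draws[:, 0].std())
print("sd with estimated score:", draws[:, 1].std())  # typically smaller
```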

10.
Super-saturated designs in which the number of factors under investigation exceeds the number of experimental runs have been suggested for screening experiments initiated to identify important factors for future study. Most of the designs suggested in the literature are based on natural but ad hoc criteria. The “average s²” criterion introduced by Booth and Cox (Technometrics 4 (1962) 489) is a popular choice. Here, a decision theoretic approach is pursued, leading to an optimality criterion based on misclassification probabilities in a Bayesian model. In certain cases, designs optimal under the average s² criterion are also optimal for the new criterion. Necessary conditions for this to occur are presented. In addition, the new criterion often provides a strict preference between designs tied under the average s² criterion, which is advantageous in numerical search as it reduces the number of local minima.
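For concreteness, the sketch below computes the Booth-Cox criterion, i.e., the average of the squared off-diagonal entries of X'X over all column pairs; the design shown is a random one, purely for illustration.

```python
import numpy as np
from itertools import combinations

def ave_s2(X):
    """Booth-Cox criterion: average of s_ij^2 over column pairs, s_ij = (X'X)_ij."""
    S = X.T @ X
    pairs = list(combinations(range(X.shape[1]), 2))
    return np.mean([S[i, j] ** 2 for i, j in pairs])

rng = np.random.default_rng(7)
n_runs, n_factors = 12, 16                 # more factors than runs: supersaturated
X = rng.choice([-1, 1], size=(n_runs, n_factors))
print("ave s^2 =", ave_s2(X))              # search procedures minimize this value
```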

11.
In this paper we consider a Bayesian nonparametric approach to the analysis of discrete-time queueing models. The main motivation consists in applications to telecommunications, and in particular to asynchronous transfer mode (ATM) systems. Attention is focused on the posterior distribution of the overflow rate. Since the exact distribution of such a quantity is not available in a closed form, an approximation based on “proper” Bayesian bootstrap is proposed, and its properties are studied. Some possible alternatives to proper Bayesian bootstrap are also discussed. Finally, an application to real data is provided.
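A sketch of the plain Bayesian bootstrap version of this idea: Dirichlet-weighted resampling of observed per-slot arrival counts, pushed through a queue simulation to obtain a posterior for the overflow rate. The buffer size, service capacity, and data are illustrative assumptions, and the paper's “proper” variant additionally places prior mass on a base measure.

```python
import numpy as np

rng = np.random.default_rng(10)
arrivals = rng.poisson(0.9, size=500)      # observed arrivals per time slot
B, s = 10, 1                               # buffer size and per-slot service capacity

def overflow_rate(batch_probs, values, B, s, T=5000, rng=rng):
    """Simulate the queue under an arrival distribution; return the fraction of lost work."""
    q = lost = arrived = 0
    batches = rng.choice(values, size=T, p=batch_probs)
    for a in batches:
        arrived += a
        q = q + a
        lost += max(q - B, 0)              # arrivals beyond the buffer are lost
        q = min(q, B)
        q = max(q - s, 0)                  # serve s customers per slot
    return lost / arrived

# Bayesian bootstrap: Dirichlet weights over observations, aggregated to the
# unique observed batch sizes (Dirichlet aggregation property)
vals, counts = np.unique(arrivals, return_counts=True)
post = [overflow_rate(rng.dirichlet(counts), vals, B, s) for _ in range(200)]
print("posterior mean overflow rate:", np.mean(post))
print("95% interval:", np.percentile(post, [2.5, 97.5]))
```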

12.
We examine the risk of a pre-test estimator for regression coefficients after a pre-test for homoskedasticity under the Balanced Loss Function (BLF). We show analytically that the two stage Aitken estimator is dominated by the pre-test estimator with the critical value of unity, even if the BLF is used. We also show numerically that both the two stage Aitken estimator and the pre-test estimator can be dominated by the ordinary least squares estimator when “goodness of fit” is regarded as more important than precision of estimation.
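A sketch of the pre-test logic under discussion: test for two-group heteroskedasticity and switch between OLS and the two-stage Aitken (feasible GLS) estimator; with the critical value set to unity, the FGLS branch is taken whenever the variance ratio exceeds one. The DGP and group split are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n1 = n2 = 50
x = rng.normal(size=n1 + n2)
sig = np.r_[np.full(n1, 1.0), np.full(n2, 2.0)]        # heteroskedastic groups
y = 2.0 + 1.0 * x + sig * rng.normal(size=n1 + n2)
X = np.column_stack([np.ones(n1 + n2), x])

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_ols = ols(X, y)
e = y - X @ b_ols
s1, s2 = e[:n1].var(ddof=2), e[n1:].var(ddof=2)        # group residual variances
F = s2 / s1                                            # pre-test statistic

c = 1.0                                                # critical value of unity
if F > c:
    w = np.r_[np.full(n1, 1 / s1), np.full(n2, 1 / s2)]  # two-stage Aitken weights
    b = ols(X * np.sqrt(w)[:, None], y * np.sqrt(w))
else:
    b = b_ols
print("pre-test estimate:", b)
```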

13.
Gibbs sampling has had great success in the analysis of mixture models. In particular, the “latent variable” formulation of the mixture model greatly reduces computational complexity. However, one failing of this approach is the possible existence of almost-absorbing states, called trapping states, as it may require an enormous number of iterations to escape from these states. Here we examine an alternative approach to estimation in mixture models, one based on a Rao–Blackwellization argument applied to a latent-variable-based estimator. From this derivation we construct an alternative Monte Carlo sampling scheme that avoids trapping states.
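The following sketch shows the two estimators side by side for a two-component normal mixture with known weight and unit variances: a latent-variable Gibbs sampler whose component means are estimated either from the raw draws or from their Rao-Blackwellized conditional expectations. Priors and data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
x = np.r_[rng.normal(-2, 1, 150), rng.normal(2, 1, 150)]   # simulated data
w, tau2 = 0.5, 10.0                        # known weight; N(0, tau2) prior on each mean
mu = np.array([-1.0, 1.0])                 # initial values

draws, rb = [], []
for it in range(2000):
    # 1. sample allocations z_i given the current means
    p1 = w * norm.pdf(x, mu[0], 1)
    p2 = (1 - w) * norm.pdf(x, mu[1], 1)
    z = rng.uniform(size=x.size) < p1 / (p1 + p2)          # True -> component 1
    # 2. sample each mean given the allocations (conjugate normal update)
    post_mean, post_var = [], []
    for grp in (x[z], x[~z]):
        v = 1 / (len(grp) + 1 / tau2)
        post_var.append(v)
        post_mean.append(v * grp.sum())
    mu = rng.normal(post_mean, np.sqrt(post_var))
    draws.append(mu.copy())
    rb.append(post_mean)                   # Rao-Blackwellized: E[mu | z, x]

burn = 500
print("plain Gibbs average:      ", np.mean(draws[burn:], axis=0))
print("Rao-Blackwellized average:", np.mean(rb[burn:], axis=0))  # lower MC variance
```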

14.
A reference prior and corresponding reference posteriors are derived for a basic Normal variance components model with two components. Different parameterizations are considered, in particular one in terms of a shrinkage or smoothing parameter. Earlier results for the one-way ANOVA setting are generalized and a broad range of applications of the general results is indicated. Numerical examples of application to spline smoothing are given for illustration and the results compared with other well-known techniques considered to be “non-informative” about the smoothing parameter.

15.
The Volatility of Realized Volatility
In recent years, with the availability of high-frequency financial market data, modeling realized volatility has become a new and innovative research direction. The construction of “observable” or realized volatility series from intra-day transaction data and the use of standard time-series techniques have led to promising strategies for modeling and predicting (daily) volatility. In this article, we show that the residuals of commonly used time-series models for realized volatility and logarithmic realized variance exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance for modeling and forecasting realized volatility. In an empirical application to S&P 500 index futures we show that allowing for time-varying volatility of realized volatility and logarithmic realized variance substantially improves the fit as well as predictive performance. Furthermore, the distributional assumption for residuals plays a crucial role in density forecasting.
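To see why such an extension matters, the following sketch simulates an AR(1) for log realized variance with GARCH(1,1) innovations and shows that a homoskedastic AR fit leaves volatility clustering in its residuals, as the article reports for actual data. All parameter values are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(11)
T = 3000
phi0, phi1 = -0.1, 0.96                    # AR(1) for log realized variance
omega, alpha, beta = 0.01, 0.10, 0.85      # GARCH(1,1) for its innovations

logrv = np.zeros(T)
h = np.full(T, omega / (1 - alpha - beta)) # conditional variance of the innovation
eps = np.zeros(T)
for t in range(1, T):
    h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.standard_t(df=8) * np.sqrt(6 / 8)  # fat tails, unit variance
    logrv[t] = phi0 + phi1 * logrv[t - 1] + eps[t]

# Diagnostic in the spirit of the article: a homoskedastic AR(1) fit leaves
# autocorrelated squared residuals (volatility clustering)
X = np.column_stack([np.ones(T - 1), logrv[:-1]])
b = np.linalg.lstsq(X, logrv[1:], rcond=None)[0]
r2 = (logrv[1:] - X @ b) ** 2
ac1 = np.corrcoef(r2[:-1], r2[1:])[0, 1]
print("lag-1 autocorrelation of squared AR residuals:", ac1)  # noticeably > 0
```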

16.
This paper is concerned with joint tests of non-nested models and simultaneous departures from homoskedasticity, serial independence and normality of the disturbance terms. Locally equivalent alternative models are used to construct joint tests since they provide a convenient way to incorporate more than one type of departure from the classical conditions. The joint tests represent a simple asymptotic solution to the “pre-testing” problem in the context of non-nested linear regression models. Our simulation results indicate that the proposed tests have good finite sample properties.

17.
Stated preference choice experiments are routinely used in many areas from marketing to medicine. While results on the optimal choice sets to present for the forced choice setting have been determined in a variety of situations, no results have appeared to date on the optimal choice sets to use either when all choice sets are to contain a common base alternative or when all choice sets contain a “none of these” option. These problems are considered in this paper.

18.
We obtain semiparametric efficiency bounds for estimation of a location parameter in a time series model where the innovations are stationary and ergodic conditionally symmetric martingale differences but otherwise possess general dependence and distributions of unknown form. We then describe an iterative estimator that achieves this bound when the conditional density functions of the sample are known. Finally, we develop a “semi-adaptive” estimator that achieves the bound when these densities are unknown to the investigator. This estimator employs nonparametric kernel estimates of the densities. Monte Carlo results are reported.

19.
We implement profile empirical likelihood-based inference for censored median regression models. Inference for any specified subvector is carried out by profiling out the nuisance parameters from the “plug-in” empirical likelihood ratio function proposed by Qin and Tsao. To obtain the critical value of the profile empirical likelihood ratio statistic, we first investigate its asymptotic distribution. The limiting distribution is a sum of weighted chi-square distributions. Unlike for the full empirical likelihood, however, the derived asymptotic distribution has an intractable covariance structure. Therefore, we employ the bootstrap to obtain the critical value, and compare the resulting confidence intervals with the ones obtained through Basawa and Koul’s minimum dispersion statistic. Furthermore, we obtain confidence intervals for the age and treatment effects in a lung cancer data set.

20.
Many economic variables are fractionally integrated of order d, FI(d), with unequal d's. For modeling their long-run equilibria, we explain why the usual cointegration fails to exist and why unit root type tests have low power. Hence, we propose a looser concept called “tie integration”. A new numerical minimization problem reveals the value of d in the absence of tie integration, denoted by d_null. We use the d from the residuals of a regression, as well as d_null, to devise a new index called strength of tie (SOT). An application quantifies market responsiveness.
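For the basic ingredient, the following sketch estimates d by the standard Geweke-Porter-Hudak (GPH) log-periodogram regression; it is not the paper's new minimization or SOT index, which are specific to the tie-integration construction. The series length and the square-root bandwidth rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(12)

def fracdiff(eps, d):
    """Apply (1-L)^(-d) to white noise via the binomial expansion of the MA weights."""
    n = len(eps)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (d + k - 1) / k     # coefficients of (1-L)^(-d)
    return np.convolve(eps, w)[:n]

def gph(x):
    """Geweke-Porter-Hudak estimate of d from the low-frequency periodogram."""
    n = len(x)
    m = int(n ** 0.5)                          # common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    reg = -np.log(4 * np.sin(lam / 2) ** 2)    # slope on this regressor is d
    reg = reg - reg.mean()
    return (reg @ (np.log(I) - np.log(I).mean())) / (reg @ reg)

x = fracdiff(rng.normal(size=2000), d=0.3)     # FI(0.3) series
print("estimated d:", gph(x))                  # roughly 0.3
```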

