Similar Documents
20 similar documents found.
1.
We study estimation and inference in settings where the interest is in the effect of a potentially endogenous regressor on some outcome. To address the endogeneity, we exploit the presence of additional variables. Like conventional instrumental variables, these variables are correlated with the endogenous regressor. However, unlike conventional instrumental variables, they also have direct effects on the outcome and are thus “invalid” instruments. Our novel identifying assumption is that the direct effects of these invalid instruments are uncorrelated with the effects of the instruments on the endogenous regressor. We show that in this case the limited information maximum likelihood (LIML) estimator is no longer consistent, but that a modification of the bias-corrected two-stage least squares (TSLS) estimator is consistent. We also show that conventional tests for over-identifying restrictions, adapted to the many-instruments setting, can be used to test for the presence of these direct effects. We recommend that empirical researchers carry out such tests and compare estimates based on LIML and the modified version of bias-corrected TSLS. We illustrate in the context of two applications that such practice can be illuminating, and that our novel identifying assumption has substantive empirical content.
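To make the estimator comparison concrete, here is a minimal numpy sketch of standard TSLS and a Nagar-type bias-corrected TSLS on a toy design with a single endogenous regressor. The correction factor K/n is one common choice (variants use (K-2)/n); the paper's modification for invalid instruments is not reproduced here, and all data-generating parameters are illustrative assumptions.

```python
import numpy as np

def tsls(y, x, Z):
    """Standard two-stage least squares with instrument matrix Z."""
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto column space of Z
    return (x @ P @ y) / (x @ P @ x)

def bias_corrected_tsls(y, x, Z):
    """Nagar-type bias-corrected TSLS (a sketch; the paper's modified
    estimator, which accommodates 'invalid' instruments, differs)."""
    n, K = Z.shape
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    a = K / n                               # many-instruments correction factor
    return (x @ P @ y - a * (x @ y)) / (x @ P @ x - a * (x @ x))

# toy data: one endogenous regressor, 5 valid instruments, correlated errors
rng = np.random.default_rng(0)
n, K, beta = 2000, 5, 1.0
Z = rng.normal(size=(n, K))
u = rng.normal(size=n)
x = Z @ np.full(K, 0.3) + 0.8 * u + rng.normal(size=n)  # endogenous through u
y = beta * x + u
print(tsls(y, x, Z), bias_corrected_tsls(y, x, Z))
```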

2.
Anderson and his collaborators have made seminal contributions to inference with instrumental variables and to dynamic panel data models. We review these contributions and the extensive economic and statistical literature that these contributions spawned. We describe our recent work in these two areas, presenting new approaches to (a) making valid inferences in the presence of weak instruments and (b) instrument and model selection for dynamic panel data models. Both approaches use empirical likelihood and resampling. For inference in the presence of weak instruments, our approach uses model averaging to achieve asymptotic efficiency with strong instruments while maintaining valid inferences with weak instruments. For instrument and model selection, our approach aims at choosing valid instruments that are strong enough to be useful.

3.
We show that the ordinary least squares (OLS) and fixed-effects (FE) estimators of the popular difference-in-differences model may deviate when there is time-varying panel non-response. If such non-response does not affect the common-trend assumption, then OLS and FE are consistent, but OLS is more precise. However, if non-response does affect the common-trend assumption, then FE estimation may still be consistent while OLS is inconsistent. We provide simulation as well as empirical evidence that this phenomenon occurs. We conclude that, in the case of unbalanced panels, deviating OLS and FE estimates should be taken as evidence that non-response is not ignorable for the difference-in-differences estimation.
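The following simulation sketch illustrates the phenomenon on a hypothetical two-period design: post-period non-response depends on the unit effect differentially across treatment groups, so pooled OLS drifts while the FE (first-difference) estimate stays near the truth. All parameter values and the selection mechanism are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
alpha = rng.normal(size=n)                      # unit fixed effects
treat = (rng.uniform(size=n) < 0.5).astype(float)
effect = 2.0
y0 = alpha + rng.normal(size=n)                 # pre-period outcome
y1 = alpha + 1.0 + effect * treat + rng.normal(size=n)  # post, common trend 1.0

# time-varying non-response in the post period, correlated with the unit
# effect differentially across groups (non-ignorable; an illustrative choice)
p_obs = 1.0 / (1.0 + np.exp(alpha * (2 * treat - 1)))
obs1 = rng.uniform(size=n) < p_obs

# pooled OLS DiD on the unbalanced panel = difference of group-time means
ols = ((y1[obs1 & (treat == 1)].mean() - y0[treat == 1].mean())
       - (y1[obs1 & (treat == 0)].mean() - y0[treat == 0].mean()))

# FE DiD = first differences on units observed in both periods:
# the unit effect drops out, so selection on it does not bias the estimate
d = (y1 - y0)[obs1]
fe = d[treat[obs1] == 1].mean() - d[treat[obs1] == 0].mean()

print(f"OLS DiD: {ols:.2f}  FE DiD: {fe:.2f}  truth: {effect}")
```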

4.
A family of threshold nonlinear generalised autoregressive conditionally heteroscedastic models is considered that allows smooth transitions between regimes, capturing size asymmetry via an exponential smooth transition function. A Bayesian approach is taken, and an efficient adaptive sampling scheme is employed for inference, including a novel extension to a recently proposed prior for the smoothing parameter that solves a likelihood identification problem. A simulation study illustrates that the sampling scheme performs well, with the chosen prior kept close to uninformative, while successfully ensuring identification of model parameters and accurate inference for the smoothing parameter. An empirical study confirms the potential suitability of the model, highlighting the presence of both mean and volatility (size) asymmetry; the model is also favoured, via the deviance information criterion, over modern, popular competing models, including those with sign asymmetry.
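As a rough illustration of the size-asymmetry mechanism, the sketch below computes a conditional variance recursion in which an exponential function of the squared lagged shock smoothly mixes two ARCH coefficients. The parameterization and values are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def st_garch_variance(eps, omega, alpha1, alpha2, beta, gamma):
    """Conditional variance with an exponential smooth transition between
    two ARCH regimes: the weight G in [0, 1) grows with the size of the
    lagged shock, so large shocks are pushed toward the second regime."""
    h = np.empty_like(eps)
    h[0] = omega / (1 - beta)                       # crude initialization
    for t in range(1, len(eps)):
        G = 1.0 - np.exp(-gamma * eps[t - 1] ** 2)  # size-based transition
        arch = (1 - G) * alpha1 + G * alpha2        # smooth regime mix
        h[t] = omega + arch * eps[t - 1] ** 2 + beta * h[t - 1]
    return h

eps = np.random.default_rng(2).normal(size=500)
h = st_garch_variance(eps, omega=0.05, alpha1=0.02, alpha2=0.15,
                      beta=0.80, gamma=2.0)
print(h[:5])
```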

5.
In this article, we study a nonparametric approach based on a general nonlinear reduced-form equation to achieve a better approximation of the optimal instrument. Accordingly, we propose the nonparametric additive instrumental variable estimator (NAIVE) with the adaptive group Lasso. We theoretically demonstrate that the proposed estimator is root-n consistent and asymptotically normal. The adaptive group Lasso helps us select the valid instruments, while the number of potential instrumental variables is allowed to be greater than the sample size. In practice, the degree and knots of the B-spline series are selected by minimizing the BIC or EBIC criterion for each nonparametric additive component in the reduced-form equation. In Monte Carlo simulations, we show that the NAIVE performs as well as the linear instrumental variable (IV) estimator when the reduced-form equation is truly linear. On the other hand, the NAIVE performs much better in terms of bias and mean squared error than alternative estimators under a high-dimensional nonlinear reduced-form equation. We further illustrate our method in an empirical study of international trade and growth. Our findings provide stronger evidence that international trade has a significant positive effect on economic growth.

6.
A Bayesian mixture model for differential gene expression
We propose model-based inference for differential gene expression, using a nonparametric Bayesian probability model for the distribution of gene intensities under various conditions. The probability model is a mixture of normal distributions. The resulting inference is similar to a popular empirical Bayes approach used for the same inference problem. The use of fully model-based inference mitigates some of the inherent limitations of the empirical Bayes method. We argue that inference is no more difficult than posterior simulation in traditional nonparametric mixture-of-normals models. The approach is motivated by a microarray experiment that was carried out to identify genes that are differentially expressed between normal tissue and colon cancer tissue samples. Additionally, we carried out a small simulation study to verify the proposed methods. In the motivating case studies we show how the nonparametric Bayes approach facilitates the evaluation of posterior expected false discovery rates. We also show how inference can proceed even in the absence of a null sample of known non-differentially expressed scores. This highlights the difference from alternative empirical Bayes approaches that are based on plug-in estimates.
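The posterior expected false discovery rate mentioned above has a simple form once posterior probabilities of differential expression are available: for any flagged set, it is the average posterior probability of being null among the flagged genes. A minimal sketch, assuming hypothetical posterior probabilities rather than output from the fitted mixture model:

```python
import numpy as np

def posterior_expected_fdr(post_prob, threshold):
    """Posterior expected FDR when flagging genes whose posterior
    probability of differential expression exceeds `threshold`:
    the mean posterior null probability among flagged genes."""
    flagged = post_prob > threshold
    if not flagged.any():
        return 0.0
    return float((1.0 - post_prob[flagged]).mean())

# hypothetical posterior probabilities standing in for model output
post_prob = np.random.default_rng(3).beta(0.5, 0.5, size=1000)
for t in (0.5, 0.8, 0.9, 0.95):
    print(t, posterior_expected_fdr(post_prob, t))
```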

7.
Econometric Reviews, 2012, 31(1): 27–53
Transformed diffusions (TDs) have become increasingly popular in financial modeling for their flexibility and tractability. While existing TD models are predominantly one-factor models, empirical evidence often favors models with multiple factors. We propose a novel distribution-driven nonlinear multifactor TD model with latent components. Our model is a transformation of an underlying multivariate Ornstein–Uhlenbeck (MVOU) process, where the transformation function is endogenously specified by a flexible parametric stationary distribution of the observed variable. Computationally efficient exact likelihood inference can be implemented for our model using a modified Kalman filter algorithm, and the transformed affine structure also allows us to price derivatives in semi-closed form. We compare the proposed multifactor model with existing TD models for modeling the VIX and pricing VIX futures. Our results show that the proposed model outperforms all existing TD models both in sample and out of sample, consistently across all categories and scenarios of our comparison.

8.
We propose a weighted empirical likelihood approach to inference with multiple samples, including stratified sampling, the estimation of a common mean using several independent and non-homogeneous samples, and inference on a particular population using other related samples. The weighting scheme and the basic result are motivated and established under stratified sampling. We show that the proposed method can ideally be applied to the common mean problem and problems with related samples. The proposed weighted approach not only provides a unified framework for inference with multiple samples, including two-sample problems, but also facilitates asymptotic derivations and computational methods. A bootstrap procedure is also proposed in conjunction with the weighted approach to provide better coverage probabilities for the weighted empirical likelihood ratio confidence intervals. Simulation studies show that the weighted empirical likelihood confidence intervals perform better than existing ones.
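For background, the sketch below computes the classical (unweighted) empirical likelihood ratio for a single mean, the one-sample building block that the weighted approach generalizes to multiple samples; the Lagrange-multiplier equation is solved by root-finding.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (Owen-style).
    Weights are w_i = 1 / (n (1 + lam*(x_i - mu))), with lam solving the
    first-order condition; the statistic is asymptotically chi-squared(1)."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                       # mu outside the convex hull
    # lam must keep every 1 + lam*z_i strictly positive
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    score = lambda lam: np.mean(z / (1 + lam * z))
    lam = brentq(score, lo, hi)             # score is monotone in lam
    return 2.0 * np.sum(np.log1p(lam * z))

x = np.random.default_rng(4).exponential(size=100)
print(el_log_ratio(x, 1.0))                 # compare with chi2(1) quantiles
```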

9.
This paper is concerned with statistical inference for partially nonlinear models. An empirical likelihood method for the parameter in the nonlinear function and for the nonparametric function is investigated. The empirical log-likelihood ratios are shown to be asymptotically chi-squared, and the corresponding confidence intervals are then constructed. From the empirical likelihood ratio functions, we also obtain the maximum empirical likelihood estimators of the parameter in the nonlinear function and of the nonparametric function, and prove their asymptotic normality. A simulation study indicates that, compared with the normal-approximation-based method and the bootstrap method, the empirical likelihood method performs better in terms of coverage probabilities and average lengths/widths of confidence intervals/bands. An application to a real dataset is also presented.

10.
We consider an extension of the recursive bivariate probit model for estimating the effect of a binary variable on a binary outcome in the presence of unobserved confounders, nonlinear covariate effects and overdispersion. Specifically, the model consists of a system of two binary outcomes with a binary endogenous regressor which includes smooth functions of covariates, hence allowing for flexible functional dependence of the responses on the continuous regressors, and arbitrary random intercepts to deal with overdispersion arising from correlated observations on clusters or from the omission of non‐confounding covariates. We fit the model by maximizing a penalized likelihood using an Expectation‐Maximisation algorithm. The issues of automatic multiple smoothing parameter selection and inference are also addressed. The empirical properties of the proposed algorithm are examined in a simulation study. The method is then illustrated using data from a survey on health, aging and wealth.

11.
This article applies a nonparametric ACD (autoregressive conditional duration) model to intraday trading data from the Chinese securities market. The nonparametric ACD model does not rely on a functional form for the conditional mean or a distributional form for the error term, and is therefore more general. The article carries out the empirical analysis from several angles. The nonparametric results show that the data cannot be characterized by a linear ACD model; based on the shape of the nonparametrically fitted surface, the functional form of the ACD model can be specified as a particular nonlinear form.
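For reference, the linear ACD(1,1) recursion that the nonparametric model relaxes can be simulated in a few lines; the parameter values below are illustrative assumptions.

```python
import numpy as np

def simulate_linear_acd(n, omega=0.1, alpha=0.2, beta=0.7, rng=None):
    """Simulate a linear ACD(1,1): durations x_i = psi_i * eps_i, with
    conditional mean psi_i = omega + alpha*x_{i-1} + beta*psi_{i-1}
    and unit-mean exponential innovations. The nonparametric ACD model
    in the article replaces this linear psi with an unspecified function."""
    rng = rng or np.random.default_rng(5)
    x = np.empty(n)
    psi = omega / (1 - alpha - beta)        # unconditional mean duration
    for i in range(n):
        x[i] = psi * rng.exponential()
        psi = omega + alpha * x[i] + beta * psi
    return x

durations = simulate_linear_acd(1000)
print(durations.mean())                     # approx omega/(1-alpha-beta) = 1.0
```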

12.
This paper argues that Fisher's paradox can be explained away in terms of estimator choice. We analyse, by means of Monte Carlo experiments, the small-sample properties of a large set of estimators (including virtually all available single-equation estimators), and compute critical values based on the empirical distributions of the t-statistics, for a variety of Data Generation Processes (DGPs), allowing for structural breaks, ARCH effects, etc. We show that precisely the estimators most commonly used in the literature, namely OLS, Dynamic OLS (DOLS) and non-prewhitened FMLS, have the worst small-sample performance and produce rejections of the Fisher hypothesis. If one employs the estimators with the most desirable properties (i.e., the smallest downward bias and the minimum shift in the distribution of the associated t-statistics), or if one uses the empirical critical values, the evidence based on US data is strongly supportive of the Fisher relation, consistent with many theoretical models.
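The empirical-critical-value idea can be sketched generically: simulate the DGP under the null many times, compute the t-statistic on each draw, and use the resulting quantiles in place of the standard normal ones. The toy DGP and t-statistic below are illustrative stand-ins, not the paper's specifications (which include breaks and ARCH effects).

```python
import numpy as np

def empirical_critical_values(dgp, tstat, n_reps=2000, probs=(0.025, 0.975)):
    """Empirical quantiles of a t-statistic under repeated null simulation,
    to be used as critical values in place of +/-1.96."""
    stats = np.array([tstat(dgp()) for _ in range(n_reps)])
    return np.quantile(stats, probs)

rng = np.random.default_rng(6)

def dgp(n=200):
    x = np.cumsum(rng.normal(size=n))       # highly persistent regressor
    y = rng.normal(size=n)                  # null: no relation to x
    return x, y

def tstat(data):
    """OLS slope t-statistic for y on a constant and x."""
    x, y = data
    X = np.column_stack([np.ones_like(x), x])
    b, res = np.linalg.lstsq(X, y, rcond=None)[:2]
    s2 = res[0] / (len(y) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return b[1] / se

print(empirical_critical_values(dgp, tstat))
```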

13.
This article considers testing the significance of a regressor with a near unit root in a predictive regression model. The procedures discussed are nonparametric, so one can test the significance of a regressor without specifying a functional form; the results are used to test the null hypothesis that the entire function takes the value zero. We show that the standardized test has a normal distribution regardless of whether there is a near unit root in the regressor. This is in contrast to tests based on linear regression for this model, where the tests have a nonstandard limiting distribution that depends on nuisance parameters. Our results have practical implications for testing the significance of a regressor, since there is no need to conduct pretests for a unit root in the regressor, and the same procedure can be used whether or not the regressor has a unit root. A Monte Carlo experiment explores the performance of the test for various levels of persistence of the regressors and for various linear and nonlinear alternatives; the test has superior performance against certain nonlinear alternatives. An application to stock returns shows how the test can improve inference about predictability.

14.
In this paper, we consider statistical inference for the varying-coefficient partially nonlinear model with additive measurement errors in the nonparametric part. A local bias-corrected profile nonlinear least-squares estimation procedure for the parameter in the nonlinear function and for the nonparametric function is proposed, and the asymptotic normality of the resulting estimators is established. With the empirical likelihood method, a local bias-corrected empirical log-likelihood ratio statistic for the unknown parameter, and a corrected and residual-adjusted empirical log-likelihood ratio for the nonparametric component, are constructed. We show that the resulting statistics are asymptotically chi-squared under suitable conditions. Simulations are conducted to evaluate the performance of the proposed methods; the results indicate that the empirical likelihood method is superior to the profile nonlinear least-squares method in terms of confidence regions for the parameter and pointwise confidence intervals for the nonparametric function.

15.
We propose an easy-to-implement method for making small-sample parametric inference about the root of an estimating equation expressible as a quadratic form in normal random variables. It is based on saddlepoint approximations to the distribution of the estimating equation whose unique root is a parameter's maximum likelihood estimator (MLE), while substituting conditional MLEs for the remaining (nuisance) parameters. Monotonicity of the estimating equation in its parameter argument enables us to relate these approximations to those for the estimator of interest. The proposed method is equivalent to a parametric bootstrap percentile approach in which Monte Carlo simulation is replaced by saddlepoint approximation. It finds applications in many areas of statistics including nonlinear regression, time series analysis, inference on ratios of regression parameters in linear models, and calibration. We demonstrate the method in the context of some classical examples from nonlinear regression models and ratios-of-regression-parameters problems. Simulation results show that the proposed method, apart from being generally easier to implement, yields confidence intervals with lengths and coverage probabilities that compare favourably with those obtained from several competing methods proposed in the literature over the past half-century.

16.
In this paper, we introduce the empirical likelihood (EL) method to longitudinal studies. By accounting for the dependence within subjects in the auxiliary random vectors, we propose a new weighted empirical likelihood (WEL) inference for generalized linear models with longitudinal data. We show that the weighted empirical likelihood ratio always follows an asymptotically standard chi-squared distribution no matter which working weight matrix is chosen, but a well-chosen working weight matrix can improve the efficiency of statistical inference. Simulations are conducted to demonstrate the accuracy and efficiency of the proposed WEL method, and a real data set is used to illustrate the proposed method.

17.
Estimating standard errors for diagnostic accuracy measures can be challenging for complicated models. Such problems can be addressed with bootstrap methods, which replace intractable derivations with resampled empirical distributions. We consider two cases where bootstrap methods can successfully improve our knowledge of the sampling variability of diagnostic accuracy estimators. The first application is inference for the area under the ROC curve (AUC) resulting from a functional logistic regression model, a sophisticated modelling device for describing the relationship between a dichotomous response and multiple covariates. We consider using this regression method to model the predictive effects of multiple independent variables on the occurrence of a disease; accuracy measures such as the AUC are developed from the functional regression, and asymptotic results for the empirical estimators are provided to facilitate inference. The second application is testing the difference of two weighted areas under the ROC curve (WAUCs) from a paired two-sample study. The correlation between the two WAUCs complicates the asymptotic distribution of the test statistic, so we employ bootstrap methods to obtain satisfactory inference results. Simulations and examples are supplied to confirm the merits of the bootstrap methods.
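A minimal sketch of the percentile bootstrap for the AUC, the simplest of the settings above; this is a plain case bootstrap, whereas the functional-regression and paired-WAUC applications would require tailored resampling.

```python
import numpy as np

def auc(scores, labels):
    """Empirical AUC: probability that a random positive outscores a
    random negative (ties counted as 1/2), via the Mann-Whitney form."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def bootstrap_auc_ci(scores, labels, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap confidence interval for the AUC,
    resampling cases with replacement."""
    rng = np.random.default_rng(seed)
    n, reps = len(scores), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if labels[idx].min() == labels[idx].max():
            continue                        # need both classes in a resample
        reps.append(auc(scores[idx], labels[idx]))
    a = (1 - level) / 2
    return np.quantile(reps, [a, 1 - a])

labels = np.repeat([0, 1], 100)
scores = np.random.default_rng(7).normal(loc=labels, scale=1.0)
print(auc(scores, labels), bootstrap_auc_ci(scores, labels))
```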

18.
This article investigates the asymptotic properties of a simple empirical-likelihood-based inference method for discontinuity in density. The parameter of interest is a function of two one-sided limits of the probability density function at (possibly) two cut-off points. Our approach is based on the first-order conditions from a minimum contrast problem. We investigate both first-order and second-order properties of the proposed method. We characterize the leading coverage error of our inference method and propose a coverage-error-optimal (hereafter CE-optimal) bandwidth selector. We show that the empirical likelihood ratio statistic is Bartlett correctable. An important special case is the manipulation testing problem in a regression discontinuity design (RDD), where the parameter of interest is the density difference at a known threshold. In RDD, continuity of the density of the assignment variable at the threshold is taken as a “no-manipulation” behavioral assumption, which is a testable implication of an identifying condition for the local average treatment effect. When specialized to the manipulation testing problem, the CE-optimal bandwidth selector has an explicit form. We propose a data-driven CE-optimal bandwidth selector for use in practice. Results from Monte Carlo simulations are presented, and the usefulness of our method is illustrated with an empirical example.
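A crude numerical sketch of the quantity being tested: one-sided kernel density estimates on each side of the cut-off, whose difference should be near zero absent manipulation. This ignores boundary-bias corrections and is not the article's minimum-contrast estimator or its CE-optimal bandwidth; the bandwidth and data below are illustrative assumptions.

```python
import numpy as np

def one_sided_density(x, c, h, side):
    """Crude one-sided kernel density estimate at cut-off c, using a
    triangular kernel restricted to one side (the factor 2 renormalizes
    the half-kernel so it integrates to one)."""
    u = (x - c) / h
    u = u[u >= 0] if side == "right" else -u[u < 0]
    k = np.clip(1 - u, 0, None)             # triangular kernel on [0, 1]
    return 2.0 * k.sum() / (len(x) * h)

rng = np.random.default_rng(8)
x = rng.uniform(-1, 1, 5000)
x = x[(x < 0) | (rng.uniform(size=len(x)) < 0.6)]   # thin right side: "manipulation"
c, h = 0.0, 0.2
print(one_sided_density(x, c, h, "right") - one_sided_density(x, c, h, "left"))
```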

19.
This paper presents empirical likelihood inference for a class of varying-coefficient models with error-prone covariates. We focus on the case where the covariance matrix of the measurement errors is unknown and neither repeated measurements nor validation data are available. We propose an instrumental-variable-based empirical likelihood inference method and show that the proposed empirical log-likelihood ratio is asymptotically chi-squared. Confidence intervals for the varying-coefficient functions are then constructed. Simulation studies and a real data application are used to assess the finite-sample performance of the proposed empirical likelihood procedure.

20.
Belief propagation (BP) has been applied as an approximation tool in a variety of inference problems. BP does not necessarily converge in loopy graphs, and even when it does, it is not guaranteed to provide exact inference. Even so, BP is useful in many applications due to its computational tractability. In this article, we investigate a regularized BP scheme by focusing on loopy Markov graphs (MGs) induced by a multivariate Gaussian distribution in canonical form. There is a rich literature on BP over Gaussian MGs (labelled Gaussian belief propagation, or GaBP), which is known to suffer the same problems as general BP on loopy graphs: GaBP provides the correct marginal means if it converges (which is not guaranteed), but it does not provide the exact marginal precisions. We show that our adjusted BP will always converge, with sufficient tuning, while maintaining the exact marginal means. As a further contribution we show, in an empirical study, that our GaBP variant can accelerate GaBP and compares well with other GaBP-type competitors in terms of convergence speed and accuracy of approximate marginal precisions. These improvements suggest that the principle of regularized BP should be investigated in other inference problems. The selection of the degree of regularization is addressed through two heuristics. A by-product of GaBP is that it can be used to solve linear systems of equations; the same is true for our variant, and we make an empirical comparison with the conjugate gradient method.
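For concreteness, here is a textbook synchronous GaBP sketch (not the article's regularized variant): each message carries a precision and a precision-scaled mean, and at convergence the marginal means are exact while the marginal precisions are generally approximate on loopy graphs.

```python
import numpy as np

def gabp(J, h, n_iter=100):
    """Gaussian belief propagation on the graph induced by precision
    matrix J and potential vector h. Returns approximate marginal
    means and precisions; means are exact if the iteration converges."""
    n = len(h)
    P = np.zeros((n, n))   # P[i, j]: precision of message i -> j
    M = np.zeros((n, n))   # M[i, j]: precision-scaled mean of message i -> j
    nbr = [np.flatnonzero((J[i] != 0) & (np.arange(n) != i)) for i in range(n)]
    for _ in range(n_iter):
        P_new, M_new = np.zeros_like(P), np.zeros_like(M)
        for i in range(n):
            for j in nbr[i]:
                others = [k for k in nbr[i] if k != j]
                p = J[i, i] + P[others, i].sum()   # cavity precision at i
                m = h[i] + M[others, i].sum()      # cavity linear term at i
                P_new[i, j] = -J[i, j] ** 2 / p
                M_new[i, j] = -J[i, j] * m / p
        P, M = P_new, M_new
    prec = J.diagonal() + P.sum(axis=0)            # incoming messages per node
    mean = (h + M.sum(axis=0)) / prec
    return mean, prec

J = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
h = np.array([1.0, 0.0, 1.0])
mean, prec = gabp(J, h)
print(mean, np.linalg.solve(J, h))                 # means agree on this tree
```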
