Similar Documents
20 similar documents found.
1.
This paper extends the conditional logit approach (Rasch, Andersen, Chamberlain) used in panel data models of binary variables with correlated fixed effects and strictly exogenous regressors. In a two‐period two‐state model, necessary and sufficient conditions on the joint distribution function of the individual‐and‐period specific shocks are given such that the sum of individual binary variables across time is a sufficient statistic for the individual effect. By extending a result of Chamberlain, it is shown that root‐n consistent regular estimators can be constructed in panel binary models if and only if the property of sufficiency holds. In applied work, the estimation method amounts to quasi‐differencing the binary variables as if they were continuous variables and transforming a panel data model into a cross‐section model. Semiparametric approaches can then be readily applied.
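The quasi-differencing idea is easiest to see in the logistic special case, where conditioning on the sum y_i1 + y_i2 = 1 turns the problem into a cross-section logit in the differenced regressor. The sketch below is a toy simulation (the data-generating process and all variable names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta_true = 5000, 1.0
x1, x2 = rng.normal(size=n), rng.normal(size=n)
alpha = 0.5 * (x1 + x2) + rng.normal(size=n)        # fixed effect, correlated with x
y1 = (beta_true * x1 + alpha + rng.logistic(size=n) > 0).astype(float)
y2 = (beta_true * x2 + alpha + rng.logistic(size=n) > 0).astype(float)

# Condition on y1 + y2 = 1: for these "switchers" the fixed effect cancels and
# P(y2 = 1 | y1 + y2 = 1) = Lambda((x2 - x1) * beta), a logit in the
# quasi-differenced regressor.
sw = y1 + y2 == 1
dx, z = (x2 - x1)[sw], y2[sw]

beta = 0.0
for _ in range(25):                                  # Newton steps for the logit MLE
    pr = 1.0 / (1.0 + np.exp(-beta * dx))
    beta += np.sum((z - pr) * dx) / np.sum(pr * (1 - pr) * dx ** 2)
print(round(beta, 2))
```

The sum y1 + y2 being sufficient for the fixed effect is exactly what lets the conditional likelihood drop alpha entirely.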

2.
This paper considers regression models for cross‐section data that exhibit cross‐section dependence due to common shocks, such as macroeconomic shocks. The paper analyzes the properties of least squares (LS) estimators in this context. The results of the paper allow for any form of cross‐section dependence and heterogeneity across population units. The probability limits of the LS estimators are determined, and necessary and sufficient conditions are given for consistency. The asymptotic distributions of the estimators are found to be mixed normal after recentering and scaling. The t, Wald, and F statistics are found to have asymptotic standard normal, χ², and scaled χ² distributions, respectively, under the null hypothesis when the conditions required for consistency of the parameter under test hold. However, the absolute values of t, Wald, and F statistics are found to diverge to infinity under the null hypothesis when these conditions fail. Confidence intervals exhibit similarly dichotomous behavior. Hence, common shocks are found to be innocuous in some circumstances, but quite problematic in others. Models with factor structures for errors and regressors are considered. Using the general results, conditions are determined under which consistency of the LS estimators holds and fails in models with factor structures. The results are extended to cover heterogeneous and functional factor structures in which common factors have different impacts on different population units.

3.
We consider the estimation of dynamic panel data models in the presence of incidental parameters in both dimensions: individual fixed‐effects and time fixed‐effects, as well as incidental parameters in the variances. We adopt the factor analytical approach by estimating the sample variance of individual effects rather than the effects themselves. In the presence of cross‐sectional heteroskedasticity, the factor method estimates the average of the cross‐sectional variances instead of the individual variances. The method thereby eliminates the incidental‐parameter problem in the means and in the variances over the cross‐sectional dimension. We further show that estimating the time effects and heteroskedasticities in the time dimension does not lead to the incidental‐parameter bias even when T and N are comparable. Moreover, efficient and robust estimation is obtained by jointly estimating heteroskedasticities.

4.
Matching estimators for average treatment effects are widely used in evaluation research despite the fact that their large sample properties have not been established in many cases. The absence of formal results in this area may be partly due to the fact that standard asymptotic expansions do not apply to matching estimators with a fixed number of matches because such estimators are highly nonsmooth functionals of the data. In this article we develop new methods for analyzing the large sample properties of matching estimators and establish a number of new results. We focus on matching with replacement with a fixed number of matches. First, we show that matching estimators are not N^{1/2}‐consistent in general and describe conditions under which matching estimators do attain N^{1/2}‐consistency. Second, we show that even in settings where matching estimators are N^{1/2}‐consistent, simple matching estimators with a fixed number of matches do not attain the semiparametric efficiency bound. Third, we provide a consistent estimator for the large sample variance that does not require consistent nonparametric estimation of unknown functions. Software for implementing these methods is available in Matlab, Stata, and R.
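The estimator studied here, matching with replacement and a fixed number of matches, can be sketched in a few lines for the simplest case M = 1 and a single covariate. This is a toy illustration with an invented data-generating process, not the authors' software:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)                                 # single covariate
d = (x + rng.normal(size=n) > 0).astype(int)           # selection on observables
y = 2.0 * d + x + rng.normal(size=n)                   # true ATE = 2

treated = np.where(d == 1)[0]
control = np.where(d == 0)[0]

def nn_match(i, pool):
    """Index of the unit in `pool` whose covariate is closest to unit i."""
    return pool[np.argmin(np.abs(x[pool] - x[i]))]

# Impute each unit's missing potential outcome from its single nearest
# neighbor (M = 1) in the opposite arm, matching with replacement.
y1_hat, y0_hat = y.copy(), y.copy()
for i in range(n):
    if d[i] == 1:
        y0_hat[i] = y[nn_match(i, control)]
    else:
        y1_hat[i] = y[nn_match(i, treated)]

ate_hat = (y1_hat - y0_hat).mean()
print(round(ate_hat, 2))
```

The nonsmoothness the abstract refers to is visible in the argmin step: an infinitesimal change in the data can switch which unit is matched.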

5.
This paper considers large N and large T panel data models with unobservable multiple interactive effects, which are correlated with the regressors. In earnings studies, for example, workers' motivation, persistence, and diligence combine to influence earnings in addition to the usual argument of innate ability. In macroeconomics, interactive effects represent unobservable common shocks and their heterogeneous impacts on cross sections. We consider identification, consistency, and the limiting distribution of the interactive‐effects estimator. Under both large N and large T, the estimator is shown to be consistent, which is valid in the presence of correlations and heteroskedasticities of unknown form in both dimensions. We also derive the constrained estimator and its limiting distribution, imposing additivity coupled with interactive effects. The problem of testing additive versus interactive effects is also studied. In addition, we consider identification and estimation of models in the presence of a grand mean, time‐invariant regressors, and common regressors. Given identification, the rate of convergence and limiting results continue to hold.

6.
This paper studies the estimation of dynamic discrete games of incomplete information. Two main econometric issues appear in the estimation of these models: the indeterminacy problem associated with the existence of multiple equilibria and the computational burden in the solution of the game. We propose a class of pseudo maximum likelihood (PML) estimators that deals with these problems, and we study the asymptotic and finite sample properties of several estimators in this class. We first focus on two‐step PML estimators, which, although they are attractive for their computational simplicity, have some important limitations: they are seriously biased in small samples; they require consistent nonparametric estimators of players' choice probabilities in the first step, which are not always available; and they are asymptotically inefficient. Second, we show that a recursive extension of the two‐step PML, which we call nested pseudo likelihood (NPL), addresses those drawbacks at a relatively small additional computational cost. The NPL estimator is particularly useful in applications where consistent nonparametric estimates of choice probabilities either are not available or are very imprecise, e.g., models with permanent unobserved heterogeneity. Finally, we illustrate these methods in Monte Carlo experiments and in an empirical application to a model of firm entry and exit in oligopoly markets using Chilean data from several retail industries.

7.
We present a simple way to estimate the effects of changes in a vector of observable variables X on a limited dependent variable Y when Y is a general nonseparable function of X and unobservables, and X is independent of the unobservables. We treat models in which Y is censored from above, below, or both. The basic idea is to first estimate the derivative of the conditional mean of Y given X at x with respect to x on the uncensored sample without correcting for the effect of x on the censored population. We then correct the derivative for the effects of the selection bias. We discuss nonparametric and semiparametric estimators for the derivative. We also discuss the cases of discrete regressors and of endogenous regressors in both cross section and panel data contexts.

8.
This paper presents a solution to an important econometric problem, namely the root n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the “true” value of the regressors thanks to a useful property of the Fourier transform: The Fourier transform converts the integral equations that relate the distribution of the unobserved “true” variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the “true,” unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach.
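The paper's Fourier-transform machinery recovers arbitrary moments of the unobserved regressor; the role of the repeated observation is already visible in a toy moment calculation. With two measurements x1 = x* + e1 and x2 = x* + e2 and independent errors, E[x1] = E[x*] and Cov(x1, x2) = Var(x*). A sketch under those assumptions (this is elementary covariance algebra, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x_true = rng.gamma(shape=2.0, scale=1.0, size=n)   # unobserved "true" regressor
x1 = x_true + rng.normal(scale=0.7, size=n)        # two noisy repeated
x2 = x_true + rng.normal(scale=0.7, size=n)        # observations of it

# Independent errors imply E[x1] = E[x*] and Cov(x1, x2) = Var(x*).
mean_hat = x1.mean()
var_hat = np.cov(x1, x2)[0, 1]
print(round(mean_hat, 2), round(var_hat, 2))
```

Here the gamma(2, 1) "true" variable has mean 2 and variance 2, and both are recovered despite neither measurement being error-free; the Fourier approach generalizes this idea to all moments.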

9.
This paper provides a novel mechanism for identifying and estimating latent group structures in panel data using penalized techniques. We consider both linear and nonlinear models where the regression coefficients are heterogeneous across groups but homogeneous within a group and the group membership is unknown. Two approaches are considered—penalized profile likelihood (PPL) estimation for the general nonlinear models without endogenous regressors, and penalized GMM (PGMM) estimation for linear models with endogeneity. In both cases, we develop a new variant of Lasso called classifier‐Lasso (C‐Lasso) that serves to shrink individual coefficients to the unknown group‐specific coefficients. C‐Lasso achieves simultaneous classification and consistent estimation in a single step and the classification exhibits the desirable property of uniform consistency. For PPL estimation, C‐Lasso also achieves the oracle property so that group‐specific parameter estimators are asymptotically equivalent to infeasible estimators that use individual group identity information. For PGMM estimation, the oracle property of C‐Lasso is preserved in some special cases. Simulations demonstrate good finite‐sample performance of the approach in both classification and estimation. Empirical applications to both linear and nonlinear models are presented.

10.
In nonlinear panel data models, the incidental parameter problem remains a challenge to econometricians. Available solutions are often based on ingenious, model‐specific methods. In this paper, we propose a systematic approach to construct moment restrictions on common parameters that are free from the individual fixed effects. This is done by an orthogonal projection that differences out the unknown distribution function of individual effects. Our method applies generally in likelihood models with continuous dependent variables where a condition of non‐surjectivity holds. The resulting method‐of‐moments estimators are root‐N consistent (for fixed T) and asymptotically normal, under regularity conditions that we spell out. Several examples and a small‐scale simulation exercise complete the paper.

11.
This paper considers tests for structural instability of short duration, such as at the end of the sample. The key feature of the testing problem is that the number, m, of observations in the period of potential change is relatively small—possibly as small as one. The well‐known F test of Chow (1960) for this problem only applies in a linear regression model with normally distributed iid errors and strictly exogenous regressors, even when the total number of observations, n+m, is large. We generalize the F test to cover regression models with much more general error processes, regressors that are not strictly exogenous, and estimation by instrumental variables as well as least squares. In addition, we extend the F test to nonlinear models estimated by generalized method of moments and maximum likelihood. Asymptotic critical values that are valid as n→∞ with m fixed are provided using a subsampling‐like method. The results apply quite generally to processes that are strictly stationary and ergodic under the null hypothesis of no structural instability.

12.
Many approaches to estimation of panel models are based on an average or integrated likelihood that assigns weights to different values of the individual effects. Fixed effects, random effects, and Bayesian approaches all fall into this category. We provide a characterization of the class of weights (or priors) that produce estimators that are first‐order unbiased. We show that such bias‐reducing weights will depend on the data in general unless an orthogonal reparameterization or an essentially equivalent condition is available. Two intuitively appealing weighting schemes are discussed. We argue that asymptotically valid confidence intervals can be read from the posterior distribution of the common parameters when N and T grow at the same rate. Next, we show that random effects estimators are not bias reducing in general and we discuss important exceptions. Moreover, the bias depends on the Kullback–Leibler distance between the population distribution of the effects and its best approximation in the random effects family. Finally, we show that, in general, standard random effects estimation of marginal effects is inconsistent for large T, whereas the posterior mean of the marginal effect is large‐T consistent, and we provide conditions for bias reduction. Some examples and Monte Carlo experiments illustrate the results.

13.
In this paper we develop some econometric theory for factor models of large dimensions. The focus is the determination of the number of factors (r), which is an unresolved issue in the rapidly growing literature on multifactor models. We first establish the convergence rate for the factor estimates that will allow for consistent estimation of r. We then propose some panel criteria and show that the number of factors can be consistently estimated using the criteria. The theory is developed under the framework of large cross‐sections (N) and large time dimensions (T). No restriction is imposed on the relation between N and T. Simulations show that the proposed criteria have good finite sample properties in many configurations of the panel data encountered in practice.
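The panel criteria trade off fit against a penalty that grows in both N and T. The sketch below implements one criterion of this type, the commonly cited IC_p2 form ln V(k) + k((N+T)/NT) ln min(N, T), where V(k) is the mean squared residual after extracting k principal-component factors; the simulated panel and the choice of kmax are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, r = 100, 100, 3
F = rng.normal(size=(T, r))                   # true factors
L = rng.normal(size=(N, r))                   # loadings
X = F @ L.T + rng.normal(size=(T, N))         # panel with r = 3 factors

def ic_p2(X, k):
    """Information criterion: log residual variance plus a penalty in k."""
    T, N = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k] if k > 0 else np.zeros_like(X)
    V = np.mean((X - Xk) ** 2)                # fit with k PC factors removed
    return np.log(V) + k * (N + T) / (N * T) * np.log(min(N, T))

kmax = 8
r_hat = min(range(kmax + 1), key=lambda k: ic_p2(X, k))
print(r_hat)
```

The penalty must vanish slowly enough to avoid overfitting yet fast enough to admit every true factor, which is where the large-N, large-T theory enters.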

14.
This paper proposes two new estimators for determining the number of factors (r) in static approximate factor models. We exploit the well‐known fact that the r largest eigenvalues of the variance matrix of N response variables grow unboundedly as N increases, while the other eigenvalues remain bounded. The new estimators are obtained simply by maximizing the ratio of two adjacent eigenvalues. Our simulation results provide promising evidence for the two estimators.
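The eigenvalue-ratio idea fits in a few lines: sort the eigenvalues of the sample variance matrix and pick the position where the ratio of adjacent eigenvalues peaks. A minimal sketch on simulated data (the panel dimensions and kmax are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, r = 100, 200, 2
X = rng.normal(size=(T, r)) @ rng.normal(size=(N, r)).T + rng.normal(size=(T, N))

# Eigenvalues of the N x N sample variance matrix, in decreasing order.
eig = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]

# Estimate r by maximizing the ratio of adjacent eigenvalues: the gap between
# the diverging "factor" eigenvalues and the bounded "noise" eigenvalues
# makes the ratio spike at the true number of factors.
kmax = 10
ratios = eig[:kmax] / eig[1:kmax + 1]
r_hat = int(np.argmax(ratios)) + 1
print(r_hat)
```

Unlike information-criterion approaches, no penalty function has to be tuned; the spike in the ratio does the work.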

15.
In this paper we study identification and estimation of a correlated random coefficients (CRC) panel data model. The outcome of interest varies linearly with a vector of endogenous regressors. The coefficients on these regressors are heterogeneous across units and may covary with them. We consider the average partial effect (APE) of a small change in the regressor vector on the outcome (cf. Chamberlain (1984), Wooldridge (2005a)). Chamberlain (1992) calculated the semiparametric efficiency bound for the APE in our model and proposed a √N‐consistent estimator. Nonsingularity of the APE's information bound, and hence the appropriateness of Chamberlain's (1992) estimator, requires (i) the time dimension of the panel (T) to strictly exceed the number of random coefficients (p) and (ii) strong conditions on the time series properties of the regressor vector. We demonstrate irregular identification of the APE when T = p and for more persistent regressor processes. Our approach exploits the different identifying content of the subpopulations of stayers—or units whose regressor values change little across periods—and movers—or units whose regressor values change substantially across periods. We propose a feasible estimator based on our identification result and characterize its large sample properties. While irregularity precludes our estimator from attaining parametric rates of convergence, its limiting distribution is normal and inference is straightforward to conduct. Standard software may be used to compute point estimates and standard errors. We use our methods to estimate the average elasticity of calorie consumption with respect to total outlay for a sample of poor Nicaraguan households.

16.
We consider the situation when there is a large number of series, N, each with T observations, and each series has some predictive ability for some variable of interest. A methodology of growing interest is first to estimate common factors from the panel of data by the method of principal components and then to augment an otherwise standard regression with the estimated factors. In this paper, we show that the least squares estimates obtained from these factor‐augmented regressions are consistent and asymptotically normal if √T/N → 0. The conditional mean predicted by the estimated factors is consistent and asymptotically normal. Except when T/N goes to zero, inference should take into account the effect of “estimated regressors” on the estimated conditional mean. We present analytical formulas for prediction intervals that are valid regardless of the magnitude of N/T and that can also be used when the factors are nonstationary.

17.
This paper develops a new methodology that makes use of the factor structure of large dimensional panels to understand the nature of nonstationarity in the data. We refer to it as PANIC—Panel Analysis of Nonstationarity in Idiosyncratic and Common components. PANIC can detect whether the nonstationarity in a series is pervasive, or variable‐specific, or both. It can determine the number of independent stochastic trends driving the common factors. PANIC also permits valid pooling of individual statistics and thus panel tests can be constructed. A distinctive feature of PANIC is that it tests the unobserved components of the data instead of the observed series. The key to PANIC is consistent estimation of the space spanned by the unobserved common factors and the idiosyncratic errors without knowing a priori whether these are stationary or integrated processes. We provide a rigorous theory for estimation and inference and show that the tests have good finite sample properties.

18.
This paper analyzes the conditions under which consistent estimation can be achieved in instrumental variables (IV) regression when the available instruments are weak and the number of instruments, K_n, goes to infinity with the sample size. We show that consistent estimation depends importantly on the strength of the instruments as measured by r_n, the rate of growth of the so‐called concentration parameter, and also on K_n. In particular, when K_n→∞, the concentration parameter can grow, even if each individual instrument is only weakly correlated with the endogenous explanatory variables, and consistency of certain estimators can be established under weaker conditions than have previously been assumed in the literature. Hence, the use of many weak instruments may actually improve the performance of certain point estimators. More specifically, we find that the limited information maximum likelihood (LIML) estimator and the bias‐corrected two‐stage least squares (B2SLS) estimator are consistent when K_n^{1/2}/r_n→0, while the two‐stage least squares (2SLS) estimator is consistent only if K_n/r_n→0 as n→∞. These consistency results suggest that LIML and B2SLS are more robust to instrument weakness than 2SLS.
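The 2SLS versus LIML contrast is easy to see in a simulation. The sketch below uses the standard k-class form of LIML, with kappa the smallest eigenvalue of (W'M_Z W)^{-1}(W'W) for W = [y, x]; the design (100 instruments, each with first-stage coefficient 0.03) is invented for illustration and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
n, K, beta = 2000, 100, 1.0
Z = rng.normal(size=(n, K))                   # many instruments...
pi = np.full(K, 0.03)                         # ...each only weakly relevant
u = rng.normal(size=n)
v = 0.8 * u + 0.6 * rng.normal(size=n)        # endogeneity: corr(u, v) = 0.8
x = Z @ pi + v
y = beta * x + u

P = Z @ np.linalg.solve(Z.T @ Z, Z.T)         # projection onto the instruments
M = np.eye(n) - P

tsls = (x @ P @ y) / (x @ P @ x)              # two-stage least squares

# LIML as a k-class estimator with kappa the smallest eigenvalue of
# (W' M_Z W)^{-1} (W' W), W = [y, x].
W = np.column_stack([y, x])
kappa = np.linalg.eigvals(np.linalg.solve(W.T @ M @ W, W.T @ W)).real.min()
liml = (x @ y - kappa * (x @ M @ y)) / (x @ x - kappa * (x @ M @ x))
print(round(tsls, 3), round(liml, 3))
```

With these (illustrative) settings the concentration parameter grows even though no single instrument is strong, and 2SLS is pulled noticeably toward the biased OLS limit while LIML stays close to the true coefficient, in line with the robustness ranking in the abstract.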

19.
This paper applies some general concepts in decision theory to a linear panel data model. A simple version of the model is an autoregression with a separate intercept for each unit in the cross section, with errors that are independent and identically distributed with a normal distribution. There is a parameter of interest γ and a nuisance parameter τ, an N×K matrix, where N is the cross‐section sample size. The focus is on dealing with the incidental parameters problem created by a potentially high‐dimension nuisance parameter. We adopt a “fixed‐effects” approach that seeks to protect against any sequence of incidental parameters. We transform τ to (δ, ρ, ω), where δ is a J×K matrix of coefficients from the least‐squares projection of τ on an N×J matrix x of strictly exogenous variables, ρ is a K×K symmetric, positive semidefinite matrix obtained from the residual sums of squares and cross‐products in the projection of τ on x, and ω is an (N−J)×K matrix whose columns are orthogonal and have unit length. The model is invariant under the actions of a group on the sample space and the parameter space, and we find a maximal invariant statistic. The distribution of the maximal invariant statistic does not depend upon ω. There is a unique invariant distribution for ω. We use this invariant distribution as a prior distribution to obtain an integrated likelihood function. It depends upon the observation only through the maximal invariant statistic. We use the maximal invariant statistic to construct a marginal likelihood function, so we can eliminate ω by integration with respect to the invariant prior distribution or by working with the marginal likelihood function. The two approaches coincide. Decision rules based on the invariant distribution for ω have a minimax property.
Given a loss function that does not depend upon ω and given a prior distribution for (γ, δ, ρ), we show how to minimize the average—with respect to the prior distribution for (γ, δ, ρ)—of the maximum risk, where the maximum is with respect to ω. There is a family of prior distributions for (δ, ρ) that leads to a simple closed form for the integrated likelihood function. This integrated likelihood function coincides with the likelihood function for a normal, correlated random‐effects model. Under random sampling, the corresponding quasi maximum likelihood estimator is consistent for γ as N→∞, with a standard limiting distribution. The limit results do not require normality or homoskedasticity (conditional on x) assumptions.

20.
This paper develops estimators for quantile treatment effects under the identifying restriction that selection to treatment is based on observable characteristics. Identification is achieved without requiring computation of the conditional quantiles of the potential outcomes. Instead, the identification results for the marginal quantiles lead to an estimation procedure for the quantile treatment effect parameters that has two steps: nonparametric estimation of the propensity score and computation of the difference between the solutions of two separate minimization problems. Root‐N consistency, asymptotic normality, and achievement of the semiparametric efficiency bound are shown for that estimator. A consistent estimation procedure for the variance is also presented. Finally, the method developed here is applied to evaluation of a job training program and to a Monte Carlo exercise. Results from the empirical application indicate that the method works relatively well even for a data set with limited overlap between treated and controls in the support of covariates. The Monte Carlo study shows that, for a relatively small sample size, the method produces estimates with good precision and low bias, especially for middle quantiles.
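The second step, marginal quantiles of the two potential outcomes via propensity-score weighting, can be sketched compactly. For simplicity this toy version treats the propensity score as known and computes the weighted quantile through the weighted CDF rather than by explicit check-function minimization; the data-generating process is invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-x))                  # propensity score (known here)
d = (rng.uniform(size=n) < p).astype(float)   # treatment: selection on x only
y = x + d + rng.normal(size=n)                # constant QTE of 1 at all quantiles

def weighted_quantile(y, w, tau):
    """tau-quantile of y under weights w, read off the weighted CDF."""
    order = np.argsort(y)
    cdf = np.cumsum(w[order]) / np.sum(w)
    return y[order][np.searchsorted(cdf, tau)]

tau = 0.5
q1 = weighted_quantile(y, d / p, tau)               # marginal quantile of Y(1)
q0 = weighted_quantile(y, (1 - d) / (1 - p), tau)   # marginal quantile of Y(0)
qte = q1 - q0
print(round(qte, 2))
```

The weights d/p and (1 − d)/(1 − p) reweight each arm back to the full population, so no conditional quantiles of the potential outcomes are ever computed, which is the point the abstract emphasizes.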
