Similar Documents
20 similar documents retrieved.
1.
This paper develops an inferential theory for factor models of large dimensions. The principal components estimator is considered because it is easy to compute and is asymptotically equivalent to the maximum likelihood estimator (if normality is assumed). We derive the rate of convergence and the limiting distributions of the estimated factors, factor loadings, and common components. The theory is developed within the framework of large cross sections (N) and a large time dimension (T), to which classical factor analysis does not apply. We show that the estimated common components are asymptotically normal with a convergence rate equal to the minimum of the square roots of N and T. The estimated factors and their loadings are generally asymptotically normal, although not always so. The convergence rate of the estimated factors and factor loadings can be faster than that of the estimated common components. These results are obtained under general conditions that allow for correlations and heteroskedasticities in both dimensions. Stronger results are obtained when the idiosyncratic errors are serially uncorrelated and homoskedastic. A necessary and sufficient condition for consistency is derived for large N but fixed T.
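
Since the principal components estimator described above is easy to compute, a minimal sketch may help fix ideas. The code below estimates factors and loadings from a T × N panel X by principal components, using the conventional normalization F'F/T = I; the function name and the simulated example are illustrative assumptions, not the paper's own code.

import numpy as np

def pc_estimate(X, r):
    """Principal components estimator for X = F Lam' + e.
    X : (T, N) panel; r : number of factors."""
    T, N = X.shape
    # Factors are sqrt(T) times the eigenvectors of XX'/(TN) associated
    # with the r largest eigenvalues (normalization F'F/T = I_r).
    eigval, eigvec = np.linalg.eigh(X @ X.T / (T * N))
    F = np.sqrt(T) * eigvec[:, ::-1][:, :r]   # (T, r) estimated factors
    Lam = X.T @ F / T                         # (N, r) estimated loadings
    return F, Lam, F @ Lam.T                  # factors, loadings, common components

# Simulated example: N = T = 200 with two factors
rng = np.random.default_rng(0)
T, N, r = 200, 200, 2
F0, L0 = rng.normal(size=(T, r)), rng.normal(size=(N, r))
F, Lam, common = pc_estimate(F0 @ L0.T + rng.normal(size=(T, N)), r)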

2.
In this paper we derive the asymptotic properties of within groups (WG), GMM, and LIML estimators for an autoregressive model with random effects when both T and N tend to infinity. GMM and LIML are consistent and asymptotically equivalent to the WG estimator. When T/N → 0, the fixed-T results for GMM and LIML remain valid, but WG, although consistent, has an asymptotic bias in its asymptotic distribution. When T/N tends to a positive constant, the WG, GMM, and LIML estimators exhibit negative asymptotic biases of order 1/T, 1/N, and 1/(2N − T), respectively. In addition, the crude GMM estimator that neglects the autocorrelation in first-differenced errors is inconsistent as T/N → c > 0, despite being consistent for fixed T. Finally, we discuss the properties of a random effects pseudo MLE with unrestricted initial conditions when both T and N tend to infinity.
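
As a point of reference for the WG estimator discussed above, here is a minimal sketch for the simplest case, an AR(1) panel with individual effects: demean each unit's series and run pooled OLS of y_it on y_{i,t-1}. The function name and data layout are assumptions for illustration; in simulations with small T, the 1/T bias the abstract refers to is readily visible in this estimator.

import numpy as np

def wg_ar1(Y):
    """Within-groups estimate of alpha in y_it = alpha*y_{i,t-1} + eta_i + v_it.
    Y : (N, T) panel of outcomes."""
    y_lag, y_cur = Y[:, :-1], Y[:, 1:]
    y_lag = y_lag - y_lag.mean(axis=1, keepdims=True)  # within transformation
    y_cur = y_cur - y_cur.mean(axis=1, keepdims=True)
    return (y_lag * y_cur).sum() / (y_lag ** 2).sum()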

3.
This paper presents a new approach to estimation and inference in panel data models with a general multifactor error structure. The unobserved factors and the individual-specific errors are allowed to follow arbitrary stationary processes, and the number of unobserved factors need not be estimated. The basic idea is to filter the individual-specific regressors by means of cross-section averages such that asymptotically as the cross-section dimension (N) tends to infinity, the differential effects of unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by least squares applied to auxiliary regressions where the observed regressors are augmented with cross-sectional averages of the dependent variable and the individual-specific regressors. A number of estimators (referred to as common correlated effects (CCE) estimators) are proposed and their asymptotic distributions are derived. The small sample properties of mean group and pooled CCE estimators are investigated by Monte Carlo experiments, showing that the CCE estimators have satisfactory small sample properties even under a substantial degree of heterogeneity and dynamics, and for relatively small values of N and T.
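
A minimal sketch of the pooled CCE estimator described above: least squares after partialling out cross-section averages of the dependent variable and the regressors. The data layout (Y as N × T, X as N × T × k) and the function name are illustrative assumptions.

import numpy as np

def cce_pooled(Y, X):
    """Y : (N, T) outcomes; X : (N, T, k) regressors.
    Returns the pooled CCE slope estimate (k,)."""
    N, T, k = X.shape
    ybar = Y.mean(axis=0)                           # (T,) cross-section average of y
    xbar = X.mean(axis=0)                           # (T, k) cross-section averages of x
    Z = np.column_stack([ybar, xbar, np.ones(T)])   # (T, k+2) augmentation
    M = np.eye(T) - Z @ np.linalg.pinv(Z)           # projects off the averages
    A = sum(X[i].T @ M @ X[i] for i in range(N))
    b = sum(X[i].T @ M @ Y[i] for i in range(N))
    return np.linalg.solve(A, b)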

4.
This paper introduces a nonparametric Granger-causality test for covariance stationary linear processes under, possibly, the presence of long-range dependence. We show that the test is consistent and has power against contiguous alternatives converging to the parametric rate T^(-1/2). Since the test is based on estimates of the parameters of the representation of a VAR model as a, possibly, two-sided infinite distributed lag model, we first show that a modification of Hannan's (1963, 1967) estimator is root-T consistent and asymptotically normal for the coefficients of such a representation. When the data are long-range dependent, this method of estimation becomes more attractive than least squares, since the latter can be neither root-T consistent nor asymptotically normal as is the case with short-range dependent data.

5.
In this paper we develop some econometric theory for factor models of large dimensions. The focus is the determination of the number of factors (r), which is an unresolved issue in the rapidly growing literature on multifactor models. We first establish the convergence rate for the factor estimates that will allow for consistent estimation of r. We then propose some panel criteria and show that the number of factors can be consistently estimated using the criteria. The theory is developed under the framework of large cross-sections (N) and large time dimensions (T). No restriction is imposed on the relation between N and T. Simulations show that the proposed criteria have good finite sample properties in many configurations of the panel data encountered in practice.
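
For concreteness, the sketch below selects the number of factors with a panel information criterion of the IC_p2 form associated with this line of work, IC(k) = ln V(k) + k ((N+T)/(NT)) ln min(N, T), where V(k) is the average squared residual after extracting k principal-component factors. Treat it as an illustration of the criterion-minimization idea rather than a faithful reproduction of the paper's estimator.

import numpy as np

def select_r(X, kmax):
    """Choose the number of factors for a (T, N) panel X, 1 <= k <= kmax."""
    T, N = X.shape
    _, eigvec = np.linalg.eigh(X @ X.T / (T * N))
    eigvec = eigvec[:, ::-1]                       # eigenvectors, largest first
    penalty = (N + T) / (N * T) * np.log(min(N, T))
    best_k, best_ic = 0, np.inf
    for k in range(1, kmax + 1):
        F = np.sqrt(T) * eigvec[:, :k]             # k principal-component factors
        resid = X - F @ (X.T @ F / T).T            # residual after k factors
        V = (resid ** 2).mean()                    # average squared residual
        ic = np.log(V) + k * penalty
        if ic < best_ic:
            best_k, best_ic = k, ic
    return best_k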

6.
For vectors z and w and scalar v, let r(v, z, w) be a function that can be nonparametrically estimated consistently and asymptotically normally, such as a distribution, density, or conditional mean regression function. We provide consistent, asymptotically normal nonparametric estimators for the functions G and H, where r(v, z, w) = H[vG(z), w], and some related models. This framework encompasses homothetic and homothetically separable functions, and transformed partly additive models r(v, z, w) = h[v + g(z), w] for unknown functions g and h. Such models reduce the curse of dimensionality, provide a natural generalization of linear index models, and are widely used in utility, production, and cost function applications. We also provide an estimator of G that is oracle efficient, achieving the same performance as an estimator based on local least squares when H is known.

7.
We develop results for the use of Lasso and post-Lasso methods to form first-stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p. Our results apply even when p is much larger than the sample size, n. We show that the IV estimator based on using Lasso or post-Lasso in the first stage is root-n consistent and asymptotically normal when the first stage is approximately sparse, that is, when the conditional expectation of the endogenous variables given the instruments can be well-approximated by a relatively small set of variables whose identities may be unknown. We also show that the estimator is semiparametrically efficient when the structural error is homoscedastic. Notably, our results allow for imperfect model selection, and do not rely upon the unrealistic "beta-min" conditions that are widely used to establish validity of inference following model selection (see also Belloni, Chernozhukov, and Hansen (2011b)). In simulation experiments, the Lasso-based IV estimator with a data-driven penalty performs well compared to recently advocated many-instrument robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the Lasso-based IV estimator outperforms an intuitive benchmark. Optimal instruments are conditional expectations. In developing the IV results, we establish a series of new results for Lasso and post-Lasso estimators of nonparametric conditional expectation functions which are of independent theoretical and practical interest. We construct a modification of Lasso designed to deal with non-Gaussian, heteroscedastic disturbances that uses a data-weighted ℓ1-penalty function. By innovatively using moderate deviation theory for self-normalized sums, we provide convergence rates for the resulting Lasso and post-Lasso estimators that are as sharp as the corresponding rates in the homoscedastic Gaussian case under the condition that log p = o(n^(1/3)). We also provide a data-driven method for choosing the penalty level that must be specified in obtaining Lasso and post-Lasso estimates and establish its asymptotic validity under non-Gaussian, heteroscedastic disturbances.
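
A minimal sketch of the two-step idea: a Lasso first stage predicting the endogenous regressor from many instruments, followed by IV with the fitted value as the instrument. Note the paper uses a data-driven plug-in penalty and post-Lasso refitting; scikit-learn's cross-validated LassoCV is used here only as a stand-in, and all variable names are assumptions.

import numpy as np
from sklearn.linear_model import LassoCV

def lasso_iv(y, d, Z):
    """y : (n,) outcome; d : (n,) endogenous regressor; Z : (n, p) instruments."""
    d_hat = LassoCV(cv=5).fit(Z, d).predict(Z)     # first-stage prediction
    D = np.column_stack([np.ones_like(d), d])      # structural regressors
    W = np.column_stack([np.ones_like(d), d_hat])  # instruments for them
    return np.linalg.solve(W.T @ D, W.T @ y)[1]    # IV coefficient on d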

8.
Many approaches to estimation of panel models are based on an average or integrated likelihood that assigns weights to different values of the individual effects. Fixed effects, random effects, and Bayesian approaches all fall into this category. We provide a characterization of the class of weights (or priors) that produce estimators that are first-order unbiased. We show that such bias-reducing weights will depend on the data in general unless an orthogonal reparameterization or an essentially equivalent condition is available. Two intuitively appealing weighting schemes are discussed. We argue that asymptotically valid confidence intervals can be read from the posterior distribution of the common parameters when N and T grow at the same rate. Next, we show that random effects estimators are not bias reducing in general and we discuss important exceptions. Moreover, the bias depends on the Kullback–Leibler distance between the population distribution of the effects and its best approximation in the random effects family. Finally, we show that, in general, standard random effects estimation of marginal effects is inconsistent for large T, whereas the posterior mean of the marginal effect is large-T consistent, and we provide conditions for bias reduction. Some examples and Monte Carlo experiments illustrate the results.

9.
In nonlinear panel data models, the incidental parameter problem remains a challenge to econometricians. Available solutions are often based on ingenious, model-specific methods. In this paper, we propose a systematic approach to construct moment restrictions on common parameters that are free from the individual fixed effects. This is done by an orthogonal projection that differences out the unknown distribution function of individual effects. Our method applies generally in likelihood models with continuous dependent variables where a condition of non-surjectivity holds. The resulting method-of-moments estimators are root-N consistent (for fixed T) and asymptotically normal, under regularity conditions that we spell out. Several examples and a small-scale simulation exercise complete the paper.

10.
This paper develops a generalization of the widely used difference-in-differences method for evaluating the effects of policy changes. We propose a model that allows the control and treatment groups to have different average benefits from the treatment. The assumptions of the proposed model are invariant to the scaling of the outcome. We provide conditions under which the model is nonparametrically identified and propose an estimator that can be applied using either repeated cross section or panel data. Our approach provides an estimate of the entire counterfactual distribution of outcomes that would have been experienced by the treatment group in the absence of the treatment and likewise for the untreated group in the presence of the treatment. Thus, it enables the evaluation of policy interventions according to criteria such as a mean–variance trade-off. We also propose methods for inference, showing that our estimator for the average treatment effect is root-N consistent and asymptotically normal. We consider extensions to allow for covariates, discrete dependent variables, and multiple groups and time periods.
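
One way to implement the counterfactual distribution described above (the changes-in-changes construction) is to map each treated unit's baseline outcome through the control group's period-0 empirical CDF and period-1 quantile function. The sketch below does this for the average effect on the treated; the y_gt naming convention is an assumption for illustration.

import numpy as np

def cic_att(y00, y01, y10, y11):
    """y_gt : outcomes for group g (0 = control, 1 = treated) in period t."""
    # Empirical CDF of control outcomes at baseline, evaluated at y10
    u = np.searchsorted(np.sort(y00), y10, side='right') / len(y00)
    u = np.clip(u, 1e-9, 1.0)
    # Counterfactual: corresponding quantiles of control outcomes in period 1
    y_cf = np.quantile(y01, u)
    return y11.mean() - y_cf.mean()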

11.
In a call center, staffing decisions must be made before the call arrival rate is known with certainty. Once the arrival rate becomes known, the call center may be over-staffed, in which case staff are being paid to be idle, or under-staffed, in which case many callers hang up in the face of long wait times. Firms that have chosen to keep their call center operations in-house can mitigate this problem by co-sourcing; that is, by sometimes outsourcing calls. Then, the required staffing N depends on how the firm chooses which calls to outsource in real time, after the arrival rate is realized and the call center operates as an M/M/N + M queue with an outsourcing option. Our objective is to find a joint policy for staffing and call outsourcing that minimizes the long-run average cost of this two-stage stochastic program when there is a linear staffing cost per unit time and linear costs associated with abandonments and outsourcing. We propose a policy that uses a square-root safety staffing rule, and outsources calls in accordance with a threshold rule that characterizes when the system is "too crowded." Analytically, we establish that our proposed policy is asymptotically optimal, as the mean arrival rate becomes large, when the level of uncertainty in the arrival rate is of the same order as the inherent system fluctuations in the number of waiting customers for a known arrival rate. Through an extensive numerical study, we establish that our policy is extremely robust. In particular, our policy performs remarkably well over a wide range of parameters, and far beyond where it is proved to be asymptotically optimal.
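
A hedged sketch of the two rules named above: square-root safety staffing, N = R + β√R for offered load R = λ/μ, and a threshold rule that outsources a call when the system is too crowded. The parameter values and function names are illustrative assumptions, not the paper's calibration.

import math

def staffing(arrival_rate, service_rate, beta=0.5):
    R = arrival_rate / service_rate            # offered load R = lambda / mu
    return math.ceil(R + beta * math.sqrt(R))  # square-root safety staffing

def outsource(num_in_system, N, threshold):
    # Threshold rule: route a new call to the outsourcer when more than
    # N + threshold calls are present.
    return num_in_system > N + threshold

N = staffing(arrival_rate=100.0, service_rate=1.0)  # here N = 105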

12.
In this paper we study identification and estimation of a correlated random coefficients (CRC) panel data model. The outcome of interest varies linearly with a vector of endogenous regressors. The coefficients on these regressors are heterogeneous across units and may covary with them. We consider the average partial effect (APE) of a small change in the regressor vector on the outcome (cf. Chamberlain (1984), Wooldridge (2005a)). Chamberlain (1992) calculated the semiparametric efficiency bound for the APE in our model and proposed a √N-consistent estimator. Nonsingularity of the APE's information bound, and hence the appropriateness of Chamberlain's (1992) estimator, requires (i) the time dimension of the panel (T) to strictly exceed the number of random coefficients (p) and (ii) strong conditions on the time series properties of the regressor vector. We demonstrate irregular identification of the APE when T = p and for more persistent regressor processes. Our approach exploits the different identifying content of the subpopulations of stayers (units whose regressor values change little across periods) and movers (units whose regressor values change substantially across periods). We propose a feasible estimator based on our identification result and characterize its large sample properties. While irregularity precludes our estimator from attaining parametric rates of convergence, its limiting distribution is normal and inference is straightforward to conduct. Standard software may be used to compute point estimates and standard errors. We use our methods to estimate the average elasticity of calorie consumption with respect to total outlay for a sample of poor Nicaraguan households.

13.
This paper establishes that instruments enable the identification of nonparametric regression models in the presence of measurement error by providing a closed form solution for the regression function in terms of Fourier transforms of conditional expectations of observable variables. For parametrically specified regression functions, we propose a root-n consistent and asymptotically normal estimator that takes the familiar form of a generalized method of moments estimator with a plugged-in nonparametric kernel density estimate. Both the identification and the estimation methodologies rely on Fourier analysis and on the theory of generalized functions. The finite-sample properties of the estimator are investigated through Monte Carlo simulations.

14.
This paper applies some general concepts in decision theory to a linear panel data model. A simple version of the model is an autoregression with a separate intercept for each unit in the cross section, with errors that are independent and identically distributed with a normal distribution. There is a parameter of interest γ and a nuisance parameter τ, an N×K matrix, where N is the cross-section sample size. The focus is on dealing with the incidental parameters problem created by a potentially high-dimensional nuisance parameter. We adopt a "fixed-effects" approach that seeks to protect against any sequence of incidental parameters. We transform τ to (δ, ρ, ω), where δ is a J×K matrix of coefficients from the least-squares projection of τ on an N×J matrix x of strictly exogenous variables, ρ is a K×K symmetric, positive semidefinite matrix obtained from the residual sums of squares and cross-products in the projection of τ on x, and ω is an (N − J)×K matrix whose columns are orthogonal and have unit length. The model is invariant under the actions of a group on the sample space and the parameter space, and we find a maximal invariant statistic. The distribution of the maximal invariant statistic does not depend upon ω. There is a unique invariant distribution for ω. We use this invariant distribution as a prior distribution to obtain an integrated likelihood function. It depends upon the observation only through the maximal invariant statistic. We use the maximal invariant statistic to construct a marginal likelihood function, so we can eliminate ω by integration with respect to the invariant prior distribution or by working with the marginal likelihood function. The two approaches coincide. Decision rules based on the invariant distribution for ω have a minimax property. Given a loss function that does not depend upon ω and given a prior distribution for (γ, δ, ρ), we show how to minimize the average, with respect to the prior distribution for (γ, δ, ρ), of the maximum risk, where the maximum is with respect to ω. There is a family of prior distributions for (δ, ρ) that leads to a simple closed form for the integrated likelihood function. This integrated likelihood function coincides with the likelihood function for a normal, correlated random-effects model. Under random sampling, the corresponding quasi maximum likelihood estimator is consistent for γ as N→∞, with a standard limiting distribution. The limit results do not require normality or homoskedasticity (conditional on x) assumptions.

15.
The aim of this study was to develop a reliable and valid measure of hurricane risk perception. The utility of such a measure lies in the need to understand how people make decisions when facing an evacuation order. This study included participants located within a 15-mile buffer of the Gulf and southeast Atlantic U.S. coasts. The study was executed as a three-wave panel with mail surveys in 2010–2012 (T0 baseline N = 629, 56%; T1 retention N = 427, 75%; T2 retention N = 350, 89%). An inventory based on the psychometric model was developed to discriminate cognitive and affective perceptions of hurricane risk, and included open-ended responses to solicit additional concepts in the T0 survey. Analysis of the T0 data modified the inventory and this revised item set was fielded at T1 and then replicated at T2. The resulting scales were assessed for validity against existing measures for perception of hurricane risk, dispositional optimism, and locus of control. A measure of evacuation expectation was also examined as a dependent variable, which was significantly predicted by the new measures. The resulting scale was found to be reliable, stable, and largely valid against the comparison measures. Despite limitations involving sample size, bias, and the strength of some reliabilities, it was concluded that the measure has potential to inform approaches to hurricane preparedness efforts and advance planning for evacuation messages, and that the measure has good promise to generalize to other contexts in natural hazards as well as other domains of risk.

16.
This paper considers large N and large T panel data models with unobservable multiple interactive effects, which are correlated with the regressors. In earnings studies, for example, workers' motivation, persistence, and diligence combine to influence earnings, in addition to the usual argument of innate ability. In macroeconomics, interactive effects represent unobservable common shocks and their heterogeneous impacts on cross sections. We consider identification, consistency, and the limiting distribution of the interactive-effects estimator. Under both large N and large T, the estimator is shown to be consistent, and consistency remains valid in the presence of correlations and heteroskedasticities of unknown form in both dimensions. We also derive the constrained estimator and its limiting distribution, imposing additivity coupled with interactive effects. The problem of testing additive versus interactive effects is also studied. In addition, we consider identification and estimation of models in the presence of a grand mean, time-invariant regressors, and common regressors. Given identification, the rate of convergence and limiting results continue to hold.
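
An iterative least-squares scheme in the spirit of the interactive-effects estimator can be sketched as follows: alternate between estimating the slope given the factors and re-extracting factors from the residuals by principal components. The data layout, tolerance, and iteration cap are assumptions for illustration, not the paper's implementation.

import numpy as np

def interactive_effects(Y, X, r, iters=100, tol=1e-8):
    """Y : (T, N) outcomes; X : (T, N, k) regressors; r : number of factors."""
    T, N, k = X.shape
    Xmat = X.reshape(T * N, k)
    beta = np.linalg.lstsq(Xmat, Y.reshape(-1), rcond=None)[0]  # initial slope
    for _ in range(iters):
        W = Y - X @ beta                       # (T, N) residuals given beta
        _, vec = np.linalg.eigh(W @ W.T)
        F = np.sqrt(T) * vec[:, ::-1][:, :r]   # factors from the residuals
        Lam = W.T @ F / T                      # loadings
        beta_new = np.linalg.lstsq(Xmat, (Y - F @ Lam.T).reshape(-1), rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new, F, Lam
        beta = beta_new
    return beta, F, Lam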

17.
This paper develops a regression limit theory for nonstationary panel data with large numbers of cross section (n) and time series (T) observations. The limit theory allows for both sequential limits, wherein T → ∞ followed by n → ∞, and joint limits where T, n → ∞ simultaneously; and the relationship between these multidimensional limits is explored. The panel structures considered allow for no time series cointegration, heterogeneous cointegration, homogeneous cointegration, and near-homogeneous cointegration. The paper explores the existence of long-run average relations between integrated panel vectors when there is no individual time series cointegration and when there is heterogeneous cointegration. These relations are parameterized in terms of the matrix regression coefficient of the long-run average covariance matrix. In the case of homogeneous and near-homogeneous cointegrating panels, a panel fully modified regression estimator is developed and studied. The limit theory enables us to test hypotheses about the long-run average parameters both within and between subgroups of the full population.

18.
Fixed effects estimators of panel models can be severely biased because of the well-known incidental parameters problem. We show that this bias can be reduced by using a panel jackknife or an analytical bias correction motivated by large T. We give bias corrections for averages over the fixed effects, as well as model parameters. We find large bias reductions from using these approaches in examples. We consider asymptotics where T grows with n, as an approximation to the properties of the estimators in econometric applications. We show that if T grows at the same rate as n, the fixed effects estimator is asymptotically biased, so that asymptotic confidence intervals are incorrect, but that they are correct for the panel jackknife. We show T growing faster than n^(1/3) suffices for correctness of the analytic correction, a property we also conjecture for the jackknife.
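
The panel jackknife mentioned above admits a compact sketch: recompute the fixed effects estimator leaving out one time period at a time and combine as T·θ̂ minus (T − 1) times the average of the leave-one-out estimates. The estimator argument below is any user-supplied function mapping a panel (time on axis 1) to a parameter estimate; this is an illustration, not the paper's code.

import numpy as np

def panel_jackknife(estimator, panel):
    """panel : array with time on axis 1; estimator : callable on a panel."""
    T = panel.shape[1]
    theta = estimator(panel)
    loo = [estimator(np.delete(panel, t, axis=1)) for t in range(T)]
    return T * theta - (T - 1) * np.mean(loo, axis=0)  # bias-corrected estimate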

19.
Uncertainty analyses and the reporting of their results can be misinterpreted when these analyses are conditional on a set of assumptions generally intended to bring some conservatism in the decisions. In this paper, two cases of conditional uncertainty analysis are examined. The first case includes studies that result, for instance, in a family of risk curves representing percentiles of the probability distribution of the future frequency of exceeding specified consequence levels conditional on a set of hypotheses. The second case involves analyses that result in an interval of outcomes estimated on the basis of conservative assumptions. Both types of results are difficult to use because they are sometimes misinterpreted as if they represented the output of a full uncertainty analysis. In the first case, the percentiles shown on each risk curve may be taken at face value when in reality (in marginal terms) they are lower if the chosen hypotheses are conservative. In the second case, the fact that some segments of the resulting interval are highly unlikely—or that some more benign segments outside the range of results are quite possible—does not appear. Also, these results are difficult to compare to those of analyses of other risks, possibly competing for the same risk management resources, and the decision criteria have to be adapted to the conservatism of the hypotheses. In this paper, the focus is on the first type (conditional risk curves) more than on the second and the discussion is illustrated by the case of the performance assessment of the Waste Isolation Pilot Plant in New Mexico. For policy-making purposes, however, the problems of interpretation, comparison, and use of the results are similar.

20.
Matching estimators for average treatment effects are widely used in evaluation research despite the fact that their large sample properties have not been established in many cases. The absence of formal results in this area may be partly due to the fact that standard asymptotic expansions do not apply to matching estimators with a fixed number of matches because such estimators are highly nonsmooth functionals of the data. In this article we develop new methods for analyzing the large sample properties of matching estimators and establish a number of new results. We focus on matching with replacement with a fixed number of matches. First, we show that matching estimators are not N^(1/2)-consistent in general and describe conditions under which matching estimators do attain N^(1/2)-consistency. Second, we show that even in settings where matching estimators are N^(1/2)-consistent, simple matching estimators with a fixed number of matches do not attain the semiparametric efficiency bound. Third, we provide a consistent estimator for the large sample variance that does not require consistent nonparametric estimation of unknown functions. Software for implementing these methods is available in Matlab, Stata, and R.
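
A minimal sketch of matching with replacement and a fixed number of matches M, as studied above: each unit's missing potential outcome is imputed by the mean outcome of its M nearest neighbors in the opposite treatment arm (Euclidean distance here, an assumption), and the imputed unit-level contrasts are averaged.

import numpy as np

def matching_ate(y, w, X, M=1):
    """y : (n,) outcomes; w : (n,) 0/1 treatment; X : (n, k) covariates."""
    y, w, X = np.asarray(y, float), np.asarray(w), np.asarray(X, float)
    imputed = np.empty_like(y)
    for i in range(len(y)):
        pool = np.flatnonzero(w != w[i])                  # opposite treatment arm
        d = np.linalg.norm(X[pool] - X[i], axis=1)
        nn = pool[np.argsort(d)[:M]]                      # M nearest matches
        imputed[i] = y[nn].mean()
    effect = np.where(w == 1, y - imputed, imputed - y)   # unit-level Y(1) - Y(0)
    return effect.mean()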
