Similar Literature
 20 similar records retrieved (search time: 93 ms)
1.
In this paper, we consider identification and estimation in panel data discrete choice models when the explanatory variable set includes strictly exogenous variables, lags of the endogenous dependent variable as well as unobservable individual‐specific effects. For the binary logit model with the dependent variable lagged only once, Chamberlain (1993) gave conditions under which the model is not identified. We present a stronger set of conditions under which the parameters of the model are identified. The identification result suggests estimators of the model, and we show that these are consistent and asymptotically normal, although their rate of convergence is slower than the inverse of the square root of the sample size. We also consider identification in the semiparametric case where the logit assumption is relaxed. We propose an estimator in the spirit of the conditional maximum score estimator (Manski (1987)) and we show that it is consistent. In addition, we discuss an extension of the identification result to multinomial discrete choice models, and to the case where the dependent variable is lagged twice. Finally, we present some Monte Carlo evidence on the small sample performance of the proposed estimators for the binary response model.
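The fixed‐effects logit machinery this identification strategy builds on can be illustrated in the simpler static T = 2 case, where Chamberlain's conditional likelihood eliminates the individual effects by conditioning on y_i1 + y_i2 = 1. A minimal numpy sketch on simulated data (the simple Newton solver and all names are illustrative, not the authors' estimator for the dynamic model):

```python
import numpy as np

rng = np.random.default_rng(0)
N, beta = 4000, 1.0
alpha = rng.normal(size=N)                      # individual effects
x = alpha[:, None] + rng.normal(size=(N, 2))    # regressors correlated with alpha
u = rng.logistic(size=(N, 2))
y = (beta * x + alpha[:, None] + u > 0).astype(float)

# Condition on y_i1 + y_i2 = 1: P(y_i2 = 1 | sum = 1) = Lambda((x_i2 - x_i1) * beta),
# which no longer involves alpha_i. Only the "switchers" carry information on beta.
sw = y.sum(axis=1) == 1
dx = (x[sw, 1] - x[sw, 0])[:, None]
d = y[sw, 1]

b = np.zeros(1)                                 # Newton-Raphson for the 1-D logit
for _ in range(30):
    p = 1 / (1 + np.exp(-dx @ b))
    grad = dx.T @ (d - p)
    hess = (dx * (p * (1 - p))[:, None]).T @ dx
    b = b + np.linalg.solve(hess, grad)

print(b[0])   # close to the true beta = 1.0
```

Despite the regressors being correlated with the individual effects, the conditional likelihood recovers beta without estimating any of the N effects.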

2.
In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for it. This paper studies nonparametric identifiability of type probabilities and type‐specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in applied work under different assumptions on the Markov property, stationarity, and type‐invariance in the transition process. Three elements emerge as the important determinants of identification: the time‐dimension of panel data, the number of values the covariates can take, and the heterogeneity of the response of different types to changes in the covariates. For example, in a simple case where the transition function is type‐invariant, a time‐dimension of T = 3 is sufficient for identification, provided that the number of values the covariates can take is no smaller than the number of types and that the changes in the covariates induce sufficiently heterogeneous variations in the choice probabilities across types. Identification is achieved even when state dependence is present if a model is stationary first‐order Markovian and the panel has a moderate time‐dimension (T ≥ 6).

3.
In nonlinear panel data models, the incidental parameter problem remains a challenge to econometricians. Available solutions are often based on ingenious, model‐specific methods. In this paper, we propose a systematic approach to construct moment restrictions on common parameters that are free from the individual fixed effects. This is done by an orthogonal projection that differences out the unknown distribution function of individual effects. Our method applies generally in likelihood models with continuous dependent variables where a condition of non‐surjectivity holds. The resulting method‐of‐moments estimators are root‐N consistent (for fixed T) and asymptotically normal, under regularity conditions that we spell out. Several examples and a small‐scale simulation exercise complete the paper.

4.
Abstract. Despite the apparent stability of the wage bargaining institutions in West Germany, aggregate union membership has been declining dramatically since the early 1990s. However, aggregate gross membership numbers do not distinguish by employment status, and it is impossible to disaggregate them sufficiently. This paper uses four waves of the German Socio‐economic Panel in 1985, 1989, 1993, and 1998 to perform a panel analysis of net union membership among employees. We estimate a correlated random‐effects probit model suggested by Chamberlain (Handbook of Econometrics, Vol. II, Amsterdam: Elsevier Science, 1984) to take proper account of individual‐specific effects. Our results suggest that at the individual level the propensity to be a union member has not changed considerably over time. Thus, the aggregate decline in membership is due to composition effects. We also use the estimates to predict net union density at the industry level based on the IAB employment subsample for the time period 1985–97.
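The correlated random‐effects device of Chamberlain and Mundlak — modelling the individual effect as a function of the time averages of the covariates and including those averages as regressors — can be sketched in a few lines. For brevity this toy version uses a logit link and a hand‐rolled Newton solver rather than the probit of the paper; the data and all names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 800, 4
x = rng.normal(size=(N, T))
xbar = x.mean(axis=1)
c = xbar + 0.5 * rng.normal(size=N)            # effect correlated with covariates
u = rng.logistic(size=(N, T))
y = (x + c[:, None] + u > 0).astype(float)

# Mundlak device: pooled logit of y on [1, x_it, xbar_i]; the time average
# soaks up the correlation between the effect and the covariates.
X = np.column_stack([np.ones(N * T), x.ravel(), np.repeat(xbar, T)])
Y = y.ravel()
b = np.zeros(3)
for _ in range(30):                            # Newton-Raphson for the logit MLE
    p = 1 / (1 + np.exp(-X @ b))
    b = b + np.linalg.solve((X * (p * (1 - p))[:, None]).T @ X, X.T @ (Y - p))

print(b)   # roughly [0, 1, 1]: slope on x_it and loading on xbar_i
```

Omitting the xbar column here would bias the slope on x, which is the point of the correlated random‐effects specification.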

5.
This paper considers a panel data model for predicting a binary outcome. The conditional probability of a positive response is obtained by evaluating a given distribution function (F) at a linear combination of the predictor variables. One of the predictor variables is unobserved. It is a random effect that varies across individuals but is constant over time. The semiparametric aspect is that the conditional distribution of the random effect, given the predictor variables, is unrestricted. This paper has two results. If the support of the observed predictor variables is bounded, then identification is possible only in the logistic case. Even if the support is unbounded, so that (from Manski (1987)) identification holds quite generally, the information bound is zero unless F is logistic. Hence consistent estimation at the standard √n rate is possible only in the logistic case.

6.
This paper presents a new approach to estimation and inference in panel data models with a general multifactor error structure. The unobserved factors and the individual‐specific errors are allowed to follow arbitrary stationary processes, and the number of unobserved factors need not be estimated. The basic idea is to filter the individual‐specific regressors by means of cross‐section averages such that asymptotically as the cross‐section dimension (N) tends to infinity, the differential effects of unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by least squares applied to auxiliary regressions where the observed regressors are augmented with cross‐sectional averages of the dependent variable and the individual‐specific regressors. A number of estimators (referred to as common correlated effects (CCE) estimators) are proposed and their asymptotic distributions are derived. The small sample properties of mean group and pooled CCE estimators are investigated by Monte Carlo experiments, showing that the CCE estimators have satisfactory small sample properties even under a substantial degree of heterogeneity and dynamics, and for relatively small values of N and T.
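The mechanics of the pooled CCE estimator — least squares on regressors augmented with cross‐section averages of the dependent variable and the regressors — are simple enough to sketch directly. A single‐factor simulation (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, beta = 200, 30, 1.5
f = rng.normal(size=T)                          # unobserved common factor
gx = 1 + rng.normal(size=N)                     # factor loadings in the regressor
ge = 1 + rng.normal(size=N)                     # factor loadings in the error
x = gx[:, None] * f[None, :] + rng.normal(size=(N, T))
y = beta * x + ge[:, None] * f[None, :] + 0.3 * rng.normal(size=(N, T))

# Pooled CCE: augment with cross-section averages of y and x (plus a constant).
# The averages proxy the factor, so its differential effect is filtered out.
Z = np.column_stack([x.ravel(),
                     np.tile(y.mean(axis=0), N),
                     np.tile(x.mean(axis=0), N),
                     np.ones(N * T)])
b_cce = np.linalg.lstsq(Z, y.ravel(), rcond=None)[0][0]
b_ols = np.linalg.lstsq(np.column_stack([x.ravel(), np.ones(N * T)]),
                        y.ravel(), rcond=None)[0][0]
print(b_cce, b_ols)    # CCE is close to 1.5; naive pooled OLS is biased upward
```

Because the loadings on the factor appear in both the regressor and the error, naive pooled OLS is inconsistent here, while the augmented regression is not.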

7.
In this paper we study identification and estimation of a correlated random coefficients (CRC) panel data model. The outcome of interest varies linearly with a vector of endogenous regressors. The coefficients on these regressors are heterogeneous across units and may covary with them. We consider the average partial effect (APE) of a small change in the regressor vector on the outcome (cf. Chamberlain (1984), Wooldridge (2005a)). Chamberlain (1992) calculated the semiparametric efficiency bound for the APE in our model and proposed a √N‐consistent estimator. Nonsingularity of the APE's information bound, and hence the appropriateness of Chamberlain's (1992) estimator, requires (i) the time dimension of the panel (T) to strictly exceed the number of random coefficients (p) and (ii) strong conditions on the time series properties of the regressor vector. We demonstrate irregular identification of the APE when T = p and for more persistent regressor processes. Our approach exploits the different identifying content of the subpopulations of stayers—or units whose regressor values change little across periods—and movers—or units whose regressor values change substantially across periods. We propose a feasible estimator based on our identification result and characterize its large sample properties. While irregularity precludes our estimator from attaining parametric rates of convergence, its limiting distribution is normal and inference is straightforward to conduct. Standard software may be used to compute point estimates and standard errors. We use our methods to estimate the average elasticity of calorie consumption with respect to total outlay for a sample of poor Nicaraguan households.

8.
Wavelet analysis is a new mathematical method developed as a unified field of science over the last decade or so. As a spatially adaptive analytic tool, wavelets are useful for capturing serial correlation where the spectrum has peaks or kinks, as can arise from persistent dependence, seasonality, and other kinds of periodicity. This paper proposes a new class of generally applicable wavelet‐based tests for serial correlation of unknown form in the estimated residuals of a panel regression model, where error components can be one‐way or two‐way, individual and time effects can be fixed or random, and regressors may contain lagged dependent variables or deterministic/stochastic trending variables. Our tests are applicable to unbalanced heterogeneous panel data. They have a convenient null limit N(0,1) distribution. No formulation of an alternative model is required, and our tests are consistent against serial correlation of unknown form even in the presence of substantial inhomogeneity in serial correlation across individuals. This is in contrast to existing serial correlation tests for panel models, which ignore inhomogeneity in serial correlation across individuals by assuming a common alternative, and thus have no power against the alternatives where the average of serial correlations among individuals is close to zero. We propose and justify a data‐driven method to choose the smoothing parameter—the finest scale in wavelet spectral estimation—making the tests completely operational in practice. The data‐driven finest scale automatically converges to zero under the null hypothesis of no serial correlation and diverges to infinity as the sample size increases under the alternative, ensuring the consistency of our tests. Simulation shows that our tests perform well in finite samples relative to some existing tests.

9.
In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback‐Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed.

10.
In weighted moment condition models, we show a subtle link between identification and estimability that limits the practical usefulness of estimators based on these models. In particular, if it is necessary for (point) identification that the weights take arbitrarily large values, then the parameter of interest, though point identified, cannot be estimated at the regular (parametric) rate and is said to be irregularly identified. This rate depends on relative tail conditions and can be as slow in some examples as n^{-1/4}. This nonstandard rate of convergence can lead to numerical instability and/or large standard errors. We examine two weighted model examples: (i) the binary response model under mean restriction introduced by Lewbel (1997) and further generalized to cover endogeneity and selection, where the estimator in this class of models is weighted by the density of a special regressor, and (ii) the treatment effect model under exogenous selection (Rosenbaum and Rubin (1983)), where the resulting estimator of the average treatment effect is one that is weighted by a variant of the propensity score. Without strong relative support conditions, these models, similar to well known “identified at infinity” models, lead to estimators that converge at slower than parametric rate, since essentially, to ensure point identification, one requires some variables to take values on sets with arbitrarily small probabilities, or thin sets. For the two models above, we derive some rates of convergence and propose that one conducts inference using rate adaptive procedures that are analogous to Andrews and Schafgans (1998) for the sample selection model.

11.
The conventional heteroskedasticity‐robust (HR) variance matrix estimator for cross‐sectional regression (with or without a degrees‐of‐freedom adjustment), applied to the fixed‐effects estimator for panel data with serially uncorrelated errors, is inconsistent if the number of time periods T is fixed (and greater than 2) as the number of entities n increases. We provide a bias‐adjusted HR estimator that is √(nT)‐consistent under any sequences (n, T) in which n and/or T increase to ∞. This estimator can be extended to handle serial correlation of fixed order.

12.
The study presents an integrated, rigorous statistical approach to define the likelihood of a threshold and point of departure (POD) based on dose–response data using a nested family of bent‐hyperbola models. The family includes four models: the full bent‐hyperbola model, which allows for transition between two linear regimes with various levels of smoothness; a bent‐hyperbola model reduced to a spline model, where the transition is fixed to a knot; a bent‐hyperbola model with a restricted negative asymptote slope of zero, named hockey‐stick with arc (HS‐Arc); and the spline model reduced further to a hockey‐stick type model (HS), where the first linear segment has a slope of zero. A likelihood‐ratio test is used to discriminate between the models and determine if the more flexible versions of the model provide better or significantly better fit than a hockey‐stick type model. The full bent‐hyperbola model can accommodate both threshold and nonthreshold behavior, can take on concave up and concave down shapes with various levels of curvature, can approximate the biochemically relevant Michaelis–Menten model, and can even be reduced to a straight line. Therefore, with the use of this model, the presence or absence of a threshold may even become irrelevant and the best fit of the full bent‐hyperbola model be used to characterize the dose–response behavior and risk levels, with no need for mode of action (MOA) information. Point of departure (POD), characterized by the exposure level at which some predetermined response is reached, can be defined using the full model or one of the better fitting reduced models.
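The simplest member of this family, the hockey‐stick (HS) model — zero slope below an unknown knot, linear above it — can be fit by profiling the knot over a grid and solving ordinary least squares at each candidate. A small numpy sketch on simulated dose–response data (purely illustrative, not the authors' likelihood machinery):

```python
import numpy as np

rng = np.random.default_rng(3)
dose = np.linspace(0, 10, 300)
knot_true, slope = 4.0, 1.2
resp = slope * np.maximum(dose - knot_true, 0) + 0.3 * rng.normal(size=dose.size)

def hs_sse(knot):
    # OLS of response on [1, max(0, dose - knot)] at a fixed candidate knot
    X = np.column_stack([np.ones(dose.size), np.maximum(dose - knot, 0)])
    coef, *_ = np.linalg.lstsq(X, resp, rcond=None)
    return ((resp - X @ coef) ** 2).sum()

grid = np.linspace(0.5, 9.5, 181)
knot_hat = grid[np.argmin([hs_sse(k) for k in grid])]
print(knot_hat)     # near the true knot at 4.0
```

With normal errors, minimizing the sum of squared errors over the grid is equivalent to profiling the likelihood in the knot, which is the quantity the article's likelihood‐ratio comparisons are built on.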

13.
This paper develops a generalization of the widely used difference‐in‐differences method for evaluating the effects of policy changes. We propose a model that allows the control and treatment groups to have different average benefits from the treatment. The assumptions of the proposed model are invariant to the scaling of the outcome. We provide conditions under which the model is nonparametrically identified and propose an estimator that can be applied using either repeated cross section or panel data. Our approach provides an estimate of the entire counterfactual distribution of outcomes that would have been experienced by the treatment group in the absence of the treatment and likewise for the untreated group in the presence of the treatment. Thus, it enables the evaluation of policy interventions according to criteria such as a mean–variance trade‐off. We also propose methods for inference, showing that our estimator for the average treatment effect is root‐N consistent and asymptotically normal. We consider extensions to allow for covariates, discrete dependent variables, and multiple groups and time periods.

14.
We consider the estimation of dynamic panel data models in the presence of incidental parameters in both dimensions: individual fixed‐effects and time fixed‐effects, as well as incidental parameters in the variances. We adopt the factor analytical approach by estimating the sample variance of individual effects rather than the effects themselves. In the presence of cross‐sectional heteroskedasticity, the factor method estimates the average of the cross‐sectional variances instead of the individual variances. The method thereby eliminates the incidental‐parameter problem in the means and in the variances over the cross‐sectional dimension. We further show that estimating the time effects and heteroskedasticities in the time dimension does not lead to the incidental‐parameter bias even when T and N are comparable. Moreover, efficient and robust estimation is obtained by jointly estimating heteroskedasticities.

15.
In this article, we discuss an outage‐forecasting model that we have developed. This model uses very few input variables to estimate hurricane‐induced outages prior to landfall with great predictive accuracy. We also show the results for a series of simpler models that use only publicly available data and can still estimate outages with reasonable accuracy. The intended users of these models are emergency response planners within power utilities and related government agencies. We developed our models based on the method of random forest, using data from a power distribution system serving two states in the Gulf Coast region of the United States. We also show that estimates of system reliability based on wind speed alone are not sufficient for adequately capturing the reliability of system components. We demonstrate that a multivariate approach can produce more accurate power outage predictions.

16.
Fixed effects estimators of panel models can be severely biased because of the well‐known incidental parameters problem. We show that this bias can be reduced by using a panel jackknife or an analytical bias correction motivated by large T. We give bias corrections for averages over the fixed effects, as well as model parameters. We find large bias reductions from using these approaches in examples. We consider asymptotics where T grows with n, as an approximation to the properties of the estimators in econometric applications. We show that if T grows at the same rate as n, the fixed effects estimator is asymptotically biased, so that asymptotic confidence intervals are incorrect, but that they are correct for the panel jackknife. We show that T growing faster than n^{1/3} suffices for correctness of the analytic correction, a property we also conjecture for the jackknife.
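The panel (leave‐one‐period‐out) jackknife can be illustrated on the textbook incidental‐parameters example: estimating an error variance after removing individual means, where the fixed‐T plug‐in estimator has expectation σ²(T−1)/T. A numpy sketch (names illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
N, T, sigma2 = 500, 5, 4.0
y = rng.normal(size=N)[:, None] + np.sqrt(sigma2) * rng.normal(size=(N, T))

def sigma2_hat(data):
    # plug-in variance after subtracting each individual's mean (biased for fixed T)
    return ((data - data.mean(axis=1, keepdims=True)) ** 2).mean()

theta = sigma2_hat(y)                                  # E[theta] = sigma2 * (T-1)/T
leave_one = [sigma2_hat(np.delete(y, t, axis=1)) for t in range(T)]
theta_jack = T * theta - (T - 1) * np.mean(leave_one)  # panel jackknife correction
print(theta, theta_jack)   # raw estimate near 3.2, corrected near 4.0
```

In this example the correction is exact: the leave‐one‐period estimates have expectation σ²(T−2)/(T−1), so T·σ²(T−1)/T − (T−1)·σ²(T−2)/(T−1) = σ². In the nonlinear models of the paper the jackknife removes only the leading O(1/T) bias term, which is the point of the large‐T asymptotics.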

17.
In spite of increased attention to quality and efforts to provide safe medical care, adverse events (AEs) are still frequent in clinical practice. Reports from various sources indicate that a substantial number of hospitalized patients suffer treatment‐caused injuries while in the hospital. While risk cannot be entirely eliminated from health‐care activities, an important goal is to develop effective and durable mitigation strategies to render the system “safer.” In order to do this, though, we must develop models that comprehensively and realistically characterize the risk. In the health‐care domain, this can be extremely challenging due to the wide variability in the way that health‐care processes and interventions are executed and also due to the dynamic nature of risk in this particular domain. In this study, we have developed a generic methodology for evaluating dynamic changes in AE risk in acute care hospitals as a function of organizational and nonorganizational factors, using a combination of modeling formalisms. First, a system dynamics (SD) framework is used to demonstrate how organizational‐level and policy‐level contributions to risk evolve over time, and how policies and decisions may affect the general system‐level contribution to AE risk. It also captures the feedback of organizational factors and decisions over time and the nonlinearities in these feedback effects. SD is a popular approach to understanding the behavior of complex social and economic systems. It is a simulation‐based, differential equation modeling tool that is widely used in situations where the formal model is complex and an analytical solution is very difficult to obtain. Second, a Bayesian belief network (BBN) framework is used to represent patient‐level factors and also physician‐level decisions and factors in the management of an individual patient, which contribute to the risk of hospital‐acquired AE. 
BBNs are networks of probabilities that can capture probabilistic relations between variables and contain historical information about their relationship, and are powerful tools for modeling causes and effects in many domains. The model is intended to support hospital decisions with regard to staffing, length of stay, and investments in safety, which evolve dynamically over time. The methodology has been applied in modeling the two types of common AEs: pressure ulcers and vascular‐catheter‐associated infection, and the models have been validated with eight years of clinical data and use of expert opinion.

18.
We provide a tractable characterization of the sharp identification region of the parameter vector θ in a broad class of incomplete econometric models. Models in this class have set‐valued predictions that yield a convex set of conditional or unconditional moments for the observable model variables. We refer to these as models with convex moment predictions. Examples include static, simultaneous‐move finite games of complete and incomplete information in the presence of multiple equilibria; best linear predictors with interval outcome and covariate data; and random utility models of multinomial choice in the presence of interval regressors data. Given a candidate value for θ, we establish that the convex set of moments yielded by the model predictions can be represented as the Aumann expectation of a properly defined random set. The sharp identification region of θ, denoted ΘI, can then be obtained as the set of minimizers of the distance from a properly specified vector of moments of random variables to this Aumann expectation. Algorithms in convex programming can be exploited to efficiently verify whether a candidate θ is in ΘI. We use examples analyzed in the literature to illustrate the gains in identification and computational tractability afforded by our method.

19.
Instrumental variables are widely used in applied econometrics to achieve identification and carry out estimation and inference in models that contain endogenous explanatory variables. In most applications, the function of interest (e.g., an Engel curve or demand function) is assumed to be known up to finitely many parameters (e.g., a linear model), and instrumental variables are used to identify and estimate these parameters. However, linear and other finite‐dimensional parametric models make strong assumptions about the population being modeled that are rarely if ever justified by economic theory or other a priori reasoning and can lead to seriously erroneous conclusions if they are incorrect. This paper explores what can be learned when the function of interest is identified through an instrumental variable but is not assumed to be known up to finitely many parameters. The paper explains the differences between parametric and nonparametric estimators that are important for applied research, describes an easily implemented nonparametric instrumental variables estimator, and presents empirical examples in which nonparametric methods lead to substantive conclusions that are quite different from those obtained using standard, parametric estimators.
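One easily implemented nonparametric IV strategy is series (sieve) two‐stage least squares: approximate the unknown function by a low‐order polynomial in x and instrument it with a polynomial in z. The sketch below uses an endogenous regressor and true structural function g(x) = x²; all choices (basis order, data‐generating process) are illustrative of the generic sieve idea, not the specific estimator of the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
z = rng.normal(size=n)
v = rng.normal(size=n)
x = z + v                                  # endogenous: x correlates with the error
e = v + rng.normal(size=n)
y = x**2 + e                               # true structural function g(x) = x^2

B = np.column_stack([x**k for k in range(4)])     # basis in x
P = np.column_stack([z**k for k in range(4)])     # instruments in z

# sieve 2SLS: project the basis on the instruments, then least squares on the fit
Bhat = P @ np.linalg.lstsq(P, B, rcond=None)[0]
b_iv = np.linalg.lstsq(Bhat, y, rcond=None)[0]
b_ols = np.linalg.lstsq(B, y, rcond=None)[0]
print(b_iv)    # near [0, 0, 1, 0]; b_ols instead picks up a spurious linear term
```

Here E[e | x] = x/2, so naive series OLS reports a nonexistent linear component, while the instrumented version recovers the quadratic shape.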

20.
Nonseparable panel models are important in a variety of economic settings, including discrete choice. This paper gives identification and estimation results for nonseparable models under time‐homogeneity conditions that are like “time is randomly assigned” or “time is an instrument.” Partial‐identification results for average and quantile effects are given for discrete regressors, under static or dynamic conditions, in fully nonparametric and in semiparametric models, with time effects. It is shown that the usual, linear, fixed‐effects estimator is not a consistent estimator of the identified average effect, and a consistent estimator is given. A simple estimator of identified quantile treatment effects is given, providing a solution to the important problem of estimating quantile treatment effects from panel data. Bounds for overall effects in static and dynamic models are given. The dynamic bounds provide a partial‐identification solution to the important problem of estimating the effect of state dependence in the presence of unobserved heterogeneity. The impact of T, the number of time periods, is shown by deriving shrinkage rates for the identified set as T grows. We also consider semiparametric, discrete‐choice models and find that semiparametric panel bounds can be much tighter than nonparametric bounds. Computationally convenient methods for semiparametric models are presented. We propose a novel inference method that applies in panel data and other settings and show that it produces uniformly valid confidence regions in large samples. We give empirical illustrations.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号