Similar Articles
1.
This article analyzes a growing group of fixed T dynamic panel data estimators with a multifactor error structure. We use a unified notational approach to describe these estimators and discuss their properties in terms of deviations from an underlying set of basic assumptions. Furthermore, we consider the extendability of these estimators to practical situations that may frequently arise, such as their ability to accommodate unbalanced panels and common observed factors. Using a large-scale simulation exercise, we consider scenarios that remain largely unexplored in the literature, albeit being of great empirical relevance. In particular, we examine (i) the effect of the presence of weakly exogenous covariates, (ii) the effect of changing the magnitude of the correlation between the factor loadings of the dependent variable and those of the covariates, (iii) the impact of the number of moment conditions on bias and size for GMM estimators, and finally (iv) the effect of sample size. We apply each of these estimators to a crime application using a panel data set of local government authorities in New South Wales, Australia; we find that the results bear substantially different policy implications relative to those potentially derived from standard dynamic panel GMM estimators. Thus, our study may serve as a useful guide to practitioners who wish to allow for multiplicative sources of unobserved heterogeneity in their model.

2.
In this paper, we consider dynamic panel data models where the autoregressive parameter changes over time. We propose GMM and ML estimators for this model and conduct Monte Carlo simulations to compare the performance of the two estimators. The simulation results show that the ML estimator outperforms the GMM estimator.
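A minimal sketch of the setting described above, assuming a linear AR(1) panel y[i,t] = rho[t]·y[i,t-1] + alpha[i] + eps[i,t] with the autoregressive parameter drifting over time; all constants are illustrative, and the paper's GMM and ML estimators are not reproduced — only a naive pooled regression that shows why fixed-effect-robust estimators are needed.

```python
import numpy as np

# Simulate a dynamic panel with a time-varying autoregressive parameter.
# Assumed model: y[i, t] = rho[t] * y[i, t-1] + alpha[i] + eps[i, t]
rng = np.random.default_rng(0)
N, T = 500, 6                      # short T, large N panel
rho = np.linspace(0.8, 0.4, T)     # autoregressive parameter drifting over time
alpha = rng.normal(size=N)         # individual fixed effects
y = np.empty((N, T + 1))
y[:, 0] = rng.normal(size=N)       # initial condition
for t in range(1, T + 1):
    y[:, t] = rho[t - 1] * y[:, t - 1] + alpha + rng.normal(size=N)

# Naive pooled OLS of y_t on y_{t-1}, period by period, ignores alpha and is
# therefore biased; it only illustrates why estimators that are robust to the
# fixed effects (GMM, ML) are required for this model.
for t in range(1, T + 1):
    rho_hat = np.polyfit(y[:, t - 1], y[:, t], 1)[0]
    print(f"t={t}: true rho={rho[t - 1]:.2f}, pooled OLS={rho_hat:.2f}")
```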

3.
The existing studies on the spatial dynamic panel data model (SDPDM) mainly focus on the normality assumption of response variables and random effects. This assumption may be inappropriate in some applications. This paper proposes a new SDPDM by assuming that response variables and random effects follow the multivariate skew-normal distribution. A Markov chain Monte Carlo algorithm is developed to evaluate Bayesian estimates of unknown parameters and random effects in the skew-normal SDPDM by combining the Gibbs sampler and the Metropolis–Hastings algorithm. A Bayesian local influence analysis method is developed to simultaneously assess the effect of minor perturbations to the data, priors and sampling distributions. Simulation studies are conducted to investigate the finite-sample performance of the proposed methodologies. An example is used to illustrate the proposed methodologies.

4.
ABSTRACT

This paper proposes an exponential class of dynamic binary choice panel data models for the analysis of short T (time dimension), large N (cross-section dimension) panel data sets that allow for unobserved heterogeneity (fixed effects) to be arbitrarily correlated with the covariates. The paper derives moment conditions that are invariant to the fixed effects, which are then used to identify and estimate the parameters of the model. Accordingly, generalized method of moments (GMM) estimators are proposed that are consistent and asymptotically normally distributed at the root-N rate. We also study the conditional likelihood approach and show that under the exponential specification, it can identify the effect of state dependence but not the effects of other covariates. Monte Carlo experiments show satisfactory finite sample performance for the proposed estimators and investigate their robustness to misspecification.

5.
This paper proposes a generalized least squares estimator and a generalized method of moments estimator for dynamic panel data models with both individual-specific and time-specific effects. We also demonstrate that the common estimators ignoring the presence of time-specific effects are inconsistent when N→∞ but T is finite if the time-specific effects are indeed present. Monte Carlo studies are also conducted to investigate the finite sample properties of various estimators. It is found that the generalized least squares estimator has the smallest bias and root mean square error, and that its empirical size is close to the nominal size. It is also found that, even when time-specific effects are absent, there is hardly any efficiency loss in using the generalized least squares estimator that allows for them relative to the generalized least squares estimator allowing only for individual-specific effects.
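As a side illustration of the two sources of heterogeneity above, here is a minimal two-way within (demeaning) sketch on a static panel; it is not the paper's GLS or GMM estimator, and the dynamic case would additionally require instruments. All numbers are illustrative.

```python
import numpy as np

# Two-way within transformation: removes both individual-specific and
# time-specific effects before estimation (shown only to illustrate the two
# sources of heterogeneity the paper allows for).
rng = np.random.default_rng(1)
N, T = 200, 5
alpha = rng.normal(size=(N, 1))          # individual effects
lam = rng.normal(size=(1, T))            # time effects
x = rng.normal(size=(N, T))
y = 0.5 * x + alpha + lam + 0.3 * rng.normal(size=(N, T))

def demean_two_way(a):
    """Subtract row means and column means, then add back the grand mean."""
    return a - a.mean(axis=1, keepdims=True) - a.mean(axis=0, keepdims=True) + a.mean()

y_dd, x_dd = demean_two_way(y), demean_two_way(x)
beta_hat = (x_dd * y_dd).sum() / (x_dd ** 2).sum()
print(f"two-way within estimate of beta (true 0.5): {beta_hat:.3f}")
```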

6.
The statistical methods for analyzing spatial count data have often been based on random fields, so that a latent variable can be used to specify the spatial dependence. In this article, we introduce two frequentist approaches for estimating the parameters of model-based spatial count variables. The two approaches are compared through a simulation study, and their performance is also evaluated on a real dataset. The simulation results show that the maximum likelihood estimator has the better sampling properties.

7.
This article extends the spatial panel data regression with fixed effects to the case where the regression function is partially linear and some regressors may be endogenous or predetermined. Under the assumption that the spatial weighting matrix is strictly exogenous, we propose a sieve two stage least squares (S2SLS) regression. Under some sufficient conditions, we show that the proposed estimator for the finite dimensional parameter is root-N consistent and asymptotically normally distributed and that the proposed estimator for the unknown function is consistent and also asymptotically normally distributed but at a rate slower than root-N. Consistent estimators for the asymptotic variances of the proposed estimators are provided. A small scale simulation study is conducted, and the simulation results show that the proposed procedure has good finite sample performance.

8.
Conditionally autoregressive (CAR) models are often used to analyze a spatial process observed over a lattice or a set of irregular regions. The neighborhoods within a CAR model are generally formed deterministically using the inter-distances or boundaries between the regions. To accommodate directional and inherently anisotropic variation, a new class of spatial models is proposed that adaptively determines neighbors based on a bivariate kernel using the distances and angles between the centroids of the regions. The newly proposed model generalizes the usual CAR model in the sense of accounting for adaptively determined weights. Maximum likelihood estimators are derived, and simulation studies examine the sampling properties of the estimates under the new model, which is compared to the CAR model. Finally, the method is illustrated using a data set on elevated blood lead levels of children under 72 months of age observed in Virginia in 2000.
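A rough numpy sketch of the adaptive-weight idea: neighborhood weights built from a bivariate Gaussian kernel in centroid distance and orientation. The bandwidths h_d and h_a, the preferred orientation theta0, and the CAR parameters rho and tau2 are hypothetical tuning constants; the paper's exact kernel and likelihood are not reproduced.

```python
import numpy as np

# Adaptive neighbourhood weights from a bivariate kernel in the distance and
# angle between region centroids.
rng = np.random.default_rng(2)
n = 30
centroids = rng.uniform(0, 10, size=(n, 2))

dx = centroids[:, 0][:, None] - centroids[:, 0][None, :]
dy = centroids[:, 1][:, None] - centroids[:, 1][None, :]
dist = np.hypot(dx, dy)                       # pairwise centroid distances
orient = np.arctan2(dy, dx) % np.pi           # pairwise orientations (symmetric in i, j)

h_d, h_a, theta0 = 2.0, 0.6, 0.0              # hypothetical bandwidths and preferred direction
W = np.exp(-(dist / h_d) ** 2 - ((orient - theta0) / h_a) ** 2)
np.fill_diagonal(W, 0.0)                      # no region is its own neighbour

# These kernel weights would replace the deterministic 0/1 adjacency of a
# usual CAR model, e.g. in a precision matrix of the form tau^{-2} (D - rho W).
D = np.diag(W.sum(axis=1))
rho, tau2 = 0.9, 1.0
Q = (D - rho * W) / tau2
print("symmetric:", np.allclose(Q, Q.T), "min eigenvalue:", np.linalg.eigvalsh(Q).min())
```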

9.
Numerous estimation techniques for regression models have been proposed. These procedures differ in how sample information is used in the estimation procedure. The efficiency of ordinary least squares (OLS) estimators implicitly assumes normally distributed residuals and is very sensitive to departures from normality, particularly to "outliers" and thick-tailed distributions. Least absolute deviation (LAD) estimators are less sensitive to outliers and are optimal for Laplace random disturbances, but not for normal errors. This paper reports Monte Carlo comparisons of OLS, LAD, two robust estimators discussed by Huber, three partially adaptive estimators, Newey's generalized method of moments estimator, and an adaptive maximum likelihood estimator based on a normal kernel studied by Manski. This paper is the first to compare the relative performance of some adaptive robust estimators (partially adaptive and adaptive procedures) with some common nonadaptive robust estimators. The partially adaptive estimators are based on three flexible parametric distributions for the errors. These include the power exponential (Box–Tiao) and generalized t distributions, as well as a distribution for the errors which is not necessarily symmetric. The adaptive procedures are "fully iterative" rather than one-step estimators. The adaptive estimators have desirable large sample properties, but these properties do not necessarily carry over to the small sample case.

The Monte Carlo comparisons of the alternative estimators are based on four different specifications for the error distribution: a normal, a mixture of normals (or variance-contaminated normal), a bimodal mixture of normals, and a lognormal. Five hundred samples of size 50 are used. The adaptive and partially adaptive estimators perform very well relative to the other estimation procedures considered, and preliminary results suggest that in some important cases they can perform much better than OLS, with 50 to 80% reductions in standard errors.
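A small Monte Carlo in the spirit of this comparison, limited to OLS versus LAD under a variance-contaminated normal error distribution; the 10% contamination rate and scale factor of 5 are illustrative choices, not the paper's design.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, reps, beta_true = 50, 500, 1.0

def lad_slope(x, y):
    """LAD fit of y on (1, x) by direct minimization of the L1 loss."""
    obj = lambda b: np.abs(y - b[0] - b[1] * x).sum()
    return minimize(obj, x0=np.array([0.0, 0.0]), method="Nelder-Mead").x[1]

ols_est, lad_est = [], []
for _ in range(reps):
    x = rng.normal(size=n)
    contaminated = rng.random(n) < 0.1                        # 10% of errors from a wide normal
    e = rng.normal(scale=np.where(contaminated, 5.0, 1.0))
    y = beta_true * x + e
    ols_est.append(np.polyfit(x, y, 1)[0])                    # OLS slope
    lad_est.append(lad_slope(x, y))                           # LAD slope

print("OLS sampling std:", np.std(ols_est))
print("LAD sampling std:", np.std(lad_est))
```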

10.

11.
12.
ABSTRACT

This paper considers panel data models with grouped fixed effects whose group membership is unknown. A two-stage estimation (TSE) procedure is developed to improve the properties of the grouped fixed-effects (GFE) estimators of the common parameters when the time span is small. First, the common parameters are estimated; then the optimal group assignment and the estimators of the group effects are obtained by the K-means algorithm. Monte Carlo results reveal that the TSE estimator has a much smaller bias than the GFE estimator when the differences between group effects are moderately small or when the variance of the idiosyncratic error is high.
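A minimal sketch of the two-stage logic on simulated data, assuming a linear panel with a common slope and three group-level intercepts; the first stage here is a simple within estimator, not necessarily the paper's, and the grouping step is a hand-coded one-dimensional K-means (Lloyd) iteration.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, K, beta_true = 300, 5, 3, 1.0
groups_true = rng.integers(K, size=N)
group_effects = np.array([-2.0, 0.0, 2.0])

x = rng.normal(size=(N, T))
y = beta_true * x + group_effects[groups_true][:, None] + 0.5 * rng.normal(size=(N, T))

# Stage 1: within (fixed-effects) estimate of the common parameter.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_hat = (xd * yd).sum() / (xd ** 2).sum()

# Stage 2: unit-level intercepts, then one-dimensional K-means.
a_hat = (y - beta_hat * x).mean(axis=1)
centers = np.quantile(a_hat, np.linspace(0.1, 0.9, K))   # simple initialization
for _ in range(50):
    labels = np.argmin(np.abs(a_hat[:, None] - centers[None, :]), axis=1)
    centers = np.array([a_hat[labels == k].mean() for k in range(K)])

print("beta_hat:", round(beta_hat, 3), "estimated group effects:", np.round(np.sort(centers), 2))
```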

13.
Myoung Jin Jang. Statistics, 2013, 47(1): 101–120
We consider a panel model with spatial autocorrelation and heterogeneity across time. Various Lagrange multiplier and likelihood ratio test statistics are developed for testing time effects and spatial effects jointly, marginally or conditionally. Limiting null distributions of the tests are derived. Size and power performances of the proposed tests are compared in a Monte Carlo experiment.

14.
Estimation and prediction in generalized linear mixed models are often hampered by intractable high dimensional integrals. This paper provides a framework to solve this intractability, using asymptotic expansions when the number of random effects is large. To that end, we first derive a modified Laplace approximation when the number of random effects is increasing at a lower rate than the sample size. Second, we propose an approximate likelihood method based on the asymptotic expansion of the log-likelihood using the modified Laplace approximation which is maximized using a quasi-Newton algorithm. Finally, we define the second order plug-in predictive density based on a similar expansion to the plug-in predictive density and show that it is a normal density. Our simulations show that in comparison to other approximations, our method has better performance. Our methods are readily applied to non-Gaussian spatial data and as an example, the analysis of the rhizoctonia root rot data is presented.
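A toy version of the integrals involved, assuming a Poisson model with one random intercept per group and a plain first-order Laplace approximation; the paper's modified, higher-order expansion is not reproduced, and the log-likelihood below is computed up to an additive constant in y.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
groups, per_group, beta0, sigma_b = 40, 8, 0.5, 0.7
b_true = rng.normal(scale=sigma_b, size=groups)
y = rng.poisson(np.exp(beta0 + b_true)[:, None] * np.ones((groups, per_group)))

def laplace_loglik(beta0, sigma_b, y):
    """First-order Laplace approximation to the marginal log-likelihood."""
    total = 0.0
    for yj in y:                                   # integrate out b_j group by group
        def neg_h(b):                              # -(log p(y_j | b) + log-prior kernel of b)
            lam = np.exp(beta0 + b)
            return -(np.sum(yj * (beta0 + b) - lam) - 0.5 * b ** 2 / sigma_b ** 2)
        res = minimize_scalar(neg_h)               # mode of the integrand
        b_hat = res.x
        hess = yj.size * np.exp(beta0 + b_hat) + 1.0 / sigma_b ** 2   # -h''(b_hat)
        total += (-res.fun + 0.5 * np.log(2 * np.pi / hess)
                  - 0.5 * np.log(2 * np.pi * sigma_b ** 2))           # prior normalizing constant
    return total

print("Laplace log-likelihood at the true parameters:", round(laplace_loglik(beta0, sigma_b, y), 2))
```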

15.
Due to the escalating growth of big data sets in recent years, new Bayesian Markov chain Monte Carlo (MCMC) parallel computing methods have been developed. These methods partition large data sets by observations into subsets. However, for Bayesian nested hierarchical models, typically only a few parameters are common for the full data set, with most parameters being group specific. Thus, parallel Bayesian MCMC methods that take into account the structure of the model and split the full data set by groups rather than by observations are a more natural approach for analysis. Here, we adapt and extend a recently introduced two-stage Bayesian hierarchical modeling approach, and we partition complete data sets by groups. In stage 1, the group-specific parameters are estimated independently in parallel. The stage 1 posteriors are used as proposal distributions in stage 2, where the target distribution is the full model. Using three-level and four-level models, we show in both simulation and real data studies that results of our method agree closely with the full data analysis, with greatly increased MCMC efficiency and greatly reduced computation times. The advantages of our method versus existing parallel MCMC computing methods are also described.

16.
This paper studies estimation of a partially specified spatial panel data linear regression with random effects and spatially correlated error components. Under the assumption of an exogenous spatial weighting matrix and exogenous regressors, the unknown parameter is estimated by applying instrumental variable estimation. Under some sufficient conditions, the proposed estimator for the finite dimensional parameters is shown to be root-N consistent and asymptotically normally distributed; the proposed estimator for the unknown function is shown to be consistent and asymptotically normally distributed as well, though at a rate slower than root-N. Consistent estimators for the asymptotic variance–covariance matrices of both the parametric and unknown components are provided. The Monte Carlo simulation results suggest that the approach has some practical value.

17.
The task of estimating an integral by Monte Carlo methods is formulated as a statistical model using simulated observations as data. The difficulty in this exercise is that we ordinarily have at our disposal all of the information required to compute integrals exactly by calculus or numerical integration, but we choose to ignore some of the information for simplicity or computational feasibility. Our proposal is to use a semiparametric statistical model that makes explicit what information is ignored and what information is retained. The parameter space in this model is a set of measures on the sample space, which is ordinarily an infinite dimensional object. Nonetheless, from simulated data the base-line measure can be estimated by maximum likelihood, and the required integrals computed by a simple formula previously derived by Vardi and by Lindsay in a closely related model for biased sampling. The same formula was also suggested by Geyer and by Meng and Wong using entirely different arguments. By contrast with Geyer's retrospective likelihood, a correct estimate of simulation error is available directly from the Fisher information. The principal advantage of the semiparametric model is that variance reduction techniques are associated with submodels in which the maximum likelihood estimator in the submodel may have substantially smaller variance than the traditional estimator. The method is applicable to Markov chain and more general Monte Carlo sampling schemes with multiple samplers.
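For orientation only, a plain importance-sampling sketch of the basic integration task the article reformulates; the semiparametric maximum-likelihood estimator (the Vardi/Lindsay formula) is not implemented here, and the integrand and proposal are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
h = lambda x: np.exp(-x ** 2)           # integrand of interest
n = 10_000

# Simple Monte Carlo estimate of I = E_f[h(X)] under f = N(0, 1).
x = rng.normal(size=n)
simple = h(x).mean()

# Importance sampling with proposal g = N(0, 0.8^2), reweighting draws by f/g.
xg = rng.normal(scale=0.8, size=n)
w = stats.norm.pdf(xg) / stats.norm.pdf(xg, scale=0.8)
importance = np.mean(w * h(xg))

exact = 1 / np.sqrt(3)                  # E[exp(-X^2)] for X ~ N(0,1) is 3^{-1/2}
print(f"exact {exact:.4f}  simple MC {simple:.4f}  importance sampling {importance:.4f}")
```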

18.
Maximum likelihood (ML) estimation with spatial econometric models is a long-standing problem that finds application in several areas of economic importance. The problem is particularly challenging in the presence of missing data, since there is an implied dependence between all units, irrespective of whether they are observed or not. Out of the several approaches adopted for ML estimation in this context, that of LeSage and Pace [Models for spatially dependent missing data. J Real Estate Financ Econ. 2004;29(2):233–254] stands out as one of the most commonly used with spatial econometric models due to its ability to scale with the number of units. Here, we review their algorithm, and consider several similar alternatives that are also suitable for large datasets. We compare the methods through an extensive empirical study and conclude that, while the approximate approaches are suitable for large sampling ratios, for small sampling ratios the only reliable algorithms are those that yield exact ML or restricted ML estimates.

19.
HIV viral dynamic models have received much attention in the literature. Long-term viral dynamics may be modelled by semiparametric nonlinear mixed-effects models, which incorporate large variation between subjects and autocorrelation within subjects and are flexible in modelling complex viral load trajectories. Time-dependent covariates may be introduced in the dynamic models to partially explain the between-individual variation. In the presence of measurement errors and missing data in time-dependent covariates, we show that the commonly used two-step method may give approximately unbiased estimates but may under-estimate standard errors. We propose a two-stage bootstrap method to adjust the standard errors of the two-step method, as well as a likelihood method.

20.
The generalized estimating equation (GEE) approach to the analysis of longitudinal data has many attractive robustness properties and can provide a 'population average' characterization of interest, for example, to clinicians who have to treat patients on the basis of their observed characteristics. However, these methods have limitations which restrict their usefulness in both the social and the medical sciences. This conclusion is based on the premise that the main motivations for longitudinal analysis are insight into microlevel dynamics and improved control for omitted or unmeasured variables. We claim that to address these issues a properly formulated random-effects model is required. In addition to a theoretical assessment of some of the issues, we illustrate this by reanalysing data on polyp counts. In this example, the covariates include a base-line outcome, and the effectiveness of the treatment seems to vary by base-line. We compare the random-effects approach with the GEE approach and conclude that the GEE approach is inappropriate for assessing the treatment effects for these data.
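A small numerical illustration of the population-average versus subject-specific distinction discussed above, using a logistic random-intercept model: averaging over the random effect attenuates the slope, which is why GEE-type and random-effects coefficients answer different questions. The values are illustrative and unrelated to the polyp-count data.

```python
import numpy as np

expit = lambda z: 1.0 / (1.0 + np.exp(-z))
rng = np.random.default_rng(7)

beta_conditional = 1.0                       # subject-specific (conditional) slope
sigma_b = 2.0                                # random-intercept standard deviation
b = rng.normal(scale=sigma_b, size=200_000)  # draws of the random effect

# Marginal (population-average) response probability at each x, obtained by
# averaging the conditional probability over the random intercept.
x_grid = np.linspace(-2, 2, 5)
marginal_p = np.array([expit(beta_conditional * x + b).mean() for x in x_grid])

# The implied slope of the marginal log-odds is attenuated relative to the
# conditional slope beta_conditional.
marginal_logit = np.log(marginal_p / (1 - marginal_p))
slope = np.polyfit(x_grid, marginal_logit, 1)[0]
print(f"conditional slope: {beta_conditional:.2f}, implied population-average slope: {slope:.2f}")
```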
