Similar Literature
20 similar documents found (search time: 46 ms)
1.
The “traditional” approach to the estimation of count-panel-data models with fixed effects is the conditional maximum likelihood estimator. The pseudo maximum likelihood principle can be used in these models to obtain orthogonality conditions that generate a robust estimator. This estimator is inconsistent, however, when the instruments are not strictly exogenous. This article proposes a generalized method of moments estimator for count-panel-data models with fixed effects, based on a transformation of the conditional mean specification, that is consistent even when the explanatory variables are predetermined. Two applications are discussed: the relationship between patents and research and development expenditures, and the explanation of technology transfer.

2.
This paper extends methods for nonlinear regression analysis that have been developed for the analysis of clustered data. Its novelty lies in its dual incorporation of random cluster effects and structural error in the measurement of the explanatory variables. Moments up to second order are assumed to have been specified for the latter, to enable a generalized estimating equations approach to be used for fitting and testing nonlinear models linking the response to these explanatory variables and random effects. Taylor expansion methods are used, and a difficulty with earlier approaches is overcome. Finally, we describe an application of this methodology to indicate how it can be used; it concerns the degree of association between hospital admissions for acute respiratory health problems and air pollution.

3.
The authors examine the equivalence between penalized least squares and state space smoothing using random vectors with infinite variance. They show that despite infinite variance, many time series techniques for estimation, significance testing, and diagnostics can be used. The Kalman filter can be used to fit penalized least squares models, computing the smoothed quantities and related values. Infinite variance is equivalent to differencing to stationarity, and to adding explanatory variables. The authors examine constructs called “smoothations” which they show to be fundamental in smoothing. Applications illustrate concepts and methods.
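A penalized least squares fit of this kind can be computed directly by solving the normal equations. A minimal sketch, assuming a second-difference (Whittaker-type) roughness penalty; the article's point is that the same fit can equivalently be obtained by Kalman smoothing of a state-space model, which is not shown here:

```python
import numpy as np

def penalized_ls_smoother(y, lam):
    """Minimize ||y - s||^2 + lam * ||D2 s||^2 over s, where D2 is the
    second-difference operator. The normal equations give the banded
    linear system (I + lam * D2.T @ D2) s = y."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)  # (n-2) x n second-difference operator
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

rng = np.random.default_rng(1)
t = np.arange(50, dtype=float)
trend = 1.0 + 0.5 * t
noisy = trend + rng.normal(0.0, 2.0, size=50)
smooth = penalized_ls_smoother(noisy, lam=100.0)
```

Because a linear trend has zero second differences, it is reproduced exactly by the smoother, while the roughness of a noisy series is reduced.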

4.
The Hausman test is widely used to examine the endogeneity of explanatory variables in a regression model. To derive a well-defined asymptotic distribution of the Hausman test, the correlation between the instrumental variables and the error term needs to converge to zero. However, considerable correlation between the instruments and the error may remain in finite samples, even though their correlation eventually converges to zero. This article investigates the potential problems that such “pseudo-exogenous” instruments may create. Through Monte Carlo simulations, we show that the performance of the Hausman test deteriorates when the instruments are asymptotically exogenous but endogenous in finite samples.
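The mechanics of the test can be sketched in a simplified univariate setting (no intercept, homoskedastic errors); the data-generating process and sample size below are illustrative assumptions, not the article's simulation design:

```python
import numpy as np

def hausman_stat(y, x, z):
    """Univariate Hausman statistic contrasting OLS with IV (no intercept).
    H = (b_iv - b_ols)^2 / (var_iv - var_ols) is asymptotically chi2(1)
    under exogeneity of x; a large value signals endogeneity."""
    n = len(y)
    b_ols = (x @ y) / (x @ x)
    b_iv = (z @ y) / (z @ x)
    s2 = np.sum((y - b_iv * x) ** 2) / n      # error variance from the IV fit
    var_iv = s2 * (z @ z) / (z @ x) ** 2
    var_ols = s2 / (x @ x)
    return (b_iv - b_ols) ** 2 / (var_iv - var_ols)

rng = np.random.default_rng(0)
n = 5000
u = rng.normal(size=n)                  # structural error
z = rng.normal(size=n)                  # valid, exogenous instrument
x = z + 0.7 * u + rng.normal(size=n)    # endogenous regressor
y = 1.0 * x + u
H = hausman_stat(y, x, z)
```

With a strongly endogenous regressor and a valid instrument, H far exceeds the 5% chi-squared(1) cutoff of 3.84; the article's concern is the intermediate case where the instrument's correlation with the error decays only slowly with n.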

5.
The subject of this paper is Bayesian inference about the fixed and random effects of a mixed-effects linear statistical model with two variance components. It is assumed that a priori the fixed effects have a noninformative distribution and that the reciprocals of the variance components are distributed independently (of each other and of the fixed effects) as gamma random variables. It is shown that techniques similar to those employed in a ridge analysis of a response surface can be used to construct a one-dimensional curve that contains all of the stationary points of the posterior density of the random effects. The “ridge analysis” (of the posterior density) can be useful (from a computational standpoint) in finding the number and the locations of the stationary points and can be very informative about various features of the posterior density. Depending on what is revealed by the ridge analysis, a multivariate normal or multivariate-t distribution that is centered at a posterior mode may provide a satisfactory approximation to the posterior distribution of the random effects (which is of the poly-t form).

6.
We propose a method for specifying the distribution of random effects included in a model for cluster data. The class of models we consider includes mixed models and frailty models whose random effects and explanatory variables are constant within clusters. The method is based on cluster residuals obtained by assuming that the random effects are equal between clusters. We exhibit an asymptotic relationship between the cluster residuals and variations of the random effects as the number of observations increases and the variance of the random effects decreases. The asymptotic relationship is used to specify the random-effects distribution. The method is applied to a frailty model and a model used to describe the spread of plant diseases.

7.
Comment     
We propose a sequential test for predictive ability for recursively assessing whether some economic variables have explanatory content for another variable. In the forecasting literature it is common to assess predictive ability by using “one-shot” tests at each estimation period. We show that this practice leads to size distortions, selects overfitted models and provides spurious evidence of in-sample predictive ability, and may lower the forecast accuracy of the model selected by the test. The usefulness of the proposed test is shown in well-known empirical applications to the real-time predictive content of money for output and the selection between linear and nonlinear models.
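The size distortion from repeated “one-shot” testing is easy to reproduce: under the null of no predictive content, running a 5% test at each estimation period and declaring success if any test ever rejects inflates the family-wise rejection rate well above the nominal level. A rough illustration; the data-generating process, recursive window, and testing frequency are assumptions for the demo, not the authors' setup:

```python
import numpy as np

def tstat(y, x):
    """t-statistic for the slope in a no-intercept regression of y on x."""
    b = (x @ y) / (x @ x)
    resid = y - b * x
    se = np.sqrt(resid @ resid / (len(y) - 1) / (x @ x))
    return b / se

rng = np.random.default_rng(42)
reps, T = 500, 200
any_reject = 0
for _ in range(reps):
    x = rng.normal(size=T)
    y = rng.normal(size=T)          # x has no predictive content for y
    # "one-shot" 5% tests on recursively expanding samples
    looks = [abs(tstat(y[:n], x[:n])) > 1.96 for n in range(50, T + 1, 10)]
    any_reject += any(looks)
fwe = any_reject / reps             # family-wise rate; nominal would be 0.05
```

Even though each individual look is a valid 5% test, the probability of at least one spurious rejection across the recursive sequence is substantially larger, which is the distortion the sequential test is designed to control.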

8.
In many areas of application, mixed linear models serve as a popular tool for analyzing highly complex data sets. For inference about fixed effects and variance components, likelihood-based methods such as (restricted) maximum likelihood estimators, (RE)ML, are commonly pursued. However, it is well known that these fully efficient estimators are extremely sensitive to small deviations from the hypothesized normality of random components, as well as to other violations of distributional assumptions. In this article, we propose a new class of robust-efficient estimators for inference in mixed linear models. The new three-step estimation procedure provides truncated generalized least squares and variance-components estimators with hard-rejection weights adaptively computed from the data. More specifically, our data re-weighting mechanism first detects and removes within-subject outliers, then identifies and discards between-subject outliers, and finally employs maximum likelihood procedures on the “clean” data. Theoretical efficiency and robustness properties of this approach are established.
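The flavour of a hard-rejection step can be conveyed with a one-step trimmed least squares sketch; this is a drastic simplification of the paper's three-step within/between scheme, and the 2.5-MAD cutoff is an illustrative choice:

```python
import numpy as np

def trimmed_ls(X, y, c=2.5):
    """Fit LS, hard-reject observations whose MAD-standardized residual
    exceeds c (weights are 0/1), then refit on the retained 'clean' data."""
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ beta0
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust sigma
    keep = np.abs(r - np.median(r)) <= c * scale
    beta1 = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    return beta1, keep

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
x[:5], y[:5] = 3.0, 2.0 * 3.0 + 30.0   # five gross outliers at x = 3
X = x[:, None]
b_plain = np.linalg.lstsq(X, y, rcond=None)[0]
b_trim, kept = trimmed_ls(X, y)
```

The plain LS slope is pulled far from the true value of 2 by a handful of contaminated points, while the trimmed refit rejects them and recovers the slope.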

9.
Nearest Neighbor Adjusted Best Linear Unbiased Prediction
Statistical inference for linear models has classically focused on either estimation or hypothesis testing of linear combinations of fixed effects or of variance components for random effects. A third form of inference—prediction of linear combinations of fixed and random effects—has important advantages over conventional estimators in many applications. None of these approaches will result in accurate inference if the data contain strong, unaccounted for local gradients, such as spatial trends in field-plot data. Nearest neighbor methods to adjust for such trends have been widely discussed in recent literature. So far, however, these methods have been developed exclusively for classical estimation and hypothesis testing. In this article a method of obtaining nearest neighbor adjusted (NNA) predictors, along the lines of “best linear unbiased prediction,” or BLUP, is developed. A simulation study comparing “NNABLUP” to conventional NNA methods and to non-NNA alternatives suggests considerable potential for improved efficiency.

10.

For non-negative integer-valued random variables, the concept of “damaged” observations was first introduced by Rao and Rubin [Rao, C. R., Rubin, H. (1964). On a characterization of the Poisson distribution. Sankhya 26:295–298] in 1964, in a paper concerning the characterization of the Poisson distribution. In 1965, Rao [Rao, C. R. (1965). On discrete distributions arising out of methods of ascertainment. Sankhya Ser. A. 27:311–324] discussed results related to inference for the parameters of a Poisson model when partial destruction of observations has occurred. A random variable is said to be damaged if it is unobservable, due to a damage mechanism which randomly reduces its magnitude. In subsequent years, considerable attention has been given to characterizations of distributions of such random variables that satisfy the “Rao–Rubin” condition. This article presents some inference aspects of a damaged Poisson distribution, under the reasonable assumption that, when an observation on the random variable is made, it is also possible to determine whether or not some damage has occurred. In other words, we do not know how many items are damaged, but we can identify the existence of damage. In particular, we illustrate the situation in which it is possible to identify the occurrence of damage although it is not possible to determine the number of damaged items. Maximum likelihood estimators of the underlying parameters and their asymptotic covariance matrix are obtained. Convergence of the parameter estimates to their asymptotic values is studied through Monte Carlo simulations.

11.
Beta regression models provide an adequate approach for modeling continuous outcomes limited to the interval (0, 1). This paper deals with an extension of beta regression models that allows for explanatory variables to be measured with error. The structural approach, in which the covariates measured with error are assumed to be random variables, is employed. Three estimation methods are presented, namely maximum likelihood, maximum pseudo-likelihood and regression calibration. Monte Carlo simulations are used to evaluate the performance of the proposed estimators and the naïve estimator. Also, a residual analysis for beta regression models with measurement errors is proposed. The results are illustrated in a real data set.
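Of the three estimators, regression calibration is the simplest to sketch: replace the error-prone covariate by its conditional expectation given the observed surrogate, then fit as usual. A linear-model illustration of the attenuation it corrects; this is not the paper's beta-regression implementation, and the variances are assumed known here, whereas in practice they must be estimated:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10000
x = rng.normal(size=n)                 # true covariate (unobserved)
w = x + rng.normal(size=n)             # surrogate, measurement-error variance 1
y = 1.0 * x + 0.5 * rng.normal(size=n)

# Naive fit on the surrogate: slope attenuated by var(x)/var(w) = 0.5
b_naive = (w @ y) / (w @ w)

# Regression calibration: with known variances, E[x | w] = 0.5 * w;
# regressing y on this imputed covariate undoes the attenuation
x_hat = 0.5 * w
b_cal = (x_hat @ y) / (x_hat @ x_hat)
```

With the true slope equal to 1, the naive estimate settles near 0.5 while the calibrated estimate recovers the target, which is the bias pattern the paper's simulations examine in the beta-regression setting.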

12.
Different approaches for estimation of change in biomass between two points in time by means of airborne laser scanner data were tested. Both field and laser data were collected at two occasions on 52 sample plots in a mountain forest in southeastern Norway. In the first approach, biomass change was estimated as the difference between predicted biomass for the two measurement occasions. Joint models for the biomass at both occasions were fitted using different height and density variables from laser data as explanatory variables. The second approach modelled the observed change directly, using the change in different variables extracted from the laser data as explanatory variables. In the third approach we modelled the relative change in biomass, with the explanatory variables also expressed as relative change between measurement occasions. In all approaches we allowed spline terms to be entered. We also investigated the aptness of models in which the residual variance was modelled as proportional to the area of the plot on which biomass was assessed. All alternative models were initially assessed by AIC. All models were also evaluated by estimating biomass change on the model development data. This evaluation indicated that the two direct approaches (approaches 2 and 3) were better than relying on modelling biomass at both occasions and taking change as the difference between biomass estimates. Approach 2 seemed to be slightly better than approach 3 based on assessments of bias in the evaluation.

13.
A variance components model, with the response variable depending on both fixed effects of explanatory variables and random components, is specified to model longitudinal circular data, in order to study the directional behaviour of small animals such as insects, crustaceans, and amphipods. Unknown parameters are estimated using a simulated maximum likelihood approach. Issues concerning log-likelihood variability and the related problems in the optimization algorithm are also addressed. The procedure is applied to the analysis of directional choices under full natural conditions of Talitrus saltator from the beaches of Castiglione della Pescaia (Italy).

14.
Bootstrap methods are proposed for estimating sampling distributions and associated statistics for regression parameters in multivariate survival data. We use an Independence Working Model (IWM) approach, fitting margins independently, to obtain consistent estimates of the parameters in the marginal models. Resampling procedures, however, are applied to an appropriate joint distribution to estimate covariance matrices, make bias corrections, and construct confidence intervals. The proposed methods allow for fixed or random explanatory variables, the latter case using extensions of existing resampling schemes (Loughin, 1995), and they permit the possibility of random censoring. An application is shown for the viral positivity time data previously analyzed by Wei, Lin, and Weissfeld (1989). A simulation study of small-sample properties shows that the proposed bootstrap procedures provide substantial improvements in variance estimation over the robust variance estimator commonly used with the IWM. This revised version was published online in July 2006 with corrections to the Cover Date.
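A pairs-type cluster bootstrap conveys the resampling idea: the model is fitted marginally, but whole clusters are resampled from the joint distribution to estimate the covariance of the estimates. A sketch with a least squares margin standing in for the marginal survival fits; the cluster structure and number of replicates are illustrative assumptions:

```python
import numpy as np

def cluster_bootstrap_se(X, y, cluster, B=300, seed=0):
    """Resample clusters with replacement, refit the marginal LS model on
    each resample, and report the SD of the refitted coefficients."""
    rng = np.random.default_rng(seed)
    ids = np.unique(cluster)
    fits = []
    for _ in range(B):
        take = rng.choice(ids, size=len(ids), replace=True)
        idx = np.concatenate([np.flatnonzero(cluster == g) for g in take])
        fits.append(np.linalg.lstsq(X[idx], y[idx], rcond=None)[0])
    return np.std(fits, axis=0)

rng = np.random.default_rng(11)
n, m = 400, 40                           # 40 clusters of 10 observations
cluster = np.repeat(np.arange(m), n // m)
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)         # independent errors: no real clustering
X = x[:, None]
se_boot = cluster_bootstrap_se(X, y, cluster)[0]
se_ols = np.sqrt(1.0 / (x @ x))          # analytic SE with known sigma = 1
```

With independent errors the cluster-bootstrap standard error agrees with the analytic one; under genuine within-cluster dependence it would exceed the naive estimate, which is the motivation for resampling the joint distribution.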

15.
We investigate longitudinal models having Brownian-motion covariance structure. We show that any such model can be viewed as arising from a related “timeless” classical linear model where sample sizes correspond to longitudinal observation times. This relationship is of practical impact when there are closed-form ANOVA tables for the related classical model. Such tables can be directly transformed into the analogous tables for the original longitudinal model. We in particular provide complete results for one-way fixed and random effects ANOVA on the drift parameter in Brownian motion, and illustrate its use in estimating heterogeneity in tumor growth rates.

16.
In this article, we develop a novel kernel density estimator for a sum of weighted averages from a single population, based on utilizing the well-defined kernel density estimator in conjunction with classic inversion theory. This idea is further developed into a kernel density estimator for the difference of weighted averages from two independent populations. The resulting estimator is “bootstrap-like” in terms of its properties with respect to the derivation of approximate confidence intervals via a “plug-in” approach. This approach is distinct from the bootstrap methodology in that it is analytically and computationally feasible to provide an exact estimate of the distribution function through direct calculation. Thus, our approach eliminates the error due to Monte Carlo resampling that arises in the simulation-based approaches oftentimes necessary to derive bootstrap-based confidence intervals for statistics involving weighted averages of i.i.d. random variables. We provide several examples and carry out a simulation study to show that our kernel density estimator performs better than the standard central limit theorem based approximation in terms of coverage probability.

17.
A novel fully Bayesian approach is proposed for modeling survival data with explanatory variables using the Piecewise Exponential Model (PEM) with a random time grid. We consider a class of correlated Gamma prior distributions for the failure rates. This prior specification is obtained via the dynamic generalized modeling approach jointly with a random time grid for the PEM. A product distribution is considered for modeling the prior uncertainty about the random time grid, making it possible to use the structure of the Product Partition Model (PPM) to handle the problem. A unifying notation for the construction of the likelihood function of the PEM, suitable for both static and dynamic modeling approaches, is considered. Procedures to evaluate the performance of the proposed model are provided. Two case studies are presented in order to exemplify the methodology. For comparison purposes, the data sets are also fitted using the dynamic model with a fixed time grid established in the literature. The results show the superiority of the proposed model.
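For a fixed grid, the PEM likelihood factorizes over intervals, so the maximum likelihood estimate of each piecewise-constant hazard is simply events divided by exposure in that interval; the article's contribution lies in putting correlated Gamma priors on these rates and letting the grid itself be random. A minimal fixed-grid sketch with an illustrative grid and simulated data:

```python
import numpy as np

def pem_mle(times, events, cuts):
    """Fixed-grid piecewise exponential model: the MLE of the hazard on
    interval j is (events in j) / (total time at risk in j)."""
    edges = np.concatenate(([0.0], np.asarray(cuts, float), [np.inf]))
    k = len(edges) - 1
    d = np.zeros(k)            # event counts per interval
    r = np.zeros(k)            # exposure (time at risk) per interval
    for t, e in zip(times, events):
        for j in range(k):
            lo, hi = edges[j], edges[j + 1]
            if t <= lo:
                break
            r[j] += min(t, hi) - lo      # time spent at risk in this interval
            if e and t <= hi:
                d[j] += 1                # the event falls in this interval
    return d / r

rng = np.random.default_rng(5)
times = rng.exponential(scale=2.0, size=20000)   # constant true hazard 0.5
events = np.ones(times.size, dtype=bool)         # no censoring in this demo
haz = pem_mle(times, events, cuts=[1.0, 2.0])
```

When the data are generated with a constant hazard, every interval's estimate recovers that rate, which is a useful sanity check before moving to the Bayesian random-grid version.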

18.
As researchers increasingly rely on linear mixed models to characterize longitudinal data, there is a need for improved techniques for selecting among this class of models, which requires specification of both fixed and random effects via a mean model and a variance-covariance structure. The process is further complicated when fixed and/or random effects are non-nested between models. This paper explores the development of a hypothesis test to compare non-nested linear mixed models based on extensions of the work begun by Sir David Cox. We assess the robustness of this approach for comparing models containing correlated measures of body fat for predicting longitudinal cardiometabolic risk.

19.
When data sets are multilevel (group nesting or repeated measures), different sources of variation must be identified. In the framework of unsupervised analyses, multilevel simultaneous component analysis (MSCA) has recently been proposed as the most satisfactory option for analyzing multilevel data. MSCA estimates submodels for the different levels in the data and thereby separates the “within”-subject and “between”-subject variation in the variables. Following the principles of MSCA and the strategy of decomposing the available data matrix into orthogonal blocks, and taking into account the between and within data structures, we generalize, from a multilevel perspective, multivariate models in which a matrix of response variables can be used to guide the projections (formed by responses predicted by explanatory variables or by a limited number of their combinations/composites) into choices of meaningful directions. To this end, the current paper proposes multilevel versions of the multivariate regression model and of dimensionality-reduction methods (used to predict responses with fewer linear composites of explanatory variables). The principal findings of the study are that minimization of the loss functions related to multivariate regression, principal-component regression, reduced-rank regression, and canonical-correlation regression is equivalent, under some constraints, to separate minimization of two loss functions corresponding to the between and within structures. The paper closes with a case study focusing on the relationships between mental health severity and the intensity of care in the Lombardy region mental health system.

20.
This article provides a strategy to identify the existence and direction of a causal effect in a generalized nonparametric and nonseparable model identified by instrumental variables. The causal effect concerns how the outcome depends on the endogenous treatment variable. The outcome variable, treatment variable, other explanatory variables, and the instrumental variable can be essentially any combination of continuous, discrete, or “other” variables. In particular, it is not necessary to have any continuous variables, none of the variables need to have large support, and the instrument can be binary even if the corresponding endogenous treatment variable and/or outcome is continuous. The outcome can be mismeasured or interval-measured, and the endogenous treatment variable need not even be observed. The identification results are constructive, and can be empirically implemented using standard estimation results.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)