Similar Articles
20 similar articles found.
1.
This article considers a parametric framework for estimation and inference in cointegrated panel data models, based on a cointegrated VAR(p) model. A convenient two-step estimator is suggested: in the first step, all individual-specific parameters are estimated; in the second step, the long-run parameters are estimated from a pooled least-squares regression. The two-step estimator and related test procedures can easily be modified to account for contemporaneously correlated errors, a feature often encountered in multi-country studies. Monte Carlo simulations suggest that the two-step estimator and related test procedures outperform semiparametric alternatives such as the fully modified OLS approach, especially when the number of time periods is small.
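A heavily simplified numerical sketch of the two-step idea (not the paper's cointegrated VAR(p) estimator): step one sweeps out the individual-specific parameters — reduced here to unit intercepts removed by within-unit demeaning — and step two estimates the common long-run coefficient by pooled least squares. All names and the data-generating process are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T, beta = 10, 50, 1.5
alpha = rng.normal(0.0, 2.0, size=N)            # individual-specific intercepts
x = rng.normal(size=(N, T)).cumsum(axis=1)      # one I(1) regressor per unit
y = alpha[:, None] + beta * x + rng.normal(0.0, 0.5, size=(N, T))

# Step 1: estimate and remove the individual-specific parameters
# by demeaning each unit's series.
y_dm = y - y.mean(axis=1, keepdims=True)
x_dm = x - x.mean(axis=1, keepdims=True)

# Step 2: pooled least squares on the demeaned panel for the long-run slope.
beta_hat = (x_dm * y_dm).sum() / (x_dm ** 2).sum()
print(beta_hat)
```

Because the regressor is I(1), the pooled second-step estimate converges quickly to the true long-run coefficient even for moderate T.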

2.
David (1963) and Davidson & Farquhar (1976) contain extensive bibliographies of proposed approaches to problems involving paired comparisons. However, each of the discussed methods that is based on a hypothesis test relies heavily on the assumption that all paired comparisons are made independently. In this paper we eliminate this assumption and develop a new procedure based on an adaptation of a statistic considered by Kendall & Babington Smith (1940). We show that their original test procedure substantially underestimates the true significance level if the comparisons are not made independently. Our modification utilizes the approach developed in Costello & Wolfe (1985) for the problem of agreement between two groups of judges and relies heavily on computer-generated tables.
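The Kendall & Babington Smith statistic counts circular triads (intransitive cycles i beats j, j beats k, k beats i) in a complete set of paired comparisons. A minimal sketch of computing the count d from a binary preference matrix is below; the paper's dependence-robust modification is not reproduced here.

```python
import numpy as np
from math import comb

def circular_triads(pref):
    """Number of circular triads d in a complete paired-comparison matrix.

    pref[i, j] = 1 if object i was preferred to object j.
    Uses the identity d = C(n, 3) - sum_i C(a_i, 2), where a_i is the
    row score (number of wins) of object i.
    """
    pref = np.asarray(pref)
    n = pref.shape[0]
    scores = pref.sum(axis=1)
    return comb(n, 3) - sum(comb(int(a), 2) for a in scores)

# A perfectly transitive ranking has no circular triads;
# a rock-paper-scissors pattern has exactly one.
transitive = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])
cyclic = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
print(circular_triads(transitive))  # 0
print(circular_triads(cyclic))      # 1
```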

3.
Self-reported income information particularly suffers from an intentional coarsening of the data, called heaping or rounding. If heaping does not occur completely at random – which is usually the case – it has detrimental effects on the results of statistical analysis. Conventional statistical methods do not consider this kind of reporting bias and thus might produce invalid inference. We describe a novel statistical modeling approach that allows us to deal with self-reported heaped income data in an adequate and flexible way. We suggest modeling the heaping mechanism and the true underlying model in combination. To describe the true net income distribution, we use the zero-inflated log-normal distribution. Heaping points are identified from the data by a heuristic procedure that compares a hypothetical income distribution with the empirical one. To model heaping behavior, we employ two distinct models: either heaping probabilities are assumed piecewise constant, or they are taken to increase steadily with proximity to a heaping point. We validate our approach with several examples. To illustrate the capacity of the proposed method, we conduct a case study using income data from the German National Educational Panel Study.
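A toy sketch of the heaping-point idea: simulate log-normal incomes, heap a share of reports to the nearest 500, then flag grid values carrying visible probability mass — a continuous income distribution puts essentially zero mass on any exact value. The grid step, threshold, and data-generating process are all illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
true_income = rng.lognormal(mean=7.5, sigma=0.6, size=5000)
heaped = rng.random(5000) < 0.4          # 40% of respondents round their answer
reported = np.where(heaped, np.round(true_income / 500) * 500, true_income)

def detect_heaping_points(x, grid_step=500, min_share=0.01):
    """Flag grid multiples at which the empirical distribution has an
    atom: any spike of exact repeats at a round value signals heaping."""
    x = np.asarray(x)
    candidates = np.arange(grid_step, x.max() + grid_step, grid_step)
    return [int(g) for g in candidates if np.mean(x == g) > min_share]

hits = detect_heaping_points(reported)
print(hits)
```

The flagged values are exactly the round numbers the simulated respondents snapped to.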

4.
This paper studies estimation of a partially specified spatial panel data linear regression with random effects and spatially correlated error components. Under the assumption of an exogenous spatial weighting matrix and exogenous regressors, the unknown parameters are estimated by instrumental variable estimation. Under some sufficient conditions, the proposed estimator for the finite-dimensional parameters is shown to be root-N consistent and asymptotically normally distributed; the proposed estimator for the unknown function is shown to be consistent and asymptotically normally distributed as well, though at a rate slower than root-N. Consistent estimators for the asymptotic variance–covariance matrices of both the parametric and nonparametric components are provided. Monte Carlo simulation results suggest that the approach has practical value.

5.
Comparing treatment means from populations that follow independent normal distributions is a common statistical problem. Many frequentist solutions exist to test for significant differences among the treatment means. A different approach is to determine how likely it is that particular means are grouped as equal. We developed a fiducial framework for this situation. Our method provides fiducial probabilities that any number of means are equal, based on the data and the assumed normal distributions. The methodology is developed both for constant and for non-constant variance across populations. Simulations suggest that our method selects the correct grouping of means at a relatively high rate for small sample sizes, and asymptotic calculations demonstrate good properties. Additionally, we demonstrate the method's flexibility in calculating the fiducial probability for any number of equal means, by analyzing a simulated data set and a data set measuring the nitrogen levels of red clover plants inoculated with different treatments.

6.
Fuel coefficients of cement production—one for each process of production—are estimated by explicitly accounting for the multiple-kiln structure of cement plants. Unobserved heterogeneity across plants is found to be significant. Furthermore, since the estimable model is nonlinear in exogenous variables and parameters, a fixed-effects estimator for nonlinear regression is used to obtain the estimates.

7.
RECPAM is a methodology, implemented in a computer program of the same name, for constructing tree-structured models in biostatistics. In this work we present algorithms for pruning and amalgamating terminal nodes of a tree within the RECPAM approach. These algorithms construct sequences of nested models and calculate, at each step, the AIC of the corresponding model and correct significance levels according to Gabriel's theory of simultaneous test procedures. As an example, we present the analysis of data from clinical trials involving patients with small-cell carcinoma of the lung.

8.
This article suggests random- and fixed-effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi, Egger, and Pfaffermayr (2013, Econometric Reviews 32:650–685) by the inclusion of a spatial lag term. The estimation method utilizes the generalized moments approach suggested by Kapoor, Kelejian, and Prucha (2007, Journal of Econometrics 127(1):97–130) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011, Econometrics Journal 14:48–76) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.

9.
Generalized method of moments (GMM) has been an important innovation in econometrics. Its usefulness has motivated a search for good inference procedures based on GMM. This article presents a novel method of bootstrapping for GMM based on resampling from the empirical likelihood distribution that imposes the moment restrictions. We show that this approach yields a large-sample improvement and is efficient, and give examples. We also discuss the development of GMM and other recent work on improved inference.

10.
Statistical methods for analyzing spatial count data have often been based on random fields, so that a latent variable can be used to specify the spatial dependence. In this article, we introduce two frequentist approaches for estimating the parameters of model-based spatial count variables. The approaches are compared in a simulation study, and their performance is also evaluated on a real dataset. The simulation results show that the maximum likelihood estimator appears to have the better sampling properties.

11.
A hierarchical Bayesian approach to the problem of comparison of two means is considered. Hypothesis testing, ranking and selection, and estimation (after selection) are treated. Under the hypothesis that two means are different, it is desired to select the population which has the larger mean. Expressions for the ranking probability of each mean being the larger and the corresponding estimate of each mean are given. For certain priors, it is possible to express the quantities of interest in closed form. A simulation study has been done to compare mean square errors of a hierarchical Bayesian estimator and some of the existing estimators of the selected mean. The case of comparing two means in the presence of block effects has also been considered and an example is presented to illustrate the methodology.
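A minimal Monte Carlo sketch of the central quantity — the posterior probability that one mean is the larger — under independent conjugate normal priors with known sampling standard deviation. This is a simplification for illustration; the paper's full hierarchical prior and post-selection estimation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
y1 = rng.normal(10.0, 2.0, size=30)
y2 = rng.normal(8.0, 2.0, size=30)

def prob_larger(y1, y2, prior_mean=0.0, prior_sd=100.0, sigma=2.0,
                draws=100_000, rng=None):
    """Posterior P(mu1 > mu2) under independent N(prior_mean, prior_sd^2)
    priors and known sampling sd sigma, estimated by Monte Carlo."""
    if rng is None:
        rng = np.random.default_rng()
    def posterior_draws(y):
        n = len(y)
        prec = 1.0 / prior_sd**2 + n / sigma**2        # posterior precision
        mean = (prior_mean / prior_sd**2 + y.sum() / sigma**2) / prec
        return rng.normal(mean, prec**-0.5, size=draws)
    return np.mean(posterior_draws(y1) > posterior_draws(y2))

p = prob_larger(y1, y2, rng=rng)
print(p)
```

With a true gap of 2 and thirty observations per group, the posterior probability that the first mean is larger is close to one.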

12.
Summary. Multiple-hypothesis testing involves guarding against much more complicated errors than single-hypothesis testing. Whereas we typically control the type I error rate for a single-hypothesis test, a compound error rate is controlled for multiple-hypothesis tests. For example, controlling the false discovery rate (FDR) traditionally involves intricate sequential p-value rejection methods based on the observed data. Whereas a sequential p-value method fixes the error rate and estimates its corresponding rejection region, we propose the opposite approach—we fix the rejection region and then estimate its corresponding error rate. This new approach offers increased applicability, accuracy and power. We apply the methodology to both the positive false discovery rate (pFDR) and the FDR, and provide evidence for its benefits. It is shown that the pFDR is probably the quantity of interest over the FDR. Also discussed is the calculation of the q-value, the pFDR analogue of the p-value, which eliminates the need to set the error rate beforehand as is traditionally done. Some simple numerical examples are presented that show that this new approach can yield an increase of over eight times in power compared with the Benjamini–Hochberg FDR method.

13.
Summary.  This study examines the effects of the basic wage rate, standard working hours and unionization on paid overtime work in Britain by using individual level data from the New Earnings Survey over the period 1975–2001. For this purpose we estimate a panel data model. We show that to obtain consistent estimates it is important to allow for both the censoring of paid overtime hours at 0 and for correlations between the explanatory variables and unobserved individual-specific effects. The main empirical results are that a reduction in standard hours increases both the incidence of overtime and overtime hours, whereas an increase in the wage rate decreases the incidence of overtime but brings a small increase in overtime hours for those working overtime. For men the effects are stronger than for women. Union coverage is of minor empirical importance. The occupation and industry structure of employment has shifted from high- to lower-overtime jobs. Taken together, these economic variables can explain almost half of the changing incidence of overtime for men, and most of the change in overtime hours worked by women, but are less successful in explaining the changes in overtime hours worked by men or the incidence of overtime for women.

14.
Because of the regional imbalance of China's economic development, the coordinated development of regional economies is not only a long-standing national policy but has also become a focal topic of socioeconomic research. This paper combines spatial econometric models with panel data methods, taking China's provincial economic development as the research object and data source, to study the coordinated development of provincial economic growth and the factors influencing it. The results show that regional economic development in China exhibits significant spatial correlation, and that the factors affecting growth differ across stages of development; the factors with a persistently significant positive effect on economic development include human capital, population capital, the pace of marketization, and fiscal expenditure. On this basis, the paper offers several policy recommendations for promoting coordinated economic growth in China.

15.
The solution to a Liapunov matrix equation (LME) has been proposed to estimate the parameters of the demand equations derived from the Translog, the Almost Ideal Demand System and the Rotterdam demand models. When compared to traditional seemingly unrelated regression (SUR) methods, the LME approach saves both computer time and space, and it provides parameter estimates that are less likely to suffer from round-off error. However, the LME method is difficult to implement without the use of specially written computer programs and, unlike traditional SUR methods, it does not automatically provide an estimate of the covariance of the parameters. This paper solves these two problems: the first, by providing a simplified solution to the Liapunov matrix equation which can be written in a few lines of code in computer languages such as SAS PROC MATRIX/IML™ or GAUSS™; the second, by bootstrapping the parameter covariance matrix.
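The paper's own simplified closed form is not reproduced here, but the generic vectorized solution of a Liapunov matrix equation AX + XAᵀ = C does fit in a few lines of matrix-language code, which is the point the abstract makes (written here in Python/NumPy rather than SAS IML or GAUSS).

```python
import numpy as np

def solve_lyapunov(a, c):
    """Solve A X + X A^T = C by vectorization:
    (I kron A + A kron I) vec(X) = vec(C), with vec = column stacking.
    Solvable whenever no two eigenvalues of A sum to zero."""
    n = a.shape[0]
    eye = np.eye(n)
    k = np.kron(eye, a) + np.kron(a, eye)
    x = np.linalg.solve(k, c.flatten(order="F"))
    return x.reshape((n, n), order="F")

a = np.array([[-2.0, 0.5], [0.0, -1.0]])   # stable, so the equation is solvable
c = np.array([[1.0, 0.2], [0.2, 2.0]])
x = solve_lyapunov(a, c)
print(np.allclose(a @ x + x @ a.T, c))  # True
```

The Kronecker construction is O(n⁶) and so only practical for small systems; specialized solvers (or the paper's simplification) scale better.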

16.
A common strategy for avoiding information overload in multi-factor paired comparison experiments is to employ pairs of options which have different levels for only some of the factors in a study. For the practically important case where the factors fall into three groups such that all factors within a group have the same number of levels and where one is only interested in estimating the main effects, a comprehensive catalogue of D-optimal approximate designs is presented. These optimal designs use at most three different types of pairs and have a block diagonal information matrix.

17.
Summary.  For rare diseases the observed disease count may exhibit extra Poisson variability, particularly in areas with low or sparse populations. Hence the variance of the estimates of disease risk, the standardized mortality ratios, may be highly unstable. This overdispersion must be taken into account; otherwise, subsequent maps based on standardized mortality ratios will be misleading and, rather than displaying the true spatial pattern of disease risk, will merely highlight the most extreme values. Neighbouring areas tend to exhibit spatial correlation as they may share more similarities than non-neighbouring areas. The need to address overdispersion and spatial correlation has led to the proposal of Bayesian approaches for smoothing estimates of disease risk. We propose a new model for investigating the spatial variation of disease risks, in conjunction with an alternative specification for estimates of disease risk in geographical areas—the multivariate Poisson–gamma model. The main advantages of this new model lie in its simplicity and its ability to account naturally for overdispersion and spatial auto-correlation. Exact expressions for important quantities such as expectations, variances and covariances can be easily derived.
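A univariate empirical-Bayes sketch of Poisson–gamma smoothing of standardized mortality ratios (the multivariate spatial model of the abstract is not reproduced): with O_i ~ Poisson(theta_i E_i) and theta_i ~ Gamma(a, b), the posterior mean (O_i + a)/(E_i + b) shrinks each raw SMR toward the overall mean. The moment-based fit of a and b below is a crude illustrative choice.

```python
import numpy as np

def gamma_smooth_smr(obs, exp_counts):
    """Empirical-Bayes smoothing of SMRs: posterior mean (O_i + a)/(E_i + b),
    a weighted average of the raw SMR O_i/E_i and the prior mean a/b.
    a, b are fitted by a rough method of moments on the raw SMRs."""
    obs = np.asarray(obs, dtype=float)
    e = np.asarray(exp_counts, dtype=float)
    smr = obs / e
    m = smr.mean()
    # crude between-area variance estimate, removing average Poisson noise
    v = max(smr.var() - np.mean(smr / e), 1e-8)
    b = m / v
    a = m * b
    return (obs + a) / (e + b)

obs = np.array([0, 2, 5, 12, 30])          # observed counts per area
exp_counts = np.array([1.0, 2.0, 4.0, 10.0, 25.0])  # expected counts
smoothed = gamma_smooth_smr(obs, exp_counts)
print(smoothed)
```

Small areas (low E_i) are pulled strongly toward the overall mean, which is exactly the stabilization of unstable SMRs that the abstract motivates.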

18.
Summary.  Composite indicators are increasingly used for benchmarking countries' performances. Yet doubts are often raised about the robustness of the resulting countries' rankings and about the significance of the associated policy message. We propose the use of uncertainty analysis and sensitivity analysis to gain useful insights during the process of building composite indicators, including a contribution to the indicators' definition of quality and an assessment of the reliability of countries' rankings. We discuss to what extent the use of uncertainty and sensitivity analysis may increase transparency or make policy inference more defensible by applying the methodology to a known composite indicator: the United Nations' technology achievement index.
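A minimal uncertainty-analysis sketch in the spirit of the abstract: perturb the aggregation weights at random and record how often each country attains each rank. The countries, sub-indicator scores, and Dirichlet weight scheme are all hypothetical illustrations, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical normalized scores for 4 countries on 3 sub-indicators.
scores = np.array([
    [0.9, 0.8, 0.7],   # country A
    [0.8, 0.9, 0.6],   # country B
    [0.5, 0.6, 0.9],   # country C
    [0.3, 0.4, 0.5],   # country D (dominated on every sub-indicator)
])

def rank_uncertainty(scores, draws=2000, rng=rng):
    """Frequency with which each country attains each rank when the
    aggregation weights are drawn at random (rank 0 = best)."""
    n, k = scores.shape
    counts = np.zeros((n, n), dtype=int)
    for _ in range(draws):
        w = rng.dirichlet(np.ones(k))          # random weights summing to 1
        composite = scores @ w
        ranks = np.argsort(np.argsort(-composite))
        counts[np.arange(n), ranks] += 1
    return counts / draws

freq = rank_uncertainty(scores)
print(freq)
```

A rank whose frequency stays near 1 across weightings is robust; a country whose rank frequencies are spread out is exactly where the policy message is fragile.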

19.
20.
Concerning the task of integrating census and survey data from different sources as it is carried out by supranational statistical agencies, a formal metadata approach is investigated which supports data integration and table processing simultaneously. To this end, a metadata model is devised such that statistical query processing is accomplished by means of symbolic reasoning on machine-readable, operative metadata. As in databases, statistical queries are stated as formal expressions specifying declaratively what the intended output is; the operations necessary to retrieve appropriate available source data and to aggregate source data into the requested macrodata are derived mechanically. Using simple mathematics, this paper focuses particularly on the metadata model devised to harmonize semantically related data sources as well as the table model providing the principal data structure of the proposed system. Only an outline of the general design of a statistical information system based on the proposed metadata model is given and the state of development is summarized briefly.
