Similar Documents
20 similar documents found.
1.
Using grouped rural income data for Xinjiang from 2000-2009, this paper first computes the FGT poverty indices for rural Xinjiang year by year and analyses how the severity of rural poverty has changed; it then decomposes the FGT indices to examine the effects of economic growth, income distribution, and movements of the poverty line on rural poverty in Xinjiang; finally, it simulates how sensitive each poverty indicator is to changes in the poverty line. The results show that rural poverty in Xinjiang has evolved in distinct stages since 2000, broadly passing through a sharp decline, a period of little change, and a modest rise. Decomposing the FGT indices shows that the influencing factors differ across periods: the poverty-reducing effect of economic growth is the most pronounced; improvement or deterioration in income distribution produced differing poverty-reduction effects; and the most visible effect of raising the poverty line was to widen the extent of rural poverty in Xinjiang. The simulations further show that the poverty indicators are highly sensitive even to small upward adjustments of the poverty line. Rural poverty in Xinjiang is a complex problem that has recently shown signs of worsening, and the situation of the poorest deserves particular attention; policy recommendations are offered on this basis.
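As a minimal illustration of the FGT (Foster-Greer-Thorbecke) index family this study is built on, the following sketch uses made-up incomes and a made-up poverty line; α = 0 gives the headcount ratio, α = 1 the poverty gap, α = 2 the squared gap, which weights the poorest most heavily:

```python
# FGT poverty index: P_alpha is the mean over the whole population of
# ((z - y)/z)^alpha for the poor (income y below the poverty line z),
# and zero for the non-poor.
def fgt(incomes, z, alpha):
    n = len(incomes)
    return sum(((z - y) / z) ** alpha for y in incomes if y < z) / n

incomes = [400, 800, 1200, 2000, 5000]   # illustrative incomes
z = 1000                                  # illustrative poverty line
headcount = fgt(incomes, z, 0)            # share of people below z
gap = fgt(incomes, z, 1)                  # average normalized shortfall
```

Raising z directly enlarges the set `y < z`, which is the "widening of the poverty extent" effect the abstract describes.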

2.
This paper investigates the extent to which childbearing among couples in Europe affects their level of economic well-being. We do so by implementing a propensity score matching procedure in combination with a difference-in-difference estimator. Using data from the European Community Household Panel Survey (ECHP), we compare how the impact of childbearing on well-being varies among countries. We use several measures of well-being, including poverty status and various deprivation indices that take into account the multidimensionality of individuals' assessment of well-being. Not unexpectedly, we find that childbearing tends to worsen the economic well-being of households, but with important differences in magnitude across countries. In the Scandinavian countries the effect is small and rarely significant; it is strong in the UK and also significant in the Mediterranean countries. Depending on the measure of well-being, we find important differences among countries that are similar in terms of welfare provision.  
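The matching-plus-difference-in-difference idea can be sketched as follows. This is not the authors' implementation: the propensity scores, outcomes, and one-to-one nearest-neighbour matching rule are all illustrative stand-ins (real analyses would estimate the scores from ECHP covariates):

```python
# Each treated household (had a child) is matched to the control with
# the nearest propensity score; the effect estimate is the treated
# pre/post change minus the matched control's pre/post change,
# averaged over treated households.
def did_psm(treated, controls):
    # treated/controls: lists of (propensity_score, y_pre, y_post)
    effects = []
    for score, y0, y1 in treated:
        m = min(controls, key=lambda c: abs(c[0] - score))  # nearest neighbour
        effects.append((y1 - y0) - (m[2] - m[1]))
    return sum(effects) / len(effects)

treated = [(0.6, 100, 90), (0.4, 80, 78)]      # illustrative households
controls = [(0.58, 100, 102), (0.41, 80, 83)]
att = did_psm(treated, controls)               # effect on the treated
```

Differencing twice (within household over time, then against the matched control) removes both time-constant household traits and the common time trend, which is why the combination is attractive here.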

3.
Poverty can be seen as a multidimensional phenomenon described by a set of indicators, the poverty components. A one-dimensional measure of poverty serving as a ranking index can be obtained by combining the component indicators via aggregation techniques; such ranking indices are intended to support political decisions. This paper proposes an alternative to aggregation based on simple concepts of partial order theory and illustrates the pros and cons of this approach, taking as a case study a multidimensional measure of poverty comprising three components – absolute poverty, relative poverty and income – computed for the regions of the European Union. The analysis highlights conflicts across the components: some regions are detected as controversial, showing, for example, low levels of relative poverty together with high levels of monetary poverty. The partial order approach makes it possible to point to the regions with the most severe data conflicts and to the component indicators that cause these conflicts.  
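A minimal sketch of the partial-order idea, with hypothetical regions and component values (lower = less poverty): two regions are comparable only if one is no worse on every component; otherwise the components conflict and the pair is incomparable, which is how "controversial" regions surface without any aggregation:

```python
# Componentwise comparison of regions on several poverty indicators.
def comparable(a, b):
    le = all(x <= y for x, y in zip(a, b))
    ge = all(x >= y for x, y in zip(a, b))
    return le or ge   # False => components conflict (incomparable pair)

regions = {
    "A": (0.1, 0.2, 0.1),   # (absolute, relative, monetary poverty)
    "B": (0.3, 0.1, 0.4),   # low relative but high monetary poverty
    "C": (0.4, 0.5, 0.5),
}
incomparable = [(r, s) for r in regions for s in regions
                if r < s and not comparable(regions[r], regions[s])]
```

Counting how often a region appears in such conflicting pairs gives one simple way to flag the regions with the most severe data conflicts.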

4.
To adjust contingency tables to prescribed marginal frequencies, Deming and Stephan (1940) minimize a chi-square expression. Asymptotically equivalently, Ireland and Kullback (1968) minimize a Kullback-Leibler divergence, though the probabilistic arguments for both methods remain vague. Here we deduce a probabilistic model based on observed contingency tables. It shows that the two methods above, as well as the maximum likelihood approach in Smith (1947), asymptotically yield the 'most probable' adjustment under prescribed marginal frequencies. The fundamental hypothesis of statistical mechanics relates observations to 'most probable' realizations; 'most probable' is used here in the sense of so-called large deviations. The proposed adjustment has a notable product form and is generalized to contingency tables with infinitely many cells.  
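The Deming-Stephan adjustment is computed in practice by iterative proportional fitting (raking): alternately rescale rows and columns until the table matches the prescribed margins. A minimal sketch with an illustrative 2x2 table (the margins must share the same total):

```python
import numpy as np

# Iterative proportional fitting: scale rows to the target row totals,
# then columns to the target column totals, and repeat; the limit keeps
# the observed table's interaction structure (its product form).
def ipf(table, row_targets, col_targets, iters=100):
    t = np.asarray(table, dtype=float)
    for _ in range(iters):
        t *= (np.asarray(row_targets) / t.sum(axis=1))[:, None]
        t *= np.asarray(col_targets) / t.sum(axis=0)
    return t

t = ipf([[10, 20], [30, 40]], row_targets=[40, 60], col_targets=[50, 50])
```

The fixed point has the form (row factor) x (column factor) x (observed cell), which is the product form the abstract refers to.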

5.
This article introduces the relative deprivation curve to represent the size distribution of income and wealth. The curve has many useful applications in the measurement of poverty and inequality, which are explored. The methodology developed is then applied to data from the Australian Household Expenditure Survey, 1975–1976.  
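The abstract does not spell out the curve's construction; one standard discrete formalization of relative deprivation (following Yitzhaki) takes person i's deprivation to be the average income excess of everyone richer than i, and plots it against population rank. A sketch with illustrative incomes:

```python
# Relative deprivation of each person: mean over the population of
# max(y_j - y_i, 0), i.e. the average amount by which richer people
# exceed person i's income. The richest person has deprivation zero.
def relative_deprivation(incomes):
    n = len(incomes)
    return [sum(max(yj - yi, 0) for yj in incomes) / n for yi in incomes]

d = relative_deprivation([10, 20, 30, 40])   # decreasing in income rank
```

Under this formalization, mean relative deprivation equals the Gini coefficient times mean income, which is what ties the curve to inequality measurement.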

6.
Measuring and Decomposing Pro-Poor Growth in Rural China
Ruan Jing, Statistical Research (《统计研究》), 2007, 24(11): 54-58
Abstract: Pro-poor growth is economic growth that enables the poor to participate in economic activity and to gain more from it. Building on the relationships among economic growth, inequality, and changes in the scale of poverty, this paper constructs an income-distribution-based measure of pro-poor growth and decomposes it into contributing factors using the Shapley value method. An empirical analysis using CHNS rural household survey data concludes that China's rapid economic growth does not automatically alleviate poverty, and that further targeted poverty-reduction measures are needed.  

7.
This work concerns the study of poverty dynamics and the analysis of the influencing socio-demographic factors. A fuzzy, multidimensional approach has been chosen in order to define two different poverty measures. A panel regression model has been estimated, with particular attention paid to the treatment of unobservable heterogeneity among longitudinal units. The specified model combines autoregression with variance components. The empirical analysis has been conducted using the British Household Panel Survey (BHPS) data set from 1991 to 1997. This work was co-financed by MURST funds for the project "Occupazione e disoccupazione in Italia: misura e analisi dei comportamenti". The paper is the result of the common work of all the authors; in particular, G. Betti has written Sects. 2, 5.1 and 5.3.1; A. D'Agostino Sects. 4, 5.2 and 5.4; L. Neri Sects. 1, 3, 5.3.2 and 6.  

8.
Summary The paper first provides a short review of the most common microeconometric models, including logit, probit, discrete choice, duration models, models for count data and Tobit-type models. In the second part we consider the situation in which the micro data have undergone some anonymization procedure, which has become an important issue since otherwise confidentiality would not be guaranteed. We briefly describe the most important approaches to data protection, which can also be seen as introducing errors of measurement on purpose. We also consider the possibility of correcting the estimation procedure to take the anonymization procedure into account. We illustrate this for the case of binary data which are anonymized by 'post-randomization' and used in a probit model. We show the effect of 'naive' estimation, i.e. of disregarding the anonymization procedure. We also show that a 'corrected' estimate is available which is satisfactory in statistical terms; this remains true when parameters of the anonymization procedure must themselves be estimated. Research in this paper is related to the project "Faktische Anonymisierung wirtschaftsstatistischer Einzeldaten", financed by the German Ministry of Research and Technology.  

9.
Summary: We compare information on the length of unemployment spells contained in the IAB employment subsample (IABS) and in the German Socio-Economic Panel (GSOEP). Owing to the lack of information on registered unemployment in the IABS, we use two proxies of unemployment in the IABS as introduced by Fitzenberger/Wilke (2004). The first proxy comprises all periods of nonemployment after an employment spell which contain at least one period with unemployment compensation transfers. The second proxy includes all episodes between two employment spells during which an individual continuously received unemployment benefits. Estimation of standard duration models indicates that conclusions drawn from the IABS and the GSOEP differ in many cases. While the GSOEP suggests that the hazard rate has a maximum at about 12 months of unemployment, the IABS results suggest that this maximum lies at about 20 months. Contrary to our GSOEP results, and contrary to many results based on the GSOEP found in the literature, we find a statistically significant association between longer maximum entitlement periods for unemployment benefits ('Arbeitslosengeld') and longer unemployment durations for men in the IABS. The results for women do not show such clear patterns. The large sample size of the IABS also allows one to trace out statistically significant effects of characteristics such as regional and industry indicators, which is generally not possible in the relatively small GSOEP. * Acknowledgements: We would like to thank the editors of this special issue, Joachim Möller and Bernd Fitzenberger, two anonymous referees, the participants of the 'Statistische Woche 2004' in Frankfurt (in particular Reinhard Hujer, Olaf Hübler and Gerd Ronning), seminar participants at the ZEW Mannheim (especially François Laisney and Alexander Spermann) and Jennifer Hunt for their many helpful comments and suggestions. All remaining errors are our own.
Financial support from the Deutsche Forschungsgemeinschaft (DFG) through the research project 'Microeconometric modelling of unemployment durations under consideration of the macroeconomic situation' is gratefully acknowledged. The data used in this paper were made available by the Institute for Employment Research (IAB) at the Federal Labour Office of Germany, Nürnberg, and by the German Socio-Economic Panel Study (GSOEP) at the German Institute for Economic Research (DIW), Berlin.  

10.
Classical nondecimated wavelet transforms are attractive for many applications. When the data come from complex or irregular designs, the use of second-generation wavelets in nonparametric regression has proved superior to that of classical wavelets. However, the construction of a nondecimated second-generation wavelet transform is not obvious. In this paper we propose a new 'nondecimated' lifting transform, based on the lifting algorithm which removes one coefficient at a time, and explore its behaviour. Our approach also allows adaptivity to be embedded in the transform, i.e. wavelet functions can be constructed so that their smoothness adjusts to the local properties of the signal. We address the problem of nonparametric regression and propose an (averaged) estimator obtained by using our nondecimated lifting technique teamed with empirical Bayes shrinkage. Simulations show that our proposed method outperforms competing techniques able to work on irregular data. Our construction also opens avenues for generating a 'best' representation, which we shall explore.  

11.
12.
An improved estimator to analyse missing data
Missing data due to nonresponse, though undesirable, are a reality of any survey. In this paper we consider a situation in which, at a given time, observations are missing for one of several auxiliary characteristics; thus the 'missing' phenomenon occurs for the characteristics separately but not simultaneously. A new method, making use of all the available observations, is proposed. A simulation study based on three real populations was performed to test the proposed technique.  

13.
The label switching problem arises because the likelihood of a Bayesian mixture model is invariant to permutations of the component labels. The permutation can change multiple times between Markov chain Monte Carlo (MCMC) iterations, making it difficult to infer component-specific parameters of the model. Various so-called 'relabelling' strategies exist with the goal of 'undoing' the label switches that have occurred, so that functions that depend on component-specific parameters can be estimated. Existing deterministic relabelling algorithms rely upon specifying a loss function and relabel by minimising its posterior expected loss. In this paper we develop probabilistic approaches to relabelling that allow for estimation and incorporation of the uncertainty in the relabelling process. Variants of the probabilistic relabelling algorithm are introduced and compared to existing deterministic relabelling algorithms. We demonstrate that the idea of probabilistic relabelling can be expressed in a rigorous framework based on the EM algorithm.  
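A minimal sketch of the deterministic, loss-based relabelling the abstract contrasts with its probabilistic approach: for each MCMC draw, pick the label permutation minimising a squared-distance loss to a reference ordering. The reference and the draw below are illustrative values for a 3-component mixture:

```python
from itertools import permutations

# Relabel one MCMC draw of component means by choosing the permutation
# of its entries closest (in squared distance) to a reference ordering.
def relabel(draw, reference):
    return min(permutations(draw),
               key=lambda p: sum((a - b) ** 2 for a, b in zip(p, reference)))

reference = (0.0, 5.0, 10.0)   # e.g. posterior means from a pilot run
draw = (10.2, 0.1, 4.9)        # labels switched during sampling
fixed = relabel(draw, reference)
```

The hard `min` here is exactly what a probabilistic relabelling scheme softens: instead of committing to one permutation per draw, it carries a distribution over permutations, so the relabelling uncertainty propagates into the component-specific estimates.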

14.
Modeling prior information as a fuzzy set and using Zadeh's extension principle, we present a general approach to rating linear affine estimators in linear regression. This general approach is applied to fuzzy prior information sets given by ellipsoidal α-cuts. Here, in an important and meaningful subclass, a uniformly best linear affine estimator can be determined explicitly. Surprisingly, such a uniformly best linear affine estimator is also optimal with respect to a corresponding relative squared error approach. Two illustrative special cases are discussed, in which a generalized least squares estimator on the one hand and a general ridge or Kuks–Olman estimator on the other turn out to be uniformly best.  
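For orientation, the ridge estimator named in the abstract reduces, in its simplest form, to the familiar closed form beta = (X'X + kI)^(-1) X'y, with k = 0 recovering least squares. The design matrix, response, and shrinkage constant below are illustrative; the paper's fuzzy-prior machinery for choosing the shrinkage is not reproduced here:

```python
import numpy as np

# Simple ridge estimator: solve (X'X + k I) beta = X'y.
def ridge(X, y, k):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
beta_ls = ridge(X, y, 0.0)   # ordinary least squares
beta_r = ridge(X, y, 1.0)    # coefficients shrunk toward zero
```

Prior information enters through k: the tighter the (here, fuzzy ellipsoidal) prior set, the more shrinkage is justified.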

15.
Summary: This paper deals with item nonresponse on income questions in panel surveys and with longitudinal and cross-sectional imputation strategies to cope with this phenomenon. Using data from the German SOEP, we compare income inequality and mobility indicators based only on truly observed information to those derived from observed and imputed observations. First, we find a positive correlation between inequality and imputation. Second, income mobility appears to be significantly understated when using observed information only. Finally, longitudinal analyses provide evidence of a positive inter-temporal correlation between item nonresponse and any kind of subsequent nonresponse. * We are grateful to two anonymous referees and to Jan Goebel for very helpful comments and suggestions on an earlier draft of this paper. The paper also benefited from discussions with seminar participants at the Workshop on Item Nonresponse and Data Quality in Large Social Surveys, Basel/CH, October 9–11, 2003.  

16.
P-spline regression provides a flexible smoothing tool. In this paper we consider difference-type penalties in the context of nonparametric generalized linear models and investigate the impact of the order of the differencing operator. Minimizing Akaike's information criterion, we search for a best data-driven value of the differencing order. Theoretical derivations are established for the normal model and provide insights into a possible 'optimal' choice of the differencing order and its interrelation with other parameters. Applications of the selection procedure to non-normal models, such as Poisson models, are given. Simulation studies investigate the performance of the selection procedure, and we illustrate its use on real data examples.  
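A minimal sketch of what the differencing order controls. For simplicity the basis is the identity (one coefficient per observation) rather than a B-spline basis, and the data and smoothing parameter are illustrative; the paper's AIC criterion would then be evaluated for each candidate order d:

```python
import numpy as np

# d-th order difference penalty: minimise ||y - f||^2 + lam ||D_d f||^2,
# i.e. solve (I + lam D'D) f = y. A first-order penalty shrinks toward a
# constant, a second-order penalty toward a straight line.
def diff_matrix(n, d):
    return np.diff(np.eye(n), n=d, axis=0)

def psmooth(y, d, lam):
    n = len(y)
    D = diff_matrix(n, d)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

y = np.array([1.0, 2.0, 1.5, 3.0, 2.5])
fit1 = psmooth(y, d=1, lam=10.0)   # pulled toward a constant
fit2 = psmooth(y, d=2, lam=10.0)   # pulled toward a line
```

Because D annihilates polynomials of degree d-1, those trends pass through the penalty untouched, which is why the choice of d interacts with the other smoothing parameters.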

17.
Retrospectively collected duration data are often reported incorrectly. An important type of such error is heaping: respondents tend to round the data off or up according to some rule of thumb. For two special cases of the Weibull model we study the behaviour of the 'naive estimators', which simply ignore the measurement error due to heaping, and derive closed expressions for the asymptotic bias. These results give a formal justification of empirical evidence and simulation-based findings reported in the literature. In addition, situations where a substantial bias is to be expected can be identified, and an exact bias correction can be performed.  
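A small simulation illustrating the naive-estimator bias under heaping. The setup is illustrative, not the paper's: durations come from an exponential model (a Weibull with shape 1), the heaping rule rounds every duration up to a whole month, and the naive MLE of the mean is then just the sample mean of the heaped data:

```python
import math
import random

# Simulate exponential durations, heap them by rounding up, and compare
# the naive mean estimate with the mean of the true durations. Rounding
# up inflates the estimate by roughly half a unit.
random.seed(0)
true_mean = 6.0
durations = [random.expovariate(1 / true_mean) for _ in range(100_000)]
heaped = [math.ceil(d) for d in durations]       # round-up heaping rule
naive = sum(heaped) / len(heaped)                # naive MLE of the mean
bias = naive - sum(durations) / len(durations)   # systematic, close to +0.5
```

With a known rounding rule the bias has a closed form, which is what makes an exact correction of the kind the abstract describes possible.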

18.
A range of procedures in both robustness and diagnostics require optimisation of a target functional over all subsamples of a given size. Whereas such combinatorial problems are extremely difficult to solve exactly, something less than the global optimum can be 'good enough' for many practical purposes, as shown by example. A relaxation strategy embeds these discrete, high-dimensional problems in continuous, low-dimensional ones. Nonlinear optimisation methods can then be exploited to provide a single, reasonably fast algorithm that handles a wide variety of problems of this kind, thereby providing a certain unity. Four running examples illustrate the approach: on the robustness side, algorithmic approximations to minimum covariance determinant (MCD) and least trimmed squares (LTS) estimation; on the diagnostic side, detection of multiple multivariate outliers and global diagnostic use of the likelihood displacement function. The latter is developed here as a global complement to Cook's (in J. R. Stat. Soc. 48:133–169, 1986) local analysis. Appropriate convergence of each branch of the algorithm is guaranteed for any target functional whose relaxed form is 'gravitational' (a natural generalisation of concavity introduced here). Moreover, the descent strategy can downweight contaminating cases in the starting position to zero. A simulation study shows that, although not optimised for the LTS problem, our general algorithm holds its own against algorithms that are so optimised. An adapted algorithm relaxes the gravitational condition itself.  
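To make the subsample-optimisation problem concrete, here is the LTS objective in its simplest setting, a location parameter, with illustrative data. For a location estimate the best h-subset is contiguous in the sorted sample, so a small exact search suffices (the paper's relaxation and general regression algorithms are not reproduced here):

```python
# Least trimmed squares for a location parameter: minimise the sum of
# squared residuals over the best subsample of size h, so up to n - h
# gross outliers are simply ignored.
def lts_location(xs, h):
    xs = sorted(xs)
    best = None
    for i in range(len(xs) - h + 1):       # contiguous h-windows suffice
        window = xs[i:i + h]
        m = sum(window) / h
        loss = sum((x - m) ** 2 for x in window)
        if best is None or loss < best[0]:
            best = (loss, m)
    return best[1]

data = [9.8, 10.1, 10.0, 10.2, 9.9, 50.0, -40.0]   # two gross outliers
est = lts_location(data, h=5)                      # unaffected by them
```

In regression the windows trick no longer applies and the search is over all h-subsets, which is exactly the combinatorial explosion the relaxation strategy in the abstract is designed to sidestep.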

19.
20.
Despite decades of research in the medical literature, assessment of the mortality attributable to nosocomial infections in the intensive care unit (ICU) remains controversial, with different studies reporting effect estimates ranging from neutral to extremely risk-increasing. Interpretation of study results is further hindered by inappropriate adjustment (a) for censoring of the survival time by discharge from the ICU, and (b) for time-dependent confounders on the causal path from infection to mortality. In previous work (Vansteelandt et al., Biostatistics 10:46–59), we accommodated this through inverse probability of treatment and censoring weighting. Because censoring due to discharge from the ICU is so intimately connected with a patient's health condition, the ensuing inverse weighting analyses suffer from influential weights and rely heavily on the assumption that all common risk factors of ICU discharge and mortality have been measured. In this paper, we consider ICU discharge as a competing risk in the sense that we aim to infer the risk of 'ICU mortality' over time that would be observed if nosocomial infections could be prevented for the entire study population. For this purpose we develop marginal structural subdistribution hazard models with accompanying estimation methods. In contrast to subdistribution hazard models with time-varying covariates, the proposed approach (a) can accommodate high-dimensional confounders, (b) avoids regression adjustment for post-infection measurements and thereby so-called collider-stratification bias, and (c) results in a well-defined model for the cumulative incidence function. The methods are used to quantify the causal effect of nosocomial pneumonia on ICU mortality using data from the National Surveillance Study of Nosocomial Infections in ICUs (Belgium).  
