Similar Documents
1.
Numerical and graphical diagnostic tools are presented for studying the effect of explanatory-variable measurement errors on regression results when very rough knowledge about the measurement-error variances or reliabilities is available. These methods are exhibited on an example of sex discrimination in which male and female salaries are compared after adjustment for individual qualifications but the observed qualification variables are only proxies for what is actually desired. A measurement-error trace is suggested, much like a ridge trace, to exhibit effects of measurement errors of various sizes on the estimate of the adjusted sex difference in log salaries. An approximate Bayesian analysis for the simple functional model is also presented.
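To make the idea of a measurement-error trace concrete, here is a minimal Python sketch on simulated salary data (all variable names and numbers are illustrative, not from the paper): the observed data are held fixed while the assumed error variance of the proxy is swept over a grid, with a method-of-moments correction applied at each value, analogous to reading a ridge trace.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated salary example: the true qualification q is unobserved;
# only a proxy x = q + u is available, with unknown error SD.
n = 500
q = rng.normal(0.0, 1.0, n)
male = rng.integers(0, 2, n).astype(float)
log_salary = 10.0 + 0.5 * q + 0.1 * male + rng.normal(0.0, 0.2, n)
x = q + rng.normal(0.0, 0.5, n)            # proxy; true error SD is 0.5

Z = np.column_stack([np.ones(n), x, male])
ZtZ, Zty = Z.T @ Z, Z.T @ log_salary

# Measurement-error trace: hold the data fixed, sweep the *assumed* error
# variance, and apply a method-of-moments (de-attenuation) correction.
for sigma_u in np.linspace(0.0, 0.7, 8):
    M = ZtZ.copy()
    M[1, 1] -= n * sigma_u**2              # correct the proxy's second moment
    beta = np.linalg.solve(M, Zty)
    print(f"assumed sigma_u = {sigma_u:.1f}  adjusted sex gap = {beta[2]: .4f}")
```

Plotting the adjusted gap against the assumed error size gives the trace; the naive estimate (sigma_u = 0) typically mis-states the adjusted group difference, and the trace shows how quickly it moves as more measurement error is conceded.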

2.
China's basic pension insurance follows a model that combines social pooling with individual accounts. Within it, the individual account uses bookkeeping to establish a direct link between contributions and pension benefits. In theory, a fair and sustainable individual-account scheme should follow actuarial principles in setting the crediting rate, the payout coefficient, and the inheritance of account balances, yet China's parameter settings violate the principles of actuarial fairness and actuarial balance in several respects. Drawing on international experience, this paper analyzes the errors in the parameter settings of China's individual pension accounts, derives parameter-setting methods from the principle of actuarial balance, tests the effect of corrective schemes on the system's actuarial fairness and balance, and offers concrete reform proposals. The main conclusions are: the crediting rate of the individual account should be the internal rate of return of the system; for notional accounts, the crediting rate is not the bank deposit rate but should be adjusted periodically in line with the growth rate of contributory wages and changes in life expectancy; and the payout coefficient should be adjusted on the basis of dynamic life tables, the pension indexation rule, and changes in the account's internal rate of return. A correction scheme determined in this way would markedly improve the system's actuarial fairness and balance. We therefore recommend that China's basic pension individual accounts clarify, as early as possible, the pay-as-you-go financing mode and the ownership of account balances, adopt dynamic crediting rates and payout months, and introduce an automatic balancing mechanism to achieve long-term actuarial balance.

3.
Stepwise regression building procedures are commonly used applied statistical tools, despite their well-known drawbacks. While many of their limitations have been widely discussed in the literature, other aspects of the use of individual statistical fit measures, especially in high-dimensional stepwise regression settings, have not. Giving primacy to individual fit, as is done with p-values and R², when group fit may be the larger concern, can lead to misguided decision making. One of the most consequential uses of stepwise regression is in health care, where these tools allocate hundreds of billions of dollars to health plans enrolling individuals with different predicted health care costs. The main goal of this “risk adjustment” system is to convey incentives to health plans such that they provide health care services fairly, a component of which is not to discriminate in access or care for persons or groups likely to be expensive. We address some specific limitations of p-values and R² for high-dimensional stepwise regression in this policy problem through an illustrated example by additionally considering a group-level fairness metric.

4.
Research on the Pay Equity of Civil Servants in Administrative Agencies and Its Determinants
Zhang Guangke, Statistical Research (《统计研究》), 2012, 29(1): 92-95
The pay equity of civil servants in China's administrative agencies has generated considerable public controversy. Using first-hand salary survey data on civil servants in three central provinces, this paper studies civil-servant pay equity and its determinants empirically. The model results show that rank of post, type of agency, and level of agency dominate the internal inequity of civil-servant pay; that the standard used to define "comparable personnel" in enterprises affects how far external pay equity is achieved; and that the negative correlation between local per-capita fiscal revenue and civil-servant income limits how far performance-pay equity is achieved.

5.
Measuring the Fairness of Wage Distribution across Industries among Urban Workers in China
Bai Peiwen, Statistical Research (《统计研究》), 2010, 27(3): 3-11
Inter-industry wage gaps in China have drawn growing attention from researchers, yet the fairness of these gaps remains under-studied. Based on the principles of market efficiency, symmetry between return and risk, and social equity, this paper examines the fairness of wage distribution across industries among China's urban workers. The results show that wage distribution across industries is indeed inequitable, and that the current degree of inequity exceeds that of the previous decade. Composite wage returns have become stratified across industries: workers in water, environment and public-facility management, in public administration and social organizations, and in science, education, culture and health enjoy high composite wage returns; workers in industry and in wholesale, accommodation, leasing and services receive low composite returns; other industries lie in between. Inter-industry wage gaps contribute about one third of the overall Gini coefficient of urban workers' income.

6.
Bai Peiwen, Statistical Research (《统计研究》), 2015, 32(10): 56-64
This paper empirically examines fairness and efficiency in the two-tier distribution relations of three listed companies: XJ Electric (许继电气), Gree Electric (格力电器), and Sichuan Changhong (四川长虹). On fairness, XJ Electric's distribution favored capital before 2003 and labor thereafter; Gree Electric's distribution has consistently favored capital; Sichuan Changhong's has mainly favored labor. Between management and employees, XJ Electric and Gree Electric favor management, while Sichuan Changhong favored management before 2008 and employees afterwards. On the effects of distribution relations, the capital-labor income ratio has no significant effect on firm efficiency, whereas the executive-employee pay gap affects efficiency negatively. Empirically, when fairness and efficiency are weighted together, the composite index peaks in 2004 for XJ Electric, 2003 for Gree Electric, and 2006 for Sichuan Changhong. Firms should therefore improve their governance mechanisms in light of their actual circumstances to achieve a sounder distribution relationship.

7.
Traditionally, grain yield of rice is assessed by splitting it into yield components, including the number of panicles per plant, the number of spikelets per panicle, the 1000-grain weight, and the filled-spikelet percentage, so that performance can be evaluated through each component individually, and products of the yield components are used for grain yield comparisons. However, when standard statistical methods such as the two-sample t-test and analysis of variance are used, the assumptions of normality and variance homogeneity cannot be fully justified for comparing grain yields, so the empirical sizes cannot be adequately controlled. In this study, based on the concepts of generalized test variables and generalized p-values, a novel statistical testing procedure is developed for grain yield comparisons of rice. The proposed method is assessed by a series of numerical simulations. According to the simulation results, the proposed method performs reasonably well in Type I error control and empirical power. In addition, a real-life field experiment is analyzed by the proposed method; some productive rice varieties are screened out and suggested for follow-up investigation.
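The following sketch illustrates the generalized p-value machinery on the simplest relevant case, a two-variety mean comparison under unequal variances (the Behrens–Fisher problem), using generalized pivotal quantities in the Tsui–Weerahandi style. It is a simplified stand-in for the paper's procedure, which targets products of yield components; all data below are simulated.

```python
import numpy as np

def generalized_p_value(x, y, n_mc=100_000, seed=1):
    """One-sided generalized p-value for H0: mu_x <= mu_y vs H1: mu_x > mu_y
    under normality with unequal variances, via generalized pivotal
    quantities (Tsui-Weerahandi construction)."""
    rng = np.random.default_rng(seed)
    nx, ny = len(x), len(y)
    mx, my = np.mean(x), np.mean(y)
    sx2, sy2 = np.var(x, ddof=1), np.var(y, ddof=1)

    # Generalized pivotal quantity for a normal mean:
    # R_mu = xbar - Z/sqrt(n) * sqrt(s^2 * (n-1) / W),  W ~ chi2_{n-1}.
    def r_mu(m, s2, n):
        z = rng.standard_normal(n_mc)
        w = rng.chisquare(n - 1, n_mc)
        return m - z / np.sqrt(n) * np.sqrt(s2 * (n - 1) / w)

    # Small values favour H1.
    return np.mean(r_mu(mx, sx2, nx) - r_mu(my, sy2, ny) <= 0.0)

rng = np.random.default_rng(7)
yield_a = rng.normal(6.5, 1.2, 12)    # grain yields (t/ha), variety A
yield_b = rng.normal(5.8, 0.6, 12)    # variety B, smaller variance
print(f"generalized p-value: {generalized_p_value(yield_a, yield_b):.4f}")
```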

8.
In this article we examine three concepts of fairness in employment decisions. Two of these concepts are widely known in the literature as “Fairness 1” and “Fairness 2”. The third concept, which we refer to as “Fairness 0”, is defined and introduced here. Fairness 0 applies to the hiring stage, whereas Fairness 1 and Fairness 2 apply to the placement or promotion stages of employment. Our results have important policy implications. We show that the three concepts of fairness can only rarely be achieved simultaneously.

9.
Conventional analyses of a composite of multiple time-to-event outcomes use the time to the first event. However, the first event may not be the most important outcome. To address this limitation, generalized pairwise comparisons and win statistics (win ratio, win odds, and net benefit) have become popular and have been applied in clinical trial practice. However, the win ratio, win odds, and net benefit have typically been used separately. In this article, we examine the use of these three win statistics jointly for time-to-event outcomes. First, we explain the relation of point estimates and variances among the three win statistics, and the relation between the net benefit and the Mann–Whitney U statistic. Then we explain that the three win statistics are based on the same win proportions and test the same null hypothesis of equal win probabilities in two groups. We show theoretically that the Z-values of the corresponding statistical tests are approximately equal; therefore, the three win statistics provide very similar p-values and statistical powers. Finally, using simulation studies and data from a clinical trial, we demonstrate that, when there is no (or little) censoring, the three win statistics can complement one another to show the strength of the treatment effect. However, when the amount of censoring is not small and no adjustment is made for censoring, the win odds and the net benefit may have an advantage for interpreting the treatment effect; with adjustment (e.g., IPCW adjustment) for censoring, the three win statistics can again complement one another. For calculations we use the R package WINS, available on CRAN (the Comprehensive R Archive Network).
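A minimal Python sketch of the three win statistics for a single uncensored outcome, verifying the stated relation to the Mann–Whitney U statistic. The data are simulated for illustration; the article's R package WINS is the tool of record for real analyses with censoring and hierarchical composites.

```python
import numpy as np

def win_statistics(treat, control):
    """Win proportions and the three win statistics for a single
    uncensored outcome where larger values are better."""
    t = np.asarray(treat, dtype=float)[:, None]
    c = np.asarray(control, dtype=float)[None, :]
    n_pairs = t.size * c.size
    p_win = (t > c).sum() / n_pairs       # treatment wins the pair
    p_loss = (t < c).sum() / n_pairs      # treatment loses the pair
    p_tie = 1.0 - p_win - p_loss
    win_ratio = p_win / p_loss
    win_odds = (p_win + 0.5 * p_tie) / (p_loss + 0.5 * p_tie)
    net_benefit = p_win - p_loss
    return p_win, p_loss, p_tie, win_ratio, win_odds, net_benefit

rng = np.random.default_rng(3)
treat = rng.exponential(12.0, 80)     # e.g. event times, treatment arm
control = rng.exponential(9.0, 100)   # control arm

p_win, p_loss, p_tie, wr, wo, nb = win_statistics(treat, control)
print(f"win ratio {wr:.3f}, win odds {wo:.3f}, net benefit {nb:.3f}")

# Relation to Mann-Whitney: U/(n1*n2) = p_win + 0.5*p_tie, hence without
# censoring the net benefit equals 2*U/(n1*n2) - 1.
print(f"2*U/(n1*n2) - 1 = {2 * (p_win + 0.5 * p_tie) - 1:.3f}")
```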

10.
Regression is the method of choice for analyzing complex salary structures, such as those of university faculty. Unfortunately, not only are there limitations on the data available and shortcomings in the method, but courts do not always understand the evidence presented to them. Statistical analysis can play an important role in uncovering discrimination, but caution is necessary in analysis and presentation.

11.
The paper reviews recent contributions to the statistical inference methods, tests and estimates, based on the generalized median of Oja. Multivariate analogues of sign and rank concepts, affine invariant one-sample and two-sample sign tests and rank tests, affine equivariant median and Hodges–Lehmann-type estimates are reviewed and discussed. Some comparisons are made to other generalizations. The theory is illustrated by two examples.
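As an illustration of the Oja median this review is built around: in the bivariate case it minimizes the sum of areas of the triangles formed by the candidate point and every pair of observations. Below is a rough numerical sketch on simulated data, using a general-purpose optimizer rather than the specialized algorithms in the literature.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

def oja_objective(theta, X):
    """Bivariate Oja criterion: total area of the triangles formed by
    theta together with every pair of observations."""
    total = 0.0
    for i, j in combinations(range(len(X)), 2):
        a, b = X[i], X[j]
        total += 0.5 * abs((b[0] - a[0]) * (theta[1] - a[1])
                           - (b[1] - a[1]) * (theta[0] - a[0]))
    return total

rng = np.random.default_rng(4)
X = rng.multivariate_normal([1.0, -2.0], [[1.0, 0.6], [0.6, 2.0]], size=60)

# Nelder-Mead copes with the nonsmooth (but convex) objective;
# the coordinatewise median is a reasonable starting point.
res = minimize(oja_objective, x0=np.median(X, axis=0), args=(X,),
               method="Nelder-Mead")
print("Oja median:           ", res.x)
print("coordinatewise median:", np.median(X, axis=0))
```

Unlike the coordinatewise median, the Oja median is affine equivariant: transforming X by any nonsingular affine map transforms the estimate the same way.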

12.
The objective of this paper is to investigate through simulation the possible presence of the incidental parameters problem when performing frequentist model discrimination with stratified data. In this context, model discrimination amounts to considering a structural parameter taking values in a finite space with k points, k≥2. This setting seems not to have been considered yet in the literature on the Neyman–Scott phenomenon. Here we provide Monte Carlo evidence of the severity of the incidental parameters problem also in the model discrimination setting and propose a remedy for a special class of models. In particular, we focus on models that are scale families in each stratum. We consider traditional model selection procedures, such as the Akaike and Takeuchi information criteria, together with the best frequentist selection procedure based on maximization of the marginal likelihood induced by the maximal invariant, or of its Laplace approximation. Results of two Monte Carlo experiments indicate that when the sample size in each stratum is fixed and the number of strata increases, correct selection probabilities for traditional model selection criteria may approach zero, unlike what happens for model discrimination based on exact or approximate marginal likelihoods. Finally, two examples with real data sets are given.
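The classical Neyman–Scott phenomenon the paper builds on can be reproduced in a few lines: with a fixed number of observations per stratum and a growing number of strata, the joint MLE of the common variance is inconsistent. This sketch shows the estimation version of the problem, not the paper's model-discrimination setting; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma2 = 4.0      # true within-stratum variance
m = 2             # fixed number of observations per stratum

for k in (10, 100, 1_000, 10_000):          # number of strata grows
    mu = rng.normal(0.0, 10.0, k)           # incidental stratum means
    y = mu[:, None] + rng.normal(0.0, np.sqrt(sigma2), (k, m))
    # The joint MLE profiles each mu_i out with the stratum mean:
    sigma2_mle = np.mean((y - y.mean(axis=1, keepdims=True)) ** 2)
    print(f"k = {k:6d}  MLE of sigma^2 = {sigma2_mle:.3f}")
# The MLE converges to sigma2 * (m - 1) / m = 2.0, not 4.0: more strata
# bring more incidental parameters, and the bias never washes out.
```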

13.
The area under the receiver operating characteristic curve (AUC) is the most commonly reported measure of discrimination for prediction models with binary outcomes. However, recently it has been criticized for its inability to increase when important risk factors are added to a baseline model with good discrimination. This has led to the claim that the reliance on the AUC as a measure of discrimination may miss important improvements in clinical performance of risk prediction rules derived from a baseline model. In this paper we investigate this claim by relating the AUC to measures of clinical performance based on sensitivity and specificity under the assumption of multivariate normality. The behavior of the AUC is contrasted with that of discrimination slope. We show that unless rules with very good specificity are desired, the change in the AUC does an adequate job as a predictor of the change in measures of clinical performance. However, stronger or more numerous predictors are needed to achieve the same increment in the AUC for baseline models with good versus poor discrimination. When excellent specificity is desired, our results suggest that the discrimination slope might be a better measure of model improvement than AUC. The theoretical results are illustrated using a Framingham Heart Study example of a model for predicting the 10-year incidence of atrial fibrillation.
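A small simulated contrast of the two measures (not the Framingham analysis): fit a baseline and an augmented logistic model, then compare the change in AUC with the change in discrimination slope, defined as the difference in mean predicted risk between events and non-events. The data and coefficients below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def discrimination_slope(p, y):
    """Mean predicted risk among events minus non-events (Yates slope)."""
    return p[y == 1].mean() - p[y == 0].mean()

rng = np.random.default_rng(6)
n = 20_000
x1 = rng.standard_normal(n)                 # baseline risk factor
x2 = rng.standard_normal(n)                 # candidate new risk factor
logit = -2.0 + 1.2 * x1 + 0.8 * x2
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

for name, X in [("baseline (x1)      ", x1[:, None]),
                ("augmented (x1, x2) ", np.column_stack([x1, x2]))]:
    p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    print(f"{name} AUC = {roc_auc_score(y, p):.3f}  "
          f"slope = {discrimination_slope(p, y):.3f}")
```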

14.
For binary response models, pseudo-R² measures that are not based on residuals are commonly considered, while several concepts of residuals have been developed for testing. In this paper the endogenous variable of the latent model corresponding to the observable binary model is replaced by a pseudo variable. Goodness-of-fit measures and tests can then be based on a joint concept of residuals, as for linear models. Different kinds of residuals based on probit ML estimates are employed. The analytical investigations and the simulation results lead to the recommendation to use standardized residuals, for which there is no difference between observed and generalized residuals. In none of the investigated situations is this estimator far from the best result. While in large samples all considered estimators are very similar, small-sample properties favour residuals that are modifications of those suggested in the literature. An empirical application demonstrates that it is not necessary to develop new testing procedures for observable models with dichotomous regressands: well-known approaches for linear models with continuous endogenous variables, as implemented in standard econometric packages, can be applied to pseudo latent models. An erratum to this article is available online.
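For concreteness, here is a sketch of generalized residuals from a probit ML fit on simulated data, using the standard formula e_i = (y_i − Φ(η_i))·φ(η_i)/[Φ(η_i)(1 − Φ(η_i))] for E[latent error | y, x]. It illustrates the kind of residuals the paper discusses, not its exact estimators; the data-generating values are invented.

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 1000
x = rng.standard_normal(n)
y = (0.3 + 0.9 * x + rng.standard_normal(n) > 0).astype(int)  # latent DGP

X = sm.add_constant(x)
fit = sm.Probit(y, X).fit(disp=0)
eta = X @ fit.params                        # estimated linear predictor
p = norm.cdf(eta)

# Generalized residuals: E[latent error | y, x] under the probit model.
gen_resid = (y - p) * norm.pdf(eta) / (p * (1.0 - p))
# Standardized (Pearson) residuals for comparison.
std_resid = (y - p) / np.sqrt(p * (1.0 - p))

print("mean generalized residual:", gen_resid.mean())   # ~0 at the MLE
print("corr(generalized, standardized):",
      np.corrcoef(gen_resid, std_resid)[0, 1])
```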

15.
Fantasy sports, particularly the daily variety in which new lineups are selected each day, are a rapidly growing industry. The two largest companies in the daily fantasy business, DraftKings and FanDuel, have been valued as high as $2 billion. This research focuses on the development of a complete system for daily fantasy basketball, including both the prediction of player performance and the construction of a team. First, a Bayesian random effects model is used to predict an aggregate measure of daily NBA player performance. The predictions are then used to construct teams under the constraints of the game, typically related to a fictional salary cap and player positions. Permutation-based and K-nearest-neighbors approaches are compared in terms of the identification of “successful” teams—those who would be competitive more often than not based on historical data. We demonstrate the efficacy of our system by comparing our predictions to those from a well-known analytics website, and by simulating daily competitions over the course of the 2015–2016 season. Our results show an expected profit of approximately $9,000 on an initial $500 investment using the K-nearest-neighbors approach, a 36% increase relative to using the permutation-based approach alone. Supplementary materials for this article are available online.
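To illustrate the team-construction stage in miniature (prediction aside), here is a brute-force search over a toy player pool under a salary cap and position constraints. The pool, salaries, and projections are invented; real contests use larger rosters, where integer programming or the paper's permutation and K-nearest-neighbors approaches become necessary.

```python
from itertools import combinations, product

# Toy player pool: (name, position, salary, projected fantasy points).
pool = [
    ("G1", "G", 9000, 45), ("G2", "G", 7500, 38), ("G3", "G", 6000, 30),
    ("G4", "G", 4500, 22), ("F1", "F", 9500, 48), ("F2", "F", 8000, 40),
    ("F3", "F", 5500, 27), ("F4", "F", 4000, 19), ("C1", "C", 8500, 42),
    ("C2", "C", 5000, 24),
]
CAP = 30000
guards = [p for p in pool if p[1] == "G"]
forwards = [p for p in pool if p[1] == "F"]
centers = [p for p in pool if p[1] == "C"]

best = None
# Exhaustive search over 2 G + 2 F + 1 C lineups under the cap.
for gs, fs, c in product(combinations(guards, 2),
                         combinations(forwards, 2), centers):
    lineup = list(gs) + list(fs) + [c]
    salary = sum(p[2] for p in lineup)
    points = sum(p[3] for p in lineup)
    if salary <= CAP and (best is None or points > best[0]):
        best = (points, salary, [p[0] for p in lineup])

print("best projected points:", best[0],
      "salary:", best[1], "lineup:", best[2])
```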

16.
Official employment-related performance indicators in UK higher education are based on the population of students responding to the 'First destination supplement' (FDS). This generates potentially biased performance indicators as this population of students is not necessarily representative of the full population of leavers from each institution. University leavers who do not obtain qualifications and those who do not respond to the FDS are not included within the official analysis. We compare an employment-related performance indicator based on those students who responded to the FDS with alternative approaches which address the potential non-random nature of this subgroup of university leavers.

17.
In the first part of this article, we briefly review the history of seasonal adjustment and statistical time series analysis in order to understand why seasonal adjustment methods have evolved into their present form. This review provides insight into some of the problems that must be addressed by seasonal adjustment procedures and points out that advances in modern time series analysis raise the question of whether seasonal adjustment should be performed at all. This in turn leads to a discussion in the second part of issues involved in seasonal adjustment. We state our opinions about the issues raised and review some of the work of other authors. First, we comment on reasons that have been given for doing seasonal adjustment and suggest a new possible justification. We then emphasize the need to define precisely the seasonal and nonseasonal components and offer our definitions. Finally, we discuss criteria for evaluating seasonal adjustments. We contend that proposed criteria based on empirical comparisons of estimated components are of little value and suggest that seasonal adjustment methods should be evaluated based on whether they are consistent with the information in the observed data. This idea is illustrated with an example.

18.
In the first part of this article, we briefly review the history of seasonal adjustment and statistical time series analysis in order to understand why seasonal adjustment methods have evolved into their present form. This review provides insight into some of the problems that must be addressed by seasonal adjustment procedures and points out that advances in modern time series analysis raise the question of whether seasonal adjustment should be performed at all. This in turn leads to a discussion in the second part of issues involved in seasonal adjustment. We state our opinions about the issues raised and review some of the work of other authors. First, we comment on reasons that have been given for doing seasonal adjustment and suggest a new possible justification. We then emphasize the need to define precisely the seasonal and nonseasonal components and offer our definitions. Finally, we discuss our criteria for evaluating seasonal adjustments. We contend that proposed criteria based on empirical comparisons of estimated components are of little value and suggest that seasonal adjustment methods should be evaluated based on whether they are consistent with the information in the observed data. This idea is illustrated with an example.

19.
The normalized maximum likelihood (NML) is a recent penalized likelihood that has properties that justify defining the amount of discrimination information (DI) in the data supporting an alternative hypothesis over a null hypothesis as the logarithm of an NML ratio, namely, the alternative hypothesis NML divided by the null hypothesis NML. The resulting DI, like the Bayes factor but unlike the P‐value, measures the strength of evidence for an alternative hypothesis over a null hypothesis such that the probability of misleading evidence vanishes asymptotically under weak regularity conditions and such that evidence can support a simple null hypothesis. Instead of requiring a prior distribution, the DI satisfies a worst‐case minimax prediction criterion. Replacing a (possibly pseudo‐) likelihood function with its weighted counterpart extends the scope of the DI to models for which the unweighted NML is undefined. The likelihood weights leverage side information, either in data associated with comparisons other than the comparison at hand or in the parameter value of a simple null hypothesis. Two case studies, one involving multiple populations and the other involving multiple biological features, indicate that the DI is robust to the type of side information used when that information is assigned the weight of a single observation. Such robustness suggests that very little adjustment for multiple comparisons is warranted if the sample size is at least moderate. The Canadian Journal of Statistics 39: 610–631; 2011. © 2011 Statistical Society of Canada
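The simplest computable instance of the DI: a Bernoulli model with free θ against a simple fair-coin null. For a simple null hypothesis the NML reduces to the likelihood itself, while the free-θ NML divides the maximized likelihood by the Shtarkov sum over all possible samples. The counts below are invented for illustration, and this plain-NML sketch does not cover the paper's weighted extension.

```python
from math import comb, log

def bernoulli_nml(k, n):
    """NML of the Bernoulli model with free theta, for k successes in n trials."""
    def max_lik(k_, n_):
        t = k_ / n_
        # 0**0 = 1 convention at the boundary MLEs.
        return ((t ** k_ if k_ else 1.0)
                * ((1 - t) ** (n_ - k_) if n_ - k_ else 1.0))
    shtarkov = sum(comb(n, j) * max_lik(j, n) for j in range(n + 1))
    return max_lik(k, n) / shtarkov

n, k = 60, 42                       # invented data: 42 successes in 60 trials
nml_alt = bernoulli_nml(k, n)
nml_null = 0.5 ** n                 # simple null theta = 1/2: NML = likelihood
di = log(nml_alt / nml_null)        # discrimination information, in nats
print(f"DI = {di:.3f} nats in favour of the free-theta alternative")
```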

20.
We examine the orthogonality assumption of seasonal and nonseasonal components for official quarterly unemployment figures in Germany and the United States. Although nonperiodic correlations do not seem to reject the orthogonality assumption, a periodic analysis based on correlation functions that vary with the seasons indicates the violation of orthogonality. We find that the unadjusted data can be described by periodic autoregressive models with a unit root. In simulations we replicate the empirical findings for the German data, where we use these simple models to generate artificial samples. Multiplicative seasonal adjustment leads to large periodic correlations. Additive adjustment leads to smaller ones.
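A toy illustration of the periodic diagnostic: compute the correlation between the components separately for each quarter rather than pooled. The synthetic series below is constructed so the seasonal amplitude grows with the level, a violation that a pooled correlation largely averages away while season-specific correlations expose it.

```python
import numpy as np

def periodic_correlations(a, b, period=4):
    """Correlation between two series computed separately for each season,
    instead of pooled over all observations."""
    a, b = np.asarray(a), np.asarray(b)
    return [np.corrcoef(a[q::period], b[q::period])[0, 1]
            for q in range(period)]

rng = np.random.default_rng(9)
n_years = 40
trend = np.cumsum(rng.normal(0.2, 1.0, 4 * n_years))      # nonseasonal level
pattern = np.tile([1.5, -0.5, -1.5, 0.5], n_years)        # quarterly pattern
# Seasonal amplitude grows with the level, violating orthogonality.
seasonal = pattern * (1.0 + 0.05 * trend) + rng.normal(0.0, 0.3, 4 * n_years)

print("pooled corr:     ", round(float(np.corrcoef(seasonal, trend)[0, 1]), 3))
print("per-quarter corr:", np.round(periodic_correlations(seasonal, trend), 3))
```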
