Similar Documents
20 similar documents found (search time: 453 ms)
1.
Consistency and expert weighting are the two main problems facing group weighting. Building on existing research, this paper proposes a consistency-ranking-based group G1 method for determining the weights of a panel of experts. First, each expert gives a subjective ranking of the evaluation indicators. Second, the consistency of the expert rankings is tested with Spearman's rank correlation coefficient, and indicators that fail the test are removed. Finally, the expert rankings are re-ranked by the mean method, the Borda method, and the Copeland method; a consistent ranking of the indicators is determined through iterative correction, and each expert's weight is set according to the correlation between that expert's ranking and the consistent ranking. The resulting expert weights reasonably reflect differences among experts and ensure that the final indicator weights embody the group's collective judgment.
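As a rough illustration of the machinery this abstract describes, the sketch below (hypothetical rankings; the consensus is formed by simple mean ranks rather than the paper's full Borda/Copeland cyclic-correction procedure) scores each expert's ranking against a consensus ranking with Spearman's coefficient and turns the correlations into expert weights:

```python
import numpy as np

def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation for two tie-free rankings:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    d = np.asarray(rank_a) - np.asarray(rank_b)
    n = len(d)
    return 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))

# Three experts rank five indicators (1 = most important).
rankings = np.array([
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
])

# Consensus ranking via mean ranks (a Borda-style rule).
consensus = np.argsort(np.argsort(rankings.mean(axis=0))) + 1

# Each expert's weight is proportional to the agreement between
# that expert's ranking and the consensus ranking.
rhos = np.array([spearman_rho(r, consensus) for r in rankings])
weights = rhos / rhos.sum()
print(consensus, weights.round(3))
```

Experts whose rankings track the consensus closely receive larger weights, which is the core of the correlation-based weight allocation described above.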

2.
A Method for Determining Expert Weights in Group Decision Making and Its Application   (cited by: 2; self-citations: 0; citations by others: 2)
For group decision settings in which several experts assign weights and their results disagree, this paper draws on the coefficient-of-variation method and the idea of averaging to give a method for determining each expert's decision weight. Its distinguishing feature is that an expert's weight is determined by the degree of consistency between that expert's weighting results and those of the other experts, as well as the group's average weighting result. Because the method makes full use of the original decision information, the evaluation results are more objective, giving it considerable practical value. A worked example illustrates the method's applicability.
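A minimal sketch of this idea (hypothetical expert weight vectors; closeness to the group-mean vector stands in for the paper's exact variation-coefficient construction):

```python
import numpy as np

# Each row: one expert's weight vector over four indicators (hypothetical).
W = np.array([
    [0.40, 0.30, 0.20, 0.10],
    [0.35, 0.30, 0.25, 0.10],
    [0.25, 0.35, 0.25, 0.15],
])

group_mean = W.mean(axis=0)

# Score each expert by closeness to the group-mean weight vector;
# experts who deviate more from the consensus get smaller weights.
dist = np.linalg.norm(W - group_mean, axis=1)
score = 1.0 / (1.0 + dist)
expert_w = score / score.sum()

# Final indicator weights: expert-weighted average of the individual vectors.
final = expert_w @ W
print(expert_w.round(3), final.round(3))
```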

3.
To improve on the equal-weighting practice common in traditional evaluations of industrial economic performance, this paper proposes a factor-analysis-based method for obtaining the weights of industrial economic performance indicators and empirically analyzes the economic performance of six industries in a certain region. The means of group expert evaluation data are used as the evaluation data and the statistical sample, reducing uncertainty in the evaluation process. Indicator weights are obtained by the factor-score weighting method, and an evaluation model of industrial economic performance, together with countermeasures for improving it, is given. The empirical results show that the key indicators affecting industrial economic performance are operating effectiveness, input-output ratio, and economic scale; performance has risen year by year, but it differs considerably across industries and leaves substantial room for improvement.

4.
Risk assessment is a prerequisite and an indispensable step for the smooth running of a sporting event. This paper first analyzes and presents six indicators bearing on sporting-event risk, determines their weights with a fuzzy complementary judgment matrix, and then, on the principle that group-decision results are more persuasive, proposes a multi-evaluator risk assessment model for sporting events. To aggregate the group's evaluations, a nonlinear model of generalized minimum deviation is built on the principle of minimizing the deviation between each individual expert's evaluation and the group's aggregate evaluation.

5.
Methods used in economic evaluation research are many and varied, and the determination of indicator weights is a factor that cannot be ignored. Although mathematically grounded weighting methods continue to proliferate, none of them recognizes that the weight of a given indicator may differ across the individuals being evaluated. This paper proposes an entropy-Shapley sample-differentiated weighting method and analyzes a worked example.

6.
A New Method for Determining Expert Weights in Group Decision Making   (cited by: 1; self-citations: 0; citations by others: 1)
For group decision settings in which several experts assign weights and their results disagree, this paper draws on the coefficient-of-variation method to give a method for determining each expert's decision weight. Its distinguishing feature is that expert weights are determined by the degree of consistency among the group's weighting results. The method assigns weights directly, is simple to compute, and accords with practice, giving it considerable practical value. A worked example illustrates its applicability.

7.
How to Handle Extreme Values in Comprehensive Evaluation with the Entropy Value Method   (cited by: 2; self-citations: 0; citations by others: 2)
Guo Xianguang, 《浙江统计》 (Zhejiang Statistics), 1997, (10): 19-21
In multi-indicator comprehensive evaluation, the methods for determining indicator weights fall mainly into subjective and objective weighting methods. Subjective weighting methods set weights according to the evaluator's subjective view of each indicator's importance; objective weighting methods draw their raw weighting information from the objective environment, determining weights from the degree of association among indicators or the amount of information each indicator provides. Objective weighting methods include the entropy value method, principal component analysis, factor analysis, the multiple correlation coefficient method, and so on. This paper proposes solutions to several problems encountered in applying the entropy value method. 1. Principle of the entropy value method. Suppose there are m alternatives to be evaluated and n evaluation indicators, forming the raw data matrix X = (x_ij)_{m×n}. For a given indicator, the greater the spread among the values x_ij, the greater the role that indicator plays in the comprehensive evaluation…
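The entropy principle stated above can be sketched directly (hypothetical data; a real implementation would also apply the 0·ln 0 = 0 convention for zero proportions):

```python
import numpy as np

# Raw data: m = 4 alternatives (rows), n = 3 benefit-type indicators (columns).
X = np.array([
    [80.0, 10.0, 200.0],
    [60.0, 12.0, 180.0],
    [90.0,  8.0, 260.0],
    [70.0, 11.0, 220.0],
])
m, n = X.shape

# Normalize each column to proportions p_ij.
P = X / X.sum(axis=0)

# Entropy of indicator j: e_j = -1/ln(m) * sum_i p_ij * ln(p_ij).
E = -(P * np.log(P)).sum(axis=0) / np.log(m)

# Weighting: the lower the entropy (i.e., the more an indicator
# differentiates the alternatives), the larger its weight.
w = (1 - E) / (1 - E).sum()
print(w.round(4))
```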

8.
Complex, multidimensional journal evaluation frequently involves weighting the evaluation indicators, and because the weighting scheme directly affects the evaluation results, how to determine the weights is an important question. Based on the weighting principle of the CRITIC method combined with the analytic hierarchy process (AHP), this paper proposes a new weight-determination method uniting subjective and objective weighting: the structural CRITIC method. Using data on 17 mathematics journals from the 2022 China Academic Journal Impact Factor Annual Report (Natural Science and Engineering Technology), it is compared against three other weighting methods, including the original CRITIC method. The results show that divergence between subjective and objective weighting undermines the acceptance of evaluation results; that the choice of weighting method should consider its applicability; and that the structural CRITIC method better reflects indicator weights from both the subjective and the objective side, providing a new weighting method for evaluating academic journals.
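For reference, the original CRITIC weighting that the structural variant builds on can be sketched as follows (hypothetical journal data; this is the classic contrast-times-conflict formula, not the paper's structural CRITIC method):

```python
import numpy as np

# Hypothetical journal data: 5 journals x 3 indicators
# (e.g. impact factor, citations, downloads), min-max normalized first.
X = np.array([
    [2.1, 1500.0, 30.0],
    [1.4,  900.0, 22.0],
    [3.0, 2100.0, 45.0],
    [0.8,  400.0, 12.0],
    [1.9, 1300.0, 35.0],
])
Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Contrast intensity: standard deviation of each normalized indicator.
sigma = Z.std(axis=0, ddof=1)

# Conflict: sum over the other indicators of (1 - correlation).
R = np.corrcoef(Z, rowvar=False)
conflict = (1 - R).sum(axis=0)

# CRITIC information content and resulting weights.
C = sigma * conflict
w = C / C.sum()
print(w.round(4))
```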

9.
Chen Ji et al., 《统计研究》 (Statistical Research), 2019, 36(4): 106-118
To counter the fossilization of individual weights in group evaluation, which arises when the allocation of weights to individual evaluators ignores disagreement within the group and the instability of individual evaluation scales, this paper proposes a group evaluation method based on adaptive variable weighting. First, the theoretical basis and general approach of group variable weighting are set out, and a variable-weight mechanism based on opinion deviation is designed from the standpoint of disagreement between individuals and the group. Second, taking a satisfactory level of relative group consistency as the control condition, an adaptive mechanism for group variable weighting is designed that adaptively allocates and synthesizes individual weights without adjusting the group's quantified evaluation data. Finally, a real case illustrates the procedure, and varying the learning rate allows the method's dynamic behavior to be compared and its usability assessed.

10.
Weight determination is a key step in multi-indicator comprehensive evaluation. To address the arbitrariness of expert judgments of indicator importance in subjective weighting and the one-sidedness of conventional objective weighting, this paper proposes an improved entropy weighting method that accounts for inter-period trends. The method incorporates the time dimension, mining information on relative indicator importance from trends in the data across periods; it fixes the ratios of indicator importance on entropy-theoretic grounds and solves for the indicator weights by hierarchical programming. A case study evaluating the level of China's new-type urbanization verifies the method's reasonableness and effectiveness.

11.
When data are missing, analyzing records that are completely observed may cause bias or inefficiency. Existing approaches to handling missing data include likelihood, imputation and inverse probability weighting. In this paper, we propose three estimators inspired by deleting some completely observed data in the regression setting. First, we generate artificial observation indicators that are independent of the outcome given the observed data and draw inferences conditioning on the artificial observation indicators. Second, we propose a closely related weighting method. The proposed weighting method has more stable weights than those of the inverse probability weighting method (Zhao, L., Lipsitz, S., 1992. Designs and analysis of two-stage studies. Statistics in Medicine 11, 769–782). Third, we improve the efficiency of the proposed weighting estimator by subtracting the projection of the estimating function onto the nuisance tangent space. When data are missing completely at random, we show that the proposed estimators have asymptotic variances smaller than or equal to the variance of the estimator obtained from using completely observed records only. Asymptotic relative efficiency computation and simulation studies indicate that the proposed weighting estimators are more efficient than the inverse probability weighting estimators under a wide range of practical situations, especially when the missingness proportion is large.
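The baseline inverse probability weighting that such estimators stabilize can be sketched on simulated data (observation probabilities taken as known rather than estimated; this is the plain IPW benchmark, not the paper's proposed artificial-indicator estimators):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Covariate x is fully observed; outcome y is missing with a probability
# that depends on x (missing at random given x).
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))   # known observation probs
r = rng.random(n) < p_obs                         # r = True: y observed

# Complete-case mean is biased upward: y is observed more often when x is large.
cc_mean = y[r].mean()

# Horvitz-Thompson / IPW mean: weight each observed y by 1 / p_obs.
ipw_mean = np.sum(y[r] / p_obs[r]) / np.sum(1.0 / p_obs[r])
print(round(cc_mean, 3), round(ipw_mean, 3))
```

The complete-case mean lands well above the true mean of zero, while the IPW mean recovers it; the instability of the weights 1/p_obs for small observation probabilities is exactly what motivates the more stable schemes discussed above.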

12.
The German Microcensus (MC) is a large-scale rotating panel survey over three years. The MC is attractive for longitudinal analysis over the entire participation duration because of the mandatory participation and the very high case numbers (about 200,000 respondents). However, as a consequence of the area sampling used for the MC, residential mobility is not covered, so statistical information at the new residence is lacking in the MC sample. This raises the question of whether longitudinal analyses, like transitions between labour market states, are biased, and how different methods that promise to reduce such a bias perform. Similar problems occur for other national Labour Force Surveys (LFS) which are rotating panels and do not cover residential mobility; see Clarke and Tate (2002). Based on data from the German Socio-Economic Panel (SOEP), which covers residential mobility, we analysed the effects of missing data on residential movers via the estimation of labour force flows. By comparing the results from the complete SOEP sample with the results from the SOEP restricted to the non-movers, we concluded that the non-coverage of residential movers cannot be ignored in Rubin's sense. With respect to correction methods, we analysed weighting by inverse mobility scores and log-linear models for partially observed contingency tables. Our results indicate that weighting by inverse mobility scores reduces the bias to about 60%, whereas the official longitudinal weights obtained by calibration result in a bias reduction of about 80%. The estimation of log-linear models for non-ignorable non-response leads to very unstable results.

13.
Inverse probability weighting (IPW) can deal with confounding in non-randomized studies. The inverse weights are probabilities of treatment assignment (propensity scores), estimated by regressing assignment on predictors. Problems arise if predictors can be missing. Solutions previously proposed include assuming assignment depends only on observed predictors, and multiple imputation (MI) of missing predictors. For the MI approach, it was recommended that missingness indicators be used with the other predictors. We determine when the two MI approaches (with/without missingness indicators) yield consistent estimators and compare their efficiencies. We find that, although including indicators can reduce bias when predictors are missing not at random, it can induce bias when they are missing at random. We propose a consistent variance estimator and investigate the performance of the simpler Rubin's Rules variance estimator. In simulations we find both estimators perform well. IPW is also used to correct bias when an analysis model is fitted to incomplete data by restricting to complete cases. Here, the weights are inverse probabilities of being a complete case. We explain how the same MI methods can be used in this situation to deal with missing predictors in the weight model, and illustrate this approach using data from the National Child Development Survey.

14.
Various methods have been suggested in the literature to handle a missing covariate in the presence of surrogate covariates. These methods belong to one of two paradigms. In the imputation paradigm, Pepe and Fleming (1991) and Reilly and Pepe (1995) suggested filling in missing covariates using the empirical distribution of the covariate obtained from the observed data. We can proceed one step further by imputing the missing covariate using nonparametric maximum likelihood estimates (NPMLE) of the density of the covariate. Recently, Murphy and Van der Vaart (1998a) showed that such an approach yields a consistent, asymptotically normal, and semiparametric efficient estimate for the logistic regression coefficient. In the weighting paradigm, Zhao and Lipsitz (1992) suggested an estimating function using completely observed records after weighting inversely by the probability of observation. An extension of this weighting approach designed to achieve the semiparametric efficiency bound was considered by Robins, Hsieh and Newey (RHN) (1995). The two ends of each paradigm (NPMLE and RHN) attain the efficiency bound and are asymptotically equivalent. However, both require a substantial amount of computation. A question arises whether and when, in practical situations, this extensive computation is worthwhile. In this paper we investigate the performance of single and multiple imputation estimates, weighting estimates, semiparametric efficient estimates, and two new imputation estimates. Simulation studies suggest that the sample size should be substantially large (e.g. n = 2000) for NPMLE and RHN to be more efficient than simpler imputation estimates. When the sample size is moderately large (n ≤ 1500), simpler imputation estimates have as small a variance as semiparametric efficient estimates.

15.
Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial.

16.
In this paper we present a methodology for the study of multi-dimensional aspects of poverty and deprivation. The conventional poor/non-poor dichotomy is replaced by defining poverty as a matter of degree, determined by the place of the individual in the income distribution. The fuzzy poverty measure proposed is in fact also expressible in terms of the generalised Gini measure. The same methodology facilitates the inclusion of other dimensions of deprivation into the analysis: by appropriately weighting indicators of deprivation to reflect their dispersion and correlation, we can construct measures of non-monetary deprivation in its various dimensions. These indicators illuminate the extent to which purely monetary indicators are insufficient in themselves in capturing the prevalence of deprivation. An important contribution of the paper is to identify rules for the aggregation of fuzzy sets appropriate for the study of poverty and deprivation. In particular, we define a ‘composite’ fuzzy set operator which takes into account whether the sets being aggregated are of a ‘similar’ or a ‘dissimilar’ type. These rules allow us to meaningfully combine income and the diverse non-income deprivation indices at the micro-level and construct what we have termed ‘intensive’ and ‘extensive’ indicators of deprivation. We note that mathematically the same approach can be carried over to the study of persistence of poverty and deprivation over time.

17.
In previous work, non-response adjustments based on calibration weighting have been proposed for estimating gross flows in economic activity status from the quarterly Labour Force Survey. However, even after adjustment there may be residual non-response bias. The weighting is based on estimates of cross-sectional distributions and so cannot adjust for bias if non-response is associated with individual flows between quarters. To investigate this possibility, it was decided to apply models for estimating gross flows when non-response depends on the flows. This paper has two aims: first, to describe the many problems encountered when attempting to implement these models; and second, to outline a solution to the major problem that arose, namely, that comparing the model results directly with the weighting results was not possible. A simulation study was used to compare the results indirectly, and it was tentatively concluded that non-response is not strongly associated with the flows and that the weighting provides an adequate adjustment.

18.
In this paper, we propose an objective principal components weighting scheme for all-time Winter Olympic gold, silver and bronze medals based solely on the number of medals won. Our results suggest that approximately equal weights be assigned (or the total medal counts be used regardless of color) if all three medal types are retained for ranking purposes. When the proposed methodology is tested against five alternative weighting schemes suggested in the literature, using the results for the 2010 Vancouver Winter Olympics, we find significant agreement in the country rankings. Furthermore, our implementation of a principal components variable reduction strategy identifies silver as the best single representative medal count for parsimonious Winter Olympics rankings.

KEYWORDS: Olympic rankings, principal components analysis, variable reduction strategy, medal counts, objective weighting scheme
JEL Classifications: C18, C38, C43
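A sketch of principal-components weighting on a made-up medal table (hypothetical counts, NumPy only): because gold, silver and bronze counts are highly correlated across countries, the first-component loadings come out nearly equal, consistent with the abstract's equal-weights conclusion.

```python
import numpy as np

# Hypothetical all-time medal table: rows = countries,
# columns = (gold, silver, bronze) counts.
M = np.array([
    [132.0, 125.0, 111.0],
    [105.0, 110.0, 100.0],
    [ 78.0,  80.0,  72.0],
    [ 37.0,  38.0,  43.0],
    [ 22.0,  25.0,  19.0],
    [  9.0,  12.0,  14.0],
])

# Leading eigenvector of the correlation matrix of the medal counts.
R = np.corrcoef(M, rowvar=False)
eigval, eigvec = np.linalg.eigh(R)    # eigenvalues in ascending order
pc1 = np.abs(eigvec[:, -1])           # sign of an eigenvector is arbitrary

# Normalized first-component loadings serve as objective medal weights.
w = pc1 / pc1.sum()
print(w.round(3))
```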

19.
Composite quantile regression (CQR) is motivated by the desire to have an estimator for linear regression models that avoids the breakdown of the least-squares estimator when the error variance is infinite, while retaining high relative efficiency even when the least-squares estimator is fully efficient. Here, we study two weighting schemes to further improve the efficiency of CQR, motivated by Jiang et al. [Oracle model selection for nonlinear models based on weighted composite quantile regression. Statist Sin. 2012;22:1479–1506]. In theory, the two weighting schemes are asymptotically equivalent to each other and always result in more efficient estimators than CQR. Although the first weighting scheme is hard to implement, it sheds light on the situations in which the improvement is expected to be large. A main contribution is to identify, theoretically and empirically, that standard CQR performs well relative to weighted CQR only when the error density is logistic or close to logistic in shape, which was not noted in the literature.

20.

Estimation of the average treatment effect is crucial in causal inference for evaluating treatments or interventions in biostatistics, epidemiology, econometrics, and sociology. However, existing estimators require that either a propensity score model, an outcome model, or both be correctly specified, which is difficult to verify in practice. In this paper, we allow multiple models for both the propensity scores and the outcomes, and then construct a weighting estimator based on the observed data using two-sample empirical likelihood. The resulting estimator is consistent if any one of those multiple models is correctly specified, and thus provides multiple protection on consistency. Moreover, the proposed estimator can attain the semiparametric efficiency bound when one propensity score model and one outcome model are correctly specified, without requiring knowledge of which models are correct. Simulations are performed to evaluate the finite-sample performance of the proposed estimators. As an application, we analyze data collected from the AIDS Clinical Trials Group Protocol 175.
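The plain IPW estimator of the average treatment effect that such multiply robust methods build on can be sketched on simulated data (true propensity score used for simplicity; not the paper's empirical-likelihood estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# One confounder x affects both treatment assignment and the outcome.
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-x))                       # true propensity score
t = rng.random(n) < p                               # treatment indicator
y = 1.0 + 2.0 * t + 3.0 * x + rng.normal(size=n)    # true ATE = 2

# Naive difference in means is confounded: treated units have larger x.
naive = y[t].mean() - y[~t].mean()

# IPW (Horvitz-Thompson) estimator of the average treatment effect.
ate_ipw = np.mean(t * y / p) - np.mean((~t) * y / (1 - p))
print(round(naive, 2), round(ate_ipw, 2))
```

The naive contrast overstates the effect substantially, while the IPW estimate lands near the true value of 2; consistency here hinges on the propensity model being correct, which is exactly the requirement the multiply robust construction relaxes.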


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号