Similar Documents
20 similar documents found.
1.
A ranking method for a class of uncertain multi-attribute decision-making problems   Cited by: 75 (self-citations: 4, others: 75)
This paper studies uncertain multi-attribute decision-making problems in which the information on attribute weights is completely unknown and the attribute values are given as interval numbers, and presents a normalization formula for interval-number decision matrices. Based on the deviation degree between interval numbers, a concise formula for deriving the attribute weights is given, and a possibility-degree-based method for ranking the alternatives is proposed. Finally, a worked example illustrates the practicality and effectiveness of the method.
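A widely used form of the possibility degree for comparing intervals a = [a⁻, a⁺] and b = [b⁻, b⁺] is p(a ≥ b) = min{max((a⁺ − b⁻)/(ℓ(a) + ℓ(b)), 0), 1}, where ℓ(·) is the interval length. The sketch below ranks alternatives by row sums of the pairwise possibility matrix; it is the standard construction from this literature and not necessarily the paper's exact formulas, and the interval scores are hypothetical.

```python
# Sketch: possibility-degree ranking of interval numbers (standard form from
# the interval MADM literature; the paper's exact formulas may differ).

def possibility_degree(a, b):
    """p(a >= b) for intervals a = (lo, hi), b = (lo, hi)."""
    la, lb = a[1] - a[0], b[1] - b[0]
    if la + lb == 0:                      # both intervals degenerate to points
        return 1.0 if a[0] >= b[0] else 0.0
    return min(max((a[1] - b[0]) / (la + lb), 0.0), 1.0)

# Aggregated interval scores of three alternatives (hypothetical data).
scores = [(0.42, 0.58), (0.35, 0.61), (0.50, 0.55)]

n = len(scores)
P = [[possibility_degree(scores[i], scores[j]) for j in range(n)]
     for i in range(n)]

# Rank by row sums of the complementary judgment matrix P.
order = sorted(range(n), key=lambda i: -sum(P[i]))
print("ranking (best first):", [f"A{i+1}" for i in order])
```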

2.
The possibility-degree model for ranking interval numbers is one of the foundational problems that the academic community has continued to explore. An interval number characterizes the range of values an attribute may take, and prior work has assumed that values within the interval follow a uniform distribution. This paper generalizes the uniform distribution to arbitrary distributions and, using probability-theoretic methods, constructs a new possibility-degree model for ranking interval numbers. On this basis it revises the previous definition of equality between two interval numbers, introduces the concept of shape-equality (形等) of interval numbers, and further revises the reflexivity condition of the possibility degree and the comprehensive ranking method for interval numbers. The theory is then applied to multi-attribute decision-making, and the basic decision procedure is given. Computations on an example decision problem demonstrate the feasibility and reasonableness of the new theory and method, which shows good potential for wider application.
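Under this generalization, the possibility degree of a ≥ b becomes P(X ≥ Y) for independent random variables X and Y supported on the two intervals with arbitrary distributions. A minimal Monte Carlo sketch of that probabilistic definition follows; the truncated-normal choice and all numbers are illustrative assumptions, and the paper's analytic treatment is more general.

```python
# Sketch: possibility degree as P(X >= Y) when the values inside each interval
# follow a general (non-uniform) distribution; truncated normals are used
# purely as an illustration.
import numpy as np

rng = np.random.default_rng(0)

def truncated_sample(lo, hi, mean, sd, size):
    """Rejection-sample a normal distribution truncated to [lo, hi]."""
    out = np.empty(0)
    while out.size < size:
        draw = rng.normal(mean, sd, size)
        out = np.concatenate([out, draw[(draw >= lo) & (draw <= hi)]])
    return out[:size]

n = 100_000
x = truncated_sample(2.0, 5.0, mean=4.0, sd=1.0, size=n)   # interval [2, 5]
y = truncated_sample(3.0, 6.0, mean=3.5, sd=1.0, size=n)   # interval [3, 6]
print("general-distribution possibility degree P(X >= Y) ≈", np.mean(x >= y))

# Uniform special case for comparison (the classical assumption).
xu, yu = rng.uniform(2, 5, n), rng.uniform(3, 6, n)
print("uniform case                                      ≈", np.mean(xu >= yu))
```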

3.
Based on the classical Markowitz mean-variance model and addressing the case in which short selling is not allowed in the market, this paper proposes an interval quadratic programming model for portfolio selection. By applying interval-number ranking methods (interval order relations, interval possibility degrees, and interval acceptability degrees), two mathematical reformulations of the interval nonlinear portfolio optimization are given, so that the uncertain portfolio model is converted into a deterministic portfolio quadratic program and solved; the three solution methods given in this paper are then compared with traditional methods.
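One common way to make an interval-return mean-variance model deterministic, in the spirit of the order relations mentioned above, is to score each interval expected return [r⁻, r⁺] by a pessimism-weighted value λr⁻ + (1 − λ)r⁺ and solve the resulting no-short-sales quadratic program. This is a hedged sketch of that generic reduction, not the paper's two specific reformulations; all data, `lam`, and `risk_aversion` are illustrative assumptions.

```python
# Sketch: interval-return mean-variance portfolio reduced to a deterministic
# QP via a pessimism weight lam on each interval expected return.
import numpy as np
from scipy.optimize import minimize

r_lo = np.array([0.02, 0.04, 0.01])           # lower ends of interval returns
r_hi = np.array([0.08, 0.10, 0.05])           # upper ends
cov = np.array([[0.010, 0.002, 0.001],
                [0.002, 0.020, 0.003],
                [0.001, 0.003, 0.005]])        # covariance of asset returns
lam, risk_aversion = 0.5, 4.0                  # pessimism / risk-aversion

r_det = lam * r_lo + (1 - lam) * r_hi          # deterministic surrogate return

def objective(w):
    return risk_aversion * w @ cov @ w - r_det @ w

n = len(r_det)
res = minimize(objective, x0=np.full(n, 1 / n),
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
               bounds=[(0, 1)] * n)             # no short selling: w >= 0
print("portfolio weights:", np.round(res.x, 3))
```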

4.
This paper studies multi-attribute group decision-making problems in which the attribute values are interval numbers and preference information over the alternatives is known. A relative-entropy measure matrix of the deviations between each alternative's objective preference values and its subjective preference values is constructed; an attribute-weight model is then built from the relative entropy between the objective information and the preference information over the alternatives. A new possibility-degree formula for comparing interval numbers is established, and a ranking method for the alternatives is given based on it. A numerical example shows the feasibility of the method.
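To illustrate the relative-entropy deviation that drives such a weight model, the sketch below computes, per attribute, D(p‖q) = Σ p log(p/q) between normalized objective evaluations and subjective preference values, and then (as one simple illustrative rule, not the paper's optimization model) gives larger weight to attributes with smaller divergence. All matrices are hypothetical.

```python
# Sketch: attribute weights from the relative entropy between objective
# evaluations and subjective preference values (illustrative inversion rule).
import numpy as np

obj = np.array([[0.30, 0.25], [0.45, 0.40], [0.25, 0.35]])   # objective values
subj = np.array([[0.35, 0.30], [0.40, 0.30], [0.25, 0.40]])  # preferences

# Normalize each attribute column into a probability distribution.
p = obj / obj.sum(axis=0)
q = subj / subj.sum(axis=0)

# Relative entropy D(p||q) per attribute column (all entries positive here).
d = (p * np.log(p / q)).sum(axis=0)

# Smaller objective/subjective divergence -> larger weight.
w = (1 / d) / (1 / d).sum()
print("divergences:", np.round(d, 4), " weights:", np.round(w, 3))
```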

5.
A deviation-based interval two-tuple linguistic multi-attribute group decision-making method   Cited by: 1 (self-citations: 0, others: 1)
王晓, 陈华友, 刘兮. 《管理学报》 (Chinese Journal of Management), 2011, 8(2): 301-305
For multi-attribute group decision-making problems with multi-granularity interval linguistic evaluation information, a group decision-making method based on interval two-tuple linguistic information processing and deviation maximization is proposed. The method first unifies the multi-granularity interval linguistic information into interval two-tuples expressed over a basic linguistic evaluation set; then, for the case in which the attribute weights are completely unknown, it builds a goal-programming model based on deviation maximization, from which a formula for the attribute weights is derived; next, interval two-tuple aggregation operators are used to aggregate the weighted evaluation information, and a possibility-degree formula for interval two-tuples is used to rank the aggregated results and select the best alternative. Finally, an example analysis shows the effectiveness and feasibility of the method.
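The deviation-maximization step has a well-known closed-form solution: each attribute's weight is proportional to the total pairwise deviation of the alternatives under that attribute. A hedged sketch follows, with plain interval values standing in for the paper's interval two-tuples (the aggregation operators and possibility formula are not reproduced, and the data are hypothetical).

```python
# Sketch: deviation-maximization weights, w_j proportional to the total
# pairwise deviation of the alternatives under attribute j (closed-form
# solution of the max-deviation model).
import numpy as np

# Decision matrix: alternatives x attributes, entries are intervals (lo, hi).
A = np.array([[(0.2, 0.4), (0.6, 0.8)],
              [(0.3, 0.5), (0.1, 0.3)],
              [(0.7, 0.9), (0.4, 0.6)]])

def interval_dist(a, b):
    """Simple interval distance: mean of the endpoint gaps."""
    return 0.5 * (abs(a[0] - b[0]) + abs(a[1] - b[1]))

m, n, _ = A.shape
dev = np.zeros(n)
for j in range(n):                       # total pairwise deviation, attr j
    dev[j] = sum(interval_dist(A[i, j], A[k, j])
                 for i in range(m) for k in range(m))

w = dev / dev.sum()                      # normalized closed-form weights
print("attribute weights:", np.round(w, 3))
```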

6.
An interval extension of the super-efficiency DEA model   Cited by: 10 (self-citations: 6, others: 10)
An improved DEA model, the super-efficiency DEA (SE-DEA) model [1], is extended to the case of interval inputs and outputs, yielding an interval SE-DEA model. An interval-number order relation reflecting the decision maker's satisfaction degree is defined; once the decision maker specifies a satisfaction level, the interval inequality constraints of the interval SE-DEA model are converted into deterministic constraints. A second meaning of this satisfaction level is studied, namely the decision maker's degree of preference for the decision-making units other than the one under evaluation, and on this basis the interval equality constraints and the interval objective function are also converted into deterministic form. The interval SE-DEA model is thus transformed into a deterministic SE-DEA model at the given satisfaction level and solved. Finally, the method is applied to predicting the efficiency of four research institutes in Tianjin.
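For reference, the deterministic model that the interval SE-DEA collapses to at a fixed satisfaction level is the standard input-oriented super-efficiency LP: evaluate DMU o against the frontier formed by the other DMUs only, so the score may exceed 1. A minimal sketch with point-valued hypothetical data follows; the interval-to-deterministic conversion itself is the paper's contribution and is not reproduced.

```python
# Sketch: input-oriented CCR super-efficiency (SE-DEA) score via an LP.
# min theta  s.t.  sum_k lam_k x_k <= theta * x_o,  sum_k lam_k y_k >= y_o,
# lam >= 0, where k ranges over all DMUs except the evaluated one.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0], [5.0, 4.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.5], [2.0]])                       # outputs

def super_efficiency(o):
    keep = [k for k in range(len(X)) if k != o]     # exclude evaluated DMU
    c = np.zeros(1 + len(keep)); c[0] = 1.0         # minimize theta
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                     # sum lam*x - theta*x_o <= 0
        A_ub.append([-X[o, i]] + [X[k, i] for k in keep]); b_ub.append(0.0)
    for r in range(Y.shape[1]):                     # sum lam*y >= y_o
        A_ub.append([0.0] + [-Y[k, r] for k in keep]); b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + len(keep)))
    return res.fun

for o in range(len(X)):
    print(f"DMU {o + 1}: super-efficiency score = {super_efficiency(o):.3f}")
```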

7.
A TOPSIS decision method under three-parameter interval-number information based on entropy measures   Cited by: 2 (self-citations: 0, others: 2)
This paper studies ranking and decision models based on three-parameter interval numbers. A possibility-degree ranking method based on the barycenter, midpoint, and length of three-parameter interval numbers is proposed; exploiting the distributional property that values near the barycenter of a three-parameter interval number are the most likely to occur, a distance measure is proposed; an attribute-weight model based on an entropy measure for three-parameter interval numbers is established; and on this basis an uncertain decision framework following the TOPSIS idea is constructed. A numerical example illustrates the steps and the effectiveness of the method.
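A hedged sketch of a TOPSIS pass over three-parameter interval numbers (lo, most-likely, hi): the distance below weights the most-likely point more heavily, echoing the paper's emphasis on the barycenter, but it is an illustrative assumption rather than the paper's exact measure, as are the data, weights, and `mid_weight`.

```python
# Sketch: TOPSIS closeness over three-parameter interval numbers, with a
# distance that up-weights the most-likely (barycenter-like) component.
import numpy as np

A = np.array([  # alternatives x attributes, entries (lo, m, hi), benefit type
    [(0.2, 0.3, 0.5), (0.5, 0.6, 0.8)],
    [(0.4, 0.5, 0.6), (0.3, 0.4, 0.5)],
    [(0.1, 0.4, 0.7), (0.6, 0.7, 0.9)],
])
w = np.array([0.5, 0.5])                       # attribute weights

def dist(a, b, mid_weight=2.0):
    d = (a - b) ** 2 * np.array([1.0, mid_weight, 1.0])
    return np.sqrt(d.sum() / (2 + mid_weight))

pos = A.max(axis=0)                            # componentwise ideal
neg = A.min(axis=0)                            # componentwise anti-ideal

for i, row in enumerate(A):
    dp = sum(w[j] * dist(row[j], pos[j]) for j in range(len(w)))
    dn = sum(w[j] * dist(row[j], neg[j]) for j in range(len(w)))
    print(f"A{i + 1}: relative closeness = {dn / (dp + dn):.3f}")
```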

8.
Grey multi-attribute group decision-making methods with completely unknown weight information   Cited by: 2 (self-citations: 1, others: 1)
Based on the ideas and methods of grey system theory, this paper investigates grey multi-attribute group decision-making problems in which the attribute values of the alternatives are interval grey numbers and both the attribute weights and the decision makers' authority weights are completely unknown. The concepts of the individual ideal optimal alternative vector, the ideal expert, the dominance degree between alternatives, and the dominance comparison matrix over the alternative set are introduced. According to the essential characteristics of interval grey numbers, a new deviation degree between two interval grey numbers is defined, and a grey interval relational coefficient formula and a grey relational degree based on this deviation degree are constructed. Two decision algorithms are established, a projection-based eigenvector method and a ranking method based on fuzzy complementary judgment matrices, both of which avoid computing the weights explicitly. An example analysis demonstrates the reasonableness of the proposed grey multi-attribute group decision-making methods and the effectiveness of the algorithms.
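The grey relational coefficient follows the classical grey relational template ξ = (Δmin + ρΔmax)/(Δ + ρΔmax) with resolution ρ (conventionally 0.5), with a deviation degree between interval grey numbers playing the role of the pointwise distance. The sketch below uses a simple endpoint-based deviation as a stand-in for the paper's own definition; the data are hypothetical.

```python
# Sketch: grey relational degree of each alternative to an ideal alternative,
# using an endpoint-based deviation degree between interval grey numbers.
import numpy as np

ideal = [(0.7, 0.9), (0.6, 0.8)]                # ideal interval values
alts = [[(0.5, 0.7), (0.6, 0.7)],
        [(0.6, 0.9), (0.3, 0.5)],
        [(0.4, 0.6), (0.5, 0.8)]]

def deviation(a, b):
    return 0.5 * (abs(a[0] - b[0]) + abs(a[1] - b[1]))

delta = np.array([[deviation(a, g) for a, g in zip(row, ideal)]
                  for row in alts])
rho, dmin, dmax = 0.5, delta.min(), delta.max()

xi = (dmin + rho * dmax) / (delta + rho * dmax)  # relational coefficients
degree = xi.mean(axis=1)                         # equal-weight aggregation
print("grey relational degrees:", np.round(degree, 3))
```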

9.
Multi-attribute group decision-making is a research focus in today's complex decision environments, and interval-valued intuitionistic fuzzy numbers, which describe uncertain information through membership, non-membership, and hesitancy degrees, are highly expressive; the literature has therefore studied multi-attribute group decision-making extensively on this basis. Existing research, however, has two shortcomings: the "majority principle" widely adopted for reaching consensus causes some key information or knowledge to be treated as conflicting opinion and modified or ignored; and distance-based similarity measures for interval-valued intuitionistic fuzzy numbers neglect the hesitancy degree, losing evaluation information. To address these deficiencies, a multi-attribute group decision-making model based on hesitancy degrees and correlation coefficients in an interval-valued intuitionistic fuzzy environment is proposed. First, individual and group hesitancy degrees are introduced to ensure effective exchange of information and knowledge within the decision group, so that key decision information or knowledge is not invalidated through forced modification or deletion. Second, the correlation coefficient of interval-valued intuitionistic fuzzy numbers is used to measure similarity, avoiding the erroneous results that distance-based measures can produce. Decision-maker weights are expressed as interval-valued intuitionistic fuzzy numbers and assigned subjectively by computing relative advantage and disadvantage values between decision makers, which keeps the weights objective and realistic. Furthermore, optimal attribute weights are obtained by combining TOPSIS with linear programming, ensuring accurate attribute weighting. Finally, a concrete application example and a comparative analysis against the main existing results further demonstrate the advantages and novelty of the proposed model.
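For illustration, one common style of correlation coefficient for interval-valued intuitionistic fuzzy information normalizes an informational inner product by the two sets' energies; the version below also folds in the hesitancy bounds π = 1 − μ − ν, echoing the paper's emphasis. The exact formula, the helper names, and the data are assumptions and may differ from the paper's definition.

```python
# Sketch: a correlation coefficient for interval-valued intuitionistic fuzzy
# evaluations that also includes hesitancy bounds pi = 1 - mu - nu.
import numpy as np

def with_hesitancy(ivifs):
    """Rows (mu_lo, mu_hi, nu_lo, nu_hi); append the hesitancy bounds."""
    a = np.asarray(ivifs, dtype=float)
    pi_lo = 1 - a[:, 1] - a[:, 3]          # 1 - mu_hi - nu_hi
    pi_hi = 1 - a[:, 0] - a[:, 2]          # 1 - mu_lo - nu_lo
    return np.column_stack([a, pi_lo, pi_hi])

def correlation(A, B):
    a, b = with_hesitancy(A), with_hesitancy(B)
    c = (a * b).sum()                      # informational "inner product"
    return c / np.sqrt((a * a).sum() * (b * b).sum())

A = [(0.4, 0.5, 0.2, 0.3), (0.6, 0.7, 0.1, 0.2)]   # two IVIF evaluations
B = [(0.5, 0.6, 0.1, 0.3), (0.5, 0.7, 0.1, 0.3)]
print("correlation:", round(correlation(A, B), 4))
```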

10.
A clustering method for the consistency of decision makers' judgments   Cited by: 5 (self-citations: 1, others: 5)
For production-planning group decision problems in which output quantities are fuzzy interval numbers, a method is given for computing a relative weighted consistency degree that accounts for the influence of the production priorities of the different products and of the decision makers' weights on the measurement of judgment consistency. When the group decision outcome is inconsistent, a method is proposed for clustering the decision makers according to their relative weighted consistency degrees, together with a method for synthesizing the decision results within each cluster. Finally, a worked example illustrates the application of the method.

11.
We provide a tractable characterization of the sharp identification region of the parameter vector θ in a broad class of incomplete econometric models. Models in this class have set‐valued predictions that yield a convex set of conditional or unconditional moments for the observable model variables. In short, we call these models with convex moment predictions. Examples include static, simultaneous‐move finite games of complete and incomplete information in the presence of multiple equilibria; best linear predictors with interval outcome and covariate data; and random utility models of multinomial choice in the presence of interval regressors data. Given a candidate value for θ, we establish that the convex set of moments yielded by the model predictions can be represented as the Aumann expectation of a properly defined random set. The sharp identification region of θ, denoted Θ_I, can then be obtained as the set of minimizers of the distance from a properly specified vector of moments of random variables to this Aumann expectation. Algorithms in convex programming can be exploited to efficiently verify whether a candidate θ is in Θ_I. We use examples analyzed in the literature to illustrate the gains in identification and computational tractability afforded by our method.
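By convexity, the membership test for a candidate θ reduces to a family of support-function inequalities: the observed moment vector ψ lies in the Aumann expectation iff u·ψ ≤ E[h(u)] for every unit direction u, where h(u) is the support function of the set-valued prediction. A stylized finite sketch follows (discrete data, a grid of directions, hypothetical `predictions` and `psi`); the paper works with the exact convex program.

```python
# Sketch: check whether an observed moment vector psi lies in the Aumann
# expectation of a random set via sampled support-function inequalities
# u . psi <= mean over observations of max_{p in predictions} u . p.
import numpy as np

# Set-valued predictions per observation (e.g., one point per equilibrium).
predictions = [np.array([[0.0, 1.0], [1.0, 0.0]]),
               np.array([[0.5, 0.5], [1.0, 1.0]])]
psi = np.array([0.6, 0.7])                      # observed moment vector

def in_aumann_expectation(psi, predictions, n_dirs=720):
    angles = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    dirs = np.column_stack([np.cos(angles), np.sin(angles)])
    for u in dirs:
        # support function of the Aumann expectation = mean of supports
        h = np.mean([np.max(pts @ u) for pts in predictions])
        if u @ psi > h + 1e-12:
            return False                        # separating direction found
    return True

print("psi in Aumann expectation:", in_aumann_expectation(psi, predictions))
```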

12.
In this paper we present two main results about the inapproximability of the exemplar conserved interval distance problem of genomes. First, we prove that it is NP-complete to decide whether the exemplar conserved interval distance between any two genomes is zero or not. This result implies that the exemplar conserved interval distance problem does not admit any approximation in polynomial time, unless P=NP. In fact, this result holds even when every gene appears in each of the given genomes at most three times. Second, we strengthen the first result under a weaker definition of approximation, called weak approximation. We show that the exemplar conserved interval distance problem does not admit any weak approximation within a super-linear factor of , where m is the maximal length of the given genomes. We also investigate polynomial time algorithms for solving the exemplar conserved interval distance problem when certain constraints are given. We prove that the zero exemplar conserved interval distance problem of two genomes is decidable in polynomial time when one genome is O(log n)-spanned. We also prove that one can solve the constant-sized exemplar conserved interval distance problem in polynomial time, provided that one genome is trivial.

13.
We introduce and analyze expected uncertain utility (EUU) theory. A prior and an interval utility characterize an EUU decision maker. The decision maker transforms each uncertain prospect into an interval‐valued prospect that assigns an interval of prizes to each state. She then ranks prospects according to their expected interval utilities. We define uncertainty aversion for EUU, use the EUU model to address the Ellsberg Paradox and other ambiguity evidence, and relate EUU theory to existing models.
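A hedged numerical sketch of the evaluation rule as described: a prospect is replaced, on each event the prior can measure, by the interval from its worst to its best prize there, and ranked by the expected interval utility u(lower, upper). The finite state space, the partition `ideal_events`, the utility `u`, and the prospects are all illustrative assumptions.

```python
# Sketch: expected uncertain utility of a prospect f over a finite state
# space, where the prior lives on a partition of "ideal" events.

# States 0..3; two ideal events with prior probabilities.
ideal_events = [((0, 1), 0.5), ((2, 3), 0.5)]

def euu(f, interval_utility):
    total = 0.0
    for states, prob in ideal_events:
        lo = min(f[s] for s in states)      # worst prize on the ideal event
        hi = max(f[s] for s in states)      # best prize on the ideal event
        total += prob * interval_utility(lo, hi)
    return total

# An interval utility that penalizes within-event (ambiguous) spread.
u = lambda lo, hi: 0.7 * lo ** 0.5 + 0.3 * hi ** 0.5

f = {0: 4.0, 1: 9.0, 2: 1.0, 3: 1.0}       # ambiguous prospect
g = {0: 4.0, 1: 4.0, 2: 4.0, 3: 4.0}       # constant prospect
print("EUU(f) =", round(euu(f, u), 3), " EUU(g) =", round(euu(g, u), 3))
```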

14.
This paper considers the problem of choosing the number of bootstrap repetitions B for bootstrap standard errors, confidence intervals, confidence regions, hypothesis tests, p‐values, and bias correction. For each of these problems, the paper provides a three‐step method for choosing B to achieve a desired level of accuracy. Accuracy is measured by the percentage deviation of the bootstrap standard error estimate, confidence interval length, test's critical value, test's p‐value, or bias‐corrected estimate based on B bootstrap simulations from the corresponding ideal bootstrap quantities for which B = ∞. The results apply quite generally to parametric, semiparametric, and nonparametric models with independent and dependent data. The results apply to the standard nonparametric iid bootstrap, moving block bootstraps for time series data, parametric and semiparametric bootstraps, and bootstraps for regression models based on bootstrapping residuals. Monte Carlo simulations show that the proposed methods work very well.
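The quantity being controlled is the percentage deviation of a B-repetition bootstrap quantity from its B = ∞ ideal. The hedged illustration below (not the paper's three-step method) simply shows that dispersion shrinking as B grows, using a large-B run as a proxy for the ideal; the sample and all settings are hypothetical.

```python
# Sketch: how the B-repetition bootstrap standard error fluctuates around its
# ideal (large-B) value as B grows; illustrates the accuracy notion only.
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(scale=2.0, size=100)        # one observed iid sample

def boot_se(data, B):
    means = [rng.choice(data, size=data.size, replace=True).mean()
             for _ in range(B)]
    return np.std(means, ddof=1)

ideal = boot_se(x, 50_000)                       # proxy for the B = inf value
for B in (50, 200, 1000, 5000):
    runs = [boot_se(x, B) for _ in range(20)]
    pct_dev = 100 * np.max(np.abs(np.array(runs) - ideal)) / ideal
    print(f"B = {B:5d}: max |% deviation| over 20 runs ≈ {pct_dev:5.1f}%")
```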

15.
In the binary single constraint Knapsack Problem, denoted KP, we are given a knapsack of fixed capacity c and a set of n items. Each item j, j = 1,...,n, has an associated size or weight w_j and a profit p_j. The goal is to determine whether or not item j, j = 1,...,n, should be included in the knapsack. The objective is to maximize the total profit without exceeding the capacity c of the knapsack. In this paper, we study the sensitivity of the optimum of the KP to perturbations of either the profit or the weight of an item. We give approximate and exact interval limits for both cases (profit and weight) and propose several polynomial time algorithms able to reach these interval limits. The performance of the proposed algorithms is evaluated on a large number of problem instances.
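An exact profit-sensitivity interval for one item can be obtained from two auxiliary optima: with z_in(j) the best value when item j is forced in and z_out(j) the best value when it is excluded, item j's optimal in/out status is unchanged while its perturbed profit keeps z_in and z_out ordered the same way. A hedged sketch using the textbook O(n·c) dynamic program follows (the paper's algorithms are more refined); the instance is hypothetical.

```python
# Sketch: exact sensitivity interval for the profit of one item in a 0/1
# knapsack, from two auxiliary optima (item j forced in / forced out).

def knapsack(weights, profits, cap):
    dp = [0] * (cap + 1)
    for w, p in zip(weights, profits):
        for r in range(cap, w - 1, -1):     # textbook 0/1 DP over capacity
            dp[r] = max(dp[r], dp[r - w] + p)
    return dp[cap]

def profit_sensitivity(weights, profits, cap, j):
    ws = [w for i, w in enumerate(weights) if i != j]
    ps = [p for i, p in enumerate(profits) if i != j]
    z_out = knapsack(ws, ps, cap)                        # j excluded
    z_in = (profits[j] + knapsack(ws, ps, cap - weights[j])
            if weights[j] <= cap else float("-inf"))     # j forced in
    if z_in >= z_out:   # j belongs to an optimal solution; stays in while
        return (profits[j] - (z_in - z_out), float("inf"))
    return (float("-inf"), profits[j] + (z_out - z_in))  # j stays out

weights, profits, cap = [3, 4, 5, 2], [6, 7, 8, 3], 9
for j in range(len(weights)):
    lo, hi = profit_sensitivity(weights, profits, cap, j)
    print(f"item {j}: optimal choice stable for p_{j} in [{lo}, {hi}]")
```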

16.
A recent paper in this journal (Fann et al., 2012) estimated that “about 80,000 premature mortalities would be avoided by lowering PM2.5 levels to 5 μg/m3 nationwide” and that 2005 levels of PM2.5 cause about 130,000 premature mortalities per year among people over age 29, with a 95% confidence interval of 51,000 to 200,000 premature mortalities per year.(1) These conclusions depend entirely on misinterpreting statistical coefficients describing the association between PM2.5 and mortality rates in selected studies and models as if they were known to be valid causal coefficients. But they are not, and both the expert opinions of EPA researchers and analysis of data suggest that a true value of zero for the PM2.5 mortality causal coefficient is not excluded by available data. Presenting continuous confidence intervals that exclude the discrete possibility of zero misrepresents what is currently known (and not known) about the hypothesized causal relation between changes in PM2.5 levels and changes in mortality rates, suggesting greater certainty about projected health benefits than is justified.

17.
This paper examines inference on regressions when interval data are available on one variable, the other variables being measured precisely. Let a population be characterized by a distribution P(y, x, v, v0, v1), where y ∈ R^1, x ∈ R^k, and the real variables (v, v0, v1) satisfy v0 ≤ v ≤ v1. Let a random sample be drawn from P and the realizations of (y, x, v0, v1) be observed, but not those of v. The problem of interest may be to infer E(y|x, v) or E(v|x). This analysis maintains Interval (I), Monotonicity (M), and Mean Independence (MI) assumptions: (I) P(v0 ≤ v ≤ v1) = 1; (M) E(y|x, v) is monotone in v; (MI) E(y|x, v, v0, v1) = E(y|x, v). No restrictions are imposed on the distribution of the unobserved values of v within the observed intervals [v0, v1]. It is found that the IMMI assumptions alone imply simple nonparametric bounds on E(y|x, v) and E(v|x). These assumptions, invoked when y is binary and combined with a semiparametric binary regression model, yield an identification region for the parameters that may be estimated consistently by a modified maximum score (MMS) method. The IMMI assumptions combined with a parametric model for E(y|x, v) or E(v|x) yield an identification region that may be estimated consistently by a modified minimum‐distance (MMD) method. Monte Carlo methods are used to characterize the finite‐sample performance of these estimators. Empirical case studies are performed using interval wealth data in the Health and Retirement Study and interval income data in the Current Population Survey.
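The simplest of these bounds needs no more than cell means: since v0 ≤ v ≤ v1 holds with probability one, E(v|x) is bracketed by E(v0|x) and E(v1|x). A minimal sketch on synthetic interval data follows, illustrating only this elementary bound (not the MMS/MMD estimators); the data-generating choices are arbitrary.

```python
# Sketch: the elementary nonparametric bound E(v0|x) <= E(v|x) <= E(v1|x)
# implied by interval measurement v in [v0, v1], computed within x cells.
import numpy as np

rng = np.random.default_rng(7)
n = 5_000
x = rng.integers(0, 3, n)                            # discrete covariate cells
v = rng.lognormal(mean=x.astype(float), sigma=0.5)   # latent variable
width = rng.uniform(0.5, 2.0, n)
v0 = v - width * rng.uniform(size=n)                 # observed lower bound
v1 = v + width * rng.uniform(size=n)                 # observed upper bound
# Only (x, v0, v1) would be observed; v is kept here just for reference.

for cell in np.unique(x):
    m = x == cell
    print(f"x = {cell}: E(v|x) in [{v0[m].mean():.2f}, {v1[m].mean():.2f}]"
          f"  (true {v[m].mean():.2f})")
```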

18.
Elicitation of expert opinion is important for risk analysis when only limited data are available. Expert opinion is often elicited in the form of subjective confidence intervals; however, these are prone to substantial overconfidence. We investigated the influence of elicitation question format, in particular the number of steps in the elicitation procedure. In a 3‐point elicitation procedure, an expert is asked for a lower limit, upper limit, and best guess, the two limits creating an interval of some assigned confidence level (e.g., 80%). In our 4‐step interval elicitation procedure, experts were also asked for a realistic lower limit, upper limit, and best guess, but no confidence level was assigned; the fourth step was to rate their anticipated confidence in the interval produced. In our three studies, experts made interval predictions of rates of infectious diseases (Study 1, n = 21 and Study 2, n = 24: epidemiologists and public health experts), or marine invertebrate populations (Study 3, n = 34: ecologists and biologists). We combined the results from our studies using meta‐analysis, which found average overconfidence of 11.9%, 95% CI [3.5, 20.3] (a hit rate of 68.1% for 80% intervals)—a substantial decrease in overconfidence compared with previous studies. Studies 2 and 3 suggest that the 4‐step procedure is more likely to reduce overconfidence than the 3‐point procedure (Cohen's d = 0.61, [0.04, 1.18]).

19.
Leonello Tronti. LABOUR, 1998, 12(3): 489-513
The recent EU initiatives on employment policy and the “Luxembourg employment strategy” underscore the relevance of benchmarking as an instrument for improving labour market performance and labour policy effectiveness. This paper presents the major results of a research project exploring the application of benchmarking techniques to convergence between the European labour markets. After briefly describing the development of benchmarking techniques in the private and public sectors, the paper addresses the crucial question of defining labour market performance and presents the possibility of creating labour market performance and policy benchmarks through the construction of efficiency frontiers. A second possibility explored is the radar-chart approach, a presentation technique that allows for both single-dimension and overall performance monitoring. The paper then turns to the problem of broadening the scope of benchmarking from the identification of benchmarks to the understanding of performance gaps through more comprehensive analytical tools. In this area, useful contributions are provided by the employment-systems and transitional-labour-markets approaches. The concluding section stresses the role of normative decisions, as well as the possible political challenges implied by a thorough application of benchmarking techniques to labour market convergence.

20.
Let G = (V, E) be a graph without isolated vertices. A set S ⊆ V is a paired-dominating set if every vertex in V∖S is adjacent to a vertex in S and the subgraph induced by S contains a perfect matching. The paired-domination problem is to determine the paired-domination number, which is the minimum cardinality of a paired-dominating set. Motivated by a mistaken algorithm given by Chen, Kang and Ng (Discrete Appl. Math. 155:2077–2086, 2007), we present two linear time algorithms to find a minimum cardinality paired-dominating set in block and interval graphs. In addition, we prove that the paired-domination problem is NP-complete for bipartite graphs and chordal graphs, even for split graphs.
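While the linear-time algorithms for block and interval graphs are intricate, the defining conditions are easy to verify directly: S must dominate V∖S, and the subgraph induced by S must contain a perfect matching. A hedged checker follows, assuming the networkx library (its `max_weight_matching` with `maxcardinality=True` returns a maximum-cardinality matching); the example graph is illustrative.

```python
# Sketch: verify that S is a paired-dominating set of G — every vertex
# outside S has a neighbor in S, and G[S] has a perfect matching
# (checked via a maximum-cardinality matching).
import networkx as nx

def is_paired_dominating(G, S):
    S = set(S)
    dominated = all(any(u in S for u in G[v]) for v in G if v not in S)
    matching = nx.max_weight_matching(G.subgraph(S), maxcardinality=True)
    return dominated and 2 * len(matching) == len(S)

# Path on six vertices 0-1-2-3-4-5 (a simple interval graph).
G = nx.path_graph(6)
print(is_paired_dominating(G, {1, 2}))          # False: vertex 4 undominated
print(is_paired_dominating(G, {1, 2, 4, 5}))    # True: dominates, matches 1-2, 4-5
```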
