1.
This paper proposes a rough-set-based method that can learn both deterministic and non-deterministic classification rules, with the aim of obtaining more general and more reliable rules, and designs a corresponding algorithm. The basic idea is to let the user specify three parameters during rule learning: the minimum support, and the consistency and coverage that a classification rule must satisfy. Rules meeting these requirements are then derived. The method is applied to a worked example, from which rules satisfying the given thresholds are extracted. It is well suited to handling noisy data and analysing large databases.
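To make the three user-supplied parameters concrete, here is a minimal sketch, assuming a toy decision table and hypothetical attribute names, of how a candidate rule can be scored by support, consistency, and coverage and then filtered against the thresholds. It illustrates the general idea only, not the paper's algorithm.

```python
# Illustrative sketch, not the paper's algorithm: score a candidate
# classification rule on a toy decision table by support, consistency
# (confidence) and coverage, then filter it against user thresholds.
# Table contents and attribute names are hypothetical.

def rule_measures(table, condition, decision):
    """condition / decision are dicts mapping attribute -> value."""
    cond_rows = [r for r in table if all(r[a] == v for a, v in condition.items())]
    dec_rows = [r for r in table if all(r[a] == v for a, v in decision.items())]
    both = [r for r in cond_rows if all(r[a] == v for a, v in decision.items())]
    support = len(both) / len(table)
    consistency = len(both) / len(cond_rows) if cond_rows else 0.0
    coverage = len(both) / len(dec_rows) if dec_rows else 0.0
    return support, consistency, coverage

def acceptable(table, rule, min_support, min_consistency, min_coverage):
    s, c, k = rule_measures(table, rule["if"], rule["then"])
    return s >= min_support and c >= min_consistency and k >= min_coverage

table = [
    {"outlook": "sunny", "windy": "no",  "play": "yes"},
    {"outlook": "sunny", "windy": "yes", "play": "no"},
    {"outlook": "rainy", "windy": "no",  "play": "yes"},
    {"outlook": "rainy", "windy": "yes", "play": "no"},
]
rule = {"if": {"windy": "no"}, "then": {"play": "yes"}}
print(rule_measures(table, rule["if"], rule["then"]))   # (0.5, 1.0, 1.0)
print(acceptable(table, rule, 0.3, 0.8, 0.5))           # True
```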
2.
Traditional evaluation methods perform no attribute reduction before evaluation, and their procedure for evaluating new samples is overly complex. To address this, (S,T) fuzzy rough sets are applied to attribute reduction and rule acquisition in comprehensive evaluation: attribute reduction is used to trim the evaluation indicators, and rule acquisition is used to simplify the evaluation of new samples, providing a new way to evaluate them. Finally, a comprehensive evaluation example demonstrates how the attribute reduction and rule acquisition are carried out.
3.
Most existing data mining models are designed for large commercial websites; they are costly, technically complex, and hard to implement. Targeting the many small e-commerce websites, this paper combines rough sets with data mining to build a practical reference model that can effectively and autonomously mine a site's operating status and underlying economic patterns, thereby providing decision support for small e-commerce operators.
4.
Intelligent querying is an important part of decision support systems, and mining characteristic rules from databases is an important means of realising it. Current data mining methods are tied to specific problems and cannot accommodate users' flexible query needs. Drawing on rough set theory, this paper proposes a new query-oriented method for mining characteristic rules, and the rules it mines are optimal in precision. With this algorithm at its core, the paper designs an SQL-like command for querying characteristic rules, making the question-and-answer interaction between user and system more convenient and flexible.
5.
With its outstanding ability to handle vague and uncertain knowledge, rough set theory occupies an increasingly important position in data mining. This article first describes the core ideas of rough set theory, then introduces its extensions to incomplete information systems, and finally discusses the development of its applications and future research directions.
6.
In the conflict systems proposed by Pawlak, an agent can take only three attitudes toward an issue: in favour, against, or neutral. Such conflict systems carry information that is too abstract: it is hard to learn from the model why the agents are in conflict, let alone find a proposal that most agents agree on. Closer study suggests that in some real-world conflict systems each agent should have proposals of its own, the system as a whole should have proposals supplied by domain experts, and the issues may constrain one another. Based on these characteristics, and building on Pawlak's conflict system model, this paper introduces each agent's information system and feasible proposals, the information system and feasible proposals supplied by domain experts for the conflict system, the constraints among the issues, and the feasible proposals of the conflict system, yielding a new rough-set-based conflict model. An algorithm for finding feasible proposals of a conflict system is also given, and an example shows that the new model characterises certain real-world conflict systems well.
7.
This paper introduces data mining and rough set theory and studies a rough-set value-reduction algorithm. By examining each condition attribute of an information table and checking whether two records become conflicting or duplicated once that attribute value is removed, the algorithm eliminates redundant attribute values and thus performs value reduction on the table. On this basis, the algorithm was implemented with C++ Builder 6.0 as the development platform; testing, analysis, and verification on information tables confirm that its results are correct. Finally, the algorithm is applied to a hospital infection database, achieving value reduction of its patient infection tables.
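A minimal sketch of the kind of value-reduction check described here, assuming a toy information table and hypothetical attribute names: an attribute value in a record is tentatively dropped and kept out only if the reduced record does not become indistinguishable from a record with a different decision. This is an illustration of the idea, not the paper's C++ Builder implementation.

```python
# Minimal sketch of the value-reduction check described above: a value is
# redundant if dropping it does not make its record indistinguishable from a
# record with a different decision.  Toy table and names are hypothetical;
# this is not the paper's implementation.

def conflicts(reduced, other, cond_attrs, dec_attr):
    same_cond = all(reduced[a] is None or reduced[a] == other[a] for a in cond_attrs)
    return same_cond and reduced[dec_attr] != other[dec_attr]

def value_reduce(table, cond_attrs, dec_attr):
    reduced = [dict(r) for r in table]
    for i, row in enumerate(reduced):
        for a in cond_attrs:
            saved = row[a]
            row[a] = None                           # tentatively drop the value
            if any(conflicts(row, other, cond_attrs, dec_attr)
                   for j, other in enumerate(table) if j != i):
                row[a] = saved                      # value is needed, restore it
    return reduced

table = [
    {"fever": "high", "cough": "yes", "flu": "yes"},
    {"fever": "none", "cough": "yes", "flu": "no"},
    {"fever": "high", "cough": "no",  "flu": "yes"},
]
print(value_reduce(table, ["fever", "cough"], "flu"))   # None marks dropped values
```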
8.
An algorithm for extracting fuzzy rules based on fuzzy soft classification and rough set theory is proposed. It consists of three steps: (1) fuzzily partition the samples by fuzzy soft classification to obtain a membership matrix; (2) use rough set theory and the membership matrix to generate an initial decision table; (3) simplify the decision table and extract the rules. Finally, the algorithm is applied to extract fuzzy rules about China's economic growth.
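As an illustration of step (1), the following sketch computes a fuzzy c-means style membership matrix for toy data and hardens it into a decision column; the data, parameters, and function names are hypothetical, and the authors' fuzzy soft classification may differ in detail.

```python
# Illustrative sketch of step (1): fuzzy c-means clustering that yields a
# membership matrix, later hardened into the decision column of an initial
# decision table.  Toy data; not the authors' code.
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)               # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

X = np.array([[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]])
U, _ = fuzzy_cmeans(X)
decision = U.argmax(axis=1)                          # hardened cluster label per sample
print(np.round(U, 2), decision)
```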
9.
The attribute significance characterised by rough sets is studied in depth. To remedy the shortcomings of existing rough-set-based methods for determining attribute weights, a new rough-set-based weighting method is proposed that jointly considers the overall significance of the condition attribute set and the individual significance of each condition attribute in the system, improving the method's general applicability and interpretability.
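The standard rough-set quantities such a weighting method builds on can be sketched as follows: the dependency γ(C, D) measured via the positive region, the individual significance σ(a) = γ(C, D) - γ(C\{a}, D), and a normalisation into weights. The toy table and names are hypothetical; the paper's specific combination of overall and individual significance is not reproduced here.

```python
# Hedged sketch of the usual rough-set quantities: dependency via the positive
# region and per-attribute significance, normalised into weights.  The toy
# table and attribute names are hypothetical.
from collections import defaultdict

def blocks(table, attrs):
    part = defaultdict(list)
    for i, row in enumerate(table):
        part[tuple(row[a] for a in attrs)].append(i)
    return part.values()

def dependency(table, cond_attrs, dec_attr):
    pos = sum(len(b) for b in blocks(table, cond_attrs)
              if len({table[i][dec_attr] for i in b}) == 1)   # consistent blocks only
    return pos / len(table)

def attribute_weights(table, cond_attrs, dec_attr):
    gamma_all = dependency(table, cond_attrs, dec_attr)
    sig = {a: gamma_all - dependency(table, [b for b in cond_attrs if b != a], dec_attr)
           for a in cond_attrs}
    total = sum(sig.values()) or 1.0
    return {a: s / total for a, s in sig.items()}

table = [
    {"price": "low",  "quality": "good", "buy": "yes"},
    {"price": "low",  "quality": "bad",  "buy": "yes"},
    {"price": "high", "quality": "good", "buy": "yes"},
    {"price": "high", "quality": "bad",  "buy": "no"},
    {"price": "low",  "quality": "bad",  "buy": "yes"},
]
print(attribute_weights(table, ["price", "quality"], "buy"))  # price weighted higher
```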
10.
Based on the characteristics of IT project tendering, this paper proposes a bid evaluation model for IT projects that combines rough sets with grey clustering theory. Rough set theory is used to determine the indicator weights, which are then combined with experts' ratings of the indicators. This overcomes the strong subjectivity of current bid evaluation methods, reflects bidders' overall strength better, and improves the efficiency and fairness of the decision.
11.
Since the seminal work of Pawlak (International Journal of Information and Computer Science, 11 (1982) 341–356), rough set theory (RST) has evolved into a rule-based decision-making technique. To date, however, relatively little empirical research has been conducted on the efficacy of the rough set approach in the context of business and finance applications. This paper extends previous research by employing a development of RST, namely the variable precision rough sets (VPRS) model, in an experiment to discriminate between failed and non-failed UK companies. It also utilizes the FUSINTER discretisation method, which negates the influence of 'expert' opinion. The results of the VPRS analysis are compared to those generated by classical logit and multivariate discriminant analysis, together with more closely related non-parametric decision tree methods. It is concluded that VPRS is a promising addition to existing methods in that it is a practical tool which generates explicit probabilistic rules from a given information system, with the rules offering the decision maker informative insights into classification problems.
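A minimal sketch of the VPRS idea the paper applies, assuming a toy table and hypothetical attribute names: a condition class is assigned to a decision class whenever the majority inclusion degree reaches a precision threshold β greater than 0.5, so that a few noisy objects no longer destroy the positive region.

```python
# Minimal sketch of the VPRS relaxation: a condition class enters the
# β-positive region of a decision class when the majority inclusion degree
# reaches the precision threshold β (> 0.5).  Toy data and names are invented.
from collections import Counter, defaultdict

def vprs_positive_region(table, cond_attrs, dec_attr, beta=0.8):
    blocks = defaultdict(list)
    for i, row in enumerate(table):
        blocks[tuple(row[a] for a in cond_attrs)].append(i)
    positive = []
    for key, idx in blocks.items():
        label, freq = Counter(table[i][dec_attr] for i in idx).most_common(1)[0]
        if freq / len(idx) >= beta:                 # majority inclusion meets β
            positive.append((key, label, round(freq / len(idx), 2)))
    return positive

table = [
    {"gearing": "high", "liquidity": "low",  "failed": "yes"},
    {"gearing": "high", "liquidity": "low",  "failed": "yes"},
    {"gearing": "high", "liquidity": "low",  "failed": "no"},   # one noisy object
    {"gearing": "low",  "liquidity": "high", "failed": "no"},
]
print(vprs_positive_region(table, ["gearing", "liquidity"], "failed", beta=0.6))
```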
13.
The minimum dominating set of a graph is widely used in many fields, but computing it is NP-hard, and the complexity and approximation accuracy of existing algorithms leave room for improvement. In this paper, we introduce rough set theory to solve the dominating set problem for undirected graphs. First, the adjacency matrix of the undirected graph is used to establish an induced decision table, and finding the minimum dominating set of the graph is shown to be equivalent to finding the minimum attribute reduction of its induced decision table. Second, based on rough set theory, the significance of attributes (i.e., vertices) in the induced decision table is defined in terms of approximation quality, and a heuristic approximation algorithm for the minimum dominating set is designed using this significance as heuristic information. The algorithm uses a forward and backward search mechanism, which not only guarantees that a minimal dominating set is found but also improves the approximation accuracy of the minimum dominating set. In addition, a cumulative strategy is used to calculate the positive region of the induced decision table, which effectively reduces the computational complexity. Finally, experimental results on public datasets show that the algorithm has clear advantages in running time and in the approximation accuracy of the minimum dominating set.
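As a rough illustration of the forward and backward search mechanism (working directly on the graph rather than on the induced decision table), the following sketch greedily adds the vertex that newly dominates the most vertices and then removes any vertex whose deletion keeps the set dominating. The example graph is invented; this is not the paper's algorithm.

```python
# Illustrative greedy forward/backward heuristic for a minimal dominating set,
# working directly on the graph rather than on the induced decision table.
# The example graph is invented; this is not the paper's algorithm.

def dominated(graph, chosen):
    covered = set(chosen)
    for v in chosen:
        covered |= graph[v]
    return covered

def minimal_dominating_set(graph):
    chosen = []
    # forward phase: add the vertex that newly dominates the most vertices
    while dominated(graph, chosen) != set(graph):
        gain = lambda v: len(({v} | graph[v]) - dominated(graph, chosen))
        chosen.append(max(graph, key=gain))
    # backward phase: drop any vertex whose removal keeps the set dominating
    for v in list(chosen):
        if dominated(graph, [u for u in chosen if u != v]) == set(graph):
            chosen.remove(v)
    return set(chosen)

graph = {                      # adjacency sets of an undirected graph
    1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
    4: {3, 5}, 5: {4, 6}, 6: {5},
}
print(minimal_dominating_set(graph))   # {3, 5}
```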
14.
Data mining is an important means of implementing intelligent decision support systems, and association rules are an important part of data mining. The traditional Apriori algorithm is suited only to mining qualitative associations between data, yet quantitative associations are more helpful for decision making. Discretisation of attribute values is a key step in mining quantitative association rules, and the granularity with which attribute value ranges are partitioned is an important factor in mining quality. Combining rough set theory, this paper proposes a method for determining that partition granularity and, on this basis, designs an algorithm for mining quantitative association rules, Apriori2, which can uncover a large number of quantitative association rules useful for decision making.
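The discretisation step can be illustrated with a simple sketch that maps a numeric attribute onto interval labels for a chosen number of bins; here the granularity is just a parameter, whereas the paper derives it from rough set theory. The data and names are hypothetical.

```python
# Sketch of the discretisation step: map a numeric attribute onto interval
# labels for a chosen number of bins so that Apriori-style mining can treat
# the intervals as items.  Here the granularity is a plain parameter; the
# paper derives it from rough set theory.  Data and names are invented.

def discretize(values, bins):
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0                    # avoid zero-width bins
    labels = []
    for v in values:
        k = min(int((v - lo) / width), bins - 1)       # clamp the maximum value
        labels.append(f"[{lo + k * width:.1f},{lo + (k + 1) * width:.1f})")
    return labels

ages = [23, 25, 31, 38, 42, 47, 55, 61]
print(discretize(ages, bins=3))
```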
15.
Sustainable supply chain management (SSCM) faces greater complexity because it considers additional stakeholder requirements, broader sustainable performance objectives, increased sustainable business practices and technologies, and relationships among those entities. These additional complexities make SSCM more difficult to manage and operate than traditional supply chains. Complex systems require new methods for research, especially given the reductionist research paradigms of modern science. Rough set theory (RST) can be a valuable tool that will help address complexity in SSCM research and practice. To exemplify RST's usefulness and applicability, an illustrative application using sustainable supply chain practices (SSCP) and environmental and economic performance outcomes is introduced. The conceptual case provides nuanced insights for researchers and practitioners in mitigating and evaluating various SSCM complexities. RST limitations and extensions are introduced.
16.
Evidence theory is a powerful tool for handling uncertainty, but the evidence it works with comes from experts, whose knowledge and experience are limited, hard to elicit, and possibly subjective. To address this, a new rough-set-based method for acquiring evidence is proposed, and evidence combination and its application are studied. First, the decomposition of large decision tables is examined: rough set theory is used to analyse the dependencies among condition attributes and to cluster them, forming several sub-decision-tables whose condition attribute sets are relatively independent. Second, each sub-table is analysed, and the basic probability assignment of the evidence is computed using rough-set classification ideas and the concept of membership degree. Finally, the combination of evidence and its application in decision analysis are studied and corresponding solutions are proposed.
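A minimal sketch of the evidence-acquisition idea, under the simple reading that the decision-class distribution within an object's condition equivalence class supplies the basic probability assignment, followed by Dempster's rule for two such assignments over singleton hypotheses. The sub-tables and names are hypothetical, and the paper's exact construction may differ.

```python
# Minimal sketch: the decision-class distribution inside an object's condition
# equivalence class is read as a basic probability assignment (BPA), and two
# BPAs over singleton hypotheses are combined with Dempster's rule.  The toy
# sub-tables and names are hypothetical; the paper's construction may differ.
from collections import Counter

def bpa_from_subtable(table, cond_attrs, dec_attr, obj):
    key = tuple(obj[a] for a in cond_attrs)
    block = [r for r in table if tuple(r[a] for a in cond_attrs) == key]
    dist = Counter(r[dec_attr] for r in block)
    return {d: c / len(block) for d, c in dist.items()}   # rough membership degrees

def dempster(m1, m2):
    conflict = sum(m1.get(a, 0) * m2.get(b, 0) for a in m1 for b in m2 if a != b)
    return {h: m1.get(h, 0) * m2.get(h, 0) / (1 - conflict) for h in set(m1) | set(m2)}

sub1 = [{"fever": "high", "dx": "flu"}, {"fever": "high", "dx": "flu"},
        {"fever": "high", "dx": "cold"}]
sub2 = [{"cough": "dry", "dx": "flu"}, {"cough": "dry", "dx": "flu"},
        {"cough": "dry", "dx": "flu"}, {"cough": "dry", "dx": "cold"}]
patient = {"fever": "high", "cough": "dry"}
m1 = bpa_from_subtable(sub1, ["fever"], "dx", patient)
m2 = bpa_from_subtable(sub2, ["cough"], "dx", patient)
print(m1, m2, dempster(m1, m2))        # combined mass favours "flu"
```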
17.
This paper proposes a column generation approach for the Point-Feature Cartographic Label Placement problem (PFCLP). The column generation is based on a Lagrangean relaxation with clusters proposed for problems modeled by conflict graphs. The PFCLP can be represented by a conflict graph where vertices are positions for each label and edges are potential overlaps between labels (vertices). The conflict graph is decomposed into clusters forming a block diagonal matrix with coupling constraints, known as a restricted master problem (RMP) in a Dantzig-Wolfe decomposition context. The clusters' sub-problems are similar to the PFCLP and are used to generate new improved columns for the RMP. This approach was tested on PFCLP instances from the literature, providing in reasonable times better solutions than all previously known ones and determining optimal solutions for some difficult large-scale instances.
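The conflict-graph representation mentioned above can be sketched as follows, assuming hypothetical candidate label rectangles: each candidate position is a vertex and an edge joins two positions whose rectangles overlap (in the full formulation, alternative positions of the same point feature are also mutually exclusive). The column generation itself is not shown.

```python
# Sketch of the conflict-graph representation only (the column generation
# itself is not shown): candidate label positions are vertices, and an edge
# joins two positions whose rectangles overlap.  Candidate rectangles are
# invented for illustration.
from itertools import combinations

def overlaps(a, b):
    """Axis-aligned rectangles given as (xmin, ymin, xmax, ymax)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def conflict_graph(positions):
    return [(i, j) for (i, a), (j, b) in combinations(positions.items(), 2)
            if overlaps(a, b)]

positions = {                      # candidate label rectangles per point feature
    "p1_up":   (0, 0, 2, 1),
    "p1_down": (0, -1, 2, 0),
    "p2_up":   (1, 0, 3, 1),       # overlaps p1_up
    "p3_up":   (5, 0, 7, 1),       # isolated
}
print(conflict_graph(positions))   # [('p1_up', 'p2_up')]
```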
18.
Surveillance of hospital-acquired infections, especially those caused by antibiotic resistant bacteria, is an important component of hospital infection control. A computer program for this purpose experienced a combinatorial computational explosion in time and space when processing data describing certain multi-drug resistant organisms. The blowup occurred while the program was generating frequent sets, a common phase in data mining algorithms. We present a modified algorithm for computing frequent sets that more efficiently handles the computational burden. The algorithm's proof of correctness involves the concepts of closure, independent sets, and circuits in a space more general than a matroid. Of central concern in the theory are inferences about a closure operation that can be obtained from limited information about the circuits.
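For concreteness, a minimal level-wise frequent-set generator over hypothetical resistance-profile transactions is sketched below; it shows the phase where the blowup occurred, not the authors' modified algorithm.

```python
# Minimal level-wise frequent-set generator over hypothetical resistance
# profiles; it illustrates the phase where the blowup occurred, not the
# authors' modified algorithm.
from itertools import combinations

def frequent_sets(transactions, min_support):
    n = len(transactions)
    level = [frozenset([i]) for i in {i for t in transactions for i in t}]
    result = {}
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        frequent = {c: s / n for c, s in counts.items() if s / n >= min_support}
        result.update(frequent)
        # next level: unions of frequent sets that are exactly one item larger
        level = {a | b for a, b in combinations(frequent, 2) if len(a | b) == len(a) + 1}
    return result

transactions = [
    {"ampicillin", "tetracycline"},
    {"ampicillin", "tetracycline", "ciprofloxacin"},
    {"ampicillin", "ciprofloxacin"},
    {"tetracycline"},
]
for itemset, support in frequent_sets(transactions, min_support=0.5).items():
    print(sorted(itemset), support)
```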