Similar Documents
20 similar documents found (search time: 15 ms)
1.
A multiproduct cost-volume-profit model is extended to incorporate sales or product capacity limits for each product, a required rate of return on sales, and a tax rate that depends on profit levels as determined by the government. The model also deals with overhead costs that are traceable to a group of products but cannot be allocated to individual products within the group with any reasonable accuracy. To solve the model, an algorithm is constructed to determine the required volume for each product that will achieve the best possible rate of return on sales revenue. Based upon the dBase Database Management System and the CVP model, a user-friendly, menu-driven interactive decision support system is developed. The model, algorithm, and decision support system are illustrated with an example consisting of five products. Various reports generated by the interactive decision support system are also presented.
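The abstract's full algorithm (stepped tax rates, group-traceable overheads, the dBase front end) is not reproduced here; a minimal sketch of the core rate-of-return-on-sales calculation, with hypothetical product data:

```python
# Hypothetical product data; the paper's algorithm additionally handles
# stepped tax rates and overheads traceable only to product groups.
products = {
    "A": {"price": 10.0, "var_cost": 6.0, "cap": 1000},
    "B": {"price": 8.0, "var_cost": 5.0, "cap": 2000},
}
fixed_cost = 3000.0  # overhead traceable to the product group as a whole

def rate_of_return(volumes):
    """Pre-tax return on sales revenue for a given volume mix."""
    revenue = sum(products[k]["price"] * q for k, q in volumes.items())
    profit = sum((products[k]["price"] - products[k]["var_cost"]) * q
                 for k, q in volumes.items()) - fixed_cost
    return profit / revenue if revenue else 0.0

# Fill every product to its capacity limit and evaluate the resulting return.
volumes = {k: p["cap"] for k, p in products.items()}
print(round(rate_of_return(volumes), 4))  # → 0.2692
```

With both products at capacity, revenue is 26,000 and profit 7,000, so the return on sales is about 26.9%; the paper's algorithm searches volume mixes to maximize this ratio subject to the capacity limits.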

2.
As a more general model for representing linguistic information, extended probabilistic linguistic term sets describe the original evaluation information more fully and make linguistic multi-attribute decision making more scientific. This paper therefore proposes a multi-attribute group decision-making method, based on a consensus model and the ORESTE method, for the extended probabilistic linguistic environment. First, the concept of extended probabilistic linguistic term sets and the related theory are presented. Second, since experts with different knowledge backgrounds and abilities may give divergent evaluations, leading to inconsistent group opinions, a consensus model for the extended probabilistic linguistic environment is proposed. Third, because in most cases there is no single ranking order among alternatives, the classical ORESTE method is improved into an extended probabilistic linguistic ORESTE method. Combining the proposed consensus model and the extended probabilistic linguistic ORESTE method yields the overall multi-attribute group decision-making method. Finally, the method's validity and rationality are verified with a bike-sharing design evaluation example, and a comparative analysis with other methods demonstrates its advantages.

3.
This paper analyzes the dealership credit limit problem in terms of the valuation of a Markov process of cash flows with sequential credit decisions over an infinite planning horizon. The formulation distinguishes between the upper bound on credit applicable at the account formation stage and the upper bound applicable to periodic reorders. The result is a closed form solution to the problem which serves as a criterion function for approving or denying credit on a customer-by-customer basis. Data for a sample of manufacturing firms are employed to estimate typical ranges for criterion function parameters. Upper bounds on credit limits are then calculated and graphically presented for median parameter values as well as for values at the 5th and 95th percentiles for the sample data. Finally, an empirical study is conducted of actual trade credit extended by firms. The results support the hypothesis that the variables in the decision model are important determinants of the amount of trade credit outstanding.
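The paper's closed-form criterion function is not reproduced here; a one-shot expected-value sketch conveys the flavor of a customer-by-customer approve/deny rule (the break-even condition p > 1 − margin below is a standard textbook simplification, not the paper's formula):

```python
def approve_credit(p_repay, gross_margin):
    # One-period expected profit of shipping an order of size A on credit:
    #   p * margin * A - (1 - p) * (1 - margin) * A
    # which is positive exactly when p_repay > 1 - gross_margin,
    # independent of the order size A.
    return p_repay > 1.0 - gross_margin

print(approve_credit(0.9, 0.25))  # 25% margin: approve if p > 0.75 → True
```

The paper's Markov formulation extends this one-shot logic to repeated reorders over an infinite horizon, which is what produces finite upper bounds on the credit limit itself.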

4.
This paper characterizes empirically achievable limits for time series econometric modeling and forecasting. The approach involves the concept of minimal information loss in time series regression and the paper shows how to derive bounds that delimit the proximity of empirical measures to the true probability measure (the DGP) in models that are of econometric interest. The approach utilizes joint probability measures over the combined space of parameters and observables and the results apply for models with stationary, integrated, and cointegrated data. A theorem due to Rissanen is extended so that it applies directly to probabilities about the relative likelihood (rather than averages), a new way of proving results of the Rissanen type is demonstrated, and the Rissanen theory is extended to nonstationary time series with unit roots, near unit roots, and cointegration of unknown order. The corresponding bound for the minimal information loss in empirical work is shown not to be a constant, in general, but to be proportional to the logarithm of the determinant of the (possibly stochastic) Fisher information matrix. In fact, the bound that determines proximity to the DGP is generally path dependent, and it depends specifically on the type as well as the number of regressors. For practical purposes, the proximity bound has the asymptotic form (K/2)log n, where K is a new dimensionality factor that depends on the nature of the data as well as the number of parameters in the model. When ‘good’ model selection principles are employed in modeling time series data, we are able to show that our proximity bound quantifies empirical limits even in situations where the models may be incorrectly specified. One of the main implications of the new result is that time trends are more costly than stochastic trends, which are more costly in turn than stationary regressors in achieving proximity to the true density. 
Thus, in a very real sense and quantifiable manner, the DGP is more elusive when there is nonstationarity in the data. The implications for prediction are explored and a second proximity theorem is given, which provides a bound that measures how close feasible predictors can come to the optimal predictor. Again, the bound has the asymptotic form (K/2)log n, showing that forecasting trends is fundamentally more difficult than forecasting stationary time series, even when the correct form of the model for the trends is known.
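In symbols, the proximity bound described above can be restated as follows (notation follows the abstract: n is the sample size, K the data-dependent dimensionality factor, and the Fisher information matrix may be stochastic; this display is a restatement, not a derivation):

```latex
% Asymptotic form of the minimal-information-loss (proximity) bound:
B_n \;\sim\; \frac{K}{2}\,\log n,
\qquad\text{with}\qquad
B_n \;\propto\; \log\det \mathcal{I}_n ,
```

where the dependence of K on the regressor type (time trends vs. stochastic trends vs. stationary regressors) is what makes trending data harder to model and forecast.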

5.
This study addresses the part-machine grouping problem in group technology, and evaluates the performance of several cell formation methods for a wide range of data set sizes. Algorithms belonging to four classes are evaluated: (1) array-based methods: bond energy algorithm (BEA), direct clustering analysis (DCA) and improved rank order clustering algorithm (ROC2); (2) non-hierarchical clustering method: ZODIAC; (3) augmented machine matrix methods: augmented p-median method (APM) and augmented linear clustering algorithm (ALC); and (4) neural network algorithms: ART1 and variants: ART1/KS, ART1/KSC, and Fuzzy ART. The experimental design is based on a mixture-model approach, utilizing replicated clustering. The performance measures include Rand Index and bond energy recovery ratio, as well as computational requirements for various algorithms. Experimental factors include problem size, degree of data imperfection, and algorithm tested. The results show that, among the algorithms applicable for large, industry-size data sets, ALC and neural networks are superior to ZODIAC, which in turn is generally superior to the array-based methods ROC2 and DCA.
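The Rand Index used as a performance measure above is simple to compute directly: it is the fraction of object pairs on which two clusterings agree (both pairs together, or both apart). A minimal implementation:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    # Fraction of object pairs on which the two clusterings agree:
    # a pair counts as agreement if both clusterings put it in the same
    # cluster, or both put it in different clusters.
    agree = total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += (same_a == same_b)
        total += 1
    return agree / total

print(round(rand_index([0, 0, 1, 1], [0, 0, 1, 2]), 4))  # → 0.8333
```

In the example, the only disagreeing pair is the last two objects (together in the first clustering, split in the second), giving 5/6.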

6.
This paper extends the bivariate mixture distribution model, proposing a variant that captures asymmetric effects, and applies it in an empirical study of the Chinese stock market. The results show that before 1997, for shocks of equal intensity, positive-return shocks and volume-expanding shocks had larger impacts on volatility than negative-return shocks and volume-contracting shocks, respectively; after 1997, the situation reversed. The empirical results also show that the bivariate mixture distribution model captures the persistence of return volatility.

7.
This paper studies the weighting of multi-level clustering indicators for objects to be clustered. First, a vector space model represents each object as a feature vector containing multi-level clustering attribute indicators, and cosine distance measures the similarity of the bottom-level indicators. The overall similarity between objects is then computed from the hierarchical structure of the indicators and the weight coefficients at each level. Finally, exploiting the fact that in historical clustering cases objects of the same class are highly similar while objects of different classes are not, a maximum-entropy model for mining objective weights of multi-level clustering indicators is built via case-based learning. A case study and comparisons with other methods demonstrate the model's feasibility and effectiveness, offering a new approach to the objective weighting of multi-level clustering indicators.
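The core similarity computation described above (cosine similarity per indicator group, aggregated with level weights) can be sketched as follows; the two-group hierarchy and the weights are illustrative, not from the paper:

```python
import math

def cosine(u, v):
    # Cosine similarity between two indicator vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Hypothetical two-group indicator hierarchy: overall similarity is the
# weighted sum of per-group cosine similarities (weights are illustrative;
# the paper learns them from historical clustering cases).
groups_x = {"g1": [1.0, 0.0], "g2": [2.0, 2.0]}
groups_y = {"g1": [1.0, 0.0], "g2": [0.0, 3.0]}
weights = {"g1": 0.6, "g2": 0.4}

overall = sum(weights[g] * cosine(groups_x[g], groups_y[g]) for g in weights)
print(round(overall, 4))  # → 0.8828
```

The paper's contribution is the maximum-entropy model that chooses the `weights` values objectively so that same-class case pairs score high and cross-class pairs score low.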

8.
Enterprise financial early warning based on fuzzy clustering and fuzzy pattern recognition   (total citations: 2; self-citations: 0; other citations: 2)
郭德仁, 王培辉. 《管理学报》, 2009, 6(9): 1194-1197, 1235
Building on a review of domestic research on enterprise financial early-warning models, a new early-warning model based on fuzzy clustering and fuzzy pattern recognition is proposed. The model applies fuzzy clustering to the training samples, computes the optimal cluster centers, and uses fuzzy pattern recognition to classify the samples to be evaluated. An empirical analysis of 40 companies listed on the Shanghai Stock Exchange achieved good early-warning performance. Finally, open problems with the model and directions for further research are discussed.

9.
A fuzzy clustering algorithm based on information entropy and an iterative K-means model   (total citations: 1; self-citations: 0; other citations: 1)
This paper proposes a segmentation model for fuzzy clustering that iterates between information entropy and the K-means algorithm, solving the problem of initializing the cluster prototypes. Information entropy and the K-means algorithm are introduced into the fuzzy clustering analysis, and experiments on test sample data show better results than traditional methods.
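The K-means half of the iteration can be sketched on scalar data as below; the paper's entropy-based prototype initialization is replaced here by fixed starting centers, so this is only the inner loop, not the full method:

```python
def kmeans_1d(xs, centers, iters=20):
    # Plain K-means on scalars: assign each point to its nearest center,
    # then move each center to the mean of its assigned points.
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in xs:
            i = min(range(len(centers)), key=lambda c: abs(x - centers[c]))
            groups[i].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

print(kmeans_1d([1, 2, 3, 10, 11, 12], [0.0, 5.0]))  # → [2.0, 11.0]
```

The quality of the starting `centers` is exactly what the entropy step is meant to improve; with poor initialization, plain K-means can converge to a worse partition.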

10.
The part family problem in group technology can be stated as the problem of finding the best grouping of parts into families such that the parts within each family are as similar to each other as possible. In this paper, the part family formation problem is considered. The problem is cast into a hard clustering model, and the k-means algorithm is proposed for solving it. Preliminary computational experience on the algorithm is very encouraging and it shows that real-life problems of large sizes can efficiently be handled by this approach.
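The similarity notion behind part families can be illustrated with routing overlap; the greedy threshold grouping below is a simple stand-in for the paper's k-means formulation, and the routings are hypothetical:

```python
def jaccard(ops_a, ops_b):
    # Similarity of two parts' machine routings (sets of machine IDs).
    a, b = set(ops_a), set(ops_b)
    return len(a & b) / len(a | b)

# Greedy threshold grouping: attach a part to the first family whose seed
# part shares at least half of its routing, else start a new family.
routings = {"P1": {1, 2, 3}, "P2": {1, 2}, "P3": {4, 5}}
families = []
for part, ops in routings.items():
    for fam in families:
        if jaccard(ops, routings[fam[0]]) >= 0.5:
            fam.append(part)
            break
    else:
        families.append([part])
print(families)  # → [['P1', 'P2'], ['P3']]
```

A k-means formulation replaces the greedy pass with repeated reassignment to the nearest family centroid, which is what lets the approach scale to large industrial data sets.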

11.
A study of the clustering of trades in the Chinese stock market based on time characteristics   (total citations: 3; self-citations: 0; other citations: 3)
Based on the autoregressive conditional duration (ACD) model, this paper selects proxy variables for the clustering of trades in China's stock market, builds an empirical model of this feature, and tests for clustering in the trading process of individual stocks on the Shanghai Stock Exchange. The empirical results show that clustering in the trading process arises from informed trading based on private information; the arrival of private information causes the market to exhibit greater volatility along the time dimension.
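The ACD(1,1) model referenced above treats the times between trades the way GARCH treats volatility: the conditional expected duration follows a recursion. A minimal sketch with hypothetical parameters:

```python
def acd_expected_durations(durations, omega, alpha, beta, psi0):
    # ACD(1,1): psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1},
    # where psi_i is the conditional expected duration before trade i
    # and x_{i-1} the previously observed duration. Short durations
    # feeding back into short expected durations is trade clustering.
    psi, out = psi0, []
    for x in durations:
        out.append(psi)
        psi = omega + alpha * x + beta * psi
    return out

print(acd_expected_durations([2.0, 1.0], omega=0.1, alpha=0.2, beta=0.7, psi0=1.0))
```

The parameters and starting value here are invented for illustration; in the paper they would be estimated from Shanghai Stock Exchange transaction data.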

12.
Fundamental problems in data mining mainly involve discrete decisions based on numerical analyses of data (e.g., class assignment, feature selection, data categorization, identifying outlier samples). These decision-making problems in data mining are combinatorial in nature and can naturally be formulated as discrete optimization problems. One of the most widely studied problems in data mining is clustering. In this paper, we propose a new optimization model for hierarchical clustering based on quadratic programming and later show that this model is compact and scalable. Application of this clustering technique in epilepsy, the second most common brain disorder, is a case in point in this study. In our empirical study, we apply the proposed clustering technique to treatment problems in epilepsy through the brain dynamics analysis of electroencephalogram (EEG) recordings. This study is a proof of concept of our hypothesis that epileptic brains tend to be more synchronized (clustered) during the period before a seizure than during a normal period. The results of this study suggest that data mining research might be able to revolutionize current diagnosis and treatment of epilepsy as well as give a greater understanding of brain functions (and other complex systems) from a system perspective. This work was partially supported by the NSF grant CCF 0546574 and Rutgers Research Council grant-202018.
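The synchronization hypothesis can be made concrete with a crude proxy: average absolute pairwise correlation across EEG channels (this proxy and the toy signals are illustrative stand-ins, not the paper's quadratic-programming measure):

```python
import math

def pearson(x, y):
    # Pearson correlation of two equal-length signals.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_pairwise_corr(channels):
    # Crude synchronization proxy: average absolute correlation over
    # all channel pairs; higher means the channels move more in lockstep.
    ps = [abs(pearson(channels[i], channels[j]))
          for i in range(len(channels)) for j in range(i + 1, len(channels))]
    return sum(ps) / len(ps)

sync = mean_pairwise_corr([[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]])
print(round(sync, 4))  # perfectly (anti)correlated toy channels → 1.0
```

Under the paper's hypothesis, a measure of this kind would read higher in the pre-seizure window than in a normal window.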

13.
The ability to accurately forecast and control inpatient census, and thereby workloads, is a critical and long‐standing problem in hospital management. The majority of current literature focuses on optimal scheduling of inpatients, but largely ignores the process of accurate estimation of the trajectory of patients throughout the treatment and recovery process. The result is that current scheduling models are optimizing based on inaccurate input data. We developed a Clustering and Scheduling Integrated (CSI) approach to capture patient flows through a network of hospital services. CSI functions by clustering patients into groups based on similarity of trajectory using a novel semi‐Markov model (SMM)‐based clustering scheme, as opposed to clustering by patient attributes as in previous literature. Our methodology is validated by simulation and then applied to real patient data from a partner hospital where we demonstrate that it outperforms a suite of well‐established clustering methods. Furthermore, we demonstrate that extant optimization methods achieve significantly better results on key hospital performance measures under CSI, compared with traditional estimation approaches, increasing elective admissions by 97% and utilization by 22% compared to 30% and 8% using traditional estimation techniques. From a theoretical standpoint, the SMM‐clustering is a novel approach applicable to any temporal‐spatial stochastic data that is prevalent in many industries and application areas.
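The census-projection idea underlying this line of work can be sketched with a plain Markov step over hospital units; the transition matrix below is hypothetical, and the paper's CSI method goes further by clustering patients into trajectory groups with a semi-Markov model (explicit sojourn-time distributions) before projecting:

```python
# Hypothetical daily patient-flow transition probabilities between units
# (ED = emergency department, "Out" = discharged, an absorbing state).
P = {
    "ED":   {"ED": 0.2, "ICU": 0.3, "Ward": 0.4, "Out": 0.1},
    "ICU":  {"ED": 0.0, "ICU": 0.6, "Ward": 0.3, "Out": 0.1},
    "Ward": {"ED": 0.0, "ICU": 0.1, "Ward": 0.7, "Out": 0.2},
    "Out":  {"ED": 0.0, "ICU": 0.0, "Ward": 0.0, "Out": 1.0},
}

def project(census, steps=1):
    # Expected census after `steps` days: census vector times P each day.
    for _ in range(steps):
        census = {s: sum(census[r] * P[r][s] for r in P) for s in P}
    return census

print(project({"ED": 10, "ICU": 5, "Ward": 20, "Out": 0}))
```

Grouping patients by trajectory first (the clustering half of CSI) amounts to giving each group its own transition structure, which is what sharpens the census forecasts fed into the scheduling models.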

14.
In the traditional judgment-matrix-based fuzzy kernel clustering method for expert weighting, the normalization constraint allows outliers to distort the clustering result. To address this, an improved fuzzy kernel clustering algorithm is proposed: during clustering, the normalization constraint is relaxed to weaken the influence of outliers. Furthermore, given the limitations of the traditional clustering criterion that linearly couples information entropy with a consistency coefficient, a deviation-entropy-based weighting method is proposed, which determines each expert's weight from that expert's clustering contribution to his or her own class, overcoming the shortcomings of the traditional method. A numerical example shows that the method is feasible and effective.

15.
A grey-clustering-based social evaluation model with an empirical study of prefecture-level cities   (total citations: 1; self-citations: 0; other citations: 1)
Following the scientific development outlook of "people-oriented, comprehensive, coordinated, and sustainable development", a social evaluation indicator system is constructed around five criteria, including quality of life and education and health. Using triangular whitenization weight functions, a grey-clustering-based social evaluation model is built and applied to cross-sectional data for the 14 regions of Liaoning Province. The contributions are fourfold. First, an observable national happiness index is incorporated into the evaluation system, reflecting people's perception of their own living and development conditions, which existing evaluations have ignored. Second, a quasi-Gini coefficient is computed from obtainable indicators such as per-capita disposable income and the minimum living allowance, indirectly reflecting income-distribution gaps in social development and working around the fact that Liaoning Province does not currently publish a Gini coefficient, which has prevented scientific evaluation of its social development. Third, indicator weights in the grey clustering model are determined by the entropy weight method, making the weights objective and unique and avoiding the person-dependent results of subjective weighting. Fourth, grey clustering of the province's regions effectively reveals the unevenness of social development across Liaoning.
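The entropy weight method named in the third contribution is a standard, fully mechanical computation; a minimal sketch on a tiny hypothetical region-by-indicator matrix:

```python
import math

def entropy_weights(matrix):
    # Entropy weight method: indicators (columns) whose values vary more
    # across regions (rows) carry more information and get larger weights.
    m, n = len(matrix), len(matrix[0])
    ents = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        ps = [x / total for x in col]
        ents.append(-sum(p * math.log(p) for p in ps if p > 0) / math.log(m))
    slack = sum(1 - e for e in ents)
    return [(1 - e) / slack for e in ents]

print(entropy_weights([[1, 2], [1, 4]]))  # uniform column gets weight 0.0
```

Because the weights fall out of the data alone, two analysts running the same matrix get identical weights, which is the objectivity and uniqueness claimed in the abstract.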

16.
Organizations invest in technology with the expectation that it will contribute to performance, and members of the organization must use technology for it to make a contribution. For this reason, it is important for managers and designers to understand and predict system use. This paper develops a model of workstation use in a field setting where the use of the system is an integral part of the user's job. The model is based on the Technology Acceptance Model (TAM), which we extended to include social norms, user performance, and two control variables. Brokers and sales assistants in the private-client group of a major investment bank provided data to test our extended model. The core perception variables in TAM do not predict use in this study. Social norms and one's job requirements are more important in predicting use than workers' perceptions about ease of use and usefulness. The paper discusses the implications of these findings and suggests directions for future research.

17.
Implicit debt is a key issue in the reform of the pension insurance system. Building on earlier actuarial models, this paper identifies their shortcomings in data selection, parameter settings, and incorporation of the latest regulations, and extends the actuarial analysis in three ways: the model is refined to reflect the latest regulations, distinguishing future wage growth from historical wage growth and revising the pension adjustment coefficient; the model distinguishes the payout characteristics of transitional pensions from those of basic pensions; and the latest, most appropriate data are used for the empirical analysis. The study finds that keeping wage growth and interest rates at reasonable levels keeps the scale of the implicit debt controllable and its ratios to fiscal revenue and GDP within safe bounds. The paper therefore recommends maintaining moderate wage growth, so that implicit debt does not grow too large while social welfare still improves, and raising the investment return of social pension fund operations as far as possible, while weighing the cost of raising returns against the implicit debt thereby reduced.
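The sensitivity of implicit debt to wage growth and the discount rate can be illustrated with a stylized present-value calculation; the paper's actual model additionally separates transitional from basic pensions and applies regulation-specific adjustment coefficients, all omitted here:

```python
def implicit_debt(benefit, wage_growth, discount, years):
    # Stylized present value of an accrued, unfunded pension benefit
    # stream that grows with wages and is discounted at the interest rate.
    return sum(benefit * (1 + wage_growth) ** t / (1 + discount) ** (t + 1)
               for t in range(years))

print(round(implicit_debt(100.0, 0.03, 0.05, 2), 2))  # → 188.66
```

Even in this toy form, the abstract's policy conclusion is visible: the gap between `discount` and `wage_growth` drives the debt's size, so moderate wage growth and a higher investment return both shrink it.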

18.
This paper improves the traditional global minimum-variance portfolio model using the local clustering coefficient from complex network theory. A stock network is first constructed from the correlation matrix of the stocks' log returns; the local clustering coefficients of the network are then computed; finally, the optimal portfolio is determined with the global minimum-variance model. Applying the improved model to the A-share market, comparisons based on the Sharpe ratio, information ratio, and Omega ratio show that the improved portfolio model outperforms the traditional global minimum-variance portfolio model out of sample.
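The local clustering coefficient used above is a standard network statistic: the fraction of a node's neighbor pairs that are themselves connected. A minimal implementation on a toy stock network (the graph is illustrative; in the paper, edges come from thresholding the log-return correlation matrix):

```python
def local_clustering(adj, v):
    # Fraction of node v's neighbor pairs that are themselves connected.
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
    return 2 * links / (k * (k - 1))

# Toy stock network: stocks 0-1-2 form a triangle; stock 3 hangs off 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(round(local_clustering(adj, 0), 4))  # → 0.3333
```

A highly clustered stock sits in a tightly correlated neighborhood and contributes little diversification, which is the intuition for feeding this coefficient into the minimum-variance optimization.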

19.
To capture the heavy tails and volatility clustering of financial asset return series, a VaR calculation method combining an AR(1)-GARCH(1,1) model with a power-law distribution is proposed. The GARCH model captures volatility clustering in the time series, while an extended power-law form fits the tail of the GARCH residual distribution, capturing the heavy-tail feature of the return series; together they better describe the dynamic volatility of returns. An empirical analysis of the Shanghai Composite Index shows that the proposed method is more accurate than both a GARCH model based on the normal distribution and a static power-law tail method.
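The two ingredients can be sketched separately: the GARCH(1,1) variance recursion, and a power-law tail inverted for a quantile. The tail form and all parameter values below are hypothetical stand-ins (the paper's extended power-law form and the AR(1) mean equation are omitted):

```python
def garch11_sigma2(returns, omega, alpha, beta, sigma2_0):
    # GARCH(1,1) recursion: s2_t = omega + alpha * r_{t-1}^2 + beta * s2_{t-1};
    # large shocks raise the next period's variance (volatility clustering).
    s2 = sigma2_0
    for r in returns:
        s2 = omega + alpha * r * r + beta * s2
    return s2

def power_tail_var(sigma2, q, c=1.0, tail=3.0):
    # Hypothetical standardized power-law left tail P(Z < -z) ~ c * z**(-tail);
    # invert for the q-quantile, then scale by the GARCH volatility.
    z_q = (c / q) ** (1.0 / tail)
    return sigma2 ** 0.5 * z_q

s2 = garch11_sigma2([0.01, -0.02], omega=1e-6, alpha=0.1, beta=0.85, sigma2_0=1e-4)
print(round(power_tail_var(s2, q=0.01), 4))  # → 0.0514
```

Compared with a normal-innovation GARCH, the power-law tail produces a larger quantile at extreme confidence levels, which is where the heavy-tail correction matters for VaR.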

20.
Measuring the greyness of grey clustering results   (total citations: 6; self-citations: 1; other citations: 6)
After introducing the entropy of the grey clustering weight sequence, the relationship between grey clustering results and the entropy of the clustering weight sequence is established. A method for measuring the greyness of grey clustering results is given, and the properties of this greyness measure are studied. Quantifying the greyness of grey clustering results helps practitioners gain a deeper understanding of those results. A worked example further illustrates the steps for computing the greyness measure and its practical significance.
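An entropy-of-weight-sequence measure of the kind described can be sketched as normalized Shannon entropy (the paper's exact definition may differ in scaling; this is an illustrative form):

```python
import math

def greyness(weights):
    # Normalized Shannon entropy of a grey clustering weight sequence:
    # 0 for a crisp assignment to one grey class, 1 when every grey class
    # carries equal weight (maximal greyness, least informative result).
    h = -sum(w * math.log(w) for w in weights if w > 0)
    return h / math.log(len(weights))

print(round(greyness([1 / 3, 1 / 3, 1 / 3]), 4))  # maximally grey → 1.0
```

A result near 0 tells the analyst the cluster assignment can be trusted as-is; a result near 1 warns that the weight sequence barely discriminates among the grey classes.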


Copyright©北京勤云科技发展有限公司  京ICP备09084417号