20 matching records found; search time: 46 ms
1.
A Parametric Linear Programming Method for Determining the Efficient Frontier of Portfolio Investment  Cited by: 2 (self-citations: 2, other: 2)
Starting from an analysis of the trade-off between expected return and risk, this paper derives a simplified parametric linear programming method for continuously determining the efficient frontier of portfolio investment. It studies the general shape of the efficient frontier when short selling is not allowed, extends the method and conclusions to cases with general linear constraints and a risk-free asset, and points out the possibility of non-differentiable points on the efficient frontier.
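The abstract's parametric LP handles the no-short-sales case, which is harder; as a minimal numerical sketch of the underlying mean-variance problem (hypothetical returns and covariance, short sales allowed so the equality-constrained program has a closed form via Lagrange multipliers):

```python
import numpy as np

# Hypothetical data for 3 assets (illustrative only, not from the paper)
mu = np.array([0.08, 0.12, 0.15])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

def frontier_weights(target):
    """Minimum-variance weights for a given target return, short sales
    allowed: minimize w'Sigma w subject to w'mu = target, w'1 = 1."""
    ones = np.ones_like(mu)
    B = np.column_stack([mu, ones])        # constraint matrix
    A = B.T @ np.linalg.solve(Sigma, B)    # 2x2 matrix B' Sigma^{-1} B
    lam = np.linalg.solve(A, np.array([target, 1.0]))
    return np.linalg.solve(Sigma, B @ lam)

# Trace a few frontier points: (target return, weights, variance)
for r in (0.09, 0.11, 0.13):
    w = frontier_weights(r)
    print(r, w.round(3), round(w @ Sigma @ w, 4))
```

With no-short-sales or general linear constraints, as in the paper, the closed form no longer applies and a parametric programming method is needed.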
2.
A Study of Methods for Determining the Efficient Set and Efficient Frontier of Portfolio Investment  Cited by: 5 (self-citations: 0, other: 5)
The efficient set and efficient frontier of portfolio investment are the key to determining a rational investment structure. This paper studies the concavity of the portfolio risk function and the efficient frontier, proposes converting the nonlinear programming problem of minimizing risk into a linear programming problem, and, based on its optimal basis and the corresponding sensitivity analysis, determines the efficient frontier segment by segment. This method allows each segment of the efficient frontier to be obtained directly from a corresponding mathematical expression, greatly reducing the computational burden.
3.
4.
Identification and Determination of the Efficient Market Portfolio  Cited by: 6 (self-citations: 0, other: 6)
The capital asset pricing model (CAPM) developed by Sharpe, Lintner, and Mossin is a general equilibrium model. It not only improved our understanding of market behavior but also offered practical convenience, providing a usable method for risk-adjusted performance evaluation; CAPM thus served as a foundation for many applications of portfolio analysis. In 1977, however, Richard Roll raised sharp criticism of tests of the CAPM, a key point being whether the efficient market portfolio can be identified at all. This paper applies an original geometric method to this problem, open for more than twenty years. It first expresses the efficient frontier of the Markowitz model in terms of portfolio weight vectors, then expresses the capital market line (CML) in the same weight-vector form, and finally obtains the efficient market portfolio directly from the definition of the CML.
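The paper's geometric construction is not reproduced here, but the object it identifies, the tangency point where the CML touches the risky-asset frontier, has a standard closed form when short sales are allowed. A sketch with hypothetical inputs:

```python
import numpy as np

# Hypothetical inputs (not from the paper): expected returns,
# covariance matrix, and a risk-free rate
mu = np.array([0.08, 0.12, 0.15])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
rf = 0.03

# Tangency (market) portfolio: the Sharpe-ratio-maximizing point,
# proportional to Sigma^{-1} (mu - rf * 1), then normalized to sum to 1
z = np.linalg.solve(Sigma, mu - rf)
w_mkt = z / z.sum()

sharpe = (w_mkt @ mu - rf) / np.sqrt(w_mkt @ Sigma @ w_mkt)
print(w_mkt.round(3), round(sharpe, 3))
```

Roll's critique concerns whether the true market portfolio is observable; the computation above only identifies the tangency portfolio relative to an assumed asset universe.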
5.
6.
A Method for Determining Optimal Proportional Reinsurance Using Portfolio Theory  Cited by: 2 (self-citations: 0, other: 2)
Applying the mean-variance principle of portfolio theory, this paper analyzes the risk-aversion problem of insurance companies. For the different lines of business under proportional reinsurance, a multi-objective programming model is established and solved to determine the optimal retention ratios. The expected return and variance after reinsurance are compared with those before; the conclusions are found suitable for risk-averse decision makers. The paper also compares how risk diversification differs between the securities market and the insurance market.
7.
A Geometric Method for Solving Optimal Portfolio Weights  Cited by: 4 (self-citations: 0, other: 4)
By establishing the critical-line equations of the portfolio with and without non-negativity constraints, this paper gives methods for solving the optimal portfolio weights both when short selling is allowed and when it is restricted. The method can solve for the optimal weights under a given level of return as well as under a given level of risk.
8.
This paper discusses the portfolio selection problem under uncertainty without probabilistic assumptions. A linear programming approach to portfolio selection under incomplete information, the relative minimax method, is proposed; it is based on a measure of relative risk. Several cases of incomplete information are discussed, and the corresponding relative minimax analysis principles for portfolio selection are proposed. The paper further describes the relationship between the relative minimax method, Markowitz's mean-variance method, and other approaches.
9.
Properties of the Portfolio Covariance Matrix and the Selection of Optimal Portfolios  Cited by: 10 (self-citations: 3, other: 10)
The positive definiteness of the portfolio covariance matrix has generally been taken for granted by researchers, so the non-positive-definite case has received little study. This paper investigates the properties of the covariance matrix, proves a sufficient condition for its positive definiteness, and analyzes in depth the selection of the optimal portfolio when the matrix is not positive definite. It points out that when the covariance matrix is not positive definite, either an arbitrage opportunity exists or an efficient subset exists (i.e., some securities are redundant).
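An eigenvalue check distinguishes the cases the abstract discusses; a sketch with illustrative matrices, where the singular example duplicates an asset (the "redundant security" case):

```python
import numpy as np

def classify_cov(S, tol=1e-10):
    """Classify a symmetric covariance matrix by its smallest eigenvalue.
    Positive definite: every nontrivial portfolio has positive variance.
    A zero eigenvalue means some combination of assets is riskless,
    signaling a redundant asset or a potential arbitrage."""
    min_eig = np.linalg.eigvalsh(S).min()
    if min_eig > tol:
        return "positive definite"
    if min_eig >= -tol:
        return "positive semidefinite"
    return "indefinite (not a valid covariance matrix)"

S_pd = np.array([[0.04, 0.01],
                 [0.01, 0.09]])
# The second "asset" duplicates the first, so the matrix is singular
S_sing = np.array([[0.04, 0.04],
                   [0.04, 0.04]])
print(classify_cov(S_pd))    # positive definite
print(classify_cov(S_sing))  # positive semidefinite
```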
10.
Portfolio performance evaluation is an active research topic. Within the classical economic framework, this paper gives an explicit definition of portfolio efficiency based on the true frontier. Because real investment environments make portfolio optimization models very complex, an analytical expression for the true frontier is hard to obtain, which greatly hinders the application of portfolio efficiency. Based on portfolio theory, and under the condition that the frontier corresponding to the portfolio model is a concave function, this paper uses a data-driven portfolio DEA evaluation model to construct a frontier that approximates the true one, thereby estimating portfolio efficiency in the general case. On this basis, the evaluation of portfolio efficiency under transaction costs is studied, and a numerical example illustrates the soundness and feasibility of the proposed method.
11.
Frederick K. Martinson, Decision Sciences, 1993, 24(4): 809-824
This paper compares two approaches to the solution of weighted multiobjective linear programming problems: the fuzzy linear programming method and the minmax distance metric. The two models produce an identical solution for equally weighted objectives, but the solutions differ when the objectives are unequally weighted, owing to the underlying meaning of the weights attached to each solution method. The paper illustrates the graphical meaning of the weights and the implications for the decision maker.
12.
13.
In this paper, we present a simple algorithm to mechanically obtain SDP relaxations for any quadratic or linear program with bivalent variables, starting from an existing linear relaxation of the considered combinatorial problem. A significant advantage of our approach is that we obtain an improvement on the linear relaxation we start from. Moreover, we can take into account all the existing theoretical and practical experience accumulated in the linear approach. After presenting the rules to treat each type of constraint, we describe our algorithm and then apply it to obtain semidefinite relaxations for three classical combinatorial problems: the K-CLUSTER problem, the Quadratic Assignment Problem, and the Constrained-Memory Allocation Problem. We show that we obtain better SDP relaxations than the previous ones, and we report computational experiments for the three problems.
14.
15.
Frank R. Wondolowski, Decision Sciences, 1991, 22(4): 792-811
A criticism of linear programming has been that the data which are available in practice are too inexact and unreliable for linear programming to properly work. Managers are therefore concerned with how much actual values may differ from the estimates that were used in the model before the results become irrelevant. Sensitivity analysis emerged to help deal with the uncertainties inherent in the linear programming model. However, the ranges calculated are generally valid only when a single coefficient is varied. An extension of sensitivity analysis, the 100 Percent Rule, allows the simultaneous variation of more than one element in a vector, but does not permit the independent variation of the elements. A tolerance approach to sensitivity analysis enables the consideration of simultaneous and independent change of more than one coefficient. However, the ranges developed are unnecessarily restricted and may be reduced in width to zero when primal or dual degeneracy exists. This paper presents an extension of the tolerance approach which reduces the limitations of both the traditional and tolerance approaches to sensitivity analysis.
16.
Patrick L. Brockett, Abraham Charnes, William W. Cooper, Ku-Hyuk Kwon, Timothy W. Ruefli, Decision Sciences, 1992, 23(2): 385-408
Chance constrained programming concepts are used to formalize risk and return relations which are then modeled for use in an empirical study of mutual fund behavior during the period 1984 through 1988. The publicly announced strategies of individual funds are used to form ex ante risk classifications which are employed in examining ex post performance. Negative relations between risk and return held in every year of the period studied. These negative risk-return findings bear on the Bowman paradox, as studied in the strategic management literature; the paradox is thus extended from the industrial firms studied by Bowman (and others) and shown to be present even in these investment-oriented mutual funds in each year of the great bull market from 1984 through 1988. Finally, our use of chance constrained programming enables us to separate risk from return behavior and evaluate their relative strengths as sources of these negative relations, which are found to be more in the returns than the risks.
17.
Against today's big-data background, the linear programming problems abstracted from real applications are ever larger and more complex, so data preprocessing (presolve) techniques play an increasingly important role in solving them. Duality not only aids algorithms for the primal problem (such as the dual simplex method) but is also an important component of the presolve step performed before the solve. Addressing the latter, and based on the linear programming model with upper and lower bounds on the variables, this paper analyzes and summarizes two ways of applying duality in presolve: the treatment of dominated columns and of proportional columns. Using the notion of redundant constraints, it proves properties of weakly dominated columns. The presolve methods are implemented in C and tested on standard linear programming problems with more than 1500 variables from an international benchmark library. The tests show that: (1) for general linear programming problems, applying duality in presolve effectively reduces problem size, both directly, by reducing the number of variables and nonzeros, and indirectly, by enabling other presolve methods to remove constraints; (2) in terms of size reduction, proportional-column presolve outperforms dominated-column presolve on most problems.
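Detecting proportional columns, one of the two presolve devices the abstract mentions, can be sketched as a pairwise rank test on an illustrative constraint matrix (a production presolve would use hashing or sorting rather than this O(n^2) scan):

```python
import numpy as np

def proportional_column_pairs(A, tol=1e-9):
    """Return pairs (j, k) of columns of A that are scalar multiples of
    each other. Such columns are candidates for merging in an LP presolve
    step, since their variables enter every constraint in fixed proportion."""
    m, n = A.shape
    pairs = []
    for j in range(n):
        for k in range(j + 1, n):
            # The m x 2 block [a_j a_k] has rank <= 1 exactly when the
            # two columns are proportional (or one is zero)
            block = np.column_stack([A[:, j], A[:, k]])
            if np.linalg.matrix_rank(block, tol=tol) <= 1:
                pairs.append((j, k))
    return pairs

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 1.0]])
print(proportional_column_pairs(A))  # column 1 = 2 * column 0
```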
18.
There are numerous variable selection rules in classical discriminant analysis. These rules enable a researcher to distinguish significant variables from nonsignificant ones and thus provide a parsimonious classification model based solely on significant variables. Prominent among such rules are the forward and backward stepwise variable selection criteria employed in statistical software packages such as Statistical Package for the Social Sciences and BMDP Statistical Software. No such criterion currently exists for linear programming (LP) approaches to discriminant analysis. In this paper, a criterion is developed to distinguish significant from nonsignificant variables for use in LP models. This criterion is based on the “jackknife” methodology. Examples are presented to illustrate implementation of the proposed criterion.
19.
In this study, we propose a time-dependent susceptible-exposed-infected-recovered (SEIR) model for the analysis of the SARS-CoV-2 epidemic outbreak in three different countries: the United States, Italy, and Iceland, using public data on the case counts of the epidemic wave. Since governments adopted several types and grades of actions, including travel restrictions, social distancing, and limitations on movement, we investigate how these measures affect the epidemic curve of the infectious population. The parameters of interest for the SEIR model were estimated using a composite likelihood approach, and standard errors were corrected for temporal dependence. The adoption of restrictive measures results in flattened epidemic curves, and the projected evolution indicates a decrease in the number of cases.
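A minimal sketch of a time-dependent SEIR integration (forward Euler with illustrative parameters; the study's composite-likelihood parameter estimation is not reproduced here):

```python
def seir(beta, sigma, gamma, S0, E0, I0, R0, days, dt=0.1):
    """Forward-Euler integration of an SEIR model. beta may be a constant
    or a callable t -> beta(t), which lets a stepwise drop in transmission
    model an intervention; sigma = 1/incubation period; gamma = recovery rate."""
    N = S0 + E0 + I0 + R0
    S, E, I, R = S0, E0, I0, R0
    traj = []
    for step in range(int(days / dt)):
        b = beta(step * dt) if callable(beta) else beta
        new_exposed = b * S * I / N   # S -> E flow
        onset = sigma * E             # E -> I flow
        recovery = gamma * I          # I -> R flow
        S -= new_exposed * dt
        E += (new_exposed - onset) * dt
        I += (onset - recovery) * dt
        R += recovery * dt
        traj.append((S, E, I, R))
    return traj

# Illustrative values only: beta drops on day 30, mimicking a lockdown
beta_t = lambda t: 0.5 if t < 30 else 0.15
traj = seir(beta_t, sigma=1/5.2, gamma=1/10, S0=1e6, E0=100, I0=10, R0=0, days=120)
peak_I = max(I for _, _, I, _ in traj)
```

Comparing `peak_I` under different `beta_t` scenarios reproduces the qualitative "flattened curve" effect the abstract describes.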
20.
Paul A. Rubin, Decision Sciences, 1990, 21(2): 373-386
Recent simulation-based studies of linear programming models for discriminant analysis have used the Fisher linear discriminant function as the benchmark for parametric methods. This article reports experimental evidence which suggests that, while some linear programming models may match or even exceed the Fisher approach in classification accuracy, none of the fifteen models tested is as accurate on normally distributed data as the Smith quadratic discriminant function. At the minimum, further testing is warranted with an emphasis on data sets that arise from significantly non-Gaussian populations.
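The Fisher linear discriminant used as the benchmark above has a simple closed form; a sketch on synthetic Gaussian data (illustrative classes, not the study's test sets):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two hypothetical Gaussian classes in 2D with unit variance
X1 = rng.normal([0.0, 0.0], 1.0, size=(200, 2))
X2 = rng.normal([2.0, 1.5], 1.0, size=(200, 2))

# Fisher direction: w = Sw^{-1} (m1 - m2), with Sw the pooled
# within-class covariance; threshold at the projected midpoint
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)
w = np.linalg.solve(Sw, m1 - m2)
c = w @ (m1 + m2) / 2

# Classify: class 1 if the projection exceeds the threshold
acc = (np.mean(X1 @ w > c) + np.mean(X2 @ w < c)) / 2
print(round(acc, 3))
```

With equal class covariances, as here, the Smith quadratic discriminant reduces to this linear rule; its advantage in the study arises when the covariances differ.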