Similar Documents
20 similar documents found
1.
Research on an Optimization Model for Emergency Supplies Dispatch   Cited: 38 (self-citations: 1, other citations: 37)
This paper discusses the dispatch of emergency supplies from multiple rescue points under demand constraints. Based on the characteristics of continuous emergency problems, it formulates two models: one that minimizes the number of dispatch points subject to the earliest possible response time, and one that minimizes the number of dispatch points under a deadline constraint. The correctness of the solution methods is proved theoretically.

2.
An Empirical Study of the Stochastic Behavior of Treasury-Bond Repo Rates in China's Interbank Bond Market   Cited: 2 (self-citations: 0, other citations: 2)
吕兆友 《管理科学》2004,17(6):62-66
Following the CKLS approach and using the generalized method of moments, this paper estimates and compares several continuous-time models of the short-term riskless rate with monthly repo-rate data on Chinese treasury bonds. The results show that, for the sample studied, models that allow the volatility of the rate to depend on its level describe short-rate dynamics better. The comparison of rate models also carries important implications for hedging interest-rate risk.

3.
The notion of subordination used in conventional time-change studies rests on an independence assumption, yet price and trading volume are correlated. To allow volume to act as the random time of the price process, this paper generalizes subordination and defines correlated subordination, which broadens the scope of time-change research. It then discusses the diffusion properties of processes under correlated subordination and studies asset pricing in economic time, obtaining results analogous to the capital asset pricing model and the arbitrage pricing model. Finally, using zero...

4.
This item introduces the binomial tree, a discrete-time option-pricing model. Starting from the one-period model, it derives the one-period option-pricing formula and its properties; on that basis it treats the two-period model and obtains the corresponding pricing formula and properties.
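The one- and two-period binomial pricing formulas described in this abstract can be sketched as follows. All numeric parameters (S0, u, d, r, K) are illustrative assumptions, not values from the paper.

```python
def one_period_call(S0, u, d, r, K):
    """Price a European call in a one-period binomial model.

    S0: current stock price; u, d: up/down gross returns (d < 1+r < u);
    r: one-period risk-free rate; K: strike.
    """
    q = (1 + r - d) / (u - d)          # risk-neutral up probability
    payoff_up = max(S0 * u - K, 0.0)
    payoff_down = max(S0 * d - K, 0.0)
    return (q * payoff_up + (1 - q) * payoff_down) / (1 + r)


def two_period_call(S0, u, d, r, K):
    """Two-period price by backward induction: value the two one-period
    subtrees, then discount them as a one-period claim."""
    q = (1 + r - d) / (u - d)
    # terminal payoffs after two moves: uu, ud (= du), dd
    payoffs = [max(S0 * u * u - K, 0.0),
               max(S0 * u * d - K, 0.0),
               max(S0 * d * d - K, 0.0)]
    v_up = (q * payoffs[0] + (1 - q) * payoffs[1]) / (1 + r)
    v_down = (q * payoffs[1] + (1 - q) * payoffs[2]) / (1 + r)
    return (q * v_up + (1 - q) * v_down) / (1 + r)


price1 = one_period_call(S0=100, u=1.2, d=0.8, r=0.05, K=100)
price2 = two_period_call(S0=100, u=1.2, d=0.8, r=0.05, K=100)
```

The two-period value is simply the one-period formula applied recursively, which is the property the abstract's "on that basis" extension relies on.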

5.
This paper sets up the risk model U_n = M + S_n - Y_n, where M is the initial capital, the premium income S_n is a compound Poisson process, and Y_n is the discounted claim process. Applying large-deviation principles, it derives an upper bound on the probability that this discrete-time model is ruined within a finite time t.
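A model of this form can be checked by simulation. The sketch below estimates the finite-time ruin probability by Monte Carlo for a simplified variant with linear premium income and undiscounted compound-Poisson claims; the paper derives an analytic upper bound instead, and every parameter value here is an assumption for illustration.

```python
import numpy as np


def ruin_prob_mc(M, c, lam, mu, t, n_paths=10000, seed=0):
    """Monte Carlo estimate of P(min_{n<=t} U_n < 0) for
    U_n = M + c*n - Y_n, where per-period claims are compound Poisson
    (rate lam per period) with Exp(mean=mu) claim sizes."""
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_paths):
        claims = 0.0
        for n in range(1, t + 1):
            k = rng.poisson(lam)                              # claims this period
            if k:
                claims += rng.exponential(mu, size=k).sum()
            if M + c * n - claims < 0:
                ruined += 1
                break
    return ruined / n_paths


# initial capital 10, premium 1.5 per period vs. expected claims 1.0 per period
p = ruin_prob_mc(M=10.0, c=1.5, lam=1.0, mu=1.0, t=50)
```

With a 50% safety loading, the Lundberg-type bound e^{-RM} suggests a ruin probability of only a few percent here, which the simulation should reproduce.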

6.
The classical ruin model is a core topic in risk theory and plays an important role in risk-management early warning for insurers. Because current domestic texts treat the model in a scattered fashion, this paper consolidates the continuous-time case of the classical ruin model into a single, clearly organized account.

7.
When financial time series exhibit heavy-tailed distributions and volatility clustering, traditional methods struggle to measure risk accurately. In addition to discussing how conditional and unconditional extreme-value methods can measure risk under heavy-tailed distributions, this paper estimates value at risk with historical simulation, the RiskMetrics approach, and conditional normal models, and finally compares the forecasting performance of the models on five evaluation criteria.
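Historical simulation, one of the benchmark methods this abstract compares, can be sketched in a few lines: VaR at level alpha is simply the empirical alpha-quantile of the loss distribution. The Student-t return sample below is simulated to mimic heavy tails and is not real market data.

```python
import numpy as np


def historical_var(returns, alpha=0.99):
    """Historical-simulation VaR: the empirical alpha-quantile of losses
    over the sample window (losses are negated returns)."""
    losses = -np.asarray(returns, dtype=float)
    return float(np.quantile(losses, alpha))


rng = np.random.default_rng(1)
# heavy-tailed daily returns, illustrative only: scaled Student-t(4)
rets = rng.standard_t(df=4, size=5000) * 0.01
var99 = historical_var(rets, alpha=0.99)
```

For heavy-tailed data such as this, a conditional-normal model would understate the same 99% quantile, which is the kind of discrepancy the paper's evaluation criteria are designed to detect.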

8.
In existing studies of mortgage-backed security pricing, the assumed interest-rate process is usually continuous, ignoring jumps caused by human intervention or sudden events. This paper models the interest rate as a jump-diffusion process, builds a proportional-hazards prepayment model reflecting the behavior of Chinese borrowers, and uses Monte Carlo simulation to price floating-rate mortgage-backed securities, examining how the parameters of the rate model affect the price. The simulations show that a higher jump frequency and more volatile jump sizes raise the security price, whereas a larger mean jump size lowers it.
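The rate paths feeding such a Monte Carlo pricer can be generated with an Euler scheme. The sketch below uses a Vasicek-type mean-reverting drift plus compound-Poisson jumps as a stand-in, since the abstract does not specify the diffusion part; the function name and all parameter values are assumptions for illustration.

```python
import numpy as np


def simulate_jump_diffusion_rate(r0, kappa, theta, sigma,
                                 jump_lam, jump_mu, jump_sig,
                                 T=1.0, steps=250, seed=0):
    """Euler discretization of dr = kappa*(theta - r)dt + sigma dW + J dN,
    where N is Poisson with intensity jump_lam and jump sizes J are
    Normal(jump_mu, jump_sig)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    r = np.empty(steps + 1)
    r[0] = r0
    for i in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        n_jumps = rng.poisson(jump_lam * dt)          # jumps in this step
        jump = rng.normal(jump_mu, jump_sig, size=n_jumps).sum() if n_jumps else 0.0
        r[i + 1] = r[i] + kappa * (theta - r[i]) * dt + sigma * dw + jump
    return r


path = simulate_jump_diffusion_rate(r0=0.03, kappa=0.5, theta=0.04,
                                    sigma=0.01, jump_lam=2.0,
                                    jump_mu=0.0, jump_sig=0.005)
```

Repricing the security over many such paths while varying jump_lam, jump_sig, and jump_mu is the comparative-statics exercise the abstract summarizes.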

9.
This paper reanalyzes the data from a 1995 study by the author and Saunders using residual centering, in order to remove the effect of multicollinearity among the variables on the regression model and to test the validity of Aaker and Keller's brand-extension model. The results show that consumers' attitudes toward brand extensions are influenced mainly by the main variables of the A&K model and much less by its interaction terms. The findings are discussed briefly.

10.
This paper proposes a time-series analysis method based on wavelet-domain hidden Markov models. It first reviews the discrete wavelet transform and statistical modeling of wavelet coefficients, covering the Gaussian mixture model for an individual coefficient, the hidden Markov tree structure linking coefficients across scales, model training, and likelihood computation. It then formulates a unified model for interpolation, smoothing, and prediction of time series and, applying maximum a posteriori estimation and Bayes' rule with the wavelet-domain HMM as prior knowledge, derives a new analysis method. The Euler-Lagrange equation of the reconstruction problem and the derivative of the log-likelihood are worked out in detail, reducing interpolation, smoothing, and prediction to solving a simple linear system. Finally, the HMM parameters and the reconstructed series are computed by alternating expectation-maximization (EM) and conjugate-gradient iterations. Experiments demonstrate the method's effectiveness on economic time series.

11.
Financial Mathematical Models   Cited: 15 (self-citations: 5, other citations: 10)
Mathematical models play a very important role for traders in financial markets. Their breakthrough applications in financial-market research were portfolio selection models and derivative-pricing models, from which the capital asset pricing model developed as a financial mathematical model of major practical value. The development and application of these models remain active research topics in finance. This paper gives an overview of several such models and their applications.

12.
A Comparative Study of SCD and ACD Models   Cited: 1 (self-citations: 0, other citations: 1)
耿克红  张世英 《管理学报》2008,5(1):44-48,117
Focusing on the ACD and SCD models developed in recent years for ultra-high-frequency financial data, this paper first examines the theoretical relationship of both models to the ARMA model, showing that each can be transformed into an ARMA form and that the two are therefore closely related. It then compares empirically how well the autocorrelation functions of the two models capture the sample autocorrelations of real data, and uses a simulation-based likelihood-ratio test to compare their goodness of fit to duration series. The results indicate that the SCD model fits ultra-high-frequency duration data better than the ACD model.
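The ACD(1,1) recursion underlying such comparisons is easy to simulate: durations are x_i = psi_i * eps_i with conditional mean psi_i = omega + alpha*x_{i-1} + beta*psi_{i-1} and unit-mean innovations. Parameter values below are illustrative, not estimates from the paper.

```python
import numpy as np


def simulate_acd(omega, alpha, beta, n, seed=0):
    """Simulate an ACD(1,1) duration series with i.i.d. unit-mean
    exponential innovations. Requires alpha + beta < 1 for stationarity."""
    rng = np.random.default_rng(seed)
    psi = omega / (1.0 - alpha - beta)   # start at the unconditional mean
    x = np.empty(n)
    for i in range(n):
        eps = rng.exponential(1.0)       # E[eps] = 1
        x[i] = psi * eps
        psi = omega + alpha * x[i] + beta * psi
    return x


durations = simulate_acd(omega=0.1, alpha=0.1, beta=0.8, n=5000)
```

The SCD model replaces the deterministic psi recursion with a latent stochastic log-duration process, which is what makes its likelihood intractable and motivates the simulation-based tests the abstract mentions.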

13.
A Review of Duration Models for Ultra-High-Frequency Financial Market Data   Cited: 1 (self-citations: 0, other citations: 1)
Statistical modeling of ultra-high-frequency data compensates for the shortcomings of traditional fixed-interval modeling and offers insight into financial-market microstructure, so in recent years it has become a new field of financial econometrics. This paper surveys the past decade's development of duration models for ultra-high-frequency financial market data and their parameter-estimation methods, compares the models and estimators, and identifies the main open problems currently facing the field.

14.
There has been an increasing interest in physiologically based pharmacokinetic (PBPK) models in the area of risk assessment. The use of these models raises two important issues: (1) How good are PBPK models for predicting experimental kinetic data? (2) How is the variability in the model output affected by the number of parameters and the structure of the model? To examine these issues, we compared a five-compartment PBPK model, a three-compartment PBPK model, and nonphysiological compartmental models of benzene pharmacokinetics. Monte Carlo simulations were used to take into account the variability of the parameters. The models were fitted to three sets of experimental data and a hypothetical experiment was simulated with each model to provide a uniform basis for comparison. Two main results are presented: (1) the difference is larger between the predictions of the same model fitted to different data sets than between the predictions of different models fitted to the same data; and (2) the type of data used to fit the model has a larger effect on the variability of the predictions than the type of model and the number of parameters.

15.
Critical infrastructure systems must be both robust and resilient in order to ensure the functioning of society. To improve the performance of such systems, we often use risk and vulnerability analysis to find and address system weaknesses. A critical component of such analyses is the ability to accurately determine the negative consequences of various types of failures in the system. Numerous mathematical and simulation models exist that can be used to this end. However, there are relatively few studies comparing the implications of using different modeling approaches in the context of comprehensive risk analysis of critical infrastructures. In this article, we suggest a classification of these models, which span from simple topologically‐oriented models to advanced physical‐flow‐based models. Here, we focus on electric power systems and present a study aimed at understanding the tradeoffs between simplicity and fidelity in models used in the context of risk analysis. Specifically, the purpose of this article is to compare performance estimates achieved with a spectrum of approaches typically used for risk and vulnerability analysis of electric power systems, and to evaluate whether simpler topological measures can be combined statistically to serve as a surrogate for physical-flow models. The results of our work provide guidance as to appropriate models or combinations of models to use when analyzing large‐scale critical infrastructure systems, where simulation times for the more advanced models quickly become insurmountable, severely limiting the extent of analyses that can be performed.

16.
Biological Models of Carcinogenesis and Quantitative Cancer Risk Assessment   Cited: 1 (self-citations: 0, other citations: 1)
Biologically-based models of carcinogenesis were originally developed to explain certain quantitative phenomena associated with carcinogenesis, and to provide a framework within which questions regarding the process could be addressed. Some limitations in the use of these models for quantitative cancer risk assessment are discussed.

17.
Computational models support environmental regulatory activities by providing the regulator an ability to evaluate available knowledge, assess alternative regulations, and provide a framework to assess compliance. But all models face inherent uncertainties because human and natural systems are always more complex and heterogeneous than can be captured in a model. Here, we provide a summary discussion of the activities, findings, and recommendations of the National Research Council's Committee on Regulatory Environmental Models, a committee funded by the U.S. Environmental Protection Agency to provide guidance on the use of computational models in the regulatory process. Modeling is a difficult enterprise even outside the potentially adversarial regulatory environment. The demands grow when the regulatory requirements for accountability, transparency, public accessibility, and technical rigor are added to the challenges. Moreover, models cannot be validated (declared true) but instead should be evaluated with regard to their suitability as tools to address a specific question. The committee concluded that these characteristics make evaluation of a regulatory model more complex than simply comparing measurement data with model results. The evaluation also must balance the need for a model to be accurate with the need for a model to be reproducible, transparent, and useful for the regulatory decision at hand. Meeting these needs requires model evaluation to be applied over the "life cycle" of a regulatory model with an approach that includes different forms of peer review, uncertainty analysis, and extrapolation methods than those for nonregulatory models.

18.
Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the United States. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate, and effects of hazardous chemicals and radioactive materials found at these sites. Although the U.S. Environmental Protection Agency (EPA), the U.S. Department of Energy (DOE), and the U.S. Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced directed guidance on models that must be used in these efforts. As a result, model selection is currently done on an ad hoc basis. This is administratively ineffective and costly, and can also result in technically inconsistent decision-making. To identify what models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE, and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study. A mail survey was conducted to identify models in use. The survey was sent to 550 persons engaged in the cleanup of hazardous and radioactive waste sites; 87 individuals responded. They represented organizations including federal agencies, national laboratories, and contractor organizations. The respondents identified 127 computer models that were being used to help support cleanup decision-making. There were a few models that appeared to be used across a large number of sites (e.g., RESRAD). In contrast, the survey results also suggested that most sites were using models which were not reported in use elsewhere. Information is presented on the types of models being used and the characteristics of the models in use. Also shown is a list of models available, but not identified in the survey itself.

19.
Multistage models are frequently applied in carcinogenic risk assessment. In their simplest form, these models relate the probability of tumor presence to some measure of dose. These models are then used to project the excess risk of tumor occurrence at doses frequently well below the lowest experimental dose. Upper confidence limits on the excess risk associated with exposures at these doses are then determined. A likelihood-based method is commonly used to determine these limits. We compare this method to two computationally intensive "bootstrap" methods for determining the 95% upper confidence limit on extra risk. The coverage probabilities and bias of likelihood-based and bootstrap estimates are examined in a simulation study of carcinogenicity experiments. The coverage probabilities of the nonparametric bootstrap method fell below 95% more frequently and by wider margins than the better-performing parametric bootstrap and likelihood-based methods. The relative bias of all estimators is seen to be affected by the amount of curvature in the true underlying dose-response function. In general, the likelihood-based method has the best coverage probability properties while the parametric bootstrap is less biased and less variable than the likelihood-based method. Ultimately, neither method is entirely satisfactory for highly curved dose-response patterns.
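The parametric bootstrap compared in this abstract can be sketched as follows, using a one-stage multistage model P(d) = 1 - exp(-(b0 + b1*d)) fitted to binomial tumor counts: refit the model to counts resampled from the fitted probabilities and take the 95th percentile of the resulting extra-risk estimates. The doses, tumor counts, and B = 200 replicates below are all hypothetical assumptions, not data from the study.

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical carcinogenicity experiment: dose groups, animals, tumor counts
doses = np.array([0.0, 0.5, 1.0, 2.0])
n_animals = np.array([50, 50, 50, 50])
tumors = np.array([2, 5, 10, 24])


def neg_loglik(params, d, n, y):
    """Binomial negative log-likelihood for P(d) = 1 - exp(-(b0 + b1*d))."""
    b0, b1 = np.maximum(params, 0.0)        # multistage constraint: b >= 0
    p = 1.0 - np.exp(-(b0 + b1 * d))
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (n - y) * np.log(1.0 - p))


def fit(d, n, y):
    res = minimize(neg_loglik, x0=np.array([0.05, 0.1]), args=(d, n, y),
                   method="Nelder-Mead")
    return np.maximum(res.x, 0.0)


def extra_risk(b1, dose):
    """(P(d) - P(0)) / (1 - P(0)) for the one-stage model."""
    return 1.0 - np.exp(-b1 * dose)


b0_hat, b1_hat = fit(doses, n_animals, tumors)
p_hat = 1.0 - np.exp(-(b0_hat + b1_hat * doses))

rng = np.random.default_rng(0)
boot_er = []
for _ in range(200):                        # small B to keep the sketch fast
    y_star = rng.binomial(n_animals, p_hat)  # resample from the fitted model
    _, b1_star = fit(doses, n_animals, y_star)
    boot_er.append(extra_risk(b1_star, dose=0.1))
ucl95 = float(np.quantile(boot_er, 0.95))
```

The nonparametric variant resamples animals within dose groups instead of drawing from the fitted probabilities, which is the design difference behind the coverage gap the abstract reports.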

20.
Identification of dynamic nonlinear panel data models is an important and delicate problem in econometrics. In this paper we provide insights that shed light on the identification of parameters of some commonly used models. Using these insights, we are able to show through simple calculations that point identification often fails in these models. On the other hand, these calculations also suggest that the model restricts the parameter to lie in a region that is very small in many cases, and the failure of point identification may, therefore, be of little practical importance in those cases. Although the emphasis is on identification, our techniques are constructive in that they can easily form the basis for consistent estimates of the identified sets.
