20 similar documents were found; search took 125 ms.
1.
2.
An Empirical Study of the Stochastic Behavior of Treasury Repo Rates in China's Interbank Bond Market (Cited by: 2; self-citations: 0; other citations: 2)
Following the approach of CKLS and using the generalized method of moments, this paper estimates and compares alternative continuous-time models of the short-term riskless rate on monthly repo-rate data for Chinese treasury bonds. The results show that, for the sample studied, models that allow the volatility of the interest rate to depend on its level describe short-rate dynamics better. The comparison of rate models also carries important implications for hedging interest-rate risk.
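The CKLS specification referenced above, dr = (alpha + beta*r)dt + sigma*r^gamma dW, can be sketched with a simple Euler discretization. This is only an illustration of level-dependent volatility, not the paper's GMM estimation; all parameter values below are hypothetical.

```python
import numpy as np

def simulate_ckls(r0, alpha, beta, sigma, gamma, dt, n, seed=0):
    """Euler discretization of the CKLS short-rate SDE:
       dr = (alpha + beta*r) dt + sigma * r**gamma dW.
    gamma > 0 makes volatility depend on the rate level."""
    rng = np.random.default_rng(seed)
    r = np.empty(n + 1)
    r[0] = r0
    for t in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        drift = (alpha + beta * r[t]) * dt
        diffusion = sigma * max(r[t], 0.0) ** gamma * dw
        r[t + 1] = max(r[t] + drift + diffusion, 1e-8)  # keep the rate positive
    return r

# Ten years of monthly rates from illustrative parameters (gamma = 1.5,
# the value CKLS found fit US data, makes volatility rise with the level).
path = simulate_ckls(r0=0.03, alpha=0.01, beta=-0.3,
                     sigma=0.1, gamma=1.5, dt=1/12, n=120)
print(len(path))
```

Setting gamma = 0 or gamma = 0.5 recovers the Vasicek and CIR special cases that such comparisons typically nest.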
3.
The notion of subordination used in conventional time-change studies rests on an independence assumption, yet price and trading volume are correlated. To allow trading volume to serve as a stochastic clock for price, this paper generalizes subordination and introduces correlated subordination, which broadens the scope of time-change research. The diffusion properties of processes under correlated subordination are then discussed, and asset pricing in economic time is studied, yielding results analogous to the capital asset pricing model and the arbitrage pricing model. Finally, using zero…
4.
5.
6.
7.
8.
9.
Consumer Evaluations of Brand Extensions: Testing the Aaker and Keller Model with Residual Centering (Cited by: 7; self-citations: 0; other citations: 7)
Using the residual-centering method, this paper reanalyzes the data from a 1995 study by the author and Saunders, in order to remove the effect of collinearity among the variables on the regression model and to test the validity of the Aaker and Keller (A&K) brand-extension model. The results show that consumers' attitudes toward brand extensions are driven mainly by the main-effect variables of the A&K model and only weakly by its interaction terms. The findings are briefly discussed.
10.
Time Series Analysis with Wavelet-Domain Hidden Markov Models: Smoothing, Interpolation, and Prediction (Cited by: 3; self-citations: 0; other citations: 3)
This paper proposes a time-series analysis method based on wavelet-domain hidden Markov models. It first introduces the discrete wavelet transform and builds statistical models for the wavelet coefficients, covering the Gaussian mixture model for an individual coefficient, the hidden Markov tree structure linking coefficients across scales, model training, and likelihood computation. It then formulates a unified mathematical model for interpolation, smoothing, and prediction of time series and, applying maximum a posteriori estimation and Bayes' rule with the wavelet-domain hidden Markov model as prior knowledge, derives a new method for analyzing time series. The Euler-Lagrange equation of the reconstruction problem and the derivative of the log-likelihood are derived in detail, reducing interpolation, smoothing, and prediction to the solution of a simple linear system. Finally, the model parameters and the reconstructed series are computed by alternating an expectation-maximization (EM) algorithm with a conjugate gradient method. Experiments demonstrate the method's effectiveness on time series from the economic domain.
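As a minimal illustration of the discrete wavelet transform underlying the method (not the paper's hidden Markov tree machinery), one level of the Haar transform and its inverse can be written directly; the sample signal is made up.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform: split a
    length-2n signal into approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar level: interleave the reconstructed samples."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)
assert np.allclose(haar_idwt(a, d), x)  # perfect reconstruction
```

The wavelet-domain HMM approaches described in the abstract place a Gaussian mixture on each such coefficient and tie the mixture states together across scales.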
11.
12.
A Comparative Study of the SCD and ACD Models (Cited by: 1; self-citations: 0; other citations: 1)
Focusing on the ACD and SCD models that have emerged in recent years for ultra-high-frequency financial data, this paper first examines the theoretical relationship between the ACD model, the SCD model, and ARMA models, showing that both duration models can be transformed into ARMA form and are therefore closely related. It then compares empirically how well the autocorrelation functions of the two models capture the sample autocorrelations of real data, and uses a likelihood-ratio test based on stochastic simulation to compare their goodness of fit to duration series. The conclusion is that the SCD model outperforms the ACD model in fitting ultra-high-frequency duration data.
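The ACD(1,1) recursion compared here, x_i = psi_i * eps_i with psi_i = omega + alpha*x_{i-1} + beta*psi_{i-1} and unit-mean exponential innovations, is easy to simulate; the parameter values below are illustrative, not estimated from data.

```python
import numpy as np

def simulate_acd(omega, alpha, beta, n, seed=0):
    """Simulate an ACD(1,1) duration series:
       x_i = psi_i * eps_i,
       psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1},
    with unit-mean exponential innovations eps_i."""
    rng = np.random.default_rng(seed)
    psi = omega / (1 - alpha - beta)  # start at the unconditional mean
    x = np.empty(n)
    for i in range(n):
        x[i] = psi * rng.exponential(1.0)
        psi = omega + alpha * x[i] + beta * psi
    return x

durations = simulate_acd(omega=0.1, alpha=0.1, beta=0.8, n=5000)
print(round(float(durations.mean()), 2))  # near omega/(1-alpha-beta) = 1.0
```

An SCD model replaces the observable recursion for psi with a latent autoregressive process for log psi, which is why simulation-based likelihood-ratio tests are needed for the comparison the abstract describes.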
13.
A Survey of Duration Models for Ultra-High-Frequency Financial Data (Cited by: 1; self-citations: 0; other citations: 1)
Statistical modeling of ultra-high-frequency data compensates for the shortcomings of modeling data sampled at fixed intervals and offers insight into the microstructure of financial markets; as a result, ultra-high-frequency financial data has become a new research area in financial econometrics in recent years. This paper surveys the development and main results of the past decade in duration-series modeling and parameter estimation for financial markets, compares the duration models and their estimation methods, and identifies the main problems facing current and future research in this area.
14.
Structure and Parameterization of Pharmacokinetic Models: Their Impact on Model Predictions (Cited by: 4; self-citations: 0; other citations: 4)
Tracey J. Woodruff, Frédéric Y. Bois, David Auslander, Robert C. Spear. Risk Analysis, 1992, 12(2): 189-201
There has been an increasing interest in physiologically based pharmacokinetic (PBPK) models in the area of risk assessment. The use of these models raises two important issues: (1) How good are PBPK models for predicting experimental kinetic data? (2) How is the variability in the model output affected by the number of parameters and the structure of the model? To examine these issues, we compared a five-compartment PBPK model, a three-compartment PBPK model, and nonphysiological compartmental models of benzene pharmacokinetics. Monte Carlo simulations were used to take into account the variability of the parameters. The models were fitted to three sets of experimental data and a hypothetical experiment was simulated with each model to provide a uniform basis for comparison. Two main results are presented: (1) the difference is larger between the predictions of the same model fitted to different data sets than between the predictions of different models fitted to the same data; and (2) the type of data used to fit the model has a larger effect on the variability of the predictions than the type of model and the number of parameters.
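The Monte Carlo idea in this abstract, propagating parameter variability into model predictions, can be sketched with a one-compartment model far simpler than the paper's PBPK models; the distributions and parameter values below are hypothetical.

```python
import numpy as np

def concentration(dose, v, k, t):
    """One-compartment model with first-order elimination:
       C(t) = (dose / V) * exp(-k * t)."""
    return dose / v * np.exp(-k * t)

def monte_carlo_concentration(dose, t, n=10000, seed=0):
    """Propagate lognormal variability in the volume of distribution V
    and the elimination rate k into the predicted concentration at t,
    returning the mean and a 5-95% interval."""
    rng = np.random.default_rng(seed)
    v = rng.lognormal(mean=np.log(40.0), sigma=0.2, size=n)  # liters
    k = rng.lognormal(mean=np.log(0.1), sigma=0.3, size=n)   # 1/hour
    c = concentration(dose, v, k, t)
    return float(c.mean()), np.quantile(c, [0.05, 0.95])

mean_c, (lo, hi) = monte_carlo_concentration(dose=100.0, t=2.0)
print(lo < mean_c < hi)
```

The paper's comparison amounts to asking how much intervals like (lo, hi) widen or shift as the compartmental structure and the fitting data change.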
15.
Topological Performance Measures as Surrogates for Physical Flow Models for Risk and Vulnerability Analysis for Electric Power Systems
Critical infrastructure systems must be both robust and resilient in order to ensure the functioning of society. To improve the performance of such systems, we often use risk and vulnerability analysis to find and address system weaknesses. A critical component of such analyses is the ability to accurately determine the negative consequences of various types of failures in the system. Numerous mathematical and simulation models exist that can be used to this end. However, there are relatively few studies comparing the implications of using different modeling approaches in the context of comprehensive risk analysis of critical infrastructures. In this article, we suggest a classification of these models, which span from simple topologically‐oriented models to advanced physical‐flow‐based models. Here, we focus on electric power systems and present a study aimed at understanding the tradeoffs between simplicity and fidelity in models used in the context of risk analysis. Specifically, the purpose of this article is to compare performance estimates achieved with a spectrum of approaches typically used for risk and vulnerability analysis of electric power systems and evaluate if more simplified topological measures can be combined using statistical methods to be used as a surrogate for physical flow models. The results of our work provide guidance as to appropriate models or combinations of models to use when analyzing large‐scale critical infrastructure systems, where simulation times quickly become insurmountable when using more advanced models, severely limiting the extent of analyses that can be performed.
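One of the simplest topological performance measures of the kind this abstract contrasts with physical-flow models is the fraction of nodes remaining in the largest connected component after failures; it needs only a breadth-first search. The five-node ring below is a toy network, not one of the paper's test systems.

```python
from collections import deque

def lcc_fraction(adj, removed=frozenset()):
    """Fraction of all nodes that lie in the largest connected
    component after removing a set of nodes (failed components)."""
    n = len(adj)
    seen = set()
    best = 0
    for s in adj:
        if s in removed or s in seen:
            continue
        comp, queue = 0, deque([s])
        seen.add(s)
        while queue:                      # breadth-first search
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen and v not in removed:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best / n

# Five-node ring network: node i is linked to its two neighbors.
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(lcc_fraction(ring))                  # 1.0 (fully connected)
print(lcc_fraction(ring, removed={0, 2}))  # 0.4 (ring splits; LCC is {3, 4})
```

Measures like this ignore impedances and power flows entirely, which is precisely the fidelity gap the article's statistical combination of topological measures tries to close.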
16.
Suresh H. Moolgavkar. Risk Analysis, 1994, 14(6): 879-882
Biologically-based models of carcinogenesis were originally developed to explain certain quantitative phenomena associated with carcinogenesis, and to provide a framework within which questions regarding the process could be addressed. Some limitations in the use of these models for quantitative cancer risk assessment are discussed.
17.
Computational models support environmental regulatory activities by providing the regulator an ability to evaluate available knowledge, assess alternative regulations, and provide a framework to assess compliance. But all models face inherent uncertainties because human and natural systems are always more complex and heterogeneous than can be captured in a model. Here, we provide a summary discussion of the activities, findings, and recommendations of the National Research Council's Committee on Regulatory Environmental Models, a committee funded by the U.S. Environmental Protection Agency to provide guidance on the use of computational models in the regulatory process. Modeling is a difficult enterprise even outside the potentially adversarial regulatory environment. The demands grow when the regulatory requirements for accountability, transparency, public accessibility, and technical rigor are added to the challenges. Moreover, models cannot be validated (declared true) but instead should be evaluated with regard to their suitability as tools to address a specific question. The committee concluded that these characteristics make evaluation of a regulatory model more complex than simply comparing measurement data with model results. The evaluation also must balance the need for a model to be accurate with the need for a model to be reproducible, transparent, and useful for the regulatory decision at hand. Meeting these needs requires model evaluation to be applied over the "life cycle" of a regulatory model with an approach that includes different forms of peer review, uncertainty analysis, and extrapolation methods than those for nonregulatory models.
18.
Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the United States. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate, and effects of hazardous chemicals and radioactive materials found at these sites. Although the U.S. Environmental Protection Agency (EPA), the U.S. Department of Energy (DOE), and the U.S. Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced directed guidance on models that must be used in these efforts. As a result, model selection is currently done on an ad hoc basis. This is administratively ineffective and costly, and can also result in technically inconsistent decision-making. To identify what models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE, and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study. A mail survey was conducted to identify models in use. The survey was sent to 550 persons engaged in the cleanup of hazardous and radioactive waste sites; 87 individuals responded. They represented organizations including federal agencies, national laboratories, and contractor organizations. The respondents identified 127 computer models that were being used to help support cleanup decision-making. There were a few models that appeared to be used across a large number of sites (e.g., RESRAD). In contrast, the survey results also suggested that most sites were using models which were not reported in use elsewhere. Information is presented on the types of models being used and the characteristics of the models in use. Also shown is a list of models available, but not identified in the survey itself.
19.
Multistage models are frequently applied in carcinogenic risk assessment. In their simplest form, these models relate the probability of tumor presence to some measure of dose. These models are then used to project the excess risk of tumor occurrence at doses frequently well below the lowest experimental dose. Upper confidence limits on the excess risk associated with exposures at these doses are then determined. A likelihood-based method is commonly used to determine these limits. We compare this method to two computationally intensive "bootstrap" methods for determining the 95% upper confidence limit on extra risk. The coverage probabilities and bias of likelihood-based and bootstrap estimates are examined in a simulation study of carcinogenicity experiments. The coverage probabilities of the nonparametric bootstrap method fell below 95% more frequently and by wider margins than the better-performing parametric bootstrap and likelihood-based methods. The relative bias of all estimators is seen to be affected by the amount of curvature in the true underlying dose-response function. In general, the likelihood-based method has the best coverage probability properties while the parametric bootstrap is less biased and less variable than the likelihood-based method. Ultimately, neither method is entirely satisfactory for highly curved dose-response patterns.
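The parametric bootstrap compared in this abstract can be sketched with a one-stage model, P(d) = 1 - exp(-(q0 + q1*d)), rather than a general multistage model; the tumor counts, doses, and group sizes below are made up for illustration.

```python
import numpy as np

def fit_one_stage(k0, n0, k1, n1, dose):
    """One-stage model P(d) = 1 - exp(-(q0 + q1*d)). With a control and a
    single dose group the estimates follow from the group proportions."""
    p0 = (k0 + 0.5) / (n0 + 1)           # small continuity correction
    p1 = max((k1 + 0.5) / (n1 + 1), p0)  # enforce q1 >= 0
    q0 = -np.log(1 - p0)
    q1 = (-np.log(1 - p1) - q0) / dose
    return q0, q1

def bootstrap_ucl(k0, n0, k1, n1, dose, low_dose, b=2000, seed=0):
    """95% upper confidence limit on extra risk 1 - exp(-q1*low_dose) by
    the parametric bootstrap: resample tumor counts from the fitted model,
    refit, and take the 95th percentile of the extra-risk estimates."""
    rng = np.random.default_rng(seed)
    q0, q1 = fit_one_stage(k0, n0, k1, n1, dose)
    p0 = 1 - np.exp(-q0)
    p1 = 1 - np.exp(-(q0 + q1 * dose))
    extra = np.empty(b)
    for i in range(b):
        k0b = rng.binomial(n0, p0)
        k1b = rng.binomial(n1, p1)
        _, q1b = fit_one_stage(k0b, n0, k1b, n1, dose)
        extra[i] = 1 - np.exp(-q1b * low_dose)
    return float(np.quantile(extra, 0.95))

# Hypothetical bioassay: 2/50 control tumors, 20/50 at dose 100;
# project extra risk down to a low dose of 1.
ucl = bootstrap_ucl(k0=2, n0=50, k1=20, n1=50, dose=100.0, low_dose=1.0)
print(0.0 < ucl < 1.0)
```

A nonparametric variant would resample animals from the observed groups instead of drawing counts from the fitted model, which is the distinction driving the coverage differences the abstract reports.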
20.
Identification of dynamic nonlinear panel data models is an important and delicate problem in econometrics. In this paper we provide insights that shed light on the identification of parameters of some commonly used models. Using these insights, we are able to show through simple calculations that point identification often fails in these models. On the other hand, these calculations also suggest that the model restricts the parameter to lie in a region that is very small in many cases, and the failure of point identification may, therefore, be of little practical importance in those cases. Although the emphasis is on identification, our techniques are constructive in that they can easily form the basis for consistent estimates of the identified sets.