Similar Documents
20 similar documents retrieved (search time: 484 ms)
1.
I focus on determining safety stocks in multistage manufacturing systems with serial or divergent structures, where end-item demands may be correlated both across products and over time. I show that these two types of correlation have opposite effects on the distribution of safety stocks over the manufacturing stages and that neglecting demand correlation can lead to significant deviations from the optimal buffer policy. Using base-stock control and assuming completely reliable internal supplies, I present a procedure for integrated multilevel safety stock optimization that can be applied to arbitrary serial and divergent systems even when demand is correlated jointly across products and over time. As I demonstrate in an example with autocorrelated demands of moving-average type, specific solution properties drastically reduce the computational effort of safety stock planning. Safety stocks determined in this way provide appropriate protection against demand uncertainty in material requirements planning systems.
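The effect of time correlation on buffers can be illustrated with a minimal stdlib-only sketch (my own illustration, not the paper's optimization procedure): under base-stock control the safety stock scales with the standard deviation of lead-time demand, and a nonzero lag-1 autocorrelation — as in an MA(1) demand process — inflates or shrinks that standard deviation. All numerical values here are illustrative.

```python
import math

def leadtime_demand_sd(sigma, L, rho1=0.0):
    """SD of total demand over L periods when per-period demand has
    variance sigma**2 and lag-1 autocorrelation rho1 (zero beyond
    lag 1, as for an MA(1) demand process)."""
    var = L * sigma**2 + 2 * (L - 1) * rho1 * sigma**2
    return math.sqrt(var)

def safety_stock(z, sigma, L, rho1=0.0):
    """Base-stock safety stock: safety factor z times lead-time demand SD."""
    return z * leadtime_demand_sd(sigma, L, rho1)

# Positive autocorrelation inflates the required buffer,
# negative autocorrelation shrinks it.
ss_iid = safety_stock(1.645, 10.0, 4)             # independent demand
ss_pos = safety_stock(1.645, 10.0, 4, rho1=0.3)   # positively correlated
ss_neg = safety_stock(1.645, 10.0, 4, rho1=-0.3)  # negatively correlated
```

Ignoring a positive autocorrelation of 0.3 here would understate the buffer by roughly 17%, which is the kind of deviation from the optimal policy the abstract warns about.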

2.
Market risk is measured by the daily log return of each futures contract. Using the value-at-risk (VaR) approach together with weighted kernel density estimation (WKDE) and the exponentially weighted moving average (EWMA) model, this paper builds a nonlinear risk-aggregation model for the market risk of a futures portfolio whose positions differ in size and can hedge one another. The model determines the maximum daily loss of a portfolio of contracts on the same commodity with different delivery months, and an empirical study confirms its practicality. The model has four features. First, WKDE is used to forecast the maximum daily loss of each contract's return; this closely tracks the actual behavior of futures returns and makes the VaR estimate more accurate. Second, accuracy is ensured by computing a dynamic, time-varying correlation matrix: the EWMA model forecasts the dynamic variance-covariance matrix, from which a more precise dynamic correlation matrix is obtained empirically. Third, the model accounts for risk hedging between long and short positions, avoiding the inaccuracy — overstated or understated risk — that arises when portfolio risks are simply added linearly, and thereby better preserving forecast precision. Fourth, by building the portfolio risk evaluation model on nonlinear risk aggregation, it addresses the linear risk aggregation problem of the SPAN system and yields more reasonable portfolio risk forecasts.
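The EWMA variance recursion behind this kind of VaR model can be sketched in a few stdlib-only lines (a generic RiskMetrics-style update with the conventional λ = 0.94, not the paper's full WKDE/correlation machinery; the return series is made up):

```python
import math

def ewma_variance(returns, lam=0.94):
    """EWMA update of the return variance:
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}**2."""
    sigma2 = returns[0] ** 2          # seed with the first squared return
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1.0 - lam) * r * r
    return sigma2

def normal_var(sigma, z=1.645, position=1.0):
    """One-day 95% VaR for a single position under a normal approximation."""
    return position * z * sigma

rets = [0.01, -0.02, 0.015, -0.005, 0.03]   # illustrative daily log returns
sig = math.sqrt(ewma_variance(rets))
var95 = normal_var(sig)
```

The dynamic variance-covariance matrix in the abstract applies the same recursion element-wise to products of return pairs, from which the time-varying correlation matrix follows.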

3.
Cointegrated bivariate nonstationary time series are considered in a fractional context, without allowance for deterministic trends. Both the observable series and the cointegrating error can be fractional processes. The familiar situation in which the respective integration orders are 1 and 0 is nested, but these values have typically been assumed known. We allow one or more of them to be unknown real values, in which case Robinson and Marinucci (2001, 2003) have justified least squares estimates of the cointegrating vector, as well as narrow‐band frequency‐domain estimates, which may be less biased. While consistent, these estimates do not always have optimal convergence rates, and they have nonstandard limit distributional behavior. We consider estimates formulated in the frequency domain, which consequently allow for a wide variety of (parametric) autocorrelation in the short memory input series, as well as time‐domain estimates based on autoregressive transformation. Both can be interpreted as approximating generalized least squares and Gaussian maximum likelihood estimates. The estimates share the same limiting distribution, having mixed normal asymptotics (yielding Wald test statistics with χ2 null limit distributions), irrespective of whether the integration orders are known or unknown, subject in the latter case to their estimation with adequate rates of convergence. The parameters describing the short memory stationary input series are √n‐consistently estimable, but the assumptions imposed on these series are much more general than ones of autoregressive moving average type. A Monte Carlo study of finite‐sample performance is included.

4.
We develop an econometric methodology to infer the path of risk premia from a large unbalanced panel of individual stock returns. We estimate the time‐varying risk premia implied by conditional linear asset pricing models where the conditioning includes both instruments common to all assets and asset‐specific instruments. The estimator uses simple weighted two‐pass cross‐sectional regressions, and we show its consistency and asymptotic normality under increasing cross‐sectional and time series dimensions. We address consistent estimation of the asymptotic variance by hard thresholding, and testing for asset pricing restrictions induced by the no‐arbitrage assumption. We derive the restrictions given by a continuum of assets in a multi‐period economy under an approximate factor structure robust to asset repackaging. The empirical analysis on returns for about ten thousand U.S. stocks from July 1964 to December 2009 shows that risk premia are large and volatile in crisis periods. They exhibit large positive and negative deviations from time‐invariant estimates, follow the macroeconomic cycles, and do not match risk premia estimates on standard sets of portfolios. The asset pricing restrictions are rejected for a conditional four‐factor model capturing market, size, value, and momentum effects.
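The two-pass idea can be sketched on synthetic single-factor data with the stdlib only (an unweighted, balanced-panel toy version of the estimator, not the paper's weighted unbalanced-panel method; all parameter values are invented): pass one estimates each asset's beta by time-series OLS, pass two runs a cross-sectional regression of returns on those betas each period, giving a path of risk-premium estimates.

```python
import random
import statistics

random.seed(0)
T, N = 200, 25
true_lambda = 0.05                      # per-period factor risk premium
f = [random.gauss(true_lambda, 0.1) for _ in range(T)]
betas_true = [0.5 + 0.06 * i for i in range(N)]
# Asset returns: beta_i * f_t plus idiosyncratic noise.
R = [[betas_true[i] * f[t] + random.gauss(0, 0.02) for t in range(T)]
     for i in range(N)]

# Pass 1: time-series OLS beta of each asset on the factor.
fbar = statistics.fmean(f)
fvar = sum((x - fbar) ** 2 for x in f)
beta_hat = [sum((f[t] - fbar) * R[i][t] for t in range(T)) / fvar
            for i in range(N)]

# Pass 2: each period, cross-sectional OLS of returns on estimated betas
# (no intercept) gives that period's risk-premium estimate.
bb = sum(b * b for b in beta_hat)
lam_t = [sum(beta_hat[i] * R[i][t] for i in range(N)) / bb for t in range(T)]
lam_hat = statistics.fmean(lam_t)       # average premium over the sample
```

The period-by-period series `lam_t` is the "path of risk premia" analogue: in crisis periods its realizations swing far from the time-averaged `lam_hat`, mirroring the deviations the abstract reports.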

5.
Two assumptions used in risk assessment are investigated: (1) the fraction of lifetime dose rate assumption holds that the risk from a fractional lifetime exposure at a given dose rate equals the risk from full lifetime exposure at that same fraction of the given dose rate; (2) the fraction of lifetime risk assumption holds that the risk from a fractional lifetime exposure at a given dose rate equals that same fraction of the risk from full lifetime exposure at the same dose rate. These two assumptions are equivalent when risk is a linear function of dose. Thus both can be thought of as generalizations of the assumption that cancer risk is proportional to the total accumulated lifetime dose (or average daily dose), which is often made to assess the risk from short-term exposures. In this paper, the age-specific cumulative hazard functions are derived using the two-stage model developed by Moolgavkar, Venzon, and Knudson for situations in which the exposure occurs during a single period or at a single instant. The two assumptions described above are examined for three types of carcinogens — initiator, completer, and promoter — in the context of the model. For an initiator and a completer, the two assumptions are equivalent in the low-dose region; for a promoter, the fraction of lifetime risk assumption is generally more conservative than the fraction of lifetime dose rate assumption. Tables are constructed to show that either assumption can both underestimate and overestimate the true risk for the three types of carcinogens.
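The equivalence claim for linear risk is easy to verify numerically (a self-contained arithmetic check with made-up dose-response functions, not the Moolgavkar-Venzon-Knudson hazard derivation): under a linear full-lifetime risk the two assumptions coincide, while under a quadratic one they diverge.

```python
def risk_linear(dose_rate, k=1e-3):
    """Full-lifetime risk, linear in dose rate (illustrative slope k)."""
    return k * dose_rate

def risk_quadratic(dose_rate, k=1e-3):
    """Full-lifetime risk, quadratic in dose rate (a nonlinear case)."""
    return k * dose_rate ** 2

def frac_dose_rate(risk_fn, dose_rate, frac):
    """Assumption (1): scale the dose rate by the lifetime fraction."""
    return risk_fn(frac * dose_rate)

def frac_risk(risk_fn, dose_rate, frac):
    """Assumption (2): scale the full-lifetime risk by the fraction."""
    return frac * risk_fn(dose_rate)

# Linear risk: the two assumptions coincide.
a_lin = frac_dose_rate(risk_linear, 10.0, 0.25)
b_lin = frac_risk(risk_linear, 10.0, 0.25)
# Quadratic risk: assumption (1) gives frac**2 times the full risk,
# assumption (2) gives frac times it, so they diverge.
a_quad = frac_dose_rate(risk_quadratic, 10.0, 0.25)
b_quad = frac_risk(risk_quadratic, 10.0, 0.25)
```

For the quadratic case the fraction of lifetime risk value exceeds the fraction of lifetime dose rate value, consistent with the abstract's finding that the former is generally the more conservative choice for a promoter.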

6.
International portfolio investment involves exchange rate risk in multiple currencies, and hedging each exposure separately with bilateral currency futures incurs high hedging costs. Drawing on the practice of US dollar index futures, this paper proposes an integrated hedging strategy based on RMB index futures. The empirical results show that, whether for a single currency asset or for diversified international equity-index and bond-index portfolios, introducing RMB index futures significantly reduces return volatility, strengthens resilience to exchange rate fluctuations, and widens the scope for returns, making it an effective instrument for integrated exchange rate hedging. The hedging efficiency of RMB index futures is significantly better than that of a basket of currency futures, and the advantage is more pronounced in developed-market equity indices. A dynamic hedging strategy based on the exponentially weighted moving average (EWMA) model lowers the sensitivity of RMB index futures returns to fluctuations in the equity or bond index markets and keeps the hedged position relatively neutral even under extreme market conditions.

7.
A hybrid ARIMA and support vector machines model in stock price forecasting
Ping-Feng Pai, Chih-Sheng Lin. Omega, 2005, 33(6): 497-505
Traditionally, the autoregressive integrated moving average (ARIMA) model has been one of the most widely used linear models in time series forecasting. However, the ARIMA model cannot easily capture nonlinear patterns. Support vector machines (SVMs), a novel neural network technique, have been successfully applied to nonlinear regression estimation problems. This investigation therefore proposes a hybrid methodology that exploits the unique strengths of the ARIMA model and the SVM model in stock price forecasting. Real stock price data sets were used to examine the forecasting accuracy of the proposed model. The results of the computational tests are very promising.
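The hybrid decomposition — a linear model for the level, a nonlinear model for its residuals — can be sketched with the stdlib only. In this sketch an AR(1) slope fitted by least squares plays the ARIMA role and a Nadaraya-Watson kernel smoother stands in for the SVM (the paper itself uses a true SVM); the series and all parameter values are synthetic.

```python
import math
import random

random.seed(1)
# Toy series: a linear AR(1) core plus a nonlinear term plus noise.
y = [0.0]
for _ in range(1, 300):
    y.append(0.6 * y[-1] + 0.3 * math.sin(y[-1]) + random.gauss(0, 0.05))

# Step 1: fit the linear part — an AR(1) slope by least squares.
x, z = y[:-1], y[1:]
xm, zm = sum(x) / len(x), sum(z) / len(z)
phi = (sum((a - xm) * (b - zm) for a, b in zip(x, z))
       / sum((a - xm) ** 2 for a in x))
resid = [b - phi * a for a, b in zip(x, z)]

# Step 2: model the residuals nonlinearly. The paper uses an SVM;
# a Nadaraya-Watson kernel smoother stands in for it here.
def kernel_predict(x0, xs, rs, h=0.2):
    w = [math.exp(-0.5 * ((x0 - xi) / h) ** 2) for xi in xs]
    return sum(wi * ri for wi, ri in zip(w, rs)) / sum(w)

def hybrid_forecast(last):
    """Hybrid one-step forecast: linear part plus residual correction."""
    return phi * last + kernel_predict(last, x, resid)

lin_sse = sum(r * r for r in resid)
hyb_sse = sum((z[i] - hybrid_forecast(x[i])) ** 2 for i in range(len(x)))
```

Because the residual model picks up the nonlinear structure the linear fit misses, the hybrid in-sample error is smaller than the linear-only error, which is the core motivation for the ARIMA+SVM combination.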

8.
Building on the log-periodic power law (LPPL) model, this paper constructs two methods for building dynamic confidence intervals for the critical point of a bubble: a rolling-window method and a fixed-start, moving-endpoint method. Taking the two bull-market crashes of China's CSI 300 index in 2007 and 2015 as the research object, both methods are used for out-of-sample prediction to compute the critical time of the bubble burst and its dynamic confidence interval. The results show that, as time progresses, the confidence interval for the critical time largely and stably covers the time at which the bubble actually burst. Compared with predicting the critical time using the LPPL model alone, the confidence interval approach better overcomes the randomness of the predicted critical time, clearly traces how the critical region of the market bubble evolves, and offers a reference for investors' risk management.

9.
Optimal portfolios maximizing expected utility under a nonparametric estimation framework
This paper studies the optimal portfolio selection problem within a framework of expected utility maximization and nonparametric estimation. Unlike most previous studies, which assume that asset returns follow particular distributions, no assumption is made about the type of return distribution. First, for a general utility function, a basic nonparametric formula for expected utility is derived from a kernel density estimate of the portfolio return, establishing the basic framework of the expected-utility-maximizing portfolio selection problem. Then, assuming the investor has a power utility function, an explicit nonparametric expression for expected utility is given together with a numerical algorithm for maximizing it. Finally, a numerical example is presented using real daily return data on 11 stocks listed on Chinese stock exchanges. The proposed nonparametric framework is general and can further be used to study portfolio management under various realistic conditions, such as practical inequality constraints and transaction costs.

10.
We deal with the question of whether estimating heterogeneous multiplicative sales response models without carry-over effects by either ordinary least squares or Gibbs sampling makes a difference when resources (such as advertising budgets, sales budgets, sales force sizes, or sales calls) must be allocated to sales units (such as sales districts, customer groups, or individual customers or prospects) in a profit-maximizing way and only short time series are available. To this end we generate artificial series on sales and allocations by stochastic simulation. These series are used to estimate multiplicative models whose coefficients are either specific to individual sales units or follow a hierarchical Bayesian framework. Ordinary least squares and Gibbs sampling serve as the estimation methods. Performance of the two methods is measured by recovery of optimal profits, which are computed on the basis of the known true parameter values. We first determine optimal allocations based on the plug-in method, which uses average coefficients to determine expected profits. Gibbs sampling always leads to profits nearer to the true optima. This advantage of Gibbs sampling is especially pronounced for combinations with high average elasticity, high variation of elasticity, and a high number of sales units. On the other hand, differences between Gibbs sampling and OLS become smaller as more observations become available. Optimization with expected profits taking parameter uncertainty (i.e., the distribution of parameters) into account leads to higher profits than the plug-in method, but the relative increases turn out to be rather small.

11.
This paper considers studentized tests in time series regressions with nonparametrically autocorrelated errors. The studentization is based on robust standard errors with truncation lag M=bT for some constant b∈(0, 1] and sample size T. It is shown that the nonstandard fixed‐b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small‐b limit distribution. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long‐run variance estimator. A plug‐in procedure for implementing this optimal bandwidth is suggested and simulations (not reported here) confirm that the new plug‐in procedure works well in finite samples.
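The object being studentized here is a kernel long-run variance estimate with truncation lag M = bT. A minimal stdlib-only sketch of that estimator (Bartlett weights, simulated AR(1) errors; the bandwidth constants and AR parameter are illustrative, not the paper's):

```python
import random

def long_run_variance(u, b):
    """Bartlett-kernel long-run variance estimate with truncation
    lag M = b*T, i.e. the fixed-b bandwidth rule of the paper."""
    T = len(u)
    M = max(1, int(b * T))
    ubar = sum(u) / T
    v = [x - ubar for x in u]
    lrv = sum(x * x for x in v) / T            # lag-0 autocovariance
    for j in range(1, M + 1):
        w = 1.0 - j / (M + 1.0)                # Bartlett weight
        gj = sum(v[t] * v[t - j] for t in range(j, T)) / T
        lrv += 2.0 * w * gj
    return lrv

random.seed(2)
# AR(1) errors with phi = 0.5 and unit innovation variance; their true
# long-run variance is 1 / (1 - 0.5)**2 = 4, well above the plain
# variance 1 / (1 - 0.25) ≈ 1.33.
u = [0.0]
for _ in range(2000):
    u.append(0.5 * u[-1] + random.gauss(0, 1))
lrv_small_b = long_run_variance(u, 0.05)
lrv_large_b = long_run_variance(u, 0.5)
```

With small b the estimate concentrates near the true long-run variance; with b as large as 0.5 it stays positive but is far noisier, which is why the fixed-b limit theory, rather than a consistency argument, is needed to studentize correctly.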

12.
Recent advances in risk assessment have led to the development of joint dose-response models to describe prenatal death and fetal malformation rates in developmental toxicity experiments. These models can be used to estimate the effective dose corresponding to a 5% excess risk for both these toxicological endpoints, as well as for overall toxicity. In this article, we develop optimal experimental designs for the estimation of the effective dose for developmental toxicity using joint Weibull dose-response models for prenatal death and fetal malformation. Based on an extended series of developmental studies, near-optimal designs for prenatal death, malformation, and overall toxicity were found to involve three dose groups: an unexposed control group, a high dose equal to the maximum tolerated dose, and a low dose above or comparable to the effective dose. The effect on the optimal designs of changing the number of implants and the degree of intra-litter correlation is also investigated. Although the optimal design has only three dose groups in most cases, practical considerations involving model lack of fit and estimation of the shape of the dose-response curve suggest that, in practice, suboptimal designs with more than three doses will often be preferred.

13.
The present study performs portfolio analysis using a multi-index model in the diagonal form. In a mean-variance framework, an alternative solution to a portfolio optimization problem is derived, providing analytical and computational improvements. This leads to a proof of a crucial functional property of cutoff rates of security performance in the solution, thus providing formal justification for a nonranking procedure of optimal portfolio selection. The robustness of the above functional property, and hence the nonranking procedure, is demonstrated numerically when the underlying normality assumption of security returns is replaced by a more general assumption of stable Paretian distributions.

14.
Based on the log-periodic power law (LPPL) model, this study translates the one-dimensional price fluctuations of a financial time series into multidimensional variables that reflect the microstructure of a market bubble. By dynamically monitoring these variables, the evolution of the bubble can be tracked and the critical point of its burst predicted, effectively reducing or guarding against the risk posed by the bursting of a financial asset bubble. To test the applicability of the LPPL model in Chinese financial markets, the model is evaluated on the Shanghai Composite Index, four continuous futures contracts, and two individual stocks. The empirical results show that when an asset price series exhibits super-exponentially accelerating oscillations, whether rising or falling, the model yields stable estimates and effectively predicts the critical time of the bubble burst.
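The LPPL log-price specification commonly written as ln p(t) = A + B(tc − t)^m [1 + C cos(ω ln(tc − t) + φ)] is straightforward to evaluate (a stdlib-only sketch with illustrative parameter values, not values fitted to any of the markets in the study):

```python
import math

def lppl_log_price(t, tc, A, B, C, m, omega, phi):
    """Log-periodic power law log-price near a critical time tc:
    super-exponential growth (B < 0, 0 < m < 1) decorated with
    log-periodic oscillations of angular frequency omega."""
    dt = tc - t                       # time remaining to the critical point
    return A + B * dt ** m * (1.0 + C * math.cos(omega * math.log(dt) + phi))

# Illustrative parameters only (not fitted to any market).
params = dict(tc=100.0, A=7.0, B=-0.8, C=0.1, m=0.5, omega=6.0, phi=0.0)
path = [lppl_log_price(t, **params) for t in range(90)]
```

Fitting amounts to estimating (tc, A, B, C, m, ω, φ) from an observed price window; the estimated tc is the predicted critical time of the bubble burst that the dynamic monitoring in the abstract tracks.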

15.
The basic concepts and application of spectral analysis are explained. Stationary time series and autocorrelation are first defined. Autocorrelation is related to the familiar concepts of variance and covariance. The use of autocorrelation analysis is explained in estimating the interdependent relationship of a time series over discrete time lags. In order to measure the behavior of the time series using autocorrelation, it would be necessary to examine a very large number of autocorrelation lags. Alternatively, the technique of Fourier analysis can be used to transform the autocorrelation function of the time series into a continuous function, termed a spectrum. The spectrum has a one-to-one correspondence with the autocorrelation function of the time series and has the advantage of representing all possible autocorrelations over the discrete time lags. The spectrum can then be examined as a measure of the behavior of the time series. Spectral analysis indicates the reliability of the analysis of autocorrelated variables when familiar statistical techniques such as sample means and variances are used. The application of spectral analysis to management science problems in three general areas is illustrated: (1) inventory demand, (2) transportation simulation, and (3) stock market price behavior. Spectral analysis was used to detect cycles and trends in the data. Analyses focused on the spectrum, which provides a measure of the relative contribution of cycles in a band of frequencies to the total variance of the data.
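The autocorrelation-spectrum correspondence can be demonstrated with the stdlib only (a textbook periodogram built as the cosine transform of the sample autocovariances; the test series is synthetic): a cycle hidden in noise shows up as a peak at its frequency.

```python
import math
import random

def spectrum(x, freqs):
    """Spectral density estimate at the given frequencies (cycles per
    step), computed as the cosine transform of the sample
    autocovariances — at Fourier frequencies this equals the periodogram."""
    n = len(x)
    xbar = sum(x) / n
    v = [xi - xbar for xi in x]
    gammas = [sum(v[t] * v[t - k] for t in range(k, n)) / n
              for k in range(n)]
    return [gammas[0] + 2.0 * sum(gammas[k] * math.cos(2 * math.pi * f * k)
                                  for k in range(1, n))
            for f in freqs]

random.seed(3)
n = 128
# A unit-amplitude cycle at frequency 0.125 buried in noise.
x = [math.sin(2 * math.pi * 0.125 * t) + random.gauss(0, 0.3)
     for t in range(n)]
fourier = [j / n for j in range(1, n // 2)]
spec = spectrum(x, fourier)
peak_freq = fourier[spec.index(max(spec))]
```

Scanning all autocorrelation lags one by one would be tedious; the single spectrum summarizes them, and its peak locates the cycle — the same use the abstract describes for inventory demand and stock price data.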

16.
This paper presents a simple two-step nonparametric estimator for a triangular simultaneous equation model. Our approach employs series approximations that exploit the additive structure of the model. The first step comprises the nonparametric estimation of the reduced form and the corresponding residuals. The second step is the estimation of the primary equation via nonparametric regression with the reduced form residuals included as a regressor. We derive consistency and asymptotic normality results for our estimator, including optimal convergence rates. Finally we present an empirical example, based on the relationship between the hourly wage rate and annual hours worked, which illustrates the utility of our approach.

17.
The choice of high-frequency covariance matrix estimator and of forecasting model jointly affect covariance forecasts, and hence the performance of volatility-timing portfolio strategies. When the number of assets is large, constructing high-frequency covariance estimators discards a great deal of data because of non-synchronous trading, lowering the efficiency of information use. This paper therefore applies the KEM estimator, which makes full use of intraday price information, to estimate the high-dimensional covariance matrix of Chinese stocks, and compares it with two commonly used covariance estimators. The three estimators are then used for out-of-sample forecasting with the multivariate heterogeneous autoregressive (HAR) model, the exponentially weighted moving average (EWMA) model, and short-, medium-, and long-horizon moving average models, and their economic value is compared under three risk-based portfolio strategies. An empirical study using tick-by-tick high-frequency data on 20 constituents of the SSE 50 index with differing liquidity finds the following. (1) In both calm and highly turbulent markets, the long-horizon moving average model is the best forecasting model for high-dimensional covariance estimators, achieving the lowest cost and the highest return across all volatility-timing strategies. (2) In calm markets, the KEM estimator is the best choice for high-dimensional covariance estimation, generally achieving the lowest cost and the highest return across strategies; in highly turbulent markets, volatility timing with the KEM estimator retains its cost advantage but is no longer dominant in returns. (3) In both calm and turbulent markets, the lowest cost is achieved by the equal-risk-contribution portfolio and the highest return by the minimum-variance portfolio. The study is the first to test the applicability of the KEM estimator in common volatility-timing strategies and the first to document empirically the superiority of the simplest-to-implement long-horizon moving average model for forecasting high-dimensional covariance matrices, findings of practical significance for investment decision-making and risk management.

18.
Noise makes the analysis of high-frequency data difficult in many respects. This paper studies the estimation of the realized volatility of financial asset returns from high-frequency data. Building on a discretized jump model, the price process is described by a mixed Poisson distribution rather than the traditional continuous diffusion model, and a noise assumption differing from the previous literature is proposed: starting from i.i.d. noise, the constraints are relaxed so that the noise remains independent but its intensity may vary over time, thereby improving on the traditional fixed-interval sampling scheme. To further improve the estimates, transaction time sampling (TrTS) is combined with RVAC(1), the first-order bias-corrected realized variance (RV) estimator. An empirical study of price data for stocks from different boards on two exchanges shows that, although the estimator may deviate from the latent true price parameters for some individual stocks, its estimates of realized volatility are on the whole fairly robust.
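The bias correction in RVAC(1) is simple to state in code: plain realized variance adds up squared returns, and the correction adds twice the lag-1 cross products, which offsets the upward bias that i.i.d. microstructure noise induces. A stdlib-only sketch on a simulated noisy random walk (all parameter values are illustrative):

```python
import random

def realized_variance(r):
    """Plain realized variance: sum of squared intraday returns."""
    return sum(x * x for x in r)

def rv_ac1(r):
    """First-order autocorrelation-corrected RV: adds twice the lag-1
    cross products, offsetting the bias from i.i.d. microstructure noise."""
    return (sum(x * x for x in r)
            + 2.0 * sum(r[i] * r[i + 1] for i in range(len(r) - 1)))

random.seed(4)
n = 5000
p = [0.0]
for _ in range(n):
    p.append(p[-1] + random.gauss(0, 0.01))      # efficient log-price
obs = [pi + random.gauss(0, 0.005) for pi in p]  # observed = price + noise
r = [obs[i + 1] - obs[i] for i in range(n)]
true_iv = n * 0.01 ** 2                          # integrated variance = 0.5

rv = realized_variance(r)
rvc = rv_ac1(r)
```

Here the noise inflates plain RV by roughly 2n·Var(noise) = 0.25 above the true integrated variance of 0.5, while RVAC(1) lands close to it; under the time-varying noise intensity of the paper the same correction applies with the corresponding noise variances.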

19.
An assumption of multivariate normality for a decision model is validated in this paper. Measurements for the independent variables of a bond rating model were taken from a sample of municipal bonds. Three methods for examining both univariate and multivariate normality (including normal probability plots) are described and applied to the bond data. The results imply, after applying normalizing transformations to four of the variables, that the data reasonably approximate multivariate normality, thereby validating a distributional requirement of the discriminant-analysis-based decision model. The methods described in the paper may also be used by others interested in examining multivariate normality assumptions of decision models.
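A quick univariate screen of the kind that precedes formal multivariate checks can be sketched with the stdlib (sample skewness and excess kurtosis on synthetic data — my own illustration, not the paper's three methods): a normal sample shows both near zero, a lognormal one does not, and a log transform, one of the normalizing transformations the paper applies, restores normality.

```python
import math
import random

def skew_kurtosis(x):
    """Sample skewness and excess kurtosis: both are near zero for
    normal data, so large values flag a violation worth transforming."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

random.seed(5)
normal = [random.gauss(0, 1) for _ in range(4000)]
skew_n, kurt_n = skew_kurtosis(normal)

lognormal = [math.exp(random.gauss(0, 1)) for _ in range(4000)]
skew_ln, kurt_ln = skew_kurtosis(lognormal)

# A log transform is the kind of normalizing transformation the paper
# applies: it maps the lognormal sample back to (near) normality.
skew_t, kurt_t = skew_kurtosis([math.log(v) for v in lognormal])
```

Variables passing such univariate screens (possibly after transformation) are then subjected to the joint multivariate checks, as in the bond rating application.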

20.
A general discussion of knowledge dependence in risk calculations shows that the assumption of independence underlying standard Monte Carlo simulation in uncertainty analysis is frequently violated. A model is presented for performing Monte Carlo simulation when the variabilities of the component failure probabilities are either negatively or positively coupled. The model is applied to examples in human reliability analysis and the results are compared to the results of Sandia Laboratories as published in the Peer Review Study and to recalculations using more recent methods of uncertainty analysis.
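Why independence matters can be shown with a stdlib-only sketch (a generic correlated-sampling construction with invented lognormal parameters, not the paper's specific coupling model): two failure probabilities are drawn from lognormals whose underlying normals are correlated, and the mean of their product, the series-system failure probability, shifts with the sign of the coupling.

```python
import math
import random

def correlated_lognormal_pair(mu1, s1, mu2, s2, rho, rng):
    """Draw two lognormal failure probabilities whose underlying
    normals have correlation rho (rho < 0 gives negative coupling)."""
    z1 = rng.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0, 1)
    return math.exp(mu1 + s1 * z1), math.exp(mu2 + s2 * z2)

def series_failure(rho, n=20000, seed=6):
    """Monte Carlo mean of p1*p2 — two components in series — under
    the given coupling between the two uncertainty distributions."""
    rng = random.Random(seed)
    mu, s = math.log(1e-3), 0.5          # illustrative medians and spreads
    total = 0.0
    for _ in range(n):
        p1, p2 = correlated_lognormal_pair(mu, s, mu, s, rho, rng)
        total += p1 * p2
    return total / n

indep = series_failure(0.0)
pos = series_failure(0.9)    # positive knowledge coupling
neg = series_failure(-0.9)   # negative knowledge coupling
```

Assuming independence when the states of knowledge are positively coupled understates the mean system failure probability here by about 20%, which is the kind of error the dependence-aware simulation model is designed to avoid.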

