Similar Literature
20 similar documents retrieved.
1.
Increased demands for more efficient information handling can be met with cheap computers and electronic communication. However, information structure and information coding are often relatively underdeveloped, particularly for more complex projects. The proposed system for structuring and coding, with main codes consistently connected to three concepts (activities, resources, and components), can give the logical flexibility and power needed for more efficient information handling in 'learning' organizations striving for reduced lead times, more efficient experience feedback, decentralization, and competence development.

2.
《生产规划与管理》2013,24(3):287-290

3.
4.
杨宏林  陈收  袁际军 《管理学报》2007,4(5):618-621
A study of the multiscale correlations of Shanghai Stock Exchange Composite Index (SSECI) returns (5-min data set) finds that, under the measurement framework of multifractal detrended fluctuation analysis (MF-DFA, which is suited to correlation analysis across multiple points of non-stationary series), the return series G and |G| have non-unique power-law correlation exponents over four orders of magnitude of time scale (roughly 10^1 x 5 to 10^5 x 5 min), with fluctuations of different magnitudes showing different long-range correlation characteristics. The power-law exponent spectrum of the return series G is wider than that of the series |G|, while its average spectral exponent is smaller than that of |G|; the series |G| therefore exhibits stronger positive persistence than G.
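The abstract does not include an implementation; the following is a minimal MF-DFA sketch in Python (hypothetical helper `mfdfa`, arbitrary scale and q grids, synthetic white-noise returns standing in for the SSECI 5-min data) showing how the generalized Hurst exponents h(q) behind the multifractal spectrum are estimated.

```python
# Minimal MF-DFA sketch (illustrative only, not the paper's exact implementation).
# Input: 1-D array of 5-minute log returns; the scale and q grids are arbitrary choices.
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """Return generalized Hurst exponents h(q) of the series x."""
    y = np.cumsum(x - np.mean(x))                        # profile of the series
    hq = []
    for q in qs:
        log_fq = []
        for s in scales:
            n_seg = len(y) // s
            f2 = []
            for start in (0, len(y) - n_seg * s):        # segment from both ends
                for v in range(n_seg):
                    seg = y[start + v * s : start + (v + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, order), t)
                    f2.append(np.mean((seg - trend) ** 2))
            f2 = np.asarray(f2)
            if q == 0:                                   # q = 0 requires a logarithmic average
                log_fq.append(0.5 * np.mean(np.log(f2)))
            else:
                log_fq.append(np.log(np.mean(f2 ** (q / 2))) / q)
            # slope of log F_q(s) against log s gives h(q)
        hq.append(np.polyfit(np.log(scales), log_fq, 1)[0])
    return np.array(hq)

rng = np.random.default_rng(0)
returns = rng.standard_normal(20000)                     # placeholder for SSECI 5-min returns
scales = np.unique(np.logspace(1.2, 3.5, 15).astype(int))
qs = np.arange(-5, 6)
print(mfdfa(returns, scales, qs))                        # close to 0.5 for white noise
```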

5.
In this article we introduce efficient Wald tests for testing the null hypothesis of the unit root against the alternative of the fractional unit root. In a local alternative framework, the proposed tests are locally asymptotically equivalent to the optimal Robinson Lagrange multiplier tests. Our results contrast with the tests for fractional unit roots, introduced by Dolado, Gonzalo, and Mayoral, which are inefficient. In the presence of short range serial correlation, we propose a simple and efficient two‐step test that avoids the estimation of a nonlinear regression model. In addition, the first‐order asymptotic properties of the proposed tests are not affected by the preestimation of short or long memory parameters.

6.
Building on an analysis of the constituent elements of organizational learning disabilities (structural, managerial, cultural, mental-model, and execution disabilities), this paper examines how learning disabilities weaken knowledge conversion, constructs a structural model of their influence on knowledge conversion, and proposes related research hypotheses. An empirical analysis of 259 valid questionnaires shows that the weakening effect of the five elements of organizational learning disabilities on the knowledge conversion process is, on the whole, quite pronounced, although the constraining effects of managerial disability on socialization, of structural and execution disabilities on externalization, and of cultural disability on combination are not significant; overall, organizational learning disability has a significant negative effect on the organization's knowledge conversion process.

7.
The grey relational degree decision model is built on the assumption that attributes are mutually independent, but in many practical problems attributes interact to some degree, which causes the model to break down. To address this, fuzzy integral theory is introduced and a grey fuzzy-integral relational degree decision model is constructed. To solve the model, Möbius transform coefficients based on attribute weights and inter-attribute interaction degrees are defined to compute a 2-additive fuzzy measure, where the attribute weights are jointly determined by the ordinal relation analysis method and a Schmidt-orthogonalized Mahalanobis-Taguchi system, and the interaction relations and interaction degrees between attributes are specified by experts. Using the assessment of the economic status of households eligible for low-rent housing as an example, the grey fuzzy-integral relational degree model is compared with the ordinary grey relational degree model; the comparison shows that the former yields more scientifically sound and reasonable decisions and has good application value.
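As a rough, hedged illustration of the aggregation step (not the paper's full decision model), the Choquet integral under a 2-additive fuzzy measure can be evaluated directly from its Möbius coefficients; every number below is invented for demonstration, not taken from the low-rent housing case.

```python
# Choquet integral under a 2-additive fuzzy measure, expressed via Möbius coefficients.
# Weights and interaction terms are made-up illustrative values, not from the paper.
import numpy as np

def choquet_2additive(x, singleton_m, pair_m):
    """x: attribute scores; singleton_m[i]: Mobius mass of {i}; pair_m[(i, j)]: Mobius mass of {i, j}."""
    value = sum(singleton_m[i] * x[i] for i in range(len(x)))
    value += sum(m_ij * min(x[i], x[j]) for (i, j), m_ij in pair_m.items())
    return value

# Three attributes; the Mobius masses must sum to 1 for a normalized measure.
singleton_m = np.array([0.4, 0.3, 0.2])
pair_m = {(0, 1): 0.15, (0, 2): -0.05, (1, 2): 0.0}   # positive = synergy, negative = redundancy
assert abs(singleton_m.sum() + sum(pair_m.values()) - 1.0) < 1e-9

scores = np.array([0.7, 0.5, 0.9])                     # e.g., normalized grey relational coefficients
print(choquet_2additive(scores, singleton_m, pair_m))
```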

8.
Crude oil is a commodity with both strategic and financial attributes, and its financial attribute means that its price fluctuations inevitably spill over into commodity futures markets. This paper studies the impact of international crude oil price fluctuations on Chinese commodity futures. Using daily return data from January 2001 to May 2017 for three representative commodity futures (Shanghai copper, Shanghai rubber, and soybeans), it examines the risk transmission from oil price fluctuations to the three futures using structural breaks in the correlation and a VaR quantile regression model. The results show that the correlation between Chinese commodity futures and the international crude oil price exhibits a roughly seven-year 'cyclical' pattern; from the return perspective, during the high-correlation period of 2008-2014 the returns of the three commodity futures co-move positively and markedly with international crude oil; and from the risk-transmission perspective, the transmission effects between the three futures and crude oil differ clearly and systematically across risk states. In particular, during the high-correlation period, the risk transmission from international crude oil to the three futures displays a 'step-like' pattern from high to low.
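A minimal sketch of the VaR-style quantile regression step, assuming simulated data in place of the actual Shanghai copper, rubber, and soybean futures and crude oil return series; the statsmodels `QuantReg` call shows how lower-tail (5%), median, and upper-tail (95%) transmission coefficients can be compared.

```python
# Quantile regression of a commodity futures return on the oil return at several quantiles.
# Data are simulated placeholders for the series used in the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
oil = rng.standard_normal(n) * 0.02                      # stand-in for crude oil returns
copper = 0.3 * oil + rng.standard_normal(n) * 0.015      # stand-in for SHFE copper returns

X = sm.add_constant(pd.Series(oil, name="oil_ret"))
for tau in (0.05, 0.50, 0.95):                           # lower tail ~ VaR state, median, upper tail
    res = sm.QuantReg(copper, X).fit(q=tau)
    print(f"tau={tau:.2f}  oil coefficient={res.params['oil_ret']:.3f}")
```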

9.
Research on convertible bond pricing based on a total least squares quasi-Monte Carlo method
Starting from the traditional least squares Monte Carlo simulation approach to convertible bond pricing, this paper reduces the estimation error effectively by using randomized Faure sequences and variance reduction techniques, and replaces ordinary least squares regression with total least squares regression, which accounts for errors in both the explanatory and the explained variables, to propose a total least squares quasi-Monte Carlo pricing method for convertible bonds, together with concrete algorithmic steps. An empirical analysis of the Yanjing convertible bond issued on 16 October 2002 compares the method with the traditional Monte Carlo approach in terms of the bond's theoretical value, the standard deviation of the estimates, and the running time of the model. The results show that the total least squares quasi-Monte Carlo method produces more reasonable values with smaller estimation error and shorter computation time, confirming its effectiveness for convertible bond pricing.
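Two ingredients of such a method can be sketched with invented parameters: quasi-random path generation (SciPy ships Sobol and Halton generators rather than Faure, so Sobol is used here as a stand-in low-discrepancy sequence) and the total least squares regression, solved via SVD, that replaces OLS when regressing discounted continuation values on basis functions of the underlying price. This illustrates the building blocks only, not the paper's full convertible bond algorithm.

```python
# (1) quasi-Monte Carlo GBM paths; (2) total least squares (TLS) regression step.
import numpy as np
from scipy.stats import qmc, norm

# (1) quasi-random geometric Brownian motion paths (illustrative parameters)
n_paths, n_steps, dt = 1024, 50, 1.0 / 50
s0, r, sigma = 100.0, 0.03, 0.25
u = qmc.Sobol(d=n_steps, scramble=True, seed=0).random(n_paths)
z = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))            # map uniforms to standard normals
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
paths = s0 * np.exp(log_paths)

# (2) TLS regression of continuation value y on basis X (errors allowed in X and y)
def tls(X, y):
    Z = np.column_stack([X, y])
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    v = vt[-1]                                        # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]

s_t = paths[:, 25]                                    # prices at an intermediate exercise date
y = np.maximum(110.0 - paths[:, -1], 0.0) * np.exp(-r * 25 * dt)   # toy discounted payoff
X = np.column_stack([np.ones(n_paths), s_t, s_t**2])  # polynomial basis, as in Longstaff-Schwartz
print("TLS coefficients:", tls(X, y))
```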

10.
Incident data about disruptions to the electric power grid provide useful information that can be used as inputs into risk management policies in the energy sector for disruptions from a variety of origins, including terrorist attacks. This article uses data from the Disturbance Analysis Working Group (DAWG) database, which is maintained by the North American Electric Reliability Council (NERC), to look at incidents over time in the United States and Canada for the period 1990-2004. Negative binomial regression, logistic regression, and weighted least squares regression are used to gain a better understanding of how these disturbances varied over time and by season during this period, and to analyze how characteristics such as number of customers lost and outage duration are related to different characteristics of the outages. The results of the models can be used as inputs to construct various scenarios to estimate potential outcomes of electric power outages, encompassing the risks, consequences, and costs of such outages.
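As a self-contained illustration of one of the model types mentioned (negative binomial regression), the sketch below fits a negative binomial GLM to simulated seasonal outage counts; the covariates and data are invented, not the NERC/DAWG variables, and the dispersion parameter is held fixed rather than jointly estimated.

```python
# Negative binomial GLM for disturbance counts by season and year (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
season = np.tile(["winter", "spring", "summer", "fall"], 15)   # 1990-2004, 4 seasons per year
year = np.repeat(np.arange(1990, 2005), 4)
lam = np.where(season == "summer", 6.0, 3.0)                   # assume more incidents in summer
counts = rng.negative_binomial(n=2, p=2 / (2 + lam))           # overdispersed counts with mean lam

data = pd.DataFrame({"count": counts, "season": season, "year": year})
model = smf.glm("count ~ C(season) + year", data=data,
                family=sm.families.NegativeBinomial()).fit()   # dispersion alpha held at its default
print(model.summary())
```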

11.
A model for the correlation analysis between operating performance and executive compensation in listed power generation companies
China's power generation industry is in transition from monopoly to competition and from a planned to a market system, and the compensation incentives of executives in power generation enterprises are receiving growing attention. Taking 38 power generation companies listed on the domestic securities market as the sample, this paper empirically studies the correlation between executive compensation and corporate operating performance over 2001-2003. Correlation and partial correlation analyses are first carried out for all years pooled and year by year, both on the full sample and on the sample with outliers removed; the quantitative indicators are then discretized into categories, and contingency table tests of independence are used to analyze the relationship between performance and compensation. The results show no correlation between the two. Finally, the results are interpreted in light of the actual situation of power generation enterprises, and suggestions are offered for the design of executive compensation systems in these enterprises.

12.
Within the Heath-Jarrow-Morton (HJM) framework, the forward rate volatility is specified as a random variable following a generalized mean-reverting square-root process, so as to capture the dynamics of latent stochastic volatility factors. By extending the drift restriction to relations among the volatility factors and allowing correlation between changes in interest rate volatility and interest rate movements, a generalized multi-factor HJM model is established. Within this framework, based on a class of specific volatility specifi...

13.
For stationary time series models with serial correlation, we consider generalized method of moments (GMM) estimators that use heteroskedasticity and autocorrelation consistent (HAC) positive definite weight matrices and generalized empirical likelihood (GEL) estimators based on smoothed moment conditions. Following the analysis of Newey and Smith (2004) for independent observations, we derive second order asymptotic biases of these estimators. The inspection of bias expressions reveals that the use of smoothed GEL, in contrast to GMM, removes the bias component associated with the correlation between the moment function and its derivative, while the bias component associated with third moments depends on the employed kernel function. We also analyze the case of no serial correlation, and find that the seemingly unnecessary smoothing and HAC estimation can reduce the bias for some of the estimators.
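The following sketch, on invented AR(1)-type moment conditions, shows two of the objects the abstract refers to: a Bartlett-kernel HAC estimate of the long-run variance of the moments (whose inverse serves as the GMM weight matrix) and kernel-smoothed moments of the kind used by smoothed GEL. It is generic and not tied to any model in the paper.

```python
# HAC long-run variance of moment conditions and uniform-kernel moment smoothing.
import numpy as np

def hac_long_run_variance(g, bandwidth):
    """g: (T, k) array of moment conditions; Bartlett weights up to `bandwidth` lags."""
    T = g.shape[0]
    g = g - g.mean(axis=0)
    omega = g.T @ g / T
    for j in range(1, bandwidth + 1):
        gamma_j = g[j:].T @ g[:-j] / T
        omega += (1 - j / (bandwidth + 1)) * (gamma_j + gamma_j.T)
    return omega

def smoothed_moments(g, bandwidth):
    """Truncated (uniform) kernel smoothing of the moment conditions over time."""
    T = g.shape[0]
    g_s = np.zeros_like(g, dtype=float)
    for t in range(T):
        lo, hi = max(0, t - bandwidth), min(T, t + bandwidth + 1)
        g_s[t] = g[lo:hi].mean(axis=0)
    return g_s

rng = np.random.default_rng(3)
e = rng.standard_normal(500)
u = np.empty(500)
u[0] = e[0]
for t in range(1, 500):                                   # AR(1) errors induce serial correlation
    u[t] = 0.5 * u[t - 1] + e[t]
g = np.column_stack([u, u * rng.standard_normal(500)])    # toy moment conditions
W = np.linalg.inv(hac_long_run_variance(g, bandwidth=5))  # GMM weight matrix
print(W.shape, smoothed_moments(g, bandwidth=5).shape)
```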

14.
This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It will be based on a fixed interval of time (e.g., a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular we provide confidence intervals for each of these quantities.
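A bare-bones realized covariation calculation on simulated intraday returns illustrates the objects studied: the realized covariance matrix over a fixed interval (one day), and the realized regression coefficient and correlation derived from it. All numbers are placeholders.

```python
# Realized covariance, regression beta, and correlation from high-frequency returns.
import numpy as np

rng = np.random.default_rng(4)
n = 288                                      # e.g., 5-minute returns over a 24-hour interval
common = rng.standard_normal(n) * 0.001
r1 = common + rng.standard_normal(n) * 0.0005
r2 = 0.8 * common + rng.standard_normal(n) * 0.0007
returns = np.column_stack([r1, r2])

realized_cov = returns.T @ returns           # sum of outer products of the return vectors
realized_beta = realized_cov[0, 1] / realized_cov[1, 1]
realized_corr = realized_cov[0, 1] / np.sqrt(realized_cov[0, 0] * realized_cov[1, 1])
print(realized_cov, realized_beta, realized_corr)
```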

15.
Tests based on the quantile regression process can be formulated like the classical Kolmogorov–Smirnov and Cramér–von Mises tests of goodness-of-fit employing the theory of Bessel processes as in Kiefer (1959). However, it is frequently desirable to formulate hypotheses involving unknown nuisance parameters, thereby jeopardizing the distribution free character of these tests. We characterize this situation as "the Durbin problem" since it was posed in Durbin (1973), for parametric empirical processes. In this paper we consider an approach to the Durbin problem involving a martingale transformation of the parametric empirical process suggested by Khmaladze (1981) and show that it can be adapted to a wide variety of inference problems involving the quantile regression process. In particular, we suggest new tests of the location shift and location-scale shift models that underlie much of classical econometric inference. The methods are illustrated with a reanalysis of data on unemployment durations from the Pennsylvania Reemployment Bonus Experiments. The Pennsylvania experiments, conducted in 1988-89, were designed to test the efficacy of cash bonuses paid for early reemployment in shortening the duration of insured unemployment spells.

16.
Based on nonparametric regression, this paper proposes an omitted variable test statistic that applies to both cross-sectional and time series data. Compared with the existing literature, the statistic not only avoids the problem of model misspecification but also has higher local power, being able to detect local alternatives that converge to the null hypothesis at a faster rate. The paper uses a single bandwidth to estimate the conditional joint expectation and the conditional marginal expectation, allowing the nonparametric estimation errors of both to jointly determine the asymptotic distribution of the statistic; this improves the finite sample properties of the statistic and avoids the tedious work of choosing multiple bandwidths and computing multiple bias terms. Monte Carlo simulations show that the statistic has good finite sample properties and higher power than the test of Ait-Sahalia et al. An empirical analysis using the statistic captures a relationship between the output gap and inflation that the F statistic cannot detect, confirming the applicability of a nonlinear output-inflation Phillips curve in China.
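The test statistic itself is not reproduced here, but the nonparametric ingredient can be illustrated with a Nadaraya-Watson kernel regression using a single Gaussian bandwidth; the data are simulated and `nw_regression` is a hypothetical helper, not the paper's estimator.

```python
# Gaussian-kernel Nadaraya-Watson regression with a single bandwidth (illustrative).
import numpy as np

def nw_regression(x, y, grid, h):
    """Estimate E[y | x] on `grid` with bandwidth h."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(5)
x = rng.uniform(-2, 2, 400)
y = np.sin(x) + 0.3 * rng.standard_normal(400)        # nonlinear relation a linear F test may miss
grid = np.linspace(-2, 2, 50)
print(nw_regression(x, y, grid, h=0.3)[:5])
```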

17.
Counterfactual distributions are important ingredients for policy analysis and decomposition analysis in empirical economics. In this article, we develop modeling and inference tools for counterfactual distributions based on regression methods. The counterfactual scenarios that we consider consist of ceteris paribus changes in either the distribution of covariates related to the outcome of interest or the conditional distribution of the outcome given covariates. For either of these scenarios, we derive joint functional central limit theorems and bootstrap validity results for regression‐based estimators of the status quo and counterfactual outcome distributions. These results allow us to construct simultaneous confidence sets for function‐valued effects of the counterfactual changes, including the effects on the entire distribution and quantile functions of the outcome as well as on related functionals. These confidence sets can be used to test functional hypotheses such as no‐effect, positive effect, or stochastic dominance. Our theory applies to general counterfactual changes and covers the main regression methods including classical, quantile, duration, and distribution regressions. We illustrate the results with an empirical application to wage decompositions using data for the United States. As a part of developing the main results, we introduce distribution regression as a comprehensive and flexible tool for modeling and estimating the entire conditional distribution. We show that distribution regression encompasses the Cox duration regression and represents a useful alternative to quantile regression. We establish functional central limit theorems and bootstrap validity results for the empirical distribution regression process and various related functionals.
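A bare-bones distribution regression sketch: the conditional distribution F(y|x) is modeled by running a separate logit of the indicator 1{Y <= y} on the covariates for each threshold y on a grid. The data and the covariate (illustrative wage and schooling variables) are simulated, and the sketch shows only the estimator's structure, not the paper's functional inference.

```python
# Distribution regression: a logit of 1{Y <= y} on covariates for each threshold y.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 1000
educ = rng.integers(8, 21, n).astype(float)               # years of schooling, illustrative
wage = 1.0 + 0.08 * educ + rng.standard_normal(n) * 0.5   # log wage, illustrative

X = sm.add_constant(educ)
thresholds = np.quantile(wage, [0.1, 0.25, 0.5, 0.75, 0.9])
fitted_F = {}
for y0 in thresholds:
    fit = sm.Logit((wage <= y0).astype(float), X).fit(disp=0)
    # predicted F(y0 | educ = 16), i.e., the conditional CDF at one covariate value
    fitted_F[round(y0, 2)] = float(fit.predict([[1.0, 16.0]])[0])
print(fitted_F)
```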

18.
Wavelet analysis is a new mathematical method developed as a unified field of science over the last decade or so. As a spatially adaptive analytic tool, wavelets are useful for capturing serial correlation where the spectrum has peaks or kinks, as can arise from persistent dependence, seasonality, and other kinds of periodicity. This paper proposes a new class of generally applicable wavelet‐based tests for serial correlation of unknown form in the estimated residuals of a panel regression model, where error components can be one‐way or two‐way, individual and time effects can be fixed or random, and regressors may contain lagged dependent variables or deterministic/stochastic trending variables. Our tests are applicable to unbalanced heterogenous panel data. They have a convenient null limit N(0,1) distribution. No formulation of an alternative model is required, and our tests are consistent against serial correlation of unknown form even in the presence of substantial inhomogeneity in serial correlation across individuals. This is in contrast to existing serial correlation tests for panel models, which ignore inhomogeneity in serial correlation across individuals by assuming a common alternative, and thus have no power against the alternatives where the average of serial correlations among individuals is close to zero. We propose and justify a data‐driven method to choose the smoothing parameter—the finest scale in wavelet spectral estimation, making the tests completely operational in practice. The data‐driven finest scale automatically converges to zero under the null hypothesis of no serial correlation and diverges to infinity as the sample size increases under the alternative, ensuring the consistency of our tests. Simulation shows that our tests perform well in small and finite samples relative to some existing tests.

19.
Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.
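As an illustration of the model class (not the article's fitted model), a Poisson GAM with spline terms can be set up with statsmodels' GAM support on simulated storm covariates; the variables `wind` and `rain` are invented placeholders for the hurricane characteristics used in the study.

```python
# Poisson GAM for outage counts with B-spline smooths of two simulated covariates.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(6)
n = 500
wind = rng.uniform(20, 70, n)                      # maximum gust wind speed, illustrative
rain = rng.uniform(0, 300, n)                      # rainfall, illustrative
outages = rng.poisson(np.exp(0.05 * wind + 0.004 * rain))

data = pd.DataFrame({"outages": outages, "wind": wind, "rain": rain})
splines = BSplines(data[["wind", "rain"]], df=[6, 6], degree=[3, 3])
gam = GLMGam.from_formula("outages ~ 1", data=data, smoother=splines,
                          family=sm.families.Poisson()).fit()
print(gam.summary())
```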

20.
A sequence of linear, monotonic, and nonmonotonic test problems is used to illustrate sampling-based uncertainty and sensitivity analysis procedures. Uncertainty results obtained with replicated random and Latin hypercube samples are compared, with the Latin hypercube samples tending to produce more stable results than the random samples. Sensitivity results obtained with the following procedures and/or measures are illustrated and compared: correlation coefficients (CCs), rank correlation coefficients (RCCs), common means (CMNs), common locations (CLs), common medians (CMDs), statistical independence (SI), standardized regression coefficients (SRCs), partial correlation coefficients (PCCs), standardized rank regression coefficients (SRRCs), partial rank correlation coefficients (PRCCs), stepwise regression analysis with raw and rank-transformed data, and examination of scatter plots. The effectiveness of a given procedure and/or measure depends on the characteristics of the individual test problems, with (1) linear measures (i.e., CCs, PCCs, SRCs) performing well on the linear test problems, (2) measures based on rank transforms (i.e., RCCs, PRCCs, SRRCs) performing well on the monotonic test problems, and (3) measures predicated on searches for nonrandom patterns (i.e., CMNs, CLs, CMDs, SI) performing well on the nonmonotonic test problems.
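A short sketch of the sampling step and a few of the listed sensitivity measures, using SciPy's Latin hypercube generator and an invented monotonic test function: correlation coefficients (CCs), rank correlation coefficients (RCCs), and standardized regression coefficients (SRCs).

```python
# Latin hypercube sampling followed by CC, RCC, and SRC sensitivity measures (toy model).
import numpy as np
from scipy.stats import qmc, pearsonr, spearmanr

sampler = qmc.LatinHypercube(d=2, seed=7)
u = sampler.random(n=200)
x = qmc.scale(u, l_bounds=[0.0, 0.0], u_bounds=[1.0, 10.0])    # map to the input ranges
y = np.exp(x[:, 0]) + 0.1 * x[:, 1] ** 2                        # invented monotonic test model

print("CC :", [pearsonr(x[:, j], y)[0] for j in range(2)])
print("RCC:", [spearmanr(x[:, j], y)[0] for j in range(2)])

# standardized regression coefficients: regress standardized output on standardized inputs
xs = (x - x.mean(axis=0)) / x.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(xs, ys, rcond=None)
print("SRC:", src)
```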

