Similar Articles (20 results)
1.
In acute toxicity testing, organisms are continuously exposed to progressively increasing concentrations of a chemical and deaths of test organisms are recorded at several selected times. The results of the test are traditionally summarized by a dose-response curve, and the time course of effect is usually ignored for lack of a suitable model. A model which integrates the combined effects of dose and exposure duration on response is derived from the biological mechanisms of aquatic toxicity, and a statistically efficient approach for estimating acute toxicity by fitting the proposed model is developed in this paper. The proposed procedure has been implemented as software, and a typical data set is used to illustrate the theory and procedure. The new statistical technique is also tested against a database covering a variety of chemicals and fish species.

2.
For the customer's expected contribution, the key computational factor in customer asset valuation, this paper proposes a least-squares regression approach that fits a functional form for the expected contribution and substitutes it into the customer asset formula, yielding a customer asset measurement model. The method is illustrated with a case study of fixed-asset loans to catering and entertainment clients at a sub-branch of China Construction Bank, and the resulting customer asset estimates are subjected to goodness-of-fit and significance tests.
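Entry 2 describes fitting an expected-contribution function by least squares and plugging it into a customer asset formula. The paper's functional form and data are not given here, so the following is only a minimal sketch with hypothetical figures: it fits a quadratic trend of annual contribution on time with NumPy least squares and discounts the fitted future contributions in a customer-lifetime-value style.

```python
import numpy as np

# Hypothetical yearly contributions from a loan client (illustrative figures only).
years = np.array([1, 2, 3, 4, 5], dtype=float)
contrib = np.array([12.0, 15.5, 18.2, 22.0, 24.8])

# Fit contribution(t) = a*t^2 + b*t + c by ordinary least squares.
X = np.column_stack([years**2, years, np.ones_like(years)])
coef, residuals, rank, _ = np.linalg.lstsq(X, contrib, rcond=None)

def expected_contribution(t):
    """Fitted expected-contribution function."""
    return coef[0] * t**2 + coef[1] * t + coef[2]

# Discount the fitted contributions over a hypothetical 3-year horizon
# to obtain a customer-asset style present value.
discount_rate = 0.06
future_years = np.arange(6, 9)
customer_asset = sum(expected_contribution(t) / (1 + discount_rate) ** (t - 5)
                     for t in future_years)

# Simple goodness-of-fit check (R^2), analogous to the paper's fit tests.
pred = X @ coef
ss_res = np.sum((contrib - pred) ** 2)
ss_tot = np.sum((contrib - contrib.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"coefficients={coef}, R^2={r_squared:.3f}, customer asset={customer_asset:.2f}")
```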

3.
Peng Liu  Zhizhong Li 《Risk analysis》2014,34(9):1706-1719
There is a scarcity of empirical data on human error for human reliability analysis (HRA). This situation can increase the variability and impair the validity of HRA outcomes in risk analysis. In this work, a microworld study was used to investigate the effects of performance shaping factors (PSFs) and their interrelationships and combined effects on the human error probability (HEP). The PSFs involved were task complexity, time availability, experience, and time pressure. The empirical data obtained were compared with predictions by the Standardized Plant Analysis Risk-Human Reliability Method (SPAR-H) and data from other sources. The comparison included three aspects: (1) HEP, (2) relative effects of the PSFs, and (3) error types. Results showed that the HEP decreased with experience and time availability levels. The significant relationship between task complexity and the HEP depended on time availability and experience, and time availability affected the HEP through time pressure. The empirical HEPs were higher than the HEPs predicted by SPAR-H under different PSF combinations, showing the tendency of SPAR-H to produce relatively optimistic results in our study. The relative effects of two PSFs (i.e., experience/training and stress/stressors) in SPAR-H agreed to some extent with those in our study. Several error types agreed well with those from operational experience and a database for nuclear power plants (NPPs).
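SPAR-H, referenced in entry 3, scales a nominal error probability by PSF multipliers, with an adjustment when several negative PSFs apply. The sketch below illustrates that generic calculation with made-up multiplier values; it is not the study's data or the full SPAR-H worksheet.

```python
def spar_h_hep(nominal_hep, psf_multipliers):
    """Rough SPAR-H style HEP: nominal HEP scaled by the composite PSF multiplier.

    When three or more PSFs are assigned negative (>1) multipliers, SPAR-H
    applies an adjustment factor so the result stays below 1.
    """
    composite = 1.0
    negative_count = 0
    for m in psf_multipliers:
        composite *= m
        if m > 1.0:
            negative_count += 1
    if negative_count >= 3:
        return (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)
    return min(nominal_hep * composite, 1.0)

# Hypothetical action task: nominal HEP 1e-3 with multipliers for complexity,
# available time, and stress (illustrative values only).
print(spar_h_hep(1e-3, [2.0, 10.0, 2.0]))
```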

4.
This paper presents an algorithm to obtain near optimal solutions for the Steiner tree problem in graphs. It is based on a Lagrangian relaxation of a multi-commodity flow formulation of the problem. An extension of the subgradient algorithm, the volume algorithm, has been used to obtain lower bounds and to estimate primal solutions. It was possible to solve several difficult instances from the literature to proven optimality without branching. Computational results are reported for problems drawn from the SteinLib library.

5.
Given a population of cardinality q^r that contains a positive subset P of cardinality p, we give a trivial two-stage method whose first-stage pools each contain q^(r-2) objects. We assume that errors occur in the first stage. We give an algorithm that uses the results of the first stage to generate a set CP of candidate positives with |CP| ≤ (r + 1)q. We give the expected value of |CP ∩ P|. At most (r + 1)q trivial second-stage tests are needed to identify all the positives in CP. We assume that the second-stage tests are error free.

6.
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT, and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects of objective and epistemic dependences on the TE probability. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present).
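The Fréchet bounds used in entry 6 bracket the probability of a conjunction or disjunction of events when nothing is known about their dependence. The sketch below applies naive interval propagation of those bounds to a toy two-gate fault tree; the article's six-BE tree and its DEnv treatment of epistemic dependence are not reproduced.

```python
def frechet_and(p_a, p_b):
    """Bounds on P(A and B) when the dependence between A and B is unknown."""
    return max(0.0, p_a + p_b - 1.0), min(p_a, p_b)

def frechet_or(p_a, p_b):
    """Bounds on P(A or B) when the dependence between A and B is unknown."""
    return max(p_a, p_b), min(1.0, p_a + p_b)

# Toy fault tree: TE = (BE1 AND BE2) OR BE3, with point-valued BE probabilities.
p1, p2, p3 = 0.05, 0.10, 0.02
and_lo, and_hi = frechet_and(p1, p2)
te_lo, _ = frechet_or(and_lo, p3)   # propagate the lower AND bound
_, te_hi = frechet_or(and_hi, p3)   # propagate the upper AND bound
print(f"AND gate bounds: [{and_lo:.3f}, {and_hi:.3f}]")
print(f"TE probability bounds: [{te_lo:.3f}, {te_hi:.3f}]")
```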

7.
A k-decomposition of a tree is a process in which the tree is recursively partitioned into k edge-disjoint subtrees until each subtree contains only one edge. We investigate how many levels are sufficient to decompose all the edges of a tree. In this paper, we show that any n-edge tree can be 2-decomposed (and 3-decomposed) within at most ⌈1.44 log n⌉ (and ⌈log n⌉, respectively) levels. Extremal trees are given to show that the bounds are asymptotically tight. Based on this result, we design an improved approximation algorithm for the minimum ultrametric tree.

8.
Finite mixture models, that is, weighted averages of parametric distributions, provide a powerful way to extend parametric families of distributions to fit data sets not adequately fit by a single parametric distribution. First-order finite mixture models have been widely used in the physical, chemical, biological, and social sciences for over 100 years. Using maximum likelihood estimation, we demonstrate how a first-order finite mixture model can represent the large variability in data collected by the U.S. Environmental Protection Agency for the concentration of Radon 222 in drinking water supplied from ground water, even when 28% of the data fall at or below the minimum reporting level. Extending the use of maximum likelihood, we also illustrate how a second-order finite mixture model can separate and represent both the variability and the uncertainty in the data set.
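Entry 8 fits first- and second-order finite mixtures by maximum likelihood. The radon data and the below-reporting-level treatment are not reproduced here; the following is a minimal EM sketch that fits a two-component Gaussian mixture to synthetic log-scale data, ignoring the censoring complication.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic log-concentrations drawn from two components (illustrative only).
data = np.concatenate([rng.normal(0.0, 0.5, 700), rng.normal(2.0, 0.8, 300)])

# EM for a two-component Gaussian mixture: weights w, means mu, std devs sd.
w = np.array([0.5, 0.5])
mu = np.array([data.min(), data.max()])
sd = np.array([data.std(), data.std()])

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(200):
    # E-step: responsibility of each component for each observation.
    dens = np.stack([w[k] * normal_pdf(data, mu[k], sd[k]) for k in range(2)])
    resp = dens / dens.sum(axis=0)
    # M-step: update weights, means, and standard deviations.
    nk = resp.sum(axis=1)
    w = nk / len(data)
    mu = (resp * data).sum(axis=1) / nk
    sd = np.sqrt((resp * (data - mu[:, None]) ** 2).sum(axis=1) / nk)

print("weights", w, "means", mu, "sds", sd)
```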

9.
This paper takes the default risk of convertible bonds into account and studies how to price them with a trinomial tree model under default risk. First, the Black-Scholes formula is used to estimate the firm's default probability per unit time. Second, when valuing the straight-bond component, the yield on corporate bonds of firms with similar operating performance and comparable risk is used as the discount rate for the cash flows, so that the present value reflects the corresponding default risk; when valuing the embedded call option, the default probability is introduced into the trinomial tree and the sizes and probabilities of the up and down moves are recalculated, giving a trinomial pricing model that incorporates default risk. Finally, the model is applied to an actual convertible bond traded in the Chinese market, the 新钢转债 (Xinsteel convertible bond), and the results are discussed.
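Entry 9 first backs a per-period default probability out of the Black-Scholes framework before feeding it into the trinomial tree. The paper's calibration is not reproduced here; the sketch below shows the standard Merton-style calculation, treating equity as a call on firm assets and taking N(-d2) as the risk-neutral default probability over the horizon, with hypothetical inputs.

```python
from math import log, sqrt
from statistics import NormalDist

def merton_default_probability(V, D, r, sigma, T):
    """Risk-neutral probability that firm value V falls below debt face value D
    at horizon T, i.e. N(-d2) in the Black-Scholes/Merton setup."""
    d1 = (log(V / D) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return NormalDist().cdf(-d2)

# Hypothetical firm: asset value 120, debt 100, risk-free rate 3%,
# asset volatility 25%, one-year horizon.
p_default = merton_default_probability(V=120.0, D=100.0, r=0.03, sigma=0.25, T=1.0)
print(f"one-year default probability ≈ {p_default:.4f}")
```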

10.
Risk Constraint Mechanisms for Active Portfolio Investment under Tracking Error
TEV (Tracking Error Volatility) optimization is a popular approach in active asset management, but its inherent flaws can lead fund managers to act in ways that expose investors' assets to greater risk, creating a principal-agent problem. Jorion (2003) argued that constant-TEV optimization can remedy the shortcomings of TEV optimization. By studying both TEV optimization and constant-TEV optimization, this paper finds that Jorion's (2003) conclusion is not entirely correct and provides an in-depth analysis in terms of cost, efficiency, the role of the benchmark portfolio, and risk preferences. A more effective risk constraint mechanism is also designed.

11.
This paper presents a new approach to estimation and inference in panel data models with a general multifactor error structure. The unobserved factors and the individual-specific errors are allowed to follow arbitrary stationary processes, and the number of unobserved factors need not be estimated. The basic idea is to filter the individual-specific regressors by means of cross-section averages such that asymptotically as the cross-section dimension (N) tends to infinity, the differential effects of unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by least squares applied to auxiliary regressions where the observed regressors are augmented with cross-sectional averages of the dependent variable and the individual-specific regressors. A number of estimators (referred to as common correlated effects (CCE) estimators) are proposed and their asymptotic distributions are derived. The small sample properties of mean group and pooled CCE estimators are investigated by Monte Carlo experiments, showing that the CCE estimators have satisfactory small sample properties even under a substantial degree of heterogeneity and dynamics, and for relatively small values of N and T.
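The CCE idea in entry 11 is to augment the observed regressors with cross-section averages of the dependent variable and the regressors and then apply least squares. The sketch below is a heavily simplified pooled version on synthetic data, with the averages entering as common extra regressors; the actual CCE estimators allow individual-specific loadings on those averages.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 50, 40                       # cross-section units and time periods
f = rng.normal(size=T)              # unobserved common factor
beta = 2.0

# Panel with a factor error structure: x loads on f, and so does the error.
gamma = rng.normal(1.0, 0.3, size=N)       # factor loadings in x
lam = rng.normal(0.5, 0.3, size=N)         # factor loadings in the error
x = gamma[:, None] * f + rng.normal(size=(N, T))
y = beta * x + lam[:, None] * f + rng.normal(size=(N, T))

# Cross-section averages used to proxy the common factor.
xbar = x.mean(axis=0)               # shape (T,)
ybar = y.mean(axis=0)

# Pooled least squares of y on x augmented with the cross-section averages.
X = np.column_stack([x.ravel(),
                     np.tile(xbar, N),
                     np.tile(ybar, N),
                     np.ones(N * T)])
coef, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
print(f"estimated beta ≈ {coef[0]:.3f} (true value 2.0)")
```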

12.
This paper systematically examines a key step of the CreditRisk+ model: approximating the distribution of obligor default events with a Poisson distribution. We first analyze the role the Poisson distribution plays in CreditRisk+ and prove theoretically that this approximation causes the economic capital computed by the model to overstate the actual risk level of the loan portfolio. Taking as a reference the economic capital obtained by Monte Carlo simulation with Bernoulli (two-point) default events, we then run sensitivity experiments on how the error in economic capital induced by the approximation varies with the obligor default probability, and find that keeping this error within 10% requires the obligor default probability not to exceed 0.2.
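Entry 12 compares economic capital under a Poisson approximation of default events with a Monte Carlo benchmark using Bernoulli (two-point) defaults. The sketch below mirrors that comparison on a hypothetical homogeneous portfolio (not the paper's data): it simulates portfolio losses both ways and compares the 99.9% loss quantile in excess of expected loss as a crude economic capital figure.

```python
import numpy as np

rng = np.random.default_rng(42)
n_obligors, p_default, exposure = 200, 0.10, 1.0   # hypothetical homogeneous portfolio
n_sims = 50_000

# Reference case: each obligor defaults at most once (Bernoulli / two-point events).
bernoulli_losses = rng.binomial(1, p_default, size=(n_sims, n_obligors)).sum(axis=1) * exposure

# CreditRisk+ style approximation: total default count treated as Poisson with the same mean.
poisson_losses = rng.poisson(p_default * n_obligors, size=n_sims) * exposure

def economic_capital(losses, q=0.999):
    """Crude economic capital: high loss quantile minus expected loss."""
    return np.quantile(losses, q) - losses.mean()

ec_bernoulli = economic_capital(bernoulli_losses)
ec_poisson = economic_capital(poisson_losses)
print(f"Bernoulli EC = {ec_bernoulli:.1f}, Poisson EC = {ec_poisson:.1f}, "
      f"relative difference = {(ec_poisson - ec_bernoulli) / ec_bernoulli:.1%}")
```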

13.
Adele Bergin 《LABOUR》2015,29(2):194-223
Self-reported tenure is often used to determine job changes. We show there are substantial inconsistencies in these responses; consequently, we risk misclassifying job changes as stays and vice versa. An estimator from Hausman et al. is applied to a job change model for Ireland, and we find that ignoring misclassification may substantially underestimate the true number of changes and lead to diminished covariate effects. The main contribution of the paper is to control for misclassification when estimating the wage effects of job mobility. A two-step approach is adopted. We find ignoring misclassification leads to a significant downwards bias in the wage impact, and we provide an estimate that corrects for measurement error.

14.
The appearance of measurement error in exposure and risk factor data potentially affects any inferences regarding variability and uncertainty because the distribution representing the observed data set deviates from the distribution that represents an error-free data set. A methodology for improving the characterization of variability and uncertainty with known measurement errors in data is demonstrated in this article based on an observed data set, known measurement error, and a measurement-error model. A practical method for constructing an error-free data set is presented and a numerical method based upon bootstrap pairs, incorporating two-dimensional Monte Carlo simulation, is introduced to address uncertainty arising from measurement error in selected statistics. When measurement error is a large source of uncertainty, substantial differences between the distribution representing variability of the observed data set and the distribution representing variability of the error-free data set will occur. Furthermore, the shape and range of the probability bands for uncertainty differ between the observed and error-free data set. Failure to separately characterize contributions from random sampling error and measurement error will lead to bias in the variability and uncertainty estimates. However, a key finding is that total uncertainty in mean can be properly quantified even if measurement and random sampling errors cannot be separated. An empirical case study is used to illustrate the application of the methodology.
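Entry 14 separates measurement error from true variability and uses bootstrap simulation for uncertainty. Its exact error model and algorithm are not reproduced here; the sketch below is a simplified illustration on synthetic data with known additive Gaussian measurement error: an error-corrected variance estimate for variability, and a bootstrap of the observed mean for total uncertainty in the mean.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma_me = 0.5                                    # known measurement-error std dev
true_vals = rng.lognormal(mean=0.0, sigma=0.6, size=200)
observed = true_vals + rng.normal(0.0, sigma_me, size=200)   # synthetic observed data

# Variability: the observed spread mixes true variability with measurement error,
# so an error-corrected variance estimate subtracts the known error variance.
var_observed = observed.var(ddof=1)
var_error_free = max(var_observed - sigma_me**2, 0.0)
print(f"observed sd = {np.sqrt(var_observed):.3f}, "
      f"error-corrected sd = {np.sqrt(var_error_free):.3f}, "
      f"true sd = {true_vals.std(ddof=1):.3f}")

# Uncertainty in the mean: bootstrap the observed data. Because the measurement
# errors here have mean zero, this captures total uncertainty in the mean without
# separating the two error sources, mirroring the abstract's key finding.
n_boot = 5000
boot_means = np.array([rng.choice(observed, size=observed.size, replace=True).mean()
                       for _ in range(n_boot)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {observed.mean():.3f}, 95% bootstrap interval = ({lo:.3f}, {hi:.3f})")
```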

15.
Business Method Patents in the Chinese Banking Industry: A Study Based on Patent Map Theory
邱洪华  金泳锋  余翔 《管理学报》2008,5(3):418-422,453
Combining patent maps with patent analysis, this paper discusses the background of business method patents in the Chinese banking industry and conducts an exhaustive search of such patents. From the perspective of patent management maps, it examines the current state of patenting in Chinese banking, the development of business method patents at domestic and foreign-funded banks, changes in the number of business method patent applicants, and the distribution of business method patents between domestic and foreign-funded banks. From the perspective of patent technology maps, it analyzes the technical fields, IPC classifications, and grant status of business method patents in Chinese banking. Finally, it summarizes the current state of development of business method patents in the Chinese banking industry.

16.
This article proposes a methodology for incorporating electrical component failure data into the human error assessment and reduction technique (HEART) for estimating human error probabilities (HEPs). The existing HEART method contains factors known as error-producing conditions (EPCs) that adjust a generic HEP to a more specific situation being assessed. The selection and proportioning of these EPCs are at the discretion of an assessor, and are therefore subject to the assessor's experience and potential bias. This dependence on expert opinion is prevalent in similar HEP assessment techniques used in numerous industrial areas. The proposed method incorporates factors based on observed trends in electrical component failures to produce a revised HEP that can trigger risk mitigation actions more effectively based on the presence of component categories or other hazardous conditions that have a history of failure due to human error. The data used for the additional factors are a result of an analysis of failures of electronic components experienced during system integration and testing at NASA Goddard Space Flight Center. The analysis includes the determination of root failure mechanisms and trend analysis. The major causes of these defects were attributed to electrostatic damage, electrical overstress, mechanical overstress, or thermal overstress. These factors representing user-induced defects are quantified and incorporated into specific hardware factors based on the system's electrical parts list. This proposed methodology is demonstrated with an example comparing the original HEART method and the proposed modified technique.
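The HEART calculation that entry 16 modifies starts from a generic task HEP and scales it by the assessed proportion of each error-producing condition's maximum effect. The sketch below shows that baseline calculation with hypothetical EPC multipliers and assessor proportions; the article's component-failure-based factors would enter as additional multipliers and are not reproduced here.

```python
def heart_hep(generic_hep, epcs):
    """Baseline HEART: HEP = GEP * prod((max_effect - 1) * assessed_proportion + 1).

    `epcs` is a list of (max_effect, assessed_proportion) pairs, where the
    proportion in [0, 1] is the assessor's judgement of how much of the EPC's
    maximum effect applies to the task.
    """
    hep = generic_hep
    for max_effect, proportion in epcs:
        hep *= (max_effect - 1.0) * proportion + 1.0
    return min(hep, 1.0)

# Hypothetical task: generic HEP 0.003 with two EPCs applied
# (values chosen for illustration, not taken from the article).
print(heart_hep(0.003, [(17.0, 0.4), (4.0, 0.5)]))
```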

17.
Under high-frequency data, is there a causal relationship, and an informational lead-lag effect, between the realized volatility of Chinese ETF prices and their tracking errors? Using realized volatility and tracking error calculations, Granger causality tests, and a VAR model, this paper investigates these questions in depth. The results show that the realized volatility of Chinese ETF prices and each of two tracking error measures exhibit a Granger-causal relationship, with tracking error Granger-causing realized volatility; the contemporaneous and first- and second-order lagged correlations between the realized volatility series and the two tracking error series are high, with tracking error lagging behind realized volatility; and when an ETF's tracking error is hit by an external market shock, it produces a shock of the same sign in the ETF's realized volatility, and this shock shows some persistence and lag.
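Entry 17 builds realized volatility from high-frequency ETF returns and tests Granger causality against tracking error. The Chinese ETF data are not available here, so the sketch below uses simulated intraday prices: it computes daily realized volatility as the square root of the sum of squared intraday log returns, forms a toy tracking error series, and runs statsmodels' Granger causality test.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n_days, intraday = 250, 48                      # e.g., 5-minute bars over 4 hours

# Simulated intraday log returns for the ETF and its benchmark index.
etf_ret = rng.normal(0.0, 0.001, size=(n_days, intraday))
index_ret = etf_ret + rng.normal(0.0, 0.0003, size=(n_days, intraday))

# Daily realized volatility: square root of the sum of squared intraday returns.
realized_vol = np.sqrt((etf_ret ** 2).sum(axis=1))

# A simple daily tracking error: std dev of the intraday return differences.
tracking_error = (etf_ret - index_ret).std(axis=1)

# Granger test of "tracking error -> realized volatility" with up to 2 lags.
data = pd.DataFrame({"rv": realized_vol, "te": tracking_error})
grangercausalitytests(data[["rv", "te"]], maxlag=2)
```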

18.
Electricity consumption forecasting has always played a vital role in power system management and planning. Inaccurate prediction may cause wastes of scarce energy resources or electricity shortages. However, forecasting electricity consumption has proven to be a challenging task due to various unstable factors. In particular, China is undergoing a period of economic transition, which heightens this difficulty. This paper proposes a time-varying-weight combining method, the High-order Markov chain based Time-varying Weighted Average (HM-TWA) method, to predict monthly electricity consumption in China. HM-TWA first calculates the in-sample time-varying combining weights by quadratic programming for the individual forecasts. It then predicts the out-of-sample time-varying adaptive weights by extrapolating these in-sample weights with a high-order Markov chain model. Finally, the combined forecasts are obtained. In addition, to ensure that the sample data have the same properties as the required forecasts, a reasonable multi-step-ahead forecasting scheme is designed for HM-TWA. The out-of-sample forecasting performance evaluation shows that HM-TWA outperforms the component models and traditional combining methods, and its effectiveness is further verified by comparison with several other existing models.
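The first step of entry 18, computing in-sample combining weights by quadratic programming, can be illustrated independently of the Markov-chain extrapolation. The sketch below solves, for each in-sample period of a synthetic series, for nonnegative weights summing to one that minimize the squared combination error, using SciPy's SLSQP solver; the high-order Markov chain step and the multi-step scheme from the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
T, M = 36, 3                                   # in-sample months, individual models
actual = 100 + np.cumsum(rng.normal(1.0, 2.0, T))
forecasts = actual[:, None] + rng.normal(0.0, [1.5, 2.5, 3.5], size=(T, M))

def solve_weights(f_row, y):
    """Weights on the simplex minimizing the squared combining error for one period."""
    obj = lambda w: (f_row @ w - y) ** 2
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    res = minimize(obj, np.full(M, 1.0 / M), bounds=[(0.0, 1.0)] * M,
                   constraints=cons, method="SLSQP")
    return res.x

in_sample_weights = np.array([solve_weights(forecasts[t], actual[t]) for t in range(T)])
print("last few in-sample weight vectors:\n", np.round(in_sample_weights[-3:], 3))
```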

19.
The Scenario Tree Method for Decision Analysis and Its Applications
Using a worked example, this paper introduces the scenario tree representation and solution of Bayesian decision problems. In a scenario tree there is no need to compute conditional probabilities; only the joint probability of each path is required. A method for solving scenario trees, the deletion method, is presented, and the scenario tree approach is compared with the decision tree approach.
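The central point of entry 19 is that a scenario tree works directly with the joint probability of each path rather than with conditional probabilities. The sketch below expresses a toy Bayesian decision problem that way: each scenario is a (path, joint probability, payoff-per-action) record, and the best action maximizes the probability-weighted payoff. The paper's deletion method itself is not reproduced.

```python
# Toy scenario tree: each entry is (scenario label, joint path probability,
# payoff of each action under that scenario). Values are illustrative only.
scenarios = [
    ("good market, favourable test",   0.42, {"invest": 120.0, "hold": 20.0}),
    ("good market, unfavourable test", 0.18, {"invest": 120.0, "hold": 20.0}),
    ("bad market, favourable test",    0.08, {"invest": -80.0, "hold": 10.0}),
    ("bad market, unfavourable test",  0.32, {"invest": -80.0, "hold": 10.0}),
]

assert abs(sum(p for _, p, _ in scenarios) - 1.0) < 1e-9

def expected_payoff(action):
    """Probability-weighted payoff using joint path probabilities only."""
    return sum(p * payoffs[action] for _, p, payoffs in scenarios)

actions = ["invest", "hold"]
best = max(actions, key=expected_payoff)
for a in actions:
    print(f"{a}: expected payoff = {expected_payoff(a):.1f}")
print("best action:", best)
```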

20.
A Scenario Simulation Experiment on the Factors Influencing Organizational Commitment
Unlike traditional questionnaire-based research, this study uses scenario simulation experiments to investigate the factors that influence organizational commitment and the effect of organizational commitment on outcome variables such as turnover. The results and the method further deepen the understanding of both organizational commitment and scenario simulation experiments.
