Similar Literature
20 similar records found
1.
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well‐known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates.
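A minimal sketch of the conjugate gamma-Poisson update that underlies full Bayesian inference for a single event rate (the paper's correlated, Bayes-linear machinery goes well beyond this); the prior hyperparameters and observed counts below are illustrative assumptions, not values from the paper:

# Conjugate gamma-Poisson update for a single event rate lambda.
# Prior: lambda ~ Gamma(a, b) (shape a, rate b); data: n events over exposure time t.
# Posterior: lambda | data ~ Gamma(a + n, b + t).
a, b = 2.0, 4.0      # assumed prior: mean a/b = 0.5 events per unit time
n, t = 7, 10.0       # assumed data: 7 events observed over 10 time units

a_post, b_post = a + n, b + t
post_mean = a_post / b_post
post_sd = (a_post / b_post**2) ** 0.5
print(f"posterior mean rate = {post_mean:.3f}, sd = {post_sd:.3f}")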

2.
Estimating structural models is often viewed as computationally difficult, an impression partly due to a focus on the nested fixed‐point (NFXP) approach. We propose a new constrained optimization approach for structural estimation. We show that our approach and the NFXP algorithm solve the same estimation problem, and yield the same estimates. Computationally, our approach can have speed advantages because we do not repeatedly solve the structural equation at each guess of structural parameters. Monte Carlo experiments on the canonical Zurcher bus‐repair model demonstrate that the constrained optimization approach can be significantly faster.
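The constrained-optimization idea can be sketched on a toy one-state exit model: treat the value function as an extra decision variable and impose the Bellman equation as an equality constraint, instead of solving it to a fixed point at every parameter guess as NFXP does. Everything below (model, parameters, data) is a made-up illustration, not the Zurcher model:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
beta = 0.9

# Toy one-state exit model: staying pays theta plus continuation value beta*V,
# exiting pays 0; logit choice shocks imply V = log(1 + exp(theta + beta*V)).
theta_true, V = -1.0, 0.0
for _ in range(200):                       # solve the DP once, only to simulate data
    V = np.log(1 + np.exp(theta_true + beta * V))
stays = rng.random(500) < 1 / (1 + np.exp(-(theta_true + beta * V)))

def negloglik(x):
    theta, V = x
    p = 1 / (1 + np.exp(-(theta + beta * V)))   # probability of observing "stay"
    return -np.sum(np.where(stays, np.log(p), np.log(1 - p)))

# MPEC: optimize over (theta, V) jointly with the Bellman equation as a
# constraint, so there is no inner fixed-point loop per guess of theta.
bellman_eq = {"type": "eq",
              "fun": lambda x: x[1] - np.log(1 + np.exp(x[0] + beta * x[1]))}
res = minimize(negloglik, x0=[0.0, 0.0], bounds=[(-5, 5), (-1, 5)],
               constraints=[bellman_eq], method="SLSQP")
print(f"theta_hat = {res.x[0]:.2f} (true {theta_true})")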

3.
The use of table saws in the United States is associated with approximately 28,000 emergency department (ED) visits and 2,000 cases of finger amputation per year. This article provides a quantitative estimate of the economic benefits of automatic protection systems that could be designed into new table saw products. Benefits are defined as reduced health‐care costs, enhanced production at work, and diminished pain and suffering. The present value of the benefits of automatic protection over the life of the table saw is interpreted as the switch‐point cost value, the maximum investment in automatic protection that can be justified by benefit‐cost comparison. Using two alternative methods for monetizing pain and suffering, the study finds switch‐point cost values of $753 and $561 per saw. These point estimates are sensitive to the values of inputs, especially the average cost of injury. The various switch‐point cost values are substantially higher than rough estimates of the incremental cost of automatic protection systems. Uncertainties and future research needs are discussed.
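The switch-point calculation reduces to discounting an expected annual injury-cost saving over the saw's service life; a sketch with placeholder inputs (injury probability, injury cost, service life, discount rate), not the article's figures:

# Maximum justifiable investment in automatic protection ("switch-point cost"):
# present value of expected injury costs avoided over the saw's life.
p_injury = 0.001        # assumed annual probability of a serious table-saw injury
cost_injury = 50_000    # assumed average societal cost per injury
life_years = 10         # assumed service life of the saw
r = 0.03                # real discount rate

switch_point = sum(p_injury * cost_injury / (1 + r) ** t
                   for t in range(1, life_years + 1))
print(f"switch-point cost = ${switch_point:,.0f} per saw")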

4.
Probabilistic risk analyses often construct multistage chance trees to estimate the joint probability of compound events. If random measurement error is associated with some or all of the estimates, we show that resulting estimates of joint probability may be highly skewed. Joint probability estimates based on the analysis of multistage chance trees are more likely than not to be below the true probability of adverse events, but will sometimes substantially overestimate them. In contexts such as insurance markets for environmental risks, skewed distributions of risk estimates amplify the "winner's curse" so that the estimated risk premium for low-probability events is likely to be lower than the normative value. Skewness may result even in unbiased estimators of expected value from simple lotteries, if measurement error is associated with both the probability and pay-off terms. Further, skewness may occur even if the error associated with these two estimates is symmetrically distributed. Under certain circumstances, skewed estimates of expected value may result in risk-neutral decisionmakers exhibiting a tendency to choose a certainty equivalent over a lottery of equal expected value, or vice versa. We show that when distributions of estimates of expected value are positively skewed, under certain circumstances it will be optimal to choose lotteries with nominal values lower than the value of apparently superior certainty equivalents. Extending the previous work of Goodman (1960), we provide an exact formula for the skewness of products.
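The product-skewness effect is easy to reproduce by Monte Carlo: multiply an unbiased probability estimate by an unbiased payoff estimate, each with symmetric error, and the distribution of the product comes out positively skewed. The parameters below are arbitrary; the exact moments come from Goodman's formula, not from this simulation:

import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Unbiased estimates with symmetric (normal) measurement error; the
# probability estimate is clipped to [0, 1], which affects <1% of draws.
p_hat = np.clip(rng.normal(0.01, 0.004, N), 0.0, 1.0)   # true p = 0.01
x_hat = rng.normal(1000.0, 300.0, N)                    # true payoff = 1000

ev_hat = p_hat * x_hat                                  # estimated expected value
skew = np.mean((ev_hat - ev_hat.mean()) ** 3) / ev_hat.std() ** 3
print(f"mean EV = {ev_hat.mean():.2f} (true 10.00), skewness = {skew:.2f}")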

5.
We examine whether the risk characterization estimated by catastrophic loss projection models is sensitive to the revelation of new information regarding risk type. We use commercial loss projection models from two widely employed modeling firms to estimate the expected hurricane losses of Florida Atlantic University's building stock, both including and excluding secondary information regarding hurricane mitigation features that influence damage vulnerability. We then compare the results of the models without and with this revealed information and find that the revelation of additional, secondary information influences modeled losses for the windstorm‐exposed university building stock, primarily evidenced by meaningful percent differences in the loss exceedance output indicated after secondary modifiers are incorporated in the analysis. Secondary risk characteristics for the data set studied appear to have substantially greater impact on probable maximum loss estimates than on average annual loss estimates. While it may be intuitively expected for catastrophe models to indicate that secondary risk characteristics hold value for reducing modeled losses, the finding that the primary value of secondary risk characteristics is in reduction of losses in the "tail" (low probability, high severity) events is less intuitive, and therefore especially interesting. Further, we address the benefit‐cost tradeoffs that commercial entities must consider when deciding whether to undergo the data collection necessary to include secondary information in modeling. Although we assert the long‐term benefit‐cost tradeoff is positive for virtually every entity, we acknowledge short‐term disincentives to such an effort.

6.
This paper studies two‐sided matching markets with non‐transferable utility when the number of market participants grows large. We consider a model in which each agent has a random preference ordering over individual potential matching partners, and agents' types are only partially observed by the econometrician. We show that in a large market, the inclusive value is a sufficient statistic for an agent's endogenous choice set with respect to the probability of being matched to a spouse of a given observable type. Furthermore, while the number of pairwise stable matchings for a typical realization of random utilities grows at a fast rate as the number of market participants increases, the inclusive values resulting from any stable matching converge to a unique deterministic limit. We can therefore characterize the limiting distribution of the matching market as the unique solution to a fixed‐point condition on the inclusive values. Finally, we analyze identification and estimation of payoff parameters from the asymptotic distribution of observable characteristics at the level of pairs resulting from a stable matching.

7.
Cointegrated bivariate nonstationary time series are considered in a fractional context, without allowance for deterministic trends. Both the observable series and the cointegrating error can be fractional processes. The familiar situation in which the respective integration orders are 1 and 0 is nested, but these values have typically been assumed known. We allow one or more of them to be unknown real values, in which case Robinson and Marinucci (2001, 2003) have justified least squares estimates of the cointegrating vector, as well as narrow‐band frequency‐domain estimates, which may be less biased. While consistent, these estimates do not always have optimal convergence rates, and they have nonstandard limit distributional behavior. We consider estimates formulated in the frequency domain, that consequently allow for a wide variety of (parametric) autocorrelation in the short memory input series, as well as time‐domain estimates based on autoregressive transformation. Both can be interpreted as approximating generalized least squares and Gaussian maximum likelihood estimates. The estimates share the same limiting distribution, having mixed normal asymptotics (yielding Wald test statistics with χ2 null limit distributions), irrespective of whether the integration orders are known or unknown, subject in the latter case to their estimation with adequate rates of convergence. The parameters describing the short memory stationary input series are √n‐consistently estimable, but the assumptions imposed on these series are much more general than ones of autoregressive moving average type. A Monte Carlo study of finite‐sample performance is included.

8.
Utility systems such as power and communication systems regularly experience significant damage and loss of service during hurricanes. A primary damage mode for these systems is failure of wooden utility poles that support conductors and communication lines. In this article, we present an approach for combining structural reliability models for utility poles with observed data on pole performance during past hurricanes. This approach, based on Bayesian updating, starts from an imperfect but informative prior and updates this prior with observed performance data. We consider flexural and foundation failure mechanisms in the prior, acknowledging that these are an incomplete, but still informative, subset of the possible failure mechanisms for utility poles during hurricanes. We show how a model‐based prior can be updated with observed failure data, using pole failure data from Hurricane Katrina as a case study. The results of this integration of model‐based estimates and observed performance data then offer a more informative starting point for power system performance estimation for hurricane conditions.
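As a stripped-down stand-in for the updating step, suppose the structural reliability prior can be summarized as a beta distribution over the per-pole failure probability; one hurricane's failure counts then update it conjugately. The counts below are hypothetical, not the Katrina data:

# Beta-binomial update of a model-based pole fragility prior with observed failures.
a, b = 4.0, 46.0           # assumed prior: ~8% failure probability, weight of ~50 poles

failed, total = 120, 1000  # hypothetical observed pole performance in one hurricane

a_post, b_post = a + failed, b + (total - failed)
print(f"prior mean = {a / (a + b):.3f}, "
      f"posterior mean = {a_post / (a_post + b_post):.3f}")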

9.
Variability arises due to differences in the value of a quantity among different members of a population. Uncertainty arises due to lack of knowledge regarding the true value of a quantity for a given member of a population. We describe and evaluate two methods for quantifying both variability and uncertainty. These methods, bootstrap simulation and a likelihood-based method, are applied to three datasets. The datasets include a synthetic sample of 19 values from a Lognormal distribution, a sample of nine values obtained from measurements of the PCB concentration in leafy produce, and a sample of five values for the partitioning of chromium in the flue gas desulfurization system of coal-fired power plants. For each of these datasets, we employ the two methods to characterize uncertainty in the arithmetic mean and standard deviation, cumulative distribution functions based upon fitted parametric distributions, the 95th percentile of variability, and the 63rd percentile of uncertainty for the 81st percentile of variability. The latter is intended to show that it is possible to describe any point within the uncertain frequency distribution by specifying an uncertainty percentile and a variability percentile. Using the bootstrap method, we compare results based upon use of the method of matching moments and the method of maximum likelihood for fitting distributions to data. Our results indicate that with only 5–19 data points as in the datasets we have evaluated, there is substantial uncertainty based upon random sampling error. Both the bootstrap and likelihood-based approaches yield comparable uncertainty estimates in most cases.
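A compact sketch of the bootstrap half of the comparison: resample a small positive-valued dataset, refit a lognormal each time, and read off sampling uncertainty in the mean and in the 95th percentile of variability. The data are synthetic and the fit uses moments of the log-data; the paper compares this against a likelihood-based method:

import numpy as np

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=1.0, size=19)    # synthetic sample of 19 values

B = 5000
means, p95s = [], []
for _ in range(B):
    boot = rng.choice(data, size=data.size, replace=True)
    mu, sigma = np.log(boot).mean(), np.log(boot).std(ddof=1)  # refit lognormal
    means.append(np.exp(mu + sigma**2 / 2))    # arithmetic mean of fitted distribution
    p95s.append(np.exp(mu + 1.645 * sigma))    # 95th percentile of variability

print("mean: 90% CI =", np.round(np.percentile(means, [5, 95]), 2))
print("95th pct of variability: 90% CI =", np.round(np.percentile(p95s, [5, 95]), 2))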

10.
We estimate the country-level risk of extreme wildfires defined by burned area (BA) for Mediterranean Europe and carry out a cross-country comparison. To this end, we draw on the European Forest Fire Information System (EFFIS) geospatial data from 2006 to 2019 to perform an extreme value analysis. More specifically, we apply a point process characterization of wildfire extremes using maximum likelihood estimation. By modeling covariates, we also evaluate potential trends and correlations with commonly known factors that drive or affect wildfire occurrence, such as the Fire Weather Index as a proxy for meteorological conditions, population density, land cover type, and seasonality. We find that the highest risk of extreme wildfires is in Portugal (PT), followed by Greece (GR), Spain (ES), and Italy (IT), with 10-year BA return levels of 50,338 ha, 33,242 ha, 25,165 ha, and 8,966 ha, respectively. Coupling our results with existing estimates of the monetary impact of large wildfires suggests expected losses of 162–439 million € (PT), 81–219 million € (ES), 41–290 million € (GR), and 18–78 million € (IT) for such 10-year return period events.

SUMMARY

We model the risk of extreme wildfires for Italy, Greece, Portugal, and Spain in the form of burned-area return levels, compare them, and estimate expected losses.
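The return-level idea can be sketched with a block-maxima GEV fit (the paper's point-process formulation with covariates is more general); the burned-area maxima below are synthetic stand-ins for the EFFIS data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
annual_max_ba = rng.gumbel(loc=8000, scale=6000, size=14)  # synthetic annual maxima (ha)

# Fit a GEV to the annual maxima and invert for the 10-year return level:
# the burned area exceeded with probability 1/10 in any given year.
shape, loc, scale = stats.genextreme.fit(annual_max_ba)
rl_10 = stats.genextreme.ppf(1 - 1 / 10, shape, loc=loc, scale=scale)
print(f"10-year burned-area return level ≈ {rl_10:,.0f} ha")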

11.
This paper applies parameter-stability tests to study the dynamic path of China's inflation rate and finds that the inflation series exhibits pronounced structural breaks. Using least squares estimation that allows for break points, we obtain point and interval estimates of the structural break dates in China's inflation rate. Combining these results with the facts of China's macroeconomic performance, we analyze and characterize the inflation dynamics under structural change and precisely identify the two high-inflation episodes since 1984.
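The break-date estimator described here is least squares over candidate break points; a minimal single-break sketch on a synthetic series (not the Chinese inflation data):

import numpy as np

rng = np.random.default_rng(5)
# Synthetic "inflation" series with one mean shift at t = 60.
y = np.concatenate([rng.normal(2.0, 1.0, 60), rng.normal(8.0, 1.0, 40)])

def ssr(segment):
    return np.sum((segment - segment.mean()) ** 2)

# Least-squares break date: the split minimizing the total sum of squared residuals.
candidates = range(10, len(y) - 10)            # trim the sample edges
tau_hat = min(candidates, key=lambda t: ssr(y[:t]) + ssr(y[t:]))
print(f"estimated break date: t = {tau_hat} (true 60)")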

12.
David G. Hudak, Risk Analysis, 1994, 14(6): 1025-1031
The common problem in risk analysis of correctly specifying a probability distribution for an estimate when few data are available is examined. In the absence of data, experts are sometimes asked to give the lowest and highest conceivable estimates. Triangular distributions are well suited for these situations, when only a low, high, and most likely estimate are given. A problem, however, arises from the failure to adjust for biases when estimating extreme values. Various types of biases, which narrow the range of extreme estimates, are explored. A method is suggested for accounting for these biases by placing extreme estimates at specified percentile points rather than at the endpoints of a triangular distribution. Since most Monte Carlo models require the endpoints of a triangular distribution, a closed-form expression for identifying the endpoints given two percentile points and a most likely point is derived. This method has been used extensively in developing cost risk estimates for the Ballistic Missile Defense Organization (BMDO).
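The inversion can also be done numerically, which shows the idea without the closed form: interpret the expert's low and high values as, say, the 5th and 95th percentiles, then solve for the triangular endpoints a Monte Carlo tool needs. All numbers below are assumed for illustration:

import numpy as np
from scipy.optimize import fsolve

def triangular_cdf(x, lo, mode, hi):
    # CDF of a triangular distribution with endpoints lo, hi and mode in between.
    if x <= mode:
        return (x - lo) ** 2 / ((hi - lo) * (mode - lo))
    return 1 - (hi - x) ** 2 / ((hi - lo) * (hi - mode))

# Assumed elicitation: most likely 100; expert extremes 80 and 150 treated as
# the 5th and 95th percentiles rather than as absolute endpoints.
mode, x05, x95 = 100.0, 80.0, 150.0

def equations(params):
    lo, hi = params
    return [triangular_cdf(x05, lo, mode, hi) - 0.05,
            triangular_cdf(x95, lo, mode, hi) - 0.95]

lo, hi = fsolve(equations, x0=[x05 - 10.0, x95 + 10.0])
print(f"endpoints for the Monte Carlo input: lo = {lo:.1f}, hi = {hi:.1f}")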

13.
E. S. Levine, Risk Analysis, 2012, 32(2): 294-303
Many analyses conducted to inform security decisions depend on estimates of the conditional probabilities of different attack alternatives. These probabilities are difficult to estimate since analysts have limited access to the adversary and limited knowledge of the adversary's utility function, so subject matter experts often provide the estimates through direct elicitation. In this article, we describe a method of using uncertainty in utility function value tradeoffs to model the adversary's decision process and solve for the conditional probabilities of different attacks in closed form. The conditional probabilities are suitable for use as inputs to probabilistic risk assessments and other decision support techniques. The process we describe is an extension of value‐focused thinking and is broadly applicable, including in general business decision making. We demonstrate the use of this technique with simple examples.
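A toy version of the idea, assuming a two-attribute additive value model for the adversary with one uncertain tradeoff weight: each attack alternative scores on the attributes, the weight is drawn from an assumed range, and the conditional probability of an attack is the chance that it maximizes the adversary's value. The paper obtains these probabilities in closed form; the sketch below does it by simulation with invented scores:

import numpy as np

rng = np.random.default_rng(3)

# Assumed adversary scores (0-1 scale) for three attack alternatives on two
# attributes: column 0 = expected impact, column 1 = probability of success.
scores = np.array([[0.9, 0.2],
                   [0.5, 0.6],
                   [0.1, 0.9]])

# Uncertain weight on impact, assumed Uniform(0.3, 0.8); weight on success is 1 - w.
w = rng.uniform(0.3, 0.8, size=100_000)
values = np.outer(w, scores[:, 0]) + np.outer(1 - w, scores[:, 1])
best = values.argmax(axis=1)                 # which alternative maximizes value

probs = np.bincount(best, minlength=3) / best.size
print("conditional attack probabilities:", np.round(probs, 3))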

14.
The history of polio vaccination in the United States spans 50 years and includes different phases of the disease, multiple vaccines, and a sustained significant commitment of resources. We estimated cost-effectiveness ratios and assessed the net benefits of polio vaccination applicable at various points in time from the societal perspective, and discounted these back to appropriate points in time. We reconstructed vaccine price data from available sources and used these to retrospectively estimate the total costs of the U.S. historical polio vaccination strategies (all costs reported in year 2002 dollars). We estimate that the United States invested approximately $35 billion (1955 net present value, discount rate of 3%) in polio vaccines between 1955 and 2005 and will invest approximately $1.4 billion (1955 net present value, or $6.3 billion in 2006 net present value) between 2006 and 2015, assuming a policy of continued use of inactivated poliovirus vaccine (IPV) for routine vaccination. The historical and future investments translate into over 1.7 billion vaccinations that prevent approximately 1.1 million cases of paralytic polio and over 160,000 deaths (1955 net present values of approximately 480,000 cases and 73,000 deaths). Due to treatment cost savings, the investment implies net benefits of approximately $180 billion (1955 net present value), even without incorporating the intangible costs of suffering and death and of averted fear. Retrospectively, the U.S. investment in polio vaccination represents a highly valuable, cost-saving public health program. Observed changes in the cost-effectiveness ratio estimates over time suggest the need for living economic models for interventions that appropriately change with time. This article also demonstrates that estimates of cost-effectiveness ratios at any single time point may fail to adequately consider the context of the investment made to date and the importance of population and other dynamics, and shows the importance of dynamic modeling.
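The accounting backbone is ordinary net-present-value arithmetic at a 3% rate with a 1955 base year; a sketch with placeholder cash flows, not the article's reconstructed series:

# Discount a stream of annual net benefits (year 2002 dollars) to a 1955 base year.
r, base_year = 0.03, 1955

# Placeholder (year, net benefit) pairs standing in for vaccination costs
# (negative) and averted treatment costs (positive).
cash_flows = [(1955, -2.0e9), (1960, 1.5e9), (1970, 4.0e9), (1990, 6.0e9)]

npv_1955 = sum(cf / (1 + r) ** (year - base_year) for year, cf in cash_flows)
print(f"net present value (1955) = ${npv_1955 / 1e9:,.2f} billion")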

15.
This study investigates whether board characteristics affect the value relevance of fair value estimates in financial firms under International Financial Reporting Standard (IFRS) 13. Specifically, it focuses on whether better and more efficient monitoring of managers after the adoption of this new regulation affects the information quality of fair values. IFRS 13 requires firms to disclose a fair value hierarchy containing three levels: Level 1 (quoted prices in active markets), Level 2 (inputs other than quoted prices that are observable either directly or indirectly), and Level 3 (unobservable inputs generated by entities). The results indicate that, in a post-IFRS 13 era, board independence and gender diversity have a positive effect on the value relevance of fair value estimates (Level 3). In addition, firms with larger boards have lower information quality of firm-generated fair value estimates. Moreover, initial analysis shows that all fair values are value relevant to investors and that the adoption of IFRS 13 has blurred the lines between the three levels in the fair value hierarchy. Hence, IFRS 13 has successfully reduced the information asymmetry related to fair value estimates.

16.
Ethylene oxide (EO) has been identified as a carcinogen in laboratory animals. Although the precise mechanism of action is not known, tumors in animals exposed to EO are presumed to result from its genotoxicity. The overall weight of evidence for carcinogenicity from a large body of epidemiological data in the published literature remains limited. There is some evidence for an association between EO exposure and lympho/hematopoietic cancer mortality. Of these cancers, the evidence provided by two large cohorts with the longest follow-up is most consistent for leukemia. Together with what is known about human leukemia and EO at the molecular level, there is a body of evidence that supports a plausible mode of action for EO as a potential leukemogen. Based on a consideration of the mode of action, the events leading from EO exposure to the development of leukemia (and therefore risk) are expected to be proportional to the square of the dose. In support of this hypothesis, a quadratic dose-response model provided the best overall fit to the epidemiology data in the range of observation. Cancer dose-response assessments based on human and animal data are presented using three different assumptions for extrapolating to low doses: (1) risk is linearly proportionate to dose; (2) there is no appreciable risk at low doses (margin-of-exposure or reference dose approach); and (3) risk below the point of departure continues to be proportionate to the square of the dose. The weight of evidence for EO supports the use of a nonlinear assessment. Therefore, exposures to concentrations below 37 μg/m3 are not likely to pose an appreciable risk of leukemia in human populations. However, if quantitative estimates of risk at low doses are desired and the mode of action for EO is considered, these risks are best quantified using the quadratic estimates of cancer potency, which are approximately 3.2- to 32-fold lower, using alternative points of departure, than the linear estimates of cancer potency for EO. An approach is described for linking the selection of an appropriate point of departure to the confidence in the proposed mode of action. Despite high confidence in the proposed mode of action, a small linear component for the dose-response relationship at low concentrations cannot be ruled out conclusively. Accordingly, a unit risk value of 4.5 × 10−8 (μg/m3)−1 was derived for EO, with a range of unit risk values of 1.4 × 10−8 to 1.4 × 10−7 (μg/m3)−1 reflecting the uncertainty associated with a theoretical linear term at low concentrations.
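The practical gap between the linear and quadratic low-dose assumptions is easy to see: anchor both at the same point of departure and compare predicted risks at an ambient concentration. The POD and risk values below are illustrative, not the assessment's fitted numbers:

# Linear vs. quadratic extrapolation below an assumed point of departure (POD).
pod_conc = 1000.0   # illustrative POD concentration (ug/m3)
pod_risk = 1e-2     # illustrative extra risk at the POD

def risk_linear(c):
    return pod_risk * (c / pod_conc)          # risk proportional to dose

def risk_quadratic(c):
    return pod_risk * (c / pod_conc) ** 2     # risk proportional to dose squared

c = 37.0  # ug/m3, the concentration discussed in the abstract
print(f"linear: {risk_linear(c):.1e}, quadratic: {risk_quadratic(c):.1e} "
      f"({risk_linear(c) / risk_quadratic(c):.0f}-fold lower)")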

17.
In econometrics, models stated as conditional moment restrictions are typically estimated by means of the generalized method of moments (GMM). The GMM estimation procedure can render inconsistent estimates since the number of arbitrarily chosen instruments is finite. In fact, consistency of the GMM estimators relies on additional assumptions that imply unclear restrictions on the data generating process. This article introduces a new, simple and consistent estimation procedure for these models that is directly based on the definition of the conditional moments. The main feature of our procedure is its simplicity, since its implementation does not require the selection of any user‐chosen number, and statistical inference is straightforward since the proposed estimator is asymptotically normal. In addition, we suggest an asymptotically efficient estimator constructed by carrying out one Newton–Raphson step in the direction of the efficient GMM estimator.

18.
Mu Yinping, 《管理学报》 (Chinese Journal of Management), 2011, 8(6): 885-889
This paper studies the joint decision problem of product pricing and coupon face value when a firm issues cross-coupons. Building a profit-maximization model, it analyzes the optimal face value and product price for the two most common types of cross-coupons: in-pack and on-pack. Standard price-optimization methods show that when on-pack coupons are issued, both the coupon face values (for the carrier brand and the target brand) and the product prices are higher than when in-pack coupons are issued, and that issuing on-pack cross-coupons yields higher profit for the firm. Finally, numerical experiments in MATLAB provide sensitivity analyses of coupon face value, product pricing, and firm profit, showing that on-pack cross-coupons are more sensitive to changes in the reference price than in-pack cross-coupons.
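The joint price/coupon decision can be illustrated with a toy segmentation model in which only price-sensitive buyers redeem the coupon; this is a hypothetical stand-in, not the paper's model of in-pack versus on-pack cross-coupons:

import numpy as np

# Toy joint optimization of price p and coupon face value c with two segments.
def profit(p, c, cost=2.0):
    d_loyal = np.clip(10 - 1.0 * p, 0, None)         # buyers who ignore coupons
    d_deal = np.clip(10 - 2.0 * (p - c), 0, None)    # coupon redeemers pay p - c
    return d_loyal * (p - cost) + d_deal * (p - c - cost)

ps, cs = np.linspace(2, 8, 301), np.linspace(0, 4, 201)
P, C = np.meshgrid(ps, cs)
i = np.unravel_index(np.argmax(profit(P, C)), P.shape)
print(f"optimal price ≈ {P[i]:.2f}, coupon face value ≈ {C[i]:.2f}")  # ≈ 6.00, 2.50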

19.
The Texas Commission on Environmental Quality (TCEQ) has developed an inhalation unit risk factor (URF) for 1,3-butadiene based on leukemia mortality in an updated epidemiological study on styrene-butadiene rubber production workers conducted by researchers at the University of Alabama at Birmingham. Exposure estimates were updated and an exposure estimate validation study as well as dose-response modeling were conducted by these researchers. This information was not available to the U.S. Environmental Protection Agency when it prepared its health assessment of 1,3-butadiene in 2002. An extensive analysis conducted by TCEQ discusses dose-response modeling, estimating risk for the general population from occupational workers, estimating risk for potentially sensitive subpopulations, effect of occupational exposure estimation error, and use of mortality rates to predict incidence. The URF is 5.0 × 10−7 per μg/m3 or 1.1 × 10−6 per ppb and is based on a Cox regression dose-response model using restricted continuous data with age as a covariate, and a linear low-dose extrapolation default approach using the 95% lower confidence limit as the point of departure. Age-dependent adjustment factors were applied to account for possible increased susceptibility for early life exposure. The air concentration at 1 in 100,000 excess leukemia mortality, the no-significant-risk level, is 20 μg/m3 (9.1 ppb), which is slightly lower than the TCEQ chronic reference value of 33 μg/m3 (15 ppb) protective of ovarian atrophy. These values will be used to evaluate ambient air monitoring data so the general public is protected against adverse health effects from chronic exposure to 1,3-butadiene.
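The link between the URF and the no-significant-risk level is direct under linear low-dose extrapolation (risk = URF × concentration) and can be checked with the abstract's own numbers:

urf = 5.0e-7          # per ug/m3, from the abstract
target_risk = 1e-5    # 1 in 100,000 excess leukemia mortality
print(f"no-significant-risk level = {target_risk / urf:.0f} ug/m3")  # -> 20 ug/m3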

20.
We propose a new methodology for structural estimation of infinite horizon dynamic discrete choice models. We combine the dynamic programming (DP) solution algorithm with the Bayesian Markov chain Monte Carlo algorithm into a single algorithm that solves the DP problem and estimates the parameters simultaneously. As a result, the computational burden of estimating a dynamic model becomes comparable to that of a static model. Another feature of our algorithm is that even though the number of grid points on the state variable is small per solution‐estimation iteration, the number of effective grid points increases with the number of estimation iterations. This is how we help ease the "curse of dimensionality." We simulate and estimate several versions of a simple model of entry and exit to illustrate our methodology. We also prove that under standard conditions, the parameters converge in probability to the true posterior distribution, regardless of the starting values.
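A crude caricature of the solve-while-sampling idea on the same toy one-state exit model used in the sketch under item 2: each Metropolis-Hastings iteration performs a single Bellman update at the current draw instead of solving the DP to convergence. Model, data, and tuning are all invented for illustration:

import numpy as np

rng = np.random.default_rng(4)
beta = 0.9

def bellman(V, theta):
    # One Bellman update for the exit model: stay pays theta + beta*V, exit pays 0.
    return np.log(1.0 + np.exp(theta + beta * V))

def p_stay(V, theta):
    return 1.0 / (1.0 + np.exp(-(theta + beta * V)))

# Synthetic data generated at the true parameter (DP solved fully only here).
theta_true, V = -1.0, 0.0
for _ in range(200):
    V = bellman(V, theta_true)
stays = rng.random(500) < p_stay(V, theta_true)

def loglik(theta, V):
    p = p_stay(V, theta)
    return np.sum(np.where(stays, np.log(p), np.log(1 - p)))

# Metropolis-Hastings with ONE Bellman step per iteration (flat prior implied).
theta, V, draws = 0.0, 0.0, []
for it in range(5000):
    V = bellman(V, theta)                   # single DP step at the current parameter
    proposal = theta + 0.1 * rng.normal()
    if np.log(rng.random()) < loglik(proposal, V) - loglik(theta, V):
        theta = proposal
    draws.append(theta)

print(f"posterior mean theta ≈ {np.mean(draws[1000:]):.2f} (true {theta_true})")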
