Similar Literature (20 results)
1.
Supplier reluctance to openly advertise highly discounted products on the Internet has stimulated development of “opaque” Name‐Your‐Own‐Price sales channels. Unfortunately (for suppliers), there is significant potential for online consumers to exploit these channels through collaboration in social networks. In this paper, we study three possible forms of consumer collaboration: exchange of bid result information, coordinated bidding, and coordinated bidding with risk pooling. We propose an egalitarian total utility maximizing mechanism for coordination and risk pooling in a bidding club and describe characteristics of consumers for whom participation in the club makes sense. We show that, in the absence of risk pooling, a plausible bidding club strategy using just information exchange gives almost the same benefits to consumers as coordinated bidding. In contrast, coordinated bidding with risk pooling can lead to significantly increased benefits for consumers. The benefits of risk pooling are highest for consumers with a low tolerance for risk. We also demonstrate that suppliers that actively adjust for such strategic consumer behavior can reduce the impact on their businesses and, under some circumstances, even increase revenues.
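A minimal Monte Carlo sketch of why risk pooling helps risk‐averse bidders, assuming a hypothetical uniform acceptance threshold and CARA utility (neither is from the paper, whose egalitarian mechanism is richer):

```python
import numpy as np

rng = np.random.default_rng(0)

def surplus(bid, valuation, n_trials, lo=60.0, hi=100.0):
    """Surplus from one NYOP bid against a hidden Uniform(lo, hi) threshold."""
    threshold = rng.uniform(lo, hi, n_trials)
    return np.where(bid >= threshold, valuation - bid, 0.0)

def cara_ce(payoffs, a=0.05):
    """Certainty equivalent under CARA utility: CE = -ln(E[exp(-a X)]) / a."""
    return -np.log(np.mean(np.exp(-a * payoffs))) / a

v, b, k = 95.0, 80.0, 10          # valuation, bid, club size (all hypothetical)
solo = surplus(b, v, 100_000)
# Risk pooling: k members bid independently and split total surplus equally.
pooled = surplus(b, v, 100_000 * k).reshape(k, -1).mean(axis=0)

print(f"mean surplus: solo {solo.mean():.2f}, pooled {pooled.mean():.2f}")
print(f"CARA CE:      solo {cara_ce(solo):.2f}, pooled {cara_ce(pooled):.2f}")
```

Pooling leaves the mean surplus unchanged but raises the certainty equivalent, which is why the benefit is largest for the least risk‐tolerant members.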

2.
3.
Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson‐distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional “single‐hit” dose‐response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose‐response models in terms of probability generating functions. It is shown formally that the theoretical single‐hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single‐hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single‐hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose‐response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose‐response assessment as well as practical risk characterization are discussed.
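The PGF formulation makes the comparison a two‐liner: the single‐hit risk is 1 − G(1 − r), where G is the probability generating function of the dose distribution and r the per‐pathogen infection probability. A sketch using the negative binomial as a stand‐in for a clustered (stuttering Poisson) dose:

```python
import numpy as np

def poisson_single_hit(mu, r):
    """Single-hit risk 1 - G(1-r) for Poisson doses: G(s) = exp(mu*(s-1))."""
    return 1.0 - np.exp(-mu * r)

def negbin_single_hit(mu, r, k):
    """Negative binomial doses (a stuttering-Poisson special case),
    G(s) = (1 + mu*(1-s)/k)**(-k); smaller k means stronger clustering."""
    return 1.0 - (1.0 + mu * r / k) ** (-k)

r = 0.1                          # per-pathogen infection probability (illustrative)
for mu in (1.0, 10.0, 100.0):
    p_pois = poisson_single_hit(mu, r)
    p_nb = negbin_single_hit(mu, r, k=0.5)   # heavy clustering
    print(f"mean dose {mu:6.1f}:  Poisson {p_pois:.4f}  clustered {p_nb:.4f}")
```

At identical mean dose the clustered risk is always the lower of the two, consistent with the formal result above.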

4.
This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value‐at‐Risk) have been the cornerstone of banking risk management since the mid‐1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of “model risk” in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark that represents the state of knowledge of the authority responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value‐at‐Risk model risk and compute the required regulatory capital add‐on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value‐at‐Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks.
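The quantile‐adjustment idea can be illustrated in a few lines. This sketch is not the authors' framework: it simply takes a Gaussian VaR as the candidate model, an empirical quantile as the benchmark, and reports their difference as a capital add‐on; the P&L series is simulated, not the bank data used in the paper:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
# Hypothetical heavy-tailed daily P&L; the paper uses a large bank's series.
pnl = rng.standard_t(df=4, size=1000) * 1e6

alpha = 0.01                                # 99% one-day VaR
# Candidate model: Gaussian VaR from sample moments.
var_model = -(pnl.mean() + pnl.std(ddof=1) * norm.ppf(alpha))
# Benchmark: empirical quantile, standing in for the authority's
# "state of knowledge" benchmark model.
var_bench = -np.quantile(pnl, alpha)

addon = max(var_bench - var_model, 0.0)     # model-risk capital add-on
print(f"model VaR {var_model:,.0f}, benchmark VaR {var_bench:,.0f}, "
      f"add-on {addon:,.0f}")
```

With fat‐tailed P&L the Gaussian model understates the 99% quantile, so the add‐on is positive.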

5.
We analyze the benefits of inventory pooling in a multi‐location newsvendor framework. Using a number of common demand distributions, as well as the distribution‐free approximation, we compare the centralized (pooled) system with the decentralized (non‐pooled) system. We investigate the sensitivity of the absolute and relative reduction in costs to the variability of demand and to the number of locations (facilities) being pooled. We show that for the distributions considered, the absolute benefit of risk pooling increases with variability, and the relative benefit stays fairly constant, as long as the coefficient of variation of demand stays in the low range. However, under high‐variability conditions, both measures decrease to zero as the demand variability is increased. We show, through analytical results and computational experiments, that these effects are due to the different operating regimes exhibited by the system under different levels of variability: as the variability is increased, the system switches from the normal operation to the effective and then complete shutdown regimes; the decrease in the benefits of risk pooling is associated with the two latter stages. The centralization allows the system to remain in the normal operation regime under higher levels of variability compared to the decentralized system.
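For Normal demand the pooling effect follows from the closed‐form newsvendor cost. A sketch assuming i.i.d. Normal demands and hypothetical cost parameters (valid in the low coefficient‐of‐variation regime the abstract calls normal operation, where the Normal model does not truncate at zero):

```python
from math import sqrt
from scipy.stats import norm

def newsvendor_cost(sigma, cu=4.0, co=1.0):
    """Optimal expected overage + underage cost under Normal demand:
    C* = (cu + co) * phi(z*) * sigma, with z* = Phi^{-1}(cu / (cu + co))."""
    z_star = norm.ppf(cu / (cu + co))
    return (cu + co) * norm.pdf(z_star) * sigma

n, sigma = 4, 20.0               # hypothetical: 4 locations, i.i.d. demands
decentral = n * newsvendor_cost(sigma)
central = newsvendor_cost(sigma * sqrt(n))   # pooled std dev = sigma * sqrt(n)

print(f"decentralized {decentral:.1f}, centralized {central:.1f}")
print(f"absolute benefit {decentral - central:.1f}, "
      f"relative benefit {1 - central / decentral:.1%}")   # = 1 - 1/sqrt(n)
```

The absolute benefit scales with σ while the relative benefit is pinned at 1 − 1/√n, matching the low‐variability findings above.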

6.
Major accident risks posed by chemical hazards have raised serious social concerns in today's China. Land‐use planning has been adopted by many countries as one of the essential elements of accident prevention. This article proposes a method to assess major accident risks in support of land‐use planning in the vicinity of chemical installations. The method is based on the definition of risk in the Accidental Risk Assessment Methodology for IndustrieS (ARAMIS) project and extends the application of its severity and vulnerability assessment tools. The severity and vulnerability indexes from the ARAMIS methodology are employed to assess the severity and vulnerability levels, respectively. A risk matrix is devised to support risk ranking and compatibility checking. The method consists of four main steps, and its results are presented as geographical‐information‐system‐based maps. As an illustration, the proposed method is applied to the Dagushan Peninsula, China. The case study indicates that the method can not only aid risk regulation of existing land use, but also support future land‐use planning by offering alternatives or influencing plans at the development stage, thus further enhancing the role of land‐use planning in accident prevention activities in China.
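A toy version of the compatibility check at the heart of such a method, with a hypothetical 4 × 4 matrix; the labels and cutoffs are illustrative, not the ARAMIS definitions:

```python
# Hypothetical risk matrix crossing an ARAMIS-style severity level (rows,
# low -> high) with a vulnerability level (columns, low -> high).
RISK_MATRIX = [
    ["compatible", "compatible",   "tolerable",    "tolerable"],
    ["compatible", "tolerable",    "tolerable",    "incompatible"],
    ["tolerable",  "tolerable",    "incompatible", "incompatible"],
    ["tolerable",  "incompatible", "incompatible", "incompatible"],
]

def land_use_compatibility(severity: int, vulnerability: int) -> str:
    """Rank one map cell; levels are 1..4 as in a semiquantitative matrix."""
    return RISK_MATRIX[severity - 1][vulnerability - 1]

print(land_use_compatibility(severity=3, vulnerability=2))  # -> 'tolerable'
```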

7.
The devastating impact of Hurricane Sandy (2012) showed again that New York City (NYC) is one of the cities most vulnerable to coastal flooding in the world. The low‐lying areas of NYC can be flooded by nor'easter storms and North Atlantic hurricanes. The few studies that have estimated potential flood damage for NYC base their damage estimates on only a single, or a few, possible flood events. The objective of this study is to assess the full distribution of hurricane flood risk in NYC. This is done by calculating potential flood damage with a flood damage model that uses many possible storms and surge heights as input. These storms are representative of the low‐probability/high‐impact flood hazard faced by the city. Exceedance probability‐loss curves are constructed under different assumptions about the severity of flood damage. The estimated flood damage to buildings for NYC is between US$59 million and US$129 million per year. The damage caused by a 1/100‐year storm surge is within a range of US$2 bn–5 bn, while that caused by a 1/500‐year storm surge is between US$5 bn and US$11 bn. An analysis of flood risk in each of the five boroughs of NYC finds that Brooklyn and Queens are the most vulnerable to flooding. This study examines several uncertainties in the various steps of the risk analysis, which result in variations in the flood damage estimates. These uncertainties include the interpolation of flood depths, the use of different flood damage curves, and the influence of the spectrum of characteristics of the simulated hurricanes.
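The expected annual damage implicit in an exceedance probability–loss curve is its integral over probability. A crude three‐point sketch using midpoints of the ranges above plus one assumed frequent‐event point (the study itself integrates over many simulated storms, so this will not reproduce its US$59–129 million/year range):

```python
import numpy as np

# Hypothetical points on the exceedance probability-loss curve: midpoints
# of the reported 1/500- and 1/100-year ranges plus an assumed 1/10-year event.
p = np.array([1 / 500, 1 / 100, 1 / 10])      # annual exceedance probability
loss = np.array([8.0, 3.5, 0.1])              # damage in US$ bn

# Expected annual damage = trapezoidal area under the curve over probability.
ead = np.sum(np.diff(p) * (loss[1:] + loss[:-1]) / 2)
print(f"expected annual damage ~ US${ead * 1000:.0f} million/year")
```

A denser curve, with zero damage assigned to frequent storms, pulls this rough estimate down toward the study's range.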

8.
9.
The use of autonomous underwater vehicles (AUVs) for various scientific, commercial, and military applications has become more common with maturing technology and improved accessibility. One relatively new development lies in the use of AUVs for under‐ice marine science research in the Antarctic. The extreme environment, ice cover, and inaccessibility, as compared to open‐water missions, can result in a higher risk of loss. Therefore, an effective assessment of risks before undertaking any Antarctic under‐ice mission is crucial to ensure an AUV's survival. Existing risk assessment approaches have focused predominantly on the use of an AUV's historical fault log data and on elicitation of experts' opinions for probabilistic quantification. However, an AUV program in its early phases lacks historical data, and any assessment of risk may be vague and ambiguous. In this article, a fuzzy‐based risk assessment framework is proposed for quantifying the risk of AUV loss under ice. The framework uses the knowledge and prior experience of available subject matter experts and the widely used semiquantitative risk assessment matrix, albeit in a new form. A well‐developed example based on an upcoming mission by an ISE‐explorer class AUV is presented to demonstrate the application and effectiveness of the proposed framework. The example demonstrates that the proposed fuzzy‐based risk assessment framework is pragmatically useful for future under‐ice AUV deployments. Sensitivity analysis demonstrates the validity of the proposed method.
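A minimal sketch of the fuzzy machinery such a framework typically rests on: triangular membership functions, Mamdani‐style max–min inference over a likelihood × severity rule table, and weighted‐average defuzzification. The sets, rules, and scores are illustrative, not the article's calibration:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

# Hypothetical fuzzy sets over a 0-10 expert score.
SETS = {"low": (-5, 0, 5), "medium": (0, 5, 10), "high": (5, 10, 15)}

def fuzzify(score):
    return {name: tri(score, *abc) for name, abc in SETS.items()}

likelihood = fuzzify(6.5)     # expert judgment: likelihood of AUV loss
severity = fuzzify(8.0)       # expert judgment: severity of the loss

# Max-min inference onto risk ranks 1 (low) .. 3 (high), then
# defuzzification by the weighted average of fired ranks.
rules = {1: ("low", "low"), 2: ("medium", "medium"), 3: ("high", "high")}
fired = {rank: min(likelihood[l], severity[s]) for rank, (l, s) in rules.items()}
risk = sum(r * w for r, w in fired.items()) / sum(fired.values())
print(f"defuzzified risk rank: {risk:.2f}")   # lands between medium and high
```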

10.
Mycobacterium avium subspecies paratuberculosis (MAP) causes chronic inflammation of the intestines in humans, ruminants, and other species. It is the causative agent of Johne's disease in cattle, and has been implicated as the causative agent of Crohn's disease in humans. To date, no quantitative microbial risk assessment (QMRA) for MAP utilizing a dose‐response function exists. The objective of this study is to develop a nested dose‐response model for infection from oral exposure to MAP utilizing data from the peer‐reviewed literature. Four studies amenable to dose‐response modeling were identified in the literature search and fitted to the one‐parameter exponential or two‐parameter beta‐Poisson dose‐response models. A nesting analysis was performed on all permutations of the candidate data sets to determine the acceptability of pooling data sets across host species. Three of the four data sets exhibited goodness of fit to at least one model. All three exhibited good fit to the beta‐Poisson model, and one data set exhibited goodness of fit, and best fit, to the exponential model. Two data sets were successfully nested using the beta‐Poisson model with parameters α = 0.0978 and N50 = 2.70 × 10² CFU. These data sets were derived from sheep and red deer host species, indicating successful interspecies nesting, and demonstrate the highly infective nature of MAP. The nested dose‐response model described here should be used for future QMRA research regarding oral exposure to MAP.
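Assuming the standard approximate beta‐Poisson form, the nested fit reported above can be evaluated directly; the appreciable response at even a single CFU illustrates the claimed infectivity:

```python
def beta_poisson(dose, alpha=0.0978, n50=2.70e2):
    """Approximate beta-Poisson dose-response evaluated at the nested
    parameters reported in the abstract (alpha = 0.0978, N50 = 270 CFU):
    P(inf) = 1 - (1 + dose * (2**(1/alpha) - 1) / N50) ** (-alpha)."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** -alpha

for dose in (1, 10, 100, 1000):
    print(f"{dose:5d} CFU -> P(infection) = {beta_poisson(dose):.3f}")
```

By construction the curve passes through 0.5 at the median infectious dose N50.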

11.
Kenny S. Crump, Risk Analysis, 2017, 37(10): 1802–1807
In an article recently published in this journal, Bogen(1) concluded that an NRC committee's recommendations that default linear, nonthreshold (LNT) assumptions be applied to dose–response assessment for noncarcinogens and nonlinear mode of action carcinogens are not justified. Bogen criticized two arguments used by the committee for LNT: when any new dose adds to a background dose that explains background levels of risk (additivity to background, or AB), or when there is substantial interindividual heterogeneity in susceptibility (SIH) in the exposed human population. Bogen showed by examples that SIH can be false. Herein, a general proof that confirms Bogen's claim is outlined. However, it is also noted that SIH leads to a nonthreshold population distribution even if individual distributions all have thresholds, and that small changes to SIH assumptions can result in LNT. Bogen criticizes AB because it only applies when there is additivity to background, but offers no help in deciding when or how often AB holds. Bogen does not contradict the fact that AB can lead to LNT, but notes that, even if low‐dose linearity results, the response at higher doses may not be useful in predicting the amount of low‐dose linearity. Although this is theoretically true, it seems reasonable to assume that generally there is some quantitative relationship between the low‐dose slope and the slope suggested at higher doses. Several incorrect or misleading statements by Bogen are noted.
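The nonthreshold‐population point is easy to verify numerically: give every individual a hard threshold and let the thresholds vary across the population; the population risk at dose d is then the threshold CDF, which is positive at every d > 0. A sketch with a hypothetical lognormal threshold distribution:

```python
from scipy.stats import lognorm

# Individual dose-response: a hard threshold t, response = 1 only if dose > t.
# Across a heterogeneous population, risk at dose d is P(T <= d) -- positive
# for every d > 0, hence nonthreshold, even though each individual curve
# has a strict threshold. (Parameters are illustrative.)
thresholds = lognorm(s=1.5, scale=10.0)

for d in (0.01, 0.1, 1.0, 10.0):
    print(f"dose {d:6.2f}: population risk = {thresholds.cdf(d):.2e}")
```

The resulting population curve has no threshold, though it need not be linear at low dose, which is consistent with the distinction drawn above.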

12.
This paper analyzes the properties of standard estimators, tests, and confidence sets (CS's) for parameters that are unidentified or weakly identified in some parts of the parameter space. The paper also introduces methods to make the tests and CS's robust to such identification problems. The results apply to a class of extremum estimators and corresponding tests and CS's that are based on criterion functions that satisfy certain asymptotic stochastic quadratic expansions and that depend on the parameter that determines the strength of identification. This covers a class of models estimated using maximum likelihood (ML), least squares (LS), quantile, generalized method of moments, generalized empirical likelihood, minimum distance, and semi‐parametric estimators. The consistency/lack‐of‐consistency and asymptotic distributions of the estimators are established under a full range of drifting sequences of true distributions. The asymptotic sizes (in a uniform sense) of standard and identification‐robust tests and CS's are established. The results are applied to the ARMA(1, 1) time series model estimated by ML and to the nonlinear regression model estimated by LS. In companion papers, the results are applied to a number of other models.

13.
We study zero‐inventory production‐distribution systems under pool‐point delivery. The zero‐inventory production and distribution paradigm is supported in a variety of industries in which a product cannot be inventoried because of its short shelf life. The advantages of pool‐point (or hub‐and‐spoke) distribution, explored extensively in the literature, include the efficient use of transportation resources and effective day‐to‐day management of operations. The setting of our analysis is as follows: A production facility (plant) with a finite production rate distributes its single product, which cannot be inventoried, to several pool points. Each pool point may require multiple truckloads to satisfy its customers' demand. A third‐party logistics provider then transports the product to individual customers surrounding each pool point. The production rate can be increased up to a certain limit by incurring additional cost. The delivery of the product is done by identical trucks, each having limited capacity and non‐negligible traveling time between the plant and the pool points. Our objective is to coordinate the production and transportation operations so that the total cost of production and distribution is minimized, while respecting the product lifetime and the delivery capacity constraints. This study attempts to develop intuition into zero‐inventory production‐distribution systems under pool‐point delivery by considering several variants of the above setting. These include multiple trucks, a modifiable production rate, and alternative objectives. Using a combination of theoretical analysis and computational experiments, we gain insights into optimizing the total cost of a production‐delivery plan by understanding the trade‐off between production and transportation.

14.
Within the behavioral finance paradigm, the DHS (Daniel-Hirshleifer-Subrahmanyam) model is used to describe the overconfidence of integrated supply chain (ISC) participants, and an ISC demand equilibrium model incorporating overconfidence is established. The mechanism of the ISC bullwhip effect is then revealed through a perceived-risk measure, from the two perspectives of loss probability and expected loss. The "boundedly rational agent" better matches how ISC systems actually operate, and this study provides a new approach to measuring the ISC bullwhip effect under the behavioral finance paradigm.

15.
This article examines the relationship between values and risk perceptions regarding terror attacks. The participants in the study are university students from Turkey (n = 536) and Israel (n = 298). Schwartz value theory (1992, 1994) is applied to conceptualize and measure values. Cognitive (perceived likelihood and perceived severity) and emotional (fear, helplessness, anger, distress, insecurity, hopelessness, sadness, and anxiety) responses about the potential of (i) being personally exposed to a terror attack, and (ii) a terror attack that may occur in one's country are assessed to measure risk perceptions. Comparison of the two groups suggests that the Turkish participants are significantly more emotional about terror risks than the Israeli respondents. Both groups perceive the risk of a terror attack that may occur in their country as more likely than the risk of being personally exposed to a terror attack. No significant differences are found in emotional representations and perceived severity ratings regarding these risks. Results provide support for the existence of a link between values and risk perceptions of terror attacks. In both countries, self‐direction values are negatively related to emotional representations, whereas security values are positively correlated with emotions; hedonism and stimulation values are negatively related to perceived likelihood. Current findings are discussed in relation to previous results, theoretical approaches (the social amplification of risk framework and the cultural theory of risk), and practical implications (increasing community support for a course of action, training programs for risk communicators).

16.
Dose‐response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose‐response model parameters are estimated using limited epidemiological data is rarely quantified. Second‐order risk characterization approaches incorporating uncertainty in dose‐response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta‐Poisson dose‐response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta‐Poisson dose‐response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta‐Poisson dose‐response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta‐Poisson model are proposed, and simple algorithms to evaluate actual beta‐Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta‐Poisson dose‐response model parameters is attributable to the absence of low‐dose data. This region includes beta‐Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility.
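For the one‐parameter exponential model the Bayesian step needs no special functions at all, so a plain random‐walk Metropolis sampler suffices. A sketch with hypothetical trial data and a flat prior on log k (the article itself works in OpenBUGS and also treats the harder beta‐Poisson case):

```python
import numpy as np
from scipy.stats import binom

# Hypothetical feeding-trial data: dose, subjects, infected (not the
# case-study data set analyzed in the article).
dose = np.array([1e2, 1e3, 1e4, 1e5])
n = np.array([10, 10, 10, 10])
y = np.array([1, 3, 7, 10])

def log_post(log_k):
    """Flat prior on log k; exponential model P(inf) = 1 - exp(-k * dose)."""
    p = 1.0 - np.exp(-np.exp(log_k) * dose)
    return binom.logpmf(y, n, p).sum()

rng = np.random.default_rng(2)
samples, cur = [], np.log(1e-4)
cur_lp = log_post(cur)
for _ in range(20_000):                      # random-walk Metropolis
    prop = cur + rng.normal(scale=0.3)
    prop_lp = log_post(prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    samples.append(cur)

k_post = np.exp(samples[5000:])              # discard burn-in
print(f"posterior median k = {np.median(k_post):.2e}, "
      f"95% CrI = ({np.quantile(k_post, .025):.2e}, "
      f"{np.quantile(k_post, .975):.2e})")
```

The posterior over k translates directly into second‐order uncertainty bands on the computed risk at any dose.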

17.
Operators of long field‐life systems like airplanes are faced with hazards in the supply of spare parts. If the original manufacturers or suppliers of parts end their supply, this may have large impacts on operating costs of firms needing these parts. Existing end‐of‐supply evaluation methods are focused mostly on the downstream supply chain, which is of interest mainly to spare part manufacturers. Firms that purchase spare parts have limited information on parts sales, and indicators of end‐of‐supply risk can also be found in the upstream supply chain. This article proposes a methodology for firms purchasing spare parts to manage end‐of‐supply risk by utilizing proportional hazard models in terms of supply chain conditions of the parts. The considered risk indicators fall into four main categories, of which two are related to supply (price and lead time) and two others are related to demand (cycle time and throughput). The methodology is demonstrated using data on about 2,000 spare parts collected from a maintenance repair organization in the aviation industry. Cross‐validation results and out‐of‐sample risk assessments show good performance of the method to identify spare parts with high end‐of‐supply risk. Further validation is provided by survey results obtained from the maintenance repair organization, which show strong agreement between the firm's and the model's identification of high‐risk spare parts.
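A sketch of the core fitting step using the lifelines implementation of the Cox proportional hazards model; the table, column names, and values are illustrative stand‐ins for the article's four indicator categories, not its data:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical spare-parts table: observed supply duration, an event flag
# (1 = supply actually ended), and one covariate per indicator category.
df = pd.DataFrame({
    "months_supplied": [24, 60, 36, 48, 12, 72, 30, 18],
    "supply_ended":    [1, 0, 1, 0, 1, 0, 1, 1],
    "price_trend":     [0.8, 0.1, 0.5, 0.0, 0.9, -0.2, 0.6, 0.7],
    "lead_time_trend": [0.4, 0.0, 0.3, 0.1, 0.6, 0.0, 0.2, 0.5],
    "cycle_time":      [14, 30, 20, 28, 10, 35, 16, 12],
    "throughput":      [5, 40, 12, 25, 3, 60, 9, 4],
})

# A small ridge penalty keeps this toy fit numerically stable.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="months_supplied", event_col="supply_ended")
cph.print_summary()

# Rank parts by partial hazard: higher means higher end-of-supply risk.
print(cph.predict_partial_hazard(df).sort_values(ascending=False))
```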

18.
Advance selling through pre‐orders is a strategy to transfer inventory risk from a retailer to consumers. A newsvendor retailer can choose among three strategies: no advance selling allowed (NAS), moderate advance selling with a moderate discount for pre‐orders (MAS), and deep advance selling with a deep discount for pre‐orders (DAS). This research studies how a retailer could design an advance selling strategy to maximize her own profits. We find some interesting results. For example, there exist two thresholds for the selling‐season profit margin and two thresholds for the consumer's expected valuation. For products with a higher profit margin than the high threshold, a retailer should always use DAS. For products with a medium profit margin between the two thresholds, a retailer should adopt MAS if the consumer's expected valuation is lower than the high threshold and use DAS otherwise. For products with a lower profit margin than the low threshold, a retailer should use NAS, DAS, or MAS if the consumer's expected valuation is lower than the low threshold, higher than the high threshold, or between the two thresholds, respectively. Through sensitivity analyses, we also show the effects of multiple consumer characteristics on a retailer's optimal advance selling strategy.
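The threshold structure described above maps directly onto a small decision function. The four cutoff values here are placeholders; the paper derives them from model primitives:

```python
def advance_selling_strategy(margin, valuation,
                             m_lo=0.2, m_hi=0.6, v_lo=0.4, v_hi=0.7):
    """Encode the paper's threshold logic; the four cutoffs are hypothetical."""
    if margin > m_hi:
        return "DAS"                      # high margin: deep advance selling
    if margin >= m_lo:                    # medium margin
        return "MAS" if valuation < v_hi else "DAS"
    # low margin: strategy depends on where the expected valuation falls
    if valuation < v_lo:
        return "NAS"
    return "DAS" if valuation > v_hi else "MAS"

print(advance_selling_strategy(margin=0.7, valuation=0.5))  # -> DAS
print(advance_selling_strategy(margin=0.1, valuation=0.3))  # -> NAS
```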

19.
Andrea Herrmann, Risk Analysis, 2013, 33(8): 1510–1531
How well can people estimate IT‐related risk? Although estimating risk is a fundamental activity in software management and risk is the basis for many decisions, little is known about how well IT‐related risk can be estimated at all. We therefore executed a risk estimation experiment with 36 participants. They estimated the probabilities of IT‐related risks, and we investigated the effect of the following factors on the quality of the risk estimation: the estimator's age, work experience in computing, (self‐reported) safety awareness and previous experience with this risk, the absolute value of the risk's probability, and the effect of knowing the estimates of the other participants (cf. the Delphi method). Our main findings are: risk probabilities are difficult to estimate. Younger and inexperienced estimators were not significantly worse than older and more experienced estimators, but the older and more experienced subjects made better use of the knowledge gained by knowing the other estimators' results. Persons with higher safety awareness tend to overestimate risk probabilities, but can better estimate the ordinal ranks of risk probabilities. Previous own experience with a risk leads to an overestimation of its probability (unlike in fields such as medicine or natural disasters, where experience with a disease leads to more realistic probability estimates and lack of experience to underestimation).

20.
As flood risks grow worldwide, a well‐designed insurance program engaging various stakeholders becomes a vital instrument in flood risk management. The main challenge concerns the applicability of standard approaches for calculating insurance premiums for rare catastrophic losses. This article focuses on the design of a flood‐loss‐sharing program involving private insurance based on location‐specific exposures. The analysis is guided by a developed integrated catastrophe risk management (ICRM) model consisting of a GIS‐based flood model and a stochastic optimization procedure with respect to location‐specific risk exposures. To achieve the stability and robustness of the program toward floods with various recurrence intervals, the ICRM uses a stochastic optimization procedure that relies on quantile‐related risk functions of a systemic insolvency involving overpayments and underpayments of the stakeholders. Two alternative ways of calculating insurance premiums are compared: the robust premium derived with the ICRM and the premium from the traditional average annual loss approach. The applicability of the proposed model is illustrated in a case study of a Rotterdam area outside the main flood protection system in the Netherlands. Our numerical experiments demonstrate essential advantages of the robust premiums, namely, that they: (1) guarantee the program's solvency under all relevant flood scenarios rather than one average event; (2) establish a tradeoff between the security of the program and the welfare of locations; and (3) decrease the need for other risk transfer and risk reduction measures.
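A simulation sketch of why the robust premium differs from the average annual loss premium, using a simple quantile rule as a crude stand‐in for the ICRM's stochastic optimization; all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical location losses: rare floods with heavy-tailed damage,
# standing in for the GIS-based flood model's scenario output.
n_years, n_locations = 10_000, 50
flood = rng.random((n_years, n_locations)) < 0.01          # ~1/100-yr events
damage = flood * rng.lognormal(mean=2.0, sigma=1.0, size=flood.shape)
annual_loss = damage.sum(axis=1)

aal_premium = annual_loss.mean() / n_locations             # traditional AAL
# Robust premium: fund the program through the 99th-percentile year.
robust_premium = np.quantile(annual_loss, 0.99) / n_locations

for name, prem in (("AAL", aal_premium), ("robust", robust_premium)):
    insolvent = np.mean(annual_loss > prem * n_locations)
    print(f"{name:>6} premium {prem:7.3f}  P(insolvent year) = {insolvent:.3f}")
```

The AAL premium covers the average year but leaves the program insolvent in a sizable fraction of years, while the quantile‐based premium caps that probability at the chosen level, which is the tradeoff between program security and location welfare noted above.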
