Similar Documents
20 similar documents found (search time: 15 ms)
1.
A challenge for large‐scale environmental health investigations such as the National Children's Study (NCS) is characterizing exposures to multiple, co‐occurring chemical agents with varying spatiotemporal concentrations and consequences modulated by biochemical, physiological, behavioral, socioeconomic, and environmental factors. Such investigations can benefit from systematic retrieval, analysis, and integration of diverse extant information on both contaminant patterns and exposure‐relevant factors. This requires development, evaluation, and deployment of informatics methods that support flexible access and analysis of multiattribute data across multiple spatiotemporal scales. A new “Tiered Exposure Ranking” (TiER) framework, developed to support various aspects of risk‐relevant exposure characterization, is described here, with examples demonstrating its application to the NCS. TiER utilizes advances in computational informatics methods, extant database content and availability, and integrative environmental/exposure/biological modeling to support both “discovery‐driven” and “hypothesis‐driven” analyses. “Tier 1” applications focus on “exposomic” pattern recognition for extracting information from multidimensional data sets, whereas second and higher tier applications utilize mechanistic models to develop risk‐relevant exposure metrics for populations and individuals. In this article, “tier 1” applications of TiER explore identification of potentially causative associations among risk factors, to prioritize further studies, by considering publicly available demographic/socioeconomic, behavioral, and environmental data in relation to two health endpoints (preterm birth and low birth weight). A “tier 2” application develops estimates of pollutant mixture inhalation exposure indices for NCS counties, formulated to support risk characterization for these endpoints. Applications of TiER demonstrate the feasibility of developing risk‐relevant exposure characterizations for pollutants using extant environmental and demographic/socioeconomic data.
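The “tier 2” exposure indices are described only at a high level in the abstract. As a rough, hypothetical illustration of the general idea, the sketch below computes a hazard-index-style mixture metric by summing concentration-to-reference ratios across co-occurring pollutants; the pollutant names, concentrations, and reference levels are invented for illustration and are not values from the study.

```python
# A minimal, hypothetical sketch of a mixture inhalation exposure index:
# the sum of concentration-to-reference ratios across co-occurring pollutants.
# All names and numbers below are illustrative assumptions, not NCS data.

def mixture_inhalation_index(concentrations, reference_levels):
    """Hazard-index-style metric: sum of C_i / RfC_i over pollutants."""
    return sum(c / ref for c, ref in zip(concentrations, reference_levels))

pollutants = ["benzene", "PM2.5", "formaldehyde"]   # hypothetical mixture
conc = [1.8, 11.0, 2.4]    # hypothetical annual-average concentrations (ug/m3)
ref = [30.0, 12.0, 9.8]    # hypothetical health-based reference levels (ug/m3)

print(f"Mixture index: {mixture_inhalation_index(conc, ref):.2f}")
```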

2.
Empirical studies using survey data on expectations have frequently observed that forecasts are biased and have concluded that agents are not rational. We establish that existing rationality tests are not robust to even small deviations from symmetric loss and hence have little ability to tell whether the forecaster is irrational or the loss function is asymmetric. We quantify the trade‐off between forecast inefficiency and asymmetric loss leading to identical outcomes of standard rationality tests and explore new and more general methods for testing forecast rationality jointly with flexible families of loss functions that embed squared loss as a special case. Empirical applications to survey data on forecasts of real output growth and inflation suggest that rejections of rationality may largely have been driven by the assumption of squared loss. Moreover, our results suggest that agents are averse to “bad” outcomes such as lower‐than‐expected real output growth and higher‐than‐expected inflation and that they incorporate such loss aversion into their forecasts. (JEL: C22, C53, E37)
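A minimal sketch of the core intuition, assuming a lin-lin (piecewise-linear) member of the flexible loss family the abstract describes: under asymmetric loss the optimal forecast is a quantile rather than the mean, so a nonzero mean error alone cannot distinguish irrationality from asymmetry. The simulated data and the simple frequency-based asymmetry proxy below are illustrative stand-ins for the paper's GMM-based procedure.

```python
# Illustrative sketch: under lin-lin loss, rational forecasts look "biased"
# to a squared-loss test. The share of negative errors is a crude
# moment-based proxy for the loss asymmetry; the paper's estimator uses
# GMM with instruments. Data here are simulated, not survey forecasts.
import numpy as np

rng = np.random.default_rng(0)
actual = rng.normal(2.0, 1.0, 500)                  # e.g., output growth
forecast = actual + 0.4 + rng.normal(0, 0.5, 500)   # upward-biased forecasts

errors = actual - forecast
print(f"mean error (bias): {errors.mean():.2f}")    # nonzero => "irrational"?
print(f"share of negative errors: {np.mean(errors < 0):.2f}")  # implied asymmetry
```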

3.
Omega, 2004, 32(1): 31-39
This paper aims to examine potential differences in perceived usefulness of various forecasting formats from the perspectives of providers and users of predictions. The experimental procedure consists of asking participants to assume the role of forecast providers and to construct forecasts using different formats, followed by requesting usefulness ratings for these formats (Phase 1). Usefulness of the formats is rated again in hindsight after receiving individualized performance feedback (Phase 2). In the ensuing role switch exercise, given new series and external predictions, participants are required to assign usefulness ratings as forecast users (Phase 3). In the last phase, participants are given performance feedback and asked to rate the usefulness in hindsight as users of predictions (Phase 4). Results reveal that regardless of the forecasting role, 95% prediction intervals are considered to be the most useful format, followed by directional predictions, 50% interval forecasts, and lastly, point forecasts. Finally, for all formats and for both roles, usefulness in hindsight is found to be lower than usefulness prior to performance feedback presentation.

4.
The International Agency for Research on Cancer (IARC) in 2012 upgraded its hazard characterization of diesel engine exhaust (DEE) to “carcinogenic to humans.” The Diesel Exhaust in Miners Study (DEMS) cohort and nested case‐control studies of lung cancer mortality in eight U.S. nonmetal mines were influential in IARC's determination. We conducted a reanalysis of the DEMS case‐control data to evaluate its suitability for quantitative risk assessment (QRA). Our reanalysis used conditional logistic regression and adjusted for cigarette smoking in a manner similar to the original DEMS analysis. However, we included additional estimates of DEE exposure and adjustment for radon exposure. In addition to applying three DEE exposure estimates developed by DEMS, we applied six alternative estimates. Without adjusting for radon, our results were similar to those in the original DEMS analysis: all but one of the nine DEE exposure estimates showed evidence of an association between DEE exposure and lung cancer mortality, with trend slopes differing only by about a factor of two. When exposure to radon was adjusted, the evidence for a DEE effect was greatly diminished, but was still present in some analyses that utilized the three original DEMS DEE exposure estimates. A DEE effect was not observed when the six alternative DEE exposure estimates were utilized and radon was adjusted. No consistent evidence of a DEE effect was found among miners who worked only underground. This article highlights some issues that should be addressed in any use of the DEMS data in developing a QRA for DEE.

5.
Risk Analysis, 2018, 38(8): 1672-1684
A disease burden (DB) evaluation for environmental pathogens is generally performed using disability‐adjusted life years with the aim of providing a quantitative assessment of the health hazard caused by pathogens. A critical step in the preparation for this evaluation is the estimation of morbidity between exposure and disease occurrence. In this study, the method of traditional dose–response analysis was first reviewed, and the theoretical bases of a “single‐hit” model and an “infection‐illness” model were then combined by incorporating two critical factors: the “infective coefficient” and “infection duration.” This allowed a dose–morbidity model to be built for direct use in DB calculations. In addition, human experimental data for typical intestinal pathogens were obtained for model validation, and the results indicated that the model was well fitted and could be further used for morbidity estimation. On this basis, a real case of a water reuse project was selected for model application, and the morbidity as well as the DB caused by intestinal pathogens during water reuse was evaluated. The results show that the DB attributed to Enteroviruses was significant, while that for enteric bacteria was negligible. Therefore, water treatment technology should be further improved to reduce the exposure risk of Enteroviruses. Since road flushing was identified as the major exposure route, human contact with reclaimed water through this pathway should be limited. The methodology proposed for model construction not only compensates for missing morbidity data during risk evaluation, but is also necessary to quantify the maximum possible DB.
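As a sketch of the modeling idea, the snippet below chains an exponential single-hit infection model with a conditional illness probability to produce a dose–morbidity curve. The parameter values (r, the per-organism infection probability, and the conditional illness probability) are illustrative assumptions, not the paper's fitted “infective coefficient” values.

```python
# Minimal dose-morbidity sketch: exponential single-hit infection model
# chained with a conditional probability of illness given infection.
# Parameter values are illustrative assumptions, not fitted values.
import numpy as np

def p_infection(dose, r):
    """Single-hit model: each ingested organism independently infects with prob r."""
    return 1.0 - np.exp(-r * dose)

def p_illness(dose, r, p_ill_given_inf):
    """Morbidity = P(infection) * P(illness | infection)."""
    return p_infection(dose, r) * p_ill_given_inf

doses = np.array([0.1, 1.0, 10.0, 100.0])        # organisms per exposure
print(p_illness(doses, r=0.05, p_ill_given_inf=0.3))
```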

6.
Ought we to take seriously large risks predicted by “exotic” or improbable theories? We routinely assess risks on the basis of either common sense or some developed theoretical framework based on the best available scientific explanations. Recently, there has been a substantial increase of interest in the low‐probability “failure modes” of well‐established theories, which can involve global catastrophic risks. However, here I wish to discuss a partially antithetical situation: alternative, low‐probability (“small”) scientific theories predicting catastrophic outcomes with large probability. I argue that there is an important methodological issue (determining what counts as the best available explanation in cases where the theories involved describe possibilities of extremely destructive global catastrophes), which has been neglected thus far. There is no simple answer to the question of the correct method for dealing with high‐probability high‐stakes risks following from low‐probability theories that still cannot be rejected outright, and much further work is required in this area. I further argue that cases like these are more numerous than usually assumed, for reasons including cognitive biases, sociological issues in science, and the media image of science. If that is indeed so, it might lead to a greater weight of these cases in areas such as moral deliberation and policy making.

7.
Omega, 2002, 30(5): 381-392
The paper reports a study of the impact on user satisfaction and forecast accuracy of user involvement in the design of a forecasting decision support system (FDSS). Two versions of an FDSS were tested via a laboratory study. Version 1 allowed the user control over all aspects of the system, including the “look” of various screen elements and, most importantly, the model to be used, which could be selected (and tested) from a number of alternative forecasting models provided within the FDSS. In contrast, Version 2 did not allow the user to modify the “look” of the screen, and also provided no opportunity for model selection: this feature was carried out optimally by the FDSS. The user was told the advantage of optimal model selection. Both versions finished by asking the user either to accept the forecast (displayed as a point on the time-series graph) or to modify it via the mouse if unhappy with it. Results showed a much greater satisfaction with the forecasts provided by Version 1, confirming the importance of user involvement. Users of Version 1, in about half the cases, selected poor models with high forecast error. Where a model close to optimal was selected, the accuracy of Version 1 users greatly outperformed low-involvement Version 2 users. Overall, however, the accuracy of the final forecasts for users of Version 1 was slightly inferior to that of users of Version 2. Measurements of ease of use and usefulness showed no real differences between the two versions.

8.
9.
We propose a framework for out‐of‐sample predictive ability testing and forecast selection designed for use in the realistic situation in which the forecasting model is possibly misspecified, due to unmodeled dynamics, unmodeled heterogeneity, incorrect functional form, or any combination of these. Relative to the existing literature (Diebold and Mariano (1995) and West (1996)), we introduce two main innovations: (i) We derive our tests in an environment where the finite sample properties of the estimators on which the forecasts may depend are preserved asymptotically. (ii) We accommodate conditional evaluation objectives (can we predict which forecast will be more accurate at a future date?), which nest unconditional objectives (which forecast was more accurate on average?), that have been the sole focus of previous literature. As a result of (i), our tests have several advantages: they capture the effect of estimation uncertainty on relative forecast performance, they can handle forecasts based on both nested and nonnested models, they allow the forecasts to be produced by general estimation methods, and they are easy to compute. Although both unconditional and conditional approaches are informative, conditioning can help fine‐tune the forecast selection to current economic conditions. To this end, we propose a two‐step decision rule that uses current information to select the best forecast for the future date of interest. We illustrate the usefulness of our approach by comparing forecasts from leading parameter‐reduction methods for macroeconomic forecasting using a large number of predictors.
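For context, here is a minimal sketch of the unconditional comparison this framework generalizes, in the spirit of Diebold and Mariano (1995): test whether the mean loss differential between two forecasts is zero. The data are simulated placeholders; the paper's conditional test additionally projects the loss differential on time-t information.

```python
# Unconditional equal-predictive-ability test (Diebold-Mariano spirit):
# is the mean squared-loss differential between two forecasts zero?
# Simulated data; a serious application would use a HAC variance estimate
# for multi-step forecasts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.normal(size=300)
f1 = y + rng.normal(0, 1.0, 300)       # forecast from model 1
f2 = y + rng.normal(0, 1.2, 300)       # forecast from model 2 (noisier)

d = (y - f1) ** 2 - (y - f2) ** 2      # loss differential per period
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
p_value = 2 * (1 - stats.norm.cdf(abs(t_stat)))
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```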

10.
Risk aversion (a second‐order risk preference) is a time‐proven concept in economic models of choice under risk. More recently, the higher order risk preferences of prudence (third‐order) and temperance (fourth‐order) also have been shown to be quite important. While a majority of the population seems to exhibit both risk aversion and these higher order risk preferences, a significant minority does not. We show how both risk‐averse and risk‐loving behaviors might be generated by a simple type of basic lottery preference for either (1) combining “good” outcomes with “bad” ones, or (2) combining “good with good” and “bad with bad,” respectively. We further show that this dichotomy is fairly robust at explaining higher order risk attitudes in the laboratory. In addition to our own experimental evidence, we take a second look at the extant laboratory experiments that measure higher order risk preferences and we find a fair amount of support for this dichotomy. Our own experiment also is the first to look beyond fourth‐order risk preferences, and we examine risk attitudes at even higher orders.
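The “good with bad” versus “bad with bad” dichotomy can be made concrete with risk-apportionment-style lottery pairs in the spirit of this literature. The sketch below, under assumed wealth levels and a log utility (whose positive third derivative encodes prudence), shows that attaching a zero-mean risk to the no-loss state yields higher expected utility; the numbers are illustrative, not the experimental stimuli.

```python
# Risk-apportionment sketch: a prudent agent (u''' > 0) prefers to attach a
# zero-mean risk to the "good" (no-loss) state ("good with bad") rather than
# to the loss state ("bad with bad"). Wealth w, loss k, and risk +/-e are
# illustrative assumptions.
import math

w, k, e = 100.0, 20.0, 10.0
u = math.log                                 # log utility: u''' > 0 (prudence)

def Eu(x):
    """Expected utility of wealth x plus a 50-50 risk of +e or -e."""
    return 0.5 * (u(x + e) + u(x - e))

EU_good_with_bad = 0.5 * Eu(w) + 0.5 * u(w - k)   # risk on the no-loss state
EU_bad_with_bad = 0.5 * u(w) + 0.5 * Eu(w - k)    # risk on the loss state

print(EU_good_with_bad > EU_bad_with_bad)          # True under prudence
```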

11.
D. Wayne Berman, Risk Analysis, 2011, 31(8): 1308-1326
Given that new protocols for assessing asbestos‐related cancer risk have recently been published, questions arise concerning how they compare to the “IRIS” protocol currently used by regulators. The newest protocols incorporate findings from 20 additional years of literature. Thus, differences between the IRIS and newer Berman and Crump protocols are examined to evaluate whether these protocols can be reconciled. Risks estimated by applying these protocols to real exposure data from both laboratory and field studies are also compared to assess the relative health protectiveness of each protocol. The reliability of risks estimated using the two protocols is compared by evaluating the degree to which each potentially reproduces the known epidemiology study risks. Results indicate that the IRIS and Berman and Crump protocols can be reconciled; while environment‐specific variation within fiber type is apparently due primarily to size effects (not addressed by IRIS), the 10‐fold (average) difference between amphibole asbestos risks estimated using each protocol is attributable to an arbitrary selection of the lowest of available mesothelioma potency factors in the IRIS protocol. Thus, the IRIS protocol may substantially underestimate risk when exposure is primarily to amphibole asbestos. Moreover, while the Berman and Crump protocol is more reliable than the IRIS protocol overall (especially for predicting amphibole risk), evidence is presented suggesting a new fiber‐size‐related adjustment to the Berman and Crump protocol may ultimately succeed in reconciling the entire epidemiology database. However, additional data need to be developed before the performance of the adjusted protocol can be fully validated.

12.
The proliferation of innovative and exciting information technology applications that target individual “professionals” has made the examination or re‐examination of existing technology acceptance theories and models in a “professional” setting increasingly important. The current research represents a conceptual replication of several previous model comparison studies. The particular models under investigation are the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and a decomposed TPB model, potentially adequate in the targeted healthcare professional setting. These models are empirically examined and compared, using the responses to a survey on telemedicine technology acceptance collected from more than 400 physicians practicing in public tertiary hospitals in Hong Kong. Results of the study highlight several plausible limitations of TAM and TPB in explaining or predicting technology acceptance by individual professionals. In addition, findings from the study also suggest that instruments that have been developed and repeatedly tested in previous studies involving end users and business managers in ordinary business settings may not be equally valid in a professional setting. Several implications for technology acceptance/adoption research and technology management practices are discussed.

13.
The author suggests that Futures Research is an indispensable initial step in the Strategic Planning process. For various reasons, including executives' belief that futurism is not “businesslike”, it appears unwarranted to establish a Futures Research group independent of the Strategic Planning department. In view of the repeated and serious predictions of the demise of the corporation, what could be a better Futures Research project than to seek ways of reversing these forecasts?

14.
Observing that patients with longer appointment delays tend to have higher no‐show rates, many providers place a limit on how far into the future an appointment can be scheduled. This article studies how the choice of appointment scheduling window affects a provider's operational efficiency. We use a single server queue to model the registered appointments in a provider's work schedule, and the capacity of the queue serves as a proxy for the size of the appointment window. The provider chooses a common appointment window for all patients to maximize her long‐run average net reward, which depends on the rewards collected from patients served and the “penalty” paid for those who cannot be scheduled. Using a stylized M/M/1/K queueing model, we provide an analytical characterization for the optimal appointment queue capacity K, and study how it should be adjusted in response to changes in other model parameters. In particular, we find that simply increasing the appointment window could be counterproductive when patients become more likely to show up. Patient sensitivity to incremental delays, rather than the magnitudes of no‐show probabilities, plays a more important role in determining the optimal appointment window. Via extensive numerical experiments, we confirm that our analytical results obtained under the M/M/1/K model continue to hold in more realistic settings. Our numerical study also reveals substantial efficiency gains resulting from adopting an optimal appointment scheduling window when the provider has no other operational levers available to deal with patient no‐shows. However, when the provider can adjust panel size and overbooking level, limiting the appointment window serves more as a substitute strategy, rather than a complement.
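A minimal numerical sketch of the trade-off, assuming an M/M/1/K queue in which show rates decay with the number of patients already booked (a stand-in for delay-sensitive no-shows). The reward, penalty, and decay parameters are illustrative assumptions, not the paper's calibration.

```python
# M/M/1/K appointment-window sketch: served patients earn reward r, no-show
# slots cost c, blocked patients incur penalty p, and a patient who books
# behind n others shows with probability q**n (delay-sensitive no-shows).
# All parameter values are illustrative assumptions.
import numpy as np

def net_reward(lam, mu, K, r=1.0, c=0.6, p=0.25, q=0.9):
    rho = lam / mu
    probs = np.array([rho ** n for n in range(K + 1)])
    probs /= probs.sum()                       # stationary distribution of M/M/1/K
    p_block = probs[K]                         # arriving patient finds the window full
    # expected net value of an admitted arrival who joins behind n patients
    value = sum(probs[n] * (r * q ** n - c * (1 - q ** n)) for n in range(K))
    return lam * value - p * lam * p_block

lam, mu = 0.9, 1.0
best_K = max(range(1, 41), key=lambda K: net_reward(lam, mu, K))
print(best_K, round(net_reward(lam, mu, best_K), 3))   # interior optimum, not K = 40
```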

15.
This study presents an empirical investigation of the effects of size and ownership structure of the firm on the motivations for use of business community involvement practices. The “motivation‐mix” conceptual framework, composed of commitment, calculation, conformance, and caring motivational mechanisms, is used to conduct eight comparative case studies. Results indicate that (1) size and ownership structure, per se, do not affect the motivations, and (2) high levels of calculation and low levels of caring are observed in one particular combination of size and ownership structure: large, publicly held firms.

16.
Product recovery operations in reverse supply chains face rapidly changing demand due to the increasing number of product offerings with reduced lifecycles. Therefore, capacity planning becomes a strategic issue of major importance for the profitability of closed‐loop supply chains. This work studies a closed‐loop supply chain with remanufacturing and presents dynamic capacity planning policies developed through the methodology of System Dynamics. The key issue of the paper is how the lifecycles and return patterns of various products affect the optimal policies regarding expansion and contraction of collection and remanufacturing capacities. The model can be used to identify effective policies, to conduct various “what‐if” analyses, and to answer questions about the long‐term profitability of reverse supply chains with remanufacturing. The results of numerical examples with quite different lifecycle and return patterns show how the optimal collection expansion/contraction and remanufacturing contraction policies depend on the lifecycle type and the average usage time of the product, while the remanufacturing capacity expansion policy is not significantly affected by these factors. The results also show that the collection and remanufacturing capacity policies are insensitive to the total product demand. The insensitivity of the optimal policies to total demand is a particularly appealing feature of the proposed model, given the difficulty in obtaining accurate demand forecasts.
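To make the System Dynamics flavor concrete, here is a minimal stock-and-flow sketch under assumed parameters: demand follows a bell-shaped lifecycle, returns lag sales by an average usage time, and collection capacity adjusts toward observed returns with a first-order delay. The lifecycle shape, return rate, and adjustment times are illustrative assumptions, not the paper's model.

```python
# Stock-and-flow sketch: bell-shaped product lifecycle, returns lagged by the
# average usage time, and first-order adjustment of collection capacity toward
# incoming returns. All parameters are illustrative assumptions.
import math

T = 60                         # months simulated
usage_lag = 12                 # average usage time before product return
adjust_time = 6.0              # capacity adjustment delay (months)
return_rate = 0.6              # fraction of sold units eventually returned

demand = [100 * math.exp(-((t - 24) / 12.0) ** 2) for t in range(T)]
returns = [0.0] * usage_lag + [return_rate * d for d in demand[: T - usage_lag]]

capacity = 0.0
for t in range(T):
    capacity += (returns[t] - capacity) / adjust_time   # expand/contract toward returns
    collected = min(returns[t], capacity)               # actual collection volume
    if t % 12 == 0:
        print(f"t={t:2d}  returns={returns[t]:6.1f}  "
              f"capacity={capacity:6.1f}  collected={collected:6.1f}")
```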

17.
Tucker Burch, Risk Analysis, 2019, 39(3): 599-615
The assumptions underlying quantitative microbial risk assessment (QMRA) are simple and biologically plausible, but QMRA predictions have never been validated for many pathogens. The objective of this study was to validate QMRA predictions against epidemiological measurements from outbreaks of waterborne gastrointestinal disease. I screened 2,000 papers and identified 12 outbreaks with the necessary data: disease rates measured using epidemiological methods and pathogen concentrations measured in the source water. Eight of the 12 outbreaks were caused by Cryptosporidium, three by Giardia, and one by norovirus. Disease rates varied from 5.5 × 10⁻⁶ to 1.1 × 10⁻² cases/person‐day, and reported pathogen concentrations varied from 1.2 × 10⁻⁴ to 8.6 × 10² per liter. I used these concentrations with single‐hit dose–response models for all three pathogens to conduct QMRA, producing both point and interval predictions of disease rates for each outbreak. Comparison of QMRA predictions to epidemiological measurements showed good agreement; interval predictions contained measured disease rates for 9 of 12 outbreaks, with point predictions off by factors of 1.0–120 (median = 4.8). Furthermore, 11 outbreaks occurred at mean doses of less than 1 pathogen per exposure. Measured disease rates for these outbreaks were clearly consistent with a single‐hit model, and not with a “two‐hit” threshold model. These results demonstrate the validity of QMRA for predicting disease rates due to Cryptosporidium and Giardia.
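As a sketch of the prediction step being validated, the snippet below converts a measured source-water concentration into a predicted daily disease rate via an exponential single-hit dose-response model. The dose-response parameter r and the daily ingestion volume are illustrative assumptions, not the fitted values used in the study.

```python
# Single-hit QMRA sketch: measured concentration -> daily dose -> predicted
# cases/person-day. Note that the single-hit model yields nonzero risk even
# at mean doses below 1 organism per exposure, unlike a threshold model.
# Parameter values are illustrative assumptions.
import math

def predicted_case_rate(conc_per_liter, liters_per_day, r):
    """Predicted cases/person-day from the exponential single-hit model."""
    mean_daily_dose = conc_per_liter * liters_per_day
    return 1.0 - math.exp(-r * mean_daily_dose)

# e.g., 0.01 oocysts/L in source water, 1 L/day consumed
print(predicted_case_rate(0.01, 1.0, r=0.09))
```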

18.
Using the intuition that financial markets transfer risks in business time, “market microstructure invariance” is defined as the hypotheses that the distributions of risk transfers (“bets”) and transaction costs are constant across assets when measured per unit of business time. The invariance hypotheses imply that bet size and transaction costs have specific, empirically testable relationships to observable dollar volume and volatility. Portfolio transitions can be viewed as natural experiments for measuring transaction costs, and individual orders can be treated as proxies for bets. Empirical tests based on a data set of 400,000+ portfolio transition orders support the invariance hypotheses. The constants calibrated from structural estimation imply specific predictions for the arrival rate of bets (“market velocity”), the distribution of bet sizes, and transaction costs.
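A minimal sketch of the empirically testable relationships the abstract mentions, under the commonly cited reading of invariance: with trading activity W = σ · P · V, the bet arrival rate scales as W^(2/3) and mean bet size as a fraction of volume scales as W^(-2/3). The proportionality constants below are placeholders, not the paper's calibrated values.

```python
# Invariance-implied scaling sketch (assumed reading of the hypotheses):
# W = sigma * P * V; bet arrival rate ~ W**(2/3) ("market velocity");
# mean bet size / volume ~ W**(-2/3). Constants c are placeholders.

def trading_activity(price, daily_volume, daily_volatility):
    return daily_volatility * price * daily_volume

def bet_arrival_rate(price, volume, vol, c=1.0):
    return c * trading_activity(price, volume, vol) ** (2.0 / 3.0)

def mean_bet_fraction_of_volume(price, volume, vol, c=1.0):
    return c * trading_activity(price, volume, vol) ** (-2.0 / 3.0)

# Illustrative comparison: a large, active stock vs. a small, quiet one.
print(bet_arrival_rate(50.0, 1e7, 0.02), bet_arrival_rate(20.0, 1e5, 0.03))
print(mean_bet_fraction_of_volume(50.0, 1e7, 0.02),
      mean_bet_fraction_of_volume(20.0, 1e5, 0.03))
```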

19.
20.
Risk Analysis, 2018, 38(10): 2222-2241
The human population is forecast to increase by 3–4 billion people during this century and many scientists have expressed concerns that this could increase the likelihood of certain adverse events (e.g., climate change and resource shortages). Recent research shows that these concerns are mirrored in public risk perceptions and that these perceptions correlate with a willingness to adopt mitigation behaviors (e.g., reduce resource consumption) and preventative actions (e.g., support actions to limit growth). However, little research has assessed the factors that influence risk perceptions of global population growth (GPG). To contribute to this important goal, this article presents three studies that examined how risk perceptions of GPG might be influenced by textual‐visual representations (like those in media and Internet articles) of the potential effects of GPG. Study 1 found that a textual narrative that highlighted the potential negative (cf. positive) consequences of GPG led to higher perceived risk and greater willingness to adopt mitigation behaviors, but not to support preventative actions. Notably, the influence of the narratives on perceived risk was largely moderated by the participant's prior knowledge and perceptions of GPG. Contrary to expectations, studies 2 and 3 revealed, respectively, that photographs depicting GPG‐related imagery and graphs depicting GPG rates had no significant effect on the perceived risk of GPG or the willingness to embrace mitigation or preventative actions. However, study 3 found that individuals with higher “graph literacy” perceived GPG as a higher risk and were more willing to adopt mitigation behaviors and support preventative actions.
