In this paper we compare predictions derived from 10 different human physiologically based pharmacokinetic (PBPK) models for perchloroethylene with data on absorption via inhalation and on concentrations in alveolar air and venous blood. Our most interesting finding is that essentially all of the models show a consistent time pattern of departure between predicted and observed air and blood levels, one that might be corrected by more sophisticated model structures incorporating either (a) heterogeneity of the fat compartment (with respect to perfusion, partition coefficients, or both) or (b) intertissue diffusion of perchloroethylene between the fat and muscle/vessel-rich (VRG) groups. Similar corrections have recently been proposed to reduce analogous anomalies in the fits of pharmacokinetic models to data for several volatile anesthetics.(17-20) A second finding is that models incorporating resting values for alveolar ventilation in the region of 5.4 L/min appear most compatible with the most reliable set of perchloroethylene uptake data.
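To make the general model structure concrete, the following is a minimal sketch of a flow-limited PBPK model for an inhaled volatile compound with two lumped tissue groups. It is an illustration only, not any of the ten models compared in the paper; all parameter values except the 5.4 L/min resting ventilation mentioned above are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal flow-limited PBPK sketch for an inhaled volatile compound.
# Placeholder parameters; NOT any of the ten models compared in the paper.
Q_alv = 5.4    # alveolar ventilation, L/min (resting value cited in the abstract)
Q_c   = 6.0    # cardiac output, L/min (assumed)
C_inh = 1.0    # inhaled air concentration (arbitrary units)
P_ba  = 10.0   # blood:air partition coefficient (placeholder)

# Two lumped tissue groups: fat and a combined muscle/vessel-rich group.
Q = {"fat": 0.05 * Q_c, "mvrg": 0.95 * Q_c}   # perfusion, L/min (assumed split)
V = {"fat": 10.0, "mvrg": 50.0}               # volumes, L (placeholders)
P = {"fat": 100.0, "mvrg": 5.0}               # tissue:blood partition coefficients

def rhs(t, a):
    # Venous concentration leaving each tissue (flow-limited assumption).
    cv = {k: a[i] / (V[k] * P[k]) for i, k in enumerate(("fat", "mvrg"))}
    cv_mix = sum(Q[k] * cv[k] for k in Q) / Q_c
    # Arterial concentration from steady-state alveolar gas exchange.
    c_art = (Q_alv * C_inh + Q_c * cv_mix) / (Q_c + Q_alv / P_ba)
    return [Q[k] * (c_art - cv[k]) for k in ("fat", "mvrg")]

sol = solve_ivp(rhs, (0.0, 240.0), [0.0, 0.0])   # 4-hour exposure, in minutes
a_end = sol.y[:, -1]
cv_end = {k: a_end[i] / (V[k] * P[k]) for i, k in enumerate(("fat", "mvrg"))}
cv_mix = sum(Q[k] * cv_end[k] for k in Q) / Q_c
c_art = (Q_alv * C_inh + Q_c * cv_mix) / (Q_c + Q_alv / P_ba)
print("end-exposure alveolar air concentration:", round(c_art / P_ba, 3))
```

Structural refinements of the kind the paper discusses, such as splitting the fat compartment into sub-compartments with different perfusion rates, would replace the single "fat" entry above with several entries sharing the total fat perfusion and volume.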
Most public health risk assessments assume and combine a series of average, conservative, and worst-case values to derive a conservative point estimate of risk. This procedure has major limitations. This paper demonstrates a new methodology for extended uncertainty analyses in public health risk assessments using Monte Carlo techniques. The extended method begins, as some conventional methods do, with the preparation of a spreadsheet to estimate exposure and risk. This method, however, continues by modeling key inputs as random variables described by probability density functions (PDFs). Overall, the technique provides a quantitative way to estimate the probability distributions for exposure and health risks within the validity of the model used. As an example, this paper presents a simplified case study for children playing in soils contaminated with benzene and benzo(a)pyrene (BaP).
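A minimal sketch of the Monte Carlo step is shown below. The dose equation, distributions, and slope factor are illustrative placeholders, not the values from the paper's benzene/BaP case study; the point is only that sampled inputs yield a distribution of risk rather than a single conservative point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo draws

# Key inputs modeled as random variables with assumed PDFs (placeholders,
# not the paper's case-study parameters).
conc   = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=n)  # soil conc., mg/kg
intake = rng.triangular(20, 100, 200, size=n)                # soil ingestion, mg/day
bw     = rng.normal(20.0, 3.0, size=n).clip(min=5.0)         # body weight, kg
sf     = 0.029   # cancer slope factor, (mg/kg-day)^-1 (placeholder)

# Simplified average daily dose and incremental cancer risk.
add  = conc * intake * 1e-6 / bw   # mg/kg-day (1e-6 converts mg soil to kg)
risk = sf * add

# Report the distribution instead of a single conservative point estimate.
print("median risk:", np.median(risk))
print("95th percentile risk:", np.percentile(risk, 95))
```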
Multi-criteria inventory classification groups inventory items into classes, each of which is managed by a specific re-order policy according to its priority. However, the tasks of inventory classification and control are not carried out jointly unless the classification criteria and the classification approach are robustly established from an inventory-cost perspective. Exhaustive simulation at the single-item level of the inventory system would solve this issue directly by searching for the best re-order policy for each item, thus achieving the optimal classification without resorting to any multi-criteria classification method. However, this would be very time-consuming in real settings, where a large number of items must be managed simultaneously.
In this article, a reduction in simulation effort is achieved by extracting from the population of items a sample on which to perform an exhaustive search for the best re-order policy per item; the lowest-cost classification of in-sample items is thereby obtained. Then, in line with the increasing need for ICT tools in the production management of Industry 4.0 systems, supervised classifiers from the machine learning field (i.e., support vector machines with a Gaussian kernel and deep neural networks) are trained on these in-sample items to learn to classify the out-of-sample items based solely on the values of their features (i.e., the classification criteria). The inventory system adopted here is suitable for intermittent demands, but it may also suit non-intermittent demands, thus providing great flexibility. Experimental analysis of two large datasets showed excellent accuracy, which suggests that machine learning classifiers could be implemented in advanced inventory classification systems.
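A minimal sketch of the supervised step follows, using scikit-learn's SVC with a Gaussian (RBF) kernel as named in the abstract. The feature matrix, policy labels, and hyperparameters are random placeholders, not the paper's datasets or tuned settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical feature matrix: one row per in-sample item, columns are the
# classification criteria (e.g., unit cost, demand rate, lead time).
X_in = np.random.default_rng(0).random((500, 3))
# Labels: index of the best re-order policy found by exhaustive simulation
# for each in-sample item (random placeholders here).
y_in = np.random.default_rng(1).integers(0, 3, size=500)

# SVM with a Gaussian (RBF) kernel; feature scaling matters for RBF
# kernels, hence the pipeline.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_in, y_in)

# Out-of-sample items are then assigned a policy class from features alone,
# with no further simulation.
X_out = np.random.default_rng(2).random((10, 3))
print(clf.predict(X_out))
```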
The characterization and treatment of uncertainty pose special challenges when modeling indeterminate or complex coupled systems, such as those involved in the interactions among human activity, climate, and the ecosystem. Uncertainty about model structure may become as important as, or more important than, uncertainty about parameter values. When uncertainty grows so large that prediction or optimization no longer makes sense, it may still be possible to use the model as a behavioral test bed to examine the relative robustness of alternative observational and behavioral strategies. When models must be run into portions of their phase space that are not well understood, different submodels may become unreliable at different rates. A common example involves running a time-stepped model far into the future. Several strategies can be used to deal with such situations. The probability of model failure can be reported as a function of time. Possible alternative surprises can be assigned probabilities, modeled separately, and combined. Finally, through the use of subjective judgments, one may be able to combine, and over time shift between, models, moving from more detailed models to progressively simpler order-of-magnitude models and perhaps, ultimately, to simple bounding analysis.
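A minimal sketch of two of these strategies, combining separately modeled surprise scenarios by subjective probability and reporting a failure probability as a function of time, might look like the following. The scenarios, weights, and failure hazard are invented placeholders, not results from any model discussed in the abstract.

```python
import numpy as np

# Hypothetical surprise scenarios, each modeled separately and assigned a
# subjective probability (all placeholders).
t = np.arange(0, 101)   # simulated years
scenarios = {
    "baseline":   (0.6, 1.00 * np.exp(0.01 * t)),
    "surprise_a": (0.3, 1.00 * np.exp(0.03 * t)),
    "surprise_b": (0.1, 0.90 * np.exp(-0.01 * t)),
}

# Combined projection: probability-weighted mixture of the scenario runs.
combined = sum(p * traj for p, traj in scenarios.values())

# Probability of model failure reported as a function of time: here an
# assumed hazard that grows as the model leaves its well-tested domain.
p_fail = 1.0 - np.exp(-0.02 * t)

print("combined value at t=50:", round(float(combined[50]), 3))
print("P(model failure) at t=50:", round(float(p_fail[50]), 2))
```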
This paper focuses on the relationship between experiential and statistical uncertainties in the timing of births in Cameroon (Central Africa). Most theories of fertility level and change emphasize the emergence of parity-specific control, treating desired family size as both central and stable across the life course. By contrast, this paper argues for a theory of reproduction that emphasizes process, social context, and contingency. The paper concentrates on the second birth interval, showing that it is longer and more variable among educated women than among uneducated women. The paper argues that this difference is due to the specific forms of uncertainty associated with education in contemporary Cameroon.
Interdependency analysis, in the context of this article, is a process of assessing and managing risks inherent in a system of interconnected entities (e.g., infrastructures or industry sectors). Invoking the principles of input-output (I-O) and decomposition analysis, the article offers a framework for describing how terrorism-induced perturbations can propagate through such interconnections. Data published by the Bureau of Economic Analysis of the U.S. Department of Commerce are used to construct applications that serve as test beds for the proposed framework. Specifically, a case study estimating the economic impact of airline demand perturbations on national-level U.S. sectors is made possible using I-O matrices. A ranking of the affected sectors according to their vulnerability to perturbations originating from a primary sector (e.g., air transportation) can serve as important input to risk management. For example, limited resources can be prioritized for the "top-n" sectors that are perceived to suffer the greatest economic losses due to terrorism. In addition, regional decomposition via location quotients enables the analysis of local-level terrorism events. The Regional Input-Output Multiplier System II (RIMS II), released by the Bureau of Economic Analysis, provides the regional multipliers for various geographical resolutions (economic areas, states, and counties). A regional-level case study demonstrates a process of estimating the economic impact of transportation-related scenarios on industry sectors within Economic Area 010 (the New York metropolitan region and vicinities).
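To illustrate how a demand perturbation propagates through an I-O model, here is a minimal sketch using the standard Leontief relation delta_x = (I - A)^(-1) delta_d. The three-sector technical-coefficient matrix and the perturbation are invented for illustration and are not drawn from BEA data.

```python
import numpy as np

# Toy 3-sector technical-coefficient matrix A (placeholder, not BEA data);
# a_ij is the input from sector i needed per dollar of output in sector j.
A = np.array([[0.10, 0.05, 0.02],
              [0.20, 0.15, 0.10],
              [0.05, 0.10, 0.08]])

# Terrorism-induced perturbation to final demand: e.g., a $100M drop in
# demand for sector 0 ("air transportation" in this toy example).
delta_d = np.array([-100.0, 0.0, 0.0])

# Total output change propagated through interdependencies:
# delta_x = (I - A)^(-1) delta_d
L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse
delta_x = L @ delta_d

# Rank sectors by vulnerability to the perturbation (largest loss first),
# the kind of "top-n" ordering the abstract describes.
ranking = np.argsort(delta_x)
print("output changes ($M):", np.round(delta_x, 1))
print("sectors ranked by loss:", ranking)
```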
The attack that occurred on September 11, 2001 was, in the end, the result of a failure to detect and prevent the terrorist operations that hit the United States. The U.S. government thus faces at this time the daunting tasks of, first, drastically increasing its ability to obtain and interpret different types of signals of impending terrorist attacks with sufficient lead time and accuracy and, second, improving its ability to react effectively. One of the main challenges is the fusion of information from different sources (U.S. or foreign) and of different types (electronic signals, human intelligence, etc.). Fusion thus involves two very distinct and separate issues: communications, i.e., ensuring that the different U.S. and foreign intelligence agencies communicate all relevant and accurate information in a timely fashion, and, perhaps more difficult, merging the content of signals, some "sharp" and some "fuzzy," some dependent and some independent, into useful information. The focus of this article is on the latter issue and on the use of the results. In this article, I present a classic probabilistic Bayesian model sometimes used in engineering risk analysis, which can be helpful in the fusion of information because it allows computation of the posterior probability of an event given its prior probability (before the signal is observed) and the quality of the signal, characterized by the probabilities of false positives and false negatives. Experience suggests that the nature of these errors has sometimes been misunderstood; therefore, I discuss the validity of several possible definitions.
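A minimal sketch of the Bayesian updating step the abstract describes follows; all numerical values are illustrative placeholders, not figures from the article.

```python
# Posterior probability of an attack given a positive warning signal,
# via Bayes' theorem. All numbers are illustrative placeholders.
p_attack = 0.01   # prior probability of the event, P(attack)
p_fp     = 0.05   # false positive rate: P(signal | no attack)
p_fn     = 0.20   # false negative rate: P(no signal | attack)

# P(attack | signal) = P(signal | attack) P(attack) / P(signal)
p_signal  = (1 - p_fn) * p_attack + p_fp * (1 - p_attack)
posterior = (1 - p_fn) * p_attack / p_signal

print(f"posterior P(attack | signal) = {posterior:.3f}")
```

Note how the result depends on which error definitions are used: with a rare event, even a small false positive rate keeps the posterior modest, which is why the precise definitions of these errors matter as the article emphasizes.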