Similar Literature
20 similar documents were found.
1.
Many models of exposure-related carcinogenesis, including traditional linearized multistage models and more recent two-stage clonal expansion (TSCE) models, belong to a family of models in which cells progress between successive stages (possibly undergoing proliferation at some stages) at rates that may depend, usually linearly, on biologically effective doses. Biologically effective doses, in turn, may depend nonlinearly on administered doses, due to physiologically based pharmacokinetic (PBPK) nonlinearities. This article provides an exact mathematical analysis of the expected number of cells in the last ("malignant") stage of such a "multistage clonal expansion" (MSCE) model as a function of dose rate and age. The solution displays symmetries such that several distinct sets of parameter values provide identical fits to all epidemiological data, make identical predictions about the effects on risk of changes in exposure levels or timing, and yet make significantly different predictions about the effects on risk of changes in the composition of exposure that affect the pharmacodynamic dose-response relation. Several different predictions for the effects of such an intervention (such as reducing carcinogenic constituents of an exposure) that acts on only one or a few stages of the carcinogenic process may be equally consistent with all preintervention epidemiological data. This is an example of nonunique identifiability of model parameters and predictions from data. The new results on nonunique model identifiability presented here show that the effects of an intervention on changing age-specific cancer risks in an MSCE model can be either large or small, but which is the case cannot be predicted from preintervention epidemiological data and knowledge of the biological effects of the intervention alone. Rather, biological data that identify which rate parameters hold for which specific stages are required to obtain unambiguous predictions. From epidemiological data alone, only a set of equally likely alternative predictions can be made for the effects on risk of such interventions.
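A minimal sketch of the expected-value dynamics behind such models is given below: it integrates the simplest member of the MSCE family, a two-stage clonal expansion model, assuming a constant normal-cell pool and a linear dose effect on initiation and malignant conversion. All parameter values are hypothetical and the simple Euler integration is illustrative only; it is not the paper's exact solution.

```python
import numpy as np

# Sketch of the expected-value equations of a two-stage clonal expansion
# (TSCE) model, the simplest member of the MSCE family discussed above.
# All parameter values are hypothetical and chosen only for illustration.

def expected_malignant_cells(dose, age_years, dt=0.01,
                             X=1e7,      # normal stem cells (assumed constant)
                             nu0=1e-7,   # background initiation rate per cell-year
                             alpha=9.0,  # division rate of intermediate cells (per year)
                             beta=8.9,   # death/differentiation rate (per year)
                             mu0=1e-7,   # background malignant-conversion rate (per year)
                             k_nu=0.5,   # assumed linear dose effect on initiation
                             k_mu=0.5):  # assumed linear dose effect on conversion
    """Euler integration of dI/dt = nu*X + (alpha - beta - mu)*I, dM/dt = mu*I."""
    nu = nu0 * (1.0 + k_nu * dose)   # dose acts linearly on initiation
    mu = mu0 * (1.0 + k_mu * dose)   # dose acts linearly on conversion
    I, M = 0.0, 0.0
    for _ in np.arange(0.0, age_years, dt):
        dI = nu * X + (alpha - beta - mu) * I
        dM = mu * I
        I += dI * dt
        M += dM * dt
    return M

# Parameter sets that act on different stages can fit the same age-specific
# curve yet respond differently to an intervention on a single stage.
print(expected_malignant_cells(dose=0.0, age_years=70))
print(expected_malignant_cells(dose=1.0, age_years=70))
```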

2.
An assessment of military logistics planning models offers a great deal of information about the state of the art in modelling. Such tools include both analytic and simulation model types. They can deal with both static and dynamic characteristics of the environment. They require highly detailed data for their operation and can compute over a large number of interacting variables. Unfortunately, these models do not adequately satisfy the requirements of the particular logistics issue for which they were assessed, namely whether such models can be used in early logistics planning for new weapon systems. One difficulty is that such planning must make extensive use of tradeoffs and sensitivity analysis to take account of the flexibility and uncertainties existing at the early stages. Another is that the existing models call for detailed data that are usually not available at that time. Therefore, although the models do fulfill a particular useful planning function, they must be replaced or augmented by a new class of models that will much more closely satisfy the planning need. This new capability requires a serious research effort that will benefit not only military logistics planners, but also other planners dealing with large capital development programs.

3.
A Monte Carlo method is presented to study the effect of systematic and random errors on computer models that deal mainly with experimental data. A common assumption in this type of model (linear and nonlinear regression, and nonregression computer models) involving experimental measurements is that the error sources are mainly random and independent, with no constant background errors (systematic errors). However, comparisons of different experimental data sources often reveal evidence of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for the output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions of the random and systematic errors. The main objectives are to detect which error source dominates the propagated uncertainty and to characterize the combined effect of both error types on the output variables of the models. The results from the case studies analyzed show that the approach is able to distinguish which error type has the more significant effect on the performance of the model. It was also found that systematic or calibration errors, if present, cannot be neglected in the uncertainty analysis of models that depend on experimental measurements such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety factor selection, modeling, experimental data measurement, and experimental design.
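The following sketch illustrates the basic idea under stated assumptions (a one-variable toy model, normally distributed random error, and a uniformly distributed calibration bias); it is not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of propagating random and systematic measurement
# errors through a simple model output and comparing the resulting
# distributions. The model and error magnitudes are hypothetical.

rng = np.random.default_rng(42)
n_trials = 10_000
x_true = 25.0                      # "true" measured property

def model(x):
    return 3.2 * x + 0.04 * x**2   # arbitrary model of the measurement

sigma_random = 0.5                 # std. dev. of random measurement error
bias_range = (-1.5, 1.5)           # plausible band for a calibration (systematic) error

# Case 1: random errors only (fresh noise each trial).
y_random = model(x_true + rng.normal(0.0, sigma_random, n_trials))

# Case 2: random plus systematic errors (one bias per trial, representing an
# unknown but fixed calibration offset).
bias = rng.uniform(*bias_range, n_trials)
y_both = model(x_true + bias + rng.normal(0.0, sigma_random, n_trials))

for label, y in [("random only", y_random), ("random + systematic", y_both)]:
    lo, hi = np.percentile(y, [2.5, 97.5])
    print(f"{label:22s} 95% band: [{lo:.2f}, {hi:.2f}]")
```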

4.
5.
This paper reviews the effects of generally controllable factors such as physical conditioning, ambient temperature, and amount of prior sleep on adjustment to night work periods and shift work. One can expect a 5-10% decline in capacity for work in nocturnal work periods as compared to daytime work periods. This decreased capacity would dissipate if workers maintained a consistent sleep-wake routine for 8-16 days after moving to a new shift. Proven means for accelerating this adaptation are currently not available. The ability to perform work also declines as the length of the work period increases, although the extent depends upon the individual task. Physical conditioning improves mood and general well-being, but no strong evidence currently indicates that conditioning increases tolerance of or adjustment to shift work.

Increased ambient temperature increases the stress of work, although studies have not addressed heat as a factor in adjustment to shift work. While lower nocturnal temperatures would be expected to reduce heat stress during night shifts, supporting data do not exist. Studies also have not addressed the negative consequences of cold stress, or of rotating from night to day shifts with its added heat stress.

The proper use of short sleep periods, either as preparation for or as a response to a shift change, can ameliorate some effects of shift rotation. Data indicate that performance on 'graveyard' shifts can be maintained close to baseline levels following true prophylactic naps, while performance may decline by up to 30% when such naps are not taken. While there is evidence that naps, or even rest periods without sleep, are beneficial in improving mood in normal young adults, these data do not apply to 'replacement' naps. Studies of interjected naps imply that such naps do reduce sleep debt, but not that they are more beneficial than longer sleep periods. Naps appear to be most advantageous when the accumulated sleep debt is least.

6.
Monte Carlo simulations are commonplace in quantitative risk assessments (QRAs). Designed to propagate the variability and uncertainty associated with each individual exposure input parameter in a quantitative risk assessment, Monte Carlo methods statistically combine the individual parameter distributions to yield a single, overall distribution. Critical to such an assessment is the representativeness of each individual input distribution. The authors performed a literature review to collect and compare the distributions used in published QRAs for the parameters of body weight, food consumption, soil ingestion rates, breathing rates, and fluid intake. To provide a basis for comparison, all estimated exposure parameter distributions were evaluated with respect to four properties: consistency, accuracy, precision, and specificity. The results varied depending on the exposure parameter. Even where extensive, well-collected data exist, investigators used a variety of different distributional shapes to approximate these data. Where such data do not exist, investigators have collected their own data, often leading to substantial disparity in parameter estimates and in the subsequent choice of distribution. The present findings indicate that more attention must be paid to the data underlying these distributional choices. More emphasis should be placed on sensitivity analyses, on quantifying the impact of assumptions, and on discussing sources of variation as part of the presentation of any risk assessment results. If such practices and disclosures are followed, it is believed that Monte Carlo simulations can greatly enhance the accuracy and appropriateness of specific risk assessments. Without such disclosures, researchers will only be increasing the size of the risk assessment "black box," a concern already raised by many critics of more traditional risk assessments.
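As a toy illustration of this kind of propagation, the sketch below combines assumed distributions for soil ingestion rate and body weight into a distribution of average daily dose. The distribution shapes and parameter values are hypothetical and are not recommendations for any assessment.

```python
import numpy as np

# Minimal, hypothetical sketch of Monte Carlo propagation in a QRA:
# combining input distributions for intake rate and body weight into a
# distribution of average daily dose.

rng = np.random.default_rng(0)
n = 100_000

concentration = 0.8                                                   # mg contaminant per kg soil (fixed)
soil_ingestion = rng.lognormal(mean=np.log(50), sigma=0.6, size=n)    # mg soil per day
body_weight = rng.normal(loc=70.0, scale=12.0, size=n).clip(min=30)   # kg

# Average daily dose (mg contaminant per kg body weight per day);
# the factor 1e-6 converts mg of soil to kg of soil.
add = concentration * soil_ingestion * 1e-6 / body_weight

print("median dose:", np.median(add))
print("95th percentile:", np.percentile(add, 95))
```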

7.
In the quest to model various phenomena, the foundational importance of parameter identifiability to sound statistical modeling may be less well appreciated than goodness of fit. Identifiability concerns the quality of objective information in data to facilitate estimation of a parameter, while nonidentifiability means there are parameters in a model about which the data provide little or no information. In purely empirical models where parsimonious good fit is the chief concern, nonidentifiability (or parameter redundancy) implies overparameterization of the model. In contrast, in mechanistically derived models where parameters are interpreted as having strong practical meaning, nonidentifiability implies that the available data are underinformative. This study explores illustrative examples of structural nonidentifiability and its implications using mechanistically derived models (for repeated presence/absence analyses and dose-response of Escherichia coli O157:H7 and norovirus) drawn from quantitative microbial risk assessment. Following algebraic proof of nonidentifiability in these examples, profile likelihood analysis and Bayesian Markov chain Monte Carlo with uniform priors are illustrated as tools to help detect model parameters that are not strongly identifiable. It is shown that identifiability should be considered during experimental design and ethics approval to ensure that the generated data can yield strong objective information about all mechanistic parameters of interest. When Bayesian methods are applied to a nonidentifiable model, the subjective prior effectively fabricates information about any parameters about which the data carry no objective information. Finally, structural nonidentifiability can lead to spurious models that fit data well but yield severely flawed inferences and predictions when they are interpreted or used inappropriately.
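The sketch below shows how a flat profile likelihood exposes structural nonidentifiability in a toy dose-response model in which only the product of two parameters enters the likelihood; the data and parameterization are hypothetical and are not the paper's E. coli or norovirus models.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

# Toy example of structural nonidentifiability: the response probability
# depends only on the product a*b, so the data cannot separate a from b and
# the profile likelihood of a is flat. Counts below are invented.

doses = np.array([10., 100., 1000., 10000.])
n_subjects = np.array([20, 20, 20, 20])
infected = np.array([0, 2, 12, 20])          # consistent with a*b near 1e-3

def neg_log_lik(a, b):
    p = 1.0 - np.exp(-a * b * doses)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -binom.logpmf(infected, n_subjects, p).sum()

# Profile likelihood of a: optimize b for each fixed a.
for a in [1e-4, 1e-3, 1e-2, 1e-1]:
    res = minimize_scalar(lambda b: neg_log_lik(a, b),
                          bounds=(1e-8, 1e3), method="bounded")
    print(f"a = {a:.0e}  profiled -logL = {res.fun:.3f}  (b* = {res.x:.2e})")
# The profiled -logL is essentially constant in a, the signature of
# structural nonidentifiability.
```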

8.
This paper investigates belief learning. Unlike other investigators, who have been forced to use observable proxies to approximate unobserved beliefs, we elicited subject beliefs directly using a belief elicitation procedure (a proper scoring rule). As a result we were able to perform a more direct test of the proposition that people behave in a manner consistent with belief learning. What we find is interesting. First, to the extent that subjects tend to "belief learn," the beliefs they use are the stated beliefs we elicit from them and not the "empirical beliefs" posited by fictitious play or Cournot models. Second, we present evidence that the stated beliefs of our subjects differ dramatically, both quantitatively and qualitatively, from the type of empirical or historical beliefs usually used as proxies for them. Third, our belief elicitation procedures allow us to examine how far we can be led astray when we are forced to infer the value of parameters using observable proxies for variables previously thought to be unobservable. By transforming a heretofore unobservable variable into an observable one, we can see directly how parameter estimates change when this new information is introduced. Again, we demonstrate that such differences can be dramatic. Finally, our belief learning model using stated beliefs outperforms both a reinforcement model and an experience-weighted attraction (EWA) model when all three models are estimated using our data.
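For reference, the two kinds of "empirical beliefs" mentioned above can be constructed as in the sketch below, which uses a hypothetical opponent history; stated beliefs, by contrast, are elicited directly from subjects.

```python
import numpy as np

# Sketch of the two empirical-belief constructions contrasted with stated
# beliefs in the paper. The opponent history is hypothetical.

history = ["A", "B", "A", "A", "B", "A"]     # opponent's past actions
actions = ["A", "B"]

# Fictitious play: belief = empirical frequency of each action over the
# entire history.
counts = np.array([history.count(a) for a in actions], dtype=float)
fictitious_play_belief = counts / counts.sum()

# Cournot beliefs: probability 1 on the opponent's most recent action.
cournot_belief = np.array([1.0 if a == history[-1] else 0.0 for a in actions])

print("fictitious play:", dict(zip(actions, fictitious_play_belief)))
print("Cournot:        ", dict(zip(actions, cournot_belief)))
```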

9.
The accounting for pension obligations is based upon numerous parameters whose future developments must be forecasted for valuation purposes. If the actual realizations of these parameters deviate from their original estimates, so-called actuarial gains and losses are generated. IAS 19 provides three alternative accounting treatments for these actuarial gains and losses. In particular, the equity approach and the corridor approach, both frequently used in practice, implicitly assume that the actuarial gains and losses offset each other in the long run. However, several studies using Monte Carlo simulation have demonstrated that this long-term offset is not assured, though they do not propose a rationale for the observed systematic generation of actuarial gains and losses. The present paper provides an analytic rationale for the systematic appearance of actuarial gains and losses and comes to the following conclusion: the long-term offset of actuarial gains and losses is assured only if the parameters that must be estimated for valuation purposes are independent. If those parameters are interdependent, which seems a sound assumption in reality, the offset of actuarial gains and losses does not hold. In the case of a positive correlation, actuarial losses are generated on a systematic basis, whereas a negative correlation results in systematic actuarial gains.
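A small simulation along the following lines illustrates the mechanism: if the period gain/loss contains an interaction of two zero-mean forecast errors, its expectation is proportional to their covariance and vanishes only under independence. The valuation formula and parameter values here are invented for illustration, so whether the systematic effect shows up as a gain or a loss depends on the sign of the assumed interaction coefficient rather than reproducing the paper's derivation.

```python
import numpy as np

# Hypothetical sketch: the gain/loss contains an interaction c*x*y of two
# zero-mean forecast errors x and y (e.g. deviations of discount rate and
# salary growth from their estimates). The linear terms average out, but the
# cross term has expectation c * Cov(x, y), which vanishes only when the
# errors are independent.

rng = np.random.default_rng(1)
n = 200_000
sigma_x, sigma_y, c = 0.01, 0.01, 50.0

def mean_gain_loss(rho):
    cov = [[sigma_x**2, rho * sigma_x * sigma_y],
           [rho * sigma_x * sigma_y, sigma_y**2]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    # gain/loss per period: linear terms plus an interaction term
    return np.mean(2.0 * x - 3.0 * y + c * x * y)

for rho in (-0.8, 0.0, 0.8):
    print(f"correlation {rho:+.1f}: mean actuarial gain/loss = {mean_gain_loss(rho):+.6f}")
```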

10.
The quantification of the relationship between the amount of microbial organisms ingested and a specific outcome such as infection, illness, or mortality is a key aspect of quantitative risk assessment. A main problem in determining such dose-response models is the availability of appropriate data. Human feeding trials have been criticized because only young, healthy volunteers are selected to participate and because low doses, as often occur in real life, are typically not considered. Epidemiological outbreak data are considered more valuable, but are more subject to data uncertainty. In this article, we model the dose-illness relationship based on data from 20 Salmonella outbreaks, as discussed by the World Health Organization. In particular, we model the dose-illness relationship using generalized linear mixed models and fractional polynomials of dose. The fractional polynomial models are modified to satisfy the properties of the different types of dose-illness models proposed by Teunis et al. Within these models, differences in host susceptibility (susceptible versus normal population) are modeled as fixed effects, whereas differences in serovar type and food matrix are modeled as random effects. In addition, two bootstrap procedures are presented: the first accounts for stochastic variability, whereas the second accounts for both stochastic variability and data uncertainty. The analyses indicate that the susceptible population has a higher probability of illness at low dose levels when the pathogen-food matrix combination is extremely virulent, and at high dose levels when the combination is less virulent. Furthermore, the analyses suggest that immunity exists in the normal population but not in the susceptible population.
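A greatly simplified, fixed-effects-only sketch of such a dose-illness fit is shown below: a logistic model in log10(dose) fitted to invented outbreak counts by maximum likelihood. It omits the random effects for serovar and food matrix and the fractional-polynomial dose terms used in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import binom

# Simplified dose-illness fit: logistic model in log10(dose), binomial
# likelihood. Exposure and illness counts below are invented for illustration.

log_dose = np.log10(np.array([1e2, 1e3, 1e4, 1e5, 1e6]))
exposed  = np.array([50, 60, 40, 30, 25])
ill      = np.array([2, 9, 14, 18, 22])

def neg_log_lik(theta):
    intercept, slope = theta
    p = expit(intercept + slope * log_dose)
    return -binom.logpmf(ill, exposed, p).sum()

fit = minimize(neg_log_lik, x0=[-3.0, 0.5], method="Nelder-Mead")
intercept, slope = fit.x
print("intercept, slope:", intercept, slope)
print("predicted P(illness) at dose 1e4:", expit(intercept + slope * 4))
```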

11.
Risk assessors often use probability plots to assess the fit of a particular distribution or model, by comparing the plotted points to a straight line, and to obtain estimates of the parameters of parametric distributions or models. When empirical data do not fall along a sufficiently straight line on a probability plot, and when no other single parametric distribution provides an acceptable (graphical) fit to the data, the risk assessor may consider a mixture model with two component distributions. Animated probability plots are a way to visualize the possible behaviors of mixture models with two component distributions. When no single parametric distribution provides an adequate fit to an empirical dataset, animated probability plots can help an analyst pick plausible mixture models for the data based on their qualitative fit. After using animations during exploratory data analysis, the analyst must then use other statistical tools, including but not limited to maximum likelihood estimation (MLE) to find the optimal parameters, goodness-of-fit (GoF) tests, and a variety of diagnostic plots to check the adequacy of the fit. Using a specific example with two LogNormal components, we illustrate the use of animated probability plots as a tool for exploring the suitability of a mixture model with two component distributions. Animations work well with other types of probability plots, and they may be extended to analyze mixture models with three or more component distributions.
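The sketch below shows the static (non-animated) version of this diagnostic under assumed parameters: data from a two-component LogNormal mixture give a noticeably lower probability-plot correlation than a genuine single LogNormal sample.

```python
import numpy as np
from scipy import stats

# Static lognormal probability-plot check for mixture data: sample from a
# two-component LogNormal mixture and compare the probability-plot
# correlation with that of a single LogNormal. Parameters are hypothetical.

rng = np.random.default_rng(7)
n = 2_000
w = 0.4                                     # weight of the first component
component = rng.random(n) < w
data = np.where(component,
                rng.lognormal(mean=0.0, sigma=0.4, size=n),
                rng.lognormal(mean=2.0, sigma=0.3, size=n))

# Probability-plot coordinates: theoretical normal quantiles vs. log(data).
(osm, osr), (slope, intercept, r) = stats.probplot(np.log(data), dist="norm")
print(f"mixture data probability-plot correlation r = {r:.4f}")

# Reference: a genuine single LogNormal sample of the same size.
single = rng.lognormal(mean=1.0, sigma=0.8, size=n)
(_, _), (_, _, r_single) = stats.probplot(np.log(single), dist="norm")
print(f"single-LogNormal reference r = {r_single:.4f}")
```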

12.
A simple and useful characterization of many predictive models is in terms of model structure and model parameters. Accordingly, uncertainties in model predictions arise from uncertainties in the values assumed by the model parameters (parameter uncertainty) and from the uncertainties and errors associated with the structure of the model (model uncertainty). When assessing uncertainty, one is interested in identifying, at some level of confidence, the range of possible, and then probable, values of the unknown of interest. All sources of uncertainty and variability need to be considered. Although parameter uncertainty assessment has been extensively discussed in the literature, model uncertainty is a relatively new topic of discussion in the scientific community, despite often being the major contributor to the overall uncertainty. This article describes a Bayesian methodology for the assessment of model uncertainties, in which models are treated as sources of information on the unknown of interest. The general framework is then specialized to the case where models provide point estimates about a single-valued unknown and where information about the models is available in the form of homogeneous and nonhomogeneous performance data (pairs of experimental observations and model predictions). Several example applications to physical models used in fire risk analysis are also provided.
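For the specialized case of point estimates about a single-valued unknown, a minimal version of the idea can be sketched as below, assuming normal model errors, a flat prior, and invented performance data; the paper's full framework is more general.

```python
import numpy as np

# Hypothetical sketch: each model provides a point estimate of a single
# unknown, and past (observation, prediction) pairs characterize its bias
# and noise. Normal errors and a flat prior are assumed for illustration.

# Performance data: pairs of (experimental observation, model prediction).
performance = {
    "model_A": np.array([[10.2, 9.0], [12.1, 10.8], [8.9, 7.6]]),
    "model_B": np.array([[10.2, 11.5], [12.1, 13.0], [8.9, 9.4]]),
}
new_prediction = {"model_A": 14.0, "model_B": 16.2}

# Characterize each model's additive error (observation - prediction).
terms = []
for name, pairs in performance.items():
    errors = pairs[:, 0] - pairs[:, 1]
    bias, var = errors.mean(), errors.var(ddof=1)
    # Bias-corrected estimate of the unknown, with its error variance.
    terms.append((new_prediction[name] + bias, var))

# Precision-weighted combination (normal likelihoods, flat prior).
precisions = np.array([1.0 / v for _, v in terms])
estimates = np.array([e for e, _ in terms])
post_mean = (precisions * estimates).sum() / precisions.sum()
post_sd = np.sqrt(1.0 / precisions.sum())
print(f"posterior mean = {post_mean:.2f}, posterior sd = {post_sd:.2f}")
```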

13.
14.
In recent years, physiologically based pharmacokinetic models have come to play an increasingly important role in risk assessment for carcinogens. The hope is that they can help open the black box between external exposure and carcinogenic effects to experimental observation, and improve both high-dose to low-dose and interspecies projections of risk. However, to date there have been only relatively preliminary efforts to assess the uncertainties in current modeling results. In this paper we compare the physiologically based pharmacokinetic models (and model predictions of risk-related overall metabolism) that have been produced by seven different sets of authors for perchloroethylene (tetrachloroethylene). The most striking conclusion from the data is that most of the differences in risk-related model predictions are attributable to the choice of the data sets used for calibrating the metabolic parameters. Second, it is clear that the bottom-line differences among the model predictions are appreciable. Overall, the ratios of low-dose human to bioassay rodent metabolism spanned a 30-fold range for the six available human/rat comparisons, and the seven predicted ratios of low-dose human to bioassay mouse metabolism spanned a 13-fold range. (The greater range for the rat/human comparison is attributable to a structural assumption by one author group of competing linear and saturable pathways, and their conclusion that the dangerous saturable pathway constitutes a minor fraction of metabolism in rats.) It is clear that there are a number of opportunities for modelers to make different choices of model structure, interpretive assumptions, and calibrating data in the process of constructing pharmacokinetic models for use in estimating "delivered" or "biologically effective" dose for carcinogenesis risk assessments. We believe that in presenting the results of such modeling studies, it is important for researchers to explore the results of alternative, reasonably likely approaches for interpreting the available data, and either show that any conclusions they reach are relatively insensitive to particular interpretive choices or acknowledge the differences in conclusions that would result from plausible alternative views of the world.

15.
16.
There has been increasing interest in physiologically based pharmacokinetic (PBPK) models in the area of risk assessment. The use of these models raises two important issues: (1) How good are PBPK models for predicting experimental kinetic data? (2) How is the variability in the model output affected by the number of parameters and the structure of the model? To examine these issues, we compared a five-compartment PBPK model, a three-compartment PBPK model, and nonphysiological compartmental models of benzene pharmacokinetics. Monte Carlo simulations were used to take into account the variability of the parameters. The models were fitted to three sets of experimental data, and a hypothetical experiment was simulated with each model to provide a uniform basis for comparison. Two main results are presented: (1) the difference is larger between the predictions of the same model fitted to different data sets than between the predictions of different models fitted to the same data; and (2) the type of data used to fit the model has a larger effect on the variability of the predictions than the type of model and the number of parameters.
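The sketch below conveys the Monte Carlo step with a deliberately crude one-compartment model featuring saturable metabolism; the parameter distributions are assumptions, and it is not any of the benzene models compared in the paper.

```python
import numpy as np

# Crude one-compartment kinetics with saturable (Michaelis-Menten) metabolism,
# used only to illustrate how Monte Carlo sampling of parameters translates
# into variability of a prediction such as total amount metabolized.

rng = np.random.default_rng(3)

def amount_metabolized(dose_mg, vmax, km, ke, t_end=24.0, dt=0.01):
    """Euler integration of dA/dt = -ke*A - vmax*A/(km + A)."""
    a, metabolized = dose_mg, 0.0
    for _ in np.arange(0.0, t_end, dt):
        met_rate = vmax * a / (km + a)
        a += (-ke * a - met_rate) * dt
        metabolized += met_rate * dt
    return metabolized

# Monte Carlo over uncertain parameters (distributions are assumptions).
n = 500
vmax = rng.lognormal(np.log(5.0), 0.3, n)   # mg/h
km = rng.lognormal(np.log(10.0), 0.3, n)    # mg
ke = rng.lognormal(np.log(0.1), 0.2, n)     # 1/h, non-metabolic elimination

results = np.array([amount_metabolized(50.0, v, k, e)
                    for v, k, e in zip(vmax, km, ke)])
print("median metabolized (mg):", np.median(results))
print("5th-95th percentile:", np.percentile(results, [5, 95]))
```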

17.
18.
When making business decisions, people generally receive some form of guidance. Often, this guidance might be in the form of instructions about which inputs to the decision are most important. Alternatively, it might be outcome feedback concerning the appropriateness of their decisions. When people receive guidance in making difficult judgments, it is important that they do not confuse this guidance with insight into their own decision models. This study examined whether people confuse their actual decision model with task information and outcome feedback. Subjects predicted the likelihood that various hypothetical companies would experience financial distress and then reported the decision models they believed they had used. Their reported models were compared with their actual models, as estimated by a regression of the subjects' predictions on the inputs to their decisions. In a 2×2 factorial design, some subjects were provided with task information regarding the relative importance of each input to their decisions while others were not, and some subjects were provided with outcome feedback regarding the quality of their decisions while others were not. The subjects tended to confuse the task information and outcome feedback with their actual decision models. Implications of the results are discussed.
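The regression step can be sketched as below with simulated data: the subject's actual policy is recovered by regressing their predictions on the cue values and compared with hypothetical self-reported weights.

```python
import numpy as np

# Hypothetical policy-capturing sketch: estimate a subject's actual decision
# model by regressing distress predictions on cue values, then compare the
# estimated weights with the weights the subject reports. Data are simulated.

rng = np.random.default_rng(11)
n_cases, n_cues = 40, 3
cues = rng.normal(size=(n_cases, n_cues))            # e.g. financial ratios

true_weights = np.array([0.6, 0.3, 0.1])             # subject's actual (unobserved) policy
predictions = cues @ true_weights + rng.normal(0.0, 0.2, n_cases)

# Least-squares estimate of the subject's decision model.
X = np.column_stack([np.ones(n_cases), cues])
coef, *_ = np.linalg.lstsq(X, predictions, rcond=None)
estimated_weights = coef[1:]

reported_weights = np.array([0.4, 0.4, 0.2])         # what the subject says they did
print("estimated weights:", np.round(estimated_weights, 2))
print("reported weights: ", reported_weights)
# A gap between the two rows is the kind of confusion the study examines.
```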

19.
20.
This paper discusses a Value-at-Risk (VaR) approach for assets that accounts for event risk and applies it in an empirical study of the Shanghai stock index. The approach describes event risk with jumps and models the return process as a jump-diffusion process. Model parameters are estimated with a simulated annealing algorithm, the simulated distribution of asset returns is obtained by stochastic simulation, and the Value-at-Risk of the portfolio is then computed. The empirical study of the Shanghai index shows that the event risk of assets cannot be ignored, and that a Value-at-Risk measure incorporating event risk is more reasonable.
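A minimal sketch of the simulation step is given below: one-day returns are drawn from a Merton-type jump-diffusion and the 99% Value-at-Risk is read off the simulated distribution. Parameter values are illustrative, not estimates for the Shanghai index (which the paper obtains via simulated annealing).

```python
import numpy as np

# Simulate one-day log returns from a jump-diffusion (Merton-type) process
# and compute Value-at-Risk from the simulated distribution. All parameter
# values are hypothetical.

rng = np.random.default_rng(2024)
n_paths = 100_000
dt = 1.0 / 250                      # one trading day

mu, sigma = 0.08, 0.25              # diffusion drift and volatility (annual)
lam = 10.0                          # jump intensity (jumps per year)
jump_mu, jump_sigma = -0.02, 0.04   # normal jump-size distribution (log scale)

# Diffusion part of the one-day log return.
diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

# Compound-Poisson jump part: sum of n_jumps normal jump sizes.
n_jumps = rng.poisson(lam * dt, size=n_paths)
jumps = rng.normal(loc=jump_mu * n_jumps, scale=jump_sigma * np.sqrt(n_jumps))

log_returns = diffusion + jumps
var_99 = -np.quantile(log_returns, 0.01)
print(f"simulated 1-day 99% VaR (fraction of portfolio value): {var_99:.4f}")
```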
