Similar Documents (20 results)
1.
Because of the inherent complexity of biological systems, there is often a choice between a number of apparently equally applicable physiologically based models to describe uptake and metabolism processes in toxicology or risk assessment. These models may fit the particular data sets of interest equally well, but may give quite different parameter estimates or predictions under different (extrapolated) conditions. Such competing models can be discriminated by a number of methods, including potential refutation by means of strategic experiments, and their ability to suitably incorporate all relevant physiological processes. For illustration, three currently used models for steady-state hepatic elimination (the venous equilibration model, the parallel tube model, and the distributed sinusoidal perfusion model) are reviewed and compared with particular reference to their application in the area of risk assessment. The ability of each of the models to describe and incorporate such physiological processes as protein binding, precursor-metabolite relations and hepatic zones of elimination, capillary recruitment, capillary heterogeneity, and intrahepatic shunting is discussed. Differences between the models in hepatic parameter estimation, extrapolation to different conditions, and interspecies scaling are discussed, and criteria for choosing one model over the others are presented. In this case, the distributed model provides the most general framework for describing physiological processes taking place in the liver, and, unlike the other two models, has so far not been experimentally refuted. These simpler models may, however, provide useful bounds on parameter estimates and on extrapolations and risk assessments.
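As a concrete point of comparison, the short sketch below evaluates the standard steady-state clearance formulas of the venous equilibration (well-stirred) and parallel tube models, CL = Q·fu·CLint/(Q + fu·CLint) and CL = Q·(1 − exp(−fu·CLint/Q)). The flow, binding, and intrinsic clearance values are illustrative only, and the distributed sinusoidal perfusion model is omitted because it has no comparably simple closed form.

```python
# Sketch: compare steady-state hepatic clearance predictions of the venous
# equilibration (well-stirred) and parallel tube models. Standard textbook
# formulas; all parameter values are illustrative.
import numpy as np

Q = 1.5                          # hepatic blood flow (L/min), illustrative
fu = 0.1                         # fraction unbound in blood
CLint = np.logspace(-1, 3, 5)    # intrinsic clearance (L/min), swept over a range

CL_ws = Q * fu * CLint / (Q + fu * CLint)       # venous equilibration model
CL_pt = Q * (1.0 - np.exp(-fu * CLint / Q))     # parallel tube model

for cli, ws, pt in zip(CLint, CL_ws, CL_pt):
    print(f"CLint={cli:9.2f}  well-stirred CL={ws:.3f}  parallel-tube CL={pt:.3f}")
```

The two predictions coincide for low-extraction compounds and diverge as extraction approaches the flow limit, which is precisely the regime where model choice matters for extrapolation.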

2.
The widely used estimator of Berry, Levinsohn, and Pakes (1995) produces estimates of consumer preferences from a discrete‐choice demand model with random coefficients, market‐level demand shocks, and endogenous prices. We derive numerical theory results characterizing the properties of the nested fixed point algorithm used to evaluate the objective function of BLP's estimator. We discuss problems with typical implementations, including cases that can lead to incorrect parameter estimates. As a solution, we recast estimation as a mathematical program with equilibrium constraints, which can be faster and which avoids the numerical issues associated with nested inner loops. The advantages are even more pronounced for forward‐looking demand models where the Bellman equation must also be solved repeatedly. Several Monte Carlo and real‐data experiments support our numerical concerns about the nested fixed point approach and the advantages of constrained optimization. For static BLP, the constrained optimization approach can be as much as ten to forty times faster for large‐dimensional problems with many markets.
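For intuition, the sketch below implements the inner-loop contraction δ ← δ + log(s_obs) − log(s(δ)) that the nested fixed point approach iterates inside every objective evaluation, on a simulated random-coefficients logit. All values (number of products, draws, tolerance) are illustrative; this is not the full BLP estimator (there are no instruments or outer GMM search).

```python
# Sketch: the BLP inner-loop contraction delta <- delta + log(s_obs) - log(s(delta)).
# Market shares come from a random-coefficients logit, simulated by Monte Carlo.
# A loose inner tolerance is exactly the implementation detail the paper warns
# can propagate error into the outer parameter search.
import numpy as np

rng = np.random.default_rng(0)
J, R = 5, 500                     # products, simulation draws
x = rng.uniform(1, 2, J)          # product characteristic (illustrative)
sigma = 0.8                       # random-coefficient std. dev. (illustrative)
nu = rng.standard_normal(R)       # consumer taste draws

def shares(delta):
    u = delta[None, :] + sigma * nu[:, None] * x[None, :]   # R x J utilities
    eu = np.exp(u)
    return (eu / (1.0 + eu.sum(axis=1, keepdims=True))).mean(axis=0)

s_obs = shares(rng.standard_normal(J))   # synthetic "observed" shares

delta = np.zeros(J)
for it in range(10_000):
    delta_new = delta + np.log(s_obs) - np.log(shares(delta))
    if np.max(np.abs(delta_new - delta)) < 1e-12:   # tight inner tolerance
        break
    delta = delta_new
print(f"converged in {it} iterations; recovered delta: {np.round(delta, 3)}")
```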

3.
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway‐Maxwell Poisson (COM‐Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation (MLE) method for fitting the COM‐Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM‐Poisson GLM, and (2) estimate the prediction accuracy of the COM‐Poisson GLM using simulated data sets. The results of the study indicate that the COM‐Poisson GLM is flexible enough to model under‐, equi‐, and overdispersed data sets with different sample mean values. The results also show that the COM‐Poisson GLM yields accurate parameter estimates. The COM‐Poisson GLM provides a promising and flexible approach for performing count data regression.
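For reference, the COM-Poisson distribution underlying the GLM has pmf P(Y = y) ∝ λ^y/(y!)^ν, where ν controls dispersion. The sketch below, in which the truncation bound and parameter values are arbitrary choices, computes the pmf numerically and shows under-, equi-, and overdispersion as ν varies.

```python
# Sketch: the Conway-Maxwell-Poisson pmf  P(Y=y) = lambda^y / (y!)^nu / Z(lambda, nu).
# nu > 1 gives underdispersion, nu = 1 recovers the Poisson, nu < 1 gives
# overdispersion. The series truncation bound is an illustrative choice.
import numpy as np
from scipy.special import gammaln

def com_poisson_pmf(y, lam, nu, ymax=200):
    js = np.arange(ymax + 1)
    logw = js * np.log(lam) - nu * gammaln(js + 1)   # unnormalized log weights
    logZ = np.logaddexp.reduce(logw)                 # log normalizing constant
    return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - logZ)

for nu in (0.5, 1.0, 2.0):
    ys = np.arange(201)
    p = com_poisson_pmf(ys, lam=3.0, nu=nu)
    mean = (ys * p).sum()
    var = ((ys - mean) ** 2 * p).sum()
    print(f"nu={nu:3.1f}  mean={mean:6.2f}  var={var:6.2f}  var/mean={var/mean:.2f}")
```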

4.
We propose a new methodology for structural estimation of infinite horizon dynamic discrete choice models. We combine the dynamic programming (DP) solution algorithm with the Bayesian Markov chain Monte Carlo algorithm into a single algorithm that solves the DP problem and estimates the parameters simultaneously. As a result, the computational burden of estimating a dynamic model becomes comparable to that of a static model. Another feature of our algorithm is that even though the number of grid points on the state variable is small per solution‐estimation iteration, the number of effective grid points increases with the number of estimation iterations. This is how we help ease the “curse of dimensionality.” We simulate and estimate several versions of a simple model of entry and exit to illustrate our methodology. We also prove that under standard conditions, the parameters converge in probability to the true posterior distribution, regardless of the starting values.

5.
In the quest to model various phenomena, the foundational importance of parameter identifiability to sound statistical modeling may be less well appreciated than goodness of fit. Identifiability concerns the quality of objective information in data to facilitate estimation of a parameter, while nonidentifiability means there are parameters in a model about which the data provide little or no information. In purely empirical models where parsimonious good fit is the chief concern, nonidentifiability (or parameter redundancy) implies overparameterization of the model. In contrast, nonidentifiability implies underinformativeness of available data in mechanistically derived models where parameters are interpreted as having strong practical meaning. This study explores illustrative examples of structural nonidentifiability and its implications using mechanistically derived models (for repeated presence/absence analyses and dose–response of Escherichia coli O157:H7 and norovirus) drawn from quantitative microbial risk assessment. Following algebraic proof of nonidentifiability in these examples, profile likelihood analysis and Bayesian Markov Chain Monte Carlo with uniform priors are illustrated as tools to help detect model parameters that are not strongly identifiable. It is shown that identifiability should be considered during experimental design and ethics approval to ensure generated data can yield strong objective information about all mechanistic parameters of interest. When Bayesian methods are applied to a nonidentifiable model, the subjective prior effectively fabricates information about any parameters about which the data carry no objective information. Finally, structural nonidentifiability can lead to spurious models that fit data well but can yield severely flawed inferences and predictions when they are interpreted or used inappropriately.
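To make the profile likelihood diagnostic concrete, the toy sketch below fits a single-hit model p(d) = 1 − exp(−a·b·d) in which two "mechanistic" parameters enter only through their product, a textbook structural nonidentifiability. The model and data are hypothetical, not from the article's E. coli O157:H7 or norovirus examples.

```python
# Sketch: detecting structural nonidentifiability with a profile likelihood.
# Here a and b enter the model only through a*b, so the data cannot separate
# them: profiling one parameter yields an essentially flat log-likelihood.
import numpy as np
from scipy.optimize import minimize_scalar

dose = np.array([1., 5., 10., 50., 100.])
n    = np.full(5, 50)                     # subjects per dose group
k    = np.array([2, 9, 17, 41, 49])       # hypothetical positive responses

def negloglik(a, b):
    p = 1.0 - np.exp(-a * b * dose)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(k * np.log(p) + (n - k) * np.log(1 - p)).sum()

# Profile: fix a on a grid, optimize over b, record the profiled -logL.
for a in [0.001, 0.01, 0.1, 1.0, 10.0]:
    res = minimize_scalar(lambda b: negloglik(a, b),
                          bounds=(1e-9, 1e3), method="bounded")
    print(f"a={a:7.3f}  profiled -logL = {res.fun:.4f}  (b* = {res.x:.4g})")
# The profiled -logL is constant in a: a is structurally nonidentifiable.
```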

6.
The alleviation of food-borne disease caused by microbial pathogens remains a great concern in ensuring the well-being of the general public. The relation between the ingested dose of organisms and the associated infection risk can be studied using dose-response models. Traditionally, a model selected according to a goodness-of-fit criterion has been used for making inferences. In this article, we propose a modified set of fractional polynomials as competitive dose-response models in risk assessment. The article not only shows instances where it is not obvious how to single out one best model, but also illustrates that model averaging can best circumvent this dilemma. The set of candidate models is chosen based on biological plausibility and rationale, and the risk at a dose common to all these models is estimated using the selected models and by averaging over all models using Akaike's weights. In addition to including parameter estimation inaccuracy, as in the case of a single selected model, model averaging accounts for the uncertainty arising from other competitive models. This leads to a better and more honest estimation of standard errors and construction of confidence intervals for risk estimates. The approach is illustrated for risk estimation at low dose levels based on Salmonella typhi and Campylobacter jejuni data sets in humans. Simulation studies indicate that model averaging has reduced bias, better precision, and attains coverage probabilities closer to the 95% nominal level compared to best-fitting models selected by the Akaike information criterion.
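The averaging step itself is simple; the sketch below combines per-model risk estimates with Akaike weights w_i ∝ exp(−ΔAIC_i/2). The AIC values and risks are hypothetical placeholders, not results from the Salmonella typhi or Campylobacter jejuni analyses.

```python
# Sketch: averaging a low-dose risk estimate over candidate dose-response
# models with Akaike weights. AIC values and per-model risk estimates are
# hypothetical stand-ins for fitted fractional-polynomial models.
import numpy as np

aic  = np.array([212.4, 213.1, 215.8, 218.0])   # hypothetical AICs
risk = np.array([0.031, 0.046, 0.022, 0.055])   # P(infection) at the target dose

delta = aic - aic.min()
w = np.exp(-delta / 2)
w /= w.sum()                                    # Akaike weights

avg_risk = (w * risk).sum()
for i, (wi, ri) in enumerate(zip(w, risk), 1):
    print(f"model {i}: weight={wi:.3f}  risk={ri:.3f}")
print(f"model-averaged risk = {avg_risk:.4f}")
# Between-model spread would also enter the averaged standard error, which is
# how model averaging yields the more honest intervals described above.
```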

7.
Comparisons of learning models in repeated games have been a central preoccupation of experimental and behavioral economics over the last decade. Much of this work begins with pooled estimation of the model(s) under scrutiny. I show that in the presence of parameter heterogeneity, pooled estimation can produce a severe bias that tends to unduly favor reinforcement learning relative to belief learning. This occurs when comparisons are based on goodness of fit and when comparisons are based on the relative importance of the two kinds of learning in hybrid structural models. Even misspecified random parameter estimators can greatly reduce the bias relative to pooled estimation.

8.
Experimental animal studies often serve as the basis for predicting risk of adverse responses in humans exposed to occupational hazards. A statistical model is applied to exposure-response data, and this fitted model may be used to obtain estimates of the exposure associated with a specified level of adverse response. Unfortunately, a number of different statistical models are candidates for fitting the data, and these may result in wide-ranging estimates of risk. Bayesian model averaging (BMA) offers a strategy for addressing uncertainty in the selection of statistical models when generating risk estimates. This strategy is illustrated with two examples: applying the multistage model to cancer responses, and a second example where different quantal models are fit to kidney lesion data. BMA provides excess risk estimates or benchmark dose estimates that reflect model uncertainty.

9.
Current methods for reference dose (RfD) determination can be enhanced through the use of biologically-based dose-response analysis. The methods developed here utilize information from tetrachlorodibenzo-p-dioxin (TCDD) to focus on noncancer endpoints, specifically TCDD-mediated immune system alterations and enzyme induction. Dose-response analysis, using the Sigmoid-Emax (EMAX) function, is applied to multiple studies to determine consistency of response. Through the use of multiple studies and statistical comparison of parameter estimates, it was demonstrated that the slope estimates across studies were very consistent. This adds confidence to the subsequent effect dose estimates. This study also compares traditional methods of risk assessment, such as the NOAEL/safety-factor approach, to a modified benchmark dose approach introduced here. Confidence in the estimation of an effect dose (ED10) was improved through the use of multiple datasets, which is key to adding confidence to the benchmark dose estimates. In addition, the Sigmoid-Emax function, when applied to dose-response data using nonlinear regression analysis, provides a significantly improved fit to the data, increasing confidence in parameter estimates and thereby improving effect dose estimates.
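A minimal sketch of the core computation follows: fitting a Sigmoid-Emax (Hill) function by nonlinear least squares and solving for an ED10, defined here as the dose producing 10% of the maximal response. The data are synthetic, and the parameterization is one common convention rather than necessarily the article's.

```python
# Sketch: Sigmoid-Emax (Hill) fit by nonlinear regression, then an ED10.
# Synthetic data; in the article multiple datasets would be fit and slope
# estimates compared across studies.
import numpy as np
from scipy.optimize import curve_fit

def emax(d, e0, emax_, ed50, n):
    return e0 + emax_ * d**n / (ed50**n + d**n)

rng = np.random.default_rng(1)
dose = np.array([0.1, 0.3, 1, 3, 10, 30, 100.0])
resp = emax(dose, 1.0, 8.0, 5.0, 1.5) + rng.normal(0, 0.3, dose.size)

popt, _ = curve_fit(emax, dose, resp, p0=[1, 8, 5, 1])
e0, em, ed50, n = popt

# Setting emax(d) - e0 = 0.10 * Emax gives ED10 = ED50 * (1/9)**(1/n).
ed10 = ed50 * (1.0 / 9.0) ** (1.0 / n)
print(f"E0={e0:.2f} Emax={em:.2f} ED50={ed50:.2f} n={n:.2f} -> ED10={ed10:.2f}")
```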

10.
Physiologically-based toxicokinetic (PBTK) models are widely used to quantify whole-body kinetics of various substances. However, since they attempt to reproduce anatomical structures and physiological events, they have a high number of parameters. Their identification from kinetic data alone is often impossible, and other information about the parameters is needed to render the model identifiable. The most commonly used approach consists of independently measuring, or taking from literature sources, some of the parameters, fixing them in the kinetic model, and then performing model identification on a reduced number of less certain parameters. This results in a substantial reduction of the degrees of freedom of the model. In this study, we show that this method yields final estimates of the free parameters whose precision is overestimated. We then compared this approach with an empirical Bayes approach, which takes into account not only the mean value, but also the error associated with the independently determined parameters. Blood and breath 2H8-toluene washout curves, obtained in 17 subjects, were analyzed with a previously presented PBTK model suitable for person-specific dosimetry. The model parameters with the greatest effect on predicted levels were the alveolar ventilation rate QPC, fat tissue fraction VFC, blood-air partition coefficient Kb, fraction of cardiac output to fat Qa/co, and rate of extrahepatic metabolism Vmax-p. Differences in the measured and Bayesian-fitted values of QPC, VFC, and Kb were significant (p < 0.05), and the precision of the fitted values Vmax-p and Qa/co went from 11 ± 5% to 75 ± 170% (NS) and from 8 ± 2% to 9 ± 2% (p < 0.05), respectively. The empirical Bayes approach did not result in less reliable parameter estimates: rather, it pointed out that the precision of parameter estimates can be overly optimistic when other parameters in the model, either directly measured or taken from literature sources, are treated as known without error. In conclusion, an empirical Bayes approach to parameter estimation resulted in a better model fit, different final parameter estimates, and more realistic parameter precisions.
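The contrast can be illustrated on a toy model: below, a two-exponential washout curve is fit once with a "measured" parameter held fixed, and once with that parameter free but penalized by its measurement error (a MAP-style stand-in for the empirical Bayes fit). The model and all numbers are illustrative, not the PBTK model or the toluene data.

```python
# Sketch: (i) fixing an independently measured parameter vs. (ii) letting it
# vary with a penalty reflecting its measurement error. Toy two-exponential
# washout model; everything here is illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.linspace(0.1, 8, 25)
y = 10.0 * np.exp(-1.2 * t) + 2.0 * np.exp(-0.15 * t)   # "true" curve
y += rng.normal(0, 0.1, t.size)

k1_meas, k1_sd = 1.0, 0.2     # "independently measured" value and its error

def model(A, k1, k2):
    return A * np.exp(-k1 * t) + 2.0 * np.exp(-k2 * t)

def ssr_fixed(theta):         # k1 fixed at its measured value
    A, k2 = theta
    return np.sum((y - model(A, k1_meas, k2)) ** 2)

def map_cost(theta):          # k1 free, penalized by its measurement error
    A, k1, k2 = theta
    ssr = np.sum((y - model(A, k1, k2)) ** 2) / (2 * 0.1**2)
    prior = (k1 - k1_meas) ** 2 / (2 * k1_sd**2)
    return ssr + prior

fit_fixed = minimize(ssr_fixed, x0=[8.0, 0.2], method="Nelder-Mead")
fit_bayes = minimize(map_cost, x0=[8.0, 1.0, 0.2], method="Nelder-Mead")
print("fixed-k1 fit :", np.round(fit_fixed.x, 3), f"(k1 held at {k1_meas})")
print("EB/MAP fit   :", np.round(fit_bayes.x, 3))
```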

11.
In a series of articles and a health-risk assessment report, scientists at the CIIT Hamner Institutes developed a model (CIIT model) for estimating respiratory cancer risk due to inhaled formaldehyde within a conceptual framework incorporating extensive mechanistic information and advanced computational methods at the toxicokinetic and toxicodynamic levels. Several regulatory bodies have utilized predictions from this model; on the other hand, upon detailed evaluation the California EPA has decided against doing so. In this article, we study the CIIT model to identify key biological and statistical uncertainties that need careful evaluation if such two-stage clonal expansion models are to be used for extrapolation of cancer risk from animal bioassays to human exposure. Broadly, these issues pertain to the use and interpretation of experimental labeling index and tumor data, the evaluation and biological interpretation of estimated parameters, and uncertainties in model specification, in particular that of initiated cells. We also identify key uncertainties in the scale-up of the CIIT model to humans, focusing on assumptions underlying model parameters for cell replication rates and formaldehyde-induced mutation. We discuss uncertainties in identifying parameter values in the model used to estimate and extrapolate DNA protein cross-link levels. The authors of the CIIT modeling endeavor characterized their human risk estimates as "conservative in the face of modeling uncertainties." The uncertainties discussed in this article indicate that such a claim is premature.

12.
The benchmark dose (BMD) is an exposure level that would induce a small risk increase (the BMR level) above background. The BMD approach to deriving a reference dose for risk assessment of noncancer effects is advantageous in that the estimate of BMD is not restricted to experimental doses and utilizes most of the available dose-response information. To quantify the statistical uncertainty of a BMD estimate, we often calculate and report its lower confidence limit (the BMDL), and may even consider it a more conservative alternative to the BMD itself. Computation of the BMDL may involve normal confidence limits for the BMD in conjunction with the delta method. Therefore, factors such as small sample size and nonlinearity in model parameters can affect the performance of the delta-method BMDL, and alternative methods are useful. In this article, we propose a bootstrap method to estimate the BMDL utilizing a scheme that consists of resampling residuals after model fitting and a one-step formula for parameter estimation. We illustrate the method with clustered binary data from developmental toxicity experiments. Our analysis shows that with moderately elevated dose-response data, the distribution of the BMD estimator tends to be left-skewed and bootstrap BMDLs are smaller than the delta-method BMDLs on average, hence quantifying risk more conservatively. Statistically, the bootstrap BMDL quantifies the uncertainty of the true BMD more honestly than the delta-method BMDL, as its coverage probability is closer to the nominal level. We find that BMD and BMDL estimates are generally insensitive to model choice provided that the models fit the data comparably well near the region of the BMD. Our analysis also suggests that, in the presence of a significant and moderately strong dose-response relationship, developmental toxicity experiments under the standard protocol support dose-response assessment at 5% BMR for the BMD and a 95% confidence level for the BMDL.
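A simplified sketch of the idea follows, using a parametric bootstrap and a percentile lower limit on a quantal logistic model, rather than the article's residual-resampling scheme with a one-step estimator for clustered data. The dose-response data are hypothetical and the BMR is set to 5% extra risk.

```python
# Sketch: a bootstrap BMDL for a quantal logistic dose-response model,
# using a parametric bootstrap and the 5th percentile of refitted BMDs.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

dose = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
n    = np.full(5, 30)
k    = np.array([1, 3, 6, 13, 24])    # hypothetical affected counts
BMR  = 0.05                           # 5% extra risk

def negloglik(beta, kk):
    p = np.clip(expit(beta[0] + beta[1] * dose), 1e-10, 1 - 1e-10)
    return -(kk * np.log(p) + (n - kk) * np.log(1 - p)).sum()

def bmd(beta):
    p0 = expit(beta[0])                    # background risk
    pt = p0 + BMR * (1 - p0)               # extra-risk definition
    return (np.log(pt / (1 - pt)) - beta[0]) / beta[1]

fit = minimize(negloglik, x0=[-3.0, 0.02], args=(k,), method="Nelder-Mead")
p_hat = expit(fit.x[0] + fit.x[1] * dose)

rng = np.random.default_rng(3)
bmds = []
for _ in range(500):
    k_star = rng.binomial(n, p_hat)        # parametric bootstrap sample
    refit = minimize(negloglik, x0=fit.x, args=(k_star,), method="Nelder-Mead")
    bmds.append(bmd(refit.x))

print(f"BMD  = {bmd(fit.x):.1f}")
print(f"BMDL = {np.percentile(bmds, 5):.1f}  (5th percentile of bootstrap BMDs)")
```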

13.
A simple and useful characterization of many predictive models is in terms of model structure and model parameters. Accordingly, uncertainties in model predictions arise from uncertainties in the values assumed by the model parameters (parameter uncertainty) and the uncertainties and errors associated with the structure of the model (model uncertainty). When assessing uncertainty one is interested in identifying, at some level of confidence, the range of possible and then probable values of the unknown of interest. All sources of uncertainty and variability need to be considered. Although parameter uncertainty assessment has been extensively discussed in the literature, model uncertainty is a relatively new topic of discussion by the scientific community, despite being often the major contributor to the overall uncertainty. This article describes a Bayesian methodology for the assessment of model uncertainties, where models are treated as sources of information on the unknown of interest. The general framework is then specialized for the case where models provide point estimates about a single‐valued unknown, and where information about models are available in form of homogeneous and nonhomogeneous performance data (pairs of experimental observations and model predictions). Several example applications for physical models used in fire risk analysis are also provided.

14.
Mitchell J. Small. Risk Analysis, 2011, 31(10): 1561-1575.
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose‐response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose‐response models (logistic and quantal‐linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5–10%. The results demonstrate that dose selection for studies that subsequently inform dose‐response models can benefit from consideration of how these models will be fit, combined, and interpreted.

15.
Call an economic model incomplete if it does not generate a probabilistic prediction even given knowledge of all parameter values. We propose a method of inference about unknown parameters for such models that is robust to heterogeneity and dependence of unknown form. The key is a Central Limit Theorem for belief functions; robust confidence regions are then constructed in a fashion paralleling the classical approach. Monte Carlo simulations support tractability of the method and demonstrate its enhanced robustness relative to existing methods.

16.
Multistage clonal growth models are of interest for cancer risk assessment because they can explicitly incorporate data on cell replication. Both approximate and exact formulations of the two-stage growth model have been described. The exact solution considers the conditional probability of tumors arising in previously tumor-free animals; the approximate solution estimates the total probability of tumor formation. The exact solution is much more computationally intensive when time-dependent cell growth parameters are included. The approximate solution deviates from the exact solution at high incidences and probabilities of tumor. This report describes a computationally tractable 'improved approximation' to the exact solution. Our improved approximation includes a correction term to adjust the unconditional expectation of intermediate cells based on the time history of formation of intermediate cells by mutation of normal cells (recruitment) or by cell division in the intermediate cell population (expansion). The improved approximation provided a much better match to the exact solution than the approximate solution for a wide range of parameter values. The correction term also appears to provide insight into the biological factors that contribute to the variance of the expectation for the number of intermediate cells over time.
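For orientation, the sketch below implements the standard approximate (unconditional) two-stage solution: expected intermediate cells I(t) solve dI/dt = μ1·N + (b − d)·I, the tumor hazard is h(t) = μ2·I(t), and P(tumor by t) = 1 − exp(−∫h). Parameter values are illustrative, and the article's correction term is not implemented here.

```python
# Sketch: approximate (unconditional) two-stage clonal growth model. This is
# the formulation known to overstate tumor probability when it is high, which
# motivates the corrected approximation described in the article.
import numpy as np

mu1, mu2 = 1e-8, 1e-8      # mutation rates (per cell per day), illustrative
N        = 1e8             # normal cell population
b, d     = 0.11, 0.10      # intermediate-cell birth/death rates (per day)
dt, T    = 1.0, 365 * 2    # time step and horizon (days)

ts = np.arange(0, T + dt, dt)
I = np.zeros_like(ts)                 # expected intermediate cells
for j in range(1, ts.size):
    # Euler step: recruitment by mutation plus net clonal expansion
    I[j] = I[j-1] + dt * (mu1 * N + (b - d) * I[j-1])

H = np.cumsum(mu2 * I) * dt           # cumulative hazard of the second mutation
P = 1.0 - np.exp(-H)                  # approximate tumor probability
print(f"P(tumor by {T} d) = {P[-1]:.4f}; expected intermediate cells = {I[-1]:.3g}")
```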

17.
In this paper, we present a decision support approach for a network-structured stochastic multi-objective index tracking problem. Due to the non-convexity of this problem, the developed network is modeled as a Stochastic Mixed Integer Linear Program (SMILP). We also propose an optimization-based approach to scenario generation to protect against parameter estimation risk in the SMILP. Progressive Hedging (PH), an improved Lagrangian scheme, is designed to decompose the general model into scenario-based sub-problems. Furthermore, we combine tabu search and the sub-gradient method into PH to enhance the tracking capabilities of the model. We demonstrate the robustness of the algorithm by effectively solving a large number of numerical instances.
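The Progressive Hedging loop itself is compact; the sketch below runs it on a toy convex scenario problem where each subproblem has a closed-form solution, whereas in the paper the subproblems are mixed-integer tracking problems attacked with tabu search and subgradient steps. Data, probabilities, and the penalty ρ are illustrative.

```python
# Sketch: Progressive Hedging on min_x E_s[(x - d_s)^2 / 2], with
# nonanticipativity (all scenario decisions equal) enforced by multipliers.
import numpy as np

d = np.array([2.0, 5.0, 11.0])     # scenario data (illustrative)
prob = np.array([0.5, 0.3, 0.2])   # scenario probabilities
rho = 1.0                          # PH penalty parameter

x = d.copy()                       # scenario-wise decisions (start at per-scenario optima)
w = np.zeros_like(d)               # nonanticipativity multipliers
for it in range(100):
    xbar = prob @ x                # implementable (probability-averaged) decision
    w += rho * (x - xbar)          # multiplier update
    # scenario subproblem: argmin_x (x-d_s)^2/2 + w_s*x + rho/2*(x-xbar)^2
    x = (d - w + rho * xbar) / (1.0 + rho)
    if np.max(np.abs(x - xbar)) < 1e-10:
        break
print(f"converged after {it} iterations to x = {xbar:.4f} "
      f"(expected-value solution = {prob @ d:.4f})")
```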

18.
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges in selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose‐response models. Current approaches do not explicitly address model uncertainty, and there is an existing need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method that takes model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the "hybrid" method proposed by Crump, two BMA strategies, one based on maximum likelihood estimation and one based on Markov chain Monte Carlo, are first applied as a demonstration to calculate model-averaged BMD estimates from real continuous dose‐response data. The outcomes from the example data sets examined suggest that the BMA BMD estimates are more reliable than the estimates from the individual models with highest posterior weight, in terms of higher BMDLs and smaller 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results from the simulation study suggest that the BMA BMD estimates have smaller bias than the BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose‐response data.

19.
Rating models are widely used by credit institutions to obtain estimates for the probabilities of default for their clients (firms, organizations, individuals) and to assess the risk of credit portfolios. Several statistical and data mining methods are used to develop such models. In this article, the potential of an outranking multicriteria decision‐aiding approach is explored. An evolutionary algorithm is used to fit a credit rating model on the basis of the ELimination Et Choix Traduisant la REalité trichotomique method. The methodology is applied to a large sample of Greek firms. The results indicate that outranking models are well suited to credit rating, providing good classification results and useful insight on the relative importance of the evaluation criteria.
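A stripped-down sketch of the sorting logic follows: each firm is tested against category-boundary profiles with a weighted concordance index and assigned pessimistically. Discordance/veto thresholds, which full ELECTRE TRI includes, are omitted, and the criteria, weights, profiles, and cutting level are all hypothetical.

```python
# Sketch: an ELECTRE TRI-style credit rating via weighted concordance against
# category-boundary profiles (simplified: no discordance/veto, no thresholds).
# All criteria are oriented so that higher = better; data are hypothetical.
import numpy as np

weights  = np.array([0.35, 0.25, 0.2, 0.2])     # criterion weights (sum to 1)
profiles = np.array([[0.02, 0.3, 1.0, 1.5],     # boundary: C (high risk) / B
                     [0.06, 0.5, 1.5, 3.0]])    # boundary: B / A (low risk)
lam = 0.65                                      # cutting level

def outranks(a, b):
    """Does alternative a outrank profile b? (simplified concordance test)"""
    concordance = weights[a >= b].sum()         # weight of criteria where a >= b
    return concordance >= lam

def assign(firm):
    grade = "C"                                 # pessimistic: start at the bottom
    for g, profile in zip(("B", "A"), profiles):
        if outranks(firm, profile):             # climb while the firm outranks
            grade = g
    return grade

firms = {"firm1": np.array([0.08, 0.6, 1.8, 4.0]),
         "firm2": np.array([0.03, 0.4, 1.2, 2.0]),
         "firm3": np.array([0.01, 0.2, 0.8, 1.0])}
for name, scores in firms.items():
    print(name, "->", assign(scores))
```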

20.
In chemical and microbial risk assessments, risk assessors fit dose‐response models to high‐dose data and extrapolate downward to risk levels in the range of 1–10%. Although multiple dose‐response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose‐response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA.
