Similar Articles
20 similar articles were found (search time: 46 ms)
1.
Quantitative Risk Assessment for Developmental Neurotoxic Effects
Developmental neurotoxicity concerns the adverse health effects of exogenous agents acting on neurodevelopment. Because human brain development is a delicate process involving many cellular events, the developing fetus is rather susceptible to compounds that can alter the structure and function of the brain. Today, there is clear evidence that early exposure to many neurotoxicants can severely damage the developing nervous system. Although much attention has been given in recent years to model development and risk assessment procedures for developmental toxicants, the area of developmental neurotoxicity has been largely ignored. Here, we consider the problem of risk estimation for developmental neurotoxicants from animal bioassay data. Since most responses from developmental neurotoxicity experiments are nonquantal in nature, an adverse health effect will be defined as a response that occurs with very small probability in unexposed animals. Using a two-stage hierarchical normal dose-response model, upper confidence limits on the excess risk due to a given level of added exposure are derived. Equivalently, the model is used to obtain lower confidence limits on dose for a small, negligible level of risk. Our method is based on the asymptotic distribution of the likelihood ratio statistic (cf. Crump, 1995). An example is used to provide further illustration.
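A minimal sketch of the excess-risk definition described above, assuming a normal response whose mean shifts linearly with dose; the background probability p0, control mean, standard deviation, and slope are illustrative values, not the paper's fitted two-stage hierarchical model:

import numpy as np
from scipy.stats import norm

p0 = 0.01                    # background probability of an adverse response (assumed)
mu0, sigma = 100.0, 10.0     # control-group mean and within-group SD (assumed)
slope = -8.0                 # assumed decrease in the mean response per unit dose

# responses below this cutoff occur with probability p0 in unexposed animals
cutoff = norm.ppf(p0, loc=mu0, scale=sigma)

def excess_risk(dose):
    # added probability of an adverse response at the given exposure level
    p_dose = norm.cdf(cutoff, loc=mu0 + slope * dose, scale=sigma)
    return p_dose - p0

for d in (0.5, 1.0, 2.0):
    print(f"dose {d}: excess risk {excess_risk(d):.4f}")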

2.
L. Kopylev, J. Fox. Risk Analysis, 2009, 29(1): 18-25
It is well known that, under appropriate regularity conditions, the asymptotic distribution for the likelihood ratio statistic is χ². This result is used in EPA's benchmark dose software to obtain a lower confidence bound (BMDL) for the benchmark dose (BMD) by the profile likelihood method. Recently, based on work by Self and Liang, it has been demonstrated that the asymptotic distribution of the likelihood ratio remains the same if some of the regularity conditions are violated, that is, when true values of some nuisance parameters are on the boundary. That is often the situation for BMD analysis of cancer bioassay data. In this article, we study by simulation the coverage of one- and two-sided confidence intervals for BMD when some of the model parameters have true values on the boundary of a parameter space. Fortunately, because two-sided confidence intervals (of size 1 − 2α) have coverage close to the nominal level when there are 50 animals in each group, the coverage of nominal 1 − α one-sided intervals is bounded between roughly 1 − 2α and 1. In many of the simulation scenarios with a nominal one-sided confidence level of 95%, that is, α = 0.05, coverage of the BMDL was close to 1, but for some scenarios coverage was close to 90%, both for a group size of 50 animals and asymptotically (group size 100,000). Another important observation is that when the true parameter is below the boundary, as with the shape parameter of a log-logistic model, the coverage of BMDL in a constrained model (a case of model misspecification not uncommon in BMDS analyses) may be very small and even approach 0 asymptotically. We also discuss that whenever profile likelihood is used for one-sided tests, the Self and Liang methodology is needed to derive the correct asymptotic distribution.
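A hedged sketch of a profile-likelihood BMDL calculation of the kind the abstract evaluates, here for a simple quantal logistic model; the dose groups, response counts, benchmark response (BMR = 0.10), and the chi-square cutoff for a one-sided 95% bound are assumptions for illustration and are not taken from the article or from EPA's BMDS code:

import numpy as np
from scipy.optimize import minimize, brentq
from scipy.stats import chi2

doses  = np.array([0.0, 0.25, 0.5, 1.0])   # assumed dose groups
n      = np.array([50, 50, 50, 50])        # animals per group
events = np.array([2, 6, 14, 27])          # assumed responders per group
BMR    = 0.10                              # benchmark response (extra risk)

def prob(theta, d):
    a, b = theta
    return 1.0 / (1.0 + np.exp(-(a + b * d)))

def negloglik(theta):
    p = np.clip(prob(theta, doses), 1e-12, 1 - 1e-12)
    return -np.sum(events * np.log(p) + (n - events) * np.log1p(-p))

fit = minimize(negloglik, x0=[-2.0, 2.0], method="Nelder-Mead")
ll_hat = -fit.fun

def bmd(theta):
    # dose at which extra risk (P(d) - P(0)) / (1 - P(0)) equals BMR
    a, b = theta
    p0 = prob(theta, 0.0)
    target = p0 + BMR * (1.0 - p0)
    return (np.log(target / (1.0 - target)) - a) / b

bmd_hat = bmd(fit.x)

# profile likelihood: BMDL is the smallest BMD whose constrained re-fit stays
# within the chi-square(1) cutoff (a one-sided 95% bound uses the 90% quantile)
cutoff = chi2.ppf(0.90, df=1) / 2.0

def profile_gap(bmd_value):
    def nll_given_bmd(a):
        # fix the BMD, solve for the slope implied by it, profile out the intercept
        p0 = 1.0 / (1.0 + np.exp(-a))
        target = p0 + BMR * (1.0 - p0)
        b = (np.log(target / (1.0 - target)) - a) / bmd_value
        return negloglik([a, b])
    prof = minimize(nll_given_bmd, x0=[fit.x[0]], method="Nelder-Mead")
    return (ll_hat + prof.fun) - cutoff

bmdl = brentq(profile_gap, 1e-4, bmd_hat)
print(f"BMD = {bmd_hat:.3f}, BMDL = {bmdl:.3f}")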

3.
This paper analyzes the dealership credit limit problem in terms of the valuation of a Markov process of cash flows with sequential credit decisions over an infinite planning horizon. The formulation distinguishes between the upper bound on credit applicable at the account formation stage and the upper bound applicable to periodic reorders. The result is a closed form solution to the problem which serves as a criterion function for approving or denying credit on a customer-by-customer basis. Data for a sample of manufacturing firms are employed to estimate typical ranges for criterion function parameters. Upper bounds on credit limits are then calculated and graphically presented for median parameter values as well as for values at the 5th and 95th percentiles for the sample data. Finally, an empirical study is conducted of actual trade credit extended by firms. The results support the hypothesis that the variables in the decision model are important determinants of the amount of trade credit outstanding.

4.
For a risk-averse retailer, we build a mean-risk inventory optimization model that trades off expected profit against conditional value-at-risk (CVaR), and derive two robust counterparts under uncertainty in the discrete demand distribution: one that attains Pareto optimality but is more conservative, and one that is not Pareto optimal but is less conservative. For the uncertain demand distribution, with only historical demand samples available, statistical inference theory is used to construct a likelihood-estimation-based uncertainty set of demand distributions at a prescribed confidence level. On this basis, Lagrangian duality is applied to transform the two robust counterparts into tractable concave optimization problems, and their equivalence to the original problems is proved. Finally, numerical experiments on a real case analyze how different system parameters and sample sizes affect the retailer's optimal inventory decision and operational performance, and the Pareto efficient frontier for the trade-off between the retailer's expected profit and CVaR is presented. The results show that the inventory policies obtained with the likelihood-based robust optimization approach are highly robust and effectively limit the impact of demand distribution uncertainty on the retailer's inventory performance. Moreover, the larger the historical demand sample, the closer the retailer's operational performance under the robust policy is to the optimum. A further comparison shows that, although the two robust counterparts differ in conservatism, they yield the same final inventory policy.
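A minimal sample-based sketch of the expected profit versus CVaR trade-off for a newsvendor-style retailer, using the Rockafellar-Uryasev form of CVaR over historical demand samples; the prices, the risk weight, and the demand data are illustrative assumptions, not the paper's robust formulation or case data:

import numpy as np

rng = np.random.default_rng(0)
demand_samples = rng.poisson(100, size=200)   # stand-in historical demand data
price, cost, salvage = 10.0, 6.0, 2.0         # assumed economics
alpha = 0.95                                  # CVaR confidence level
lam = 0.5                                     # weight on CVaR (risk aversion)

def profit(q, d):
    return price * np.minimum(q, d) + salvage * np.maximum(q - d, 0) - cost * q

def cvar_of_loss(q):
    # Rockafellar-Uryasev sample estimate of CVaR of the loss (= negative profit)
    losses = -profit(q, demand_samples)
    var = np.quantile(losses, alpha)
    return var + np.mean(np.maximum(losses - var, 0)) / (1 - alpha)

def objective(q):
    # trade off expected profit against CVaR of the loss
    return (1 - lam) * np.mean(profit(q, demand_samples)) - lam * cvar_of_loss(q)

q_grid = np.arange(60, 160)
q_star = q_grid[np.argmax([objective(q) for q in q_grid])]
print("order quantity balancing expected profit and CVaR:", q_star)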

5.
Upper Confidence Limits on Excess Risk for Quantitative Responses
The definition and observation of clear-cut adverse health effects for continuous (quantitative) responses, such as altered body weights or organ weights, are difficult propositions. Thus, methods of risk assessment commonly used for binary (quantal) toxic responses such as cancer are not directly applicable. In this paper, two methods for calculating upper confidence limits on excess risk for quantitative toxic effects are proposed, based on a particular definition of an adverse quantitative response. The methods are illustrated with data from a dose-response study, and their performance is evaluated with a Monte Carlo simulation study.

6.
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression.
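A hedged sketch of maximum likelihood fitting for a COM-Poisson GLM with a log link on the rate parameter and a constant dispersion parameter; the truncation of the normalizing constant, the simulated covariate, and the Poisson-generated stand-in response are assumptions for illustration, not the article's simulation design:

import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def com_poisson_loglik(y, lam, nu, max_count=200):
    # log P(Y=y) = y*log(lam) - nu*log(y!) - log Z(lam, nu), with Z truncated
    j = np.arange(max_count)
    logz = np.logaddexp.reduce(j * np.log(lam)[:, None] - nu * gammaln(j + 1), axis=1)
    return y * np.log(lam) - nu * gammaln(y + 1) - logz

def negloglik(params, X, y):
    beta, log_nu = params[:-1], params[-1]
    lam = np.exp(X @ beta)                     # log link on the rate parameter
    return -np.sum(com_poisson_loglik(y, lam, np.exp(log_nu)))

# stand-in data: one covariate, Poisson-generated response (fitted nu should be near 1)
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(300), rng.uniform(-1, 1, 300)])
y = rng.poisson(np.exp(0.5 + 0.8 * X[:, 1]))

fit = minimize(negloglik, x0=np.zeros(3), args=(X, y), method="BFGS")
beta_hat, nu_hat = fit.x[:2], np.exp(fit.x[2])
print("regression coefficients:", beta_hat, " dispersion nu:", nu_hat)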

7.
We analyze use of a quasi-likelihood ratio statistic for a mixture model to test the null hypothesis of one regime versus the alternative of two regimes in a Markov regime-switching context. This test exploits mixture properties implied by the regime-switching process, but ignores certain implied serial correlation properties. When formulated in the natural way, the setting is nonstandard, involving nuisance parameters on the boundary of the parameter space, nuisance parameters identified only under the alternative, or approximations using derivatives higher than second order. We exploit recent advances by Andrews (2001) and contribute to the literature by extending the scope of mixture models, obtaining asymptotic null distributions different from those in the literature. We further provide critical values for popular models or bounds for tail probabilities that are useful in constructing conservative critical values for regime-switching tests. We compare the size and power of our statistics to other useful tests for regime switching via Monte Carlo methods and find relatively good performance. We apply our methods to reexamine the classic cartel study of Porter (1983) and reaffirm Porter's findings.

8.
The application of stochastic heuristics, such as tabu search or simulated annealing, to hard discrete optimization problems has been an important advance for efficiently obtaining good solutions in a reasonable amount of computing time. A challenge when applying such heuristics is assessing when one set of parameter values yields better performance than another. For example, it can be difficult to determine the optimal mix of memory types to incorporate into tabu search. This in turn prompts users to undertake a trial-and-error phase to determine the best combination of parameter settings for the problem under study. Moreover, for a given problem instance, one set of heuristic parameter settings may yield a better solution than another for a given initial solution. However, the performance of the heuristic on that instance in a single execution is not sufficient to assert that the first set of parameter settings will always produce better results than the second, for all initial solutions.

9.
An autoregressive model with Markov regime-switching is analyzed, with attention to the properties of the quasi-likelihood ratio test developed by Cho and White (2007). For such a model, we show that consistency of the quasi-maximum likelihood estimator for the population parameter values, on which consistency of the test is based, does not hold. We describe a condition that ensures consistency of the estimator and discuss the consistency of the test in the absence of consistency of the estimator.

10.
Many environmental data sets, such as those for air toxic emission factors, contain several values reported only as below the detection limit. Such data sets are referred to as "censored." Typical approaches to dealing with censored data sets include replacing the censored values with arbitrary values of zero, one-half of the detection limit, or the detection limit. Here, an approach to quantifying the variability and uncertainty of censored data sets is demonstrated. Empirical bootstrap simulation is used to simulate censored bootstrap samples from the original data. Maximum likelihood estimation (MLE) is used to fit parametric probability distributions to each bootstrap sample, thereby specifying alternative estimates of the unknown population distribution of the censored data sets. Sampling distributions for uncertainty in statistics such as the mean, median, and percentiles are calculated. The robustness of the method was tested by application to different degrees of censoring, sample sizes, coefficients of variation, and numbers of detection limits. Lognormal, gamma, and Weibull distributions were evaluated. The reliability of using this method to estimate the mean is evaluated by averaging the best estimated means of 20 cases for a small sample size of 20. The confidence intervals for distribution percentiles estimated with the bootstrap/MLE method compared favorably to results obtained with the nonparametric Kaplan-Meier method. The bootstrap/MLE method is illustrated via an application to an empirical air toxic emission factor data set.
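A minimal sketch of the bootstrap/MLE idea for a left-censored data set, assuming a lognormal population: nondetects contribute the distribution CDF at the detection limit to the likelihood, and bootstrap resampling gives a sampling distribution for the mean. The detection limit and sample values are illustrative, not the air toxic emission factor data:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

rng = np.random.default_rng(2)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=40)   # stand-in measurements
DL = 0.5                                               # assumed detection limit
observed = np.where(sample < DL, np.nan, sample)       # NaN marks a nondetect

def negloglik(params, data):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    detects = data[~np.isnan(data)]
    n_cens = int(np.isnan(data).sum())
    ll = lognorm.logpdf(detects, s=sigma, scale=np.exp(mu)).sum()
    ll += n_cens * lognorm.logcdf(DL, s=sigma, scale=np.exp(mu))   # censored part
    return -ll

def fitted_mean(data):
    res = minimize(negloglik, x0=[0.0, 0.0], args=(data,), method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    return np.exp(mu + 0.5 * sigma ** 2)               # mean of the fitted lognormal

boot_means = [fitted_mean(rng.choice(observed, size=observed.size, replace=True))
              for _ in range(500)]
print("MLE of the mean:", fitted_mean(observed))
print("90% bootstrap interval for the mean:", np.percentile(boot_means, [5, 95]))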

11.
We study inference in structural models with a jump in the conditional density, where location and size of the jump are described by regression curves. Two prominent examples are auction models, where the bid density jumps from zero to a positive value at the lowest cost, and equilibrium job-search models, where the wage density jumps from one positive level to another at the reservation wage. General inference in such models remained a long-standing, unresolved problem, primarily due to nonregularities and computational difficulties caused by discontinuous likelihood functions. This paper develops likelihood-based estimation and inference methods for these models, focusing on optimal (Bayes) and maximum likelihood procedures. We derive convergence rates and distribution theory, and develop Bayes and Wald inference. We show that Bayes estimators and confidence intervals are attractive both theoretically and computationally, and that Bayes confidence intervals, based on posterior quantiles, provide a valid large sample inference method.

12.
The uncapacitated single allocation hub location problem (USAHLP), defined on a hub-and-spoke network structure, is a decision problem concerning the number of hubs and their location–allocation. In a pure hub-and-spoke network, all hubs, which act as switching points for internodal flows, are interconnected, and none of the non-hubs (i.e., spokes) are directly connected. The key factors in designing a successful hub-and-spoke network are determining the optimal number of hubs, properly locating the hubs, and allocating the non-hubs to the hubs. In this paper, two approaches for determining an upper bound on the number of hubs, along with a hybrid heuristic based on simulated annealing, a tabu list, and improvement procedures, are proposed to solve the USAHLP. Computational experience indicates that, by applying the derived upper bound on the number of hubs, the proposed heuristic obtains optimal solutions for all small-scale problems very efficiently. Computational results also demonstrate that the proposed hybrid heuristic outperforms a genetic algorithm and a simulated annealing method in solving the USAHLP.

13.
Hwang, Jing-Shiang; Chen, James J. Risk Analysis, 1999, 19(6): 1071-1076
The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on the combination of information from several available single-compound studies. The current practice of directly summing the upper bound risk estimates of individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of the individual carcinogens. The Gaylor-Chen procedure was derived under an assumption of normality for the distributions of the individual risk estimates. In this paper we evaluate the Gaylor-Chen approach in terms of its coverage probability. The performance of the Gaylor-Chen approach depends on the coverage of the upper confidence limits on the true risks of the individual carcinogens: in general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, then the Gaylor-Chen approach should perform well. However, the Gaylor-Chen approach can be conservative or anti-conservative if some or all of the individual upper confidence limit estimates are conservative or anti-conservative.
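A hedged sketch of the combination rule as we read it from the abstract's normality assumption: central estimates add, while the half-widths (UCL minus central estimate) combine in quadrature rather than by direct summation. The risk numbers are illustrative assumptions, and the formula should be checked against Gaylor and Chen (1996) before use:

import numpy as np

central = np.array([1.2e-5, 3.0e-6, 8.0e-6])   # central (MLE) risk estimates, assumed
ucl     = np.array([4.0e-5, 9.0e-6, 2.0e-5])   # 95% upper confidence limits, assumed

naive_sum = ucl.sum()                                              # known to be too conservative
combined  = central.sum() + np.sqrt(((ucl - central) ** 2).sum())  # quadrature combination

print("direct sum of individual UCLs:", naive_sum)
print("combined mixture upper bound: ", combined)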

14.
Due to their importance in industry and their mathematical complexity, dynamic demand lot-sizing problems are frequently studied. In this article, we consider coordinated lot-size problems, their variants, and exact and heuristic solution approaches. The problem class provides a comprehensive approach for representing single and multiple items, coordinated and uncoordinated setup cost structures, and capacitated and uncapacitated problem characteristics. While efficient solution approaches have eluded researchers, recent advances in problem formulation and algorithms are enabling large-scale problems to be effectively solved. This paper updates a 1988 review of the coordinated lot-sizing problem and complements recent reviews on the single-item lot-sizing problem and the capacitated lot-sizing problem. It provides a state-of-the-art review of the research and future research projections. It is a starting point for anyone conducting research in the deterministic dynamic demand lot-sizing field.

15.
This paper is concerned with the Bayesian estimation of nonlinear stochastic differential equations when observations are discretely sampled. The estimation framework relies on the introduction of latent auxiliary data to complete the missing diffusion between each pair of measurements. Tuned Markov chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm, in conjunction with the Euler-Maruyama discretization scheme, are used to sample the posterior distribution of the latent data and the model parameters. Techniques for computing the likelihood function, the marginal likelihood, and diagnostic measures (all based on the MCMC output) are developed. Examples using simulated and real data are presented and discussed in detail.
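A minimal sketch of the Euler-Maruyama discretization that underlies the latent-data augmentation: intermediate diffusion values between two measurements are simulated on a fine grid and, in the MCMC, treated as latent auxiliary data. The Ornstein-Uhlenbeck drift, parameter values, and step sizes are illustrative assumptions, not the paper's models:

import numpy as np

def euler_maruyama(x0, theta, dt, n_steps, rng):
    # dX_t = kappa * (mu - X_t) dt + sigma dW_t, discretized on a fine grid
    kappa, mu, sigma = theta
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = x[i] + kappa * (mu - x[i]) * dt + sigma * dw
    return x

rng = np.random.default_rng(3)
path = euler_maruyama(x0=0.5, theta=(2.0, 1.0, 0.3), dt=0.01, n_steps=500, rng=rng)
print("simulated value at the end of the fine grid:", path[-1])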

16.
Since the National Food Safety Initiative of 1997, risk assessment has been an important issue in food safety areas. Microbial risk assessment is a systematic process for describing and quantifying the potential of exposure to microorganisms to cause adverse health effects. Various dose-response models for estimating microbial risks have been investigated. We have considered four two-parameter models and four three-parameter models in order to evaluate variability among the models for microbial risk assessment, using infectivity and illness data from studies with human volunteers exposed to a variety of microbial pathogens. Model variability is measured in terms of estimated ED01s and ED10s, with the view that these effective dose levels correspond to the lower and upper limits of the 1% to 10% risk range generally recommended for establishing benchmark doses in risk assessment. Parameters of the statistical models are estimated using the maximum likelihood method. In this article, a weighted average of effective dose estimates from eight two- and three-parameter dose-response models, with weights determined by the Kullback information criterion, is proposed to address model uncertainties in microbial risk assessment. The proposed procedures for incorporating model uncertainties and making inferences are illustrated with human infection/illness dose-response data sets.
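A hedged sketch of averaging effective-dose estimates across fitted dose-response models with information-criterion weights; Akaike-style weights exp(-Δ/2) are used here as a stand-in for the Kullback information criterion weighting named in the abstract, and the ED10 values and criterion scores are invented for illustration:

import numpy as np

ed10 = np.array([0.84, 1.10, 0.95, 1.32])      # ED10 estimates from four fitted models (assumed)
ic   = np.array([212.4, 214.1, 211.8, 216.0])  # information-criterion values (assumed)

delta = ic - ic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                        # models closer to the best get more weight

ed10_avg = float(np.sum(weights * ed10))
print("model weights:", np.round(weights, 3))
print("weighted-average ED10:", round(ed10_avg, 3))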

17.
We consider model-based inference in a fractionally cointegrated (or cofractional) vector autoregressive model, based on the Gaussian likelihood conditional on initial values. We give conditions on the parameters such that the process Xt is fractional of order d and cofractional of order d − b; that is, there exist vectors β for which β′Xt is fractional of order d − b and no other fractionality order is possible. For b = 1, the model nests the I(d − 1) vector autoregressive model. We define the statistical model by 0 < b ≤ d, but conduct inference when the true values satisfy 0 ≤ d0 − b0 < 1/2 and b0 ≠ 1/2, for which β0′Xt is (asymptotically) a stationary process. Our main technical contribution is the proof of consistency of the maximum likelihood estimators. To this end, we prove weak convergence of the conditional likelihood as a continuous stochastic process in the parameters when errors are independent and identically distributed with suitable moment conditions and initial values are bounded. Because the limit is deterministic, this implies uniform convergence in probability of the conditional likelihood function. If the true value b0 > 1/2, we prove that the limit distribution of the estimator of β is mixed Gaussian, while for the remaining parameters it is Gaussian. The limit distribution of the likelihood ratio test for cointegration rank is a functional of fractional Brownian motion of type II. If b0 < 1/2, all limit distributions are Gaussian or chi-squared. We derive similar results for the model with d = b, allowing for a constant term.

18.
Integrating CPT into a BAB Algorithm to Solve the Job-Shop Scheduling Problem
The strength of CSP (constraint satisfaction problem) techniques is their ability to handle complex constraints and produce a solution that satisfies them, but the quality of that solution is hard to guarantee. The strength of OR (operations research) methods is that they obtain optimal or near-optimal solutions, but they struggle with optimization problems involving complex constraints. CPT (constraint propagation technique) is the main search technique in CSP, and BAB (branch-and-bound) is a common optimization algorithm in OR. This paper proposes a hybrid algorithm that integrates CPT into BAB, addressing the general and challenging job-shop scheduling problem from a new angle. Its main feature is that, by embedding dynamically adjustable time-window constraints and a consistency-strengthening CPT search into the BAB algorithm, it combines the optimization power of BAB with CPT's ability to handle complex constraints, improving BAB's optimization performance and practical applicability. The experimental results are satisfactory and demonstrate the effectiveness of the algorithm.

19.
Evacuating residents out of affected areas is an important strategy for mitigating the impact of natural disasters. However, the resulting abrupt increase in travel demand during evacuation causes severe congestion across the transportation system, which thereby interrupts other commuters' regular activities. In this article, a bilevel mathematical optimization model is formulated to address this issue, and our research objective is to maximize the transportation system resilience and restore its performance through two network reconfiguration schemes: contraflow (also referred to as lane reversal) and crossing elimination at intersections. Mathematical models are developed to represent the two reconfiguration schemes and characterize the interactions between traffic operators and passengers. Specifically, traffic operators act as leaders to determine the optimal system reconfiguration to minimize the total travel time for all the users (both evacuees and regular commuters), while passengers act as followers by freely choosing the path with the minimum travel time, which eventually converges to a user equilibrium state. For each given network reconfiguration, the lower-level problem is formulated as a traffic assignment problem (TAP) where each user tries to minimize his/her own travel time. To tackle the lower-level optimization problem, a gradient projection method is leveraged to shift the flow from other nonshortest paths to the shortest path between each origin–destination pair, eventually converging to the user equilibrium traffic assignment. The upper-level problem is formulated as a constrained discrete optimization problem, and a probabilistic solution discovery algorithm is used to obtain the near-optimal solution. Two numerical examples are used to demonstrate the effectiveness of the proposed method in restoring the traffic system performance.

20.
We propose a more generalized version of the secretary problem, called the group interview problem, in which each group contains several alternatives and each group of alternatives is presented and evaluated sequentially over time. Using the assumptions corresponding to the classical secretary problem, we derive an optimal selection strategy which maximizes the probability of winning or selecting the single best choice in a given sequence of groups. We further address the problem of choosing at the beginning of the evaluation process a sequence of groups to maximize the winning probability. Because of formidable computational requirements to obtain an optimal solution to this sequencing problem, we then develop a heuristic algorithm based on several properties inherent in an optimal selection strategy. The heuristic procedure is evaluated experimentally using Monte Carlo simulation and is shown to be effective in obtaining near-optimal (within 5 percent) solutions.
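A minimal Monte Carlo sketch of the classical secretary rule that the group interview problem generalizes (observe roughly the first n/e candidates, then stop at the first record); it does not reproduce the paper's group structure or sequencing heuristic:

import numpy as np

def wins_with_cutoff(n, cutoff, rng):
    ranks = rng.permutation(n)                  # arrival order; rank 0 is the single best
    best_seen = ranks[:cutoff].min() if cutoff > 0 else n
    for r in ranks[cutoff:]:
        if r < best_seen:                       # first candidate beating the observation phase
            return r == 0                       # win only if it is the overall best
    return False                                # never stopped: lose

rng = np.random.default_rng(4)
n, trials = 50, 20000
cutoff = round(n / np.e)
win_rate = np.mean([wins_with_cutoff(n, cutoff, rng) for _ in range(trials)])
print(f"estimated win probability with an n/e observation phase: {win_rate:.3f}")  # about 0.37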
