Similar Documents
20 similar documents found
1.
This paper reexamines the scaling approaches used in cancer risk assessment and proposes a more precise body weight scaling factor. Two approaches are conventionally used in scaling exposure and dose from experimental animals to man: body weight scaling (used by FDA) and surface area scaling (BW^0.67, used by EPA). This paper reanalyzes the Freireich et al. (1966) study of the maximum tolerated dose (MTD) of 14 anticancer agents in mice, rats, dogs, monkeys, and humans, the dataset most commonly cited as justification for surface area extrapolation. This examination was augmented with an analysis of a similar dataset by Schein et al. (1970) of the MTD of 13 additional chemotherapy agents. The reanalysis shows that BW^0.75 is a more appropriate scaling factor for the 27 direct-acting compounds in this dataset.
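A short numerical sketch can make the practical gap between these exponents concrete. It assumes the standard allometric rule that a total dose scaling as BW^b implies a per-kg dose scaling as BW^(b-1); the body weights and the mouse MTD below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: converting a per-kg dose across species under
# different allometric exponents. If total dose scales as BW**b,
# the per-kg dose scales as BW**(b - 1). All numbers are assumed
# illustrative values, not data from the paper.

def scale_dose_mg_per_kg(dose_animal, bw_animal, bw_human, b):
    """Human-equivalent per-kg dose under total-dose scaling BW**b."""
    return dose_animal * (bw_human / bw_animal) ** (b - 1.0)

BW_MOUSE, BW_HUMAN = 0.025, 70.0   # kg (typical assumed values)
mtd_mouse = 100.0                  # mg/kg, hypothetical mouse MTD

for b, label in [(1.0, "body weight (FDA)"),
                 (0.67, "surface area (EPA)"),
                 (0.75, "proposed reanalysis value")]:
    d = scale_dose_mg_per_kg(mtd_mouse, BW_MOUSE, BW_HUMAN, b)
    print(f"b = {b:.2f} ({label}): human-equivalent dose = {d:6.1f} mg/kg")
```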

2.
Three methods (multiplicative, additive, and allometric) were developed to extrapolate physiological model parameter distributions across species, specifically from rats to humans. In the multiplicative approach, the rat model parameters are multiplied by the ratio of the mean values between humans and rats. Additive scaling of the distributions is defined by adding the difference between the average human value and the average rat value to each rat value. Finally, allometric scaling relies on established extrapolation relationships using power functions of body weight. A physiologically-based pharmacokinetic model was fitted independently to rat and human benzene disposition data. Human model parameters obtained by extrapolation and by fitting were used to predict the total bone marrow exposure to benzene and the quantity of metabolites produced in bone marrow. We found that extrapolations poorly predict the human data relative to the human model. In addition, the prediction performance depends largely on the quantity of interest. The extrapolated models underpredict bone marrow exposure to benzene relative to the human model. Yet, predictions of the quantity of metabolite produced in bone marrow are closer to the human model predictions. These results indicate that the multiplicative and allometric techniques were able to extrapolate the model parameter distributions, but also that rats do not provide a good kinetic model of benzene disposition in humans.
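As a rough illustration of the three rules, the sketch below applies each to a synthetic rat parameter distribution. The sample distribution, means, body weights, and exponent are all assumptions for illustration, not the paper's fitted benzene-model values.

```python
import numpy as np

# Illustrative sketch of the three extrapolation rules; all values
# are assumptions, not fitted benzene-model parameters.
rng = np.random.default_rng(0)
rat_params = rng.lognormal(mean=0.0, sigma=0.3, size=1000)  # synthetic rat distribution

rat_mean, human_mean = rat_params.mean(), 3.5  # hypothetical population means
bw_rat, bw_human, b = 0.25, 70.0, 0.75         # kg; assumed allometric exponent

multiplicative = rat_params * (human_mean / rat_mean)   # scale by ratio of means
additive       = rat_params + (human_mean - rat_mean)   # shift by difference of means
allometric     = rat_params * (bw_human / bw_rat) ** b  # power-of-body-weight scaling
```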

3.
Parodi et al. (1) and Zeise et al. (2) found a surprising statistical correlation (or association) between acute toxicity and carcinogenic potency. In order to shed light on the questions of whether or not it is a causal correlation, and whether or not it is a statistical or tautological artifact, we have compared the correlations for the NCI/NTP data set with those for chemicals not in this set. Carcinogenic potencies were taken from the Gold et al. database. We find a weak correlation, with an average value of TD50/LD50 = 0.04 for the non-NCI data set, compared with TD50/LD50 = 0.15 for the NCI data set. We conclude that it is not easy to distinguish types of carcinogens on the basis of whether or not they are acutely toxic.

4.
Crouch and Wilson demonstrated a strong correlation between carcinogenic potencies in rats and mice, supporting the extrapolation from mouse to man. Bernstein et al., however, showed that the observed correlation is mainly a statistical artifact of bioassay design. Crouch et al. have offered a rebuttal. This paper reviews the arguments and presents some new data. The correlation is largely (but not totally) tautological, confirming the results of Bernstein et al.

5.
In the broadcasting of ad hoc wireless networks, energy conservation is a critical issue. Three heuristic algorithms were proposed by Wieselthier et al. (2001) for finding approximate minimum-energy broadcast routings: MST (minimum spanning tree), SPT (shortest-path tree), and BIP (broadcast incremental power). Wan et al. (2001) characterized their performance in terms of approximation ratios. This paper points out some mistakes in the results of Wan et al. (2001) and proves that the upper bound on the sum of squares of the lengths of the edges in a Euclidean MST in a unit disk can be improved to 10.86, thereby improving the approximation ratios of the MST and BIP algorithms. Supported by the Natural Science Foundation of China (60223004, 603210022).
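For concreteness, the sketch below computes the total broadcast power of the MST heuristic on a few points in a unit disk, assuming a path-loss exponent of 2, so a transmitter pays the squared distance to its farthest child in the broadcast tree. The coordinates and source node are illustrative assumptions.

```python
# Sketch of the MST broadcast heuristic: build a spanning tree with
# Prim's algorithm rooted at the source, then charge each transmitter
# the squared distance to its farthest child (path-loss exponent 2).
# Coordinates are illustrative assumptions.

nodes = [(0.0, 0.0), (0.3, 0.1), (0.5, 0.6), (0.9, 0.2), (0.4, 0.9)]
source = 0

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

in_tree, parent = {source}, {}
while len(in_tree) < len(nodes):
    u, v = min(((i, j) for i in in_tree
                for j in range(len(nodes)) if j not in in_tree),
               key=lambda e: dist2(nodes[e[0]], nodes[e[1]]))
    parent[v] = u
    in_tree.add(v)

power = {}  # per-transmitter power: squared reach to farthest child
for child, par in parent.items():
    power[par] = max(power.get(par, 0.0), dist2(nodes[par], nodes[child]))
print(f"total MST broadcast power: {sum(power.values()):.3f}")
```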

6.
Two primal-dual affine scaling algorithms for linear programming are extended to semidefinite programming. The algorithms do not require (nearly) centered starting solutions, and can be initiated with any primal-dual feasible solution. The first algorithm is the Dikin-type affine scaling method of Jansen et al. (1993b) and the second the classical affine scaling method of Monteiro et al. (1990). The extension of the former has a worst-case complexity bound of O(ρ0 nL) iterations, where ρ0 is a measure of centrality of the starting solution, and the latter a bound of O(ρ0 nL^2) iterations.

7.
Annual concentrations of toxic air contaminants are of primary concern from the perspective of chronic human exposure assessment and risk analysis. Despite recent advances in air quality monitoring technology, resource and technical constraints often impose limitations on the availability of a sufficient number of ambient concentration measurements for performing environmental risk analysis. Therefore, sample size limitations, representativeness of data, and uncertainties in the estimated annual mean concentration must be examined before performing quantitative risk analysis. In this paper, we discuss several factors that need to be considered in designing field-sampling programs for toxic air contaminants and in verifying compliance with environmental regulations. Specifically, we examine the behavior of SO2, TSP, and CO data as surrogates for toxic air contaminants and as examples of point source, area source, and line source-dominated pollutants, respectively, from the standpoint of sampling design. We demonstrate the use of the bootstrap resampling method and normal theory in estimating the annual mean concentration and its 95% confidence bounds from limited sampling data, and illustrate the application of operating characteristic (OC) curves to determine optimum sample size and other sampling strategies. We also outline a statistical procedure, based on a one-sided t-test, that utilizes the sampled concentration data to evaluate whether a sampling site is in compliance with relevant ambient guideline concentrations for toxic air contaminants.
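A minimal sketch of the bootstrap step follows, assuming a small synthetic set of sampled concentrations; the data values and sample size are illustrative, not from the study.

```python
import numpy as np

# Bootstrap the annual mean concentration and its 95% confidence
# bounds from a limited sample. The 24 synthetic "daily mean" values
# are assumptions for illustration.
rng = np.random.default_rng(42)
conc = rng.lognormal(mean=1.0, sigma=0.8, size=24)  # ppb, synthetic

boot_means = np.array([rng.choice(conc, size=conc.size, replace=True).mean()
                       for _ in range(10_000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"annual mean: {conc.mean():.2f} ppb, 95% CI ({lo:.2f}, {hi:.2f})")
```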

8.
Uncertainty in Cancer Risk Estimates
Several existing databases compiled by Gold et al. (1-3) for carcinogenesis bioassays are examined to obtain estimates of the reproducibility of cancer rates across experiments, strains, and rodent species. A measure of carcinogenic potency is given by the TD50 (daily dose that causes a tumor type in 50% of the exposed animals that otherwise would not develop the tumor in a standard lifetime). The lognormal distribution can be used to model the uncertainty of the estimates of potency (TD50) and the ratio of TD50's between two species. For near-replicate bioassays, approximately 95% of the TD50's are estimated to be within a factor of 4 of the mean. Between strains, about 95% of the TD50's are estimated to be within a factor of 11 of their mean, and the pure genetic component of variability is accounted for by a factor of 6.8. Between rats and mice, about 95% of the TD50's are estimated to be within a factor of 32 of the mean, while between humans and experimental animals the factor is 110 for 20 chemicals reported by Allen et al. (4) The common practice of basing cancer risk estimates on the most sensitive rodent species-strain-sex and using interspecies dose scaling based on body surface area appears to overestimate cancer rates for these 20 human carcinogens by about one order of magnitude on the average. Hence, for chemicals where the dose-response is nearly linear below experimental doses, cancer risk estimates based on animal data are not necessarily conservative and may range from a factor of 10 too low for human carcinogens up to a factor of 1000 too high for approximately 95% of the chemicals tested to date. These limits may need to be modified for specific chemicals where additional mechanistic or pharmacokinetic information may suggest alterations or where particularly sensitive subpopulations may be exposed. Supralinearity could lead to anticonservative estimates of cancer risk. Underestimating cancer risk by a specific factor has a much larger impact on the actual number of cancer cases than overestimates of smaller risks by the same factor. This paper does not address the uncertainties in high to low dose extrapolation. If the dose-response is sufficiently nonlinear at low doses to produce cancer risks near zero, then low-dose risk estimates based on linear extrapolation are likely to overestimate risk and the limits of uncertainty cannot be established.
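Under a lognormal model, "about 95% within a factor k of the mean" corresponds roughly to k = exp(1.96 sigma) on the log scale. The sketch below inverts this relation for the factors quoted above, as a simple consistency check rather than a reanalysis of the data.

```python
import math

# Back out the log-scale standard deviation implied by each
# "within a factor k" statement, assuming k = exp(1.96 * sigma).
for k, label in [(4, "near-replicate bioassays"),
                 (11, "between strains"),
                 (32, "between rats and mice"),
                 (110, "humans vs. experimental animals")]:
    sigma = math.log(k) / 1.96
    print(f"factor {k:>3} ({label}): log-scale sigma = {sigma:.2f}")
```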

9.
The need to identify toxicologically equivalent doses across different species is a major issue in toxicology and risk assessment. In this article, we investigate interspecies scaling based on the allometric equation applied to the single, oral LD50 data previously analyzed by Rhomberg and Wolff (1). We focus on the statistical approach, namely, regression analysis of the mentioned data. In contrast to Rhomberg and Wolff's analysis of species pairs, we perform an overall analysis based on the whole data set. From our study it follows that if one assumes one single scaling rule for all species and substances in the data set, then β = 1 is the most natural choice among a set of candidates known in the literature. In fact, we obtain quite narrow confidence intervals for this parameter. However, the estimate of the variance in the model is relatively high, resulting in rather wide prediction intervals.
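The regression itself is a one-line fit of log(LD50 per animal) against log(body weight). The sketch below uses hypothetical species weights and LD50 values purely to show the mechanics, not the paper's data.

```python
import numpy as np

# Estimate the allometric exponent beta from a log-log regression.
# Body weights and per-animal LD50 values are hypothetical.
bw   = np.array([0.02, 0.25, 2.5, 4.0, 12.0])      # kg: mouse, rat, rabbit, cat, dog
ld50 = np.array([1.2, 14.0, 130.0, 200.0, 560.0])  # mg per animal (illustrative)

beta, intercept = np.polyfit(np.log(bw), np.log(ld50), 1)
print(f"estimated beta = {beta:.2f}")  # beta = 1 corresponds to mg/kg equivalence
```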

10.
Adrian Kent, Risk Analysis, 2004, 24(1):157-168
Recent articles by Busza et al. (BJSW) and Dar et al. (DDH) argue that astrophysical data can be used to establish small bounds on the risk of a "killer strangelet" catastrophe scenario in the RHIC and ALICE collider experiments. The case for the safety of the experiments set out by BJSW does not rely solely on these bounds, but on theoretical arguments, which BJSW find sufficiently compelling to firmly exclude any possibility of catastrophe. Nonetheless, DDH and other commentators (initially including BJSW) suggested that these empirical bounds alone do give sufficient reassurance. This seems unsupportable when the bounds are expressed in terms of expectation value (a good measure, according to standard risk analysis arguments). For example, DDH's main bound, p(catastrophe) < 2 × 10^-8, implies only that the expectation value of the number of deaths is bounded by 120; BJSW's most conservative bound implies the expectation value of the number of deaths is bounded by 60,000. This article reappraises the DDH and BJSW risk bounds by comparing risk policy in other areas. For example, it is noted that, even if highly risk-tolerant assumptions are made and no value is placed on the lives of future generations, a catastrophe risk no higher than approximately 10^-15 per year would be required for consistency with established policy for radiation hazard risk minimization. Allowing for risk aversion and for future lives, a respectable case can be made for requiring a bound many orders of magnitude smaller. In summary, the costs of small risks of catastrophe have been significantly underestimated by BJSW (initially), by DDH, and by other commentators. Future policy on catastrophe risks would be more rational, and more deserving of public trust, if acceptable risk bounds were generally agreed upon ahead of time and if serious research on whether those bounds could indeed be guaranteed was carried out well in advance of any hypothetically risky experiment, with the relevant debates involving experts with no stake in the experiments under consideration.
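The expectation-value arithmetic is simple enough to verify directly: a probability bound p on a catastrophe killing the entire population N implies E[deaths] <= pN. The sketch below assumes a population of roughly 6 billion, consistent with the figures quoted above; the BJSW probability of 10^-5 is inferred from the quoted bound of 60,000 deaths, not taken from the article.

```python
# E[deaths] <= p * N for a catastrophe-probability bound p and
# population N. N = 6e9 and the BJSW p are assumptions inferred
# from the figures quoted in the abstract.
N = 6e9
for p, label in [(2e-8, "DDH main bound"), (1e-5, "BJSW conservative bound")]:
    print(f"{label}: E[deaths] <= {p * N:,.0f}")
```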

11.
Lorenz R. Rhomberg, Scott K. Wolff, Risk Analysis, 1998, 18(6):741-753
The scaling of administered doses to achieve equal degrees of toxic effect in different species has been relatively poorly examined for noncancer toxicity, either empirically or theoretically. We investigate empirical patterns in the correspondence of single oral dose LD50 values across several mammalian species for a large number of chemicals, based on data reported in the RTECS database maintained by the National Institute for Occupational Safety and Health. We find a good correspondence of LD50 values across species when the dose levels are expressed in terms of mg administered per kg of body mass. Our findings contrast with earlier analyses that support scaling doses by the 3/4-power of body mass to achieve equal subacute toxicity of antineoplastic agents. We suggest that, especially for severe toxicity, single- and repeated-dosing regimes may have different cross-species scaling properties, as they may depend on standing levels of defenses and the rate of regeneration of defenses, respectively.

12.
In 2014, Desormeaux et al. (Discrete Math 319:15-23, 2014) proved a relationship between the annihilation number and 2-domination number of a tree. In this note, we provide a family of bounds for the 2-domination number of a tree based on the number of vertices of small degree. This family of bounds extends current bounds on the 2-domination number of a tree, and provides an alternative proof for the relationship between the annihilation number and the 2-domination number of a tree that was shown by Desormeaux et al.
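As a reminder of the definition at work here: a set S is 2-dominating if every vertex outside S has at least two neighbors in S (so every leaf of a tree must belong to S). A minimal checker on a hypothetical 5-vertex tree:

```python
# Check the 2-domination property: every vertex outside s must have
# at least two neighbors inside s. The tree and candidate sets are
# illustrative assumptions.
tree = {1: [2, 3], 2: [1, 4, 5], 3: [1], 4: [2], 5: [2]}  # adjacency list

def is_2_dominating(adj, s):
    return all(sum(1 for u in adj[v] if u in s) >= 2
               for v in adj if v not in s)

print(is_2_dominating(tree, {3, 4, 5}))     # False: vertex 1 has only one neighbor in the set
print(is_2_dominating(tree, {2, 3, 4, 5}))  # True: vertex 1 has neighbors 2 and 3 in the set
```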

13.
Kevin M. Crofton, Risk Analysis, 2012, 32(10):1784-1797
Traditional additivity models provide little flexibility in modeling the dose-response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, their implicit nature is an obstacle in the formation of the parameter covariance matrix, which forms the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for the FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to more precisely estimate the location of the interaction threshold, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to more precisely define the interaction threshold while maintaining the characteristics deemed important in practice.

14.
Interspecies scaling factors (ISFs) are numbers used to adjust the potency factor (for example, the q1* for carcinogens or reference doses for compounds eliciting other toxic endpoints) determined in experimental animals to account for expected differences in potency between test animals and people. ISFs have been developed for both cancer and non-cancer risk assessments in response to a common issue: toxicologists often determine adverse effects of chemicals in test animals and then they, or more commonly risk assessors and risk managers, have to draw inferences about what these observations mean for the human population. This perspective briefly reviews the development of ISFs and their applications in health risk assessments over the past 20 years, examining the impact of pharmacokinetic principles in altering current perceptions of the ISFs applied in these health risk assessments, and assessing future directions in applying both pharmacokinetic and pharmacodynamic principles for developing ISFs.

15.
Developmental anomalies induced by toxic chemicals may be identified using laboratory experiments with rats, mice, or rabbits. Multinomial responses of fetuses from the same mother are often positively correlated, resulting in overdispersion relative to multinomial variation. In this article, a simple data transformation based on the concept of generalized design effects due to Rao-Scott is proposed for dose-response modeling of developmental toxicity. After scaling the original multinomial data using the average design effect, standard methods for analysis of uncorrelated multinomial data can be applied. Benchmark doses derived using this approach are comparable to those obtained using generalized estimating equations with an extended Dirichlet-trinomial covariance function to describe the dispersion of the original data. This empirical agreement, coupled with a large sample theoretical justification of the Rao-Scott transformation, confirms the applicability of the statistical methods proposed in this article for developmental toxicity risk assessment.
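A minimal sketch of the transformation follows, assuming litter-level trinomial counts and an average design effect; both are invented for illustration.

```python
import numpy as np

# Scale litter-level trinomial counts by an assumed average design
# effect so that standard multinomial methods can be applied
# downstream. Counts and design effect are illustrative.
litters = np.array([[8, 2, 2],    # per litter: [normal, malformed, dead]
                    [10, 1, 0],
                    [6, 4, 3],
                    [9, 0, 1]])

design_effect = 1.8                   # assumed average overdispersion factor
adjusted = litters / design_effect    # "effective" counts
print(adjusted.sum(axis=0))           # effective totals for standard analysis
```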

16.
Methylene chloride has been shown to be a lung and liver carcinogen in the mouse; yet, the current epidemiologic data show no adverse health effects associated with chronic exposure to this compound. Hearne et al. have compared the results of a large mortality study on occupational exposure to methylene chloride to the human risk predictions based on the rodent bioassay to point out the inconsistency between the animal toxicologic and human epidemiologic data. The maximum number of lung and liver cancers predicted due to methylene chloride exposure based on the rodent bioassay data was 24, compared to 14 deaths from these cancers actually observed in the Hearne et al. epidemiology study. We assess the minimum risk detectable by the human study in order to calculate the upper-bound potency of methylene chloride and compare it to the potency derived from the bioassay data. Results from the epidemiology study imply an upper-bound potency of 1.5 × 10^-2 per ppm, compared to 1.4 × 10^-2 per ppm calculated using the most conservative analysis of the animal data. We conclude that the negative epidemiology study of Hearne et al. is not sufficiently powerful to show that the risk is inconsistent with the human risk estimated by modeling the rodent bioassay data. Specifically, the doses to which the workers were exposed, the population studied, and the latency period were not adequate to determine that the risks are outside the bounds of the risk estimates predicted by low-dose modeling of the animal data.

17.
The causal structure of the determinants of trust in industry, government, and citizens' groups in Japan was investigated on the basis of Peters et al. (1997). A preliminary survey was made of the adequacy, in Japan, of the hypotheses proposed by Peters et al. A set of hypothesized determinants of trust in Japan was proposed based on results of the preliminary survey. Questionnaires concerning perceptions of trust in the organizations and the proposed determinants were sent by mail to residents in the area where environmental risk problems had emerged. The data were analyzed by covariance structure analysis to construct models of trust in industry, government, and citizens' groups. As a result, "openness and honesty," "concern and care," "competence," "people's concern with risks," and "consensual values" were found to be factors directly determining trust. Suggested in particular is that "openness" of an organization is not attained merely by information disclosure, but also by bi-directional communication with the people. Moreover, these models include "consensual values," which do not appear in the model proposed by Peters et al.

18.
Using the discrete choice demand model of Berry et al. (1995) [2] and a Bertrand competition model for differentiated products, together with transaction data from Taobao, this paper empirically studies the role of four signaling strategies in online transactions: reputation, consumer protection programs, warranty service, and information disclosure. The demand estimates show that the "7-day no-questions-asked return" plan within the consumer protection program, as well as warranty service, can serve as signals of product quality; in the presence of other effective signaling strategies, however, the signaling role of the program's "advance compensation" plan and of seller reputation is weakened. Cost analysis shows that although information disclosure can increase the likelihood of consumer purchase, the cost of sending this signal is so low that it is easily imitated by low-quality sellers, and it therefore cannot serve as an effective quality signal. This paper is the first to analyze, from both the supply and demand sides, the effects and mechanisms of signaling under information asymmetry.

19.
Conventional tests for composite hypotheses in minimum distance models can be unreliable when the relationship between the structural and reduced-form parameters is highly nonlinear. Such nonlinearity may arise for a variety of reasons, including weak identification. In this note, we begin by studying the problem of testing a "curved null" in a finite-sample Gaussian model. Using the curvature of the model, we develop new finite-sample bounds on the distribution of minimum-distance statistics. These bounds allow us to construct tests for composite hypotheses which are uniformly asymptotically valid over a large class of data generating processes and structural models.

20.
Ames et al. have proposed a new model for evaluating carcinogenic hazards in the environment. They advocate ranking possible carcinogens on the basis of the TD50, the estimated dose at which 50% of the test animals would get tumors, and extrapolating that ranking to all other doses. We argue that implicit in this methodology is a simplistic and inappropriate statistical model. All carcinogens are assumed to act similarly and to have dose-response curves of the same shape that differ only in the value of one parameter. We show by counterexample that the rank order of cancer potencies for two chemicals can change over a reasonable range of doses. Ames et al.'s use of these TD50 ranks to compare the hazards from low level exposures to contaminants in our food and environment is wholly inappropriate and inaccurate. Their dismissal of public health concern for environmental exposures, in general, based on these comparisons, is not supported by the data.
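The kind of counterexample described is easy to reproduce in miniature: take one chemical with a one-hit (low-dose-linear) curve and another with a steep Weibull curve, and the potency ranking flips between low and high doses. The parameter values below are invented for illustration, not fitted potencies.

```python
import numpy as np

# Two dose-response shapes whose potency rank order reverses with
# dose. Parameters are illustrative assumptions.
def p_tumor_A(d):   # one-hit: P = 1 - exp(-0.05 d), linear at low dose
    return 1 - np.exp(-0.05 * d)

def p_tumor_B(d):   # Weibull: P = 1 - exp(-(d/20)**3), steep at high dose
    return 1 - np.exp(-((d / 20.0) ** 3))

for d in [1.0, 40.0]:
    a, b = p_tumor_A(d), p_tumor_B(d)
    print(f"dose {d:>4}: A = {a:.4f}, B = {b:.4f}, more potent: {'A' if a > b else 'B'}")
```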
