Similar Documents (20 results)
1.
There has been considerable discussion regarding the conservativeness of low-dose cancer risk estimates based upon linear extrapolation from upper confidence limits. Various groups have expressed a need for best (point) estimates of cancer risk in order to improve risk/benefit decisions. Point estimates of carcinogenic potency obtained from maximum likelihood estimates of low-dose slope may be highly unstable, being sensitive both to the choice of the dose–response model and possibly to minimal perturbations of the data. For carcinogens that augment background carcinogenic processes and/or for mutagenic carcinogens, the tumor incidence versus target tissue dose is expected to be linear at low doses. Pharmacokinetic data may be needed to identify and adjust for exposure-dose nonlinearities. Based on the assumption that the dose response is linear over low doses, a stable point estimate for low-dose cancer risk is proposed. Since various models give similar estimates of risk down to levels of 1%, a stable estimate of the low-dose cancer slope is provided by ŝ = 0.01/ED01, where ED01 is the dose corresponding to an excess cancer risk of 1%. Thus, low-dose estimates of cancer risk are obtained by risk = ŝ × dose. The proposed procedure is similar to one that has been utilized in the past by the Center for Food Safety and Applied Nutrition, Food and Drug Administration. The upper confidence limit, s*, corresponding to this point estimate of low-dose slope is similar to the upper limit, q1*, obtained from the generalized multistage model. The advantage of the proposed procedure is that ŝ provides stable estimates of low-dose carcinogenic potency that are not unduly influenced by small perturbations of the tumor incidence rates, unlike the maximum likelihood estimate q̂1.
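The two formulas at the heart of the proposal are simple enough to sketch directly; in the snippet below the ED01 value is purely hypothetical, chosen only to illustrate the arithmetic:

```python
# Sketch of the proposed stable point estimate of low-dose potency.
# The ED01 value below is hypothetical, for illustration only.

def low_dose_slope(ed01):
    """s_hat = 0.01 / ED01, where ED01 gives 1% excess cancer risk."""
    return 0.01 / ed01

def low_dose_risk(dose, ed01):
    """Linear low-dose extrapolation: risk = s_hat * dose."""
    return low_dose_slope(ed01) * dose

ed01 = 5.0  # hypothetical dose (mg/kg-day) at 1% excess risk
print(low_dose_slope(ed01))       # ~0.002 (risk per mg/kg-day)
print(low_dose_risk(0.01, ed01))  # excess risk at 0.01 mg/kg-day
```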

2.
Uncertainty in Cancer Risk Estimates
Several existing databases compiled by Gold et al.(1–3) for carcinogenesis bioassays are examined to obtain estimates of the reproducibility of cancer rates across experiments, strains, and rodent species. A measure of carcinogenic potency is given by the TD50 (daily dose that causes a tumor type in 50% of the exposed animals that otherwise would not develop the tumor in a standard lifetime). The lognormal distribution can be used to model the uncertainty of the estimates of potency (TD50) and the ratio of TD50s between two species. For near-replicate bioassays, approximately 95% of the TD50s are estimated to be within a factor of 4 of the mean. Between strains, about 95% of the TD50s are estimated to be within a factor of 11 of their mean, and the pure genetic component of variability is accounted for by a factor of 6.8. Between rats and mice, about 95% of the TD50s are estimated to be within a factor of 32 of the mean, while between humans and experimental animals the factor is 110 for 20 chemicals reported by Allen et al.(4) The common practice of basing cancer risk estimates on the most sensitive rodent species-strain-sex combination and using interspecies dose scaling based on body surface area appears to overestimate cancer rates for these 20 human carcinogens by about one order of magnitude on average. Hence, for chemicals where the dose-response is nearly linear below experimental doses, cancer risk estimates based on animal data are not necessarily conservative and may range from a factor of 10 too low for human carcinogens up to a factor of 1000 too high for approximately 95% of the chemicals tested to date. These limits may need to be modified for specific chemicals where additional mechanistic or pharmacokinetic information may suggest alterations or where particularly sensitive subpopulations may be exposed. Supralinearity could lead to anticonservative estimates of cancer risk.
Underestimating cancer risk by a specific factor has a much larger impact on the actual number of cancer cases than overestimating smaller risks by the same factor. This paper does not address the uncertainties in high-to-low-dose extrapolation. If the dose-response is sufficiently nonlinear at low doses to produce cancer risks near zero, then low-dose risk estimates based on linear extrapolation are likely to overestimate risk and the limits of uncertainty cannot be established.
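One way to turn these reproducibility factors into a distributional statement is to read "within a factor F of the mean" as the two-sided 95% range of a lognormal, which implies a log-scale standard deviation of ln(F)/1.96. That reading is our assumption, not something the abstract states, so treat the sketch as illustrative:

```python
import math

def lognormal_sigma(factor_95):
    """Log-scale SD if 95% of values lie within factor_95 of the median
    (assumed two-sided lognormal range; an interpretive assumption)."""
    return math.log(factor_95) / 1.96

for label, fac in [("near-replicate", 4), ("between strains", 11),
                   ("rats vs. mice", 32), ("animals vs. humans", 110)]:
    print(f"{label}: sigma = {lognormal_sigma(fac):.2f}")
```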

3.
The application of quantitative microbial risk assessment (QMRA) to understand and mitigate risks associated with norovirus is increasingly common, as there is a high frequency of outbreaks worldwide. A key component of QMRA is the dose–response analysis, which is the mathematical characterization of the association between dose and outcome. For norovirus, multiple dose–response models are available that assume either a disaggregated or an aggregated intake dose. This work reviewed the dose–response models currently used in QMRA and compared predicted risks from waterborne exposures (recreational and drinking) using all available dose–response models. The review found that the majority of published QMRAs of norovirus use the 1F1 hypergeometric dose–response model with α = 0.04, β = 0.055. This dose–response model predicted relatively high risk estimates compared to other dose–response models for doses in the range of 1–1,000 genomic equivalent copies. The difference in predicted risk among dose–response models was largest for small doses, which has implications for drinking water QMRAs, where the concentration of norovirus is low. Based on the review, a set of best practices was proposed to encourage the careful consideration and reporting of important assumptions in the selection and use of dose–response models in QMRA of norovirus. Finally, in the absence of one best norovirus dose–response model, multiple models should be used to provide a range of predicted outcomes for probability of infection.
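The fitted model named here is the exact 1F1 hypergeometric, which has no elementary closed form; as a rough stand-in, the familiar approximate beta-Poisson can be evaluated at the same parameter values. With β this small the closed-form approximation is known to be poor, so the snippet is for intuition about the dose range only:

```python
def approx_beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson: P(infection) = 1 - (1 + dose/beta)**(-alpha).
    NOTE: a rough stand-in for the exact 1F1 model used in the reviewed QMRAs."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

alpha, beta = 0.04, 0.055  # parameter values reported in the abstract
for dose in (1, 10, 100, 1000):  # genomic equivalent copies
    print(dose, round(approx_beta_poisson(dose, alpha, beta), 3))
```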

4.
Giardia is a zoonotic gastrointestinal parasite responsible for a substantial global public health burden, and quantitative microbial risk assessment (QMRA) is often used to forecast and manage this burden. QMRA requires dose–response models to extrapolate available dose–response data, but the existing model for Giardia ignores valuable dose–response information, particularly data from several well-documented waterborne outbreaks of giardiasis. The current study updates Giardia dose–response modeling by synthesizing all available data from outbreaks and experimental studies using a Bayesian random effects dose–response model. For outbreaks, mean doses (D) and the degree of spatial and temporal aggregation among cysts were estimated using exposure assessment implemented via two-dimensional Monte Carlo simulation, while potential overreporting of outbreak cases was handled using published overreporting factors and censored binomial regression. Parameter estimation was by Markov chain Monte Carlo simulation and indicated that a typical exponential dose–response parameter for Giardia is r = 1.6 × 10^-2 [3.7 × 10^-3, 6.2 × 10^-2] (posterior median [95% credible interval]), while a typical morbidity ratio is m = 3.8 × 10^-1 [2.3 × 10^-1, 5.5 × 10^-1]. Corresponding (logistic-scale) variance components were σ_r = 5.2 × 10^-1 [1.1 × 10^-1, 9.6 × 10^-1] and σ_m = 9.3 × 10^-1 [7.0 × 10^-2, 2.8 × 10^0], indicating substantial variation in the Giardia dose–response relationship. Compared to the existing Giardia dose–response model, the current study provides more representative estimation of uncertainty in r and novel quantification of its natural variability. Several options for incorporating variability in r (and m) into QMRA predictions are discussed, including incorporation via Monte Carlo simulation as well as evaluation of the current study's model using the approximate beta-Poisson.
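The exponential dose–response model named above has a one-line form, and the morbidity ratio converts infection probability into illness probability. A minimal sketch using the posterior medians from the abstract (the example dose is arbitrary):

```python
import math

def p_infection(dose, r):
    """Exponential dose-response: P(inf) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def p_illness(dose, r, m):
    """P(illness) = morbidity ratio * P(infection)."""
    return m * p_infection(dose, r)

r = 1.6e-2   # posterior median dose-response parameter
m = 3.8e-1   # posterior median morbidity ratio
dose = 10    # arbitrary example dose (cysts), for illustration only
print(p_infection(dose, r))   # ~0.148
print(p_illness(dose, r, m))  # ~0.056
```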

5.
6.
To prevent and control foodborne diseases, there is a fundamental need to identify the foods that are most likely to cause illness. The goal of this study was to rank 25 commonly consumed food products associated with Salmonella enterica contamination in the Central Region of Mexico. A multicriteria decision analysis (MCDA) framework was developed to obtain an S. enterica risk score for each food product based on four criteria: probability of exposure to S. enterica through domestic food consumption (Se); S. enterica growth potential during home storage (Sg); per capita consumption (Pcc); and food attribution of S. enterica outbreaks (So). Risk scores were calculated by the equation Se·W1 + Sg·W2 + Pcc·W3 + So·W4, where each criterion was assigned a normalized value (1–5) and the relative weights (W) were defined by the opinions of 22 experts. Se had the largest effect on the risk score, being the criterion with the highest weight (35%; 95% CI 20%–60%), followed by So (24%; 5%–50%), Sg (23%; 10%–40%), and Pcc (18%; 10%–35%). The results identified chicken (4.4 ± 0.6), pork (4.2 ± 0.6), and beef (4.2 ± 0.5) as the highest-risk foods, followed by seed fruits (3.6 ± 0.5), tropical fruits (3.4 ± 0.4), and dried fruits and nuts (3.4 ± 0.5), while the food products with the lowest risk were yogurt (2.1 ± 0.3), chorizo (2.1 ± 0.4), and cream (2.0 ± 0.3). Approaches with expert-based weighting and equal weighting showed good correlation (R = 0.96) and did not show significant differences in ranking order within the top 20 tier. This study can help risk managers select interventions and develop targeted surveillance programs against S. enterica in high-risk food products.
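The scoring equation is a plain weighted sum. Below, the weights are the expert-elicited means from the abstract, while the criterion scores for "chicken" are hypothetical placeholders, not values taken from the study:

```python
# MCDA risk score: Se*W1 + Sg*W2 + Pcc*W3 + So*W4.
weights = {"Se": 0.35, "Sg": 0.23, "Pcc": 0.18, "So": 0.24}  # sum to 1.0

def risk_score(scores):
    """Weighted sum of normalized criterion scores (each on a 1-5 scale)."""
    return sum(weights[c] * scores[c] for c in weights)

chicken = {"Se": 5, "Sg": 4, "Pcc": 4, "So": 5}  # hypothetical scores
print(round(risk_score(chicken), 2))  # 4.59
```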

7.
In telecommunication network design, the problem of obtaining optimal (arc- or node-) disjoint paths to increase network reliability is extremely important. The problem of calculating k_c disjoint paths from s to t (two distinct nodes) in a network with k_c different (arbitrary) costs on every arc, such that the total cost of the paths is minimised, is NP-complete even for k_c = 2. When k_c = 2, these networks are usually designated dual arc cost networks.

8.
Charles N. Haas, Risk Analysis, 2011, 31(10): 1576–1596
Human brucellosis is one of the most common zoonotic diseases worldwide. Disease transmission often occurs through the handling of domestic livestock, as well as ingestion of unpasteurized milk and cheese, but infectivity can be enhanced if the organism is aerosolized. Because there is no human vaccine available, rising concerns about the threat of brucellosis to human health and its inclusion in the Centers for Disease Control and Prevention's Category B Bioterrorism/Select Agent List make a better understanding of the dose-response relationship of this microbe necessary. Through an extensive peer-reviewed literature search, candidate dose-response data were appraised against quality standards. The statistical programming language R was used to compute maximum likelihood estimates for fitting two models, the exponential and the approximate beta-Poisson (widely used for quantitative risk assessment), to the dose-response data. Dose-response models were generated for prevalent species of Brucella: Br. suis, Br. melitensis, and Br. abortus. Dose-response models were created for aerosolized Br. suis exposure in guinea pigs from pooled studies. A parallel model for guinea pigs inoculated through both aerosol and subcutaneous routes with Br. melitensis showed that the median infectious dose corresponded to about 30 colony-forming units (CFU) for Br. suis, much less than the N50 dose of about 94 CFU for Br. melitensis. When Br. melitensis was tested subcutaneously in mice, the N50 dose was higher, 1,840 CFU. A dose-response model was also constructed from pooled data for mice, rhesus macaques, and humans inoculated through three routes (subcutaneous/aerosol/intradermal) with Br. melitensis.
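The N50 values above slot directly into the (α, N50) form of the approximate beta-Poisson model, which by construction returns exactly 0.5 at the median infectious dose. The α below is a hypothetical placeholder (the abstract does not report the fitted slope parameters):

```python
def beta_poisson_n50(dose, alpha, n50):
    """Approximate beta-Poisson in (alpha, N50) form:
    P = 1 - (1 + dose * (2**(1/alpha) - 1) / N50) ** (-alpha)."""
    return 1.0 - (1.0 + dose * (2 ** (1 / alpha) - 1) / n50) ** (-alpha)

alpha = 0.2  # hypothetical slope parameter, for illustration only
for label, n50 in [("Br. suis, aerosol (guinea pig)", 30),
                   ("Br. melitensis, aerosol/subcut. (guinea pig)", 94),
                   ("Br. melitensis, subcut. (mice)", 1840)]:
    # Evaluating at dose = N50 recovers 0.5 regardless of alpha.
    print(label, beta_poisson_n50(n50, alpha, n50))
```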

9.
Life cycle assessment (LCA) is a framework for comparing products according to their total estimated environmental impact, summed over all chemical emissions and activities associated with a product at all stages in its life cycle (from raw material acquisition, manufacturing, and use to final disposal). For each chemical involved, the exposure associated with the mass released into the environment, integrated over time and space, is multiplied by a toxicological measure to estimate the likelihood of effects and their potential consequences. In this article, we explore the use of quantitative methods drawn from conventional single-chemical regulatory risk assessments to create a procedure for the estimation of the cancer effect measure in the impact phase of LCA. The approach is based on the maximum likelihood estimate of the effect dose inducing a 10% response over background, ED10, and default linear low-dose extrapolation using the slope β_ED10 (= 0.1/ED10). The calculated effects may correspond to residual risks below current regulatory compliance requirements that occur over multiple generations and at multiple locations; but at the very least they represent a "using up" of some portion of the human population's ability to accommodate emissions. Preliminary comparisons are performed with existing measures, such as the U.S. Environmental Protection Agency's (U.S. EPA's) slope factor measure q1*. By analyzing bioassay data for 44 chemicals drawn from the EPA's Integrated Risk Information System (IRIS) database, we explore estimating ED10 from more readily available information such as the median tumor dose rate TD50 and the median lethal dose LD50. Based on the TD50, we then estimate the ED10 for more than 600 chemicals. Differences in potential consequences, or severity, are addressed by combining β_ED10 with the measure disability-adjusted life years per affected person, DALYp.
Most of the variation among chemicals for cancer effects is found to be due to differences in the slope factors (β_ED10), which range from 10^-4 up to 10^4 (risk of cancer per mg/kg-day).
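The slope-factor construction in this abstract reduces to β_ED10 = 0.1/ED10, optionally multiplied by a severity weight (DALYs per affected person). The ED10 and DALYp values below are hypothetical, purely to show the arithmetic:

```python
def slope_beta_ed10(ed10):
    """Linear low-dose slope: beta_ED10 = 0.1 / ED10 (risk per mg/kg-day)."""
    return 0.1 / ed10

def severity_weighted_effect(dose, ed10, daly_p):
    """Screening-level effect = beta_ED10 * dose * DALYs per affected person."""
    return slope_beta_ed10(ed10) * dose * daly_p

ed10, daly_p = 50.0, 11.5  # hypothetical values, not from the article
print(slope_beta_ed10(ed10))                    # ~0.002 risk per mg/kg-day
print(severity_weighted_effect(0.1, ed10, daly_p))
```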

10.
Historically, U.S. regulators have derived cancer slope factors by using applied dose and tumor response data from a single key bioassay or by averaging the cancer slope factors of several key bioassays. Recent changes in U.S. Environmental Protection Agency (EPA) guidelines for cancer risk assessment have acknowledged the value of better use of mechanistic data and better dose–response characterization. However, agency guidelines may benefit from additional considerations presented in this paper. An exploratory study was conducted by using rat brain tumor data for acrylonitrile (AN) to investigate the use of physiologically based pharmacokinetic (PBPK) modeling along with pooling of dose–response data across routes of exposure as a means for improving carcinogen risk assessment methods. In this study, two contrasting assessments were conducted for AN-induced brain tumors in the rat: (1) following the EPA's approach, the dose–response relationship was characterized by using administered dose/concentration for each of the key studies assessed individually; and (2) in an analysis of the pooled data, the dose–response relationship was characterized by using PBPK-derived internal dose measures for a combined database of ten bioassays. The cancer potencies predicted by the contrasting assessments are remarkably different (i.e., risk-specific doses differ by as much as two to four orders of magnitude), with the pooled-data assessments yielding lower values. This result suggests that current carcinogen risk assessment practices overestimate AN cancer potency. This methodology should be equally applicable to other data-rich chemicals in identifying (1) a useful dose measure, (2) an appropriate dose–response model, (3) an acceptable point of departure, and (4) an appropriate method of extrapolation from the range of observation to the range of prediction when a chemical's mode of action remains uncertain.

11.
The National Weather Service has adopted warning polygons that more specifically indicate the risk area than its previous county-wide warnings. However, these polygons are not defined in terms of numerical strike probabilities (p_s). To better understand people's interpretations of warning polygons, 167 participants were shown 23 hypothetical scenarios in one of three information conditions—polygon-only (Condition A), polygon + tornadic storm cell (Condition B), and polygon + tornadic storm cell + flanking nontornadic storm cells (Condition C). Participants judged each polygon's p_s and reported the likelihood of taking nine different response actions. The polygon-only condition replicated the results of previous studies; p_s was highest at the polygon's centroid and declined in all directions from there. The two conditions displaying storm cells differed from the polygon-only condition only in having p_s just as high at the polygon's edge nearest the storm cell as at its centroid. Overall, p_s values were positively correlated with expectations of continuing normal activities, seeking information from social sources, seeking shelter, and evacuating by car. These results indicate that participants make more appropriate p_s judgments when polygons are presented in their natural context of radar displays than when they are presented in isolation. However, the fact that p_s judgments had moderately positive correlations with both sheltering (a generally appropriate response) and evacuation (a generally inappropriate response) suggests that experiment participants experience the same ambivalence about these two protective actions as people threatened by actual tornadoes.

12.
Elizabethkingia spp. are common environmental pathogens responsible for infections in more vulnerable populations. Although the exposure routes of concern are not well understood, some hospital-associated outbreaks have indicated possible waterborne transmission. In order to facilitate quantitative microbial risk assessment (QMRA) for Elizabethkingia spp., this study fit dose–response models to frog and mice datasets that evaluated intramuscular and intraperitoneal exposure to Elizabethkingia spp. The frog datasets could be pooled, and the exact beta-Poisson model was the best-fitting model, with optimized parameters α = 0.52 and β = 86,351. Using the exact beta-Poisson model, the dose of Elizabethkingia miricola resulting in a 50% morbidity response (LD50) was estimated to be approximately 237,000 CFU. The model developed herein was used to estimate the probability of infection for a hospital patient under a modeled exposure scenario involving a contaminated medical device and reported Elizabethkingia spp. concentrations isolated from hospital sinks after an outbreak. The median exposure dose was approximately 3 CFU per insertion event, and the corresponding median risk of infection was 3.4 × 10^-5. The median risk estimated in this case study was lower than the 3% attack rate observed in a previous outbreak; however, there are gaps pertaining to the possible concentrations of Elizabethkingia spp. in tap water and the most likely exposure routes. This is the first dose–response model developed for Elizabethkingia spp., enabling future risk assessments to help determine levels of risk and potential effective risk management strategies.
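For the approximate beta-Poisson form P = 1 − (1 + d/β)^−α there is a closed-form median-response dose, d50 = β(2^(1/α) − 1). Applying it to the fitted parameters lands near, but not exactly on, the reported 237,000 CFU, because the paper fits the exact beta-Poisson rather than this approximation:

```python
def beta_poisson_d50(alpha, beta):
    """Median-response dose of the *approximate* beta-Poisson model,
    P = 1 - (1 + d/beta)**(-alpha): solve P(d50) = 0.5."""
    return beta * (2 ** (1 / alpha) - 1)

alpha, beta = 0.52, 86_351  # fitted parameters reported in the abstract
d50 = beta_poisson_d50(alpha, beta)
print(f"approximate-model LD50: {d50:,.0f} CFU")  # near the reported 237,000
```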

13.
Toxoplasma gondii is a protozoan parasite that is responsible for approximately 24% of deaths attributed to foodborne pathogens in the United States. It is thought that a substantial portion of human T. gondii infections is acquired through the consumption of meats. The dose-response relationship for human exposures to T. gondii-infected meat is unknown because no human data are available. The goal of this study was to develop and validate dose-response models based on animal studies, and to compute scaling factors so that animal-derived models can predict T. gondii infection in humans. Relevant studies in the literature were collected, and appropriate studies were selected based on animal species, stage and genotype of T. gondii, and route of infection. Data were pooled and fitted to four sigmoidal-shaped mathematical models, and model parameters were estimated using maximum likelihood estimation. Data from a mouse study were selected to develop the dose-response relationship. Exponential and beta-Poisson models, which predicted similar responses, were selected as reasonable dose-response models based on their simplicity, biological plausibility, and goodness of fit. A confidence interval for the parameter was determined by constructing 10,000 bootstrap samples. Scaling factors were computed by matching the predicted infection cases with the epidemiological data. Mouse-derived models were validated against data for the dose-infection relationship in rats. A human dose-response model was developed as P(d) = 1 − exp(−0.0015 × 0.005 × d) or P(d) = 1 − (1 + d × 0.003/582.414)^−1.479. Both models predict the human response after consuming T. gondii-infected meats, and provide an enhanced risk characterization in a quantitative microbial risk assessment model for this pathogen.
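Both fitted human models are closed-form and easy to compare side by side (the beta-Poisson exponent is taken as −1.479, matching the standard approximate beta-Poisson form):

```python
import math

def p_exponential(d):
    """Human exponential model: P(d) = 1 - exp(-0.0015 * 0.005 * d)."""
    return 1.0 - math.exp(-0.0015 * 0.005 * d)

def p_beta_poisson(d):
    """Human beta-Poisson model: P(d) = 1 - (1 + d*0.003/582.414)**(-1.479)."""
    return 1.0 - (1.0 + d * 0.003 / 582.414) ** (-1.479)

for d in (10, 100, 1000):  # example doses, for illustration only
    print(d, p_exponential(d), p_beta_poisson(d))  # the two stay close
```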

14.
This study developed dose-response models for determining the probability of eye or central nervous system infections from previously conducted studies using different strains of Acanthamoeba spp. The data came from animal experiments in which mice and rats were exposed corneally and intranasally to the pathogens. The corneal inoculations of Acanthamoeba isolate Ac 118 included varied amounts of Corynebacterium xerosis and were best fit by the exponential model; virulence increased with higher levels of C. xerosis. The Acanthamoeba culbertsoni intranasal study, with death as the endpoint of response, was best fit by the beta-Poisson model. The HN-3 strain of A. castellanii was studied with an intranasal exposure and three different endpoints of response; for all three studies, the exponential model was the best fit. A model based on pooling data sets of the intranasal exposure and death endpoint resulted in an LD50 of 19,357 amoebae. The dose-response models developed in this study are an important step toward characterizing the risk associated with free-living amoebae such as Acanthamoeba in drinking water distribution systems. Understanding the human health risk posed by free-living amoebae will allow for quantitative microbial risk assessments that support building design decisions to minimize opportunities for pathogen growth and survival.
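Under the exponential model P = 1 − exp(−r·d), an LD50 fully determines the parameter: r = ln(2)/LD50. A quick sketch using the pooled intranasal/death LD50 reported above:

```python
import math

def exponential_r_from_ld50(ld50):
    """Exponential model P = 1 - exp(-r*d) has LD50 = ln(2)/r,
    so r = ln(2)/LD50."""
    return math.log(2) / ld50

r = exponential_r_from_ld50(19_357)  # pooled LD50 from the abstract
print(r)                             # ~3.6e-5 per amoeba
print(1 - math.exp(-r * 19_357))     # recovers 0.5 at the LD50
```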

15.
Leptospirosis is a preeminent zoonotic disease concentrated in tropical areas and prevalent in both industrialized and rural settings. Dose-response models were generated from 22 data sets reported in 10 different studies. All of the selected studies used rodent subjects, primarily hamsters, with mortality as the predominant endpoint and the challenge strain administered intraperitoneally. Dose-response models based on a single evaluation postinfection displayed median lethal dose (LD50) estimates that ranged between 1 and 10^7 leptospirae, depending upon the strain's virulence and the period elapsed since the initial exposure inoculation. Twelve of the 22 data sets measured the number of affected subjects daily over an extended period, so dose-response models with time-dependent parameters were estimated. Pooling between data sets produced seven common dose-response models and one time-dependent model. These pooled common models spanned data sets with different test-subject hosts, and disparate leptospiral strains tested on identical hosts. Comparative modeling was done with parallel tests to isolate the effect of a single variable, either strain or test host, and to quantify the difference by calculating a dose multiplication factor. Statistical pooling implies that the mechanistic processes of leptospirosis can be represented by the same dose-response model for different experimental infection tests even though they may involve different host species, routes, and leptospiral strains, although the cause of this pathophysiological phenomenon has not yet been identified.

16.
U.S. Environmental Protection Agency benchmark doses for dichotomous cancer responses are often estimated using a multistage model based on a monotonic dose-response assumption. To account for model uncertainty in the estimation process, several model averaging methods have been proposed for risk assessment. In this article, we extend the usual parameter space in the multistage model for monotonicity to allow for the possibility of a hormetic dose-response relationship. Bayesian model averaging is used to estimate the benchmark dose and to provide posterior probabilities for monotonicity versus hormesis. Simulation studies show that the newly proposed method provides robust point and interval estimation of a benchmark dose in the presence or absence of hormesis. We also apply the method to two data sets on carcinogenic response of rats to 2,3,7,8-tetrachlorodibenzo-p-dioxin.
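The model-averaging step itself is a posterior-probability-weighted combination. The toy numbers below (model probabilities and per-model benchmark doses) are hypothetical placeholders, solely to show the mechanics:

```python
# Toy Bayesian model averaging over a monotone and a hormetic multistage fit.
# All numbers are hypothetical, not taken from the article.
models = [
    ("monotone multistage", 0.7, 1.8),  # (name, posterior prob., BMD estimate)
    ("hormetic multistage", 0.3, 2.6),
]

bmd_averaged = sum(prob * bmd for _, prob, bmd in models)
p_hormesis = sum(prob for name, prob, _ in models if "hormetic" in name)
print(bmd_averaged)  # weighted BMD (= 0.7*1.8 + 0.3*2.6)
print(p_hormesis)    # posterior probability of hormesis
```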

17.
The aim of this study was to evaluate the effects of implemented control measures to reduce illness induced by Vibrio parahaemolyticus in horse mackerel (Trachurus japonicus), a seafood commonly consumed raw in Japan. On the basis of currently available experimental and survey data, we constructed a quantitative risk model of V. parahaemolyticus in horse mackerel from harvest to consumption. In particular, the following factors were evaluated: bacterial growth at all stages, effects of washing the fish body and storage water, and bacterial transfer from the fish surface, gills, and intestine to fillets during preparation. New parameters of the beta-Poisson dose-response model were determined from all human feeding trials, some of which have been used for risk assessment by the U.S. Food and Drug Administration (USFDA). The probability of illness caused by V. parahaemolyticus was estimated using both the USFDA dose-response parameters and our parameters for each selected pathway of scenario alternatives: washing whole fish at landing, storage in contaminated water, high temperature during transportation, and washing fish during preparation. The last scenario (washing fish during preparation) was the most effective, reducing the risk of illness by about a factor of 10 compared to no washing at this stage. Risk of illness increased by 50% with elevated temperature during transportation, according to our assumptions of duration and temperature. The other two scenarios did not significantly affect risk. The choice of dose-response parameters was not critical for evaluation of control measures.

18.
Carmen Keller, Risk Analysis, 2011, 31(7): 1043–1054
Previous experimental research provides evidence that a familiar risk comparison within a risk ladder is understood by low- and high-numerate individuals, and especially helps low numerates to better evaluate risk. In the present study, an eye tracker was used to capture individuals' visual attention to a familiar risk comparison, such as the risk associated with smoking. Two parameters of information processing—efficiency and level—were derived from visual attention. A random sample of participants from the general population (N = 68) interpreted a given risk level with the help of the risk ladder. Numeracy was negatively correlated with overall visual attention on the risk ladder (r_s = −0.28, p = 0.01), indicating that the lower the numeracy, the more time spent looking at the whole risk ladder. Numeracy was positively correlated with the efficiency of processing relevant frequency information (r_s = 0.34, p < 0.001) and relevant textual information (r_s = 0.34, p < 0.001), but not with the efficiency of processing relevant comparative and numerical information. There was a significant negative correlation between numeracy and the level of processing of relevant comparative risk information (r_s = −0.21, p < 0.01), indicating that low numerates processed the comparative risk information more deeply than high numerates. There was no correlation between numeracy and perceived risk. These results add to previous experimental research indicating that the smoking risk comparison was crucial for low numerates to evaluate and understand risk. Furthermore, the eye-tracker method is promising for studying information processing and improving risk communication formats.

19.
In Part 1 of this article we developed an approach for the calculation of cancer effect measures for life cycle assessment (LCA). In this article, we propose and evaluate the method for the screening of noncancer toxicological health effects. This approach draws on the noncancer health risk assessment concept of benchmark dose, while noting important differences with regulatory applications in the objectives of an LCA study. We adopt the central-tendency estimate of the toxicological effect dose inducing a 10% response over background, ED10, to provide a consistent point of departure for default linear low-dose response estimates (β_ED10). This explicit estimation of low-dose risks, while necessary in LCA, is in marked contrast to many traditional procedures for noncancer assessments. For pragmatic reasons, mechanistic thresholds and nonlinear low-dose response curves were not implemented in the presented framework. In essence, for the comparative needs of LCA, we propose that one initially screens alternative activities or products on the degree to which the associated chemical emissions erode their margins of exposure, which may or may not be manifested as increases in disease incidence. We illustrate the method here by deriving the β_ED10 slope factors from bioassay data for 12 chemicals and outline some of the possibilities for extrapolation from other more readily available measures, such as no-observed-adverse-effect levels (NOAELs), avoiding uncertainty factors that lead to inconsistent degrees of conservatism from chemical to chemical. These extrapolations facilitated the initial calculation of slope factors for an additional 403 compounds, ranging from 10^-6 to 10^3 (risk per mg/kg-day).
The potential consequences of the effects are taken into account in a preliminary approach by combining β_ED10 with the severity measure disability-adjusted life years (DALY), providing a screening-level estimate of the potential consequences associated with exposures, integrated over time and space, to a given mass of chemical released into the environment for use in LCA.

20.
In this paper, we study the parameterized complexity of the Dominating Set problem in chordal graphs and near-chordal graphs. We show the problem is W[2]-hard and cannot be solved in time n^{o(k)} in chordal and s-chordal (s > 3) graphs unless W[1] = FPT. In addition, we obtain inapproximability results for computing a minimum dominating set in chordal and near-chordal graphs. Our results prove that unless NP = P, the minimum dominating set in a chordal or s-chordal (s > 3) graph cannot be approximated within a ratio of (c/3)·ln n in polynomial time, where n is the number of vertices in the graph and 0 < c < 1 is the constant from the inapproximability of the minimum dominating set in general graphs. In other words, our results suggest that restricting to chordal or s-chordal graphs can improve the approximation ratio by no more than a factor of 3. We then extend our techniques to find similar results for the Independent Dominating Set problem and the Connected Dominating Set problem in chordal or near-chordal graphs.
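The (c/3)·ln n lower bound above is stated relative to the classic greedy algorithm, which achieves an O(ln n) approximation for Dominating Set via the set-cover reduction. A compact sketch of that greedy baseline (the example graph is our own, not from the paper):

```python
def greedy_dominating_set(adj):
    """Greedy O(ln n)-approximation for minimum dominating set:
    repeatedly add the vertex that dominates the most not-yet-dominated
    vertices (a vertex dominates itself and its neighbors)."""
    undominated = set(adj)
    dom = []
    while undominated:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        dom.append(v)
        undominated -= {v} | adj[v]
    return dom

# Hypothetical example: a star (center 0) with a pendant path 3-4.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3}}
print(greedy_dominating_set(adj))  # a size-2 dominating set
```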
