Similar Documents
20 similar documents found (search time: 15 ms)
1.
The lazy bureaucrat scheduling problem was first introduced by Arkin et al. (Inf Comput 184:129–146, 2003). Since then, a number of variants have been addressed, but very little is known about the online version. In this note we focus on online scheduling, in which jobs arrive over time. The bureaucrat (machine) has a working time interval; that is, he has a deadline by which all scheduled jobs must be completed. Each decision is based only on the jobs released so far, with no information about the future. We consider two objective functions, min-makespan and min-time-spent. Both admit best possible online algorithms with competitive ratio \(\frac{\sqrt{5}+1}{2}\approx 1.618\).
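For context, this bound is the golden ratio; in competitive analyses of this kind it typically arises as the positive root of a trade-off recurrence. A minimal derivation sketch (our illustration, not the paper's actual argument):

\[
r = 1 + \frac{1}{r}
\;\Longrightarrow\;
r^2 - r - 1 = 0
\;\Longrightarrow\;
r = \frac{1+\sqrt{5}}{2} \approx 1.618.
\]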

2.
This paper is a challenge from a pair of lifelong technical specialists in risk assessment to the risk-management community: better define social decision criteria for risk acceptance versus risk control in relation to the issues of variability and uncertainty. To stimulate discussion, we offer a variety of straw-man proposals about where we think variability and uncertainty are likely to matter for different types of social policy considerations, in the context of a few different kinds of decisions. In particular, we draw on recent presentations of uncertainty and variability data offered by EPA in the context of the consideration of revised ambient air quality standards under the Clean Air Act.

3.
Since Sedláček introduced the notion of a magic labeling of a graph in 1963, a variety of magic labelings have been defined and studied. In this paper, we study consecutive edge magic labelings of connected bipartite graphs. We make the useful observation that there are only four possible values of b for which a connected bipartite graph has a b-edge consecutive magic labeling. On the basis of this fundamental result, we deduce various interesting results on consecutive edge magic labelings of bipartite graphs. We do not restrict attention to specific classes of graphs, but also discuss the more general classes of bipartite and non-bipartite graphs.
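For readers new to the topic, an edge magic total labeling of a graph with p vertices and q edges is a bijection \(f:V\cup E\rightarrow \{1,\ldots ,p+q\}\) such that \(f(u)+f(uv)+f(v)\) equals the same constant on every edge; the b-edge consecutive variant additionally requires the edge labels to form the consecutive block \(\{b+1,\ldots ,b+q\}\). A brute-force sketch for tiny graphs (our illustration, following the standard definitions rather than the paper's methods):

```python
# Brute-force enumeration of edge magic total labelings of a tiny
# graph (exponential time; illustration only).
from itertools import permutations

def edge_magic_labelings(vertices, edges):
    """Yield (labeling, magic constant) pairs for G = (vertices, edges)."""
    elements = list(vertices) + list(edges)
    for perm in permutations(range(1, len(elements) + 1)):
        f = dict(zip(elements, perm))
        sums = {f[u] + f[(u, v)] + f[v] for (u, v) in edges}
        if len(sums) == 1:                  # same sum on every edge
            yield f, sums.pop()

# Example: the path on vertices 0-1-2.
V, E = [0, 1, 2], [(0, 1), (1, 2)]
labeling, k = next(edge_magic_labelings(V, E))
print(labeling, "magic constant:", k)
```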

4.
Brand, Kevin P.; Rhomberg, Lorenz; Evans, John S. Risk Analysis, 1999, 19(2):295–308
The prominent role of animal bioassay evidence in environmental regulatory decisions compels a careful characterization of extrapolation uncertainties. In noncancer risk assessment, uncertainty factors are incorporated to account for each of several extrapolations required to convert a bioassay outcome into a putative subthreshold dose for humans. Measures of relative toxicity taken between different dosing regimens, different endpoints, or different species serve as a reference for establishing the uncertainty factors. Ratios of no observed adverse effect levels (NOAELs) have been used for this purpose; statistical summaries of such ratios across sets of chemicals are widely used to guide the setting of uncertainty factors. Given the poor statistical properties of NOAELs, the informativeness of these summary statistics is open to question. To evaluate this, we develop an approach to calibrate the ability of NOAEL ratios to reveal true properties of a specified distribution for relative toxicity. A priority of this analysis is to account for dependencies of NOAEL ratios on experimental design and other exogenous factors. Our analysis of NOAEL ratio summary statistics finds (1) that such dependencies are complex and produce pronounced systematic errors and (2) that sampling error associated with typical sample sizes (50 chemicals) is non-negligible. These uncertainties strongly suggest that NOAEL ratio summary statistics cannot be taken at face value; conclusions based on such ratios reported in well over a dozen published papers should be reconsidered.
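The flavor of such a calibration can be sketched with a toy simulation: assume (purely for illustration; these are not the paper's distributions) a lognormal relative-toxicity distribution and additional NOAEL observation error, and watch how a summary statistic of NOAEL ratios fluctuates across sets of 50 chemicals:

```python
# Toy calibration of NOAEL-ratio summary statistics (illustration
# only; the distributions and error model are our assumptions, not
# the paper's). True relative toxicity is lognormal; each observed
# NOAEL ratio adds design-driven noise; we track how a summary
# statistic fluctuates across sets of 50 chemicals.
import numpy as np

rng = np.random.default_rng(0)

def observed_p95(n_chemicals=50, sigma_true=1.0, sigma_obs=0.5):
    """95th percentile of observed NOAEL ratios for one chemical set."""
    true_log = rng.normal(0.0, sigma_true, n_chemicals)
    obs_log = true_log + rng.normal(0.0, sigma_obs, n_chemicals)
    return np.exp(np.percentile(obs_log, 95))

p95 = np.array([observed_p95() for _ in range(2000)])
print(f"median {np.median(p95):.2f}, "
      f"5th-95th pct across sets: {np.percentile(p95, 5):.2f}"
      f"-{np.percentile(p95, 95):.2f}")
```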

5.
Performance Assessment (PA) is the use of mathematical models to simulate the long-term behavior of engineered and geologic barriers in a nuclear waste repository; methods of uncertainty analysis are used to assess effects of parametric and conceptual uncertainties associated with the model system upon the uncertainty in outcomes of the simulation. PA is required by the U.S. Environmental Protection Agency as part of its certification process for geologic repositories for nuclear waste. This paper is a dialogue to explore the value and limitations of PA. Two skeptics acknowledge the utility of PA in organizing the scientific investigations that are necessary for confident siting and licensing of a repository; however, they maintain that the PA process, at least as it is currently implemented, is an essentially unscientific process with shortcomings that may provide results of limited use in evaluating actual effects on public health and safety. Conceptual uncertainties in a PA analysis can be so great that results can be confidently applied only over short time ranges, the antithesis of the purpose behind long-term, geologic disposal. Two proponents of PA agree that performance assessment is unscientific, but only in the sense that PA is an engineering analysis that uses existing scientific knowledge to support public policy decisions, rather than an investigation intended to increase fundamental knowledge of nature; PA has different goals and constraints than a typical scientific study. The proponents describe an ideal, six-step process for conducting generalized PA, here called probabilistic systems analysis (PSA); they note that virtually all scientific content of a PA is introduced during the model-building steps of a PSA; they contend that a PA based on simple but scientifically acceptable mathematical models can provide useful and objective input to regulatory decision makers. The value of the results of any PA must lie between these two views and will depend on the level of knowledge of the site, the degree to which models capture actual physical and chemical processes, the time over which extrapolations are made, and the proper evaluation of health risks attending implementation of the repository. The challenge is in evaluating whether the quality of the PA matches the needs of decision makers charged with protecting the health and safety of the public.

6.
An Uncertainty Support Vector Classification Early-Warning Algorithm   (Cited by 4: 0 self-citations, 4 by others)
Support vector classification can, in theory, guarantee good extrapolation ability for early warning, but determining the historical warning degrees is a thorny problem. This paper proposes an uncertainty support vector classification early-warning method, which recasts the support vector classification early-warning problem as a reasoned adjustment of the penalty coefficient of each historical sample. This greatly reduces the number of constraints and brings expert judgment into the early-warning system: it not only integrates expert opinions but also extends SVM theory itself. We prove that the fuzzy support vector machine is a special case of uncertainty support vector classification, which gives the fuzzy support vector machine a precise meaning. Numerical experiments show that the uncertainty support vector classification early-warning method has practical application value.
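A loose analogy in modern tooling (our sketch, not the authors' algorithm): scikit-learn's SVC accepts per-sample weights that rescale the penalty coefficient C for each training sample, which is one way to encode expert confidence in individual historical warning labels:

```python
# Sketch of per-sample penalty coefficients in an SVM (a scikit-learn
# analogy, not the authors' algorithm): sample_weight rescales the
# penalty C for each historical sample, so expert confidence can
# down-weight doubtful warning labels.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy "warning" labels

# Expert-assigned confidence in each historical label (assumption).
confidence = rng.uniform(0.2, 1.0, size=len(y))

clf = SVC(C=10.0, kernel="rbf")
clf.fit(X, y, sample_weight=confidence)   # per-sample penalty C_i = C * w_i
print("training accuracy:", clf.score(X, y))
```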

7.
Combining Probability Distributions From Experts in Risk Analysis   (Cited by 33: 0 self-citations, 33 by others)
This paper concerns the combination of experts' probability distributions in risk analysis, discussing a variety of combination methods and attempting to highlight the important conceptual and practical issues to be considered in designing a combination process in practice. The role of experts is important because their judgments can provide valuable information, particularly in view of the limited availability of hard data regarding many important uncertainties in risk analysis. Because uncertainties are represented in terms of probability distributions in probabilistic risk analysis (PRA), we consider expert information in terms of probability distributions. The motivation for the use of multiple experts is simply the desire to obtain as much information as possible. Combining experts' probability distributions summarizes the accumulated information for risk analysts and decision-makers. Procedures for combining probability distributions are often compartmentalized as mathematical aggregation methods or behavioral approaches, and we discuss both categories. However, an overall aggregation process could involve both mathematical and behavioral aspects, and no single process is best in all circumstances. An understanding of the pros and cons of different methods and the key issues to consider is valuable in the design of a combination process for a specific PRA. The output, a combined probability distribution, can ideally be viewed as representing a summary of the current state of expert opinion regarding the uncertainty of interest.
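Among the mathematical aggregation methods this literature discusses, the simplest is the linear opinion pool: the combined distribution is a weighted mixture of the experts' distributions. A minimal sketch (the experts' densities and weights below are assumptions for illustration):

```python
# Minimal linear opinion pool: the combined distribution is a
# weighted mixture of the experts' probability densities.
# (One standard mathematical aggregation method; the densities and
# weights below are assumptions for illustration.)
import numpy as np

x = np.linspace(0.0, 10.0, 501)
dx = x[1] - x[0]

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

expert_pdfs = [normal_pdf(x, 3.0, 1.0),    # expert 1
               normal_pdf(x, 5.0, 2.0),    # expert 2
               normal_pdf(x, 4.0, 0.5)]    # expert 3
weights = np.array([0.5, 0.3, 0.2])        # e.g., from calibration scores

combined = sum(w * p for w, p in zip(weights, expert_pdfs))
print("combined density integrates to ~", round(float((combined * dx).sum()), 3))
```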

8.
An analysis of the uncertainty in guidelines for the ingestion of methylmercury (MeHg) due to human pharmacokinetic variability was conducted using a physiologically based pharmacokinetic (PBPK) model that describes MeHg kinetics in the pregnant human and fetus. Two alternative derivations of an ingestion guideline for MeHg were considered: the U.S. Environmental Protection Agency reference dose (RfD) of 0.1 μg/kg/day derived from studies of an Iraqi grain poisoning episode, and the Agency for Toxic Substances and Disease Registry chronic oral minimal risk level (MRL) of 0.5 μg/kg/day based on studies of a fish-eating population in the Seychelles Islands. Calculation of an ingestion guideline for MeHg from either of these epidemiological studies requires calculation of a dose conversion factor (DCF) relating a hair mercury concentration to a chronic MeHg ingestion rate. To evaluate the uncertainty in this DCF across the population of U.S. women of child-bearing age, Monte Carlo analyses were performed in which distributions for each of the parameters in the PBPK model were randomly sampled 1000 times. The 1st and 5th percentiles of the resulting distribution of DCFs were a factor of 1.8 and 1.5 below the median, respectively. This estimate of variability is consistent with, but somewhat less than, previous analyses performed with empirical, one-compartment pharmacokinetic models. The use of a consistent factor of 1.5 in both guidelines for pharmacokinetic variability in the DCF, keeping all other aspects of the derivations unchanged, would result in an RfD of 0.2 μg/kg/day and an MRL of 0.3 μg/kg/day.
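A stripped-down sketch of the percentile logic described above (placeholder lognormal parameters, not the paper's PBPK model):

```python
# Sketch of the Monte Carlo percentile logic (placeholder lognormal
# parameters, not the paper's PBPK model): sample pharmacokinetic
# parameters, form a dose conversion factor (DCF) per draw, then
# compare low percentiles with the median.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
clearance = rng.lognormal(mean=0.0, sigma=0.30, size=n)      # assumed
hair_to_blood = rng.lognormal(mean=0.0, sigma=0.25, size=n)  # assumed

dcf = clearance / hair_to_blood   # stand-in for the model's DCF formula
med = np.median(dcf)
print("median / 1st percentile:", round(med / np.percentile(dcf, 1), 2))
print("median / 5th percentile:", round(med / np.percentile(dcf, 5), 2))
```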

9.
Mixed Levels of Uncertainty in Complex Policy Models   (Cited by 3: 0 self-citations, 3 by others)
The characterization and treatment of uncertainty poses special challenges when modeling indeterminate or complex coupled systems such as those involved in the interactions between human activity, climate, and the ecosystem. Uncertainty about model structure may become as important as, or more important than, uncertainty about parameter values. When uncertainty grows so large that prediction or optimization no longer makes sense, it may still be possible to use the model as a behavioral test bed to examine the relative robustness of alternative observational and behavioral strategies. When models must be run into portions of their phase space that are not well understood, different submodels may become unreliable at different rates. A common example involves running a time-stepped model far into the future. Several strategies can be used to deal with such situations. The probability of model failure can be reported as a function of time. Possible alternative surprises can be assigned probabilities, modeled separately, and combined. Finally, through the use of subjective judgments, one may be able to combine, and over time shift between, models, moving from more detailed to progressively simpler order-of-magnitude models, and perhaps ultimately on to simple bounding analysis.

10.
An independent set of a graph G is a set of pairwise non-adjacent vertices. Let \(i_k = i_k(G)\) be the number of independent sets of cardinality k of G. The independence polynomial \(I(G, x)=\sum _{k\geqslant 0}i_k(G)x^k\), first defined by Gutman and Harary, has been the focus of considerable research recently, and \(i(G)=I(G, 1)\) is called the Merrifield–Simmons index of G. In this paper, we first prove that among all trees of order n, the kth coefficient \(i_k\) is smallest when the tree is a path and largest when it is a star. Moreover, we identify the tree among all trees of order n with diameter at least d all of whose coefficients of \(I(G, x)\) are largest. Then we identify the graphs among the n-vertex unicyclic graphs (resp. n-vertex connected graphs with clique number \(\omega \)) which simultaneously minimize all coefficients of \(I(G, x)\), and we also solve the opposite problems of simultaneously maximizing all coefficients of \(I(G, x)\) among these two classes of graphs. Finally, we characterize the graph among all n-vertex connected graphs with chromatic number \(\chi \) (resp. vertex connectivity \(\kappa \)) which simultaneously minimizes all coefficients of \(I(G, x)\). Our results imply some known results on the Merrifield–Simmons index of graphs.
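By way of illustration, the coefficients \(i_k\) of a small graph can be computed by brute force directly from the definition (our sketch; exponential time, tiny graphs only):

```python
# Brute-force independence polynomial I(G, x) = sum_k i_k x^k of a
# small graph, matching the definition above (exponential time, so
# only for illustration on tiny graphs).
from itertools import combinations

def independence_polynomial(vertices, edges):
    """Return [i_0, i_1, ...], the coefficients of I(G, x)."""
    adj = set(frozenset(e) for e in edges)
    coeffs = []
    for k in range(len(vertices) + 1):
        count = sum(
            all(frozenset((u, v)) not in adj
                for u, v in combinations(S, 2))
            for S in combinations(vertices, k)
        )
        coeffs.append(count)
    return coeffs

# Path P4: coefficients [1, 4, 3, 0, 0]; their sum i(G) = I(G, 1) = 8
# is the Merrifield-Simmons index.
print(independence_polynomial([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))
```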

11.
We consider the online matching problem, where n server-vertices lie in a metric space and n request-vertices that arrive over time must each immediately be permanently assigned to a server-vertex. We focus on the egalitarian bottleneck objective, where the goal is to minimize the maximum distance between any request and its server. It has been shown that while there are effective algorithms for the utilitarian objective (minimizing total cost) in the resource augmentation setting where the offline adversary has half the resources, these are not effective for the egalitarian objective. Thus, we propose a new Serve-or-Skip (SoS) bicriteria analysis model, where the online algorithm may reject or skip up to a specified number of requests, and propose two greedy algorithms: GriNN(t) and Grin\(^*\)(t). We show that the SoS model of resource augmentation analysis can essentially simulate the doubled-server-capacity model, and then examine the performance of GriNN(t) and Grin\(^*\)(t).

12.
The Electric Power Research Institute (EPRI) has sponsored the development of a model to assess the long-term, overall performance of the candidate spent fuel and high-level radioactive waste (HLW) disposal facility at Yucca Mountain, Nevada. The model simulates the processes that lead to HLW container corrosion, mobilization of HLW from the spent fuel, transport by groundwater, and use of contaminated groundwater by hypothetical future individuals, leading to radiation doses to those individuals. The model must incorporate a multitude of complex, coupled processes across a variety of technical disciplines. Furthermore, because of the very long time frames involved in the modeling effort (\(10^4\) years), the relative lack of directly applicable data, and the many uncertainties and variabilities in those data, a probabilistic approach to model development was necessary. The developers chose a logic tree approach to represent uncertainties in both conceptual models and model parameter values, judging it the most appropriate. This paper discusses the value and use of logic trees applied to assessing the uncertainties in HLW disposal, describes the components of the model, and presents a few of its results. The paper concludes with a comparison of logic tree and Monte Carlo approaches.
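Schematically, a logic tree assigns probabilities to discrete alternatives at each branch point, and enumerating root-to-leaf paths yields a probability-weighted set of outcomes. A minimal sketch (the branch points and numbers are hypothetical, not EPRI's model):

```python
# Minimal logic-tree sketch (our illustration of the general idea,
# not EPRI's model): each branch point carries discrete alternatives
# with probabilities; enumerating root-to-leaf paths gives a
# probability-weighted set of outcomes.
from itertools import product

# Hypothetical branch points: (label, [(alternative_value, prob), ...])
branches = [
    ("container lifetime [yr]", [(500.0, 0.3), (2000.0, 0.7)]),
    ("groundwater travel [yr]", [(5e3, 0.5), (2e4, 0.5)]),
]

leaves = []
for combo in product(*(alts for _, alts in branches)):
    prob = 1.0
    delay = 0.0
    for value, p in combo:
        prob *= p
        delay += value
    leaves.append((delay, prob))

print("leaves (delay, prob):", leaves)
print("probability-weighted mean delay:",
      sum(v * p for v, p in leaves))
```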

13.
14.
We consider the bus evacuation problem. Given a positive integer B, a bipartite graph G with parts S and \(T \cup \{r\}\) in a metric space, and functions \(l_i :S \rightarrow {\mathbb {Z}}_+\) and \({u_j :T \rightarrow \mathbb {Z}_+ \cup \{\infty \}}\), one wishes to find a set of B walks in G. Each walk must start at r, finish in T, and visit r only once. Moreover, across all walks, each vertex i of S must be visited at least \(l_i\) times and each vertex j of T must be visited at most \(u_j\) times. The objective is to minimize the length of the longest walk. This problem arises in emergency planning situations where the walks correspond to the routes of B buses that must transport each group of people in S to a shelter in T, and the objective is to evacuate the entire population in the minimum amount of time. In this paper, we prove that approximating this problem within less than a constant factor is NP-hard and present a 10.2-approximation algorithm. Further, for the uncapacitated bus evacuation problem, in which \(u_j\) is infinite for each j, we give a 4.2-approximation algorithm.

15.
Cox, L.A. Risk Analysis, 2011, 31(10):1530–1533; discussion 1538–1542
Professor Aven has recently noted the importance of clarifying the meaning of terms such as "scientific uncertainty" for use in risk management and policy decisions, such as when to trigger application of the precautionary principle. This comment examines some fundamental conceptual challenges for efforts to define "accurate" models and "small" input uncertainties by showing that increasing uncertainty in model inputs may reduce uncertainty in model outputs; that even correct models with "small" input uncertainties need not yield accurate or useful predictions for quantities of interest in risk management (such as the duration of an epidemic); and that accurate predictive models need not be accurate causal models.

16.
In this paper, we prove that a simple graph G of sufficiently large order n with minimum degree \(\delta (G)\ge k\ge 2\) is Hamilton-connected, apart from two exceptional classes of graphs, provided the number of edges in G is at least \(\frac{1}{2}(n^2-(2k-1)n + 2k-2)\). This result is then used to derive sufficient spectral conditions for a graph with large minimum degree to be Hamilton-connected, in terms of the spectral radius or the signless Laplacian spectral radius, extending the results of Zhou and Wang (Linear Multilinear Algebra 65(2):224–234, 2017) for sufficiently large n. Moreover, we give a sufficient spectral condition for a graph with large minimum degree to be Hamilton-connected in terms of the spectral radius of its complement.

17.
In this paper we study \(\lambda \)-numbers of several classes of snarks. We show that the \(\lambda \)-number of each Blanuša snark, Flower snark, and Goldberg snark is 6. For \(n\ge 2\), we show that there is a dot product of n Petersen graphs whose \(\lambda \)-number is 6.

18.
This paper studies a new version of the location problem called the mixed center location problem. Let P be a set of n points in the plane. We first consider the mixed 2-center problem, where one of the two centers must be in P, and solve it in \(O(n^2\log n)\) time. Second, we consider the mixed k-center problem, where m of the k centers must be in P, and solve it in \(O(n^{m+O(\sqrt{k-m})})\) time. Motivated by two practical constraints, we propose two variations of the problem. Third, we present a 2-approximation algorithm and three heuristics for the mixed k-center problem (\(k>2\)).
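For background, the textbook 2-approximation for the classic (unconstrained) k-center problem is Gonzalez's farthest-point greedy; a sketch for context (this is not the paper's mixed-center algorithm):

```python
# Farthest-point greedy (Gonzalez) for the classic k-center problem,
# shown for context: the textbook 2-approximation that mixed-center
# variants build on (not the paper's algorithm).
import math

def k_center_greedy(points, k):
    """Pick k centers from points; covering radius is at most 2x optimal."""
    centers = [points[0]]                      # arbitrary first center
    dist = [math.dist(p, centers[0]) for p in points]
    while len(centers) < k:
        i = max(range(len(points)), key=dist.__getitem__)
        centers.append(points[i])              # farthest point becomes a center
        dist = [min(d, math.dist(p, points[i]))
                for d, p in zip(dist, points)]
    return centers, max(dist)

pts = [(0, 0), (1, 0), (9, 9), (10, 9), (0, 10)]
centers, radius = k_center_greedy(pts, 2)
print(centers, "covering radius:", radius)
```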

19.
A resource-sharing system is modeled by a hypergraph H in which a vertex represents a process and an edge represents a resource consisting of all vertices (processes) that have access to it. A schedule of H=(V,E) is a mapping \(f:\mathbb {N}\rightarrow 2^V\), where f(i) is an independent set of H consisting of the processes that operate at round i. The rate of f is defined as \({\rm rate}(f)=\limsup_{n\to\infty}\sum_{i=1}^{n}|f(i)|/(n|V|)\), the average fraction of operating processes per round. The purpose of this paper is to study optimal rates for various classes of schedules under different fairness conditions. In particular, we give relations between these optimal rates and fractional/circular chromatic numbers. For the special case where the hypergraph is a connected graph, we also give a new derivation of an earlier result of Yeh and Zhu.

20.
A coloring of a graph \(G=(V,E)\) is a partition \(\{V_1, V_2, \ldots , V_k\}\) of V into independent sets, called color classes. A vertex \(v\in V_i\) is a Grundy vertex if it is adjacent to at least one vertex in each color class \(V_j\) for every \(j<i\). A coloring is a Grundy coloring if every vertex is a Grundy vertex, and the Grundy number \(\Gamma (G)\) of a graph G is the maximum number of colors in a Grundy coloring. We provide two new upper bounds on the Grundy number of a graph and a stronger version of the well-known Nordhaus–Gaddum theorem. In addition, we give a new characterization of \(\{P_{4}, C_4\}\)-free graphs, supporting a conjecture of Zaker which says that \(\Gamma (G)\ge \delta (G)+1\) for any \(C_4\)-free graph G.
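The definitions can be made concrete with a first-fit sketch: coloring vertices greedily in some order makes every vertex a Grundy vertex by construction, and maximizing over orderings gives \(\Gamma (G)\) (brute force below; our illustration, not the paper's bounds):

```python
# First-fit (greedy) coloring: each vertex receives the smallest
# color absent among its already-colored neighbors, so every vertex
# is a Grundy vertex by construction. The Grundy number is the
# maximum color count over all vertex orderings (brute-forced here;
# tiny graphs only).
from itertools import permutations

def first_fit(order, adj):
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
    return max(color.values())

def grundy_number(adj):
    return max(first_fit(order, adj) for order in permutations(adj))

# P4 (path on 4 vertices): Grundy number 3.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(grundy_number(adj))
```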
