Similar Articles
 20 similar articles found (search time: 15 ms)
1.
Annual data from the Finnish National Salmonella Control Programme were used to build a probabilistic transmission model of salmonella in the primary broiler production chain. The data set consisted of information on grandparent, parent, and broiler flock populations. A probabilistic model was developed to describe the unknown true prevalences, vertical and horizontal transmission, and the dynamic model of infections. By combining these with the observed data, the posterior probability distributions of the unknown parameters and variables could be derived. Predictive distributions were derived for the true number of infected broiler flocks under the adopted intervention scheme, and these were compared with the predictions under no intervention. With the model, the effect of the intervention used in the programme, i.e., eliminating salmonella-positive breeding flocks, could be quantitatively assessed. The 95% probability interval of the posterior predictive distribution for (broiler) flock prevalence in the current (1999) situation was [1.3%-17.4%] without intervention and [0.9%-5.8%] with intervention. In the scenario of one infected grandparent flock, these were [2.8%-43.1%] and [1.0%-5.9%], respectively. Computations were performed using WinBUGS and Matlab software.
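The paper's hierarchical transmission model is fitted with WinBUGS and Matlab; as a much simpler illustration of the underlying Bayesian idea (a posterior distribution over an unknown true prevalence given observed test counts), here is a minimal conjugate beta-binomial sketch in Python. The prior and the counts are hypothetical, not taken from the programme's data:

```python
def posterior_beta(a, b, positives, n):
    """Beta(a, b) prior on prevalence plus binomial test data
    (positives out of n flocks) gives a Beta posterior."""
    return a + positives, b + n - positives

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Hypothetical numbers: 3 positive flocks out of 200 tested, vague Beta(1, 1) prior.
a_post, b_post = posterior_beta(1, 1, 3, 200)
print(beta_mean(a_post, b_post))  # posterior mean prevalence = 4/202
```

The paper's actual model additionally couples such prevalences across generations through vertical and horizontal transmission terms, which is what requires MCMC rather than a closed form.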

2.
We consider a two-agent scheduling problem on a single machine, where the objective is to minimize the total completion time of the first agent with the restriction that the number of tardy jobs of the second agent cannot exceed a given number. It is reported in the literature that the complexity of this problem is still open. We show in this paper that this problem is NP-hard under high multiplicity encoding and can be solved in pseudo-polynomial time under binary encoding. When the first agent's objective is to minimize the total weighted completion time, we show that the problem is strongly NP-hard even when the number of tardy jobs of the second agent is restricted to be zero.

3.

Multiprocessor scheduling, also called scheduling on parallel identical machines to minimize the makespan, is a classic optimization problem which has been extensively studied. Scheduling with testing is an online variant, where the processing time of a job is revealed by an extra test operation; otherwise the job has to be executed for a given upper bound on the processing time. Albers and Eckl recently studied multiprocessor scheduling with testing; among other results, for the non-preemptive setting they presented an approximation algorithm with competitive ratio approaching 3.1016 as the number of machines tends to infinity, and an improved approximation algorithm with competitive ratio approaching 3 when every test operation takes one unit of time. We propose to first sort the jobs into non-increasing order of the minimum of the upper bound and the testing time, then partition the jobs into three groups and process them group by group according to the sorted job order. We show that our algorithm achieves better competitive ratios, approaching 2.9513 as the number of machines tends to infinity in the general case; when every test operation takes one time unit, our algorithm achieves even better competitive ratios approaching 2.8081.
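A minimal sketch of the sorting idea described above: order jobs by min(upper bound, test time), non-increasing, then assign greedily to the least-loaded machine. This omits the paper's three-group partition, and for illustration it assumes the true processing time is known so each job's cost can be written as min(u, t + p); in the online setting that value is only revealed by testing:

```python
import heapq

def schedule_by_key(jobs, m):
    """Sort jobs by min(upper_bound, test_time), non-increasing, then
    assign each job to the currently least-loaded of m machines.
    jobs: list of (upper_bound, test_time, true_processing_time).
    Returns the resulting makespan."""
    order = sorted(jobs, key=lambda j: min(j[0], j[1]), reverse=True)
    loads = [0.0] * m          # min-heap of current machine loads
    heapq.heapify(loads)
    for u, t, p in order:
        cost = min(u, t + p)   # run untested (u) vs. test then run (t + p)
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + cost)
    return max(loads)

# Hypothetical instance: three jobs on two machines.
print(schedule_by_key([(4, 1, 2), (5, 2, 10), (3, 1, 1)], 2))
```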


4.
We consider the stochastic, single-machine earliness/tardiness problem (SET), with the sequence of processing of the jobs and their due-dates as decisions and the objective of minimizing the sum of the expected earliness and tardiness costs over all the jobs. In a recent paper, Baker (2014) shows the optimality of the Shortest-Variance-First (SVF) rule under the following two assumptions: (a) The processing duration of each job follows a normal distribution. (b) The earliness and tardiness cost parameters are the same for all the jobs. In this study, we consider problem SET under assumption (b). We generalize Baker's result by establishing the optimality of the SVF rule for more general distributions of the processing durations and a more general objective function. Specifically, we show that the SVF rule is optimal under the assumption of dilation ordering of the processing durations. Since convex ordering implies dilation ordering (under finite means), the SVF sequence is also optimal under convex ordering of the processing durations. We also study the effect of variability of the processing durations of the jobs on the optimal cost. An application of problem SET in surgical scheduling is discussed.
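The SVF rule itself is a one-line sequencing policy; a minimal sketch, with a hypothetical job representation of (name, mean, variance):

```python
def svf_sequence(jobs):
    """Shortest-Variance-First: sequence jobs in non-decreasing order of
    the variance of their processing durations. Only the variance matters
    for the ordering; the means affect the due-date decisions instead.
    jobs: list of (name, mean_duration, variance)."""
    return [name for name, _mean, _var in sorted(jobs, key=lambda j: j[2])]

# Hypothetical jobs: job "b" has the smallest variance, so it goes first.
print(svf_sequence([("a", 5, 4), ("b", 3, 1), ("c", 7, 2)]))
```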

5.
Load-oriented manufacturing control (LOMC), a well-known probabilistic approach to workload control, is based on limiting and smoothing workload using one static parameter for each workcentre, called the load limit (LL). The value of this parameter is set by the shop managers based on the planned lead time at each workcentre. In this paper the use of LL is shown to be inappropriate for smoothing workloads when the workload is not sufficiently balanced. We propose to enhance the LOMC model by introducing two sets of parameters:

(i) limiting parameters (LPs), static parameters of the workcentres set by the shop managers; LPs are used to limit the workload released to the shop;

(ii) smoothing parameters (SPs), dynamic parameters of the workcentres computed as a function of their real workload; SPs are used to smooth the jobs' workload over downstream workcentres.

A simulation model was used to compare the enhanced model, based on the two parameter sets, with the traditional LOMC model, based on a single parameter set. The simulation runs were carried out under different conditions of due-date assignment, dispatching rules and production mix. The statistical analysis performed on the experimental results confirmed that the enhanced model achieves significantly better due-date performance under unbalanced workload conditions.
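The limiting-parameter role can be sketched as a simple release check: a job is released only if it fits under the LP at every workcentre it visits. The dictionary-based representation and the numbers are hypothetical, and the real LOMC release rule also discounts downstream loads probabilistically, which is omitted here:

```python
def can_release(job_loads, current_loads, limits):
    """Release a job only if, at every workcentre it visits, the released
    workload stays within that workcentre's limiting parameter (LP).
    job_loads / current_loads / limits: dicts keyed by workcentre id."""
    return all(current_loads[wc] + load <= limits[wc]
               for wc, load in job_loads.items())

# Hypothetical shop: the job adds 2 h at "A" and 1 h at "B".
print(can_release({"A": 2.0, "B": 1.0},
                  {"A": 3.0, "B": 0.0},
                  {"A": 5.0, "B": 2.0}))  # fits at both workcentres
```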

6.
A methodology is presented to investigate the recurrence of extraordinary events. The approach is fully general and complies with a canon of inference establishing a set of basic rationality requirements that scientific reasoning should satisfy. In particular, we apply it to model the interarrival time between disastrous oil spills off the Galician coast in the northwest of Spain, one of the greatest risk areas in the world, as confirmed by the Prestige accident of November 2002. We formulate the problem within the logical probability framework, using plausible logic languages with observations to allow the appropriate expression of the evidence. Therein, inference is regarded as the joint selection of a pair of reference and inferred probability distributions that best encode the knowledge about potential times between incidents provided by the available evidence and other higher-order information at hand. To solve it, we employ the REF relative entropy method with fractile constraints. Next, we analyze the variability of the joint entropic solution, as knowledge that a time has elapsed since the last recorded spill is added, by conditioning the evidence. Attention is paid to the variability of two representative parameters: the average reference recurrence time and an inferred characteristic probability fractile for the time to an event. In contrast with classical results, the salient consequence is their nonconstancy with the elapsed time and the appearance of a variability pattern indicating an observational memory, even under the assumption of one-parameter exponential models, traditionally regarded as memoryless. Tanker accidentality is therefore dynamic, changing as time goes on with no further accidents. The generality of the methodology entails that identical conclusions would apply to hazard modeling of any other kind of extraordinary phenomena. This should be considered in risk assessment and management.

7.
We propose a new methodology for structural estimation of infinite horizon dynamic discrete choice models. We combine the dynamic programming (DP) solution algorithm with the Bayesian Markov chain Monte Carlo algorithm into a single algorithm that solves the DP problem and estimates the parameters simultaneously. As a result, the computational burden of estimating a dynamic model becomes comparable to that of a static model. Another feature of our algorithm is that even though the number of grid points on the state variable is small per solution-estimation iteration, the number of effective grid points increases with the number of estimation iterations. This is how we help ease the "curse of dimensionality." We simulate and estimate several versions of a simple model of entry and exit to illustrate our methodology. We also prove that under standard conditions, the parameters converge in probability to the true posterior distribution, regardless of the starting values.

8.
In risk analysis, the treatment of the epistemic uncertainty associated with the probability of occurrence of an event is fundamental. Traditionally, probabilistic distributions have been used to characterize the epistemic uncertainty due to imprecise knowledge of the parameters in risk models. On the other hand, it has been argued that in certain instances such uncertainty may be best accounted for by fuzzy or possibilistic distributions. This seems to be the case in particular for parameters for which the information available is scarce and of a qualitative nature. In practice, it is to be expected that a risk model contains some parameters affected by uncertainties that may be best represented by probability distributions and some other parameters that may be more properly described in terms of fuzzy or possibilistic distributions. In this article, a hybrid method that jointly propagates probabilistic and possibilistic uncertainties is considered and compared with pure probabilistic and pure fuzzy methods for uncertainty propagation. The analyses are carried out on a case study concerning the uncertainties in the probabilities of occurrence of accident sequences in an event tree analysis of a nuclear power plant.
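A common way to combine the two uncertainty representations is to propagate Monte Carlo samples of the probabilistic variables through the alpha-cuts of the possibilistic ones. A minimal sketch for a purely illustrative additive model y = x + z (x probabilistic, z a triangular fuzzy number), not the event-tree case study of the article:

```python
def alpha_cut(a, m, b, alpha):
    """Alpha-cut of a triangular possibility distribution (a, m, b):
    the interval of values whose membership is at least alpha."""
    return (a + alpha * (m - a), b - alpha * (b - m))

def hybrid_propagate(prob_samples, fuzzy, alphas):
    """Hybrid propagation sketch for y = x + z: for each Monte Carlo
    sample of x, shift every alpha-cut of the fuzzy variable z.
    Returns, per sample, a list of (alpha, low, high) intervals."""
    a, m, b = fuzzy
    out = []
    for x in prob_samples:
        cuts = []
        for al in alphas:
            lo, hi = alpha_cut(a, m, b, al)
            cuts.append((al, x + lo, x + hi))
        out.append(cuts)
    return out

# Hypothetical: one sample x = 10, fuzzy z = (0, 1, 2), alpha = 1 (the core).
print(hybrid_propagate([10], (0, 1, 2), [1.0]))
```

The per-alpha interval envelopes across samples yield the lower/upper probability bounds that the hybrid method reports.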

9.
In this paper, we study the following scheduling model: there are n jobs which can be processed in house on a single machine or subcontracted to a subcontractor. If a job is subcontracted, its processing cost differs from the in-house cost and its delivery lead time is a stepwise function of the total processing time of the outsourced jobs. Two objective functions are studied: (1) minimizing the weighted sum of the maximal completion time and the total processing cost, and (2) minimizing the weighted sum of the number of tardy jobs and the total processing cost. For the first problem, we prove that it is NP-hard and give a pseudo-polynomial time algorithm. For the second problem, we prove that it is NP-hard and give a pseudo-polynomial time algorithm for a special case.

10.
Condition-based maintenance is analyzed for multi-stage production with a separate maintenance department. It is assumed that the conditions of the machines deteriorate as a function of multiple production parameters and that the task of maintenance is to maintain predefined operational availabilities of the individual machines. In this context, two problems arise: determining the optimal machine condition that triggers the release of a preventive maintenance job, and scheduling maintenance jobs at the maintenance department. Existing approaches to these problems either assume a monolithic production/maintenance system or concentrate on a decentralized system in which the information flow and resource transfer cause no delays. This paper aims at (1) deriving a triggering mechanism that can cope with relaxed assumptions and (2) developing specific priority rules for scheduling maintenance jobs. To this end, a specific continuous condition monitoring scheme and a suitable information exchange protocol are developed, the factors determining the release situation are operationalized, the impacts of choosing the triggering conditions are identified, and the components of specific priority rules for scheduling maintenance jobs are elaborated. Finally, the performance of the resulting solution approach is analyzed by simulation, systematically varying relevant characteristics of the production/maintenance system, the maintenance task and the priority rules. This research contributes answers to the questions of how the exchange of local information can be structured, how the parameters of condition-based maintenance can be set, and which maintenance-specific priority rules can be applied in the case of incomplete information about deterioration in a decentralized multi-stage production/maintenance system.
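The core of such a triggering mechanism can be sketched as a threshold test that anticipates the information and transfer delay: release the preventive job when the condition, extrapolated over the delay, would cross the failure threshold. The linear-extrapolation form and all numbers are hypothetical; the paper's mechanism is considerably richer:

```python
def maintenance_trigger(condition, threshold, lead_time, degradation_rate):
    """Release a preventive maintenance job early enough that, given the
    information/transfer delay (lead_time) and the machine's current
    degradation rate, the failure threshold is not crossed before the
    maintenance job can be served."""
    return condition + degradation_rate * lead_time >= threshold

# Hypothetical machine: condition 70, threshold 100, delay 10 periods.
print(maintenance_trigger(70, 100, 10, 4))  # fast degradation: trigger now
print(maintenance_trigger(70, 100, 10, 2))  # slow degradation: wait
```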

11.
Strategies for the Online Leasing Problem with Probability Distributions   (cited 7 times: 5 self-citations, 7 citations by others)
In economic systems, decision making increasingly exhibits an online character. When solving such online problems, traditional optimization methods usually assume that future inputs are random variables and seek decisions that are optimal in a probabilistic sense. In recent years, a new research approach has emerged in the optimization field, namely online algorithms and competitive analysis, which offers a new perspective on such online problems; traditional competitive analysis, however, deliberately avoids assumptions about probability distributions. For the online leasing problem, whose input structure is simple and has good statistical properties, ignoring this useful information and applying only the standard competitive-ratio analysis is clearly unsatisfactory. In this paper, we introduce the probability distribution of the input structure into pure competitive analysis, thereby building an optimal online leasing model for the probabilistic setting, and we derive the optimal competitive strategy and its competitive ratio.
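The classical leasing (ski-rental) trade-off behind this abstract: rent at a per-period price or buy outright, without knowing how long the resource will be needed. When a distribution over the horizon is available, the expected cost of each "buy at period k" strategy can be computed directly. A minimal sketch with hypothetical prices and distribution, not the paper's model or its competitive ratio:

```python
def expected_cost(k, buy_price, rent, dist):
    """Expected cost of the strategy 'rent for the first k-1 periods,
    then buy at period k'. dist maps a total number of periods needed
    to its probability; k beyond the horizon means 'never buy'."""
    total = 0.0
    for n, p in dist.items():
        cost = n * rent if n < k else (k - 1) * rent + buy_price
        total += p * cost
    return total

def best_buy_period(buy_price, rent, dist):
    """Pick the buy period minimizing expected cost under the known
    distribution over the leasing horizon."""
    horizon = max(dist) + 1
    return min(range(1, horizon + 1),
               key=lambda k: expected_cost(k, buy_price, rent, dist))

# Hypothetical: buying costs 10 rents; the horizon is 1 or 100 periods, 50/50.
print(best_buy_period(10, 1, {1: 0.5, 100: 0.5}))
```

Without the distribution, the classical break-even rule (rent until the cumulative rent reaches the purchase price, then buy) guarantees a competitive ratio below 2 in the worst case.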

12.
This paper demonstrates a new methodology for probabilistic public health risk assessment using the first-order reliability method. The method provides the probability that incremental lifetime cancer risk exceeds a threshold level, and the probabilistic sensitivity quantifying the relative impact of the uncertainty of each random variable on the exceedance probability. The approach is applied to a case study given by Thompson et al. (1) on cancer risk caused by ingestion of benzene-contaminated soil, and the results are compared with those of the Monte Carlo method. Parametric sensitivity analyses are conducted to assess the sensitivity of the probabilistic event with respect to the distribution parameters of the basic random variables, such as the mean and standard deviation. The technique is a novel approach to probabilistic risk assessment and can be used in situations where Monte Carlo analysis is computationally expensive, such as when the simulated risk lies in the tail of the risk probability distribution.
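The quantity both methods target is the exceedance probability P(risk > threshold). A minimal Monte Carlo sketch of that baseline (the first-order reliability method approximates the same probability analytically, which is what makes it cheaper in the tail); the risk model passed in is a caller-supplied placeholder, not the benzene case study:

```python
import random

def exceedance_probability(simulate_risk, threshold, n=100_000, seed=0):
    """Monte Carlo estimate of P(risk > threshold).
    simulate_risk: callable taking a random.Random and returning one
    sampled risk value."""
    rng = random.Random(seed)
    hits = sum(simulate_risk(rng) > threshold for _ in range(n))
    return hits / n

# Hypothetical risk model: uniform on [0, 1); true exceedance of 0.9 is 0.1.
print(exceedance_probability(lambda rng: rng.random(), 0.9))
```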

13.
A Minimum-Risk Strategy for the Online Newsboy Problem under Probability Expectations   (cited 1 time: 0 self-citations, 1 citation by others)
The newsboy (newsvendor) problem is a basic model in inventory management. Existing newsboy models mainly describe and measure risk through mean-variance methods and expected-utility objectives. These methods assume that the demand distribution is known, whereas in practice the demand distribution is often hard to characterize completely. This paper uses probability expectations as a format for describing incomplete demand-distribution information and, based on the idea of online risk compensation, builds a minimum-risk model for the newsboy problem with incomplete demand information. A minimum-risk strategy is designed from this model, allowing the newsboy to choose the optimal order quantity according to a self-specified revenue target and probability expectations about the future.
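For contrast with the incomplete-information setting above, the classical newsvendor solution when the demand distribution is fully known is the critical-fractile quantile. A minimal sketch with hypothetical prices and a Uniform demand, shown only as the known-distribution benchmark, not the paper's minimum-risk strategy:

```python
def critical_fractile(price, cost, salvage=0.0):
    """Classical newsvendor service level: underage / (underage + overage)."""
    underage = price - cost      # profit lost per unit of unmet demand
    overage = cost - salvage     # loss per unsold unit
    return underage / (underage + overage)

def optimal_order_uniform(price, cost, lo, hi, salvage=0.0):
    """Optimal order quantity when demand is Uniform(lo, hi): the
    critical fractile of the demand distribution."""
    return lo + critical_fractile(price, cost, salvage) * (hi - lo)

# Hypothetical: sell at 10, buy at 4, demand Uniform(0, 100).
print(optimal_order_uniform(10, 4, 0, 100))
```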

14.
One of the major issues in the development of large, rule-based expert systems is improving their performance efficiency. One way to address this issue is to reduce the number of unsuccessful tries a system goes through before executing a rule to establish a goal or an intermediary fact. On average, the number of unsuccessful tries can be reduced if the rules tried first are those expected to execute most frequently, and this ordering can be established by extracting information on the probability distributions of the input parameters. In this paper, a rule base is modeled as a network and simulated to investigate potential performance improvements from changing the order in which the rules are tested. The model of the rule base is also used to investigate performance gains achieved by parameter factorization and premise clause reordering.
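The reordering argument can be made concrete with a small expected-value computation. A sketch under the simplifying assumption that exactly one rule fires per cycle and rules are tested sequentially, with hypothetical firing probabilities:

```python
def expected_tries(fire_probs):
    """Expected number of rule tests until one fires, when exactly one
    rule fires and fire_probs[i] is the probability it is the rule in
    position i of the test order (position i costs i + 1 tests)."""
    return sum((i + 1) * p for i, p in enumerate(fire_probs))

def reorder_most_likely_first(fire_probs):
    """Testing the most frequently firing rules first minimizes the
    expected number of tests (an exchange argument shows any inversion
    can only increase the expectation)."""
    return sorted(fire_probs, reverse=True)

probs = [0.1, 0.2, 0.7]                       # hypothetical firing probabilities
print(expected_tries(probs))                  # original order
print(expected_tries(reorder_most_likely_first(probs)))  # improved order
```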

15.
We consider the one-machine scheduling problem to minimize the number of late jobs under the group technology assumption, where jobs are classified into groups and all jobs from the same group must be processed contiguously. This problem is shown to be strongly NP-hard, even for the case of unit processing time and zero set-up time. A polynomial time algorithm is developed for the restricted version in which the jobs in each group have the same due date. However, the problem is proved to be ordinarily NP-hard if the jobs in a group have the same processing time as well as the same due date.

16.
Mathieu Lefebvre, LABOUR, 2012, 26(2): 137-155
This paper presents a model where young and old workers compete for the same type of jobs in the presence of retirement opportunities. Within this framework, we show that increased retirement opportunities (such as a decrease in the retirement age) in most cases have a depressing impact on the unemployment rate, because the number of vacancies posted by firms is influenced by the probability that an old worker will retire. We show that the degree to which younger workers are affected by the retirement of older workers depends on the relative productivity of young and old workers. Only when older workers are much more productive than young workers may retirement reduce unemployment.

17.
In recent years, the general binary quadratic programming (BQP) model has been widely applied to solve a number of combinatorial optimization problems. In this paper, we recast the maximum vertex weight clique problem (MVWCP) into this model, which is then solved by a probabilistic tabu search algorithm designed for the BQP. Experimental results on 80 challenging DIMACS-W and 40 BHOSLIB-W benchmark instances demonstrate that this general approach is viable for solving the MVWCP.

18.
The maximum clique problem is a classical problem in combinatorial optimization that has a broad range of applications in graph-based data mining, social and biological network analysis and a variety of other fields. This article investigates the problem when the edges fail independently with known probabilities. This leads to the maximum probabilistic clique problem, which is to find a subset of vertices of maximum cardinality that forms a clique with probability at least \(\theta \in [0,1]\), a user-specified probability threshold. We show that the probabilistic clique property is hereditary and extend a well-known exact combinatorial algorithm for the maximum clique problem to a sampling-free exact algorithm for the maximum probabilistic clique problem. The performance of the algorithm is benchmarked on a test-bed of DIMACS clique instances and on a randomly generated test-bed.
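Under independent edge failures, the probability that a vertex subset forms a clique is simply the product of its edge survival probabilities, which also makes the hereditary property evident (dropping a vertex removes factors, so the product can only grow). A minimal sketch with a hypothetical edge-probability map:

```python
from itertools import combinations

def clique_probability(vertices, edge_prob):
    """Probability that `vertices` forms a clique when edges fail
    independently. edge_prob maps an unordered vertex pair (frozenset)
    to its survival probability; absent pairs have probability 0."""
    p = 1.0
    for u, v in combinations(sorted(vertices), 2):
        p *= edge_prob.get(frozenset((u, v)), 0.0)
    return p

def is_probabilistic_clique(vertices, edge_prob, theta):
    """Theta-clique test; hereditary, so every subset of a theta-clique
    is itself a theta-clique, which enables branch-and-bound pruning."""
    return clique_probability(vertices, edge_prob) >= theta

# Hypothetical 3-vertex graph.
ep = {frozenset((1, 2)): 0.9, frozenset((1, 3)): 0.8, frozenset((2, 3)): 0.5}
print(clique_probability([1, 2, 3], ep))  # 0.9 * 0.8 * 0.5
```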

19.
Addressing the problem of risk transfer between supply-chain agents, this paper introduces an insurance agreement with a deductible on top of a wholesale-price contract and gives a precise definition of the loss and its probability distribution. It examines the effect of the deductible on the decision makers' objective functions under risk aversion and derives conditions characterizing the optimal values. A comparative analysis of the optimal order quantity in the risk-neutral case shows that the supply chain can be coordinated under certain conditions. Finally, a numerical analysis studies how the relevant parameters affect the insurance premium, providing decision support for setting the terms of the insurance agreement.

20.
In the analysis of the risk associated with rare events that may lead to catastrophic consequences with large uncertainty, it is questionable whether the knowledge and information available for the analysis can be reflected properly by probabilities. Approaches other than purely probabilistic ones have been suggested, for example, using interval probabilities, possibilistic measures, or qualitative methods. In this article, we look into the problem and identify a number of issues that are foundational for its treatment. The foundational issues addressed reflect on the position that "probability is perfect" and take into open consideration the need for an extended framework for risk assessment that reflects the separation that practically exists between analyst and decision maker.
