Similar Literature
20 similar documents found.
1.
Optimal linear discriminant models maximize percentage accuracy for dichotomous classifications, but they are rarely used because no theoretical framework exists that allows one to make valid statements about the statistical significance of the outcomes of such analyses. This paper describes an analytic solution for the theoretical distribution of optimal values for univariate optimal linear discriminant analysis, under the assumption that the data are random and continuous. We also present the theoretical distribution for sample sizes up to N = 30. The discovery of a statistical framework for evaluating the performance of optimal discriminant models should greatly increase their use by scientists in all disciplines.
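
As a rough illustration of the idea (not the paper's analytic derivation), the sketch below brute-forces the accuracy-maximizing cutoff for a univariate two-group problem; the data and all names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical two-group univariate sample (not from the paper)
    x0 = rng.normal(0.0, 1.0, 40)   # group 0
    x1 = rng.normal(1.0, 1.0, 40)   # group 1
    x = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(40, int), np.ones(40, int)])

    def best_cutoff(x, y):
        """Exhaustively search candidate cutoffs (midpoints between sorted
        observations) and return the one maximizing classification accuracy."""
        xs = np.sort(np.unique(x))
        candidates = (xs[:-1] + xs[1:]) / 2.0
        best_c, best_acc = None, -1.0
        for c in candidates:
            # try both orientations: group 1 above or below the cutoff
            for rule in (x > c, x <= c):
                acc = np.mean(rule.astype(int) == y)
                if acc > best_acc:
                    best_c, best_acc = c, acc
        return best_c, best_acc

    cutoff, accuracy = best_cutoff(x, y)
    print(f"optimal cutoff = {cutoff:.3f}, training accuracy = {accuracy:.3f}")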

2.
3.

We study minmax due-date assignment and scheduling problems on a single machine based on a common flow allowance, and extend known results in scheduling theory by considering convex resource allocation. The total cost of a given job consists of its earliness, tardiness, and flow-allowance components. The common flow allowance and the jobs' actual processing times are decision variables, implying that the due-dates and actual processing times can be controlled by allocating additional resource to the job operations. Our goal is therefore to seek the optimal job sequence and the optimal job-dependent due-dates along with the actual processing times. In all of the problems addressed, we minimize the maximal cost among all jobs subject to a constraint on resource consumption. We start by analyzing and solving the problem with position-independent workloads and then proceed to position-dependent workloads. Finally, the results are generalized to the method of a common due-window. Closed-form solutions are provided for all of the studied problems, leading to polynomial-time algorithms.


4.
In the manufacturing industry, preventive maintenance (PM) is carried out to minimise the probability of unexpected plant breakdown. Planned PM is preferred because disruption to operation is then minimised. Suggested PM intervals are normally prepared by the original equipment manufacturers (OEMs); however, owing to the multifaceted relationship between operating context and production requirements in different plants, it is unlikely that the intervals prescribed by the OEMs are optimal. Reliability, budget and breakdown outage costs are some of the critical factors affecting the calculation of optimal maintenance intervals, and maintenance managers must determine those intervals under the differing requirements set by management. In this paper, three models are proposed to calculate optimal maintenance intervals for a multi-component system in a factory, subject respectively to a minimum required reliability, a maximum allowable budget, and minimum total cost. Numerical examples illustrate the application and usefulness of the proposed models.
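
A minimal sketch of the kind of computation involved, assuming a classical age-replacement cost-rate model with Weibull failures and a reliability constraint; the cost figures and Weibull parameters are hypothetical and this is not one of the paper's three models.

    import numpy as np

    # Hypothetical Weibull parameters and costs (not from the paper)
    beta, eta = 2.5, 1000.0      # shape, scale (hours)
    c_p, c_f = 500.0, 5000.0     # planned PM cost vs. failure (breakdown) cost
    R_min = 0.80                 # minimum required reliability at the PM interval

    def reliability(t):
        return np.exp(-(t / eta) ** beta)

    def cost_rate(T, n=2000):
        """Classical age-replacement cost per unit time for PM interval T."""
        t = np.linspace(0.0, T, n)
        expected_cycle_length = np.trapz(reliability(t), t)
        expected_cycle_cost = c_p * reliability(T) + c_f * (1.0 - reliability(T))
        return expected_cycle_cost / expected_cycle_length

    # Grid search over intervals that also satisfy the reliability constraint
    candidates = np.linspace(50.0, 2000.0, 400)
    feasible = [T for T in candidates if reliability(T) >= R_min]
    T_star = min(feasible, key=cost_rate)
    print(f"optimal PM interval ~ {T_star:.0f} h, cost rate = {cost_rate(T_star):.3f}")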

5.
Many industrial products have three phases in their product lives: the infant-mortality, normal, and wear-out phases. In the infant-mortality phase the failure rate is high but decreasing; in the normal phase it remains constant; and in the wear-out phase it is increasing. A burn-in procedure may be used to remove early failures before a product is shipped to consumers. A cost model is formulated to find the optimal burn-in time, which minimizes the expected sum of the manufacturing cost, the burn-in cost, and the warranty cost incurred by failed items found during the warranty period. A mixture of a Weibull distribution with shape parameter less than one and an exponential distribution (the W-E distribution) is used to describe the infant-mortality and normal phases of the product life. The product under consideration can be either repairable or non-repairable. When the change-point of the product life distribution is unknown, it is estimated by the maximum-likelihood method. The effects of sample size on estimation error and on the performance of the model are studied, and a sensitivity analysis examines the effects of several parameters of the W-E distribution and of the costs on the optimal burn-in time.
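
A hedged sketch of the optimization, assuming a minimal-repair (NHPP) warranty model with a stylized Weibull-exponential mixture life distribution; the parameter values, cost names, and the specific cost decomposition are illustrative assumptions rather than the paper's exact model.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Hypothetical W-E mixture and cost parameters (illustrative only)
    p = 0.15                     # fraction of "weak" (infant-mortality) units
    beta, eta = 0.5, 500.0       # Weibull shape < 1 (decreasing failure rate), scale
    lam = 1.0 / 10000.0          # exponential rate of the normal phase
    w = 2000.0                   # warranty length (hours)
    c_b, c_shop, c_field = 0.02, 5.0, 500.0  # burn-in $/h, shop repair, field repair

    def cum_hazard(t):
        # survival of the mixture; cumulative hazard H(t) = -ln S(t)
        S = p * np.exp(-(t / eta) ** beta) + (1 - p) * np.exp(-lam * t)
        return -np.log(S)

    def expected_cost(b):
        # minimal-repair assumption: failures follow an NHPP with this cumulative hazard
        shop_failures = cum_hazard(b)
        field_failures = cum_hazard(b + w) - cum_hazard(b)
        return c_b * b + c_shop * shop_failures + c_field * field_failures

    res = minimize_scalar(expected_cost, bounds=(0.0, 2000.0), method="bounded")
    print(f"optimal burn-in time ~ {res.x:.0f} h, expected cost = {res.fun:.2f}")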

6.

We study the optimal flow control of a manufacturing system subject to random failures and repairs. Most previous work has proved that, for constant demand rates and exponentially distributed machine failure and repair times, the hedging point policy is optimal. The aim of this study is to extend the hedging point policy to models with non-exponential failure and repair time distributions and random demand rates. The performance measure is the cost associated with inventory holding and backorder penalties. We find that the structure of the hedging point policy can be parametrized by a single factor representing the critical stock level, or threshold. Under the corresponding hedging point policy, simulation experiments are used to construct input-output data from which an estimate of the incurred cost function is obtained through regression analysis, and the best parameter value of the policy is derived by minimizing the estimated cost function. The extended hedging point policy is validated and shown to be quite effective, and it is applicable to a wide variety of complex problems (i.e. non-exponential failure and repair time distributions and random demand rates) for which analytical solutions may not be easily obtained.
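
The sketch below, assuming a heavily simplified discrete-time model with hypothetical gamma up times and lognormal repair times, illustrates how the single threshold of a hedging point policy can be tuned from simulated costs; the paper fits a regression to the simulated costs, whereas this sketch simply grid-searches them.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical system parameters (per period); illustration only
    d, u_max = 1.0, 1.5          # demand rate, maximum production rate
    h, b = 1.0, 10.0             # holding and backorder cost per unit per period
    T, dt = 20000, 1.0           # horizon and time step

    def simulate_cost(z):
        """Average cost of the hedging point policy with threshold z."""
        x, up, clock = 0.0, True, rng.gamma(4.0, 25.0)   # non-exponential up time
        cost = 0.0
        for _ in range(T):
            rate = (u_max if x < z else d) if up else 0.0
            x += (rate - d) * dt
            cost += (h * max(x, 0.0) + b * max(-x, 0.0)) * dt
            clock -= dt
            if clock <= 0.0:                             # state change: fail or repair
                up = not up
                clock = rng.gamma(4.0, 25.0) if up else rng.lognormal(2.0, 0.5)
        return cost / T

    thresholds = np.arange(0.0, 60.0, 2.0)
    costs = [simulate_cost(z) for z in thresholds]
    z_star = thresholds[int(np.argmin(costs))]
    print(f"estimated optimal hedging level z* ~ {z_star:.1f}")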

7.
We develop variations of the M|G|1 queue to model the process of software maintenance within organizations and use these models to compute the optimal allocation of resources to software maintenance. User requests are assumed to arrive according to a Poisson process, and a binomial distribution is used to model duplication of requests. We obtain expressions for expected queue lengths with an exponential server operating under an N-policy for an integer N ≥ 1. We also obtain the optimal batching size and mean service rate by minimizing a total cost consisting of the cost of the server, the cost of waiting, and, where applicable, the fixed cost of maintenance.
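
For intuition, a sketch of the exponential-server special case, relying on the textbook M/M/1 N-policy results E[L] = ρ/(1−ρ) + (N−1)/2 and expected cycle length N/(λ(1−ρ)); the cost figures are hypothetical, and the paper's M|G|1 variants with request duplication are richer than this.

    import numpy as np

    # Hypothetical cost and demand parameters (illustration only)
    lam = 4.0            # Poisson arrival rate of maintenance requests
    c_server = 30.0      # cost per unit of service rate per unit time
    c_wait = 8.0         # waiting cost per request per unit time
    K = 50.0             # fixed cost incurred each time service is (re)started

    def cost_rate(N, mu):
        """Long-run cost per unit time of an M/M/1 server under an N-policy,
        using the textbook mean-queue-length and cycle-length formulas."""
        rho = lam / mu
        if rho >= 1.0:
            return np.inf
        mean_in_system = rho / (1.0 - rho) + (N - 1) / 2.0
        setups_per_time = lam * (1.0 - rho) / N
        return c_server * mu + c_wait * mean_in_system + K * setups_per_time

    best = min(((N, mu) for N in range(1, 31) for mu in np.arange(4.5, 15.0, 0.1)),
               key=lambda p: cost_rate(*p))
    print(f"optimal N = {best[0]}, optimal service rate = {best[1]:.1f}, "
          f"cost = {cost_rate(*best):.2f}")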

8.
Artificial neural networks are comparatively new methods for classification. We investigate two important issues in building neural network models: network architecture and the size of the training sample. Experiments were designed and carried out on two-group classification problems to answer these model-building questions. The first experiment deals with the selection of architecture and sample size for different classification problems; the results show that both choices depend on the objective, namely whether one wishes to maximize the classification rate on the training sample or to maximize the generalizability of the network. The second experiment compares neural network models with classical models such as linear and quadratic discriminant analysis and with nonparametric methods such as k-nearest-neighbor and linear programming. The results show that neural networks are comparable to, if not better than, these other methods in terms of classification rates on the training samples, but not on the test samples.
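
A small scikit-learn sketch of the kind of comparison described, on simulated two-group data with an arbitrary architecture and sample split; it is not the paper's experimental design.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier

    # Simulated two-group data; sizes and architecture are arbitrary choices
    X, y = make_classification(n_samples=600, n_features=4, n_informative=3,
                               n_redundant=0, n_classes=2, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    models = {
        "neural network (6 hidden)": MLPClassifier(hidden_layer_sizes=(6,),
                                                   max_iter=2000, random_state=0),
        "linear discriminant": LinearDiscriminantAnalysis(),
        "quadratic discriminant": QuadraticDiscriminantAnalysis(),
        "k-nearest-neighbor": KNeighborsClassifier(n_neighbors=5),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(f"{name:26s} train = {model.score(X_tr, y_tr):.3f} "
              f"test = {model.score(X_te, y_te):.3f}")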

9.
Omega, 2005, 33(5): 435-450
Lot streaming is a technique that splits a processing batch into several transfer batches so that overlapping operations can be performed in different manufacturing stages and production can be accelerated. This paper proposes two cost models for solving lot streaming problems in a multistage flow shop. The purpose is to determine the optimal processing batch size and the optimal number of transfer batches that minimize the total annual cost in each model. In the first model, a more complete and accurate method is developed to compute the costs of raw-material, work-in-process, and finished-product inventories; the total cost comprises the setup cost, the transfer-batch movement cost, the holding costs of these three inventory types, and the finished-product shipment cost. The second model contains not only the four costs of the first model but also an imputed cost associated with the makespan. The total annual cost functions of both models are shown to be convex, and two solution approaches are suggested. A three-phase experiment was conducted to explore the effect on the optimal solution of changing the value of one parameter at a time; the results indicate that three parameters have significant effects on the optimal solution.
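
A deliberately stylized sketch of the joint search over processing batch size and number of transfer batches; the annual-cost function and every parameter below are invented for illustration and do not reproduce either of the paper's models.

    import numpy as np

    # Stylized inputs: D annual demand, S setup cost per processing batch,
    # m cost per transfer-batch move, h_wip / h_fg holding costs (all hypothetical)
    D, S, m = 50_000, 400.0, 12.0
    h_wip, h_fg = 2.0, 5.0

    def annual_cost(Q, k):
        """Stylized total annual cost for processing batch size Q split into k transfer batches."""
        setups = S * D / Q
        moves = m * k * D / Q
        wip_holding = h_wip * Q / (2.0 * k)   # smaller transfer batches -> less WIP
        fg_holding = h_fg * Q / 2.0
        return setups + moves + wip_holding + fg_holding

    grid = [(Q, k) for Q in range(200, 5001, 50) for k in range(1, 21)]
    Q_star, k_star = min(grid, key=lambda qk: annual_cost(*qk))
    print(f"optimal processing batch ~ {Q_star}, transfer batches = {k_star}, "
          f"annual cost = {annual_cost(Q_star, k_star):,.0f}")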

10.
This paper presents point and interval estimators of both long-run and single-period target quantities in a simple cost-volume-profit (C-V-P) model. The model is a stochastic version of the “accountant's break-even chart” whose major component is a semivariable cost function. Although these features suggest obvious possibilities for practical application, a major purpose of this paper is to examine the statistical properties of target-quantity estimators in C-V-P analysis. It is shown that point estimators of target quantity are biased and possess no moments of positive order, but are consistent. These properties are also shared by previous break-even models, even when all parameters are assumed known with certainty. After a test for positive variable margins, Fieller's [6] method is used to obtain interval estimators of the relevant target quantities. This procedure therefore minimizes possible ambiguities in stochastic break-even analysis (noted by Ekern [3]).
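
A generic sketch of a Fieller-type interval for a break-even-style ratio (estimated fixed cost over estimated contribution margin), with independent estimates and hypothetical numbers; it follows the standard Fieller construction rather than the paper's specific derivation.

    import numpy as np
    from scipy import stats

    def fieller_interval(a, var_a, b, var_b, df, level=0.95, cov_ab=0.0):
        """Fieller confidence set for the ratio a/b of two normal estimates;
        returns (lower, upper) when the set is a finite interval."""
        t = stats.t.ppf(0.5 + level / 2.0, df)
        # quadratic (b^2 - t^2 v_bb) r^2 - 2 (ab - t^2 v_ab) r + (a^2 - t^2 v_aa) <= 0
        A = b**2 - t**2 * var_b
        B = a * b - t**2 * cov_ab
        C = a**2 - t**2 * var_a
        if A <= 0.0:
            raise ValueError("denominator not significantly nonzero; Fieller set unbounded")
        disc = np.sqrt(B**2 - A * C)
        return (B - disc) / A, (B + disc) / A

    # Hypothetical estimates: fixed cost F_hat and unit contribution margin m_hat
    F_hat, se_F = 120_000.0, 8_000.0   # estimated fixed cost and its standard error
    m_hat, se_m = 15.0, 1.2            # estimated (price - unit variable cost)
    lo, hi = fieller_interval(F_hat, se_F**2, m_hat, se_m**2, df=28)
    print(f"break-even quantity point estimate = {F_hat / m_hat:,.0f} units")
    print(f"95% Fieller interval = ({lo:,.0f}, {hi:,.0f}) units")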

11.
Paul A. Rubin, Decision Sciences, 1991, 22(3): 519-535
Linear programming discriminant analysis (LPDA) models are designed around a variety of objective functions, each representing a different measure of separation of the training samples by the resulting discriminant function. A separation failure is defined to be the selection of an “optimal” discriminant function which incompletely separates a pair of completely separable training samples. Occurrence of a separation failure suggests that the chosen discriminant function may have an unnecessarily low classification accuracy on the actual populations involved. In this paper, a number of the LPDA models proposed for the two-group case are examined to learn which are subject to separation failure. It appears that separation failure in any model can be avoided by applying the model twice, reversing group designations.
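
For concreteness, one common MSD-type LPDA formulation (minimize the sum of deviations with a unit separation gap) written as a linear program; it is offered as a generic example of the model class, not as any of the specific formulations examined in the paper, and all data are simulated.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(7)
    # Hypothetical two-group training samples
    A = rng.normal([0.0, 0.0], 1.0, size=(30, 2))   # group A
    B = rng.normal([2.5, 2.0], 1.0, size=(30, 2))   # group B

    # Variables: weights w (2, free), cutoff c (free), deviations d (60, >= 0)
    nA, nB, p = len(A), len(B), 2
    n = nA + nB
    c_obj = np.concatenate([np.zeros(p + 1), np.ones(n)])
    # Group A:  w.x_i - c - d_i <= -1   (scores should fall below the cutoff)
    # Group B: -w.x_i + c - d_i <= -1   (scores should fall above the cutoff)
    A_ub = np.zeros((n, p + 1 + n))
    A_ub[:nA, :p] = A;  A_ub[:nA, p] = -1.0
    A_ub[nA:, :p] = -B; A_ub[nA:, p] = 1.0
    A_ub[np.arange(n), p + 1 + np.arange(n)] = -1.0
    b_ub = -np.ones(n)
    bounds = [(None, None)] * (p + 1) + [(0, None)] * n
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    w, cut = res.x[:p], res.x[p]
    accuracy = (np.mean(A @ w < cut) + np.mean(B @ w > cut)) / 2.0
    print("weights:", np.round(w, 3), "cutoff:", round(cut, 3),
          "balanced training accuracy:", round(accuracy, 3))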

12.
This study presents a new robust estimation method that can produce a regression median hyperplane for any data set. The method starts with the dual variables obtained from least absolute value estimation and then uses two specially designed goal programming models to obtain regression median estimators that are less sensitive to small sample sizes and skewed error distributions than least absolute value estimators. The superiority of the new robust estimators over least absolute value estimators is confirmed by two illustrative data sets and a Monte Carlo simulation study.
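
A sketch of the first stage only, least absolute value (regression median) estimation posed as a linear program; the paper's goal programming refinement is not reproduced, and the data are simulated.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(2)
    n, p = 60, 2
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # intercept + 2 regressors
    beta_true = np.array([1.0, 2.0, -1.0])
    y = X @ beta_true + rng.standard_t(df=3, size=n)             # heavy-tailed noise

    # LAD as an LP: min sum(u + v)  s.t.  X b + u - v = y,  u, v >= 0,  b free
    k = X.shape[1]
    c = np.concatenate([np.zeros(k), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    beta_lad = res.x[:k]
    print("LAV (regression median) coefficients:", np.round(beta_lad, 3))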

13.
A recent paper by Ferrier and Buzby provides a framework for selecting the sample size when testing a lot of beef trim for Escherichia coli O157:H7; it equates the averted costs of recalls and health damages from contaminated meat sold to consumers with the increased costs of testing, while allowing for uncertainty about the underlying prevalence of contamination, and concludes that the optimal sample size is larger than the current one. However, Ferrier and Buzby's optimization model contains a number of errors, and their simulations failed to consider the available evidence about the likelihood of the scenarios explored under the model. After correctly modeling microbial prevalence as dependent on portion size and selecting model inputs based on the available evidence, the model suggests that the optimal sample size is zero under most plausible scenarios. It does not follow, however, that sampling beef trim for E. coli O157:H7, or food safety sampling more generally, should be abandoned. Sampling is not generally cost-effective as a direct consumer safety control measure because of the extremely large sample sizes required to provide a high degree of confidence of detecting very low acceptable defect levels. Rather, food safety verification sampling creates economic incentives for food-producing firms to develop, implement, and maintain effective control measures that limit the probability and degree of noncompliance with regulatory limits or private contract specifications.

14.
The economically optimal sample size in a food safety test balances the marginal costs and marginal benefits of increasing the sample size. We provide a method for selecting the sample size when testing beef trim for Escherichia coli O157:H7 that equates the averted costs of recalls and health damages from contaminated meat sold to consumers with the increased costs of testing, while allowing for uncertainty about the underlying prevalence of contamination. Using simulations, we show that, in most cases, the optimal sample size is larger than the current sample size of 60 and, in some cases, exceeds 120. Moreover, lots with a lower prevalence rate have a higher expected damage because their contamination is more difficult to detect; our simulations indicate that these lots have a higher optimal sampling rate.
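
A stylized sketch of the trade-off described in this and the preceding abstract, assuming detection probability 1 − (1 − p)^n for per-sample prevalence p and damages proportional to the contaminated fraction that escapes detection; every number is illustrative, and the optimum is highly sensitive to these inputs, which is precisely what separates the two studies' conclusions.

    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative inputs only -- the optimum is very sensitive to these choices
    c_test = 15.0                              # cost per sample analysed
    damage = 2_000_000.0                       # damages if contaminated product is sold
    prev_draws = rng.beta(1.0, 50.0, 20_000)   # uncertain per-sample prevalence

    def expected_cost(n):
        """Testing cost plus expected damages, where damages are assumed to scale
        with the contaminated fraction that escapes an n-sample test."""
        p_all_negative = (1.0 - prev_draws) ** n
        return c_test * n + damage * np.mean(prev_draws * p_all_negative)

    sizes = np.arange(0, 301)
    n_star = int(sizes[np.argmin([expected_cost(n) for n in sizes])])
    print(f"cost-minimizing sample size under these assumptions: {n_star}")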

15.
GARCH models are commonly used as latent processes in econometrics, financial economics, and macroeconomics, yet no exact likelihood analysis of these models has been provided so far. In this paper we outline the issues and suggest a Markov chain Monte Carlo algorithm that allows the calculation of a classical estimator via the simulated EM algorithm, or of a Bayesian solution, in O(T) computational operations, where T denotes the sample size. We assess the performance of the proposed algorithm on artificial examples and in an empirical application to 26 UK sectorial stock returns, and compare it with existing approximate solutions.
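
For orientation only: a minimal simulator and Gaussian log-likelihood for an observable-returns GARCH(1,1), the building block that likelihood-based schemes evaluate; the paper's setting treats GARCH as a latent process, for which this closed-form likelihood is unavailable and the proposed MCMC/simulated-EM machinery is needed. Parameter values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_garch11(T, omega, alpha, beta):
        """Simulate returns y_t = sigma_t * eps_t with
        sigma_t^2 = omega + alpha * y_{t-1}^2 + beta * sigma_{t-1}^2."""
        y = np.empty(T)
        sigma2 = omega / (1.0 - alpha - beta)        # start at unconditional variance
        for t in range(T):
            y[t] = np.sqrt(sigma2) * rng.standard_normal()
            sigma2 = omega + alpha * y[t] ** 2 + beta * sigma2
        return y

    def garch11_loglik(params, y):
        """Gaussian log-likelihood of a GARCH(1,1) for observed returns."""
        omega, alpha, beta = params
        sigma2 = np.var(y)                           # common initialization
        ll = 0.0
        for yt in y:
            ll += -0.5 * (np.log(2.0 * np.pi) + np.log(sigma2) + yt ** 2 / sigma2)
            sigma2 = omega + alpha * yt ** 2 + beta * sigma2
        return ll

    y = simulate_garch11(1000, omega=0.1, alpha=0.08, beta=0.90)
    print("log-likelihood at the true parameters:",
          round(garch11_loglik((0.1, 0.08, 0.90), y), 2))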

16.
Dan B. Rinks, Omega, 1985, 13(3): 181-190
A forward-looking production planning heuristic with backward recourse is developed using marginal analysis. In the search for a minimum-cost solution, a set of rules derived by Kunreuther and Morton for determining planning horizons is employed. The logic of the heuristic is shown to be similar to that of several dynamic lot-sizing models. The marginal analysis production planning (MAPP) heuristic is computationally more efficient than optimizing approaches and gives results that are generally less than 5% more expensive than the optimal solution. In addition, through the notion of level periods, the heuristic lets the user easily investigate strategies in which the work force size and daily production rate remain constant for a specified number of periods.

17.

It has been observed empirically that productivity improves as production continues, owing to system 'learning', but deteriorates once the activity is stopped, owing to system 'forgetting'. Both learning and forgetting follow an exponential form, with a 'doubling factor' ranging between 0.75 and 0.98. We review and critique two previously proposed models, correct some minor errors in them, and expand one of them to accommodate a finite horizon. We also propose a new model, more in harmony with the established learning function, for determining the optimal number and size of lots over finite and infinite horizons. The methodology used throughout is dynamic programming. We investigate the impact of all three models on the optimal lot sizes and their costs, and establish the functional relations between the total cost and the various factors affecting it.
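
A sketch of the standard log-linear learning curve implied by a 'doubling factor' φ (unit time is multiplied by φ each time cumulative output doubles, i.e. T_n = T_1 · n^(log2 φ)), with a simple lot-time calculation; the numbers are hypothetical, and the forgetting component and the paper's dynamic-programming models are not reproduced.

    import numpy as np

    def unit_time(n, t1, doubling_factor):
        """Time to produce the n-th unit under the log-linear learning curve:
        every doubling of cumulative output multiplies unit time by the factor."""
        b = np.log2(doubling_factor)          # learning exponent (negative)
        return t1 * n ** b

    def lot_time(start_unit, lot_size, t1, doubling_factor):
        """Total time for a lot covering cumulative units start_unit .. start_unit + lot_size - 1."""
        units = np.arange(start_unit, start_unit + lot_size)
        return unit_time(units, t1, doubling_factor).sum()

    t1, phi = 10.0, 0.85                      # hypothetical: 10 h first unit, 0.85 doubling factor
    print("time for units 1, 2, 4, 8:",
          [round(unit_time(n, t1, phi), 2) for n in (1, 2, 4, 8)])
    print("time for a second lot of 50 after 50 already built:",
          round(lot_time(51, 50, t1, phi), 1), "h")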

18.
In a recent article, Chatterjee and Greenwood [1] addressed the problem of multicollinearity in polynomial regression models. They noted that there is a high correlation between X and X², so a second-order polynomial model suffers the consequences of collinearity, and they suggested a method they believe overcomes the problem. The contention of the present comment is that the suggested method accomplishes nothing and, indeed, has the potential to lead the unwary researcher to wrong inferences and misinterpretation of the results.
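
A quick numeric illustration of the kind of point at issue (not a reproduction of either paper's argument): for a positive regressor, corr(X, X²) is close to one; centering lowers that correlation, yet the two parameterizations span the same column space, so the fitted regression is unchanged.

    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.uniform(10.0, 20.0, 200)                 # positive regressor
    y = 3.0 + 0.5 * x + 0.2 * x**2 + rng.normal(0, 2, 200)

    print("corr(X, X^2)               =", round(np.corrcoef(x, x**2)[0, 1], 4))
    xc = x - x.mean()
    print("corr(X - mean, (X - mean)^2) =", round(np.corrcoef(xc, xc**2)[0, 1], 4))

    # Both parameterizations span the same column space, so fitted values match
    X_raw = np.column_stack([np.ones_like(x), x, x**2])
    X_ctr = np.column_stack([np.ones_like(x), xc, xc**2])
    fit_raw = X_raw @ np.linalg.lstsq(X_raw, y, rcond=None)[0]
    fit_ctr = X_ctr @ np.linalg.lstsq(X_ctr, y, rcond=None)[0]
    print("max |difference in fitted values| =", float(np.abs(fit_raw - fit_ctr).max()))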

19.
Threshold models have a wide variety of applications in economics. Direct applications include models of separating and multiple equilibria. Other applications include empirical sample splitting when the sample split is based on a continuously-distributed variable such as firm size. In addition, threshold models may be used as a parsimonious strategy for nonparametric function estimation. For example, the threshold autoregressive model (TAR) is popular in the nonlinear time series literature. Threshold models also emerge as special cases of more complex statistical frameworks, such as mixture models, switching models, Markov switching models, and smooth transition threshold models. It may be important to understand the statistical properties of threshold models as a preliminary step in the development of statistical tools to handle these more complicated structures. Despite the large number of potential applications, the statistical theory of threshold estimation is undeveloped. It is known that threshold estimates are super-consistent, but a distribution theory useful for testing and inference has yet to be provided. This paper develops a statistical theory for threshold estimation in the regression context. We allow for either cross-section or time series observations. Least squares estimation of the regression parameters is considered. An asymptotic distribution theory for the regression estimates (the threshold and the regression slopes) is developed. It is found that the distribution of the threshold estimate is nonstandard. A method to construct asymptotic confidence intervals is developed by inverting the likelihood ratio statistic. It is shown that this yields asymptotically conservative confidence regions. Monte Carlo simulations are presented to assess the accuracy of the asymptotic approximations. The empirical relevance of the theory is illustrated through an application to the multiple equilibria growth model of Durlauf and Johnson (1995).
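
A sketch of least-squares threshold estimation by profiling the threshold on a grid and concentrating out the regime-specific slopes, in the spirit of the models studied here; the confidence-interval construction by likelihood-ratio inversion is not reproduced, and the data and parameter values are simulated.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 400
    q = rng.uniform(0.0, 10.0, n)                    # threshold variable (e.g. firm size)
    x = rng.normal(size=n)
    gamma_true = 6.0
    y = np.where(q <= gamma_true, 1.0 + 0.5 * x, -1.0 + 2.0 * x) + rng.normal(0, 1, n)

    def ssr_at(gamma):
        """Concentrated sum of squared residuals: OLS in each regime given gamma."""
        ssr = 0.0
        for mask in (q <= gamma, q > gamma):
            X = np.column_stack([np.ones(mask.sum()), x[mask]])
            beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
            resid = y[mask] - X @ beta
            ssr += resid @ resid
        return ssr

    # Grid over interior quantiles of q so both regimes keep enough observations
    candidates = np.quantile(q, np.linspace(0.10, 0.90, 161))
    gamma_hat = min(candidates, key=ssr_at)
    print(f"true threshold = {gamma_true}, least-squares estimate = {gamma_hat:.2f}")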

20.
We study a minimum total commitment (MTC) contract embedded in a finite-horizon periodic-review inventory system. Under this contract, the buyer commits to purchase a minimum quantity of a single product from the supplier over the entire planning horizon. We consider nonstationary demand, per-unit cost, and discount factor, together with a nonzero setup cost. Because the formulations used in the existing literature cannot handle our setting, we develop a new formulation based on a state-transformation technique that uses the unsold commitment, rather than the unbought commitment, as the state variable. We first revisit the zero-setup-cost case, show that the optimal ordering policy is an unsold-commitment-dependent base-stock policy, and provide a simpler proof of the optimality of the dual base-stock policy. We then study the nonzero-setup-cost case and prove a new result: the optimal solution is an unsold-commitment-dependent (s, S) policy. We further propose two heuristic policies, which numerical tests show to perform very well, and discuss two extensions that show the generality of our method. Finally, we use our results to examine the effect of contract terms such as duration, lead time, and commitment level on the buyer's cost, and we compare total supply chain profits under periodic commitment, MTC, and no commitment.
