Similar Documents (20 results)
1.
This paper presents analytical expressions for the average adjustment interval and the mean squared deviation from target of the “bounded adjustment” schemes of Box and Luceño (1997a) under the assumption that the disturbances are generated from a double-exponential distribution. The solutions obtained are very close to those computed numerically for normally distributed innovations. This not only demonstrates the robustness of the schemes to the distributional assumptions, but also provides new useful expressions for the average adjustment interval and mean squared deviation from target. Expressions for the characteristic and probability mass functions of the adjustment interval are also given.
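A bounded adjustment scheme of this kind is easy to sketch by simulation. The snippet below is a minimal illustration, not the paper's analytical solution: it drifts a deviation as an IMA(1,1) process with double-exponential (Laplace) innovations, tracks its EWMA forecast, and compensates only when the forecast crosses a bound `L`, estimating the average adjustment interval (AAI) and mean squared deviation (MSD). All parameter values are illustrative.

```python
import math
import random

def laplace(rng, b=1.0):
    """Double-exponential (Laplace) innovation via inverse-CDF sampling."""
    u = rng.random()
    return b * math.log(2 * u) if u < 0.5 else -b * math.log(2 * (1 - u))

def bounded_adjustment(lam=0.2, L=1.5, n=200_000, seed=1):
    """Estimate AAI and MSD of a bounded adjustment scheme by simulation."""
    rng = random.Random(seed)
    theta = 1 - lam
    z = zhat = prev_a = 0.0      # deviation, its EWMA forecast, last innovation
    sq, adjustments = 0.0, 0
    for _ in range(n):
        a = laplace(rng)
        z += a - theta * prev_a  # IMA(1,1) drift of the deviation
        prev_a = a
        sq += z * z
        zhat = lam * z + (1 - lam) * zhat
        if abs(zhat) > L:        # adjust only when the forecast exceeds the bound
            z -= zhat            # compensate the predicted deviation
            zhat = 0.0
            adjustments += 1
    return n / adjustments, sq / n   # (AAI, MSD)

aai, msd = bounded_adjustment()
```

Shrinking `L` shortens the average adjustment interval at the price of more frequent adjustments, which is the trade-off the analytical expressions quantify.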

2.
In a discrete-part manufacturing process, the noise is often described by an IMA(1,1) process, and a pure unit-delay transfer function is used as the feedback controller to adjust it. The optimal controller for this process is the well-known minimum mean square error (MMSE) controller. The starting level of the IMA(1,1) model is usually assumed to be on target, which is impractical, so we adopt a starting offset. Since the starting offset is not observable, the MMSE controller does not exist; an alternative is the minimum asymptotic mean square error controller, which minimizes the long-run mean square error. Another concern in this article is the instability of the controller, which may produce high adjustment costs and/or exceed the physical bounds of the process adjustment. These practical barriers prevent the controller from adjusting the process properly. To avoid this dilemma, a resetting design is proposed: the process is adjusted according to the controller while the adjustment remains within a reset limit, and is reset otherwise. The total cost for the manufacturing process comprises the off-target cost, the adjustment cost, and the reset cost. Proper values for the reset limit are selected to minimize the average cost per reset interval (ACR) under various process and cost parameters. A time-nonhomogeneous Markov chain approach is used to calculate the ACR. The effect of adopting the starting offset is also studied.
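For an IMA(1,1) disturbance acted on through a unit-delay, unit-gain transfer function, the MMSE controller reduces to integral control with gain λ = 1 − θ, and the controlled output collapses to white noise with the innovation variance. The sketch below is an idealized illustration of that baseline (starting level on target, i.e. without the article's starting offset):

```python
import random

def integral_control_mse(theta=0.6, n=100_000, seed=2):
    """Output MSE under integral control of an IMA(1,1) disturbance
    with a pure unit delay; the MMSE gain is lam = 1 - theta."""
    rng = random.Random(seed)
    lam = 1 - theta
    prev_a = 0.0   # last innovation
    noise = 0.0    # IMA(1,1) disturbance level
    comp = 0.0     # compensation currently acting on the output
    pending = 0.0  # adjustment decided now, effective next period (unit delay)
    se = 0.0
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)
        noise += a - theta * prev_a
        prev_a = a
        comp += pending           # last period's adjustment takes effect
        e = noise + comp          # observed deviation from target
        se += e * e
        pending = -lam * e        # integral action cancels the forecast drift
    return se / n

mse = integral_control_mse()   # close to the innovation variance, 1.0
```

One can verify algebraically that under this recursion the deviation equals the current innovation, e_t = a_t, which is why the long-run MSE settles at σ_a².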

3.
Many industrial processes must be adjusted from time to time to maintain their mean continuously close to the target value. Compensations for deviations of the process mean from the target may be accomplished by feedback and/or by feedforward adjustment. Feedback adjustments are made in reaction to errors at the output; feedforward adjustments are made to compensate anticipated changes. This article considers the complementary use of feedback and feedforward adjustments to compensate for anticipated step changes in the process mean as may be necessary in a manufacturing process each time a new batch of feedstock material is introduced. We consider and compare five alternative control schemes: (1) feedforward adjustment alone, (2) feedback adjustment alone, (3) feedback-feedforward adjustment, (4) feedback and indirect feedforward to increase the sensitivity of the feedback scheme, and (5) feedback with both direct and indirect feedforward.

4.
In this article, we investigate a control policy for the choice of sampling interval and control limit by minimizing the expected quality cost. The study is based on an environment in which (i) the stochastic disturbances are assumed to follow an IMA(1, 1) process, (ii) there is process dynamics between the input series and the output series, (iii) a feedback control scheme is imposed, and (iv) the expected quality cost contains off-target cost, adjustment cost, and inspection cost. Modeling and forecasting for (i), (ii), and (iii) are performed according to the transfer function plus noise model. Minimizing the expected quality cost for (iv) is carried out by a modified pattern search procedure. An example is given to demonstrate the advantage of using the pattern search method over the usual 3-sigma control scheme. The penalty of ignoring the process dynamics, and of choosing an incorrect value of θ for the IMA(1, 1) disturbance, is discussed. The pattern search method also compares favorably with the modified Taguchi method in quality cost for the cases considered.

5.
Johnson (1992) developed the process loss index Le, defined as the ratio of the expected quadratic loss to the square of the half specification width. Tsui (1997) expressed the index as Le = Lpe + Lot, which provides an uncontaminated separation between the potential relative expected loss (Lpe), the ratio of the process variance to the square of the half specification width, and the relative off-target squared deviation (Lot), the square of the ratio of the deviation of the mean from the target to the half specification width. In this paper, we consider these three loss indices and investigate the statistical properties of their natural estimators. For the three indices, we obtain their UMVUEs and MLEs, and compare the reliability of the two estimators based on relative mean squared errors. In addition, we construct 90%, 95%, and 99% upper confidence limits, and the maximum values of the estimated Le for which the process is capable 90%, 95%, and 99% of the time. The results obtained in this paper are useful to practitioners in choosing good estimators and making reliable decisions when judging process capability.
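With half-width d = (USL − LSL)/2 and target T, the natural moment-based estimates are Lpe = s²/d² and Lot = ((x̄ − T)/d)², with Le their sum. The sketch below computes these plug-in estimates (using the MLE variance for simplicity; it is not the UMVUE derived in the paper):

```python
def loss_indices(x, target, lsl, usl):
    """Moment-based estimates of Johnson's Le and Tsui's decomposition
    Le = Lpe + Lot.  Uses the MLE (biased) variance for simplicity."""
    d = (usl - lsl) / 2          # half specification width
    n = len(x)
    mean = sum(x) / n
    var = sum((xi - mean) ** 2 for xi in x) / n
    lpe = var / d ** 2                   # potential relative expected loss
    lot = ((mean - target) / d) ** 2     # relative off-target component
    return lpe, lot, lpe + lot           # (Lpe, Lot, Le)

# Sample centered exactly on target: the off-target term vanishes.
lpe, lot, le = loss_indices([9.0, 10.0, 11.0], target=10.0, lsl=7.0, usl=13.0)
```

Because the decomposition is exact, Lpe + Lot reproduces Le identically, which is what makes the separation "uncontaminated".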

6.
Srivastava and Wu, and Box and Kramer, considered an integrated moving average process of order one, sampled at a fixed interval, for process adjustment; their results were obtained by asymptotic methods and by simulation, respectively. In this paper, these results are obtained analytically. It is assumed that there is a sampling cost and an adjustment cost, and that the cost of deviating from the target value is proportional to the square of the deviation. The long-run average cost is evaluated exactly in terms of moments of the randomly stopped random walk. Two approximations are given and shown by simulation to be close to the exact value. One of these approximations is used to obtain an explicit expression for the optimum value of the inspection interval and the control limit at which an adjustment is to be made.
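The cost structure described here is easy to illustrate numerically. The sketch below is a Monte Carlo illustration, not the paper's exact analytical evaluation: it samples a random-walk deviation every m periods, adjusts to target when the deviation exceeds a limit L, and accumulates quadratic off-target, sampling, and adjustment costs (all cost values are illustrative):

```python
import random

def average_cost(m=4, L=3.0, c_sample=1.0, c_adjust=5.0, k=0.5,
                 n=50_000, seed=3):
    """Long-run average cost per period under periodic sampling and
    deadband adjustment of a random-walk deviation."""
    rng = random.Random(seed)
    z, cost = 0.0, 0.0
    for t in range(1, n + 1):
        z += rng.gauss(0.0, 1.0)      # random-walk disturbance
        cost += k * z * z             # quadratic off-target cost
        if t % m == 0:                # inspect every m-th period
            cost += c_sample
            if abs(z) > L:            # adjust only outside the control limit
                cost += c_adjust
                z = 0.0
    return cost / n

cost_bounded = average_cost(L=3.0)
cost_never = average_cost(L=1e9)   # never adjusting lets the walk wander
```

Sweeping `m` and `L` over a grid and minimizing this estimate mimics, crudely, the optimization that the paper carries out in closed form.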

7.
In some life tests, exact failure times cannot be observed because of cost or time constraints. Assuming that lifetimes follow an exponential distribution, we study the effects of type I and type II censored sampling schemes on the estimation of the mean. In particular, the Fisher information, the expected duration of the life test, and the mean squared error of the maximum likelihood estimator of the mean under the two types of censored sampling scheme are compared. A simulation study is conducted to study the robustness of the estimators.
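For exponential lifetimes, the MLE of the mean under either censoring scheme is the total time on test divided by the number of observed failures. A small simulation sketch with illustrative parameter values:

```python
import random

def mle_type2(lifetimes, r):
    """MLE of the exponential mean from the r smallest of n failure times."""
    x = sorted(lifetimes)
    n = len(x)
    ttt = sum(x[:r]) + (n - r) * x[r - 1]   # total time on test
    return ttt / r

def mle_type1(lifetimes, t_censor):
    """MLE of the exponential mean when the test stops at time t_censor."""
    observed = [t for t in lifetimes if t <= t_censor]
    ttt = sum(observed) + (len(lifetimes) - len(observed)) * t_censor
    return ttt / max(len(observed), 1)

rng = random.Random(5)
true_mean = 2.0
sample = [rng.expovariate(1 / true_mean) for _ in range(5000)]
est2 = mle_type2(sample, r=3000)          # type II: stop at the 3000th failure
est1 = mle_type1(sample, t_censor=2.0)    # type I: stop at a fixed time
```

Both estimators recover the true mean; the comparison in the paper concerns how their Fisher information, test duration, and MSE trade off.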

8.
We consider non-parametric estimation of the interarrival-time density of a renewal process. For continuous-time observation, a projection estimator in the orthonormal Laguerre basis is built. Nonstandard decompositions lead to bounds on the mean integrated squared error (MISE), from which rates of convergence on Sobolev–Laguerre spaces are deduced as the length of the observation interval gets large. The more realistic setting of discrete-time observation is more difficult to handle. A first strategy consists in neglecting the discretization error; a more precise strategy takes into account the convolution structure of the data. Under a simplifying 'dead-zone' condition, the corresponding MISE is given for any sampling step. In all three cases, an automatic model selection procedure is described and achieves the best MISE, up to a logarithmic term. The results are illustrated through a simulation study.

9.
The setup adjustment problem occurs when a machine experiences an upset at setup that needs to be compensated for. In this article, feedback methods for the setup adjustment problem are studied from a small-sample point of view, relevant in modern manufacturing. The sequential adjustment rules of Grubbs (1954) and an integral controller are considered. The performance criterion is the quadratic off-target cost incurred over a small number of parts produced. Analytical formulae are presented and numerically illustrated. Two cases are considered: first, where the setup error is a constant but unknown offset, and second, where the setup error is a random variable with unknown first two moments. These cases are studied under the assumption that no further shifts occur after setup. It is shown how Grubbs' harmonic rule and a simple integral controller provide a robust adjustment strategy in a variety of circumstances. As a by-product, the formulae presented in this article allow one to compute the expected off-target quadratic cost when a sudden shift occurs during production (not necessarily at setup) and the adjustment scheme compensates immediately after its occurrence.
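Grubbs' harmonic rule adjusts the machine after the t-th part by −e_t/t, where e_t is the observed deviation; the residual offset after t parts then behaves like the negative mean of the first t noise terms, regardless of the initial setup error. A minimal sketch with an illustrative offset and noise level:

```python
import random

def grubbs_harmonic(offset=5.0, sigma=1.0, n=500, seed=4):
    """Compensate an unknown constant setup offset with Grubbs' harmonic
    rule: adjust by -e_t / t after observing the t-th deviation e_t."""
    rng = random.Random(seed)
    adjustment = 0.0                # cumulative machine adjustment
    first_dev = None
    for t in range(1, n + 1):
        e = offset + adjustment + rng.gauss(0.0, sigma)
        if first_dev is None:
            first_dev = e           # first part reflects the raw setup error
        adjustment -= e / t         # harmonic adjustment rule
    return offset + adjustment, first_dev   # residual offset, first deviation

residual, first_dev = grubbs_harmonic()
```

Unrolling the recursion shows the residual offset after n parts equals minus the average of the n noise terms, so it is approximately N(0, σ²/n) whatever the initial offset was.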

10.
Evaluating and comparing process capabilities are important tasks of production management. Manufacturers should apply the process with the highest capability among competing processes. A process group selection method is developed to solve the process selection problem based on overall yields. The goal is to select the processes with the highest overall yield among I processes under multiple quality characteristics, I > 2. The proposed method uses Bonferroni adjustment to control the overall error rate of comparing multiple processes. The critical values and the required sample sizes for designated powers are provided for practical use.
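The selection step can be sketched as pairwise comparisons of the best-looking process against each competitor at a Bonferroni-adjusted level α/(I − 1). The sketch below uses a simple normal approximation for the yield comparison, not the paper's exact critical values or sample-size tables:

```python
from math import sqrt
from statistics import NormalDist

def select_highest_yield(conforming, n, alpha=0.05):
    """Pick the process with the highest observed overall yield and test,
    with Bonferroni adjustment, whether it significantly beats each rival."""
    I = len(conforming)
    phat = [c / n for c in conforming]
    best = max(range(I), key=lambda i: phat[i])
    z_crit = NormalDist().inv_cdf(1 - alpha / (I - 1))   # Bonferroni level
    decisive = True
    for i in range(I):
        if i == best:
            continue
        se = sqrt(phat[best] * (1 - phat[best]) / n
                  + phat[i] * (1 - phat[i]) / n)
        if (phat[best] - phat[i]) / se <= z_crit:
            decisive = False      # not significantly better than process i
    return best, decisive

# 900, 850, 800 conforming units out of 1000 trials each
best, decisive = select_highest_yield([900, 850, 800], n=1000)
```

Dividing α by the number of comparisons keeps the overall error rate of declaring a wrong winner below α, which is the role of the Bonferroni adjustment in the paper.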

11.
Srivastava and Wu (1997) considered a random walk model with a sampling interval and measurement error assumed to be white noise. In this paper, we consider the situation in which the measurement error is also a random walk. It is assumed that there is a sampling cost and an adjustment cost, and that the cost of deviating from the target value is proportional to the square of the deviation. The long-run average cost rate is evaluated exactly in terms of the first four moments of a randomly stopped random walk. Using approximations of those moments, optimum values of the control parameters are given.

12.
The rapid response to the requirements of customers and markets promotes concurrent engineering (CE) techniques in product and process design. Decisions on process quality targets, SPC methods, sampling plans, and control chart parameters can be made at the process quality planning stage, based on historical data and a process knowledge database. It is therefore a reasonable trend to introduce concepts and results on process quality evaluation, process capability analysis, CE, and SPC into process planning and tolerance design. A new systematic method for the concurrent design of process quality, statistical tolerance (ST), and control charts is presented, based on an NSFC research program. A set of standardized process quality indices (PQIs) for variables is introduced for measuring and evaluating process yield, process centering, and quality loss. This index system, which has relatively strong compatibility and adaptability, is based on graded series of preferred numbers and arithmetic progressions. The expected process quality under this system is assured by a standardized interface between PQIs and SPC, namely a quality-oriented statistical tolerance zone. This quality-oriented ST and SPC approach, which quantitatively specifies what a desired process is and how to assure it, realizes optimal control of a process toward a predetermined quality target.

13.
Rahim and Banerjee considered a constant integral of the hazard function over all sampling intervals, which makes the sampling intervals depend on the first sampling interval (h1). Since this restriction might not be optimal, we first show that eliminating it does not cause any significant change in the average quality-cycle cost; thus, if one seeks a near-ideal cost together with procedural simplicity, the approach of Rahim and Banerjee is the best procedure to adopt. Moreover, with non-uniform sampling the first sampling interval often becomes so large that unexpected failures during it can drive the production system to an out-of-control state. We therefore propose a new model combining uniform and non-uniform sampling intervals that confines the value of h1 without incurring high costs. The proposed model yields a lower quality-cycle cost than Rahim and Banerjee's model in the economic-statistical setting. For further illustration, we conduct sensitivity analyses and give numerical examples.

14.
A risk-efficient sequential point estimator is considered for the ratio of two independent binomial proportions based on maximum likelihood estimation under squared error loss and cost proportional to the observations. It is assumed that the cost per observation is constant. First-order asymptotic expansions are obtained for large-sample properties of the proposed procedure. Performance of the procedure is studied through the criteria of risk efficiency and regret analysis. Monte Carlo simulation is carried out to obtain the expected sample size that minimizes the risk and to examine its finite sample behavior. An example is provided to illustrate its use.

15.
In this paper, an optimization model is developed for the economic design of a rectifying inspection sampling plan in the presence of two markets. The process produces a product whose normally distributed quality characteristic has unknown mean and variance and a lower specification limit. The aim is to maximize the profit, which incorporates the Taguchi loss function, under the constraint of simultaneously satisfying the producer's and consumer's risks in the two markets. A giveaway cost per unit of sold excess material is included in the proposed model. A case study is presented to illustrate the application of the methodology, and sensitivity analysis is performed to study the effect of the model parameters on the expected profit and the optimal solution. The optimal process adjustment problem and the acceptance sampling plan are combined in the economic optimization model. The process mean and standard deviation are treated as unknown, and their impact is analyzed. Finally, inspection error is considered and its impact is investigated.

16.
Recently, there has been great interest in the analysis of longitudinal data in which the observation process is related to the longitudinal process. In the literature, the observation process has commonly been regarded as a recurrent event process. Sometimes observations have a duration, and the process is then referred to as a recurrent episode process; medical costs related to hospitalization are an example. We propose a conditional modeling approach that takes into account both the informative observation process and the observation duration. We conducted simulation studies to assess the performance of the method and applied it to a dataset of medical costs.

17.
A new method is proposed for measuring the distance between a training data set and a single, new observation. The novel distance measure reflects the expected squared prediction error when a quantitative response variable is predicted on the basis of the training data set using the distance weighted k-nearest-neighbor method. The simulation presented here shows that the distance measure correlates well with the true expected squared prediction error in practice. The distance measure can be applied, for example, in assessing the uncertainty of prediction.
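The predictor underlying this measure is distance-weighted k-nearest neighbors; the proposed distance then relates a new point's remoteness from the training set to the expected squared error of this prediction. A minimal sketch of the predictor itself (one-dimensional, with inverse-distance weights as an assumed weighting scheme; the paper's actual distance measure is not reproduced here):

```python
def dwknn_predict(train, x0, k=3, eps=1e-9):
    """Distance-weighted k-NN prediction of y at x0 from (x, y) pairs,
    using inverse-distance weights (one common choice)."""
    nearest = sorted(((abs(x - x0), y) for x, y in train))[:k]
    weights = [1.0 / (d + eps) for d, _ in nearest]
    return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)

train = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (10.0, 10.0)]
pred_exact = dwknn_predict(train, 1.0)       # x0 coincides with a training point
pred_mid = dwknn_predict(train, 1.5, k=2)    # halfway between two neighbors
```

When the query coincides with a training point, its weight dominates and the prediction reproduces that point's response; far from the training set, all weights shrink together and the prediction degrades, which is exactly the regime the distance measure is designed to flag.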

18.
We present schemes for the allocation of subjects to treatment groups, in the presence of prognostic factors. The allocations are robust against incorrectly specified regression responses, and against possible heteroscedasticity. Assignment probabilities which minimize the asymptotic variance are obtained. Under certain conditions these are shown to be minimax (with respect to asymptotic mean squared error) as well. We propose a method of sequentially modifying the associated assignment rule, so as to address both variance and bias in finite samples. The resulting scheme is assessed in a simulation study. We find that, relative to common competitors, the robust allocation schemes can result in significant decreases in the mean squared error when the fitted models are biased, at a minimal cost in efficiency when in fact the fitted models are correct.

19.
Estimating the fibre length distribution in composite materials is of practical relevance in materials science. We propose an estimator for the fibre length distribution using the point process of fibre endpoints as input. Assuming that this point process is a realization of a Neyman–Scott process, we use results for the reduced second moment measure to derive a consistent and unbiased estimator for the fibre length distribution. We introduce various versions of the estimator taking anisotropy or errors in the observation into account. The estimator is evaluated using a heuristic for its mean squared error as well as a simulation study. Finally, the estimator is applied to the fibre endpoint process extracted from a tomographic image of a glass fibre composite.

20.
We consider the problem of making statistical inference on unknown parameters of a lognormal distribution under the assumption that samples are progressively censored. The maximum likelihood estimates (MLEs) are obtained by using the expectation-maximization algorithm. The observed and expected Fisher information matrices are provided as well. Approximate MLEs of unknown parameters are also obtained. Bayes and generalized estimates are derived under squared error loss function. We compute these estimates using Lindley's method as well as importance sampling method. Highest posterior density interval and asymptotic interval estimates are constructed for unknown parameters. A simulation study is conducted to compare proposed estimates. Further, a data set is analysed for illustrative purposes. Finally, optimal progressive censoring plans are discussed under different optimality criteria and results are presented.
