Similar Documents
 20 similar documents found.
1.
We consider the problem of adjusting a machine that manufactures parts in batches or lots and experiences random offsets or shifts whenever a set-up operation takes place between lots. The existing procedures for adjusting set-up errors in a production process over a set of lots are based on the assumption of known process parameters. In practice, these parameters are usually unknown, especially in short-run production. Owing to this lack of knowledge, adjustment procedures such as Grubbs' (1954, 1983) rules and discrete integral controllers (also called EWMA controllers), which aim at adjusting for the initial offset within each single lot, are typically used. This paper presents an approach for adjusting the initial machine offset over a set of lots when the process parameters are unknown and are iteratively estimated using Markov chain Monte Carlo (MCMC). As each observation becomes available, a Gibbs sampler is run to estimate the parameters of a hierarchical normal-means model given the observations up to that point in time. The current lot mean estimate is then used for adjustment. If used over a series of lots, the proposed method eventually allows one to start adjusting the offset before producing the first part in each lot. The method is illustrated with two examples reported in the literature, and the proposed MCMC adjustment procedure is shown to outperform existing rules under a quadratic off-target criterion.
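As a much simplified sketch of the core idea of re-estimating an unknown offset as each observation arrives (the paper itself runs a Gibbs sampler for a hierarchical normal-means model; here a single conjugate normal prior stands in, and the function name, prior, and noise values are all illustrative assumptions):

```python
def posterior_offset(deviations, prior_mean=0.0, prior_var=10.0, noise_var=1.0):
    """Conjugate-normal sequential update for a constant machine offset.
    Returns the posterior mean after each observed deviation; a controller
    would subtract the current posterior mean before producing the next part."""
    means = []
    m, v = prior_mean, prior_var
    for y in deviations:
        # Standard normal-normal conjugate update, one observation at a time.
        v_new = 1.0 / (1.0 / v + 1.0 / noise_var)
        m = v_new * (m / v + y / noise_var)
        v = v_new
        means.append(m)
    return means
```

As deviations accumulate, the posterior mean converges to the true offset, so later adjustments become increasingly accurate.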

2.
An important problem in process adjustment using feedback is how often to sample the process and when and by how much to apply an adjustment. Minimum cost feedback schemes based on simple, but practically interesting, models for disturbances and dynamics have been discussed in several particular cases. The more general situation in which there may be measurement and adjustment errors, deterministic process drift, and costs of taking an observation, of making an adjustment, and of being off target, is considered in this article. Assuming all these costs to be known, a numerical method to minimize the overall expected cost is presented. This numerical method provides the optimal sampling interval, action limits, and amount of adjustment; and the resulting average adjustment interval, mean squared deviation from target, and minimum overall expected cost. When the costs of taking an observation, of making an adjustment, and of being off target are not known, the method can be used to choose a particular scheme by judging the advantages and disadvantages of alternative options considering the mean squared deviation they produce, the frequency with which they require observations to be made, and the resulting overall length of time between adjustments. Computer codes that perform the required computations are provided in the appendices and applied to find optimal adjustment schemes in three real examples of application.
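A minimal simulation sketch of the kind of scheme being optimized, under assumed (not the article's) conditions: a random-walk disturbance, periodic sampling, a symmetric action limit, and illustrative cost values:

```python
import random

def simulate_scheme(n_parts=10000, sigma=1.0, limit=2.0, sample_every=1,
                    cost_obs=0.1, cost_adj=1.0, cost_off=1.0, seed=0):
    """Simulate a dead-band adjustment scheme on a random-walk disturbance.
    Returns the average cost per part and the number of adjustments made."""
    rng = random.Random(seed)
    level = 0.0
    total = 0.0
    n_adj = 0
    for t in range(n_parts):
        level += rng.gauss(0.0, sigma)   # disturbance drifts as a random walk
        total += cost_off * level ** 2   # quadratic off-target cost each part
        if t % sample_every == 0:        # pay to observe the process
            total += cost_obs
            if abs(level) > limit:       # adjust only outside the action limits
                total += cost_adj
                level = 0.0              # assume the adjustment returns to target
                n_adj += 1
    return total / n_parts, n_adj
```

Running `simulate_scheme` over a grid of `limit` and `sample_every` values gives a crude numerical stand-in for the article's exact minimization of overall expected cost.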

3.
In a discrete-part manufacturing process, the noise is often described by an IMA(1,1) process, and a pure unit-delay transfer function is used as the feedback controller to adjust it. The optimal controller for this process is the well-known minimum mean square error (MMSE) controller. The starting level of the IMA(1,1) model is usually assumed to be on target at start-up. Since this assumption is impractical, we incorporate a starting offset. Because the starting offset is not observable, the MMSE controller does not exist; an alternative is the minimum asymptotic mean square error controller, which minimizes the long-run mean square error. Another concern in this article is the instability of the controller, which may produce high adjustment costs and/or may exceed the physical bounds of the process adjustment. These practical barriers will prevent the controller from adjusting the process properly. To avoid this dilemma, a resetting design is proposed: the process is adjusted according to the controller while the adjustment remains within a reset limit, and the process is reset otherwise. The total cost for the manufacturing process comprises the off-target cost, the adjustment cost, and the reset cost. Proper values for the reset limit are selected to minimize the average cost per reset interval (ACR) under various process and cost parameters. A time non-homogeneous Markov chain approach is used to calculate the ACR. The effect of adopting the starting offset is also studied.

4.
The determination of a stopping rule for detecting the time of an increase in the success probability of a sequence of independent Bernoulli trials is discussed. Both success probabilities are assumed unknown. A Bayesian approach is applied: the distribution of the location of the shift in the success probability is assumed geometric, and the success probabilities are assumed to have a known joint prior distribution. The costs involved are penalties for stopping too late or too early. The nature of the optimal dynamic programming solution is discussed, and a procedure for obtaining a suboptimal stopping rule is determined. The results indicate that the detection procedure is quite effective.

5.
Selecting an optimal 2^(k-p) fractional factorial design is structured as a mathematical programming problem. An algorithm is defined for the solution, and the case of additive costs is shown to have a known solution for resolution III designs.

6.
The setup adjustment problem occurs when a machine experiences an upset at setup that needs to be compensated for. In this article, feedback methods for the setup adjustment problem are studied from a small-sample point of view, relevant in modern manufacturing. The sequential adjustment rules of Grubbs (1954, An optimum procedure for setting machines or adjusting processes, Industrial Quality Control) and an integral controller are considered. The performance criterion is the quadratic off-target cost incurred over a small number of parts produced. Analytical formulae are presented and numerically illustrated. Two cases are considered: first, where the setup error is a constant but unknown offset, and second, where the setup error is a random variable with unknown first two moments. These cases are studied under the assumption that no further shifts occur after setup. It is shown how Grubbs' harmonic rule and a simple integral controller provide a robust adjustment strategy in a variety of circumstances. As a by-product, the formulae presented in this article allow one to compute the expected off-target quadratic cost when a sudden shift occurs during production (not necessarily at setup) and the adjustment scheme compensates immediately after its occurrence.
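Grubbs' harmonic rule itself is simple to state: after observing the deviation of part i, move the machine setting by minus that deviation divided by i. A sketch under an assumed constant-offset, normal-noise model (parameter values illustrative):

```python
import random

def grubbs_harmonic(offset, n_parts, sigma=1.0, seed=1):
    """Apply Grubbs' harmonic rule: after part i, move the machine setting
    by -deviation_i / i. Returns the observed deviation of each part."""
    rng = random.Random(seed)
    setpoint = 0.0            # cumulative adjustment applied to the machine
    deviations = []
    for i in range(1, n_parts + 1):
        dev = offset + setpoint + rng.gauss(0.0, sigma)
        deviations.append(dev)
        setpoint -= dev / i   # harmonic rule: shrinking adjustment steps
    return deviations
```

After the first adjustment the offset is already removed in expectation; the shrinking steps then average out the measurement noise, so late deviations fluctuate around zero.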

7.
In this paper, an optimization model is developed for the economic design of a rectifying inspection sampling plan in the presence of two markets. A product with a normally distributed quality characteristic with unknown mean and variance is produced in the process. The quality characteristic has a lower specification limit. The aim of this paper is to maximize the profit, which incorporates the Taguchi loss function, under the constraints of simultaneously satisfying the producer's and consumer's risks in the two markets. The giveaway cost per unit of sold excess material is considered in the proposed model. A case study is presented to illustrate the application of the proposed methodology. In addition, sensitivity analysis is performed to study the effect of the model parameters on the expected profit and the optimal solution. The optimal process adjustment problem and the acceptance sampling plan are combined in the economic optimization model. The process mean and standard deviation are assumed unknown, and their impact is analyzed. Finally, inspection error is considered, and its impact is investigated and analyzed.

8.
The case of nonresponse in multivariate stratified sampling surveys was first introduced by Hansen and Hurwitz in 1946, with the sampling variances and costs taken to be deterministic. In real-life situations, however, sampling variances and costs are often random (stochastic) and have probability distributions. In this article, we formulate multivariate stratified sampling in the presence of nonresponse with random sampling variances and costs as a multiobjective stochastic programming problem. The random sampling variances and costs are converted into a deterministic nonlinear programming problem (NLPP) by using chance constraints and the modified E-model. Solution procedures using three different approaches are adopted, viz. goal programming, fuzzy programming, and the D1 distance method, to obtain the compromise allocation for the formulated problem. An empirical study is provided to illustrate the computational details.

9.
A joint adjustment involves integrating different types of geodetic datasets, or multiple datasets of the same data type, into a single adjustment. This paper applies the weighted total least-squares (WTLS) principle to joint adjustment problems and proposes an iterative algorithm for WTLS joint (WTLS-J) adjustment with weight correction factors. Weight correction factors are used to rescale the weight matrix of each dataset, while the Helmert variance component estimation (VCE) method is used to estimate the variance components, since the variance components in the stochastic model are unknown. An affine transformation example is presented to verify the practical benefit and the relative computational efficiency of the proposed algorithm. It is shown that the proposed algorithm obtains the same parameter estimates as the Amiri-Simkooei algorithm in our example; however, the proposed algorithm has its own computational advantages, especially when the number of data points is large.

10.
The classical change point problem is considered from the invariance point of view. Locally optimal invariant tests are derived for a change in level when the initial level and the common variance are assumed unknown. The tests derived by Chernoff and Zacks (1964) and Gardner (1969) for a change in level when the variance is known are shown to be locally optimal invariant tests.

11.
This article proposes a new mixed chain sampling plan based on the process capability index Cpk, where the quality characteristic of interest follows a normal distribution with unknown mean and variance. The advantages of the proposed mixed sampling plan are also discussed. Tables are constructed to determine the optimal parameters for practical applications. To construct the tables, the problem is formulated as a nonlinear programming problem in which the objective function to be minimized is the average sample number and the constraints relate to the lot acceptance probabilities at the acceptable quality level and the limiting quality level under the operating characteristic curve. The practical application of the proposed mixed sampling plan is explained with an illustrative example, and the proposed plan is compared with other existing sampling plans.
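For reference, Cpk is the distance from the process mean to the nearer specification limit, in units of three standard deviations; a one-line implementation of the standard definition (not specific to this article's plan):

```python
def cpk(mean, std, lsl, usl):
    """Process capability index Cpk: distance from the process mean to the
    nearer specification limit, divided by three standard deviations."""
    return min(usl - mean, mean - lsl) / (3.0 * std)
```

A centered process with specification limits six standard deviations away on each side gives Cpk = 2; off-centering reduces the index even when the spread is unchanged.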

12.
The effect of rejecting a two-sided preliminary test of significance for the mean of a normal distribution upon subsequent interval estimation of the mean is examined. For the case where the variance is known, conditional confidence intervals may be shorter than unconditional intervals, in contrast to the one-sided preliminary test case examined by Meeks and D'Agostino (1983, The American Statistician, 7, 134-136). For the case where the variance is unknown and must be estimated by the sample variance, it is shown that the customary intervals do not offer uniformly greater or lesser coverage than the nominal level.
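The known-variance phenomenon is easy to probe by simulation. The sketch below (illustrative parameter values, not the paper's) estimates the coverage of the usual z-interval conditional on the two-sided preliminary test rejecting; note that when the true mean equals the tested value, rejection and coverage are mutually exclusive events, so the conditional coverage collapses to zero.

```python
import math, random

def conditional_coverage(mu, n=30, sigma=1.0, reps=20000, seed=0):
    """Coverage of the usual 95% z-interval for the mean, conditional on
    rejecting a two-sided preliminary test of H0: mu = 0 (known sigma)."""
    rng = random.Random(seed)
    zcrit = 1.959963984540054            # 97.5% standard normal quantile
    half = zcrit * sigma / math.sqrt(n)  # interval half-width = rejection cutoff
    covered = rejected = 0
    for _ in range(reps):
        xbar = rng.gauss(mu, sigma / math.sqrt(n))
        if abs(xbar) > half:             # preliminary test rejects mu = 0
            rejected += 1
            if abs(xbar - mu) <= half:   # z-interval covers the true mean
                covered += 1
    return covered / rejected if rejected else float('nan')
```

Far from the tested value the preliminary test almost always rejects, so conditioning changes little and coverage stays near the nominal 95%.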

13.
In this paper, we derive explicit computable expressions for the asymptotic distribution of the maximum likelihood estimate (MLE) of an unknown change point in a sequence of independently and exponentially distributed random variables. First we state and prove a theorem showing the asymptotic equivalence of the change-point MLE for the cases of known and unknown parameters, respectively. Thereafter, the computational form of the asymptotic distribution of the change-point MLE is derived for the known-parameter case only. Simulations show that the distribution for the known case applies very well when the parameters are estimated. Further, simulations show that the derived unconditional MLE performs better than the conditional solution of Cobb. An application of the change detection and estimation methodology shows strong support in favor of the dynamic triggering hypothesis for seismic faults in the Sumatra, Indonesia region.
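For a concrete picture of the estimator being analyzed: the change-point MLE for exponential data maximizes the profile log-likelihood obtained by plugging the segment-wise rate MLEs (count divided by segment sum) into the exponential likelihood. A sketch of this textbook construction (the function name is illustrative):

```python
import math

def changepoint_mle(x):
    """Profile-likelihood estimate of the change point in a sequence of
    exponential observations with unknown rates before and after the change.
    Returns tau such that x[:tau] precedes the change and x[tau:] follows it."""
    n = len(x)
    prefix = [0.0]
    for v in x:
        prefix.append(prefix[-1] + v)
    best_tau, best_ll = None, -math.inf
    for tau in range(1, n):
        n1, n2 = tau, n - tau
        s1, s2 = prefix[tau], prefix[n] - prefix[tau]
        # Profile log-likelihood with rate MLEs n1/s1 and n2/s2 plugged in
        # (additive constants dropped, since they do not affect the argmax).
        ll = n1 * math.log(n1 / s1) + n2 * math.log(n2 / s2)
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau
```

The prefix-sum table makes the full scan over candidate change points linear in the sequence length.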

14.
Recently, several new applications of control chart procedures for short production runs have been introduced. Bothe (1989) and Burr (1989) proposed the use of control chart statistics which are obtained by scaling the quality characteristic by target values or process estimates of a location and scale parameter. The performance of these control charts can be significantly affected by the use of incorrect scaling parameters, resulting in either an excessive "false alarm rate" or insensitivity to the detection of moderate shifts in the process. To correct these deficiencies, Quesenberry (1990, 1991) developed the Q-chart, which is formed from running process estimates of the sample mean and variance. For the case where both the process mean and variance are unknown, the Q-chart statistic is formed from the standard inverse Z-transformation of a t-statistic. Q-charts do not perform correctly, however, in the presence of special cause disturbances at process startup. This has recently been supported by results published by Del Castillo and Montgomery (1992), who recommend an alternative control chart procedure based upon a first-order adaptive Kalman filter model. Consistent with the recommendations of Del Castillo and Montgomery, we propose an alternative short-run control chart procedure based upon the second-order dynamic linear model (DLM). The control chart is shown to be useful for the early detection of unwanted process trends. Model and control chart parameters are updated sequentially in a Bayesian estimation framework, providing the greatest degree of flexibility in the level of prior information incorporated into the model. The result is a weighted moving average control chart statistic which can be used to provide running estimates of process capability.
The average run length performance of the control chart is compared to the optimal performance of the exponentially weighted moving average (EWMA) chart, as reported by Gan (1991). Using a simulation approach, the second-order DLM control chart is shown to provide better overall performance than the EWMA for short production run applications.
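For context, the EWMA comparator chart is easy to state: smooth the observations with weight λ and signal when the statistic leaves limits based on its exact time-varying variance. A sketch with illustrative defaults (λ = 0.2, 3-sigma limits; not the specific design studied by Gan):

```python
def ewma_chart(x, target=0.0, lam=0.2, sigma=1.0, L=3.0):
    """Compute the EWMA statistic for each observation and flag points that
    fall outside limits based on the exact time-varying EWMA variance."""
    z = target
    signals = []
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1 - lam) * z
        # Exact variance of the EWMA statistic after t observations.
        var = sigma ** 2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))
        limit = L * var ** 0.5
        signals.append(abs(z - target) > limit)
    return signals
```

Using the time-varying limits (rather than their asymptotic value) matters precisely in the short-run setting, where few observations are available.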

15.
A method is proposed for shape-constrained density estimation under a variety of constraints, including but not limited to unimodality, monotonicity, symmetry, and constraints on the number of inflection points of the density or its derivative. The method involves computing an adjustment curve that is used to bring a pre-existing pilot estimate into conformance with the specified shape restrictions. The pilot estimate may be obtained using any preferred estimator, and the optimal adjustment can be computed using fast, readily available quadratic programming routines. This makes the proposed procedure generic and easy to implement.
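For the special case of a monotonicity constraint, the quadratic program that projects a pilot estimate onto the constraint set has a classical direct solution, the pool-adjacent-violators algorithm. The sketch below is a stand-in for this one case only, not the paper's general adjustment-curve method, which handles many more constraint types:

```python
def pava_nondecreasing(y):
    """Pool-adjacent-violators algorithm: the least-squares projection of
    the sequence y onto the set of nondecreasing sequences."""
    vals, lens = [], []               # value and length of each merged block
    for yi in y:
        vals.append(float(yi))
        lens.append(1)
        # Merge adjacent blocks while the monotone constraint is violated.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v, m = vals.pop(), lens.pop()
            v0, m0 = vals.pop(), lens.pop()
            vals.append((m0 * v0 + m * v) / (m0 + m))
            lens.append(m0 + m)
    out = []
    for v, m in zip(vals, lens):      # expand blocks back to full length
        out.extend([v] * m)
    return out
```

Applied to pilot density values on a grid, this replaces any locally decreasing stretch with its average, which is exactly the quadratic-programming solution for the monotone case.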

16.
The solution of the Kolmogorov backward equation is expressed as a functional integral by means of the Feynman–Kac formula. The expectation value is approximated as a mean over trajectories. In order to reduce the variance of the estimate, importance sampling is utilized. From the optimal importance density, a modified drift function is derived which is used to simulate optimal trajectories from an Itô equation. The method is applied to option pricing and the simulation of transition densities and likelihoods for diffusion processes. The results are compared to known exact solutions and results obtained by numerical integration of the path integral using Euler transition kernels. The importance sampling leads to strong variance reduction, even if the unknown solution appearing in the drift is replaced by known reference solutions. In models with low-dimensional state space, the numerical integration method is more efficient, but in higher dimensions it soon becomes infeasible, whereas the Monte Carlo method still works.
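A minimal version of the Feynman–Kac Monte Carlo idea applied to option pricing, without the importance-sampling refinement the paper develops: estimate the discounted expected payoff by averaging over simulated terminal states of geometric Brownian motion, and compare with the closed-form Black–Scholes price (all parameter values illustrative):

```python
import math, random

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes European call price (closed form, for comparison)."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)

def mc_call(s0, k, r, sigma, t, n=200000, seed=0):
    """Monte Carlo price: mean discounted payoff over simulated terminal
    states of geometric Brownian motion (a Feynman-Kac expectation)."""
    rng = random.Random(seed)
    disc = math.exp(-r * t)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        total += max(st - k, 0.0)
    return disc * total / n
```

The plain estimator's variance is what the paper's modified-drift importance sampling is designed to reduce; here the terminal state can be sampled exactly, so no Euler discretization of the Itô equation is needed.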

17.
This article considers a class of estimators for the location and scale parameters in the location-scale model based on ‘synthetic data’ when the observations are randomly censored on the right. The asymptotic normality of the estimators is established using counting process and martingale techniques when the censoring distribution is known and unknown, respectively. In the case when the censoring distribution is known, we show that the asymptotic variances of this class of estimators depend on the data transformation and have a lower bound which is not achievable by this class of estimators. However, when the censoring distribution is unknown and estimated by the Kaplan–Meier estimator, this class of estimators has the same asymptotic variance and attains the lower bound for the variance in the known censoring distribution case. This is different from censored regression analysis, where asymptotic variances depend on the data transformation. Our method has three valuable advantages over the method of maximum likelihood estimation. First, our estimators are available in closed form and do not require an iterative algorithm. Second, simulation studies show that our estimators, being moment-based, are comparable to maximum likelihood estimators and outperform them when the sample size is small and the censoring rate is high. Third, our estimators are more robust to model misspecification than maximum likelihood estimators. Therefore, our method can serve as a competitive alternative to the method of maximum likelihood for estimation in location-scale models with censored data. A numerical example is presented to illustrate the proposed method.

18.
This paper deals with Bartlett-type adjustment, which makes all the terms up to order n^(-k) in the asymptotic expansion vanish, where k ⩾ 1 is an integer and n is the sample size. Extending Cordeiro and Ferrari (1991, Biometrika, 78, 573–582) for the case k = 1, we derive a general formula for the kth-order Bartlett-type adjustment of a test statistic whose kth-order asymptotic expansion of the distribution is given by a finite linear combination of chi-squared distributions with suitable degrees of freedom. Two examples of the second-order Bartlett-type adjustment are given. We also elucidate the connection between Bartlett-type adjustment and the Cornish-Fisher expansion.

19.
In the case where non-experimental data are available from an industrial process and a directed graph for how various factors affect a response variable is known based on a substantive understanding of the process, we consider a problem in which a control plan involving multiple treatment variables is conducted in order to bring a response variable close to a target value with variation reduction. Using statistical causal analysis with linear (recursive and non-recursive) structural equation models, we configure an optimal control plan involving multiple treatment variables through causal parameters. Based on the formulation, we clarify the causal mechanism for how the variance of a response variable changes when the control plan is conducted. The results enable us to evaluate the effect of a control plan on the variance of a response variable from non-experimental data and provide a new application of linear structural equation models to engineering science.

20.
Srivastava and Wu, and Box and Kramer, considered an integrated moving average process of order one, sampled at a fixed interval, for process adjustment. However, their results were obtained by asymptotic methods and by simulation, respectively. In this paper, these results are obtained analytically. It is assumed that there is a sampling cost and an adjustment cost. The cost of deviating from the target value is assumed to be proportional to the square of the deviation. The long-run average cost is evaluated exactly in terms of moments of a randomly stopped random walk. Two approximations are given and shown by simulation to be close to the exact value. One of these approximations is used to obtain an explicit expression for the optimum value of the inspection interval and of the control limit at which an adjustment is to be made.
