Similar Literature
 20 similar documents found.
1.
Measurement error and autocorrelation often exist in quality control applications, and both have an adverse effect on the chart's performance. To counteract the undesired effect of autocorrelation, we build up the samples with non-neighbouring items, according to the time they were produced. To counteract the undesired effect of measurement error, we measure the quality characteristic of each item in the sample several times. The chart's performance is assessed when multiple measurements are applied and the samples are built by taking one item from the production line and skipping one, two or more before selecting the next.
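As a rough illustration of this sampling strategy, the following sketch simulates an AR(1)-autocorrelated stream of items with additive measurement error, builds samples from non-neighbouring items by skipping a fixed number of items between selections, and averages several measurements per selected item. The parameter values (skip, n_meas, sigma_m) are illustrative, not those of the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_items(n_items, phi=0.7, sigma=1.0):
    """True quality characteristic of consecutive items, AR(1) autocorrelated."""
    x = np.zeros(n_items)
    for t in range(1, n_items):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def sampled_means(x, sample_size=5, skip=2, n_meas=3, sigma_m=0.5):
    """Build samples from non-neighbouring items (taking one, skipping `skip`)
    and average `n_meas` noisy measurements of each selected item."""
    selected = x[::skip + 1]                               # skip items between selections
    usable = (len(selected) // sample_size) * sample_size
    selected = selected[:usable].reshape(-1, sample_size)  # one row per sample
    # each selected item is measured n_meas times with measurement error
    meas = selected[..., None] + rng.normal(0.0, sigma_m, selected.shape + (n_meas,))
    item_means = meas.mean(axis=2)                         # average repeated measurements
    return item_means.mean(axis=1)                         # sample mean of each sample

xbars = sampled_means(simulate_items(5000))
print("sample means ready for an X-bar chart:", xbars[:5])
```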

2.
3.
Memory-type control charts are widely used in the process and service industries for monitoring production processes, owing to their sensitivity in reacting quickly to small process disturbances. Recently, a new cumulative sum (CUSUM) chart has been proposed that uses the exponentially weighted moving average (EWMA) statistic, called the EWMA–CUSUM chart. To further enhance the sensitivity of the EWMA–CUSUM chart, we propose a new CUSUM chart based on the generally weighted moving average (GWMA) statistic, called the GWMA–CUSUM chart, for efficiently monitoring the process mean. The GWMA–CUSUM chart encompasses the existing CUSUM and EWMA–CUSUM charts. Extensive Monte Carlo simulations are used to explore the run length profiles of the GWMA–CUSUM chart. Based on comprehensive run length comparisons, it turns out that the GWMA–CUSUM chart performs substantially better than the CUSUM, EWMA, GWMA, and EWMA–CUSUM charts when detecting small shifts in the process mean. An illustrative example is also presented to explain the implementation and working of the EWMA–CUSUM and GWMA–CUSUM charts.
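A minimal sketch of the idea behind a GWMA–CUSUM scheme is given below: the GWMA statistic of the observations is computed with the usual weights q^{(j-1)^α} − q^{j^α} and then fed into two one-sided CUSUM recursions. The design constants q, alpha, k and h are placeholders; in practice they are chosen to achieve a target in-control run length, and the exact standardization may differ from the article's.

```python
import numpy as np

def gwma_cusum(x, mu0=0.0, q=0.9, alpha=0.7, k=0.05, h=1.0):
    """Sketch of a GWMA-CUSUM scheme: the GWMA statistic of the observations
    is fed into two one-sided CUSUM recursions. Design constants (q, alpha, k, h)
    are placeholders, not a calibrated design."""
    cp = cm = 0.0
    signals = []
    for i in range(len(x)):
        j = np.arange(1, i + 2)
        w = q ** ((j - 1) ** alpha) - q ** (j ** alpha)    # GWMA weights
        g = np.dot(w, x[i::-1]) + q ** ((i + 1) ** alpha) * mu0
        cp = max(0.0, cp + (g - mu0) - k)                  # upper CUSUM of GWMA statistic
        cm = max(0.0, cm - (g - mu0) - k)                  # lower CUSUM of GWMA statistic
        signals.append(cp > h or cm > h)
    return np.array(signals)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(0.25, 1, 100)])  # small mean shift
print("first signal at observation", int(np.argmax(gwma_cusum(x))) + 1)
```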

4.
This article deals with the construction of an X̄ control chart from the Bayesian perspective. We obtain new control limits for the X̄ chart for exponentially distributed data-generating processes through the sequential use of Bayes' theorem and credible intervals. Construction of the control chart is illustrated using a simulated data example. The performance of the proposed, standard, tolerance interval, exponential cumulative sum (CUSUM) and exponential exponentially weighted moving average (EWMA) control limits is examined and compared via a Monte Carlo simulation study. The proposed Bayesian control limits are found to perform better than the standard, tolerance interval, exponential EWMA and exponential CUSUM control limits for exponentially distributed processes.
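The following is a minimal conjugate sketch of Bayesian control limits for exponential data: with a Gamma prior on the rate, the posterior is again Gamma, and a credible interval for the process mean serves as the pair of control limits. The article's sequential construction and the comparison limits it studies may differ in detail.

```python
import numpy as np
from scipy import stats

def bayes_limits_exponential(phase1, a0=1.0, b0=1.0, coverage=0.9973):
    """Minimal conjugate sketch: with Exponential(rate lam) data and a
    Gamma(a0, b0) prior on lam, the posterior is Gamma(a0 + n, b0 + sum(x)).
    A credible interval for the process mean 1/lam is used as control limits.
    (The article's sequential construction may differ in detail.)"""
    n, s = len(phase1), float(np.sum(phase1))
    post = stats.gamma(a0 + n, scale=1.0 / (b0 + s))       # posterior of the rate
    lo_rate = post.ppf((1 - coverage) / 2)
    hi_rate = post.ppf(1 - (1 - coverage) / 2)
    return 1.0 / hi_rate, 1.0 / lo_rate                    # limits for the mean

rng = np.random.default_rng(3)
phase1 = rng.exponential(scale=2.0, size=50)               # in-control mean = 2
lcl, ucl = bayes_limits_exponential(phase1)
print(f"credible-interval control limits for the mean: ({lcl:.2f}, {ucl:.2f})")
```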

5.
6.
The exponentially weighted moving average (EWMA) control charts with variable sampling intervals (VSIs) have been shown to be substantially quicker than the fixed sampling interval (FSI) EWMA control charts in detecting process mean shifts. The usual assumption in designing a control chart is that the data or measurements are normally distributed; however, this assumption may not hold for some processes. In the present paper, the performances of the EWMA and combined X̄–EWMA control charts with VSIs are evaluated under non-normality. It is shown that adding the VSI feature to the EWMA control charts results in very substantial decreases in the expected time to detect shifts in the process mean under both normality and non-normality. However, both the false alarm rate and the detection ability of the combined X̄–EWMA chart are affected if the process data are not normally distributed.
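The sketch below illustrates the VSI mechanism on an EWMA chart: the long sampling interval is used while the statistic stays inside a central warning region, the short interval otherwise, and the chart signals when the statistic crosses the control limit. The interval lengths and limit constants are illustrative placeholders rather than the optimized designs evaluated in the paper.

```python
import numpy as np

def vsi_ewma(x, mu0=0.0, sigma=1.0, lam=0.2, L=3.0, w=1.0, d_short=0.1, d_long=1.0):
    """Sketch of an EWMA chart with variable sampling intervals: use the long
    interval while the statistic sits in the central region |z - mu0| <= w*se,
    the short interval otherwise, and signal when |z - mu0| exceeds L*se.
    Interval lengths and limits here are placeholders, not an optimized design."""
    z, time_used = mu0, 0.0
    se = sigma * np.sqrt(lam / (2 - lam))                  # asymptotic EWMA std. error
    for xi in x:
        z = lam * xi + (1 - lam) * z
        dev = abs(z - mu0)
        if dev > L * se:
            return time_used                               # time to signal
        time_used += d_long if dev <= w * se else d_short  # choose next sampling interval
    return np.inf

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(0.5, 1, 200)])
print("time to signal under VSI sampling:", vsi_ewma(x))
```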

7.
We consider a novel univariate non-parametric cumulative sum (CUSUM) control chart for detecting small shifts in the mean of a process, where the nominal value of the mean is unknown but some historical data are available. The chart is built on the Mann–Whitney statistic together with a change-point model, so no assumption about the underlying distribution of the process is required. Performance comparisons based on simulations show that the proposed control chart is slightly more effective than some other related non-parametric control charts.
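A much-simplified sketch of a Mann–Whitney-based monitoring scheme is shown below: incoming observations are accumulated and compared against the historical reference sample through a standardized Mann–Whitney statistic. The article embeds the statistic in a change-point model and calibrates the limit to a target run length; the fixed control limit h used here is only a placeholder.

```python
import numpy as np

def mw_standardized(ref, new):
    """Standardized Mann-Whitney statistic of `new` versus the reference sample."""
    m, n = len(ref), len(new)
    u = np.sum(ref[:, None] < new[None, :])                # pairs with ref_i < new_j
    mean = m * n / 2.0
    var = m * n * (m + n + 1) / 12.0
    return (u - mean) / np.sqrt(var)

def mw_chart(ref, stream, h=3.0):
    """Simplified distribution-free chart: each new observation joins the
    monitoring window, which is compared against the historical sample;
    signal when the standardized statistic leaves (-h, h)."""
    ref = np.asarray(ref)
    window = []
    for t, x in enumerate(stream, start=1):
        window.append(x)
        if abs(mw_standardized(ref, np.asarray(window))) > h:
            return t
    return None

rng = np.random.default_rng(11)
ref = rng.normal(0, 1, 100)                                # historical in-control data
stream = np.concatenate([rng.normal(0, 1, 30), rng.normal(0.5, 1, 100)])
print("signal at monitoring observation", mw_chart(ref, stream))
```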

8.
Control charts are among the most widely used techniques in statistical process control. In Phase I, historical observations are analysed in order to construct a control chart. Because multiple outliers can go undetected by control charts such as Hotelling's T² due to the masking effect, robust alternatives to Hotelling's T² have been developed based on minimum volume ellipsoid (MVE) estimators, minimum covariance determinant (MCD) estimators, reweighted MCD estimators or trimmed estimators. In this paper, we use a simulation study to analyse the performance of each alternative in various situations and offer guidance on the correct use of each estimator.
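The masking problem and its robust remedy can be illustrated with scikit-learn's MCD estimator: classical and MCD-based squared Mahalanobis distances (T²-type statistics) are computed for a Phase I data set containing a small cluster of outliers. The chi-square cut-off is a simple illustrative limit, not the calibrated Phase I control limit studied in the paper.

```python
import numpy as np
from scipy import stats
from sklearn.covariance import MinCovDet, EmpiricalCovariance

rng = np.random.default_rng(5)
p, n = 3, 100
clean = rng.multivariate_normal(np.zeros(p), np.eye(p), n - 5)
outliers = rng.multivariate_normal(np.full(p, 4.0), np.eye(p), 5)  # small outlier cluster
X = np.vstack([clean, outliers])

# Classical T^2 (squared Mahalanobis distance) can miss clustered outliers because
# they inflate the classical covariance estimate (masking); MCD-based distances
# are far less affected.  The chi-square quantile is an illustrative cut-off only.
limit = stats.chi2.ppf(0.999, df=p)
t2_classical = EmpiricalCovariance().fit(X).mahalanobis(X)
t2_robust = MinCovDet(random_state=0).fit(X).mahalanobis(X)

print("flagged by classical T^2:", int(np.sum(t2_classical > limit)))
print("flagged by MCD-based T^2:", int(np.sum(t2_robust > limit)))
```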

9.
Process personnel constantly seek opportunities to improve their processes, and one of the essential steps in process improvement is to quickly recognize the starting time, or change point, of a process disturbance. The proposed approach combines the X̄ control chart with Bayesian estimation. We show that the control chart carries some information about the change point and that this information can be used to construct an informative prior. Two Bayes estimators, corresponding to the informative and a non-informative prior, are then considered along with the MLE, and their efficiencies are compared through a series of simulations. The results show that the Bayes estimator with the informative prior is more accurate and more precise when the means of the process before and after the change point are not too close. In addition, the efficiency of the Bayes estimator with the informative prior increases as the change point moves away from the origin.
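A sketch of the change-point estimation step is given below. The maximum likelihood estimator shown is the standard one for a step change in a normal mean; the Bayesian part is simplified by fixing the post-change mean and using an illustrative prior concentrated near a hypothetical chart signal time, which only stands in for the informative prior the article builds from the control chart.

```python
import numpy as np

def change_point_posterior(x, mu0, mu1, sigma, prior):
    """Posterior over the change point tau (first out-of-control index), assuming a
    known pre-change mean mu0 and, for simplicity, a known post-change mean mu1."""
    T = len(x)
    loglik = np.empty(T)
    for tau in range(T):
        resid0, resid1 = x[:tau] - mu0, x[tau:] - mu1
        loglik[tau] = -(np.sum(resid0**2) + np.sum(resid1**2)) / (2 * sigma**2)
    post = prior * np.exp(loglik - loglik.max())
    return post / post.sum()

rng = np.random.default_rng(2)
true_tau, T, mu0 = 60, 100, 0.0
x = np.concatenate([rng.normal(mu0, 1, true_tau), rng.normal(1.0, 1, T - true_tau)])

# MLE of the change point for a step change in a normal mean (shift size unknown)
t = np.arange(T)
tail_means = np.array([x[k:].mean() for k in range(T)])
tau_mle = int(np.argmax((T - t) * (tail_means - mu0) ** 2))

# Illustrative informative prior concentrated near a hypothetical chart signal time
signal_time = 75
prior = np.exp(-0.5 * ((np.arange(T) - (signal_time - 10)) / 10.0) ** 2)
post = change_point_posterior(x, mu0=mu0, mu1=1.0, sigma=1.0, prior=prior)
print("MLE of change point:", tau_mle, "  posterior mode:", int(np.argmax(post)))
```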

10.
A test for the equality of two or more two-parameter exponential distributions is suggested. It is developed on an intuitive basis and is obtained by combining two independent tests using the Fisher method (1950, pp. 99-101). The test is simple to apply and is asymptotically optimal in the sense of Bahadur efficiency (1960). A numerical example is discussed to illustrate its application in a real-world situation. Monte Carlo simulation is used to calculate its power, which is compared with that of the test suggested by Singh and Narayan (1983). The suggested test is often found to be more powerful.
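Fisher's method for combining two independent tests is simple to state: under both null hypotheses, −2(ln p₁ + ln p₂) is chi-square distributed with 4 degrees of freedom. The sketch below shows only the combination step; the two component p-values in the example are hypothetical stand-ins for the tests on the two exponential parameters.

```python
import numpy as np
from scipy import stats

def fisher_combination(p1, p2):
    """Fisher's method for two independent tests: under both null hypotheses,
    -2*(ln p1 + ln p2) follows a chi-square distribution with 4 df."""
    stat = -2.0 * (np.log(p1) + np.log(p2))
    return stat, stats.chi2.sf(stat, df=4)

# Illustrative use with hypothetical p-values from the two component tests
stat, p = fisher_combination(0.04, 0.20)
print(f"combined statistic = {stat:.3f}, combined p-value = {p:.4f}")
```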

11.
It has recently been shown that Shewhart control charts with variable sampling intervals (VSI) perform better than the traditional Shewhart chart with a fixed sampling interval in detecting process shifts. Most of this research assumes that the process data or measurements are normally distributed and independent and that the process is subject to only one assignable cause. In practice, however, these assumptions usually do not hold, and recent studies have relaxed only one or two of them at a time. In this paper, we consider the situation in which the process data are correlated, follow a non-normal distribution, and are subject to multiple assignable causes. For this case, a cost model for the economic design of the VSI X̄ control chart is developed, where the Burr distribution is employed to represent the non-normal distribution of the process data. To obtain the optimal values of the design parameters, a genetic algorithm is employed in which response surface methodology is applied. A numerical example is presented to show the applicability and effectiveness of the proposed methodology, and a sensitivity analysis is carried out to evaluate the effects of cost and input parameters on the performance of the chart.

12.
Non-normality and heteroscedasticity are common in applications. For the comparison of two samples in the non-parametric Behrens–Fisher problem, different tests have been proposed, but no single test can be recommended for all situations. Here, we propose combining two tests, the Welch t test based on ranks and the Brunner–Munzel test, within a maximum test. Simulation studies indicate that this maximum test, performed as a permutation test, controls the type I error rate and stabilizes the power. That is, it has good power characteristics for a variety of distributions, and also for unbalanced sample sizes. Compared to the single tests, the maximum test shows acceptable type I error control.
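A sketch of such a maximum test, calibrated by permutation, is given below; the test statistic is taken as the smaller of the two component p-values (one common way to define a maximum test), which may differ in detail from the authors' construction.

```python
import numpy as np
from scipy import stats

def max_test(x, y, n_perm=2000, seed=0):
    """Sketch of a permutation maximum test: for the observed and each permuted
    data set, compute the p-values of the Welch t test on ranks and of the
    Brunner-Munzel test, take the smaller one as the statistic, and compare the
    observed value with its permutation distribution."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n_x = len(x)

    def min_p(a, b):
        ranks = stats.rankdata(np.concatenate([a, b]))
        p_welch = stats.ttest_ind(ranks[:len(a)], ranks[len(a):], equal_var=False).pvalue
        p_bm = stats.brunnermunzel(a, b).pvalue
        return min(p_welch, p_bm)

    observed = min_p(x, y)
    perm_stats = []
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        perm_stats.append(min_p(pooled[idx[:n_x]], pooled[idx[n_x:]]))
    return float(np.mean(np.array(perm_stats) <= observed))   # permutation p-value

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 20)
y = rng.normal(0.8, 2.0, 35)                                  # shifted and heteroscedastic
print("permutation p-value of the maximum test:", max_test(x, y))
```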

13.
Preliminary tests of significance on crucial assumptions are often done before drawing inferences of primary interest. In a factorial trial, the data may be pooled across the columns or rows for making inferences concerning the efficacy of the drugs (the simple effect) in the absence of interaction. Pooling the data has the advantage of higher power due to the larger sample size. On the other hand, in the presence of interaction, such pooling may seriously inflate the type I error rate in testing for the simple effect.

A preliminary test for interaction is therefore in order. If this preliminary test is not significant at some prespecified level of significance, then pool the data for testing the efficacy of the drugs at a specified α level. Otherwise, use of the corresponding cell means for testing the efficacy of the drugs at the specified α is recommended. This paper demonstrates that this adaptive procedure may seriously inflate the overall type I error rate. Such inflation happens even in the absence of interaction.

One interesting result is that the type I error rate of the adaptive procedure depends on the interaction and the square root of the sample size only through their product. One consequence of this result is as follows. No matter how small the non-zero interaction might be, the inflation of the type I error rate of the always-pool procedure will eventually become unacceptable as the sample size increases. Therefore, in a very large study, even though the interaction is suspected to be very small but non-zero, the always-pool procedure may seriously inflate the type I error rate in testing for the simple effects.

It is concluded that the 2 × 2 factorial design is not an efficient design for detecting simple effects, unless the interaction is negligible.
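The inflation described above is easy to reproduce by simulation. The sketch below sets the simple effect of the drug at one level of the second factor to zero while keeping a non-zero interaction, applies the pool-or-not rule with a preliminary interaction test, and estimates the resulting type I error rate; the sample size and interaction size are illustrative.

```python
import numpy as np
from scipy import stats

def adaptive_type1(n=50, interaction=0.4, n_sim=4000, alpha=0.05, seed=0):
    """Monte Carlo sketch of the adaptive pool-or-not procedure in a balanced
    2x2 factorial. Cell means are chosen so that the simple effect of the drug
    at B=1 is zero while the interaction equals `interaction`; the returned
    rejection rate estimates the type I error for that simple effect."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        y11 = rng.normal(0.0, 1.0, n)           # A1, B1
        y21 = rng.normal(0.0, 1.0, n)           # A2, B1 (simple effect at B1 is zero)
        y12 = rng.normal(0.0, 1.0, n)           # A1, B2
        y22 = rng.normal(interaction, 1.0, n)   # A2, B2 (creates the interaction)
        # preliminary test of interaction: contrast mean(y11)-mean(y21)-mean(y12)+mean(y22)
        sp2 = np.mean([y.var(ddof=1) for y in (y11, y21, y12, y22)])
        contrast = y11.mean() - y21.mean() - y12.mean() + y22.mean()
        t_int = contrast / np.sqrt(sp2 * 4.0 / n)
        p_int = 2 * stats.t.sf(abs(t_int), df=4 * (n - 1))
        if p_int > alpha:                        # not significant -> pool over B
            a1, a2 = np.concatenate([y11, y12]), np.concatenate([y21, y22])
        else:                                    # significant -> use the B1 cells only
            a1, a2 = y11, y21
        if stats.ttest_ind(a1, a2, equal_var=True).pvalue <= alpha:
            rejections += 1
    return rejections / n_sim

print("estimated type I error of the adaptive procedure:", adaptive_type1())
```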

14.

The Mack–Wolfe test is the most frequently used non-parametric procedure for the umbrella alternative problem. In this paper, modifications of the Mack–Wolfe test are proposed for both known-peak and unknown-peak umbrellas. The exact mean and variance of the proposed test statistics under the null hypothesis are also derived. We compare these tests with some of the existing tests in terms of the type I error rate and power, and a real data example is presented.
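For reference, the known-peak Mack–Wolfe statistic sums the pairwise Mann–Whitney counts in favour of an increasing trend up to the peak and of a decreasing trend after it. The sketch below computes the statistic and calibrates it by permutation rather than by the exact null mean and variance derived in the paper.

```python
import numpy as np

def mann_whitney_count(a, b):
    """Number of pairs (x in a, y in b) with x < y (ties counted as 1/2)."""
    a, b = np.asarray(a), np.asarray(b)
    return np.sum(a[:, None] < b[None, :]) + 0.5 * np.sum(a[:, None] == b[None, :])

def mack_wolfe(samples, peak):
    """Mack-Wolfe statistic for a known peak (0-based group index)."""
    k, a_p = len(samples), 0.0
    for i in range(peak + 1):                     # increasing part up to the peak
        for j in range(i + 1, peak + 1):
            a_p += mann_whitney_count(samples[i], samples[j])
    for i in range(peak, k):                      # decreasing part after the peak
        for j in range(i + 1, k):
            a_p += mann_whitney_count(samples[j], samples[i])
    return a_p

def permutation_pvalue(samples, peak, n_perm=2000, seed=0):
    """Permutation calibration of the statistic (used here instead of the
    normal approximation based on its exact null mean and variance)."""
    rng = np.random.default_rng(seed)
    sizes = [len(s) for s in samples]
    pooled = np.concatenate(samples)
    observed = mack_wolfe(samples, peak)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        split = np.split(perm, np.cumsum(sizes)[:-1])
        count += mack_wolfe(split, peak) >= observed
    return count / n_perm

rng = np.random.default_rng(4)
samples = [rng.normal(m, 1.0, 15) for m in (0.0, 0.6, 1.2, 0.6, 0.0)]  # umbrella, peak at index 2
print("one-sided permutation p-value:", permutation_pvalue(samples, peak=2))
```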

15.
We show that the Bradley–Blackwood simultaneous test for equal means and equal variances in paired samples decomposes additively into separate tests of these hypotheses. The test of equal variances in the decomposition is the standard Pitman–Morgan procedure. The test of equal means in the decomposition is based on a t-ratio with (n − 2) degrees of freedom and carries the additional restriction that the variances are equal.
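The regression form of the Bradley–Blackwood test makes the decomposition concrete: regressing D = X − Y on S = X + Y and testing that both coefficients are zero gives an F statistic on (2, n − 2) degrees of freedom, while the slope alone corresponds to the Pitman–Morgan test of equal variances. A short sketch:

```python
import numpy as np
from scipy import stats

def bradley_blackwood(x, y):
    """Bradley-Blackwood simultaneous test of equal means and equal variances
    for paired data: regress D = X - Y on S = X + Y and test that both the
    intercept and the slope are zero with an F statistic on (2, n-2) df."""
    d, s = x - y, x + y
    n = len(d)
    s_c = s - s.mean()
    beta = np.sum(s_c * d) / np.sum(s_c ** 2)              # slope
    intercept = d.mean() - beta * s.mean()
    sse = np.sum((d - intercept - beta * s) ** 2)          # residual sum of squares
    f = ((np.sum(d ** 2) - sse) / 2.0) / (sse / (n - 2))
    p = stats.f.sf(f, 2, n - 2)
    # Pitman-Morgan component: slope (equivalently corr(D, S)) is zero iff Var(X) = Var(Y)
    r_ds = np.corrcoef(d, s)[0, 1]
    t_pm = r_ds * np.sqrt((n - 2) / (1 - r_ds ** 2))
    return f, p, t_pm

rng = np.random.default_rng(9)
x = rng.normal(0.0, 1.0, 40)
y = 0.6 * x + rng.normal(0.2, 1.2, 40)                     # different mean and variance
f, p, t_pm = bradley_blackwood(x, y)
print(f"F = {f:.2f}, p = {p:.4f}, Pitman-Morgan t = {t_pm:.2f}")
```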

16.
The uniform asymptotic normality of frequency polygons for ψ-mixing samples is investigated under the given conditions. Moreover, the corresponding rate of convergence is also derived, which is nearly O(n^{-1/6}) under the stated assumptions.

17.
In this paper, we consider the validity of the Jarque–Bera (JB) normality test, constructed from the residuals, for the innovations of GARCH (generalized autoregressive conditional heteroscedastic) models. It is shown that the asymptotic behavior of the original form of the JB test adopted in this paper is identical to that of the test statistic based on the true errors. A simulation study also confirms the validity of the original form, which outperforms other available normality tests.
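A small illustration: simulate a GARCH(1,1) series, standardize it by its conditional variances (which mimics the "true errors" benchmark mentioned above rather than residuals from a fitted model), and apply the Jarque–Bera test. The parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def simulate_garch11(n, omega=0.05, a=0.1, b=0.85, dist=rng.standard_normal):
    """Simulate a GARCH(1,1) series together with its conditional variances."""
    eps = dist(n)
    h = np.empty(n)
    x = np.empty(n)
    h[0] = omega / (1 - a - b)                             # unconditional variance
    x[0] = np.sqrt(h[0]) * eps[0]
    for t in range(1, n):
        h[t] = omega + a * x[t - 1] ** 2 + b * h[t - 1]
        x[t] = np.sqrt(h[t]) * eps[t]
    return x, h

# Standardizing with the true conditional variances recovers the i.i.d. innovations;
# in practice the standardized residuals would come from a fitted GARCH model.
x, h = simulate_garch11(2000)
std_resid = x / np.sqrt(h)
jb_stat, jb_p = stats.jarque_bera(std_resid)
print(f"Jarque-Bera statistic = {jb_stat:.2f}, p-value = {jb_p:.3f}")
```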

18.
19.
The case–control design for assessing the accuracy of a binary diagnostic test (BDT) is very frequent in clinical practice. This design consists of applying the diagnostic test to all of the individuals in a sample of those who have the disease and in another sample of those who do not. The sensitivity of the diagnostic test is estimated from the case sample and the specificity from the control sample. Another parameter used to assess the performance of a BDT is the weighted kappa coefficient, which depends on the sensitivity and specificity of the diagnostic test, on the disease prevalence and on the weighting index. In this article, confidence intervals are studied for the weighted kappa coefficient under a case–control design, and a method is proposed to calculate the sample sizes needed to estimate this parameter. The results are applied to a real example.

20.