Similar articles
A total of 20 similar articles were found (search time: 46 ms)
1.
Tolerance limits are limits that contain a certain proportion of the distribution of a characteristic with a given probability. 'They are used to make sure that the production will not be outside of specifications' (Amin & Lee, 1999). Usually, tolerance limits are constructed at the beginning of process monitoring. Since they are calculated only once, such tolerance limits cannot reflect changes in the tolerance level over the lifetime of the process. This research proposes an algorithm for constructing tolerance limits continuously over time for any given distribution. The algorithm makes use of the exponentially weighted moving average (EWMA) technique. It can be observed that the sample size required by this method decreases over time.
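The continuously updated limits can be sketched with a simple EWMA of the per-sample extremes (a minimal illustration, not the authors' algorithm; the smoothing constant `lam` and the use of sample minima/maxima as inputs are assumptions):

```python
def ewma_tolerance_limits(samples, lam=0.2):
    """EWMA-smoothed tolerance limits that adapt over the life
    of the process.

    samples : list of lists of measurements, in time order
    lam     : EWMA smoothing constant in (0, 1]
    Returns the final (lower, upper) smoothed limits.
    """
    lo = min(samples[0])
    hi = max(samples[0])
    for s in samples[1:]:
        lo = lam * min(s) + (1 - lam) * lo   # EWMA of sample minima
        hi = lam * max(s) + (1 - lam) * hi   # EWMA of sample maxima
    return lo, hi
```

With `lam` close to 1 the limits react quickly to recent samples; with `lam` close to 0 they change slowly.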

2.

Amin et al. (1999) developed an exponentially weighted moving average (EWMA) control chart, based on the smallest and largest observations in each sample. The resulting plot of the extremes suggests that the MaxMin EWMA may also be viewed as smoothed tolerance limits. Tolerance limits are limits that include a specific proportion of the population at a given confidence level. In the context of process control, they are used to make sure that production will not be outside specifications. Amin and Li (2000) provided the coverages of the MaxMin EWMA tolerance limits for independent data. In this article, it is shown how autocorrelation affects the confidence level of MaxMin tolerance limits, for a specified level of coverage of the population, and modified smoothed tolerance limits are suggested for autocorrelated processes.

3.
In this article, the Food and Drug Administration's (FDA) medical device substantial equivalence provision is briefly introduced, and some statistical methods useful for evaluating device equivalence are discussed. A sample size formula is derived for limits of agreement, which may be used to assess statistical equivalence in certain medical device situations. Simulation findings on the formula are presented, which can be used to guide sample size selection in common practical situations. Examples of the sample size procedure are given.
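The article's sample-size formula is not reproduced here, but the limits-of-agreement statistic it is built around can be sketched as follows (the 1.96 normal quantile for roughly 95% limits is the usual convention, an assumption on my part):

```python
from statistics import mean, stdev

def limits_of_agreement(x, y, z=1.96):
    """Bland-Altman-style limits of agreement between two devices.

    x, y : paired measurements from the two devices
    z    : normal quantile (1.96 gives ~95% limits)
    Returns (lower, upper) limits for the between-device differences.
    """
    d = [a - b for a, b in zip(x, y)]   # paired differences
    m, s = mean(d), stdev(d)
    return m - z * s, m + z * s
```

If the limits fall inside a pre-specified equivalence margin, the devices may be judged to agree adequately.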

4.
The results of Hoeffding (1956), Pledger and Proschan (1971), Gleser (1975) and Boland and Proschan (1983) are used to obtain Buehler (1957) 1-α lower confidence limits for the reliability of k of n systems of independent components when the subsystem data have equal sample sizes and the observed failures satisfy certain conditions. To the best of our knowledge, for k ≠ 1 or n, this is the first time the exact optimal lower confidence limits for system reliability have been given. The observed failure vectors are a generalization of key test results for k of n systems, k ≠ n (Soms (1984) and Winterbottom (1974)). Two examples applying the above theory are also given.

5.
6.
Control charts have been popularly used as a user-friendly yet technically sophisticated tool to monitor whether a process is in statistical control. These charts are basically constructed under the normality assumption, but in many practical situations this assumption may be violated. One such non-normal situation is monitoring process variability under a skewed parent distribution, for which we propose a Maxwell control chart. We introduce a pivotal quantity for the scale parameter of the Maxwell distribution which follows a gamma distribution. Probability limits and L-sigma limits are studied, along with performance measures based on the average run length and the power curve. To spare practitioners complex calculations, factors for constructing a control chart for monitoring the Maxwell parameter are given for different sample sizes and different false alarm rates. We also provide simulated data to illustrate the Maxwell control chart. Finally, a real-life example is given to show the importance of such a control chart.
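A minimal sketch of probability limits built on such a pivot: if X ~ Maxwell(sigma), then X²/sigma² follows a chi-square distribution with 3 degrees of freedom (a gamma law), so sum(X_i²)/sigma² over a sample of size n is chi-square with 3n degrees of freedom. The plotted statistic V = sum(x_i²) and the false-alarm rate `alpha = 0.0027` are my assumptions, not the paper's exact chart:

```python
from scipy.stats import chi2

def maxwell_chart_limits(n, sigma, alpha=0.0027):
    """Probability limits for monitoring the Maxwell scale parameter.

    Pivot: T = sum(X_i^2)/sigma^2 ~ chi-square(3n) when
    X_i ~ Maxwell(sigma).  Plot V = sum(x_i^2) per sample and
    signal when V falls outside (LCL, UCL).
    """
    lcl = sigma**2 * chi2.ppf(alpha / 2, 3 * n)
    ucl = sigma**2 * chi2.ppf(1 - alpha / 2, 3 * n)
    return lcl, ucl
```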

7.
In this paper, variables repetitive group sampling plans are developed based on the process capability index Cpk when the quality characteristic follows a normal distribution with unknown mean and variance. The sampling plan parameters, such as the sample size and the acceptance constant, are determined so as to minimize the average sample number. Symmetric and asymmetric cases, in percent defectives due to two specification limits, are dealt with for specified combinations of acceptable quality level and limiting quality level. Tables are provided and examples are given to illustrate the use of the proposed plans in practice.
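For reference, the textbook sample estimate of Cpk that such plans are indexed by can be computed as below (this shows only the index itself, not the repetitive-plan parameter search):

```python
from statistics import mean, stdev

def cpk(x, lsl, usl):
    """Sample estimate of the process capability index Cpk:
    min(USL - mean, mean - LSL) / (3 * s)."""
    m, s = mean(x), stdev(x)
    return min(usl - m, m - lsl) / (3 * s)
```

A process centered midway between the specification limits has Cpk equal to Cp; off-center processes are penalized through the min().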

8.
A methodology is developed for estimating consumer acceptance limits on a sensory attribute of a manufactured product. In concept these limits are analogous to engineering tolerances. The method is based on a generalization of Stevens' Power Law. This generalized law is expressed as a nonlinear statistical model. Instead of restricting the analysis to this particular case, a strategy is discussed for evaluating nonlinear models in general, since scientific models are frequently of nonlinear form. The strategy focuses on understanding the geometrical contrasts between linear and nonlinear model estimation and assessing the bias in estimation and the departures from a Gaussian sampling distribution. Computer simulation is employed to examine the behavior of nonlinear least squares estimation. In addition to the usual Gaussian assumption, a bootstrap sample reuse procedure and a general triangular distribution are introduced for evaluating the effects of a non-Gaussian or asymmetrical error structure. Recommendations are given for further model analysis based on the simulation results. In the case of a model for which estimation bias is not a serious issue, estimating functions of the model are considered. Application of these functions to the generalization of Stevens' Power Law leads to a means for defining and estimating consumer acceptance limits. The statistical form of the law and the model evaluation strategy are applied to consumer research data. Estimation of consumer acceptance limits is illustrated and discussed.

9.
Harley (1954) gave asymptotic expansions for the distribution function and for percentiles of the distribution of the bivariate normal sample correlation coefficient. To the stated order of approximation these expansions were incomplete, in that contributions from some higher cumulants were not taken into account. In this article the completed expansions are given, together with an asymptotic expansion yielding approximate confidence limits for the population correlation coefficient. Numerical comparisons indicate that the asymptotic expansions are superior to other suggested approximate methods.

10.

Tolerance limits are limits that include a specified proportion of the population at a given confidence level. They are used to make sure that the production will not be outside specifications. Tolerance limits are either designed based on the normality assumption, or nonparametric tolerance limits are established. In either case, no provision for autocorrelated processes is made in the available design tables of tolerance limits. It is shown how to construct tolerance limits to cover a specified proportion of the population when autocorrelation is present in the process. A comparison of four different tolerance limits is provided, and recommendations are given for choosing the "best" estimator of the process variability for the construction of tolerance limits.
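The independent-data, normal-theory construction that the article starts from can be sketched with Howe's approximation to the two-sided tolerance factor k, so that xbar ± k·s covers the stated proportion with the stated confidence (the choice of Howe's approximation is mine; the article's autocorrelation adjustment is not reproduced):

```python
from scipy.stats import norm, chi2

def normal_tolerance_factor(n, coverage=0.95, conf=0.95):
    """Howe's approximation to the two-sided normal tolerance
    factor k: the interval xbar +/- k*s covers `coverage` of the
    population with confidence `conf`, assuming independent
    normal observations."""
    z = norm.ppf((1 + coverage) / 2)        # coverage quantile
    nu = n - 1                              # degrees of freedom
    return z * ((nu * (1 + 1 / n)) / chi2.ppf(1 - conf, nu)) ** 0.5
```

Autocorrelation inflates the uncertainty in the variability estimate, which is why the unadjusted k from tables like this can give limits with less than the nominal confidence.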

11.
A location-sensitive one-sample test V(n,n), similar to the Wilcoxon two-sample test, was proposed by Riedwyl (1967) and studied by Carnal and Riedwyl (1972). A generalization for grouped data was given by Maag, Streit and Drouilly (1973). In the present paper we discuss the application of the test for grouped data. We present a table of the significance limits and discuss the approximation by means of the normal distribution.

12.
A nonparametric Shewhart-type control chart is proposed for monitoring the location of a continuous variable in a Phase I process control setting. The chart is based on the pooled median of the available Phase I samples, and the charting statistics are the counts (number of observations) in each sample that are less than the pooled median. An exact expression for the false alarm probability (FAP) is given in terms of the multivariate hypergeometric distribution, and this is used to provide tables of the control limits for a specified nominal FAP value (of 0.01, 0.05 and 0.10, respectively) and for some values of the sample size (n) and the number of Phase I samples (m). Some approximations are discussed in terms of the univariate hypergeometric and the normal distributions. A simulation study shows that the proposed chart performs as well as, and in some cases better than, an existing Shewhart-type chart based on the normal distribution. Numerical examples are given to demonstrate the implementation of the new chart.
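The charting statistics themselves are easy to sketch (only the counts are computed here; the exact multivariate-hypergeometric control limits from the paper's tables are not reproduced):

```python
def pooled_median_counts(samples):
    """Phase I charting statistics for the nonparametric chart:
    for each sample, count the observations strictly below the
    pooled median of all Phase I samples combined."""
    pooled = sorted(x for s in samples for x in s)
    N = len(pooled)
    if N % 2:                                   # odd: middle value
        med = pooled[N // 2]
    else:                                       # even: average of middles
        med = (pooled[N // 2 - 1] + pooled[N // 2]) / 2
    return [sum(x < med for x in s) for s in samples]
```

A sample whose count is unusually large or small relative to the limits suggests a location shift in that sample.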

13.
Assume that a sample is available from a population having an exponential distribution, and that l future samples are to be taken from the same population. This paper provides a formula for computing a one-sided lower simultaneous prediction limit that is to lie below the (ki − mi + 1)-st order statistic of a future sample of size ki, for i = 1, …, l, based on the sample mean of the past sample. Tables of factors for one-sided lower simultaneous prediction limits are provided. Such limits are of practical importance in determining acceptance criteria and predicting system survival times.

14.
This paper develops a Bayesian control chart for the percentiles of the Weibull distribution, when both its in-control and out-of-control parameters are unknown. The Bayesian approach enhances parameter estimates for the small sample sizes that occur when monitoring rare events, such as in high-reliability applications. The chart monitors the parameters of the Weibull distribution directly, instead of transforming the data as most Weibull-based charts do in order to meet the normality assumption. The chart uses accumulated knowledge resulting from the likelihood of the current sample combined with the information given by both the initial prior knowledge and all the past samples. The chart is adaptive because its control limits change (e.g. narrow) during Phase I. An example is presented and good average run length properties are demonstrated.

15.
It is often of interest in survival analysis to test whether the distribution of lifetimes from which the sample under study was derived is the same as a reference distribution. The latter can be specified on the basis of previous studies or on subject matter considerations. In this paper several tests are developed for the above hypothesis, suitable for right-censored observations. The tests are based on modifications of Moses' one-sample limits of some classical two-sample rank tests. The asymptotic distributions of the test statistics are derived, consistency is established for alternatives which are stochastically ordered with respect to the null, and Pitman asymptotic efficiencies are calculated relative to competing tests. Simulated power comparisons are reported. An example is given with data on the survival times of lung cancer patients.

16.
With the advent of modern technology, manufacturing processes have become very sophisticated; a single quality characteristic can no longer reflect a product's quality. In order to establish performance measures for evaluating the capability of a multivariate manufacturing process, several new multivariate capability (NMC) indices, such as NMCp and NMCpm, have been developed over the past few years. However, the sample size determination for multivariate process capability indices has not been thoroughly considered in previous studies. Generally, the larger the sample size, the more accurate an estimation will be. However, too large a sample size may result in excessive costs. Hence, the trade-off between sample size and precision in estimation is a critical issue. In this paper, the lower confidence limits of the NMCp and NMCpm indices are used to determine the appropriate sample size. Moreover, a procedure for conducting the multivariate process capability study is provided. Finally, two numerical examples are given to demonstrate that the proper determination of sample size for multivariate process indices can achieve a good balance between sampling costs and estimation precision.

17.
This article describes a method for partitioning with respect to a control for the situation in which the treatment sample sizes are unequal, and also for the situation where the treatment sample sizes are equal except for a few missing values. Calculation of the critical values required for finding confidence limits is discussed, and tables are presented for the "almost equal" sample size case. An application of this method to length-of-stay data for congestive heart failure patients is also provided.

18.
The nonparametric estimation of a density function from sample observations contaminated with random noise is studied. The particular form of contamination under consideration is Y = X + Z, where Y is an observable random variable, Z is a random noise variable with known distribution, and X is an absolutely continuous random variable which cannot be observed directly. The finite-sample performance of a strongly consistent estimator for the density function of X is illustrated for different distributions. The estimator uses Fourier and kernel function estimation techniques and allows the user to choose constants which relate to bandwidth windows and limits of integration and which greatly affect the appearance and properties of the estimates. Numerical techniques for computing the estimated densities and for optimal selection of these constants are given.

19.
The power function distribution is often used to study the reliability of electrical components. In this paper, we model a heterogeneous population using a two-component mixture of power function distributions. A comprehensive simulation scheme covering a large number of parameter points is followed to highlight the properties and behavior of the estimates in terms of sample size, censoring rate, parameter values and the mixing proportion of the components. The parameters of the power function mixture are estimated and compared using the Bayes estimates. A simulated mixture data set with censored observations is generated by probabilistic mixing for computational purposes. Elegant closed-form expressions for the Bayes estimators and their variances are derived for the censored sample as well as for the complete sample. Some interesting comparisons and properties of the estimates are observed and presented. The system of three non-linear equations, which must be solved iteratively to compute the maximum likelihood (ML) estimates, is derived. The complete-sample expressions for the ML estimates and for their variances are also given. The components of the information matrix are constructed as well. Uninformative as well as informative priors are assumed for the derivation of the Bayes estimators. A real-life mixture data example is also discussed. The posterior predictive distribution with the informative gamma prior is derived, and the equations required to find the lower and upper limits of the predictive intervals are constructed. The Bayes estimates are evaluated under the squared error loss function.

20.
Guogen Shan, Statistics, 2018, 52(5): 1086-1095
In addition to the point estimate of the probability of response in a two-stage design (e.g. Simon's two-stage design for binary endpoints), confidence limits should be computed and reported. The current method of inverting the p-value function to compute the confidence interval does not guarantee coverage probability in a two-stage setting. The existing exact approach to calculating one-sided limits orders the sample space by the overall number of responses. This approach can be conservative because many sample points share the same limits. We propose a new exact one-sided interval based on the p-value ordering of the sample space. Exact intervals are computed by using binomial distributions directly, instead of a normal approximation. Both exact intervals preserve the nominal confidence level. The proposed exact interval based on the p-value generally performs better than the other exact interval with regard to the expected length and the simple average length of the confidence intervals.
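The single-stage building block of such exact limits, inverting the binomial upper tail, can be sketched as follows (a Clopper-Pearson-style one-sided lower limit; the paper's two-stage sample-space ordering is not reproduced):

```python
from math import comb

def binom_upper_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def exact_lower_limit(n, k, alpha=0.05, tol=1e-10):
    """One-sided exact lower confidence limit for a binomial
    proportion after observing k responses in n subjects:
    the largest p with P(X >= k | p) <= alpha, found by bisection."""
    if k == 0:
        return 0.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_upper_tail(n, k, mid) < alpha:
            lo = mid
        else:
            hi = mid
    return lo
```

In the two-stage setting, the same inversion is applied after the sample points have been ordered (by overall responses in the existing method, by p-value in the proposed one).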

