Similar documents
20 similar documents found; search time: 421 ms
1.
In this paper, a variables tightened-normal-tightened (TNT) two-plan sampling system based on the widely used capability index Cpk is developed for product acceptance determination when the quality characteristic of products has two-sided specification limits and follows a normal distribution. The operating procedure and operating characteristic (OC) function of the variables TNT two-plan sampling system, and the conditions for solving the plan parameters, are provided. The behavior of the OC curves of the variables TNT sampling system under various parameters is also studied, and the system is compared with the variables single tightened inspection plan and single normal inspection plan.
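The capability index Cpk on which this plan is built has a standard textbook definition; the following is a minimal illustrative sketch of that definition only, not the paper's plan-selection procedure (the limits and values are made up for the example):

```python
def cpk(mean, sigma, lsl, usl):
    # Cpk = min((USL - mu)/(3*sigma), (mu - LSL)/(3*sigma));
    # it reflects both the process spread and how well centered it is
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# A process centered midway between the limits
print(round(cpk(10.0, 0.5, 8.5, 11.5), 3))  # 1.0
```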

2.
A Shewhart procedure is used to simultaneously control the standard deviations of quality characteristics assumed to have a bivariate normal distribution. Following Krishnaiah et al. (1963), we use the bivariate chi-square distribution to determine the probabilities of out-of-control signals and thus the respective average run lengths (ARLs). Results from an example indicate that, for both one-sided and two-sided cases, signals occur only slightly more quickly for changes in the process standard deviations for uncorrelated variables than for correlated variables.
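For a Shewhart chart that signals independently at each sampling epoch, the run length is geometric, so the ARL is simply the reciprocal of the signal probability; a sketch under that standard assumption (not the bivariate chi-square computation itself):

```python
def arl(signal_prob):
    # Geometric run length: ARL = 1 / P(signal at a given sample)
    return 1.0 / signal_prob

# The classical 3-sigma false-alarm probability of about 0.0027
print(round(arl(0.0027), 1))  # 370.4
```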

3.
ABSTRACT

A vast majority of the literature on the design of sampling plans by variables assumes that the distribution of the quality characteristic variable is normal, and that only its mean varies while its variance is known and remains constant. But, for many processes, the quality variable is nonnormal, and either one or both of the mean and the variance of the variable can vary randomly. In this paper, an optimal economic approach is developed for the design of plans for acceptance sampling by variables having inverse Gaussian (IG) distributions. The advantage of developing an IG-distribution-based model is that it can be used for diverse quality variables ranging from highly skewed to almost symmetrical. We assume that the process has two independent assignable causes, one of which shifts the mean of the quality characteristic variable of a product and the other the variance. Since a product quality variable may be affected by any one or both of the assignable causes, three likely cases of shift (mean shift only, variance shift only, and both mean and variance shift) are considered in the modeling process. For all of these scenarios, mathematical models giving the cost of using a variables acceptance sampling plan are developed. The cost models are optimized to select the optimal sampling plan parameters, such as the sample size and the upper and lower acceptance limits. A large set of numerical example problems is solved for all the cases. Some of these numerical examples are also used to depict the consequences of: 1) assuming that the quality variable is normally distributed when the true distribution is IG, and 2) using sampling plans from the existing standards instead of the optimal plans derived by the methodology developed in this paper. Sensitivities of some of the model input parameters are also studied using the analysis of variance technique. The information obtained on the parameter sensitivities can be used by model users in prudently allocating resources for the estimation of input parameters.
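The claim that the IG family spans shapes from highly skewed to almost symmetrical can be seen from its skewness, which is 3*sqrt(mu/lambda); a small sketch of this standard IG fact (not part of the paper's cost model):

```python
import math

def ig_skewness(mu, lam):
    # Skewness of IG(mu, lam) is 3*sqrt(mu/lam): a large shape
    # parameter lam relative to mu gives a nearly symmetric curve
    return 3.0 * math.sqrt(mu / lam)

print(ig_skewness(1.0, 1.0))             # 3.0  (highly skewed)
print(round(ig_skewness(1.0, 100.0), 2)) # 0.3  (almost symmetrical)
```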

4.
Process capability indices (PCIs) are extensively used in the manufacturing industries to confirm whether manufactured products meet their specifications. PCIs can be used to judge process precision, process accuracy, and process performance. Developing sampling plans based on PCIs is therefore essential, and such plans are very useful for maintaining and improving product quality in the manufacturing industries. In view of this, we propose a variables sampling system based on the process capability index Cpmk, which takes into account both process yield and process loss, when the quality characteristic under study has two-sided specification limits. The proposed sampling system will be effective in compliance testing. The advantages of this system over existing sampling plans are also discussed. To determine the optimal parameters, tables are constructed by formulating the problem as a nonlinear program in which the average sample number is minimized subject to the producer's and consumer's risks.
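Cpmk combines the yield sensitivity of Cpk with a Taguchi-style deviation-from-target term; a sketch of its standard definition (illustrative only, not the paper's optimization procedure):

```python
import math

def cpmk(mean, sigma, lsl, usl, target):
    # Cpmk = min(USL - mu, mu - LSL) / (3 * sqrt(sigma^2 + (mu - T)^2));
    # the (mu - T)^2 term penalizes deviation from target (process loss)
    tau = math.sqrt(sigma ** 2 + (mean - target) ** 2)
    return min(usl - mean, mean - lsl) / (3 * tau)

# On target, Cpmk reduces to Cpk
print(round(cpmk(10.0, 0.5, 8.5, 11.5, 10.0), 3))  # 1.0
```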

5.
In some applications, the quality of the process or product is characterized and summarized by a functional relationship between a response variable and one or more explanatory variables. Profile monitoring is a technique for checking the stability of this relationship over time. Existing linear profile monitoring methods usually assume the error distribution to be normal. However, this assumption may not always hold in practice. To address this situation, we propose a method for profile monitoring under the framework of generalized linear models when the relationship between the mean and variance of the response variable is known. Two multivariate exponentially weighted moving average control schemes are proposed based on the estimated profile parameters obtained using a quasi-likelihood approach. The performance of the proposed methods is evaluated by simulation studies. Furthermore, the proposed method is applied to a real data set, and the R code for profile monitoring is made available to users.
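The exponentially weighted moving average recursion that underlies schemes like the two proposed here is simple; a univariate sketch of the core update only (the paper's charts are multivariate):

```python
def ewma(values, lam=0.2, start=0.0):
    # z_t = lam * x_t + (1 - lam) * z_{t-1}: a small lam gives long
    # memory, which helps detect small sustained shifts
    z = start
    path = []
    for x in values:
        z = lam * x + (1 - lam) * z
        path.append(z)
    return path

print([round(z, 3) for z in ewma([1.0, 1.0, 1.0])])  # [0.2, 0.36, 0.488]
```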

6.
ABSTRACT

In some applications, the quality of a process or product is best characterized by a functional relationship between a response variable and one or more explanatory variables. Profile monitoring is used to understand and to check the stability of this relationship or curve over time. In existing simple linear regression profile models, it is often assumed that the data follow a single-mode distribution and consequently that the noise of the functional relationship follows a normal distribution. However, in some applications, the data may follow a multimodal distribution. In this case, it is more appropriate to assume that the data follow a mixture profile. In this study, we focus on a mixture simple linear profile model and propose new control schemes for Phase II monitoring. The proposed methods are shown to have good performance in a simulation study.

7.
ABSTRACT

The distributions of algebraic functions of random variables are important in the theory of probability and statistics and in other areas such as engineering, reliability, and actuarial applications, and many results based on various distributions are available in the literature. The two-sided power distribution is defined on a bounded range and generalizes the uniform, triangular, and power-function probability distributions. This paper gives the exact distribution of the product of two independent two-sided power-distributed random variables in a computable representation. The percentiles of the product are then computed, and a real data application is given.

8.
Sampling plans are a useful tool for deciding whether large lots should be accepted or rejected. In this paper we introduce double sampling plans by variables for a normally distributed characteristic with known standard deviation and two-sided specification limits. These plans fulfill the classical two-points condition on the operating characteristic (OC) and feature a minimal maximum average sample number (ASN).
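The ASN being minimized is driven by how often a second sample is needed; an attributes-style sketch of this idea with hypothetical plan parameters (the paper's plans are by variables, so this only illustrates the ASN mechanics):

```python
from math import comb

def binom_pmf(k, n, p):
    # Binomial probability of k nonconforming items in a sample of n
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def asn_double(n1, n2, c1, c2, p):
    # A second sample of size n2 is drawn only when the first-sample
    # count of nonconforming items falls strictly between c1 and c2
    p_second = sum(binom_pmf(k, n1, p) for k in range(c1 + 1, c2 + 1))
    return n1 + n2 * p_second

# Perfect quality never triggers the second sample
print(asn_double(50, 50, 1, 3, 0.0))  # 50.0
```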

9.
In some applications of statistical quality control, the quality of a process or product is best characterized by a functional relationship between a response variable and one or more explanatory variables. This relationship is referred to as a profile. In certain cases, the quality of a process or product is better described by a nonlinear profile that does not follow a specific parametric model. In these circumstances, nonparametric approaches, with their greater flexibility in modeling complicated profiles, are adopted. In this research, the spline smoothing method is used to model a complicated nonlinear profile, and a Hotelling T2 control chart based on the spline coefficients is used to monitor the process. After an out-of-control signal is received, a maximum likelihood estimator is employed for change-point estimation. The simulation studies, which include both global and local shifts, provide an appropriate evaluation of the proposed estimation and monitoring procedure. The results indicate that the proposed method detects large global shifts quickly and is also very sensitive in detecting local shifts.
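The monitoring statistic is the usual Hotelling T2 applied to the fitted spline coefficients; a two-dimensional sketch with a hand-inverted 2x2 covariance (illustrative of the statistic only, not the paper's chart design):

```python
def t2_2d(x, mean, cov):
    # T2 = (x - mean)' * inv(S) * (x - mean) for a 2-d coefficient
    # vector, using the closed-form inverse of a symmetric 2x2 matrix
    a, b = x[0] - mean[0], x[1] - mean[1]
    (s11, s12), (_, s22) = cov
    det = s11 * s22 - s12 * s12
    return (s22 * a * a - 2 * s12 * a * b + s11 * b * b) / det

# With identity covariance, T2 is the squared Euclidean distance
print(t2_2d((1.0, 0.0), (0.0, 0.0), ((1.0, 0.0), (0.0, 1.0))))  # 1.0
```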

10.
A common task in quality control is to determine a control limit for a product at the time of release that incorporates its risk of degradation over time. Such a limit for a given quality measurement will be based on empirical stability data, the intended shelf life of the product and the stability specification. The task is particularly important when the registered specifications for release and stability are equal. We discuss two relevant formulations and their implementations in both a frequentist and a Bayesian framework. The first ensures that the risk of a batch failing the specification is comparable at release and at the end of shelf life. The second screens out batches at release time that are at high risk of failing the stability specification at the end of their shelf life. Although the second formulation seems more natural from a quality assurance perspective, it usually renders a control limit that is too stringent. In this paper we provide theoretical insight into this phenomenon, and introduce a heat-map visualisation that may help practitioners to assess the feasibility of implementing a limit under the second formulation. We also suggest a solution when it is infeasible. In addition, the current industrial benchmark is reviewed and contrasted with the two formulations. Computational algorithms for both formulations are laid out in detail, and illustrated on a dataset.

11.
In many situations, the quality of a process or product may be better characterized and summarized by a relationship between the response variable and one or more explanatory variables. Parameter estimation is the first step in constructing control charts. Outliers may distort classical estimators and lead to incorrect conclusions. To remedy this problem, robust methods have been developed. In this article, a robust method is introduced for estimating the parameters of simple linear profiles. Two weight functions, Huber and bisquare, are applied in the estimation algorithm. In addition, a method for robust estimation of the error-term variance is proposed. Simulation studies are conducted to evaluate the performance of the proposed estimators, as well as the classical one, in the presence and absence of outliers under different scenarios by means of the MSE criterion. The results reveal that the robust estimators proposed in this research perform as well as the classical estimators in the absence of outliers and considerably better when outliers exist. In one scenario, the maximum variance estimate obtained from the classical estimator is 10.9, while the proposed robust estimators yield 1.66 and 1.27, against an actual value of 1.
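The Huber and bisquare weight functions named above have standard forms; a sketch using the conventional tuning constants (1.345 and 4.685, chosen for roughly 95% efficiency at the normal), not the paper's full estimation algorithm:

```python
def huber_weight(r, c=1.345):
    # Huber: full weight for small residuals, then weight ~ c/|r|
    return 1.0 if abs(r) <= c else c / abs(r)

def bisquare_weight(r, c=4.685):
    # Tukey bisquare: smooth downweighting, exactly zero beyond c,
    # so gross outliers are ignored entirely
    u = r / c
    return (1.0 - u * u) ** 2 if abs(r) < c else 0.0

print(huber_weight(0.5))      # 1.0
print(bisquare_weight(10.0))  # 0.0
```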

12.
Abstract

Profile monitoring is applied when the quality of a product or process can be determined by the relationship between a response variable and one or more independent variables. In most Phase II monitoring approaches, it is assumed that the process parameters are known. However, this assumption is not valid in many real-world applications; in practice, the process parameters must be estimated from in-control Phase I samples. In this study, the effect of parameter estimation on the performance of four Phase II control charts for monitoring multivariate multiple linear profiles is evaluated. In addition, since the accuracy of parameter estimation has a significant impact on the performance of Phase II control charts, a new cluster-based approach is developed to address this effect. Moreover, we evaluate and compare the performance of the proposed approach with a previous approach in terms of two metrics, the average of the average run length and its standard deviation, which account for practitioner-to-practitioner variability. In this approach, it is not necessary to know the distribution of the chart statistic. Therefore, in addition to ease of use, the proposed approach can be applied to other types of profiles. The superior performance of the proposed method compared with the competing one is shown in terms of both metrics. Based on the results obtained, our method yields less bias, with small-variance Phase I estimates, compared with the competing approach.
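The two metrics used here, often abbreviated AARL and SDARL, summarize run-length behavior across many practitioners' Phase I estimates; a minimal sketch of how they are computed from a list of per-practitioner ARLs (made-up values):

```python
from statistics import mean, stdev

def aarl_sdarl(arls):
    # AARL averages the attained ARL across practitioners' estimated
    # limits; SDARL measures the spread between practitioners
    return mean(arls), stdev(arls)

a, s = aarl_sdarl([350.0, 370.0, 390.0])
print(a, s)  # 370.0 20.0
```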

13.
Products that do not meet the specification criteria of an intended buyer represent a challenge to the producer in maximizing profits. To determine the optimal process target (OPT) at a profit-maximizing level, Shao et al. (1999) developed a model involving multiple markets and finished products whose holding costs are independent of their quality. Previous investigations have treated the holding cost as a fixed amount or as a normal random variable independent of the quality characteristic (QC) of the product. This study considers more general cases in which the holding cost can be a truncated normal random variable that depends on the QC of the product.

14.
In biomedical studies, it is of substantial interest to develop risk prediction scores using high-dimensional data, such as gene expression data, for clinical endpoints that are subject to censoring. In the presence of well-established clinical risk factors, investigators often prefer a procedure that also adjusts for these clinical variables. While accelerated failure time (AFT) models are a useful tool for the analysis of censored outcome data, they assume that covariate effects on the logarithm of time-to-event are linear, which is often unrealistic in practice. We propose to build risk prediction scores through regularized rank estimation in partly linear AFT models, where high-dimensional data such as gene expression data are modeled linearly and important clinical variables are modeled nonlinearly using penalized regression splines. We show through simulation studies that our model has better operating characteristics than several existing models. In particular, we show that there is a non-negligible effect on prediction as well as feature selection when nonlinear clinical effects are misspecified as linear. This work is motivated by a recent prostate cancer study, in which investigators collected gene expression data along with established prognostic clinical variables, and the primary endpoint is time to prostate cancer recurrence. We analyzed the prostate cancer data and evaluated the prediction performance of several models based on the extended c statistic for censored data, showing that 1) the relationship between the clinical variable prostate-specific antigen (PSA) and prostate cancer recurrence is likely nonlinear, i.e., the time to recurrence decreases as PSA increases and starts to level off when PSA becomes greater than 11; 2) correct specification of this nonlinear effect improves performance in prediction and feature selection; and 3) the addition of gene expression data does not seem to further improve the performance of the resultant risk prediction scores.

15.
This paper presents the results of market share modelling for individual segments of the UK tea market using scanner panel data. The study is novel in its introduction of volatility as one of the bases for segmentation, others being usage, loyalty, and switching between product types and product forms. The segmentation is undertaken on an a priori, quasi-experimental basis, allowing nested tests of constancy of elasticities across segments. The estimated equations (using seemingly unrelated regressions) benefit from extensive specification, including four different forms for the price variable, four variables for promotion, and six product-characteristic, distribution and macroeconomic variables. Tests for the constancy of the parameters across segments show the segmentation to be successful.

16.
In this paper, we are concerned with pure statistical Shewhart control charts for the scale parameter δ of the three-parameter Weibull control variable X, where λ, δ and β are the location, scale and shape parameters, respectively, with fixed (FSI) and variable (VSI) sampling intervals. The parameters λ and β are assumed to be known. We consider two-sided, and lower and upper one-sided, Shewhart control charts and their FSI and VSI versions. They jointly control the mean and the variance of the Weibull control variable X. The pivotal statistic of these control charts is the maximum-likelihood estimator of δ for the Nth random sample X_N = (X_1N, X_2N, ..., X_nN) of the Weibull control variable X. The design and performance of these control charts are studied. Two criteria, the 'comparability criterion' (or 'matched criterion') under control and the 'primordial criterion', are imposed on their design. The performance of the charts is measured using the average time to signal. For the VSI versions, the constant which defines the partition of the 'continuation region' is obtained through the 'comparability criterion' under control. The monotonic behaviour of the average time to signal in terms of the parameters θ (the magnitude of the shift suffered by the target value δ0), β and n is studied. We show that the average time to signal of all the control charts studied in this paper does not depend on the value of the location parameter λ or on δ0 and, under control, does not depend on the shape parameter β, when Δ (the probability of a false alarm) and n (the sample size) are fixed. All control charts satisfy the 'primordial criterion' and, for fixed β, on average they all (except the two-sided VSI chart, for which we were not able to obtain a proof) detect the shift more quickly as θ increases. We conjecture, and the numerical example considered does not contradict this, that the same is true for the two-sided VSI control chart.
We prove that, under the average-time-to-signal criterion, the VSI versions are always preferable to their FSI versions. In the case of one-sided control charts, under the 'comparability criterion', the VSI version is always preferable to the FSI version, and this advantage increases with β and the extent of the shift. Our one-sided control charts perform better and have more powerful statistical properties than our two-sided control chart. A numerical example with n = 5, δ0 = 1, β = 0.5, 1.0, 2.0, and Δ = 1/370.4 is presented for the two-sided, and the lower and upper one-sided, control charts. These numerical results are presented in tables and figures. The joint influence of the parameters θ and β on the average time to signal is illustrated.
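With the location λ and shape β known, the maximum-likelihood estimator of the Weibull scale δ used as the pivotal statistic has a closed form; a sketch of that standard MLE under those assumptions (not the charts' control limits):

```python
def weibull_scale_mle(sample, beta, lam=0.0):
    # For known shape beta and location lam, the MLE of the scale is
    # delta_hat = ( (1/n) * sum (x_i - lam)^beta )^(1/beta)
    n = len(sample)
    return (sum((x - lam) ** beta for x in sample) / n) ** (1.0 / beta)

print(weibull_scale_mle([1.0, 1.0, 1.0], 2.0))  # 1.0
```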

17.
This article proposes a new data-based prior distribution for the error variance in a Gaussian linear regression model, when the model is used for Bayesian variable selection and model averaging. For a given subset of variables in the model, this prior has a mode that is an unbiased estimator of the error variance but is suitably dispersed to make it uninformative relative to the marginal likelihood. The advantage of this empirical Bayes prior for the error variance is that it is centred and dispersed sensibly and avoids the arbitrary specification of hyperparameters. The performance of the new prior is compared to that of a prior proposed previously in the literature using several simulated examples and two loss functions. For each example our paper also reports results for the model that orthogonalizes the predictor variables before performing subset selection. A real example is also investigated. The empirical results suggest that for both the simulated and real data, the performance of the estimators based on the prior proposed in our article compares favourably with that of a prior used previously in the literature.

18.
19.
This paper proposes a variables quick switching system in which the quality characteristic of interest follows a normal distribution and is evaluated through a process loss function. Most variables sampling plans available in the literature focus only on the fraction nonconforming and do not distinguish among products that fall within the specification limits. Products that fall within the specification limits may still be unsatisfactory if their mean is too far from the target value. Developing a sampling plan that considers process loss is therefore essential in these situations. Based on this idea, we develop a variables quick switching system based on the process loss function for processes requiring low process loss. Tables are constructed for the selection of the parameters of the variables quick switching system for a given acceptable quality level and limiting quality level. The results are explained with examples.
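The point that in-specification products can still be unsatisfactory is the motivation for the quadratic (Taguchi-style) process loss; a minimal sketch of that standard loss, with a made-up loss coefficient (not the paper's switching rules):

```python
def quadratic_loss(y, target, k=1.0):
    # Taguchi-style loss L = k * (y - T)^2: an item inside the
    # specification limits still incurs loss if it is off target
    return k * (y - target) ** 2

print(quadratic_loss(10.0, 10.0))  # 0.0
print(quadratic_loss(10.5, 10.0))  # 0.25
```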

20.
Multivariate failure time data, also referred to as correlated or clustered failure time data, often arise in survival studies when each study subject may experience multiple events. Statistical analysis of such data needs to account for intra-cluster dependence. In this article, we consider a bivariate proportional hazards model using a vector hazard rate, in which the covariates under study have different effects on the two components of the vector hazard rate function. Estimation of the parameters as well as the baseline hazard function is discussed. Properties of the estimators are investigated. We illustrate the method using two real-life data sets. A simulation study is reported to assess the performance of the estimators.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号