Similar Articles
1.
Process capability indices have been widely used in the manufacturing industry to provide numerical measures of process performance. The index Cp measures process precision (or product consistency). The index Cpm, sometimes called the Taguchi index, reflects process centring ability and process loss. Most research work related to Cp and Cpm assumes no gauge measurement errors, an assumption that rarely reflects real situations even with highly advanced measuring instruments. Conclusions drawn from such process capability analyses are therefore unreliable and misleading. In this paper, we conduct a sensitivity investigation of the capability indices Cp and Cpm in the presence of gauge measurement errors. Because of the random variation in the data, we consider capability testing for Cp and Cpm to obtain lower confidence bounds and critical values for the true process capability when gauge measurement errors are unavoidable. The results show that estimators computed from sample data contaminated by measurement errors severely underestimate the true capability, and the power of the capability test is correspondingly reduced. To recover the true process capability, adjusted confidence bounds and critical values are presented to practitioners for factory applications.
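
Below is a minimal Python sketch of the attenuation effect described above, assuming a normal process with hypothetical specification limits USL/LSL, target T, and an additive, independent gauge error; it illustrates the phenomenon only and does not reproduce the paper's adjusted bounds or critical values.

```python
import numpy as np

rng = np.random.default_rng(1)
USL, LSL, T = 10.6, 9.4, 10.0            # hypothetical specification limits and target
mu, sigma_p, sigma_m = 10.0, 0.15, 0.08  # true process and gauge (measurement) std devs

def cp(x, usl=USL, lsl=LSL):
    return (usl - lsl) / (6 * np.std(x, ddof=1))

def cpm(x, usl=USL, lsl=LSL, target=T):
    tau = np.sqrt(np.var(x, ddof=1) + (np.mean(x) - target) ** 2)
    return (usl - lsl) / (6 * tau)

true_parts = rng.normal(mu, sigma_p, size=100)           # actual item values
observed = true_parts + rng.normal(0, sigma_m, size=100)  # values seen through the gauge

print("Cp  true vs observed:", cp(true_parts), cp(observed))
print("Cpm true vs observed:", cpm(true_parts), cpm(observed))
# The observed estimates are systematically smaller, because the measured variance
# is sigma_p^2 + sigma_m^2; this is the attenuation the paper adjusts for.
```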

2.
Process capability index Cp has been the most popular index used in the manufacturing industry to provide numerical measures of process precision. For normally distributed processes under automatic full inspection, the inspected processes follow truncated normal distributions. In this article, we provide the formulae for the moments used in the Edgeworth approximation of the precision measure Cp for truncated normally distributed processes. Based on the developed moments, lower confidence bounds for various sample sizes and confidence levels are provided and tabulated. Consequently, practitioners can use these lower confidence bounds to determine whether their manufacturing processes meet the preset precision requirements.
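
As an illustration, the sketch below uses scipy.stats.truncnorm to obtain the moments of a truncated normal process and a Monte Carlo stand-in (rather than the paper's Edgeworth expansion) for the sampling distribution of the naive Cp estimator computed from inspected data; all parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import truncnorm

# Hypothetical setup: after 100% inspection, shipped items follow a normal
# distribution truncated to the specification interval [LSL, USL].
LSL, USL, mu, sigma = 9.4, 10.6, 10.0, 0.3
a, b = (LSL - mu) / sigma, (USL - mu) / sigma
dist = truncnorm(a, b, loc=mu, scale=sigma)
print("mean, var, skew, excess kurtosis:", dist.stats(moments="mvsk"))

# Monte Carlo stand-in for the sampling distribution of the naive Cp estimator
# computed from truncated data (the paper approximates this analytically).
rng = np.random.default_rng(0)
n, reps = 50, 2000
cp_hat = np.array([(USL - LSL) / (6 * np.std(dist.rvs(n, random_state=rng), ddof=1))
                   for _ in range(reps)])
print("Cp of the untruncated process:", (USL - LSL) / (6 * sigma))
print("estimator from truncated data, 5th/50th/95th percentiles:",
      np.quantile(cp_hat, [0.05, 0.5, 0.95]).round(3))
```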

3.
This paper introduces a multiscale Gaussian convolution model of a Gaussian mixture (MGC-GMM) via the convolution of the GMM with a multiscale Gaussian window function. It is found that the MGC-GMM is still a Gaussian mixture model, and its parameters can be mapped back to the parameters of the GMM. Meanwhile, the multiscale probability density function (MPDF) of the MGC-GMM can be viewed as the mathematical expectation of a random process induced by the Gaussian window function and the GMM, and it can be estimated directly from sample data. Based on the estimated MPDF, a novel algorithm, denoted MGC, is proposed for model selection and parameter estimation of the GMM: the component number and the means of the GMM are determined by the number and locations of the maximum points of the MPDF, and numerical algorithms for the weight and variance parameters of the GMM are derived. The MGC is suitable for GMMs with diagonal covariance matrices. An MGC-EM algorithm is also presented for the generalized GMM, where the GMM is estimated using the EM algorithm with the estimates from the MGC as initial parameters. The proposed algorithms are tested on a series of sample sets simulated from given GMM models, and the results show that they can effectively estimate the GMM.
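
The sketch below conveys the general idea under simplifying assumptions: a one-dimensional sample, a kernel density estimate standing in for the Gaussian-window MPDF, local maxima used for the component count and initial means, and scikit-learn's EM refinement standing in for the MGC-EM step; it is not the paper's exact algorithm.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelmax
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Sample from a hypothetical 1-D three-component GMM.
x = np.concatenate([rng.normal(-4, 0.7, 300), rng.normal(0, 1.0, 400), rng.normal(5, 0.8, 300)])

# Smoothed density (a kernel estimate plays the role of the multiscale PDF here).
grid = np.linspace(x.min(), x.max(), 1000)
pdf = gaussian_kde(x)(grid)

# Component number and initial means from the local maxima of the smoothed density.
peaks = argrelmax(pdf)[0]
init_means = grid[peaks].reshape(-1, 1)
print("detected components:", len(peaks), "initial means:", init_means.ravel())

# EM refinement, initialised at the detected means (the MGC-EM idea).
gmm = GaussianMixture(n_components=len(peaks), means_init=init_means, random_state=0)
gmm.fit(x.reshape(-1, 1))
print("weights:", gmm.weights_.round(3), "means:", gmm.means_.ravel().round(3))
```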

4.
Some new upper and lower bounds for the extinction probability of a Galton–Watson process are presented. They are very easy to compute and can be used even if the offspring distribution has infinite variance. These new bounds are numerically compared to previously discussed bounds. Some definite guidelines are given concerning when these new bounds are preferable. Some open problems are also discussed.
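
For concreteness, the following sketch computes the extinction probability numerically as the smallest fixed point of the offspring probability generating function, together with the trivial bounds p0 ≤ q ≤ 1; the offspring distribution is hypothetical and the paper's sharper bounds are not reproduced here.

```python
import numpy as np

# Hypothetical offspring distribution on {0, 1, 2, 3}: P(Z = k) = p[k].
p = np.array([0.3, 0.3, 0.25, 0.15])
m = np.sum(np.arange(len(p)) * p)          # mean offspring number
f = lambda s: np.polyval(p[::-1], s)       # pgf f(s) = sum_k p_k s^k

# Extinction probability q = smallest fixed point of f on [0, 1],
# obtained by iterating f from 0 (monotone convergence).
q = 0.0
for _ in range(10_000):
    q_new = f(q)
    if abs(q_new - q) < 1e-12:
        break
    q = q_new

print(f"mean offspring m = {m:.3f}, extinction probability q = {q:.6f}")
print("trivial bounds: p0 <= q <= 1 ->", p[0], "<=", round(q, 6), "<= 1")
```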

5.
This paper considers the estimation of Cobb-Douglas production functions using panel data covering a large sample of companies observed for a small number of time periods. GMM estimators have been found to produce large finite-sample biases when using the standard first-differenced estimator. These biases can be dramatically reduced by exploiting reasonable stationarity restrictions on the initial conditions process. Using data for a panel of R&D-performing US manufacturing companies, we find that the additional instruments used in our extended GMM estimator yield much more reasonable parameter estimates.

6.
The interpretation of Cpk, a common measure of process capability, and of confidence limits for it is based on the assumption that the process is normally distributed. The non-parametric but computer-intensive method called the bootstrap is introduced, and three bootstrap confidence interval estimates for Cpk are defined. An initial simulation of two processes (one normal and the other highly skewed) is presented and discussed.
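
A minimal sketch of the percentile interval, one of the standard bootstrap constructions compared in such studies, is given below; the specification limits and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
USL, LSL = 10.6, 9.4                      # hypothetical specification limits
x = rng.normal(10.05, 0.17, size=60)      # observed process data

def cpk(sample):
    mu, s = np.mean(sample), np.std(sample, ddof=1)
    return min(USL - mu, mu - LSL) / (3 * s)

# Standard percentile bootstrap confidence interval for Cpk.
B = 5000
boot = np.array([cpk(rng.choice(x, size=len(x), replace=True)) for _ in range(B)])
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"Cpk estimate = {cpk(x):.3f}, 95% percentile bootstrap CI = ({lo:.3f}, {hi:.3f})")
```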

7.
We develop a hierarchical Gaussian process model for forecasting and inference of functional time series data. Unlike existing methods, our approach is especially suited for sparsely or irregularly sampled curves and for curves sampled with nonnegligible measurement error. The latent process is dynamically modeled as a functional autoregression (FAR) with Gaussian process innovations. We propose a fully nonparametric dynamic functional factor model for the dynamic innovation process, with broader applicability and improved computational efficiency over standard Gaussian process models. We prove finite-sample forecasting and interpolation optimality properties of the proposed model, which remain valid with the Gaussian assumption relaxed. An efficient Gibbs sampling algorithm is developed for estimation, inference, and forecasting, with extensions for FAR(p) models with model averaging over the lag p. Extensive simulations demonstrate substantial improvements in forecasting performance and recovery of the autoregressive surface over competing methods, especially under sparse designs. We apply the proposed methods to forecast nominal and real yield curves using daily U.S. data. Real yields are observed more sparsely than nominal yields, yet the proposed methods are highly competitive in both settings. Supplementary materials, including R code and the yield curve data, are available online.
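
The following sketch only simulates the kind of data the model targets: a first-order functional autoregression with Gaussian-process innovations, observed sparsely and with noise; the hierarchical model, the dynamic factor structure, and the Gibbs sampler are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
tau = np.linspace(0, 1, 50)                        # common grid for the latent curves

# Squared-exponential GP covariance for the functional innovations eps_t.
K = np.exp(-0.5 * (tau[:, None] - tau[None, :]) ** 2 / 0.1 ** 2) + 1e-8 * np.eye(len(tau))
L = np.linalg.cholesky(K)

# Autoregressive surface beta(tau, s) for the FAR(1): Y_t(tau) = integral of beta(tau, s) Y_{t-1}(s) ds + eps_t(tau).
beta = np.exp(-((tau[:, None] - tau[None, :]) ** 2) / 0.2)
beta *= 0.7 / np.trapz(np.abs(beta), tau, axis=1).max()   # crude rescaling to keep the dynamics stable

T = 200
Y = np.zeros((T, len(tau)))
for t in range(1, T):
    Y[t] = np.trapz(beta * Y[t - 1][None, :], tau, axis=1) + 0.3 * (L @ rng.standard_normal(len(tau)))

# Each curve is then observed sparsely and with measurement error, the setting the paper targets.
obs_idx = [np.sort(rng.choice(len(tau), size=8, replace=False)) for _ in range(T)]
obs = [Y[t, idx] + 0.1 * rng.standard_normal(8) for t, idx in enumerate(obs_idx)]
print("simulated", T, "curves; last curve observed at grid points", tau[obs_idx[-1]].round(2))
```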

8.
Normal Inverse Gaussian Distributions and Stochastic Volatility Modelling
The normal inverse Gaussian distribution is defined as a variance-mean mixture of a normal distribution with the inverse Gaussian as the mixing distribution. The distribution determines a homogeneous Lévy process, and this process is representable through subordination of Brownian motion by the inverse Gaussian process. The canonical, Lévy-type decomposition of the process is determined. As preparation for developments in the latter part of the paper, the connection of the normal inverse Gaussian distribution to the classes of generalized hyperbolic and inverse Gaussian distributions is briefly reviewed. Then a discussion is begun of the potential of the normal inverse Gaussian distribution and Lévy process for modelling and analysing statistical data, with particular reference to extensive sets of observations from turbulence and from finance. These areas of application imply a need to extend the inverse Gaussian Lévy process so as to accommodate certain frequently observed temporal dependence structures. Some extensions, of the stochastic volatility type, are constructed via an observation-driven approach to state space modelling. At the end of the paper, generalizations to multivariate settings are indicated.
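
The variance-mean mixture construction can be illustrated directly, as in the hedged sketch below: an inverse Gaussian mixing variable is drawn with numpy's Wald sampler and the resulting sample moments are checked against the standard NIG mean and variance formulas; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, beta, mu, delta = 2.0, 0.6, 0.0, 1.0   # hypothetical NIG parameters (alpha > |beta|)
gamma = np.sqrt(alpha**2 - beta**2)

# Variance-mean mixture: Z ~ InverseGaussian(mean = delta/gamma, shape = delta^2),
# then X | Z ~ Normal(mu + beta*Z, Z).
n = 200_000
Z = rng.wald(delta / gamma, delta**2, size=n)
X = mu + beta * Z + np.sqrt(Z) * rng.standard_normal(n)

print("sample mean / theoretical mean:", X.mean(), mu + delta * beta / gamma)
print("sample var  / theoretical var :", X.var(), delta * alpha**2 / gamma**3)
# The marginal law of X is NIG(alpha, beta, mu, delta); subordinating Brownian motion
# by an inverse Gaussian process gives the corresponding Lévy process.
```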

9.
In this article, the effects of mixtures of two normal distributions on the fraction non-conforming are studied in the context of capability analysis. When the output from several processes is mixed, the quality characteristic of the resulting mix may follow a normal mixture distribution. This can happen, for example, when monitoring output from several suppliers, several machines, or several workers. This study considers both the independent case and autocorrelated processes for a mixture of two normal distributions, using an autoregressive model of order one, AR(1). It is shown that the true attained process fraction non-conforming (corresponding to specific values of some capability index) can be very different from what is expected when the data are independent normal random variables.
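
The effect can be illustrated for the independent case with a short sketch (hypothetical specification limits and supplier parameters): the fraction non-conforming of the two-component mixture is compared with that of a single normal having the same overall mean and variance.

```python
import numpy as np
from scipy.stats import norm

LSL, USL = 9.4, 10.6                                # hypothetical specification limits
w, mu1, mu2, s1, s2 = 0.5, 9.9, 10.1, 0.15, 0.15    # output mixed from two suppliers/machines

# Overall mean and variance of the two-component mixture.
mu = w * mu1 + (1 - w) * mu2
var = w * (s1**2 + mu1**2) + (1 - w) * (s2**2 + mu2**2) - mu**2

def frac_nc_mixture():
    return (w * (norm.cdf(LSL, mu1, s1) + norm.sf(USL, mu1, s1))
            + (1 - w) * (norm.cdf(LSL, mu2, s2) + norm.sf(USL, mu2, s2)))

def frac_nc_single_normal():
    s = np.sqrt(var)
    return norm.cdf(LSL, mu, s) + norm.sf(USL, mu, s)

print("fraction non-conforming, mixture      :", frac_nc_mixture())
print("fraction non-conforming, single normal:", frac_nc_single_normal())
# A capability index computed from mu and var alone implies the second number,
# which can differ appreciably from the fraction actually produced by the mixture.
```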

10.
Recently, considerable research has been devoted to monitoring increases in the incidence rate of adverse rare events. This paper extends some one-sided upper exponentially weighted moving average (EWMA) control charts from monitoring normal means to monitoring a Poisson rate when sample sizes vary over time. Approximate average run length bounds are derived for these EWMA-type charts and compared with an EWMA chart studied previously. Extensive simulations have been conducted to compare the performance of these EWMA-type charts, and an illustrative example is given.
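
A rough sketch of such a one-sided upper EWMA chart for Poisson rates with time-varying sample sizes is given below; the smoothing constant, control-limit multiplier, and the use of the asymptotic EWMA variance are illustrative choices, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(6)
lam0 = 0.5                                # in-control incidence rate per unit of exposure
w, L_const = 0.1, 2.8                     # EWMA smoothing constant and limit multiplier (illustrative)

n = rng.integers(80, 150, size=100)       # varying sample sizes (exposures) over time
counts = rng.poisson(lam0 * n)            # in-control for t < 60 ...
counts[60:] = rng.poisson(1.5 * lam0 * n[60:])   # ... rate increases by 50% afterwards

z, signals = lam0, []
for t in range(len(n)):
    rate_t = counts[t] / n[t]
    z = w * rate_t + (1 - w) * z                                    # one-sided upper EWMA of the observed rate
    ucl = lam0 + L_const * np.sqrt(w * lam0 / ((2 - w) * n[t]))     # variance uses the current sample size
    if z > ucl:
        signals.append(t)

print("first signal at time:", signals[0] if signals else None, "(true change at t = 60)")
```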

11.
In recent years, statistical process control (SPC) of multivariate and autocorrelated processes has received a great deal of attention. Modern manufacturing/service systems with more advanced technology and higher production rates can generate complex processes in which consecutive observations are dependent and the variables are correlated with one another. Such processes violate the assumption of independent observations that underlies traditional SPC and thus degrade the performance of its tools. The popular way to address this issue is to monitor the residuals (the difference between the actual value and the fitted value) with the traditional SPC approach. However, this residuals-based approach requires two steps: (1) finding the residuals; and (2) monitoring the process. An accurate prediction model is also necessary to obtain uncorrelated residuals. Furthermore, the residuals are not the original values of the observations and consequently may have lost some useful information about the targeted process. The main purpose of this article is to examine the feasibility of using one-class classification-based control charts to handle multivariate and autocorrelated processes. The article uses simulated data to present an analysis and comparison of one-class classification-based control charts and the traditional Hotelling's T2 chart.
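
The sketch below illustrates the comparison on simulated bivariate VAR(1) data: a one-class SVM (one possible one-class classifier) trained on in-control observations supplies the chart statistic, alongside a Hotelling's T2 statistic computed on the raw data; the thresholds and parameters are illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)

def var1(T, phi, sigma, shift=0.0):
    """Simulate a 2-D VAR(1) process (autocorrelated and cross-correlated)."""
    x = np.zeros((T, 2))
    for t in range(1, T):
        x[t] = shift + phi @ x[t - 1] + rng.multivariate_normal([0, 0], sigma)
    return x

phi = np.array([[0.5, 0.2], [0.1, 0.4]])
sigma = np.array([[1.0, 0.6], [0.6, 1.0]])

train = var1(500, phi, sigma)                 # Phase I (in-control) data
test = var1(200, phi, sigma, shift=1.0)       # Phase II data with a mean shift

# One-class classifier trained on in-control data; its decision function is the chart statistic.
oc = OneClassSVM(nu=0.01, gamma="scale").fit(train)
score = oc.decision_function(test)            # negative scores flag out-of-control points

# Hotelling T^2 on the raw (autocorrelated) observations, for comparison.
mu, S_inv = train.mean(axis=0), np.linalg.inv(np.cov(train.T))
t2 = np.einsum("ij,jk,ik->i", test - mu, S_inv, test - mu)

print("one-class chart alarms:", int((score < 0).sum()), "of", len(test))
print("T2 > 10.6 alarms      :", int((t2 > 10.6).sum()), "of", len(test))   # 10.6 ~ chi2(2) 0.995 quantile
```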

12.
For infinite sequences of independent random variables with identical continuous distributions, we establish optimal lower bounds on the deviations of the expectations of record values from population means, in units generated by the central absolute moments of various orders. The bounds are non-negative for the classic record values, and non-positive for the other kth records with k ≥ 2. We also provide analogous bounds for the record increments.

13.
The central limit theorem indicates that, as the sample size goes to infinity, the sampling distribution of the mean tends to a normal distribution; it is the basis for the most common confidence interval and sample size formulas. This study analyzes what sample size is large enough for the distribution of the estimator of a proportion to be treated as normal. We also propose the use of a correction factor in sample size formulas to maintain the nominal confidence level even when the central limit theorem does not apply.
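
A quick simulation, sketched below, shows the kind of coverage degradation that motivates such a correction: the actual coverage of the nominal 95% Wald interval for a small proportion is estimated for several sample sizes (the correction factor itself is not reproduced here).

```python
import numpy as np
from scipy.stats import norm

def wald_coverage(p, n, conf=0.95, reps=100_000, rng=np.random.default_rng(8)):
    """Monte Carlo estimate of the actual coverage of the Wald interval for a proportion."""
    z = norm.ppf(0.5 + conf / 2)
    x = rng.binomial(n, p, size=reps)
    phat = x / n
    half = z * np.sqrt(phat * (1 - phat) / n)
    return np.mean((phat - half <= p) & (p <= phat + half))

p = 0.03
for n in (30, 100, 500, 2000):
    print(f"n = {n:5d}: actual coverage of nominal 95% Wald CI = {wald_coverage(p, n):.3f}")
# For small n and extreme p the normal approximation is poor and coverage falls well
# below 95%; this is the situation in which a corrected sample size is needed.
```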

14.
A simple method producing lower and upper bounds on E max(X1,...,Xn) is presented under the assumption that the Xi's are independent normal random variables. Furthermore, the upper bounds are also determined when the Xi's are normal and positively correlated.
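
For the iid standard normal case, the sketch below compares Monte Carlo estimates of E max(X1,...,Xn) with the classical upper bound sigma*sqrt(2 ln n); this is the textbook bound, shown only for orientation, not necessarily the bound derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
for n in (5, 20, 100, 1000):
    reps = 2_000_000 // n                               # keep the simulation matrix a manageable size
    x = rng.standard_normal((reps, n))
    emax = x.max(axis=1).mean()                         # Monte Carlo estimate of E max(X1,...,Xn)
    upper = np.sqrt(2 * np.log(n))                      # classical upper bound for iid N(0,1)
    print(f"n = {n:4d}: E max ~= {emax:.3f}, sqrt(2 ln n) = {upper:.3f}")
```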

15.
The process capability index Cpm, sometimes called the loss-based index, has been proposed to the manufacturing industry for measuring process reproduction capability. This index incorporates the variation of production items with respect to the target value and the specification limits preset in the factory. To estimate the loss-based index properly and accurately, several frequentist and Bayesian approaches have been proposed for obtaining lower confidence bounds (LCBs) on the minimum process capability. The LCBs not only provide critical information regarding process performance but are also used to determine whether an improvement has been made in a capability index and, by extension, in reducing the fraction of non-conforming items. In this paper, under the assumption of normality, several existing frequentist and Bayesian approaches for constructing LCBs of Cpm are presented. Depending on the statistical methods used, we classify these existing approaches into three categories and compare them in terms of the coverage rates and the mean values of the LCBs via simulations. The relative advantages and disadvantages of these approaches are summarized, with some highlights of the relevant findings.
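
Two simple stand-ins for the frequentist and Bayesian constructions are sketched below: a nonparametric-bootstrap lower bound and a posterior-sampling lower bound under the noninformative prior p(mu, sigma^2) proportional to 1/sigma^2; neither is a specific method from the paper, and all data are simulated.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(10)
USL, LSL, T = 10.6, 9.4, 10.0             # hypothetical specification limits and target
x = rng.normal(10.03, 0.16, size=50)
n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)

def cpm(mu, var):
    return (USL - LSL) / (6 * np.sqrt(var + (mu - T) ** 2))

# Frequentist stand-in: 95% lower bound from a nonparametric bootstrap.
boot = np.array([cpm(np.mean(b), np.var(b, ddof=1))
                 for b in (rng.choice(x, n, replace=True) for _ in range(5000))])
lcb_boot = np.quantile(boot, 0.05)

# Bayesian stand-in: posterior draws of (mu, sigma^2) under the noninformative prior,
# i.e. sigma^2 | data ~ (n-1) s^2 / chi2_{n-1} and mu | sigma^2, data ~ N(xbar, sigma^2 / n).
var_post = (n - 1) * s2 / chi2.rvs(n - 1, size=5000, random_state=rng)
mu_post = rng.normal(xbar, np.sqrt(var_post / n))
lcb_bayes = np.quantile(cpm(mu_post, var_post), 0.05)

print(f"Cpm estimate = {cpm(xbar, s2):.3f}, bootstrap LCB = {lcb_boot:.3f}, Bayesian LCB = {lcb_bayes:.3f}")
```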

16.
Repeated confidence intervals (RCIs) are an important tool for the design and monitoring of group sequential trials, since they do not require the trial to be stopped according to planned statistical stopping rules. In this article, we derive RCIs when data from the stages of the trial are not independent, so the underlying process is no longer Brownian motion (BM). Under this assumption, a larger class of stochastic processes, fractional Brownian motion (FBM), is considered. Comparisons of RCI width and sample size requirements are made with those under Brownian motion for different analysis times, Type I error rates, and numbers of interim analyses. Power-family spending functions, including the Pocock and O'Brien-Fleming design types, are considered in these simulations. Interim data from the BHAT and oncology trials are used to illustrate how to derive RCIs under FBM for efficacy and futility monitoring.
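
The sketch below only illustrates why the FBM assumption matters: fractional Brownian motion is simulated at the interim analysis times via a Cholesky factorization of its covariance, and the probability of crossing a flat (Pocock-type, illustrative) boundary is compared for H = 0.5 (ordinary BM) and H = 0.7.

```python
import numpy as np

def fbm_paths(H, times, n_paths, rng):
    """Simulate fractional Brownian motion at the given times via Cholesky of its covariance."""
    t = np.asarray(times, dtype=float)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H) - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(t)))
    return rng.standard_normal((n_paths, len(t))) @ L.T

rng = np.random.default_rng(11)
times = np.array([0.25, 0.5, 0.75, 1.0])        # four interim analyses (information fractions)
bound = 2.24                                    # flat boundary on the standardized scale (illustrative)

for H in (0.5, 0.7):                            # H = 0.5 is ordinary Brownian motion
    W = fbm_paths(H, times, 200_000, rng)
    crossed = np.any(W / np.sqrt(times ** (2 * H)) > bound, axis=1).mean()
    print(f"H = {H}: probability of ever crossing the boundary = {crossed:.4f}")
# Positive dependence between stages (H > 0.5) changes the crossing probabilities, and hence
# the boundary values and repeated-confidence-interval widths required for a given alpha.
```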

17.
The typical approach in change-point theory is to perform the statistical analysis based on a sample of fixed size. Alternatively, one observes some random phenomenon sequentially and takes action as soon as a statistically significant deviation from the "normal" behaviour is observed. Based on the perhaps more realistic situation that the process can only be partially observed, we consider the counting process related to the original process observed at equidistant time points, after which action is taken or not depending on the number of observations between those time points. In order for the procedure to stop also when everything is in order, we introduce a fixed time horizon n at which we stop and declare "no change" if the observed data have not suggested any action until then. We propose some stopping rules and consider their asymptotics under the null hypothesis as well as under alternatives. The proofs rest mainly on strong invariance principles for renewal processes and extreme value asymptotics for Gaussian processes.
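
As a toy version of this setting, the sketch below observes only the counts between equidistant inspection times and applies a simple upper-sided CUSUM-type stopping rule with a fixed horizon; the rule and its constants are illustrative, not the stopping rules analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(12)
lam0, lam1, change_at, horizon = 2.0, 3.5, 120, 200   # rate before/after the change, horizon n

counts = np.empty(horizon, dtype=int)
counts[:change_at] = rng.poisson(lam0, change_at)             # counts between equidistant inspection times
counts[change_at:] = rng.poisson(lam1, horizon - change_at)

# Upper-sided CUSUM on the interval counts; stop at the first exceedance,
# or declare "no change" if the horizon is reached (an illustrative rule).
k, h, S, stop = (lam0 + lam1) / 2, 8.0, 0.0, None
for t, c in enumerate(counts):
    S = max(0.0, S + c - k)
    if S > h:
        stop = t
        break

if stop is None:
    print("no change declared by the horizon")
else:
    print(f"change declared at inspection {stop} (true change point at {change_at})")
```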

18.
Process capability index Cpk has been the most popular index used in the manufacturing industry for measuring the reproduction capability of processes, in order to enhance product development with a very low fraction of defectives. In the manufacturing industry, the lower confidence bound (LCB) estimates the minimum process capability, providing pivotal information for quality engineers to monitor the process and assess process performance for quality assurance. The main objective of this paper is to compare and contrast LCBs on Cpk obtained using two approaches: a classical method and a Bayesian method.

19.
The counting process with the Cox-type intensity function has been commonly used to analyse recurrent event data. This model essentially assumes that the underlying counting process is a time-transformed Poisson process and that the covariates have multiplicative effects on the mean and rate function of the counting process. Recently, Pepe and Cai, and Lawless and co-workers have proposed semiparametric procedures for making inferences about the mean and rate function of the counting process without the Poisson-type assumption. In this paper, we provide a rigorous justification of such robust procedures through modern empirical process theory. Furthermore, we present an approach to constructing simultaneous confidence bands for the mean function and describe a class of graphical and numerical techniques for checking the adequacy of the fitted mean–rate model. The advantages of the robust procedures are demonstrated through simulation studies. An illustration with multiple-infection data taken from a clinical study on chronic granulomatous disease is also provided.
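
The sketch below computes the standard nonparametric estimator of the mean function of a recurrent-event process (events divided by the number of subjects still under observation) on simulated data with random censoring; it illustrates the quantity being estimated, not the paper's inference procedures or confidence bands.

```python
import numpy as np

rng = np.random.default_rng(13)

# Simulated recurrent-event data: each of m subjects has Poisson(rate) events on [0, C_i].
m, rate = 100, 1.5
censor = rng.uniform(1.0, 3.0, size=m)                      # follow-up (censoring) times
events = [np.sort(rng.uniform(0, c, rng.poisson(rate * c))) for c in censor]

# Nonparametric estimate of the mean function mu(t) = E{N(t)}:
# at each event time, increment by 1 / (number of subjects still under observation).
times = np.sort(np.concatenate(events))
at_risk = np.array([(censor >= t).sum() for t in times])
mu_hat = np.cumsum(1.0 / at_risk)

for t in (0.5, 1.0, 1.5, 2.0):
    idx = np.searchsorted(times, t, side="right") - 1
    est = mu_hat[idx] if idx >= 0 else 0.0
    print(f"mu_hat({t}) = {est:.3f}   (true Poisson mean = {rate * t:.3f})")
```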
