Similar Literature
20 similar documents found
1.

Tolerance limits are limits that include a specified proportion of the population at a given confidence level. They are used to ensure that production will not fall outside specifications. Tolerance limits are either designed based on the normality assumption, or nonparametric tolerance limits are established. In either case, no provision for autocorrelated processes is made in the available design tables of tolerance limits. It is shown how to construct tolerance limits to cover a specified proportion of the population when autocorrelation is present in the process. A comparison of four different tolerance limits is provided, and recommendations are given for choosing the "best" estimator of the process variability for the construction of tolerance limits.
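For the classical independent, normal case that this abstract contrasts against, a standard two-sided tolerance factor can be sketched with Howe's approximation combined with a Wilson-Hilferty chi-square quantile; this is a hedged illustration of the textbook construction, not the authors' autocorrelation-adjusted procedure:

```python
from statistics import NormalDist

def chi2_quantile(p, df):
    # Wilson-Hilferty approximation to the chi-square quantile
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * (2 / (9 * df)) ** 0.5) ** 3

def two_sided_k(n, coverage=0.95, conf=0.95):
    # Howe's approximation for the two-sided normal tolerance factor:
    # limits are xbar +/- k * s, covering `coverage` of the population
    # with confidence `conf`, assuming i.i.d. normal data
    z = NormalDist().inv_cdf((1 + coverage) / 2)
    chi = chi2_quantile(1 - conf, n - 1)
    return z * ((n - 1) * (1 + 1 / n) / chi) ** 0.5
```

For n = 20 with 95% coverage at 95% confidence this gives k close to 2.75, matching published tables; positive autocorrelation inflates the variability of the estimators, which is exactly why the independent-case factor fails there.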

2.

Amin et al. (1999) developed an exponentially weighted moving average (EWMA) control chart based on the smallest and largest observations in each sample. The resulting plot of the extremes suggests that the MaxMin EWMA may also be viewed as smoothed tolerance limits. Tolerance limits are limits that include a specified proportion of the population at a given confidence level. In the context of process control, they are used to ensure that production will not fall outside specifications. Amin and Li (2000) provided the coverages of the MaxMin EWMA tolerance limits for independent data. In this article, it is shown how autocorrelation affects the confidence level of MaxMin tolerance limits, for a specified level of coverage of the population, and modified smoothed tolerance limits are suggested for autocorrelated processes.
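The MaxMin idea can be sketched minimally as two separate EWMA recursions on the per-sample extremes; the smoothing constant and the start-up choice below are illustrative assumptions, not the exact design of Amin et al.:

```python
def maxmin_ewma(samples, lam=0.2):
    # Smoothed extremes: one EWMA for the per-sample maximum,
    # one for the per-sample minimum (illustrative start at the
    # first sample's extremes)
    upper, lower = [], []
    u = l = None
    for s in samples:
        hi, lo = max(s), min(s)
        u = hi if u is None else lam * hi + (1 - lam) * u
        l = lo if l is None else lam * lo + (1 - lam) * l
        upper.append(u)
        lower.append(l)
    return upper, lower
```

The smoothed upper and lower traces are what the abstract interprets as tolerance limits; autocorrelation within the process changes how often the traces stray, which alters the effective confidence level.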

3.
A procedure for constructing one-sided tolerance limits for a normal distribution based on a censored sample is given. The factors necessary for the calculation of such limits are also given for several different sample sizes.

4.
In this paper, nonparametric simultaneous tolerance limits are developed using rectangle probabilities for uniform order statistics. Consideration is given to the handling of censored data, and some comparisons are made with parametric normal theory. The nonparametric regional estimation techniques of (i) confidence bands for a distribution function, (ii) simultaneous confidence intervals for quantiles, and (iii) simultaneous tolerance limits are unified. A Bayesian approach is also discussed.
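The simplest order-statistic fact underlying such constructions can be sketched in closed form: for n i.i.d. continuous observations, the coverage of the interval (X(1), X(n)) is Beta(n-1, 2), so the confidence that it covers at least a proportion p is 1 - n p^(n-1)(1-p) - p^n. A hedged single-interval illustration, not the paper's simultaneous construction:

```python
def conf_two_sided(n, p):
    # Distribution-free confidence that (min, max) of n i.i.d.
    # continuous observations covers at least proportion p of the
    # population: P(Beta(n-1, 2) >= p) in closed form
    return 1 - n * p ** (n - 1) * (1 - p) - p ** n
```

This recovers the classical result that n = 93 observations are needed for two-sided 95%/95% distribution-free tolerance limits based on the sample extremes.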

5.
6.
This article generalizes the Markov chain Monte Carlo (MCMC) algorithm, based on the Gibbs weighted Chinese restaurant (gWCR) process algorithm, for a class of kernel mixtures of time series models over the Dirichlet process. This class of models extends Lo's (Ann. Stat. 12:351–357, 1984) kernel mixture model for independent observations. The kernel represents a known distribution of the time series conditional on past time series and on both present and past latent variables. The latent variables are independent samples from a Dirichlet process, which is a random discrete (almost surely) distribution. This class of models includes an infinite mixture of autoregressive processes and an infinite mixture of generalized autoregressive conditional heteroskedasticity (GARCH) processes.

7.
The inverse Gaussian distribution has been used in a wide range of applications modeling duration and failure phenomena. In these applications, one-sided lower tolerance limits are employed, for instance, for designing safety limits of medical devices. Tang and Chang (1994) proposed lower-sided tolerance limits via the Bonferroni inequality when the parameters of the inverse Gaussian distribution are unknown. However, their simulation results showed conservative coverage probabilities and, consequently, larger interval widths. In their paper, they also proposed an alternative to construct less conservative limits, but simulation results yielded unsatisfactory coverage probabilities in many cases. In this article, an exact lower-sided tolerance limit is proposed. The proposed limit has a form similar to that of the confidence interval for the mean under the inverse Gaussian distribution. The proposed limit is compared with Tang and Chang's method via extensive Monte Carlo simulations. Simulation results suggest that the proposed limit is superior to Tang and Chang's method in terms of narrower interval width and coverage probability closer to the nominal level. A similar argument can be applied to the formulation of two-sided tolerance limits. A summary and conclusion on the proposed limits are included.

8.
This paper provides a practical simulation-based Bayesian analysis of parameter-driven models for time series Poisson data with the AR(1) latent process. The posterior distribution is simulated by a Gibbs sampling algorithm. Full conditional posterior distributions of unknown variables in the model are given in convenient forms for the Gibbs sampling algorithm. The case with missing observations is also discussed. The methods are applied to real polio data from 1970 to 1983.

9.
In modern quality control, with rapidly evolving data-acquisition technology, it is becoming common to monitor several quality characteristics of a process simultaneously. When the multivariate process distribution is unknown and only a set of in-control data is available, the bootstrap technique can be used to adjust the constant limit of the multivariate cumulative sum (MCUSUM) control chart. To further improve the performance of the control chart, we extend the constant control limit to a sequence of dynamic control limits determined by the conditional distribution of the charting statistics given the sprint length. Simulation results show that the novel control chart with dynamic control limits offers better ARL performance than the traditional MCUSUM control chart. However, the proposed control chart is considerably computer-intensive. This leads to the development of a more flexible control chart that uses a continuous function of the sprint length as the control-limit sequence. More importantly, this control chart is easy to implement and reduces the computational time significantly. A white wine data set illustrates that the novel control chart performs quite well in applications.
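The constant-limit bootstrap step can be sketched on a univariate CUSUM stand-in (the multivariate statistic and the dynamic, sprint-length-conditional limits of the article are more involved); the reference value k, run length, and resample count below are illustrative assumptions:

```python
import random

def cusum_stat(x, k=0.5):
    # Maximum of a one-sided CUSUM over a run (univariate stand-in
    # for the MCUSUM charting statistic)
    s = m = 0.0
    for v in x:
        s = max(0.0, s + v - k)
        m = max(m, s)
    return m

def bootstrap_limit(in_control, run_len=50, alpha=0.05, B=2000, seed=1):
    # Resample the in-control data to approximate the (1 - alpha)
    # quantile of the run-maximum CUSUM statistic; that quantile
    # serves as the constant control limit
    rng = random.Random(seed)
    stats = sorted(
        cusum_stat(rng.choices(in_control, k=run_len)) for _ in range(B)
    )
    return stats[int((1 - alpha) * B)]
```

Tightening alpha raises the limit, trading false alarms against detection delay; the article's dynamic limits replace this single quantile with one quantile per sprint length.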

10.
The cumulative distribution function of the variable Y = (U + c)/(Z/(2ν)) is given. Here U and Z are independent random variables, U has the exponential distribution (1.1) with θ = 0, σ = 1, Z has the χ2(2ν) distribution, and c is a real quantity. The variable Y, with U and Z given by (2.2) and (2.3), is used for inference about the parametric functions φ = θ + kσ of a two-parameter exponential distribution (1.1) with k or φ known. Special cases of φ or k are: the parameter θ, the Pth quantile XP, the mean θ + σ, and the value of the cumulative distribution function or of the reliability function at a given point a. One-sided tolerance limits for a two-parameter exponential distribution can also be derived from the distribution of the variable Y. The results are also applied to the Pareto distribution.

11.
Summary.  The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.

12.
Summary.  The forward–backward algorithm is an exact filtering algorithm which can efficiently calculate likelihoods, and which can be used to simulate from posterior distributions. Using a simple result which relates gamma random variables with different rates, we show how the forward–backward algorithm can be used to calculate the distribution of a sum of gamma random variables, and to simulate from their joint distribution given their sum. One application is to calculating the density of the time of a specific event in a Markov process, as this time is the sum of exponentially distributed interevent times. This enables us to apply the forward–backward algorithm to a range of new problems. We demonstrate our method on three problems: calculating likelihoods and simulating allele frequencies under a non-neutral population genetic model, analysing a stochastic epidemic model and simulating speciation times in phylogenetics.

13.
This paper proposes a variables quick switching system in which the quality characteristic of interest follows a normal distribution and is evaluated through a process loss function. Most variables sampling plans available in the literature focus only on the fraction non-conforming and do not distinguish among products that fall within the specification limits. Products that fall within specification limits may still be poor if their mean is too far from the target value, so developing a sampling plan that considers process loss is essential in these situations. Based on this idea, we develop a variables quick switching system based on the process loss function for processes requiring low process loss. Tables are also constructed for selecting the parameters of the variables quick switching system for a given acceptable quality level and limiting quality level. The results are explained with examples.

14.
In life testing, predicting failure times beyond the largest observed failure time is an important issue. Although the Rayleigh distribution is a suitable model for analyzing the lifetime of components that age rapidly over time, because its failure rate function is an increasing linear function of time, inference for a two-parameter Rayleigh distribution based on upper record values has not been addressed from the Bayesian perspective. This paper provides Bayesian analysis methods by proposing a noninformative prior distribution to analyze survival data, using a two-parameter Rayleigh distribution based on record values. In addition, we provide a pivotal quantity and an algorithm based on the pivotal quantity to predict the behavior of future survival records. We show through Monte Carlo simulations that the proposed method is superior to the frequentist counterpart in terms of mean-squared error and bias. For illustrative purposes, survival data on lung cancer patients are analyzed, and it is shown that the proposed model can be a good alternative when prior information is not given.

15.
Control charts have been popular as a user-friendly yet technically sophisticated tool to monitor whether a process is in statistical control. These charts are basically constructed under the normality assumption, but in many practical situations this assumption may be violated. One such non-normal situation is monitoring the process variability of a skewed parent distribution, for which we propose the use of a Maxwell control chart. We introduce a pivotal quantity for the scale parameter of the Maxwell distribution which follows a gamma distribution. Probability limits and L-sigma limits are studied, along with performance measures based on the average run length and the power curve. To spare practitioners complex calculations, factors for constructing a control chart for monitoring the Maxwell parameter are given for different sample sizes and false alarm rates. We also provide simulated data to illustrate the Maxwell control chart. Finally, a real-life example shows the importance of such a control chart.
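The pivotal quantity can be made concrete: if X follows a Maxwell distribution with scale sigma, then X^2/sigma^2 is chi-square with 3 degrees of freedom, so the pooled sum of squares over a sample of size n is chi-square with 3n degrees of freedom (a gamma distribution, as the abstract notes). A hedged sketch of probability limits for the plotting statistic V = sum(x^2)/(3n), using the Wilson-Hilferty chi-square approximation; the in-control sigma0 and the false alarm rate are illustrative:

```python
from statistics import NormalDist

def chi2_quantile(p, df):
    # Wilson-Hilferty approximation to the chi-square quantile
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * (2 / (9 * df)) ** 0.5) ** 3

def maxwell_limits(sigma0, n, alpha=0.0027):
    # For a Maxwell(sigma) sample of size n, sum(x_i**2) / sigma**2
    # is chi-square(3n), so probability limits for the plotting
    # statistic V = sum(x**2) / (3n) follow from chi-square quantiles
    df = 3 * n
    lcl = sigma0 ** 2 * chi2_quantile(alpha / 2, df) / df
    ucl = sigma0 ** 2 * chi2_quantile(1 - alpha / 2, df) / df
    return lcl, ucl
```

V has expectation sigma0^2 in control, so the limits bracket sigma0^2 and widen as alpha shrinks.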

16.
This paper discusses five methods for constructing approximate confidence intervals for the binomial parameter Θ, based on Y successes in n Bernoulli trials. In a recent paper, Chen (1990) discusses various approximate methods and suggests a new method based on a Bayes argument, which we call method I here. Methods II and III are based on the normal approximation without and with continuity correction. Method IV uses the Poisson approximation of the binomial distribution and then exploits the fact that exact confidence limits for the parameter of the Poisson distribution can be found through the χ2 distribution; the confidence limits of method IV are then provided by the Wilson-Hilferty approximation of the χ2. Similarly, the exact confidence limits for the binomial parameter can be expressed through the F distribution, and method V approximates these limits through a suitable version of the Wilson-Hilferty approximation. We undertake a comparison of the five methods with respect to coverage probability and expected length. The results indicate that method V has an advantage over Chen's Bayes method as well as over the other three methods.
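Methods II and III, the normal approximation without and with continuity correction, can be sketched directly; this is a generic illustration of those two textbook intervals, not Chen's Bayes method or the Wilson-Hilferty-based methods IV and V:

```python
from statistics import NormalDist

def binom_ci_normal(y, n, conf=0.95, continuity=False):
    # Normal-approximation (Wald) interval for the binomial
    # proportion, optionally widened by the 1/(2n) continuity
    # correction (methods II and III in the text)
    z = NormalDist().inv_cdf((1 + conf) / 2)
    p = y / n
    half = z * (p * (1 - p) / n) ** 0.5
    cc = 0.5 / n if continuity else 0.0
    return max(0.0, p - half - cc), min(1.0, p + half + cc)
```

The continuity-corrected interval is always the wider of the two, which is one source of the coverage/length trade-off the comparison study measures.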

17.
For a postulated common odds ratio for several 2 × 2 contingency tables one may, by conditioning on the marginals of the separate tables, determine the exact expectation and variance of the entry in a particular cell of each table, and hence for the total of such cells across all tables. This makes it feasible to determine limiting values via single-degree-of-freedom, continuity-corrected chi-square tests on the common odds ratio: one determines lower and upper limits corresponding to just barely significant chi-square values. The Mantel-Haenszel approach can be viewed as a special application of this, directed specifically at the case of unity for the odds ratio, for which the expectation and variance formulas are particularly simple. Computation of exact expectations and variances may be feasible only for 2 × 2 tables of limited size, but asymptotic formulas can be applied in other instances. Illustration is given for a particular set of four 2 × 2 tables to which both exact limits and limits by the proposed method could be applied, the two methods giving reasonably good agreement. Both procedures are directed at the distribution of the total over the designated cells, with the proposed method treating that distribution as asymptotically normal. Especially good agreement of the proposed limits with exact limits could be anticipated in more asymptotic situations (overall, not for individual tables), but in practice this may not be demonstrable, as the computation of exact limits is then unfeasible.
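The building blocks described here for the unit odds ratio, the conditional (hypergeometric) mean and variance of one cell summed across tables, give the familiar continuity-corrected Mantel-Haenszel chi-square; a hedged sketch of that special case only, with the limit search over postulated odds ratios omitted:

```python
def mh_chi2(tables, continuity=True):
    # Continuity-corrected Mantel-Haenszel chi-square for a common
    # odds ratio of 1, using the hypergeometric mean and variance of
    # cell a in each (a, b, c, d) table, conditional on the marginals
    obs = exp = var = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        obs += a
        exp += (a + b) * (a + c) / n
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
    diff = abs(obs - exp) - (0.5 if continuity else 0.0)
    return max(diff, 0.0) ** 2 / var
```

The confidence-limit procedure in the abstract repeats this kind of test at postulated odds ratios other than 1, taking as limits the values at which the chi-square is just barely significant.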

18.
19.
20.
In this article, we investigate techniques for constructing tolerance limits such that, with probability γ, at least a proportion p of the population exceeds that limit. We consider the unbalanced case and study the behavior of the limit as a function of the ni (where ni is the number of observations in the ith batch), as well as of the variance ratio. To construct the tolerance limits we use the approximation given in Thomas and Hultquist (1978). We also discuss the procedure for constructing the tolerance limits when the variance ratio is unknown. An example is given to illustrate the results.
