Similar Articles
20 similar articles found (search time: 131 ms)
1.
2.
This article explores the calculation of tolerance limits for the Poisson regression model based on the profile likelihood methodology and on small-sample asymptotic corrections that improve coverage probability performance. The data consist of n counts, where the mean or expected rate depends upon covariates via the log regression function. Upper tolerance limits are evaluated as a function of the covariates and are obtained from upper confidence limits of the mean. To compute the upper confidence limits, three methodologies are considered: likelihood-based asymptotic methods, small-sample asymptotic refinements of the likelihood methodology, and the delta method. Two applications are discussed: one relating to defects in semiconductor wafers due to plasma etching and the other examining the number of surface faults in upper seams of coal mines. All three methodologies are illustrated for both applications.
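The delta-method route described above can be sketched in a few lines: fit the Poisson log-linear model, form an upper confidence limit for the mean on the log scale, and take the Poisson quantile at that mean. A minimal sketch on simulated data (the data, the IRLS fit, and the p = 0.90 content level are illustrative assumptions, not the article's):

```python
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 2, n)
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(X @ np.array([0.5, 0.8])))  # counts with log-linear mean

# Fit the Poisson GLM (log link) by iteratively reweighted least squares
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu               # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

# Asymptotic covariance of beta-hat from the final Fisher information
cov = np.linalg.inv(X.T @ (np.exp(X @ beta)[:, None] * X))

def upper_tolerance_limit(x0, p=0.90, conf=0.95):
    """Delta-method upper confidence limit for mu(x0) on the log scale,
    then the Poisson p-quantile at that mean."""
    v = np.array([1.0, x0])
    eta, se = v @ beta, np.sqrt(v @ cov @ v)
    mu_upper = np.exp(eta + norm.ppf(conf) * se)  # upper CL for the mean
    return poisson.ppf(p, mu_upper)               # upper tolerance limit
```

Because the upper confidence limit exceeds the point estimate of the mean, the resulting limit is never below the plug-in Poisson quantile.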

3.
The inverse Gaussian distribution has been used in a wide range of applications modeling duration and failure phenomena. In these applications, one-sided lower tolerance limits are employed, for instance, in designing safety limits of medical devices. Tang and Chang (1994) proposed lower-sided tolerance limits via the Bonferroni inequality when the parameters of the inverse Gaussian distribution are unknown. However, their simulation results showed conservative coverage probabilities and, consequently, larger interval widths. They also proposed an alternative construction yielding less conservative limits, but its simulated coverage probabilities were unsatisfactory in many cases. In this article, an exact lower-sided tolerance limit is proposed. The proposed limit has a form similar to that of the confidence interval for the mean of an inverse Gaussian distribution. It is compared with Tang and Chang's method via extensive Monte Carlo simulations. Simulation results suggest that the proposed limit is superior in terms of narrower interval width and coverage probability closer to the nominal level. A similar argument applies to the formulation of two-sided tolerance limits. A summary and conclusions are included.

4.

Tolerance limits are limits that include a specified proportion of the population at a given confidence level. They are used to make sure that the production will not be outside specifications. Tolerance limits are either designed based on the normality assumption, or nonparametric tolerance limits are established. In either case, no provision for autocorrelated processes is made in the available design tables of tolerance limits. It is shown how to construct tolerance limits to cover a specified proportion of the population when autocorrelation is present in the process. A comparison of four different tolerance limits is provided, and recommendations are given for choosing the "best" estimator of the process variability for the construction of tolerance limits.
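The practical danger this item addresses is easy to demonstrate: under positive autocorrelation, short-term variability estimators such as the average moving range understate the marginal process spread that tolerance limits must cover, while the overall sample standard deviation does not. A small simulation sketch, assuming an AR(1) process with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, sigma_e, n = 0.7, 1.0, 5000

# Stationary AR(1): marginal sd is sigma_e / sqrt(1 - phi^2)
x = np.empty(n)
x[0] = rng.normal(0, sigma_e / np.sqrt(1 - phi**2))
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0, sigma_e)

marginal_sd = sigma_e / np.sqrt(1 - phi**2)     # what tolerance limits must cover
s = x.std(ddof=1)                               # overall sd: tracks the marginal sd
mr = np.abs(np.diff(x)).mean() / 1.128          # moving-range sd: biased low for phi > 0
```

With phi = 0.7 the moving-range estimator runs roughly 45% below the marginal standard deviation, so normal-theory limits built from it would be far too narrow.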

5.
A self-contained FORTRAN subroutine is provided which computes factors for Wald-Wolfowitz type tolerance limits allowing arbitrary combinations of sample size n and degrees of freedom ν. The exact calculations from our program reveal inadequacies of two existing approximations, especially when ν ≪ n. Numerous applications where ν ≠ n − 1 are cited; two of these are discussed and illustrated.
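For reference, the Wald-Wolfowitz approximation itself (one of the approximations whose accuracy an exact program of this kind is used to check) is short to compute; a sketch assuming scipy, with p the content and gamma the confidence:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, norm

def wald_wolfowitz_k(n, nu, p=0.95, gamma=0.95):
    """Approximate two-sided normal tolerance factor (Wald-Wolfowitz),
    allowing degrees of freedom nu different from n - 1."""
    # r solves Phi(1/sqrt(n) + r) - Phi(1/sqrt(n) - r) = p
    f = lambda r: norm.cdf(1 / np.sqrt(n) + r) - norm.cdf(1 / np.sqrt(n) - r) - p
    r = brentq(f, 1e-6, 10.0)
    # scale by the chi-square confidence adjustment for the variance
    return r * np.sqrt(nu / chi2.ppf(1 - gamma, nu))
```

For n = 25, nu = 24, p = gamma = 0.95 this reproduces the familiar tabulated factor of about 2.63.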

6.
In this article, we investigate techniques for constructing tolerance limits such that, with probability γ, at least a proportion p of the population exceeds the limit. We consider the unbalanced case and study the behavior of the limit as a function of the ni (where ni is the number of observations in the ith batch), as well as of the variance ratio. To construct the tolerance limits we use the approximation given in Thomas and Hultquist (1978). We also discuss the procedure for constructing tolerance limits when the variance ratio is unknown. An example is given to illustrate the results.

7.
Tolerance limits are those limits that contain a certain proportion of the distribution of a characteristic with a given probability. 'They are used to make sure that the production will not be outside of specifications' (Amin & Lee, 1999). Usually, tolerance limits are constructed at the beginning of process monitoring. Because they are calculated only once, such tolerance limits cannot reflect changes in the tolerance level over the lifetime of the process. This research proposes an algorithm to construct tolerance limits continuously over time for any given distribution. The algorithm makes use of the exponentially weighted moving average (EWMA) technique. It can be observed that the sample size required by this method decreases over time.
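A minimal sketch of the idea of EWMA-smoothed, continuously updated limits (the subgroup scheme, the sampling distribution, and the smoothing constant λ = 0.2 are illustrative assumptions, not the article's algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 0.2                 # assumed EWMA smoothing constant
low = high = None
limits = []
for t in range(200):
    sample = rng.normal(10, 2, size=15)       # one subgroup per period
    lo_t, hi_t = sample.min(), sample.max()   # distribution-free extremes
    if low is None:
        low, high = lo_t, hi_t                # initialize with the first subgroup
    else:
        low = lam * lo_t + (1 - lam) * low    # smoothed lower limit
        high = lam * hi_t + (1 - lam) * high  # smoothed upper limit
    limits.append((low, high))
```

The recursion pools information across subgroups, which is why the effective sample size needed at each update shrinks as monitoring continues.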

8.
Although the bivariate normal distribution is frequently employed in the development of screening models, the formulae for computing bivariate normal probabilities are quite complicated. A simple, accurate, error-bounded, noniterative approximation for bivariate normal probabilities, based on a univariate normal quadratic or cubic approximation, is developed for use in screening applications. The approximation, which is most accurate for large absolute correlation coefficients, is especially suitable for screening applications (e.g., in quality control), where large absolute correlations between performance and screening variables are desired. A special approximation for conditional bivariate normal probabilities is also provided, which in quality control screening applications improves the accuracy of estimating the average outgoing product quality. Some anomalies in computing conditional bivariate normal probabilities using BNRDF and NORDF in IMSL are also discussed.
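For checking such approximations today, the rectangle and conditional probabilities can be computed numerically without IMSL; a sketch using scipy's bivariate normal CDF:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def bvn_cdf(h, k, rho):
    """P(X <= h, Y <= k) for a standard bivariate normal with correlation rho."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return float(multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([h, k]))

def conditional_prob(h, k, rho):
    """P(Y <= k | X <= h), the kind of quantity screening rules act on."""
    return bvn_cdf(h, k, rho) / norm.cdf(h)
```

A quick sanity check uses the identity P(X <= 0, Y <= 0) = 1/4 + arcsin(rho)/(2*pi); high positive correlation sharply raises the conditional acceptance probability, matching the screening intuition in the abstract.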

9.
This article proposes a new procedure for obtaining one-sided tolerance limits in unbalanced random-effects models. The procedure is a generalization of that proposed by Mee and Owen for the balanced situation and can be easily implemented, because it needs only a noncentral-t table. Two simulation studies are carried out to assess the performance of the new procedure and to compare it with procedures from the previous statistical literature. The findings show that the new procedure is much simpler to compute and performs better than the previous ones, yielding smaller gamma bias in a wide range of situations representative of many actual industrial applications, and behaving reasonably well in more extreme sampling situations. The use of the new limits is illustrated by an application to an actual example from the steel industry.

10.

Amin et al. (1999) developed an exponentially weighted moving average (EWMA) control chart, based on the smallest and largest observations in each sample. The resulting plot of the extremes suggests that the MaxMin EWMA may also be viewed as smoothed tolerance limits. Tolerance limits are limits that include a specific proportion of the population at a given confidence level. In the context of process control, they are used to make sure that production will not be outside specifications. Amin and Li (2000) provided the coverages of the MaxMin EWMA tolerance limits for independent data. In this article, it is shown how autocorrelation affects the confidence level of MaxMin tolerance limits, for a specified level of coverage of the population, and modified smoothed tolerance limits are suggested for autocorrelated processes.
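The effect described can be reproduced in a few lines: smooth the per-sample extremes with an EWMA and measure the population coverage of the resulting limits when the within-sample data follow an AR(1) process. A sketch with illustrative parameters (the smoothing constant, subgroup size, and run length are assumptions):

```python
import numpy as np
from scipy.stats import norm

def maxmin_ewma_coverage(phi, lam=0.2, m=15, periods=400, seed=3):
    """Average N(0,1)-population coverage of MaxMin EWMA limits when
    each subgroup is an AR(1) series with coefficient phi (marginal N(0,1))."""
    rng = np.random.default_rng(seed)
    low = high = None
    cov = []
    for _ in range(periods):
        x = np.empty(m)
        x[0] = rng.normal()                      # stationary start
        for t in range(1, m):
            x[t] = phi * x[t - 1] + rng.normal(0, np.sqrt(1 - phi**2))
        lo_t, hi_t = x.min(), x.max()
        low = lo_t if low is None else lam * lo_t + (1 - lam) * low
        high = hi_t if high is None else lam * hi_t + (1 - lam) * high
        cov.append(norm.cdf(high) - norm.cdf(low))
    return float(np.mean(cov[50:]))              # discard burn-in periods
```

Positive autocorrelation shrinks the within-sample range, so the smoothed limits narrow and the achieved coverage falls below the independent-data case, which is the degradation the article quantifies.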

11.
In this paper nonparametric simultaneous tolerance limits are developed using rectangle probabilities for uniform order statistics. Consideration is given to the handling of censored data, and some comparisons are made with the parametric normal theory. The nonparametric regional estimation techniques of (i) confidence bands for a distribution function, (ii) simultaneous confidence intervals for quantiles and (iii) simultaneous tolerance limits are unified. A Bayesian approach is also discussed.
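The distribution-free machinery behind such limits is compact: the coverage of an interval between uniform order statistics follows a beta distribution, which gives both the confidence attached to given limits and the minimum sample size for one-sided limits. A sketch assuming scipy (the simultaneous limits of the paper are more elaborate):

```python
import numpy as np
from scipy.stats import beta

def coverage_confidence(n, p, r=1, s=1):
    """Confidence that (X_(r), X_(n-s+1)) contains at least a proportion p
    of the population; the coverage is distributed Beta(n - r - s + 1, r + s)."""
    return float(beta.sf(p, n - r - s + 1, r + s))

def min_n_one_sided(p, gamma):
    """Smallest n such that X_(1) is a lower (p, gamma) tolerance limit,
    from 1 - p**n >= gamma."""
    return int(np.ceil(np.log(1 - gamma) / np.log(p)))
```

These reproduce the classical answers, e.g. n = 59 samples for a 95/95 one-sided limit at the sample minimum, and n = 46 for the sample range to cover 90% of the population with 95% confidence.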

12.
This article deals with the construction of an X̄ control chart using the Bayesian perspective. We obtain new control limits for the X̄ chart for exponentially distributed data-generating processes through the sequential use of Bayes' theorem and credible intervals. Construction of the control chart is illustrated using a simulated data example. The performance of the proposed, standard, tolerance interval, exponential cumulative sum (CUSUM) and exponential exponentially weighted moving average (EWMA) control limits is examined and compared via a Monte Carlo simulation study. The proposed Bayesian control limits are found to perform better than standard, tolerance interval, exponential EWMA and exponential CUSUM control limits for exponentially distributed processes.

13.
A procedure for constructing one-sided tolerance limits for a normal distribution which are based on a censored sample is given. The factors necessary for the calculation of such limits are also given for several different sample sizes.
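For the complete-sample (uncensored) baseline, the one-sided normal tolerance factor comes from a noncentral t quantile; censored samples require specially tabulated factors like those the article provides. A sketch assuming scipy:

```python
import numpy as np
from scipy.stats import nct, norm

def k_one_sided(n, p=0.95, conf=0.95, nu=None):
    """Factor k so that xbar - k*s is a lower (p, conf) tolerance limit
    for a complete normal sample (nu defaults to n - 1)."""
    nu = n - 1 if nu is None else nu
    delta = norm.ppf(p) * np.sqrt(n)            # noncentrality parameter
    return nct.ppf(conf, df=nu, nc=delta) / np.sqrt(n)
```

For n = 10 at the 95/95 level this gives the familiar tabulated factor 2.911, and the factor shrinks toward the normal quantile as n grows.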

14.
Modern statistical applications involving large data sets have focused attention on statistical methodologies which are both efficient computationally and able to deal with the screening of large numbers of different candidate models. Here we consider computationally efficient variational Bayes approaches to inference in high-dimensional heteroscedastic linear regression, where both the mean and variance are described in terms of linear functions of the predictors and where the number of predictors can be larger than the sample size. We derive a closed form variational lower bound on the log marginal likelihood useful for model selection, and propose a novel fast greedy search algorithm on the model space which makes use of one-step optimization updates to the variational lower bound in the current model for screening large numbers of candidate predictor variables for inclusion/exclusion in a computationally thrifty way. We show that the model search strategy we suggest is related to widely used orthogonal matching pursuit algorithms for model search but yields a framework for potentially extending these algorithms to more complex models. The methodology is applied in simulations and in two real examples involving prediction for food constituents using NIR technology and prediction of disease progression in diabetes.

15.
In this paper, a confidence interval for the 100pth percentile of the Birnbaum-Saunders distribution is constructed. Conservative two-sided tolerance limits are then obtained from the confidence limits. These results are useful for reliability evaluation when using the Birnbaum-Saunders model. A simple scheme for generating Birnbaum-Saunders random variates is derived. This is used in a simulation study investigating the effectiveness of the proposed confidence interval in terms of its coverage probability.
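A standard scheme of this kind maps a standard normal variate through the Birnbaum-Saunders transformation; whether this matches the article's derivation is an assumption. Sketch:

```python
import numpy as np

def rbirnsaun(size, alpha, beta_, rng=None):
    """Birnbaum-Saunders variates via the normal transformation:
    T = (beta/4) * (alpha*Z + sqrt(alpha^2 * Z^2 + 4))^2,  Z ~ N(0,1)."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(size)
    return beta_ / 4.0 * (alpha * z + np.sqrt(alpha**2 * z**2 + 4.0))**2
```

Useful checks: the median of the distribution is exactly beta, and the mean is beta * (1 + alpha^2 / 2), so a large simulated sample should land near both.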

16.
Taguchi (1984, 1987) derived tolerances for subsystems, subcomponents, parts and materials. However, he assumed that the relationship between a higher-rank and a lower-rank quality characteristic is deterministic. The basic structure of this tolerance design problem is very similar to that of the screening problem. Tang (1987) proposed three cost models and derived an economic design for the screening problem of "the-bigger-the-better" quality characteristics, in which the optimal specification limit (or tolerance) for a screening variable (or a lower-rank quality characteristic) was obtained by minimizing the expected total cost function. Tang considered that the quality cost is incurred only when the quality characteristic is out of specification, while Taguchi considered that the quality cost is incurred whenever the quality characteristic deviates from its nominal value. In this paper, a probabilistic relationship, namely a bivariate normal distribution between the two quality characteristics as in a screening problem, together with Taguchi's quadratic loss function, is used to develop a closed-form solution of the tolerance design for a subsystem.

17.
The Birnbaum-Saunders regression model is becoming increasingly popular in lifetime analyses and reliability studies. In this model, the signed likelihood ratio statistic provides the basis for hypothesis testing and the construction of confidence limits for a single parameter of interest. We focus on the small-sample case, where the standard normal distribution gives a poor approximation to the true distribution of the statistic. We derive three adjusted signed likelihood ratio statistics that lead to very accurate inference even for very small samples. Two empirical applications are presented.

18.
This article discusses sampling plans, that is, the allocation of sampling units, for computing tolerance limits in a balanced one-way random-effects model. The expected width of the tolerance interval is derived and used as the basis for comparing different sampling plans. A well-known cost function and examples are used to facilitate the discussion.

19.
In assessing biosimilarity between two products, the question to ask is always “How similar is similar?” Traditionally, the equivalence of the means between products is the primary consideration in a clinical trial. This study suggests an alternative assessment for testing a certain percentage of the population of differences lying within a prespecified interval. In doing so, the accuracy and precision are assessed simultaneously by judging whether a two-sided tolerance interval falls within a prespecified acceptance range. We further derive an asymptotic distribution of the tolerance limits to determine the sample size for achieving a targeted level of power. Our numerical study shows that the proposed two-sided tolerance interval test controls the type I error rate and provides sufficient power. A real example is presented to illustrate our proposed approach.
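The decision rule can be sketched directly: compute a two-sided tolerance interval for the differences (here via Howe's normal-theory approximation, an assumption, since the article derives its own limits and sample-size formula) and check containment in the acceptance range:

```python
import numpy as np
from scipy.stats import chi2, norm

def howe_k2(n, p=0.90, conf=0.95):
    """Howe's approximate two-sided (p, conf) normal tolerance factor."""
    nu = n - 1
    z = norm.ppf((1 + p) / 2)
    return np.sqrt(nu * (1 + 1 / n) * z**2 / chi2.ppf(1 - conf, nu))

def biosimilar(diffs, accept_lo, accept_hi, p=0.90, conf=0.95):
    """Declare similarity if the two-sided tolerance interval for the
    differences lies entirely inside the acceptance range."""
    d, s, n = np.mean(diffs), np.std(diffs, ddof=1), len(diffs)
    k = howe_k2(n, p, conf)
    return bool(accept_lo <= d - k * s and d + k * s <= accept_hi)
```

The test is one-shot containment rather than a mean-equivalence comparison, which is exactly how it assesses accuracy and precision simultaneously.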

20.
Logistic models with a random intercept are prevalent in medical and social research where clustered and longitudinal data are often collected. Traditionally, the random intercept in these models is assumed to follow some parametric distribution such as the normal distribution. However, such an assumption inevitably raises concerns about model misspecification and misleading inference conclusions, especially when there is dependence between the random intercept and model covariates. To protect against such issues, we use a semiparametric approach to develop a computationally simple and consistent estimator where the random intercept is distribution-free. The estimator is revealed to be optimal and achieve the efficiency bound without the need to postulate or estimate any latent variable distributions. We further characterize other general mixed models where such an optimal estimator exists.
