Similar Articles (20 results)
1.
Continuing their investigation of distributions arising from faulty and "incomplete" inspection from a finite population (Johnson et al. (1980), Comm. Statist. A9, 917-922; Kotz and Johnson, ibid., A11, 1997-2016), the authors investigate cases in which inspection is not restricted to a single type of defect. Applications to grading problems, where grading is based on sets of defects found by sampling inspection, are indicated.

2.
For continuous inspection schemes in an automated manufacturing environment, a useful alternative to the traditional p or np chart is the Run-Length control chart, which is based on plotting the run lengths (the numbers of conforming items) between successive nonconforming items. Its establishment, however, relies on the assumption of error-free inspection, which can seldom be met in practice. In this paper, the effects of inspection errors on the Run-Length chart are investigated under the assumption that the error rates are known. The actual false alarm probability and the average number inspected (ANI) in the presence of inspection errors are studied. The paper also presents adjusted control limits for the Run-Length chart, which yield ANI curves much closer to those obtained under error-free inspection.
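As a hedged sketch of the quantities this abstract refers to (not the paper's own derivation): if the two classification error rates are denoted e1 (a conforming item classified nonconforming) and e2 (a nonconforming item classified conforming), the fraction of items that *appear* nonconforming, and the geometric probability limits of a Run-Length chart, can be computed as below. The function names and the alpha default are illustrative assumptions.

```python
import math

def apparent_fraction(p, e1, e2):
    """Fraction of items classified nonconforming when the true
    fraction is p, with assumed-known error rates:
    e1 = P(conforming classified nonconforming),
    e2 = P(nonconforming classified conforming)."""
    return p * (1.0 - e2) + (1.0 - p) * e1

def run_length_limits(p, alpha=0.0027):
    """Probability limits for a Run-Length (geometric) chart.
    The run length Y (conforming items between successive
    nonconforming ones) satisfies P(Y >= y) = (1 - p)**y, so the
    limits are geometric quantiles at alpha/2 and 1 - alpha/2."""
    q = 1.0 - p
    lcl = math.log(1.0 - alpha / 2.0) / math.log(q)
    ucl = math.log(alpha / 2.0) / math.log(q)
    return lcl, ucl
```

Feeding the apparent (error-inflated) fraction into `run_length_limits` shrinks the upper limit, which is the direction of adjustment the abstract describes.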

3.
Graff and Roeloffs' (1972) modification of the Dorfman (1943) screening procedure, and their analysis of the effects of inspection error on properties of the procedure, are extended to hierarchical procedures, using the results of Kotz and Johnson (1982).

4.
We describe an algorithm to fit an S_U curve of the Johnson system by moment matching. The algorithm follows from a new parametrization and reduces the problem to a root-finding procedure that can be implemented efficiently using a bisection or Newton-Raphson method. This allows the four parameters of the Johnson curve to be determined to any desired degree of accuracy, and is fast enough to be implemented in a real-time setting. A practical application of the method lies in the fact that many firms use the Johnson system to manage financial risk.
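The symmetric special case (gamma = 0) already illustrates the root-finding idea: matching kurtosis reduces to a one-dimensional bisection. The sketch below is an assumption-laden reconstruction of that special case, not the authors' parametrization; for gamma = 0 and w = exp(1/delta**2) one can show kurtosis beta_2 = (w**4 + 2*w**2 + 3)/2 and variance lam**2 * (w**2 - 1)/2.

```python
import math

def su_symmetric_fit(mean, var, kurt, tol=1e-12):
    """Fit a symmetric (gamma = 0) Johnson S_U curve
        X = xi + lam * sinh(Z / delta),  Z ~ N(0, 1),
    by matching mean, variance and kurtosis beta_2 = mu_4 / mu_2**2.
    Since beta_2 = (w**4 + 2*w**2 + 3) / 2 with w = exp(1/delta**2),
    delta is found by bisection on w."""
    if kurt <= 3.0:
        raise ValueError("S_U requires kurtosis > 3")
    f = lambda w: (w**4 + 2.0 * w**2 + 3.0) / 2.0 - kurt
    lo, hi = 1.0, 2.0
    while f(hi) < 0.0:          # expand until the root is bracketed
        hi *= 2.0
    while hi - lo > tol:        # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    w = 0.5 * (lo + hi)
    delta = 1.0 / math.sqrt(math.log(w))
    lam = math.sqrt(2.0 * var / (w**2 - 1.0))  # var = lam**2 (w**2 - 1) / 2
    return mean, lam, delta                     # xi = mean when gamma = 0
```

The full four-parameter (skewed) fit replaces this one-dimensional search with a system in (gamma, delta), which is where the Newton-Raphson variant mentioned in the abstract pays off.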

5.
Volume 3 of Analysis of Messy Data by Milliken & Johnson (2002) provides detailed recommendations about sequential model development for the analysis of covariance. In his review of this volume, Koehler (2002) asks whether users should be concerned about the effect of this sequential model development on the coverage probabilities of confidence intervals for comparing treatments. We present a general methodology for the examination of these coverage probabilities in the context of the two-stage model selection procedure that uses two F tests and is proposed in Chapter 2 of Milliken & Johnson (2002). We apply this methodology to an illustrative example from this volume and show that these coverage probabilities are typically very far below nominal. Our conclusion is that users should be very concerned about the coverage probabilities of confidence intervals for comparing treatments constructed after this two-stage model selection procedure.

6.
Reviews     
Abstract

Beth Ashmore reviews The Purpose-Based Library: Finding Your Path to Survival, Success and Growth; Scott Johnson reviews Metaliteracy in Practice; Catherine Johnson reviews Teaching Information Literacy Reframed: 50+ Framework-Based Exercises for Creating Information-Literate Learners; Adolfo G. Prieto reviews Dynamic Research Support for Academic Libraries.

7.
Abstract

The paper is concerned with an acceptance sampling problem under destructive inspection for one-shot systems. The systems may fail at random times while they are operating (the systems are considered to be operating from the moment storage begins), and these failures can be identified only by inspection. Thus, n samples are randomly selected from N one-shot systems for periodic destructive inspection. After storage time T, the N systems are replaced if the number of working systems is less than a pre-specified threshold k. The primary purpose of this study is to determine the optimal sample size n*, drawn from the N systems for destructive testing, and the optimal acceptance number k*, subject to a constraint on the system interval availability, so as to minimize the expected cost rate. Numerical experiments investigate the effect of the sampling-inspection parameters on the optimal solutions.

8.
The primary purpose of sampling inspection is the protection of the consumer's interests. Under simple cost models sampling inspection never serves the producer's interest, but some form of sampling inspection can be beneficial to the consumer under the same assumptions. We consider the case of isolated lot inspection and examine the consumer risk, economic sample design, and errors in the inspection process. Acceptance sampling is shown to be cost-effective for the consumer whenever the lot quality is less than perfect, and even for perfect lot quality in the presence of inspection errors.

9.
Heteroscedasticity checking in regression analysis plays an important role in modelling, and is of particular interest when the random errors are correlated, including autocorrelated and partially autocorrelated errors. In this paper, we consider multivariate t linear regression models and construct score tests for the cases of AR(1) and ARMA(s,d) errors. The asymptotic properties of the score tests, including the asymptotic chi-square distribution and approximate powers under local alternatives, are studied. Based on the modified profile likelihood, an adjusted score test is also developed. The finite-sample performance of the tests is investigated through Monte Carlo simulations, and the tests are illustrated with two real data sets.

10.
The well-known Johnson system of distributions was developed by N. L. Johnson (1949). Slifker and Shapiro (1980) presented a criterion for choosing a member from the three distributional classes (S_B, S_L, and S_U) in the Johnson system to fit a set of data. The criterion is based on the value of a quantile ratio, which depends on a specified positive z value and the parameters of the distribution. In this paper, we present some properties of the quantile ratio for various distributions and for some selected z values. Some comments are made on using the criterion to select a Johnson distribution to fit empirical data.
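The Slifker and Shapiro (1980) rule can be sketched as follows: with x_{kz} denoting the quantile at Phi(k*z), put m = x_{3z} - x_z, n = x_{-z} - x_{-3z}, p = x_z - x_{-z}; then m*n/p**2 > 1 points to S_U, < 1 to S_B, and a value near 1 to S_L (the lognormal boundary). In this sketch, z = 0.524 is a value often quoted with this criterion, and `tol` is a hypothetical tolerance band, not part of the original rule.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def select_johnson_family(quantile, z=0.524, tol=0.05):
    """Quantile-ratio selection rule.  `quantile` maps a probability
    in (0, 1) to the corresponding sample or theoretical quantile."""
    x3, x1 = quantile(phi(3 * z)), quantile(phi(z))
    xm1, xm3 = quantile(phi(-z)), quantile(phi(-3 * z))
    m, n, p = x3 - x1, xm1 - xm3, x1 - xm1
    ratio = m * n / p**2
    if ratio > 1.0 + tol:
        return "SU", ratio    # unbounded, heavy-tailed
    if ratio < 1.0 - tol:
        return "SB", ratio    # bounded
    return "SL", ratio        # lognormal boundary
```

For instance, the heavy-tailed Cauchy quantile function selects S_U, while the bounded uniform quantile function selects S_B.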

11.
Meta-analysis refers to a quantitative method for combining results from independent studies in order to draw overall conclusions. We consider hierarchical models, including selection models, under a skewed heavy-tailed error distribution proposed originally by Chen, Dey, and Shao [M.H. Chen, D.K. Dey, and Q.M. Shao, A new skewed link model for dichotomous quantal response data, J. Amer. Statist. Assoc. 94 (1999), pp. 1172–1186] and Branco and Dey [D. Branco and D.K. Dey, A general class of multivariate skew-elliptical distributions, J. Multivariate Anal. 79 (2001), pp. 99–113]. These rich classes of models combine the information of independent studies, allowing investigation of variability both between and within studies, and incorporate weight functions. We construct a detailed computational scheme under skewed normal and skewed Student's t distributions using the MCMC method. Bayesian model selection is conducted by Bayes factor under the different skewed error distributions. Finally, we illustrate our methodology using a real data example taken from Johnson [M.F. Johnson, Comparative efficacy of NaF and SMFP dentifrices in caries prevention: a meta-analysis overview, J. Eur. Organ. Caries Res. 27 (1993), pp. 328–336].

12.
Reviews     
Abstract

Elizabeth Parang reviews Reengineering the Library: Issues in Electronic Resources Management. Scott Johnson reviews Academic Library Management: Case Studies.

13.
An examination of sampling inspection models is provided for the case where inspection is not perfect and classification errors can be made. The conjugate family of distributions is obtained under the assumption that defective items are generated according to a Bernoulli process. To simplify the analysis, the use of a single beta prior distribution is considered. Relevant predictive distributions are obtained.
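A minimal sketch of the conjugate beta/Bernoulli machinery the abstract refers to, under error-free classification: posterior updating of a Beta(a, b) prior, and the resulting beta-binomial predictive distribution for the number of defectives in a future sample. The function names are illustrative, and the abstract's error-prone-classification extension is not modelled here.

```python
import math

def posterior(a, b, defectives, sample_size):
    """Beta posterior parameters after observing a Bernoulli sample:
    Beta(a, b) prior  ->  Beta(a + d, b + n - d) posterior."""
    return a + defectives, b + sample_size - defectives

def beta_binomial_pmf(x, n, a, b):
    """Predictive P(X = x defectives among n future items) when the
    fraction defective has a Beta(a, b) prior: the beta-binomial pmf
    C(n, x) * B(a + x, b + n - x) / B(a, b), computed via lgamma."""
    return (math.comb(n, x)
            * math.exp(math.lgamma(a + x) + math.lgamma(b + n - x)
                       - math.lgamma(a + b + n)
                       + math.lgamma(a + b)
                       - math.lgamma(a) - math.lgamma(b)))
```

With a uniform Beta(1, 1) prior the predictive distribution over x defectives in n items is uniform on {0, ..., n}, a quick sanity check on the formula.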

14.
This paper introduces practical methods of parameter and standard error estimation for adaptive robust regression where the errors are assumed to come from a normal/independent family of distributions. In particular, generalized EM (GEM) algorithms are considered for the two cases of the t and slash families of distributions. For the t family, a one-step method is proposed to estimate the degrees-of-freedom parameter. The use of empirical information is suggested for standard error estimation, and it is shown that this choice leads to standard errors that can be obtained as a by-product of the GEM algorithm. The proposed methods can be implemented in most available nonlinear regression programs; details of implementation in SAS NLIN are given using two specific examples.

15.
Continuing previous work on the effects of inspection errors on group sampling schemes, a modification of Dorfman-Sterrett schemes is studied. The modification consists of reverting to group sampling when a specified number of nonconformance decisions have occurred in the course of inspecting individual items.
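For context, the error-free Dorfman baseline that these schemes modify can be sketched as follows; this is the classical expected-tests calculation, not the paper's error-prone Dorfman-Sterrett analysis, and the function names are illustrative.

```python
def dorfman_tests_per_item(p, k):
    """Expected tests per item under the Dorfman (1943) group-screening
    procedure with error-free inspection: a pool of k items is tested
    once, and only a nonconforming pool is retested item by item,
    giving 1/k + 1 - (1 - p)**k tests per item."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def best_group_size(p, k_max=100):
    """Group size minimising the expected number of tests per item."""
    return min(range(2, k_max + 1),
               key=lambda k: dorfman_tests_per_item(k=k, p=p))
```

At a nonconformance rate of 1%, the optimal pool size is around 11 and group screening needs roughly a fifth of the tests that item-by-item inspection would.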

16.
Regrettably, there are a number of significant errors in Moschopoulos and Shpak (2010), Communications in Statistics—Theory and Methods 39:1761–1775. Most are typographic errors that have the potential to cause confusion and, in some instances, have carried through to several equations.

17.
18.
Measures of distributional symmetry based on quantiles, L-moments, and trimmed L-moments are briefly reviewed, and the (asymptotic) sampling properties of commonly used estimators are considered. Standard errors are estimated using both analytical and computer-intensive methods. Simulation is used to assess the results when sampling from some known distributions; bootstrapping is used on sample data to estimate standard errors, construct confidence intervals, and test a hypothesis of distributional symmetry. Symmetry measures based on 2- or 3-trimmed L-moments have some advantages over other measures in terms of their existence, and their estimators are generally well behaved, even in relatively small samples.
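A minimal sketch of the computer-intensive side of this programme, assuming Bowley's quantile-based skewness coefficient as the symmetry measure and a deliberately crude empirical quantile; the bootstrap standard error routine below is generic and not specific to the paper's trimmed-L-moment measures.

```python
import random

def bowley_skewness(data):
    """Quantile-based symmetry measure (Bowley's coefficient):
    (Q3 + Q1 - 2*Q2) / (Q3 - Q1); zero for a symmetric sample."""
    xs = sorted(data)
    n = len(xs)
    q = lambda p: xs[min(n - 1, int(p * n))]  # crude empirical quantile
    q1, q2, q3 = q(0.25), q(0.5), q(0.75)
    return (q3 + q1 - 2.0 * q2) / (q3 - q1)

def bootstrap_se(data, stat, reps=1000, seed=0):
    """Bootstrap standard error of a statistic: resample with
    replacement, recompute, and take the standard deviation."""
    rng = random.Random(seed)
    n = len(data)
    vals = [stat([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(reps)]
    mean = sum(vals) / reps
    return (sum((v - mean) ** 2 for v in vals) / (reps - 1)) ** 0.5
```

The same `bootstrap_se` wrapper works unchanged for any other symmetry statistic, which is essentially how the bootstrap confidence intervals in the abstract are obtained.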

19.
In this paper, the tightened-normal-tightened variables sampling scheme TNT (n_T, n_N; k) is considered, and procedures and the necessary tables are developed for selecting the scheme indexed through the crossover point (COP). The importance of the COP, and the properties and advantages of the operating characteristic curve with respect to the COP, are studied.

20.
In this paper we shall deal with acceptance sampling plans in which the remainder of rejected lots is inspected. We shall consider two types of LTPD plans: plans for inspection by variables, and plans for inspection by variables and attributes (all items from the sample are inspected by variables; the remainder of rejected lots is inspected by attributes). We shall report on an algorithm allowing the calculation of these plans when the non-central t distribution is used for the operating characteristic. The calculation is considerably difficult, and a direct algorithm based on the non-central t distribution takes several minutes, so for the calculation we shall use an original method.
