Similar Literature
 20 similar records found
1.
Tests for normality can be divided into two groups: those based on a function of the empirical distribution function and those based on a function of the original observations. The latter group of statistics tests spherical symmetry and not necessarily normality. If the distribution is completely specified, then the first group can be used to test for ‘spherical’ normality. However, if the distribution is incompletely specified and F((xi − x̄)/s) is used, these test statistics also test sphericity rather than normality. A Monte Carlo study was conducted for the completely specified case to investigate the sensitivity of the distance tests to departures from normality when the alternative distributions are non-normal spherically symmetric laws. A “new” test statistic is proposed for testing a completely specified normal distribution.
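The distance-test setting described above can be illustrated with a minimal sketch: a Kolmogorov–Smirnov-type distance between the empirical CDF and a completely specified normal CDF. The sample sizes, seed, and the t(3) alternative below are illustrative choices, not taken from the paper.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

def ks_statistic(x, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF of x
    and a fully specified theoretical CDF."""
    x = np.sort(x)
    n = len(x)
    F = cdf(x)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - F), np.max(F - ecdf_lo))

def norm_cdf(z):
    """Standard normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))

x = rng.normal(0.0, 1.0, size=500)   # data from the specified N(0, 1) null
d_null = ks_statistic(x, norm_cdf)

y = rng.standard_t(df=3, size=500)   # heavy-tailed spherical alternative
d_alt = ks_statistic(y, norm_cdf)
```

Under the completely specified null the distance shrinks at rate n^(−1/2), which is what makes a Monte Carlo sensitivity study of such distance tests straightforward.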

2.
An account of the behavior of the independent-samples t-test when applied to homoscedastic bivariate normal data is presented, and a comparison is made with the paired-samples t-test. Since the significance level is not violated when applying the independent-samples t-test to data consisting of positively correlated pairs, and since the estimate of the variance is based on a larger number of degrees of freedom, the results suggest that when the sample size is small, one should not worry much about the possible existence of weak positive correlation. One may do better, power-wise, to ignore such correlation and use the independent-samples t-test, as though the samples were independent.
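The comparison above can be sketched by computing both statistics on simulated positively correlated pairs; the correlation value 0.3 and the sample size are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def t_independent(x, y):
    """Pooled-variance two-sample t statistic (n1 + n2 - 2 df)."""
    n1, n2 = len(x), len(y)
    sp2 = ((n1 - 1) * np.var(x, ddof=1) + (n2 - 1) * np.var(y, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1 / n1 + 1 / n2))

def t_paired(x, y):
    """Paired t statistic on the differences (n - 1 df)."""
    d = x - y
    return np.mean(d) / (np.std(d, ddof=1) / np.sqrt(len(d)))

# Weakly, positively correlated homoscedastic bivariate normal pairs.
n, rho = 10, 0.3
cov = [[1.0, rho], [rho, 1.0]]
xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
t_ind = t_independent(xy[:, 0], xy[:, 1])
t_par = t_paired(xy[:, 0], xy[:, 1])
```

With positive correlation the pooled variance overestimates the variance of the mean difference, so the independent-samples test is conservative in level, while its larger degrees of freedom can pay off in power at small n.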

3.
For the implementation of an acceptance sampling plan, a problem quality practitioners have to deal with is the determination of the critical acceptance values and inspection sample sizes that provide the desired levels of protection to both vendors and buyers. Traditionally, most acceptance sampling plans focus on the percentage of defective products instead of considering the process loss, and thus do not distinguish among products that fall within the specification limits. However, the quality of products that fall within the specification limits may be very different, so it is necessary to design acceptance sampling plans that take process loss into consideration. In this article, a variables sampling plan based on the index L_e is proposed to handle processes requiring low process loss. The required sample sizes n and the critical acceptance values c are tabulated for various combinations of acceptance quality levels. The proposed sampling plan provides a feasible policy, which can be applied to products requiring low process loss where classical sampling plans cannot be applied.

4.
In this paper, we consider the multivariate normality test based on the measure of multivariate sample skewness defined by Srivastava (1984). Srivastava derived the asymptotic expectation up to order N^−1 for the multivariate sample skewness and an approximate χ² test statistic, where N is the sample size. Under normality, we derive another expectation and variance for Srivastava's multivariate sample skewness in order to obtain a better test statistic. From this result, an improved approximate χ² test statistic using the multivariate sample skewness is also given for assessing multivariate normality. Finally, numerical results from Monte Carlo simulation are shown in order to evaluate the accuracy of the obtained expectation, variance and improved approximate χ² test statistic. Furthermore, upper and lower percentiles of the χ² test statistic derived in this paper are compared with those of the χ² test statistic derived by Mardia (1974), which uses the multivariate sample skewness defined by Mardia (1970).
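Since the paper compares against Mardia-type skewness, a minimal sketch of Mardia's (1970) multivariate sample skewness b_{1,p} and its usual χ² approximation may help orient the reader (Srivastava's variant, whose exact form is not given in the abstract, is not reproduced here).

```python
import numpy as np

def mardia_skewness(X):
    """Mardia's (1970) multivariate sample skewness b_{1,p} and its
    approximate chi-square statistic n*b/6 with p(p+1)(p+2)/6 df."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                    # ML covariance estimate
    G = Xc @ np.linalg.inv(S) @ Xc.T     # Mahalanobis cross products d_ij
    b1p = np.sum(G ** 3) / n ** 2        # (1/n^2) * sum_ij d_ij^3
    stat = n * b1p / 6.0
    df = p * (p + 1) * (p + 2) // 6
    return b1p, stat, df

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))            # illustrative trivariate normal sample
b1p, stat, df = mardia_skewness(X)
```

Under multivariate normality, stat is approximately χ² with p(p+1)(p+2)/6 degrees of freedom (10 for p = 3), which is the kind of approximation the paper's improved expectation and variance are meant to sharpen.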

5.
This paper introduces double and group acceptance sampling plans based on time-truncated lifetimes when the lifetime of an item follows the inverse log-logistic (ILL) distribution with known shape parameter. The operating characteristic function and average sample number (ASN) values of the double acceptance sampling inspection plan are provided. The minimum number of groups and the operating characteristic function for various quality levels are obtained for a group acceptance sampling inspection plan. A comparative study between the single and double acceptance sampling inspection plans is carried out in terms of sample size. One simulated example and four real-life examples are discussed to show the applicability of the proposed double and group acceptance sampling inspection plans for ILL-distributed quality parameters.

6.
A two-stage group acceptance sampling plan based on a truncated life test is proposed, which can be used regardless of the underlying lifetime distribution when multi-item testers are employed. The decision upon lot acceptance can be made in the first or second stage according to the number of failures from each group. The design parameters of the proposed plan such as the number of groups required and the acceptance number for each of the two stages are determined independently of an underlying lifetime distribution so as to satisfy the consumer's risk at the specified unreliability. Single-stage group sampling plans are also considered as special cases of the proposed plan and compared with the proposed plan in terms of the average sample number and the operating characteristics. Some important distributions are considered to explain the procedure developed here.

7.
Until now, various acceptance reliability sampling plans have been developed based on different life tests of items. However, the statistical effect of the acceptance sampling tests on the reliability characteristic of the lots accepted in the test has not been appropriately addressed. In this paper, we deal with an acceptance reliability sampling plan under a ‘general framework’ and discuss the corresponding statistical effect of the acceptance sampling tests. The lifetime of the population before the acceptance test and that of the population ‘conditional on the acceptance’ in the sampling test are stochastically compared. The improvement of reliability characteristics of the population conditional on the acceptance in the sampling test is precisely analyzed.

8.
A sorting-and-measuring machine (SMM) measures and sorts (classifies) on-line produced items into several groups according to their size. The measuring devices of the SMM perceive the actual item size z with a random error ε and classify the item as being smaller than b iff z + ε < b. Here ε is a normal zero-mean r.v. with unknown standard deviation σ, which is the main parameter characterizing the precision and technical condition of an SMM. The paper gives the following method of estimating σ. N0 items are measured and N1 of them are recognized by the SMM as belonging to the group a < z ≤ b. These N1 items are sorted again and N2 of them return to this group; these are sorted again, and so on. The estimation of σ is based on the ratio statistics Nm/Nn. Moments of the ratio statistics Nm/Nn and their distributional properties are investigated. It turns out that the expected value of Nm/Nn depends almost linearly on σ, which allows us to construct ‘almost’ unbiased estimators of the type σ̂mn = A·Nm/Nn + B with good properties, including robustness with respect to the distribution of item size. Convex combinations of the σ̂mn statistics are considered to obtain an estimator with minimal variance.
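The repeated-sorting scheme can be simulated directly. The band (a, b], the noise level σ, and the uniform item-size distribution below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def repeated_sorting_counts(z, a, b, sigma, rounds):
    """Repeatedly re-sort items into the group a < size <= b, where each
    measurement of the true size z is perturbed by fresh N(0, sigma) noise.
    Returns [N0, N1, ..., N_rounds]: the survivors after each pass."""
    counts = [len(z)]
    current = z
    for _ in range(rounds):
        noisy = current + rng.normal(0.0, sigma, size=len(current))
        current = current[(noisy > a) & (noisy <= b)]
        counts.append(len(current))
    return counts

z = rng.uniform(0.0, 10.0, size=20000)   # illustrative item sizes
counts = repeated_sorting_counts(z, a=4.0, b=6.0, sigma=0.5, rounds=3)
ratio = counts[2] / counts[1]            # the ratio statistic N2 / N1
```

Because items near the band edges are re-classified with probability depending on σ, the expected value of N2/N1 falls as σ grows, which is the near-linear dependence the estimators σ̂mn = A·Nm/Nn + B exploit.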

9.
In this paper, variable repetitive group sampling plans based on one-sided process capability indices are proposed to deal with lot sentencing for one-sided specifications. The parameters of the proposed plans are tabulated for some combinations of acceptance quality levels with commonly used producer's and consumer's risks. The efficiency of the proposed plan is compared with the Pearn and Wu [Critical acceptance values and sample sizes of a variables sampling plan for very low fraction of defectives. Omega – Int J Manag Sci. 2006;34(1):90–101] plan in terms of sample size and the power curve. One example is given to illustrate the proposed methodology.

10.

In this article, we propose a more general criterion, called the Sp-criterion, for subset selection in the multiple linear regression model. Many subset selection methods are based on the least squares (LS) estimator of β, but whenever the data contain an influential observation or the distribution of the error variable deviates from normality, the LS estimator performs poorly and hence a method based on this estimator (for example, Mallows' Cp-criterion) tends to select a ‘wrong’ subset. The proposed method overcomes this drawback, and its main feature is that it can be used with any type of estimator (either the LS estimator or any robust estimator) of β without any need for modification of the proposed criterion. Moreover, this technique is operationally simpler to implement than other existing criteria. The method is illustrated with examples.

11.
In this paper, for heavy-tailed models, and working with the sample of the k largest observations, we present probability weighted moments (PWM) estimators for the first-order tail parameters. Under regular variation conditions on the right tail of the underlying distribution function F, we prove the consistency and asymptotic normality of these estimators. Their performance, for finite sample sizes, is illustrated through a small-scale Monte Carlo simulation.

12.
In a recent volume of this journal, Holden [Testing the normality assumption in the Tobit model, J. Appl. Stat. 31 (2004) pp. 521–532] presents Monte Carlo evidence comparing several tests for departures from normality in the Tobit model. This study adds to the work of Holden by considering another test, and several information criteria, for detecting departures from normality in the Tobit model. The test given here is a modified likelihood ratio statistic based on a partially adaptive estimator of the censored regression model using the approach of Caudill [A partially adaptive estimator for the censored regression model based on a mixture of normal distributions, Working Paper, Department of Economics, Auburn University, 2007]. The information criteria examined include Akaike's Information Criterion (AIC), the Consistent AIC (CAIC), the Bayesian Information Criterion (BIC), and Akaike's BIC (ABIC). In terms of fewest ‘rejections’ of a true null, the best performance is exhibited by the CAIC and the BIC, although, like some of the statistics examined by Holden, there are computational difficulties with each.

13.
Isotones are a deterministic graphical device introduced by Mudholkar et al. [1991. A graphical procedure for comparing goodness-of-fit tests. J. Roy. Statist. Soc. B 53, 221–232] in the context of comparing some tests of normality. An isotone of a test is a contour of p values of the test applied to “ideal samples”, called profiles, from a two-shape-parameter family representing the null and the alternative distributions of the parameter space. The isotone is an adaptation of Tukey's sensitivity curves, a generalization of Prescott's stylized sensitivity contours, and an alternative to the isodynes of Stephens. The purpose of this paper is twofold. One is to show that the isotones can provide useful qualitative information regarding the behavior of tests of distributional assumptions other than normality. The other is to show that the qualitative conclusions remain the same from one two-parameter family of alternatives to another. Towards this end we construct and interpret the isotones of some tests of the composite hypothesis of exponentiality, using the profiles of two Weibull extensions, the generalized Weibull and the exponentiated Weibull families, which allow IFR, DFR, as well as unimodal and bathtub failure rate alternatives. Thus, as a by-product of the study, it is seen that a test due to Csörgő et al. [1975. Application of characterizations in the area of goodness-of-fit. In: Patil, G.P., Kotz, S., Ord, J.K. (Eds.), Statistical Distributions in Scientific Work, vol. 2. Reidel, Boston, pp. 79–90] and Gnedenko's Q(r) test [1969. Mathematical Methods of Reliability Theory. Academic Press, New York] are appropriate for detecting monotone failure rate alternatives, whereas a bivariate F test due to Lin and Mudholkar [1980. A test of exponentiality based on the bivariate F distribution. Technometrics 22, 79–82] and their entropy test [1984. On two applications of characterization theorems to goodness-of-fit. Colloq. Math. Soc. Janos Bolyai 45, 395–414] can detect all alternatives, but are especially suitable for nonmonotone failure rate alternatives.

14.
A double acceptance sampling plan for the truncated life test is developed assuming that the lifetime of a product follows a generalized log-logistic distribution with known shape parameters. The zero and one failure scheme is mainly considered, where the lot is accepted if no failures are observed from the first sample and it is rejected if two or more failures occur. When there is one failure from the first sample, the second sample is drawn and tested for the same duration as the first sample. The minimum sample sizes of the first and second samples are determined to ensure that the true median life is longer than the given life at the specified consumer's confidence level. The operating characteristics are analyzed according to various ratios of the true median life to the specified life. The minimum such ratios are also obtained so as to lower the producer's risk at the specified level. The results are explained with examples.
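The zero-and-one failure scheme above has a simple acceptance probability under binomial sampling: P_a = (1 − p)^n1 + n1·p·(1 − p)^(n1 − 1)·(1 − p)^n2, where p is the probability an item fails within the truncated test time. The sample sizes n1 = n2 = 20 below are illustrative, not values from the paper's tables.

```python
from math import comb

def accept_prob(p, n1, n2):
    """Zero-and-one failure double plan: accept on 0 failures in the first
    sample of n1; on exactly 1 failure, accept iff a second sample of n2
    shows 0 failures; reject on 2+ failures in the first sample."""
    p0 = (1 - p) ** n1                          # 0 failures: accept outright
    p1 = comb(n1, 1) * p * (1 - p) ** (n1 - 1)  # exactly 1 failure
    return p0 + p1 * (1 - p) ** n2              # second sample must be clean

pa_good = accept_prob(0.01, n1=20, n2=20)  # good lot: small failure probability
pa_bad = accept_prob(0.10, n1=20, n2=20)   # bad lot: large failure probability
```

Evaluating P_a at the consumer's and producer's quality levels is exactly how the minimum sample sizes and median-life ratios in the paper's tables are determined.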

15.
We consider the problem of testing the equality of two population means when the population variances are not necessarily equal. We propose a Welch-type statistic, say T*_c, based on Tiku's (1967, 1980) modified maximum likelihood estimators, and show that this statistic is robust to symmetric and moderately skew distributions. We investigate the power properties of the statistic T*_c; T*_c clearly seems to be more powerful than Yuen's (1974) Welch-type robust statistic based on the trimmed sample means and the matching sample variances. We show that the analogous statistics based on ‘adaptive’ robust estimators give misleading Type I errors. We generalize the results to testing linear contrasts among k population means.
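For orientation, here is a sketch of the classic Welch statistic with ordinary sample moments; the paper's T*_c instead plugs in Tiku's modified maximum likelihood estimators, which are not reproduced here. The sample sizes and the variance ratio are illustrative assumptions.

```python
import numpy as np

def welch_t(x, y):
    """Classic Welch statistic with Satterthwaite degrees of freedom."""
    n1, n2 = len(x), len(y)
    v1, v2 = np.var(x, ddof=1) / n1, np.var(y, ddof=1) / n2
    t = (np.mean(x) - np.mean(y)) / np.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, size=15)
y = rng.normal(0.0, 3.0, size=25)   # deliberately unequal variances
t, df = welch_t(x, y)
```

The Satterthwaite df always lies between min(n1, n2) − 1 and n1 + n2 − 2; robust variants such as T*_c or Yuen's statistic keep this structure but swap in robust location and scale estimates.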

16.
A double sampling plan based on truncated life tests is proposed and designed under a general life distribution. The design parameters such as sample sizes and acceptance numbers for the first and second samples are determined so as to minimize the average sample number subject to satisfying the consumer's and producer's risks at the respectively specified quality levels. The resultant tables can be used regardless of the underlying distribution as long as the reliability requirements are specified at the two risks. In addition, Gamma and Weibull distributions are particularly considered to report the design parameters according to the quality levels in terms of the mean ratios.

17.
Impartial trimming procedures with respect to general ‘penalty’ functions, Φ, were recently introduced by Cuesta-Albertos et al. (1997. Ann. Statist. 25, 553–576) in the (generalized) k-means framework. Under regularity assumptions, for real-valued samples, we obtain the asymptotic normality both of the impartial trimmed k-mean estimator (Φ(x) = x²) and of the impartial trimmed k-median estimator (Φ(x) = x). In spite of the additional complexity coming from the several-groups setting, the empirical quantile methodology used in Butler (1982. Ann. Statist. 10, 197–204) for the LTS estimator (and subsequently in Tableman (1994. Statist. Probab. Lett. 19, 387–398) for the LTAD estimator) also works in our framework. Besides their relevance for the robust estimation of quantizers, our results open the possibility of considering asymptotic distribution-free tolerance regions, constituted by unions of intervals, for predicting a future observation, generalizing the idea in Butler (1982).

18.
The problem of finding confidence regions (CR) for a q-variate vector γ given as the solution of a linear functional relationship (LFR) Λγ = μ is investigated. Here an m-variate vector μ and an m × q matrix Λ = (Λ1, Λ2,…, Λq) are unknown population means of an m(q+1)-variate normal distribution Nm(q+1)(ζ, Ω ⊗ Σ), where ζ′ = (μ′, Λ1′, Λ2′,…, Λq′), Σ is an unknown, symmetric and positive definite m × m matrix, Ω is a known, symmetric and positive definite (q+1) × (q+1) matrix, and ⊗ denotes the Kronecker product. This problem is a generalization of the univariate special case for the ratio of normal means. A CR for γ with level of confidence 1 − α is given by a quadratic inequality, which yields the so-called ‘pseudo’ confidence regions (PCR), valid conditionally in subsets of the parameter space. Our discussion is focused on the ‘bounded pseudo’ confidence region (BPCR) given by the interior of a hyperellipsoid. The two conditions necessary for a BPCR to exist are shown to be the consistency conditions concerning the multivariate LFR. The probability that these conditions hold approaches one under ‘reasonable circumstances’ in many practical situations. Hence, we may have a BPCR with confidence approximately 1 − α. Some simulation results are presented.

19.
We develop and study, in the framework of Pareto-type distributions, a general class of kernel estimators for the second-order parameter ρ, a parameter related to the rate of convergence of a sequence of linearly normalized maximum values towards its limit. Inspired by the kernel goodness-of-fit statistics introduced in Goegebeur et al. (2008), for which the mean of the normal limiting distribution is a function of ρ, we construct estimators for ρ using ratios of ratios of differences of such goodness-of-fit statistics, involving different kernel functions as well as power transformations. The consistency of this class of ρ estimators is established under some mild regularity conditions on the kernel function, a second-order condition on the tail function 1 − F of the underlying model, and for suitably chosen intermediate order statistics. Asymptotic normality is achieved under a further condition on the tail function, the so-called third-order condition. Two specific examples of kernel statistics are studied in greater depth, and their asymptotic behavior is illustrated numerically. The finite sample properties are examined by means of a simulation study.

20.

Quetelet’s data on Scottish chest girths are analyzed with eight normality tests. In contrast to Quetelet’s conclusion that the data are fit well by what is now known as the normal distribution, six of the eight normality tests provide strong evidence that the chest circumferences are not normally distributed. Using corrected chest circumferences from Stigler, the χ² test no longer provides strong evidence against normality, but five commonly used normality tests do. The D’Agostino–Pearson K² and Jarque–Bera tests, based only on skewness and kurtosis, find that both Quetelet’s original data and the Stigler-corrected data are consistent with the hypothesis of normality. The main reason most normality tests produce low p-values, indicating that Quetelet’s data are not normally distributed, is that the chest circumferences were reported in whole inches; rounding a large number of observations can produce many tied values that strongly affect most normality tests. Users should be cautious about using many standard normality tests if data have ties, are rounded, and the ratio of the standard deviation to the rounding interval is small.
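The rounding effect described above is easy to reproduce: a Jarque–Bera-type skewness-and-kurtosis statistic is nearly unchanged by whole-inch rounding, while the rounding itself creates huge numbers of tied values. The mean, standard deviation, and sample size below only roughly mimic the chest-girth data and are illustrative assumptions.

```python
import numpy as np

def jarque_bera(x):
    """Jarque-Bera statistic n/6 * (S^2 + (K - 3)^2 / 4), where S and K are
    the sample skewness and kurtosis; approximately chi-square(2) under
    normality."""
    n = len(x)
    xc = x - np.mean(x)
    s2 = np.mean(xc ** 2)
    S = np.mean(xc ** 3) / s2 ** 1.5
    K = np.mean(xc ** 4) / s2 ** 2
    return n / 6.0 * (S ** 2 + (K - 3.0) ** 2 / 4.0)

rng = np.random.default_rng(4)
girths = rng.normal(39.8, 2.05, size=5738)   # chest-girth-like normal data
jb_exact = jarque_bera(girths)
jb_rounded = jarque_bera(np.round(girths))   # reported in whole inches
n_ties = len(girths) - len(np.unique(np.round(girths)))
```

Rounding to whole inches collapses thousands of observations onto a couple of dozen distinct values, yet barely moves the skewness and kurtosis, which is why moment-based tests remain consistent with normality while EDF- and χ²-type tests react strongly to the ties.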
