Similar Articles (20 results)
1.
A new class of distributions, including the MacGillivray adaptation of the g-and-h distributions and a new family called the g-and-k distributions, may be used to approximate a wide class of distributions, with the advantage of effectively controlling skewness and kurtosis through independent parameters. This separation can be used to advantage in the assessment of robustness to non-normality in frequentist ranking and selection rules. We consider the rule of selecting the largest of several means with some specified confidence. In general, we find that the frequentist selection rule is only robust to small changes in the distributional shape parameters g and k and depends on the amount of flexibility we allow in the specified confidence. This flexibility is exemplified through a quality control example in which a subset of batches of electrical transformers is selected as the most efficient with a specified confidence, based on the sample mean performance level for each batch.
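As a sketch of how the g-and-k family separates skewness and kurtosis, the quantile function below follows the commonly used parameterization (location a, scale b, skewness g, kurtosis k, with c conventionally fixed at 0.8); the parameter values in the usage lines are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def gk_quantile(u, a=0.0, b=1.0, g=0.0, k=0.0, c=0.8):
    """Quantile function of the g-and-k distribution in its usual
    parameterization: a, b are location and scale, g controls skewness,
    k (> -0.5) controls kurtosis, and c is conventionally fixed at 0.8."""
    z = norm.ppf(u)
    return a + b * (1.0 + c * np.tanh(g * z / 2.0)) * (1.0 + z**2) ** k * z

# Inverse-CDF sampling: push uniform draws through the quantile function.
rng = np.random.default_rng(0)
u = rng.uniform(size=10_000)
x = gk_quantile(u, g=0.5, k=0.2)  # illustrative right-skewed, heavy-tailed sample
```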

2.
In this paper, subset selection procedures for selecting all treatment populations with means larger than a control population are proposed. The treatments and control are assumed to have a multivariate normal distribution. Various covariance structures are considered. All of the proposed procedures are easily implemented using existing tables of the multivariate normal and multivariate t distributions. Some other procedures which have been proposed require extensive and unavailable tables for their implementation.

3.
Cicchitelli (1989) conducted an extensive Monte Carlo study to investigate the robustness of the one-sample T-statistic under non-normal parent populations. He considered a rich family of distributions, viz. the generalized λ-distribution introduced by Ramberg et al. (1979), as the family of parent populations. We address and reinforce his empirical findings by means of an Edgeworth expansion of the T-statistic. As the skewness of the parent population affects the T-statistic more than the kurtosis, Johnson (1978) suggested a modification to the T-statistic to reduce the effect of skewness. We investigate the performance of this modified T-statistic under the same family of distributions as Cicchitelli considered by means of a Monte Carlo study and give some recommendations on its use.
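For reference, one commonly quoted form of Johnson's (1978) skewness correction adds third-moment terms to the usual one-sample statistic; the sketch below assumes this form and is not taken verbatim from the paper.

```python
import numpy as np

def johnson_t(x, mu0):
    """One commonly quoted form of Johnson's (1978) skewness-corrected
    one-sample t-statistic, with mu3 the sample third central moment."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    s2 = x.var(ddof=1)
    mu3 = ((x - xbar) ** 3).mean()
    num = (xbar - mu0) + mu3 / (6.0 * s2 * n) + mu3 * (xbar - mu0) ** 2 / (3.0 * s2**2)
    return num / np.sqrt(s2 / n)
```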

4.
We restrict attention to a class of Bernoulli subset selection procedures which take observations one-at-a-time and can be compared directly to the Gupta-Sobel single-stage procedure. For the criterion of minimizing the expected total number of observations required to terminate experimentation, we show that optimal sampling rules within this class are not of practical interest. We thus turn to procedures which, although not optimal, exhibit desirable behavior with regard to this criterion. A procedure which employs a modification of the so-called least-failures sampling rule is proposed, and is shown to possess many desirable properties among a restricted class of Bernoulli subset selection procedures. Within this class, it is optimal for minimizing the number of observations taken from populations excluded from consideration following a subset selection experiment, and asymptotically optimal for minimizing the expected total number of observations required. In addition, it can result in substantial savings in the expected total number of observations required as compared to a single-stage procedure; thus it may be desirable to a practitioner if sampling is costly or the sample size is limited.

5.
Adaptive procedures proposed by Hogg are based on selector statistics for the skewness and the tails. The asymptotic properties of several proposed selector statistics are investigated. Since, under some assumptions, all these statistics asymptotically follow a normal distribution, their properties depend on the asymptotic bias and variance. A reasonable way to compare the different selector statistics is based on the selection probabilities in discriminating the type of the underlying distribution. These values are numerically calculated and analyzed in detail for a number of underlying distributions.
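As an illustration of the kind of tail-weight selector Hogg proposed, the sketch below computes one commonly cited form, Q = (Ū.05 − L̄.05)/(Ū.5 − L̄.5), where Ū_a and L̄_a denote the means of the largest and smallest a-fractions of the order statistics; the exact selectors studied in the paper may differ.

```python
import numpy as np

def tail_mean(x, frac, upper=True):
    """Mean of the largest (or smallest) `frac` fraction of the order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    k = max(1, int(round(frac * x.size)))
    return x[-k:].mean() if upper else x[:k].mean()

def hogg_q_tail(x):
    """One form of Hogg's tail-weight selector:
    Q = (U_.05 - L_.05) / (U_.5 - L_.5); large values suggest heavy tails."""
    return ((tail_mean(x, 0.05) - tail_mean(x, 0.05, upper=False))
            / (tail_mean(x, 0.5) - tail_mean(x, 0.5, upper=False)))
```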

6.
The usual formulation of subset selection due to Gupta (1956) requires a minimum guaranteed probability of a correct selection. The modified formulation of the present paper includes an additional requirement that the expected number of nonbest populations in the selected subset be bounded above by a specified constant when the best and the next best populations are 'sufficiently' apart. A class of procedures is defined and the determination of the minimum sample size required is discussed. The specific problems discussed for normal populations include selection in terms of means and variances, and selection in terms of treatment effects in a two-way layout.
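In Gupta's formulation with common known variance and equal sample sizes, the selection constant d is defined by an integral equation evaluated at the least favourable configuration of equal means; the sketch below solves it numerically. The rule shown (retain population i iff x̄_i ≥ max_j x̄_j − dσ/√n) is one standard form, stated here for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import norm

def gupta_d(k, p_star):
    """Solve int Phi^(k-1)(z + d) phi(z) dz = P* for the selection constant d.
    The rule retains population i iff xbar_i >= max_j xbar_j - d*sigma/sqrt(n),
    which attains P(correct selection) >= P* at the equal-means configuration."""
    def deficit(d):
        val, _ = quad(lambda z: norm.cdf(z + d) ** (k - 1) * norm.pdf(z),
                      -np.inf, np.inf)
        return val - p_star
    return brentq(deficit, 0.0, 20.0)

d = gupta_d(k=5, p_star=0.95)  # e.g. five populations, 95% guarantee
```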

7.
In this paper, we restrict attention to the problem of subset selection of normal populations. The approaches and results of some previous comparison studies of subset selection procedures are discussed briefly, and then the results of a new Monte Carlo study comparing the performance of two classical procedures and a Bayes procedure are presented.

8.
Selection of the “best” t out of k populations has been considered in the indifference zone formulation by Bechhofer (1954) and in the subset selection formulation by Carroll, Gupta and Huang (1975). The latter approach is used here to obtain conservative solutions for the goals of selecting (i) all the “good” or (ii) only “good” populations, where “good” means having a location parameter among the largest t. For the case of normal distributions with common unknown variance, tables are produced for implementing these procedures. Also, for this case, simulation results suggest that the procedure may not be too conservative.

9.
The problem of selecting the best population from among a finite number of populations in the presence of uncertainty arises in many scientific investigations and has been studied extensively. Many selection procedures have been derived for different selection goals. However, most of these procedures, being frequentist in nature, do not indicate how to incorporate the information in a particular sample into a data-dependent measure of the correct selection achieved for that sample. They often assign the same decision and probability of correct selection to two different sample values, one of which may actually seem intuitively much more conclusive than the other. The methodology of conditional inference offers an approach which achieves both frequentist interpretability and a data-dependent measure of conclusiveness. By partitioning the sample space into a family of subsets, the achieved probability of correct selection is computed by conditioning on the subset in which the sample falls. In this paper, the partition considered is the so-called continuum partition, while the selection rules are both fixed-size and random-size subset selection rules. Under the distributional assumption of a monotone likelihood ratio, results on the least favourable configuration and alpha-correct selection are established. These results are not only useful in themselves, but are also used to design a new sequential procedure with elimination for selecting the best of k binomial populations. Comparisons between this new procedure and some other sequential selection procedures with regard to total expected sample size and some risk functions are carried out by simulations.

10.
Most multivariate measures of skewness in the literature measure the overall skewness of a distribution. These measures were designed for testing the hypothesis of distributional symmetry; their relevance for describing skewed distributions is less obvious. In this article, the authors consider the problem of characterizing the skewness of multivariate distributions. They define directional skewness as the skewness along a direction and analyze two parametric classes of skewed distributions using measures based on directional skewness. The analysis brings further insight into the classes, allowing for a more informed selection of classes of distributions for particular applications. The authors use the concept of directional skewness twice in the context of Bayesian linear regression under skewed error: first in the elicitation of a prior on the parameters of the error distribution, and then in the analysis of the skewness of the posterior distribution of the regression residuals.
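A minimal empirical version of skewness along a direction is the ordinary sample skewness of the data projected onto that direction; the sketch below uses this simple projection-based definition, which is an assumption for illustration and not necessarily the authors' exact measure.

```python
import numpy as np

def directional_skewness(x, u):
    """Sample skewness of the rows of x projected onto direction u --
    a simple empirical notion of skewness along a direction."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)
    proj = np.asarray(x, dtype=float) @ u
    c = proj - proj.mean()
    return (c**3).mean() / (c**2).mean() ** 1.5
```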

11.
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
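Two building blocks of such a model are the coordinatewise Box-Cox transform and the multivariate t log-density; the sketch below shows both under their standard definitions (the paper's full EM algorithm with transformation selection is not reproduced here).

```python
import numpy as np
from scipy.special import gammaln

def box_cox(y, lam):
    """Elementwise Box-Cox transform (requires y > 0); a full likelihood
    would also include the Jacobian term (lam - 1) * sum(log y)."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def mvt_logpdf(x, mu, sigma, nu):
    """Log-density of the multivariate t with location mu, scale matrix
    sigma, and nu degrees of freedom, evaluated at the rows of x."""
    x = np.atleast_2d(x)
    p = x.shape[1]
    diff = x - mu
    q = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(sigma), diff)
    _, logdet = np.linalg.slogdet(sigma)
    return (gammaln((nu + p) / 2.0) - gammaln(nu / 2.0)
            - 0.5 * (p * np.log(nu * np.pi) + logdet)
            - 0.5 * (nu + p) * np.log1p(q / nu))
```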

12.
This paper is concerned primarily with subset selection procedures based on the sample medians of logistic populations. A procedure is given which chooses a nonempty subset from among k independent logistic populations, having a common known variance, so that the population with the largest location parameter is contained in the subset with a pre-specified probability. The constants required to apply the median procedure with small sample sizes (≤ 19) are tabulated and can also be used to construct simultaneous confidence intervals. Asymptotic formulae are provided for application with larger sample sizes. It is shown that, under certain situations, rules based on the median are substantially more efficient than analogous procedures based either on sample means or on the sum of joint ranks.

13.
The Schlömilch transformation, long used by mathematicians for integral evaluation, allows probability mass to be redistributed, thus transforming old distributions into new ones. The transformation is used to introduce some new families of distributions on ℝ+. Their general properties are studied, i.e., distributional shape and skewness, moments and inverse moments, the hazard function, and random number generation. In general, these distributions are suitable for modeling data where the hazard function initially rises steeply. Their usefulness is illustrated by fitting some human weight data. Besides data fitting, one possible use of the new distributions could be in sensitivity or robustness studies, for example as Bayesian prior distributions.
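The mass-redistribution idea rests on the Cauchy–Schlömilch identity ∫₀^∞ f((x − a/x)²) dx = ∫₀^∞ f(y²) dy for a > 0; the numerical check below illustrates it for one choice of f (how the paper turns the identity into new families on ℝ+ is not reproduced here).

```python
import numpy as np
from scipy.integrate import quad

a = 2.0
f = lambda t: np.exp(-t)  # any integrable f works in the identity

# Lower limit 1e-12 avoids dividing by zero; the omitted mass is negligible.
lhs, _ = quad(lambda x: f((x - a / x) ** 2), 1e-12, np.inf)
rhs, _ = quad(lambda y: f(y**2), 0.0, np.inf)
print(lhs, rhs)  # both ~ sqrt(pi)/2 ~ 0.8862
```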

14.
Among k independent two-parameter exponential distributions which have the common scale parameter, the lower extreme population (LEP) is the one with the smallest location parameter and the upper extreme population (UEP) is the one with the largest location parameter. Given a multiply type II censored sample from each of these k independent two-parameter exponential distributions, 14 estimators for the unknown location parameters and the common unknown scale parameter are considered. Fourteen simultaneous confidence intervals (SCIs) for all distances from the extreme populations (UEP and LEP) and from the UEP of these k independent exponential distributions under multiply type II censoring are proposed. The critical values are obtained by the Monte Carlo method. The optimal SCIs among the 14 methods are identified based on the criterion of minimum confidence length for various censoring schemes. Subset selection procedures for the extreme populations are also proposed, and two numerical examples are given for illustration.

15.
A number of robust methods for testing variability have been reported in previous literature. An examination of these procedures for a wide variety of populations confirms their general robustness. Shoemaker's improvement of the F test extends that test's use to a realistic variety of population shapes. However, a combination of the Brown–Forsythe and O'Brien methods based on testing kurtosis is shown to be conservative for a wide range of sample sizes and population distributions. The composite test is also shown to be more powerful in most conditions than other conservative procedures.
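As a point of reference, the Brown–Forsythe procedure is Levene's test computed from deviations about the group medians, which SciPy exposes directly; the sketch below shows that component only (the paper's composite rule, which switches on a kurtosis test, is not reproduced here), and the sample data are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=40)  # illustrative samples with
y = rng.normal(0.0, 2.0, size=40)  # genuinely different spreads

# Brown-Forsythe = Levene's test about the group medians.
stat, p = stats.levene(x, y, center="median")
```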

16.
The robustness properties of maximum likelihood (ML) estimators of the parameters of the exponential power and generalized t distributions are examined together. The well-known asymptotic properties of the ML estimators of the location, scale and added skewness parameters of these distributions are studied. The ML estimators of the location, scale and scale-variant (skewness) parameters are expressed as an iterative reweighting algorithm (IRA) that computes the estimates of these parameters simultaneously. Artificial data are generated to examine the performance of the IRA for the ML estimators. We compare the two distributions in terms of fitting performance on real data sets. Goodness-of-fit tests and information criteria confirm that robustness and fitting performance should be considered together as a key modeling issue, so as to extract the best information from real data sets.
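To illustrate the reweighting idea for the location parameter of an exponential power sample with shape p, the ML estimating equation Σ|x_i − μ|^(p−1) sign(x_i − μ) = 0 can be solved as a fixed point of weighted means with weights |x_i − μ|^(p−2); the sketch below assumes this standard symmetric-case setup, without the skewness parameter treated in the paper.

```python
import numpy as np

def irls_location(x, p=1.5, tol=1e-8, max_iter=200):
    """Location estimate for an exponential power sample with shape p:
    the ML equation sum |x - mu|^(p-1) * sign(x - mu) = 0 is solved as a
    fixed point of weighted means with weights |x - mu|^(p-2)."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    for _ in range(max_iter):
        w = np.clip(np.abs(x - mu), 1e-12, None) ** (p - 2.0)
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```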

17.
A technique for selection procedures, called sequential rejection, is investigated. It is shown that this technique can be applied to certain selection goals of the "all or nothing" type, i.e. "selecting a subset containing all good populations" or "selecting a subset containing no bad population". The analogy with existing sequential techniques in the general theory of simultaneous statistical inference is pointed out.

18.
Subset selection procedures based on ranks have been investigated by a number of authors previously. Their methods are based on ranking the samples from all the populations jointly. However, as was pointed out by Rizvi and Woodworth (1970), the procedures they proposed cannot control the probability of a correct selection over the entire parameter space. In this paper, we propose a subset selection procedure based on pairwise rather than joint ranking of the samples. It is shown that this procedure controls the probability of a correct selection over the entire parameter space. It is also shown that the Pitman efficiency of this nonparametric procedure relative to the multivariate t procedure of Gupta (1956, 1965) is the same as the Pitman efficiency of the Mann-Whitney-Wilcoxon test relative to the t-test.

19.
We describe two sequential sampling procedures for Bernoulli subset selection which were shown to exhibit desirable behavior for large-sample problems. These procedures have identical performance characteristics in terms of the number of observations taken from any one of the populations under investigation, but one of the procedures employs one-at-a-time sampling while the other allows observations to be taken in blocks during early stages of experimentation. In this paper, a simulation study of their behavior for small-sample cases (n ≤ 25) reveals that they can result in savings (sometimes substantial) in the expected total number of observations required to terminate the experiment as compared to single-stage procedures. Hence they may be quite useful to a practitioner for screening purposes when sampling is limited.

20.
The MaxEWMA chart has recently been introduced as an improvement over the standard EWMA chart for detecting changes in the mean and/or standard deviation of a normally distributed process. Although this chart was originally developed for normally distributed process data, its robustness to violations of the normality assumption is the central theme of this study. For data distributions with heavy tails or displaying strong skewness, the in-control average run lengths (ARLs) for the MaxEWMA chart are shown to be significantly shorter than expected. On the other hand, out-of-control ARLs are comparable to normal theory values for a variety of symmetric non-normal distributions. The MaxEWMA chart is not robust to skewness.
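As a sketch of the MaxEWMA construction: the subgroup mean and variance statistics are each transformed to be standard normal under the in-control normal model, smoothed with EWMAs, and combined through a maximum. The smoothing constant and control limit below are illustrative assumptions, not calibrated values.

```python
import numpy as np
from scipy.stats import chi2, norm

def max_ewma(data, mu0, sigma0, lam=0.1, ucl=3.0):
    """MaxEWMA statistics for the subgroups in the rows of `data`: the mean
    and variance statistics are each transformed to N(0,1) under the
    in-control normal model, EWMA-smoothed, and combined via a maximum."""
    m, n = data.shape
    z = (data.mean(axis=1) - mu0) / (sigma0 / np.sqrt(n))
    w = norm.ppf(chi2.cdf((n - 1) * data.var(axis=1, ddof=1) / sigma0**2, df=n - 1))
    u = v = 0.0
    stat = np.empty(m)
    for i in range(m):
        u = (1.0 - lam) * u + lam * z[i]
        v = (1.0 - lam) * v + lam * w[i]
        stat[i] = max(abs(u), abs(v))
    return stat, stat > ucl  # statistics and out-of-control signals
```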
