Similar Articles
1.
Selection from k independent populations of the t (< k) populations with the smallest scale parameters has been considered under the Indifference Zone approach by Bechhofer & Sobel (1954). The same problem has been considered under the Subset Selection approach by Gupta & Sobel (1962a) for the normal variances case and by Carroll, Gupta & Huang (1975) for the more general case of stochastically increasing distributions. This paper uses the Subset Selection approach to place confidence bounds on the probability of selecting all “good” populations, or only “good” populations, for the case of scale parameters, where a “good” population is defined to have one of the t smallest scale parameters. This is an extension of the location parameter results obtained by Bofinger & Mengersen (1986). Special results are obtained for the case of selecting normal populations based on variances, and the necessary tables are presented.
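To make the subset-selection setting concrete, here is a minimal sketch of a Gupta-type rule for retaining populations with small variances, not the confidence bounds of the paper itself; the design constant c and the sample data are illustrative placeholders.

```python
import numpy as np

def subset_select_smallest_variances(samples, c):
    """Retain population i if S_i^2 <= c * min_j S_j^2 (Gupta-type rule sketch).

    c >= 1 is a design constant that would normally be read from tables so that
    P(the truly best population is retained) >= P*; here it is only a placeholder.
    """
    s2 = np.array([np.var(np.asarray(x, dtype=float), ddof=1) for x in samples])
    return np.flatnonzero(s2 <= c * s2.min())

# Illustrative data: four normal samples with different standard deviations
rng = np.random.default_rng(3)
samples = [rng.normal(0.0, sd, 20) for sd in (1.0, 1.1, 2.0, 3.0)]
print(subset_select_smallest_variances(samples, c=2.0))   # c = 2.0 is a placeholder
```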

2.
In many engineering problems it is necessary to draw statistical inferences on the mean of a lognormal distribution based on a complete sample of observations. Statistical demonstration of mean time to repair (MTTR) is one example. Although optimum confidence intervals and hypothesis tests for the lognormal mean have been developed, they are difficult to use, requiring extensive tables and/or a computer. In this paper, simplified conservative methods for calculating confidence intervals or hypothesis tests for the lognormal mean are presented. Here, “conservative” refers to confidence intervals (hypothesis tests) whose infimum coverage probability (supremum probability of rejecting the null hypothesis taken over parameter values under the null hypothesis) equals the nominal level. The term “conservative” has obvious implications for confidence intervals (they are “wider” in some sense than their optimum or exact counterparts). Applying the term “conservative” to hypothesis tests should not be confusing if it is remembered that this implies that their equivalent confidence intervals are conservative. No implication of optimality is intended for these conservative procedures. It is emphasized that these are direct statistical inference methods for the lognormal mean, as opposed to the already well-known methods for the parameters of the underlying normal distribution. The method currently employed in MIL-STD-471A for statistical demonstration of MTTR is analyzed and compared to the new method in terms of asymptotic relative efficiency. The new methods are also compared to the optimum methods derived by Land (1971, 1973).
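For orientation, the sketch below computes Cox's classical large-sample interval for a lognormal mean from a complete sample. It is neither the conservative procedure of the paper nor Land's exact method, and the example repair times are invented.

```python
import numpy as np
from scipy import stats

def cox_lognormal_mean_ci(x, conf=0.95):
    """Approximate CI for the lognormal mean via Cox's method.

    Illustrative only: the classical large-sample approximation, not the
    conservative procedure described in the abstract.
    """
    y = np.log(np.asarray(x, dtype=float))    # underlying normal data
    n = len(y)
    ybar, s2 = y.mean(), y.var(ddof=1)
    # Point estimate and approximate s.e. of log(E[X]) = mu + sigma^2/2
    est = ybar + s2 / 2.0
    se = np.sqrt(s2 / n + s2**2 / (2.0 * (n - 1)))
    z = stats.norm.ppf(0.5 + conf / 2.0)
    return np.exp(est - z * se), np.exp(est + z * se)

# Example: repair times (hours), assumed lognormal (invented data)
times = np.array([0.5, 1.2, 0.8, 2.5, 1.1, 0.7, 3.0, 1.6])
print(cox_lognormal_mean_ci(times))
```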

3.
In experiments designed to estimate a binomial parameter, sample sizes are often calculated to ensure that the point estimate will be within a desired distance from the true value with sufficiently high probability. Since exact calculations resulting from the standard formulation of this problem can be difficult, “conservative” and/or normal approximations are frequently used. In this paper, some problems with the current formulation are given, and a modified criterion that leads to some improvement is proposed. A simple algorithm that calculates the exact sample sizes under the modified criterion is provided, and these sample sizes are compared to those given by the standard approximate criterion, as well as to an exact conservative Bayesian criterion.
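As an illustration of the kind of exact calculation involved, the sketch below searches for the smallest n whose worst-case coverage P(|p̂ − p| ≤ d) meets the nominal level under the standard criterion. The grid over p only approximates the exact worst case, and the paper's modified criterion is not implemented here.

```python
import numpy as np
from scipy.stats import binom

def coverage(n, p, d):
    """P(|X/n - p| <= d) for X ~ Binomial(n, p)."""
    lo = max(int(np.ceil(n * (p - d) - 1e-12)), 0)
    hi = min(int(np.floor(n * (p + d) + 1e-12)), n)
    if lo > hi:
        return 0.0
    return binom.cdf(hi, n, p) - (binom.cdf(lo - 1, n, p) if lo > 0 else 0.0)

def sample_size(d=0.1, level=0.95, grid=np.linspace(0.001, 0.5, 500)):
    """Smallest n whose worst-case coverage over the grid of p values is >= level."""
    n = 1
    while min(coverage(n, p, d) for p in grid) < level:
        n += 1
    return n

print(sample_size(d=0.1, level=0.95))   # compare with the normal-approximation value of about 96
```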

4.
We respond to criticism leveled at bootstrap confidence intervals for the correlation coefficient by recent authors by arguing that in the correlation coefficient case, non-standard methods should be employed. We propose two such methods. The first is a bootstrap coverage correction algorithm using iterated bootstrap techniques (Hall, 1986; Beran, 1987a; Hall and Martin, 1988) applied to ordinary percentile-method intervals (Efron, 1979), giving intervals with high coverage accuracy and stable lengths and endpoints. The simulation study carried out for this method gives results for sample sizes 8, 10, and 12 in three parent populations. The second technique involves the construction of percentile-t bootstrap confidence intervals for a transformed correlation coefficient, followed by an inversion of the transformation, to obtain “transformed percentile-t” intervals for the correlation coefficient. In particular, Fisher's z-transformation is used, and nonparametric delta method and jackknife variance estimates are used to Studentize the transformed correlation coefficient, with the jackknife-Studentized transformed percentile-t interval yielding the better coverage accuracy, in general. Percentile-t intervals constructed without first using the transformation perform very poorly, having large expected lengths and erratically fluctuating endpoints. The simulation study illustrating this technique gives results for sample sizes 10, 15 and 20 in four parent populations. Our techniques provide confidence intervals for the correlation coefficient which have good coverage accuracy (unlike ordinary percentile intervals), and stable lengths and endpoints (unlike ordinary percentile-t intervals).
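A minimal sketch of the second technique, a jackknife-Studentized transformed percentile-t interval, is given below. It assumes paired data in NumPy arrays, uses Fisher's z-transformation, and omits the guards against degenerate resamples that very small samples and a careful implementation would require; all names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

def z_stat(x, y):
    # Fisher's z-transformation of the sample correlation
    return np.arctanh(corr(x, y))

def jackknife_se(x, y, stat):
    """Jackknife standard error of stat(x, y) from leave-one-out replicates."""
    n = len(x)
    reps = np.array([stat(np.delete(x, i), np.delete(y, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))

def transformed_percentile_t_ci(x, y, B=2000, conf=0.95):
    """Percentile-t interval for z = atanh(r), inverted back to the r scale."""
    n = len(x)
    z_hat = z_stat(x, y)
    se_hat = jackknife_se(x, y, z_stat)
    t_stars = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)            # resample (x, y) pairs with replacement
        zb = z_stat(x[idx], y[idx])
        seb = jackknife_se(x[idx], y[idx], z_stat)
        t_stars[b] = (zb - z_hat) / seb        # Studentized bootstrap replicate
    hi_q, lo_q = np.quantile(t_stars, [(1 + conf) / 2, (1 - conf) / 2])
    return np.tanh(z_hat - hi_q * se_hat), np.tanh(z_hat - lo_q * se_hat)

# Illustrative use with correlated data of size 15
x = rng.normal(size=15)
y = 0.6 * x + 0.8 * rng.normal(size=15)
print(transformed_percentile_t_ci(x, y))
```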

5.
In most practical situations to which the analysis of variance tests are applied, they do not supply the information that the experimenter aims at. If, for example, in one-way ANOVA the hypothesis is rejected in actual application of the F-test, the resulting conclusion that the true means θ1,…,θk are not all equal would by itself usually be insufficient to satisfy the experimenter. In fact, his problems would begin at this stage. The experimenter may desire to select the “best” population or a subset of the “good” populations; he may like to rank the populations in order of “goodness” or he may like to draw some other inferences about the parameters of interest.

The extensive literature on selection and ranking procedures depends heavily on the use of independence between populations (blocks, treatments, etc.) in the analysis of variance. In practical applications, it is desirable to drop this assumption of independence and consider cases more general than the normal.

In the present paper, we derive a method to construct optimal (in some sense) selection procedures for selecting a nonempty subset of the k populations containing the best population, as ranked in terms of the θi’s, that controls the size of the selected subset and maximizes the minimum average probability of selecting the best. We also consider the usual selection procedures in one-way ANOVA based on the generalized least squares estimates and apply the method to the two-way layout case. Some examples are discussed and some results on comparisons with other procedures are also obtained.

6.
Theory has been developed to provide an optimum estimator of the population mean based on a “mean per unit” estimator and the estimated standard deviation, assuming that the form of the distribution, as well as its coefficient of variation (c.v.), is known. Theory has been extended to the case when an estimate of the c.v. is available from an independent sample drawn in the past; the case when the form of the distribution is not known is also discussed. It is shown that the relative efficiency of the estimator with respect to the “mean per unit” estimator is generally high for normal or near-normal populations. For log-normal populations, an increase in efficiency of about 17 percent can be achieved. The results have been illustrated with data from biological populations.

7.
Consider k independent observations Yi (i = 1, …, k) from two-parameter exponential populations Πi with location parameters μi and the same scale parameter σ. If the μi are ranked as μp(1) ≤ … ≤ μp(k), consider population Πp(1) as the “worst” population and Πp(k) as the “best” population (with some tagging so that p(1) and p(k) are well defined in the case of equalities). If the Yi are ranked as Yr(1) ≤ … ≤ Yr(k), we consider the procedure, “Select Πr(k) provided Yr(k) − Yr(k−1) is sufficiently large, so that Πr(k) is demonstrably better than the other populations.” A similar procedure is studied for selecting the “demonstrably worst” population.

8.
In this paper confidence sequences are used to construct sequential procedures for selecting the best of k populations having a common variance. These procedures are shown to provide substantial savings, particularly in the expected sample sizes of the inferior populations, over various procedures in the literature. A new “indifference zone” formulation is given for the correct selection probability requirement, and confidence sequences are also applied to construct sequential procedures for this new selection goal.

9.
The selection of the “best” of k populations and subsequent prediction for this population versus “the rest” is compared with the Newman-Keuls multiple comparison procedure for “separating” populations.

10.
Experiments in various countries with “last week” and “last month” reference periods for reporting of households’ food consumption have generally found that “week”-based estimates are higher. In India the National Sample Survey (NSS) has consistently found that “week”-based estimates are higher than month-based estimates for a majority of food item groups. But why are week-based estimates higher than month-based estimates? It has long been believed that the reason must be recall lapse, inherent in a long reporting period such as a month. But is household consumption of a habitually consumed item “recalled” in the same way as that of an item of infrequent consumption? And why doesn’t memory lapse cause over-reporting (over-assessment) as often as under-reporting? In this paper, we provide an alternative hypothesis, involving a “quantity floor effect” in reporting behavior, under which “week” may cause over-reporting for many items. We design a test to detect the effect postulated by this hypothesis and carry it out on NSS 68th round HCES data. The test results strongly suggest that our hypothesis provides a better explanation of the difference between week-based and month-based estimates than the recall lapse theory.

11.
In this paper we study the procedures of Dudewicz and Dalal (1975), and the modifications suggested by Rinott (1978), for selecting the largest mean from k normal populations with unknown variances. We look at the case k = 2 in detail, because there is an optimal allocation scheme here. We do not really allocate the total number of samples into two groups, but we estimate this optimal sample size as well, so as to guarantee that the probability of correct selection (written as P(CS)) is at least P*, where 1/2 < P* < 1. We prove that the procedure of Rinott is “asymptotically inefficient” (to be defined below) in the sense of Chow and Robbins (1965) for any k ≥ 2. Next, we propose two-stage procedures having all the properties of Rinott's procedure, together with the property of “asymptotic efficiency”, which is highly desirable.
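The sketch below outlines a Rinott-style two-stage procedure for selecting the largest mean, under the usual recipe of a first-stage variance estimate followed by a variance-dependent total sample size. The design constant h must come from published tables for the chosen k, n0 and P*; the value in the example is only a placeholder, and the populations are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def rinott_two_stage(populations, n0, h, delta):
    """Two-stage selection of the largest mean (Rinott-style sketch).

    populations : list of callables, populations[i](m) draws m observations
    n0          : first-stage sample size per population
    h           : design constant (normally taken from tables for k, n0, P*)
    delta       : indifference-zone width
    """
    means, totals = [], []
    for draw in populations:
        x1 = draw(n0)                                         # first-stage sample
        s2 = x1.var(ddof=1)
        n_i = max(n0, int(np.ceil((h / delta) ** 2 * s2)))    # total sample size
        x = np.concatenate([x1, draw(n_i - n0)])              # second-stage sample
        means.append(x.mean())
        totals.append(n_i)
    return int(np.argmax(means)), totals                      # selected population, sample sizes

# Illustrative use with k = 2 normal populations (h = 2.0 is only a placeholder)
pops = [lambda m, mu=mu, sd=sd: rng.normal(mu, sd, m)
        for mu, sd in [(0.0, 1.0), (0.5, 2.0)]]
print(rinott_two_stage(pops, n0=15, h=2.0, delta=0.5))
```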

12.
For two-parameter exponential populations with the same scale parameter (known or unknown), comparisons are made between the location parameters. This is done by constructing confidence intervals, which can then be used for selection procedures. Comparisons are made with a control, and with the (unknown) “best” or “worst” population. Emphasis is laid on finding approximations to the confidence coefficients so that calculations are simple and tables are not necessary. (Since we consider unequal sample sizes, tables for exact values would need to be extensive.)

13.
Two nonparametric classification rules for univariate populations are proposed, one in which the probability of correct classification is a specified number and the other in which one has to evaluate the probability of correct classification. In each case the classification is with respect to the Chernoff and Savage (1958) class of statistics, with possible specialization to populations having different location shifts and different changes of scale. An optimum property, namely the consistency of the classification procedure, is established for the second rule, when the distributions are either fixed or “near” in the Pitman sense and are tending to a common distribution at a specified rate. A measure of asymptotic efficiency is defined for the second rule, and its asymptotic efficiency based on the Chernoff-Savage class of statistics relative to the parametric competitors in the case of location shifts and scale changes is shown to be equal to the analogous Pitman efficiency.

14.
This paper describes a computer program GTEST for designing group testing experiments for classifying each member of a population of items as “good” or “defective”. The outcome of a test on a group of items is either “negative” (if all items in the group are good) or “positive” (if at least one of the items is defective, but it is not known which). GTEST is based on a Bayesian approach. At each stage, it attempts to maximize (nearly) the expected reduction in the “entropy”, which is a quantitative measure of the amount of uncertainty about the state of the items. The user controls the procedure through specification of the prior probabilities of being defective, restrictions on the construction of the test group, and priorities that are assigned to the items. The nominal prior probabilities can be modified adaptively, to reduce the sensitivity of the procedure to the proportion of defectives in the population.
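GTEST itself is not reproduced here, but the entropy criterion it (nearly) maximizes is easy to illustrate: with independent prior defect probabilities, the test outcome is a deterministic function of the item states, so the expected reduction in entropy from one group test equals the entropy of the outcome itself. The sketch below uses this fact; the function names and probabilities are illustrative, and the exhaustive search stands in for GTEST's more elaborate group construction.

```python
import numpy as np
from itertools import combinations

def binary_entropy(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

def expected_entropy_reduction(p_defective, group):
    """Expected reduction in entropy of the item states from testing `group`,
    assuming items are independent a priori. Because the outcome is determined
    by the item states, the reduction equals the entropy of the outcome:
    H(q), where q = P(all items in the group are good)."""
    q_neg = np.prod([1.0 - p_defective[i] for i in group])
    return binary_entropy(q_neg)

def best_group(p_defective, max_size):
    """Choose the next test group by exhaustive search over small group sizes."""
    items = range(len(p_defective))
    candidates = (g for r in range(1, max_size + 1)
                  for g in combinations(items, r))
    return max(candidates, key=lambda g: expected_entropy_reduction(p_defective, g))

p = [0.02, 0.05, 0.10, 0.30, 0.01, 0.02]   # illustrative prior defect probabilities
print(best_group(p, max_size=4))
```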

15.
A Bayesian formulation of the canonical form of the standard regression model is used to compare various Stein-type estimators and the ridge estimator of regression coefficients. A particular (“constant prior”) Stein-type estimator having the same pattern of shrinkage as the ridge estimator is recommended for use.

16.
In this article, sequential order statistics (SOS) coming from heterogeneous exponential distributions are considered. Maximum likelihood and Bayesian estimates of the parameters are derived on the basis of multiple SOS samples. Admissibility of the Bayes estimates is discussed and proved using the well-known Blyth’s lemma. Based on the available data, confidence intervals and highest posterior density credible sets are obtained. The generalized likelihood ratio test (GLRT) and the Bayesian tests (under the “0-K” loss function) are derived for testing homogeneity of the exponential populations. It is shown that the GLRT in this case is scale invariant. Some guidelines for deriving the uniformly most powerful scale-invariant test (if it exists) are also given.

17.
The name “multicollinearity” was first introduced by Ragnar Frisch [2]. In his original formulation the economic variables are supposed to be composed of two parts, a systematic or “true” component and an “error” component. There are at least two other cases in which the same type of indeterminacy of the estimates arises, for different reasons. Considerable attention has been given to this problem, which arises when some or all of the variables in a regression equation are highly inter-correlated and it becomes almost impossible to separate their influences and obtain the corresponding estimates of the regression coefficients. Consider a linear regression model

18.
Bayesian hierarchical models typically involve specifying prior distributions for one or more variance components. This is rather removed from the observed data, so specification based on expert knowledge can be difficult. While there are suggestions for “default” priors in the literature, often a conditionally conjugate inverse-gamma specification is used, despite documented drawbacks of this choice. The authors suggest “conservative” prior distributions for variance components, which deliberately give more weight to smaller values. These are appropriate for investigators who are skeptical about the presence of variability in the second-stage parameters (random effects) and want to particularly guard against inferring more structure than is really present. The suggested priors readily adapt to various hierarchical modelling settings, such as fitting smooth curves, modelling spatial variation and combining data from multiple sites.

19.
The need to establish the relative superiority of each treatment when compared to all the others, i.e., ordering the underlying populations according to some pre-specified criteria, often occurs in many applied research studies and technical/business problems. When populations are multivariate in nature, the problem may become quite difficult to deal with, especially in the case of small sample sizes or unreplicated designs. The purpose of this work is to propose a new approach for the problem of ranking several multivariate normal populations. It will be theoretically argued and numerically proved that our method controls the risk of false ranking classification under the hypothesis of population homogeneity, while under the nonhomogeneity alternatives we expect that the true rank can be estimated with satisfactory accuracy, especially for the “best” populations. Our simulation study also proved that the method is robust in the case of moderate deviations from multivariate normality. Finally, an application to a real case study in the field of life cycle assessment is proposed to highlight the practical relevance of the proposed methodology.

20.
Suppose that one wishes to rank k normal populations, each with common variance σ2 and unknown means θi (i=1,2,…,k). Independent samples of size n are taken from each population, and the sample averages are used to rank the populations. In this paper, we investigate what sample sizes, n, are necessary to attain “good” rankings under various loss functions. Section 1 discusses various loss functions and their interpretation. Section 2 gives the solution for a reasonable non-parametric loss function. Section 3 gives the solution for a reasonable parametric loss function.
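As a simple illustration of the question, the Monte Carlo sketch below estimates the probability that ranking by sample averages recovers the true order, i.e., performance under an all-or-nothing 0-1 loss; the parameter values are invented, and the specific loss functions studied in the paper are not implemented.

```python
import numpy as np

rng = np.random.default_rng(2)

def prob_correct_ranking(theta, sigma, n, reps=20_000):
    """Monte Carlo estimate of the probability that ranking k normal
    populations by their sample means recovers the true order."""
    theta = np.asarray(theta, dtype=float)
    true_order = np.argsort(theta)
    hits = 0
    for _ in range(reps):
        xbar = rng.normal(theta, sigma / np.sqrt(n))   # vector of sample averages
        if np.array_equal(np.argsort(xbar), true_order):
            hits += 1
    return hits / reps

# How large must n be for a "good" ranking under 0-1 loss? (illustrative values)
theta, sigma = [0.0, 0.5, 1.0, 2.0], 1.0
for n in (5, 20, 80):
    print(n, prob_correct_ranking(theta, sigma, n))
```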
