Similar Documents
20 similar documents were found.
1.
In a two-treatment trial, a two-sided test is often used to reach a conclusion. Usually we are interested in a two-sided test because there is no prior preference between the two treatments and we want a three-decision framework. When the standard control is just as good as the new experimental treatment (which has the same toxicity and cost), we accept both treatments; only when the standard control is clearly better or worse than the new experimental treatment do we choose a single treatment. In this paper, we extend the concept of a two-sided test to the multiple-treatment trial, where three or more treatments are involved. The procedure turns out to be a subset selection procedure; however, its theoretical framework and performance requirement differ from those of existing subset selection procedures. Two procedures (exclusion and inclusion) are developed here for the case of normal data with equal known variance. If the sample size is large, they can also be applied with unknown variance, and to binomial data or survival data with random censoring.
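As a rough illustration of the subset-selection setting described above (not the paper's exclusion/inclusion procedures), the following sketch implements a classical Gupta-type rule for normal means with equal known variance: a treatment is retained if its sample mean lies within d·σ·√(2/n) of the largest sample mean. The constant d used here is an illustrative value; in practice it would be read from probability-of-correct-selection tables.

```python
import numpy as np

def gupta_subset(sample_means, sigma, n, d):
    """Retain treatment i if its sample mean is within d*sigma*sqrt(2/n)
    of the largest sample mean (classical Gupta-type subset selection)."""
    means = np.asarray(sample_means, dtype=float)
    threshold = means.max() - d * sigma * np.sqrt(2.0 / n)
    return np.nonzero(means >= threshold)[0]

# Example: 4 treatments with common known sigma and n observations each;
# each entry of xbars is a simulated sample mean.
rng = np.random.default_rng(0)
true_means = [0.0, 0.1, 0.3, 0.5]
sigma, n = 1.0, 25
xbars = [rng.normal(mu, sigma / np.sqrt(n)) for mu in true_means]
print(gupta_subset(xbars, sigma, n, d=2.16))  # d is an illustrative constant
```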

2.
Suppose there are k (≥ 2) treatments and each treatment is a Bernoulli process with binomial sampling. The problem of selecting a random-sized subset that contains the treatment with the largest survival probability (reliability, or probability of success) is considered. Drawing on ideas from both the classical approaches and the general Bayesian statistical decision approach, a new subset selection procedure is proposed for this problem in both balanced and unbalanced designs. Compared with the classical procedures, the proposed procedure yields a significantly smaller selected subset. Its optimality properties and performance are examined. Methods for selecting and fitting the priors, and the results of Monte Carlo simulations on selected important cases, are also studied.
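A minimal sketch of the general idea, under assumptions not taken from the paper (independent Beta priors, a Monte Carlo estimate of each treatment's posterior probability of being best, and a cumulative-probability cutoff p_star): it shows how a Bayesian rule can produce a random-sized subset, but it is not the paper's specific procedure.

```python
import numpy as np

def bayes_bernoulli_subset(successes, trials, p_star=0.95, a0=1.0, b0=1.0,
                           n_draws=100_000, seed=0):
    """Draw from independent Beta posteriors, estimate the posterior
    probability that each treatment is best, then keep the smallest subset
    whose total posterior probability of containing the best reaches p_star."""
    rng = np.random.default_rng(seed)
    s = np.asarray(successes, float)
    n = np.asarray(trials, float)
    draws = rng.beta(a0 + s, b0 + (n - s), size=(n_draws, len(s)))
    prob_best = np.bincount(draws.argmax(axis=1), minlength=len(s)) / n_draws
    order = np.argsort(prob_best)[::-1]           # most promising first
    cum = np.cumsum(prob_best[order])
    k = np.searchsorted(cum, p_star) + 1
    return order[:min(k, len(s))], prob_best

subset, probs = bayes_bernoulli_subset([18, 22, 27], [30, 30, 30])
print(subset, probs.round(3))
```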

3.
4.
Two questions of interest involving nonparametric multiple comparisons are considered. The first question concerns whether it is appropriate to use a multiple comparison procedure as a test of the equality of k treatments, and if it is, which procedure performs best as a test. Our results show that for smaller k values some multiple comparison procedures perform well as tests. The second question concerns whether a joint ranking or a separate ranking multiple comparison procedure performs better as a test and as a device for treatment separation. We find that the joint ranking procedure does slightly better as a test, but for treatment separation the answer depends on the situation.
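The distinction between joint and separate rankings can be made concrete with a small sketch (illustrative only, not the specific procedures compared in the paper): joint ranking pools all observations and compares mean ranks, while separate ranking re-ranks each pair of samples, as in pairwise Wilcoxon rank-sum tests.

```python
import numpy as np
from scipy.stats import rankdata, mannwhitneyu

def joint_rank_mean_ranks(samples):
    """Joint ranking: rank all observations together and return each
    treatment's mean rank (the basis of Kruskal-Wallis-type comparisons)."""
    pooled = np.concatenate(samples)
    ranks = rankdata(pooled)
    out, start = [], 0
    for s in samples:
        out.append(ranks[start:start + len(s)].mean())
        start += len(s)
    return out

def separate_rank_pvalues(samples):
    """Separate ranking: re-rank each pair of samples via Wilcoxon
    rank-sum (Mann-Whitney) tests."""
    k = len(samples)
    return {(i, j): mannwhitneyu(samples[i], samples[j]).pvalue
            for i in range(k) for j in range(i + 1, k)}

rng = np.random.default_rng(1)
data = [rng.normal(loc, 1.0, size=15) for loc in (0.0, 0.0, 1.0)]
print(joint_rank_mean_ranks(data))
print(separate_rank_pvalues(data))
```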

5.
In the search for the best of n candidates, two-stage procedures of the following type are in common use. In the first stage, weak candidates are removed, and the subset of promising candidates is then further examined. At the second stage, the best of the candidates in the subset is selected. In this article, optimization is not aimed at the parameter with the largest value but rather at the best performance of the selected candidates at Stage 2. Under a normal model, a new procedure based on posterior percentiles is derived using a Bayes approach, where nonsymmetric normal (proper and improper) priors are applied. Comparisons are made with two other procedures frequently used in selection decisions. The three procedures and their performances are illustrated with data from a recent recruitment process at a Midwestern university.
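A minimal sketch of percentile-based selection under a normal model, using a symmetric conjugate prior for simplicity (the paper works with nonsymmetric proper and improper priors): each candidate's parameter gets a normal posterior, and candidates are ranked by a lower posterior percentile rather than by the posterior mean.

```python
import numpy as np
from scipy.stats import norm

def posterior_percentile_select(xbars, se, prior_mean, prior_sd, q=0.25):
    """Conjugate normal posterior for each candidate's parameter;
    candidates are ranked by the q-th posterior percentile, a more
    conservative criterion than the posterior mean."""
    xbars, se = np.asarray(xbars, float), np.asarray(se, float)
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
    post_mean = post_var * (prior_mean / prior_sd**2 + xbars / se**2)
    percentiles = norm.ppf(q, loc=post_mean, scale=np.sqrt(post_var))
    return int(np.argmax(percentiles)), percentiles

# Hypothetical candidate scores with different standard errors.
best, pcts = posterior_percentile_select(
    xbars=[72.0, 75.0, 74.0], se=[2.0, 4.0, 1.5],
    prior_mean=70.0, prior_sd=5.0)
print(best, pcts.round(2))
```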

6.
The problem of selecting the best population from among a finite number of populations in the presence of uncertainty is one faced in many scientific investigations, and it has been studied extensively. Many selection procedures have been derived for different selection goals. However, most of these selection procedures, being frequentist in nature, do not indicate how to incorporate the information in a particular sample to give a data-dependent measure of the correct selection achieved for that sample. They often assign the same decision and probability of correct selection to two different sample values, one of which may seem intuitively much more conclusive than the other. The methodology of conditional inference offers an approach that achieves both frequentist interpretability and a data-dependent measure of conclusiveness. By partitioning the sample space into a family of subsets, the achieved probability of correct selection is computed by conditioning on the subset into which the sample falls. In this paper, the partition considered is the so-called continuum partition, and the selection rules are both fixed-size and random-size subset selection rules. Under the distributional assumption of a monotone likelihood ratio, results on the least favourable configuration and alpha-correct selection are established. These results are not only useful in themselves but are also used to design a new sequential procedure with elimination for selecting the best of k binomial populations. Comparisons between this new procedure and some other sequential selection procedures, with regard to total expected sample size and some risk functions, are carried out by simulation.

7.
In most practical situations to which the analysis of variance tests are applied, they do not supply the information that the experimenter aims at. If, for example, in one-way ANOVA the hypothesis is rejected in actual application of the F-test, the resulting conclusion that the true means θ1,…,θk are not all equal would by itself usually be insufficient to satisfy the experimenter. In fact, his problems would begin at this stage. The experimenter may desire to select the “best” population or a subset of the “good” populations; he may like to rank the populations in order of “goodness” or he may like to draw some other inferences about the parameters of interest.

The extensive literature on selection and ranking procedures depends heavily on the use of independence between populations (blocks, treatments, etc.) in the analysis of variance. In practical applications, it is desirable to drop this assumption of independence and to consider cases more general than the normal.

In the present paper, we derive a method for constructing optimal (in some sense) selection procedures that select a nonempty subset of the k populations containing the best population, as ranked in terms of the θi's, which control the size of the selected subset and maximize the minimum average probability of selecting the best. We also consider the usual selection procedures in one-way ANOVA based on generalized least squares estimates and apply the method to the two-way layout case. Some examples are discussed, and some results on comparisons with other procedures are also obtained.

8.
In this paper, the three-decision procedures proposed in the literature to classify p treatments as better or worse than one control for normal/symmetric probability models [Bohrer, Multiple three-decision rules for parametric signs, J. Amer. Statist. Assoc. 74 (1979), pp. 432–437; Bohrer et al., Multiple three-decision rules for factorial simple effects: Bonferroni wins again!, J. Amer. Statist. Assoc. 76 (1981), pp. 119–124; Liu, A multiple three-decision procedure for comparing several treatments with a control, Austral. J. Statist. 39 (1997), pp. 79–92; and Singh and Mishra, Classifying logistic populations using sample medians, J. Statist. Plann. Inference 137 (2007), pp. 1647–1657] are extended to asymmetric two-parameter exponential probability models, to classify p (p ≥ 1) treatments as better or worse than the best of q (q ≥ 1) control treatments in terms of location parameters. Critical constants required for the implementation of the proposed procedure are tabulated for some pre-specified values of the probability of no misclassification. The power function of the proposed procedure is also defined, and the common sample sizes necessary to guarantee various pre-specified power levels are tabulated. An optimal allocation scheme is also discussed. Finally, the implementation of the proposed methodology is demonstrated through a numerical example.

9.
In this article, a multiple three-decision procedure is proposed to classify p (≥ 2) treatments as better or worse than the best of q (≥ 2) control treatments in a one-way layout. Critical constants required for the implementation of the proposed procedure are tabulated for some pre-specified values of the probability of no misclassification. The power function of the proposed procedure is defined, and the common sample sizes necessary to guarantee various pre-specified power levels are tabulated under two optimal allocation schemes. Finally, the implementation of the proposed methodology is demonstrated through numerical examples based on real-life data.
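The basic three-decision rule can be sketched as follows (illustrative only; the critical constant c would in practice come from tables such as those described above, and the comparison here is against a single control value rather than the best of q controls).

```python
import numpy as np

def three_decision(treatment_means, control_mean, c):
    """Classify each treatment as 'better', 'worse', or 'no decision'
    relative to the control, using a symmetric critical constant c."""
    diffs = np.asarray(treatment_means, float) - control_mean
    return ['better' if d > c else 'worse' if d < -c else 'no decision'
            for d in diffs]

# Hypothetical sample means; c is an illustrative constant.
print(three_decision([5.2, 3.1, 4.0], control_mean=4.1, c=0.8))
```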

10.
In this paper, subset selection procedures are proposed for selecting all treatment populations with means larger than that of a control population. The treatments and control are assumed to have a multivariate normal distribution, and various covariance structures are considered. All of the proposed procedures are easily implemented using existing tables of the multivariate normal and multivariate t distributions, whereas some previously proposed procedures require extensive and unavailable tables for their implementation.

11.
In this article, a design-oriented two-stage multiple three-decision procedure is proposed to classify a set of normal populations with respect to a control under heteroscedasticity. Tables of percentage points and of the power-related design constants needed to implement the new two-stage procedure are given. For situations in which the second-stage sample is not available, a one-stage data analysis procedure is proposed. Classifying a treatment as better than the control when it is actually worse (and vice versa) is known as a type III error; both the two-stage and one-stage procedures control the type III error rate at a specified level. The relationship between the two-stage and one-stage procedures is discussed. Finally, the application of the proposed procedures is illustrated with an example.
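The two-stage idea under heteroscedasticity can be sketched with a generic Stein/Dudewicz-type rule (not the paper's exact procedure): first-stage sample variances determine how many additional observations each population needs. The design constant h and the indifference amount delta below are purely illustrative, not values from the paper's tables.

```python
import math
import numpy as np

def second_stage_sizes(first_stage_samples, h, delta):
    """Use first-stage sample variances to set total per-population sample
    sizes so that the eventual guarantee does not depend on the unknown,
    unequal variances; returns the additional observations needed."""
    sizes = []
    for x in first_stage_samples:
        n0 = len(x)
        s2 = np.var(x, ddof=1)                       # first-stage variance
        n_total = max(n0 + 1, math.ceil(s2 * (h / delta) ** 2))
        sizes.append(n_total - n0)
    return sizes

rng = np.random.default_rng(2)
stage1 = [rng.normal(0.0, sd, size=10) for sd in (1.0, 2.0, 0.5)]
print(second_stage_sizes(stage1, h=2.5, delta=1.0))  # illustrative h, delta
```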

12.
There are still open questions as to whether the existing step-up procedures for establishing superiority and equivalence of a new treatment compared with several standard treatments can strongly control the type I familywise error rate (FWE) at the designated level. In this paper we modify one of the three step-up procedures suggested by Dunnett and Tamhane (Statist. Med. 16 (1997) 2489–2506) and then prove that the modified procedure strongly controls the FWE. The method for evaluating the critical values of the modified procedure is also discussed. A simulation study reveals that the modified procedure is generally more powerful than the original procedure.
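For readers unfamiliar with step-up testing, the following sketch shows a generic Hochberg step-up rule, which controls the FWE under independence or positive dependence; it only illustrates the step-up idea and is not the Dunnett-Tamhane or modified procedure discussed above.

```python
import numpy as np

def hochberg_step_up(pvalues, alpha=0.05):
    """Generic step-up multiple test (Hochberg): examine ordered p-values
    from largest to smallest; once p_(i) <= alpha/(k - i + 1), reject that
    hypothesis and all hypotheses with smaller p-values."""
    p = np.asarray(pvalues, float)
    order = np.argsort(p)                  # ascending p-values
    k = len(p)
    reject = np.zeros(k, dtype=bool)
    for rank in range(k, 0, -1):           # start from the largest p-value
        idx = order[rank - 1]
        if p[idx] <= alpha / (k - rank + 1):
            reject[order[:rank]] = True
            break
    return reject

print(hochberg_step_up([0.001, 0.013, 0.04, 0.2]))
```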

13.
To compare several promising product designs, manufacturers must measure their performance under multiple environmental conditions. In many applications, a product design is considered to be seriously flawed if its performance is poor for any level of the environmental factor. For example, if a particular automobile battery design does not function well under temperature extremes, then a manufacturer may not want to put this design into production. Thus, this paper considers the measure of a product's quality to be its worst performance over the levels of the environmental factor. We develop statistical procedures to identify a (near-)optimal product design among a given set of product designs, i.e., the manufacturing design that maximizes the worst product performance over the levels of the environmental variable. We accomplish this with intuitive procedures based on the split-plot experimental design (and the randomized complete block design as a special case); split-plot designs have the essential structure of a product array and the practical convenience of local randomization. Two classes of statistical procedures are provided. In the first, the δ-best formulation of selection problems, we determine the number of replications of the basic split-plot design that are needed to guarantee, with a given confidence level, the selection of a product design whose minimum performance is within a specified amount, δ, of the performance of the optimal product design. In particular, if the difference between the quality of the best and second best manufacturing designs is δ or more, then the procedure guarantees that the best design will be selected with specified probability. For applications where a split-plot experiment that involves several product designs has been completed without the planning required by the δ-best formulation, we provide procedures to construct a ‘confidence subset’ of the manufacturing designs; the selected subset contains the optimal product design with a prespecified confidence level. The latter is called the subset selection formulation of selection problems. Examples are provided to illustrate the procedures.
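The worst-case criterion and the confidence-subset idea can be sketched directly from a designs-by-environments performance table (illustrative only; delta below is a plain threshold, not the tabled constant of the paper's procedures).

```python
import numpy as np

def select_maximin_design(performance, delta=None):
    """performance[i, j] = observed quality of design i at environmental
    level j. A design's score is its worst (minimum) performance over the
    environments. Returns the maximin design and, if delta is given, the
    set of designs whose worst-case score is within delta of the best."""
    perf = np.asarray(performance, float)
    worst = perf.min(axis=1)                 # worst performance per design
    best = int(np.argmax(worst))
    subset = None
    if delta is not None:
        subset = np.nonzero(worst >= worst.max() - delta)[0]
    return best, worst, subset

# Hypothetical data: three designs across three temperature levels.
perf = [[8.2, 7.9, 6.5],
        [7.5, 7.8, 7.4],
        [9.0, 5.9, 8.8]]
print(select_maximin_design(perf, delta=0.5))
```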

14.
This paper is concerned primarily with subset selection procedures based on the sample medians of logistic populations. A procedure is given which chooses a nonempty subset from among k independent logistic populations, having a common known variance, so that the population with the largest location parameter is contained in the subset with a pre-specified probability. The constants required to apply the median procedure with small sample sizes (≤ 19) are tabulated and can also be used to construct simultaneous confidence intervals. Asymptotic formulae are provided for application with larger sample sizes. It is shown that, in certain situations, rules based on the median are substantially more efficient than analogous procedures based either on sample means or on the sum of joint ranks.

15.
Procedures are derived for selecting, with controlled probability of error, (1) a subset of populations which contains all populations better than a dual probability/proportion standard and (2) a subset of populations which both contains all populations better than an upper probability/proportion standard and also contains no populations worse than a lower probability/proportion standard. The procedures are motivated by current investigations in the area of computer performance evaluation.

16.
In this paper we propose and study two sequential elimination procedures for selecting all new treatments better than a standard or control treatment. These procedures differ from those previously proposed in that we assume variances are unequal and unknown. Expressions for asymptotic expected sample sizes are given. Confidence intervals associated with the procedures are also discussed.

17.
In regression analysis, a best subset of regressors is usually selected by minimizing Mallows's Cp statistic or some other equivalent criterion, such as the Akaike information criterion or cross-validation. It is known that the resulting procedure suffers from a lack of consistency that can lead to a model with too many variables. For this reason, corrections have been proposed that yield consistent procedures. The object of this paper is to show that these corrected criteria, although asymptotically consistent, are usually too conservative for finite sample sizes. The paper also proposes a new correction of Mallows's statistic that yields better results. A simulation study is conducted which shows that the proposed criterion performs well in a variety of situations.
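A minimal sketch of the selection criterion being discussed: exhaustive best-subset search by Mallows's Cp, with the error variance estimated from the full model. The corrected criteria studied in the paper would replace the 2p penalty; the data here are simulated for illustration.

```python
import numpy as np
from itertools import combinations

def mallows_cp_search(X, y):
    """Exhaustive best-subset search using Mallows's Cp:
    Cp = RSS_p / sigma2_full - n + 2p, where sigma2_full comes from the
    full model and p counts fitted coefficients (including the intercept)."""
    n, k = X.shape
    ones = np.ones((n, 1))
    full = np.hstack([ones, X])
    rss_full = np.sum((y - full @ np.linalg.lstsq(full, y, rcond=None)[0]) ** 2)
    sigma2 = rss_full / (n - k - 1)
    best = (np.inf, None)
    for size in range(0, k + 1):
        for cols in combinations(range(k), size):
            Z = np.hstack([ones, X[:, list(cols)]]) if cols else ones
            rss = np.sum((y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]) ** 2)
            cp = rss / sigma2 - n + 2 * (len(cols) + 1)
            if cp < best[0]:
                best = (cp, cols)
    return best

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=60)
print(mallows_cp_search(X, y))
```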

18.
Comparison with a standard is a general multiple comparison problem, where each system is required to be compared with a single system, referred to as a ‘standard’, as well as with other alternative systems. Screening procedures specially designed for comparison with a standard have been proposed to find a subset that includes all the systems better than the standard in terms of the expected performance. Selection procedures are derived to determine the best system among a number of systems that are better than the standard, or to select the standard when it is equal to or better than the other alternatives. We develop new procedures for screening and selection through the use of two variance reduction techniques, common random numbers and control variates, which are particularly useful in the context of simulation experiments. Empirical results and a realistic example are also provided to compare our procedures with the existing ones.
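The common-random-numbers idea can be sketched with a toy simulation model (the model and parameters below are hypothetical): driving the standard and each alternative with the same seed within a replication makes their estimated difference far less variable than independent runs would.

```python
import numpy as np

def simulate_system(mean_service, seed, n_customers=1000):
    """Toy model: average unmet demand for a system. Using the same seed
    across systems means each system sees the same random demands, which
    is the common-random-numbers (CRN) idea."""
    rng = np.random.default_rng(seed)
    demand = rng.exponential(1.0, size=n_customers)   # shared randomness
    return np.mean(np.maximum(demand - mean_service, 0.0))

def compare_with_standard(standard_rate, alt_rates, n_reps=200):
    """Estimate each alternative's mean difference from the standard under
    CRN: within a replication, the same seed drives both systems."""
    diffs = np.empty((n_reps, len(alt_rates)))
    for r in range(n_reps):
        base = simulate_system(standard_rate, seed=r)
        for j, rate in enumerate(alt_rates):
            diffs[r, j] = simulate_system(rate, seed=r) - base
    return diffs.mean(axis=0), diffs.std(axis=0, ddof=1) / np.sqrt(n_reps)

print(compare_with_standard(1.0, [0.9, 1.1, 1.2]))
```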

19.
We restrict attention to a class of Bernoulli subset selection procedures which take observations one at a time and can be compared directly to the Gupta-Sobel single-stage procedure. For the criterion of minimizing the expected total number of observations required to terminate experimentation, we show that optimal sampling rules within this class are not of practical interest. We thus turn to procedures which, although not optimal, exhibit desirable behavior with regard to this criterion. A procedure which employs a modification of the so-called least-failures sampling rule is proposed, and is shown to possess many desirable properties among a restricted class of Bernoulli subset selection procedures. Within this class, it is optimal for minimizing the number of observations taken from populations excluded from consideration following a subset selection experiment, and asymptotically optimal for minimizing the expected total number of observations required. In addition, it can result in substantial savings in the expected total number of observations required as compared to a single-stage procedure; thus it may be desirable to a practitioner if sampling is costly or the sample size is limited.
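A sketch of the least-failures sampling idea (the paper's stopping and elimination rules are omitted, so the cap on observations below is purely illustrative): each new Bernoulli observation is taken from the population that currently shows the fewest failures.

```python
import numpy as np

def least_failures_sample(true_probs, max_obs=2000, seed=0):
    """One-at-a-time Bernoulli sampling in which the next observation is
    always taken from a population with the fewest failures so far."""
    rng = np.random.default_rng(seed)
    k = len(true_probs)
    successes = np.zeros(k, dtype=int)
    failures = np.zeros(k, dtype=int)
    for _ in range(max_obs):
        i = int(np.argmin(failures))          # population with fewest failures
        if rng.random() < true_probs[i]:
            successes[i] += 1
        else:
            failures[i] += 1
    return successes, failures

print(least_failures_sample([0.6, 0.7, 0.85]))
```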

20.
In this paper, we examine the potential determinants of foreign direct investment. For this purpose, we apply new exact subset selection procedures, which are based on idealized assumptions, as well as their possibly more plausible empirical counterparts, to an international data set to select the optimal set of predictors. Unlike the standard model selection procedures AIC and BIC, which penalize only the number of variables included in a model, and the subset selection procedures RIC and MRIC, which also consider the total number of available candidate variables, our data-specific procedures even take the correlation structure of all candidate variables into account. Our main focus is on a new procedure, which we have designed for situations where some of the potential predictors are certain to be included in the model. For a sample of 73 developing countries, this procedure selects only four variables, namely imports, net income from abroad, gross capital formation, and GDP per capita. An important secondary finding of our study is that the data-specific procedures, which are based on extensive simulations and are therefore very time-consuming, can be approximated reasonably well by the much simpler exact methods.
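The "some predictors are certain to be included" situation can be sketched with a plain BIC subset search in which a forced-in set is always kept and only the remaining candidates are searched over. The data, column indices, and the use of BIC are illustrative; the paper's data-specific procedures additionally account for the correlation structure of the candidates.

```python
import numpy as np
from itertools import combinations

def bic_subset_with_forced(X, y, forced, free):
    """Exhaustive subset search by BIC where the columns in `forced` are
    always included and only the columns in `free` are searched over."""
    n = len(y)
    ones = np.ones((n, 1))
    best = (np.inf, None)
    for size in range(len(free) + 1):
        for extra in combinations(free, size):
            cols = list(forced) + list(extra)
            Z = np.hstack([ones, X[:, cols]])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = np.sum((y - Z @ beta) ** 2)
            bic = n * np.log(rss / n) + np.log(n) * Z.shape[1]
            if bic < best[0]:
                best = (bic, tuple(extra))
    return best

rng = np.random.default_rng(4)
X = rng.normal(size=(80, 6))
y = 1.5 * X[:, 0] + 0.8 * X[:, 3] + rng.normal(size=80)
print(bic_subset_with_forced(X, y, forced=[0], free=[1, 2, 3, 4, 5]))
```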
