Similar Articles
20 similar articles found (search time: 421 ms)
1.
Selection of the “best” t out of k populations has been considered in the indifference zone formulation by Bechhofer (1954) and in the subset selection formulation by Carroll, Gupta and Huang (1975). The latter approach is used here to obtain conservative solutions for the goals of selecting (i) all the “good” or (ii) only “good” populations, where “good” means having a location parameter among the t largest. For the case of normal distributions with common unknown variance, tables are provided for implementing these procedures. For this case, simulation results also suggest that the procedure may not be too conservative.
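As a hedged illustration of the subset selection formulation referenced above, the sketch below implements a Gupta-type rule that retains every population whose sample mean falls within a tabulated distance of the largest sample mean; the constant h and the example values are hypothetical, not taken from the paper's tables.

```python
import numpy as np

def subset_select(means, s, n, h):
    """Gupta-style subset selection: retain population i if its sample
    mean is within h * s * sqrt(2/n) of the largest sample mean.
    `h` is a tabulated constant depending on k, t, and the confidence
    level; the value used below is purely illustrative."""
    means = np.asarray(means, dtype=float)
    d = h * s * np.sqrt(2.0 / n)
    return [i for i, m in enumerate(means) if m >= means.max() - d]

# Five sample means, pooled s.d. 1.0, n = 10 observations per population.
selected = subset_select([0.1, 0.3, 1.9, 2.0, 2.1], s=1.0, n=10, h=2.0)
print(selected)  # [2, 3, 4]
```

The subset shrinks as h decreases, trading confidence for selectivity.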

2.
Consider k (k ≥ 2) two-parameter Weibull populations. We want to select a subset of the populations, not exceeding m in size, such that the subset contains at least ? of the t best populations. We propose a procedure which uses either the maximum likelihood estimators or ‘simplified’ linear estimators of the parameters, based on type II censored data. The populations are ranked by comparing their reliabilities at a certain fixed time. In selected cases, the constants for the procedure are tabulated using Monte Carlo methods.
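The ranking step described above, comparing reliabilities at a fixed mission time, can be sketched as follows; the (shape, scale) pairs and the time t0 are hypothetical placeholders for the ML or simplified linear estimates the paper computes from type II censored data.

```python
import math

def weibull_reliability(t0, shape, scale):
    # R(t0) = exp(-(t0/scale)**shape) for a two-parameter Weibull.
    return math.exp(-((t0 / scale) ** shape))

def rank_populations(params, t0):
    """Rank Weibull populations (best first) by reliability at mission
    time t0, given (shape, scale) estimates for each population."""
    rel = [weibull_reliability(t0, a, b) for a, b in params]
    return sorted(range(len(rel)), key=lambda i: rel[i], reverse=True)

# Three populations with hypothetical (shape, scale) estimates:
print(rank_populations([(1.5, 100.0), (1.5, 150.0), (0.8, 100.0)], t0=50.0))
# [1, 0, 2]
```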

3.
We respond to criticism leveled at bootstrap confidence intervals for the correlation coefficient by recent authors by arguing that in the correlation coefficient case, non–standard methods should be employed. We propose two such methods. The first is a bootstrap coverage correction algorithm using iterated bootstrap techniques (Hall, 1986; Beran, 1987a; Hall and Martin, 1988) applied to ordinary percentile–method intervals (Efron, 1979), giving intervals with high coverage accuracy and stable lengths and endpoints. The simulation study carried out for this method gives results for sample sizes 8, 10, and 12 in three parent populations. The second technique involves the construction of percentile–t bootstrap confidence intervals for a transformed correlation coefficient, followed by an inversion of the transformation, to obtain “transformed percentile–t” intervals for the correlation coefficient. In particular, Fisher's z–transformation is used, and nonparametric delta method and jackknife variance estimates are used to Studentize the transformed correlation coefficient, with the jackknife–Studentized transformed percentile–t interval yielding the better coverage accuracy, in general. Percentile–t intervals constructed without first using the transformation perform very poorly, having large expected lengths and erratically fluctuating endpoints. The simulation study illustrating this technique gives results for sample sizes 10, 15 and 20 in four parent populations. Our techniques provide confidence intervals for the correlation coefficient which have good coverage accuracy (unlike ordinary percentile intervals), and stable lengths and endpoints (unlike ordinary percentile–t intervals).
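A minimal sketch of the "transformed percentile-t" idea: bootstrap on the Fisher z scale, Studentize, then invert back. For brevity it substitutes the normal-theory standard error 1/sqrt(n - 3) for the paper's delta-method or jackknife variance estimates, so it is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_z(r):
    return np.arctanh(r)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

def transformed_percentile_t_ci(x, y, B=999, alpha=0.05):
    """Percentile-t bootstrap interval built on the Fisher z scale and
    inverted back to the correlation scale. The normal-theory standard
    error 1/sqrt(n - 3) Studentizes z here, in place of the paper's
    delta-method or jackknife estimates."""
    n = len(x)
    z_hat = fisher_z(corr(x, y))
    se = 1.0 / np.sqrt(n - 3)
    t_stats = []
    for _ in range(B):
        idx = rng.integers(0, n, n)  # resample pairs with replacement
        t_stats.append((fisher_z(corr(x[idx], y[idx])) - z_hat) / se)
    q_lo, q_hi = np.quantile(t_stats, [alpha / 2, 1 - alpha / 2])
    # Invert the Studentized interval, then undo the z transformation.
    return np.tanh(z_hat - q_hi * se), np.tanh(z_hat - q_lo * se)

n = 30
x = rng.normal(size=n)
y = 0.6 * x + 0.8 * rng.normal(size=n)
lo, hi = transformed_percentile_t_ci(x, y)
print(round(lo, 3), round(hi, 3))
```

Inverting through tanh keeps the interval inside (-1, 1), which is one reason the transformed intervals avoid the erratic endpoints of untransformed percentile-t intervals.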

4.
In this article, we study the problem of selecting the best population from among several exponential populations based on interval censored samples using a Bayesian approach. A Bayes selection procedure and a curtailed Bayes selection procedure are derived. We show that these two Bayes selection procedures are equivalent. A numerical example is provided to illustrate the application of the two selection procedures. We also use Monte Carlo simulation to study the performance of the two selection procedures. The numerical results of the simulation study demonstrate that the curtailed Bayes selection procedure performs well because it can substantially reduce the duration of the life-test experiment.

5.
We consider outcome-adaptive phase II or phase II/III trials to identify the best treatment for further development. Unlike many other multi-arm multi-stage designs, we borrow approaches for best-arm identification in multi-armed bandit (MAB) problems developed for machine learning and adapt them for clinical trial purposes. Best-arm identification in MAB focuses on the error rate of identification at the end of the trial, but we are also interested in the cumulative benefit to trial patients, for example, the frequency of patients treated with the best treatment. In particular, we consider Top-Two Thompson Sampling (TTTS) and propose an acceleration approach for better performance in drug development scenarios, in which the sample size is much smaller than that considered in machine learning applications. We also propose a variant of TTTS (TTTS2) which is simpler, easier to implement, and has comparable performance in small-sample settings. An extensive simulation study was conducted to evaluate the performance of the proposed approach in multiple typical scenarios in drug development.
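A simplified sketch of plain TTTS for Bernoulli outcomes (not the paper's accelerated variant or TTTS2): sample from Beta posteriors, play the posterior-sample leader with probability beta, otherwise resample until a different arm, the challenger, tops the draw. Horizon and response rates below are hypothetical.

```python
import numpy as np

def ttts(probs, horizon=2000, beta=0.5, seed=1):
    """Top-Two Thompson Sampling with Beta(1,1) priors on Bernoulli arms.
    Returns the index of the arm with the highest posterior mean after
    `horizon` patients."""
    rng = np.random.default_rng(seed)
    k = len(probs)
    succ, fail = np.ones(k), np.ones(k)  # Beta(1,1) prior counts
    for _ in range(horizon):
        leader = int(np.argmax(rng.beta(succ, fail)))
        arm = leader
        if rng.random() > beta:
            for _ in range(100):  # bounded resampling for the challenger
                challenger = int(np.argmax(rng.beta(succ, fail)))
                if challenger != leader:
                    arm = challenger
                    break
        reward = rng.random() < probs[arm]
        succ[arm] += reward
        fail[arm] += 1 - reward
    return int(np.argmax(succ / (succ + fail)))

print(ttts([0.1, 0.5, 0.9]))
```

The beta parameter balances exploitation (cumulative benefit to trial patients) against exploration of the challenger, which drives the identification error rate down.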

6.
We revisit the problem of testing homoscedasticity (or, equality of variances) of several normal populations, which has applications in many statistical analyses, including design of experiments. The standard textbooks and widely used statistical packages propose a few popular tests, including Bartlett's test, Levene's test and a few adjustments of the latter. Apparently, the popularity of these tests has been based on a limited simulation study carried out a few decades ago. The traditional tests, including the classical likelihood ratio test (LRT), are asymptotic in nature, and hence do not perform well for small sample sizes. In this paper we propose a simple parametric bootstrap (PB) modification of the LRT, and compare it against the other popular tests as well as their PB versions in terms of size and power. Our comprehensive simulation study bursts some popularly held myths about the commonly used tests and sheds some new light on this important problem. Though most popular statistical software packages suggest using Bartlett's test, Levene's test, or modified Levene's test, among a few others, our extensive simulation study, carried out under the normal model as well as several non-normal models, clearly shows that a PB version of the modified Levene's test (which does not use the F-distribution cut-off point as its critical value) and Loh's exact test are the “best” performers in terms of overall size as well as power.
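The PB modification of the modified Levene's test can be sketched as follows: the null distribution of the Brown-Forsythe statistic (ANOVA F on absolute deviations from group medians) is simulated from normal samples sharing a pooled variance, instead of using the F cut-off. Sample sizes and the bootstrap budget below are illustrative.

```python
import numpy as np

def anova_f(groups):
    # One-way ANOVA F statistic, computed from scratch.
    grand = np.concatenate(groups).mean()
    k, N = len(groups), sum(len(g) for g in groups)
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ssb / (k - 1)) / (ssw / (N - k))

def levene_stat(samples):
    # Modified (Brown-Forsythe) Levene statistic: F on |x - median|.
    return anova_f([np.abs(s - np.median(s)) for s in samples])

def pb_levene(samples, B=300, seed=2):
    """Parametric-bootstrap p-value for the modified Levene test: the
    null distribution is simulated from normal samples with a pooled
    variance, rather than read off the F distribution."""
    rng = np.random.default_rng(seed)
    t_obs = levene_stat(samples)
    ns = [len(s) for s in samples]
    sd = np.sqrt(np.mean([s.var(ddof=1) for s in samples]))
    hits = sum(levene_stat([rng.normal(0, sd, n) for n in ns]) >= t_obs
               for _ in range(B))
    return (hits + 1) / (B + 1)

rng = np.random.default_rng(0)
hetero = [rng.normal(0, 1, 30), rng.normal(0, 5, 30), rng.normal(0, 1, 30)]
print(pb_levene(hetero))  # small p-value: variances clearly unequal
```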

7.
We present two new statistics for estimating the number of factors underlying a multivariate system. One of the two new methods, the original NUMFACT, has been used in high-profile environmental studies. The two new methods are first explained from a geometrical viewpoint. We then present an algebraic development and asymptotic cutoff points. Next we present a simulation study showing that for skewed data the new methods are typically superior to traditional methods, and for normally distributed data the new methods are competitive with the best of the traditional methods. We finally show how the methods compare using two environmental data sets.

8.
In the problem of selecting the best of k populations, Olkin, Sobel, and Tong (1976) have introduced the idea of estimating the probability of correct selection. In an attempt to improve on their estimator we consider an empirical Bayes approach. We compare the two estimators via analytic results and a simulation study.

9.
We are interested in comparing logistic regressions for several test treatments or populations with a logistic regression for a standard treatment or population. The research was motivated by some real-life problems, which are discussed as data examples. We propose a step-down likelihood ratio method for declaring differences between the test treatments or populations and the standard treatment or population. Competitors based on the sequentially rejective Bonferroni Wald statistic, the sequentially rejective exact Wald statistic and Reiersøl's statistic are also discussed. It is shown that the proposed method asymptotically controls the probability of type I error. A Monte Carlo simulation shows that the proposed method performs well for relatively small sample sizes, outperforming its competitors.

10.
Consider k (k ≥ 2) two-parameter Weibull populations. Using type II censored data, we want to select a best population. We propose procedures which can be used with maximum likelihood estimators or simplified linear estimators of the unknown parameters. The populations are ranked by comparing their reliabilities at a certain fixed time or by comparing their quantiles of a given order. In selected cases, the constants needed for the procedures are tabulated using Monte Carlo methods.

11.
We compare the selection procedure of Levin and Robbins [1981. Selecting the highest probability in binomial or multinomial trials. Proc. Nat. Acad. Sci. USA 78, 4663–4666.] with the procedure of Paulson [1994. Sequential procedures for selecting the best one of k Koopman–Darmois populations. Sequential Analysis 13, 207–220.] to identify the best of several binomial populations with sequential elimination of unlikely candidates. We point out situations in which the Levin–Robbins procedure dominates the Paulson procedure in terms of the duration of the experiment, the expected total number of observations, and the expected number of failures. Because the Levin–Robbins procedure is also easier to implement than Paulson's procedure and gives a tighter guarantee for the probability of correct selection, we conclude that it holds a competitive edge over Paulson's procedure.

12.
The problem of testing for equivalence in clinical trials is restated here in terms of the proper clinical hypotheses, and a simple classical frequentist significance test based on the central t distribution is derived. This method is then shown to be more powerful than the methods based on usual (shortest) and symmetric confidence intervals.

We begin by considering a noncentral t statistic and then consider three approximations to it. A simulation is used to compare actual test sizes to the nominal values in crossover and completely randomized designs. A central t approximation was the best. The power calculation is then shown to be based on a central t distribution, and a method is developed for obtaining the sample size required to obtain a specified power. For the approximations, a simulation compares actual powers to those obtained for the t distribution and confirms that the theoretical results are close to the actual powers.
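The closely related two one-sided tests (TOST) formulation, shown here as a hedged stand-in for the paper's central-t test, declares equivalence when both one-sided pooled-t statistics clear the critical value. The data, equivalence margin, and critical value below are illustrative.

```python
import numpy as np

def tost_equivalence(x, y, delta, t_crit):
    """Two one-sided t-tests (TOST) for equivalence of two means within
    +/- delta, using a pooled-variance central t statistic. `t_crit` is
    the one-sided critical value t_{alpha, nx+ny-2} from tables."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    t_lower = (x.mean() - y.mean() + delta) / se  # rejects H0: diff <= -delta
    t_upper = (x.mean() - y.mean() - delta) / se  # rejects H0: diff >= +delta
    return bool(t_lower > t_crit and t_upper < -t_crit)

x = [10.0, 10.1, 9.9, 10.05, 9.95]
y = [10.0, 10.1, 9.9, 10.05, 9.95]
# t_{0.05, 8} = 1.860 (one-sided, 8 degrees of freedom)
print(tost_equivalence(x, y, delta=0.5, t_crit=1.860))  # True
```

Shrinking the margin delta eventually flips the conclusion, since neither one-sided statistic can then clear the critical value.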

13.
In most practical situations to which the analysis of variance tests are applied, they do not supply the information that the experimenter aims at. If, for example, in one-way ANOVA the hypothesis is rejected in actual application of the F-test, the resulting conclusion that the true means θ1,…,θk are not all equal would by itself usually be insufficient to satisfy the experimenter. In fact, his problems would begin at this stage. The experimenter may desire to select the “best” population or a subset of the “good” populations; he may like to rank the populations in order of “goodness” or he may like to draw some other inferences about the parameters of interest.

The extensive literature on selection and ranking procedures depends heavily on the assumption of independence between populations (blocks, treatments, etc.) in the analysis of variance. In practical applications, it is desirable to drop this assumption of independence and consider cases more general than the normal.

In the present paper, we derive a method to construct optimal (in some sense) selection procedures that select a nonempty subset of the k populations containing the best population, as ranked in terms of the θi’s, that control the size of the selected subset and that maximize the minimum average probability of selecting the best. We also consider the usual selection procedures in one-way ANOVA based on generalized least squares estimates and apply the method to the two-way layout. Some examples are discussed and some results on comparisons with other procedures are also obtained.

14.
The problem of selecting the best population from among a finite number of populations in the presence of uncertainty is one faced in many scientific investigations, and it has been studied extensively. Many selection procedures have been derived for different selection goals. However, most of these selection procedures, being frequentist in nature, do not indicate how to incorporate the information in a particular sample to give a data-dependent measure of the correct selection achieved for that sample. They often assign the same decision and probability of correct selection to two different sample values, one of which may seem intuitively much more conclusive than the other. The methodology of conditional inference offers an approach which achieves both frequentist interpretability and a data-dependent measure of conclusiveness. By partitioning the sample space into a family of subsets, the achieved probability of correct selection is computed by conditioning on the subset in which the sample falls. In this paper, the partition considered is the so-called continuum partition, while the selection rules are both fixed-size and random-size subset selection rules. Under a monotone likelihood ratio distributional assumption, results on the least favourable configuration and alpha-correct selection are established. These results are not only useful in themselves, but are also used to design a new sequential procedure with elimination for selecting the best of k binomial populations. Comparisons between this new procedure and some other sequential selection procedures with regard to total expected sample size and some risk functions are carried out by simulation.

15.
Consider the problem of estimating the common location parameter of two exponential populations using record data when the scale parameters are unknown. We derive the maximum likelihood estimator (MLE), the modified maximum likelihood estimator (MMLE) and the uniformly minimum variance unbiased estimator (UMVUE) of the common location parameter. Further, we derive a general result for inadmissibility of an equivariant estimator under the scaled-squared error loss function. Using this result, we conclude that the MLE and the UMVUE are inadmissible and better estimators are provided. A simulation study is conducted for comparing the performances of various competing estimators.
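For complete samples (rather than the record data treated in the paper), the MLE of the common location of two two-parameter exponential populations is simply the smallest observation across both samples, with the scale MLEs given by mean exceedances over it; a sketch under that simplifying assumption:

```python
import numpy as np

def common_location_mle(x, y):
    """MLE of the common location theta of two two-parameter exponential
    samples: the smallest observation overall. Scale MLEs follow as the
    within-sample mean exceedances over theta. This is the complete-sample
    analogue of the record-data estimators derived in the paper."""
    theta = float(min(np.min(x), np.min(y)))
    return theta, float(np.mean(x) - theta), float(np.mean(y) - theta)

rng = np.random.default_rng(4)
x = 2.0 + rng.exponential(1.0, 50)  # location 2, scale 1
y = 2.0 + rng.exponential(2.0, 50)  # location 2, scale 2
theta, s1, s2 = common_location_mle(x, y)
print(round(theta, 3))  # slightly above the true location 2.0
```

The positive bias of theta_hat (it can never fall below the true location) is one motivation for the modified and unbiased estimators studied in the paper.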

16.
In this paper, we consider classification procedures for exponential populations when an order on the population parameters is known. We define and study the behavior of a classification rule which takes this additional information into account and outperforms the likelihood-ratio-based rule when two populations are considered. Moreover, we study the behavior of this rule in each of the two populations and compare the misclassification probabilities with the classical ones. Type II censoring, which is usual in practice, is considered and results are obtained. The performance for more than two populations is evaluated by simulation.

17.
In many practical situations, a statistical practitioner often faces the problem of classifying an object from one of several segmented (or screened) populations, where the segmentation was conducted by a set of screening variables. This paper addresses this problem, proposing and studying yet another optimal rule for classification with segmented populations. A class of q-dimensional rectangle-screened elliptically contoured (RSEC) distributions is considered for flexibly modeling the segmented populations. Based on the properties of the RSEC distributions, a parametric procedure for the segmented classification analysis (SCA) is proposed. This includes motivation for the SCA as well as some theoretical propositions regarding its optimal rule and properties. These properties allow us to establish other important results, including an efficient estimation of the rule by the Monte Carlo expectation–conditional maximization algorithm and an optimal variable selection procedure. Two numerical examples, one using a simulation study and the other a real dataset, are provided to illustrate and advocate the SCA procedure.

18.
In this paper, we propose a nonparametric test for homogeneity of overall variabilities for two multi-dimensional populations. Comparisons between the proposed nonparametric procedure and the asymptotic parametric procedure and a permutation test based on standardized generalized variances are made when the underlying populations are multivariate normal. We also study the performance of these test procedures when the underlying populations are non-normal. We observe that the nonparametric procedure and the permutation test based on standardized generalized variances are not as powerful as the asymptotic parametric test under normality. However, they are reliable and powerful tests for comparing overall variability under other multivariate distributions such as the multivariate Cauchy, the multivariate Pareto and the multivariate exponential distributions, even with small sample sizes. A Monte Carlo simulation study is used to evaluate the performance of the proposed procedures. An example from an educational study is used to illustrate the proposed nonparametric test.
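A hedged sketch of a permutation test on (log) generalized variances, offered as a simplified stand-in for the paper's standardized-generalized-variance statistic; sample sizes, dimension, and the permutation budget are illustrative.

```python
import numpy as np

def gen_var(x):
    # Generalized variance: determinant of the sample covariance matrix.
    return np.linalg.det(np.cov(x, rowvar=False))

def perm_test_gv(x, y, B=300, seed=3):
    """Permutation test for equality of the overall variability of two
    multivariate samples, using |log GV_x - log GV_y| as the statistic.
    Rows are observations; labels are permuted across the pooled sample."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([x, y])
    n = len(x)
    t_obs = abs(np.log(gen_var(x)) - np.log(gen_var(y)))
    hits = 0
    for _ in range(B):
        idx = rng.permutation(len(pooled))
        hits += abs(np.log(gen_var(pooled[idx[:n]])) -
                    np.log(gen_var(pooled[idx[n:]]))) >= t_obs
    return (hits + 1) / (B + 1)

rng = np.random.default_rng(5)
x = rng.normal(0, 1, size=(40, 3))
y = rng.normal(0, 3, size=(40, 3))
print(perm_test_gv(x, y))  # small p-value: overall variabilities differ
```

Working on the log scale makes the statistic symmetric in the two samples and stabilizes the heavy right tail of the determinant.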

19.
Consider k (k ≥ 2) Weibull populations. We derive a method of constructing optimal selection procedures to select a subset of the k populations containing the best population, which control the size of the selected subset and which maximise the minimum probability of making a correct selection. Procedures and results are derived for the case when sample sizes are unequal. Some tables and figures are given at the end of this paper.

20.