Similar Documents
20 similar documents found (search time: 453 ms)
1.
Consider a collection of k populations π1, π2,…,πk. The quality of the ith population is characterized by a real parameter θi, and the population is designated as superior or inferior depending on how much θi differs from θmax = max{θ1, θ2,…,θk}. From the set {π1, π2,…,πk}, we wish to select the subset of superior populations. In this paper we devise selection rules with the property that the selected set excludes all the inferior populations with probability at least 1 − α, where α is a specified number.
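The abstract defines its rules only abstractly. As a hedged illustration, here is a Gupta-style subset rule on sample means; the function name and the specific rule are assumptions, not the paper's, and in practice the threshold d would be calibrated so that inferior populations are excluded with probability at least 1 − α:

```python
import numpy as np

def select_superior(means, d):
    """Illustrative Gupta-style subset rule (not the paper's exact rule):
    select population i iff its sample mean is within d of the largest.
    The constant d would be calibrated so that, with probability >= 1 - alpha,
    every inferior population falls outside the selected subset."""
    means = np.asarray(means, dtype=float)
    return [i for i, m in enumerate(means) if m >= means.max() - d]

# Populations 0 and 2 are within d = 0.5 of the best; population 1 is not.
print(select_superior([5.1, 2.0, 4.8], d=0.5))  # -> [0, 2]
```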

2.
Suppose π1,…,πk are k normal populations, with πi having unknown mean μi and unknown variance σ². The population πi is called δ*-optimal (or good) if μi is within a specified amount δ* of the largest mean. A two-stage procedure is proposed which selects a subset of the k populations and guarantees with probability at least P* that the selected subset contains only δ*-optimal πi's. In addition to screening out non-good populations, the rule guarantees that a high proportion of sufficiently good πi's will be selected.

3.
Let π0, π1,…,πk be k + 1 independent populations. For i = 0, 1,…,k, πi has the density f(x|θi), where the (unknown) parameter θi belongs to an interval of the real line. Our goal is to select from π1,…,πk (experimental treatments) those populations, if any, that are better (suitably defined) than π0, the control population. A locally optimal rule is derived in the class of rules for which Pr(πi is selected) ≤ γi, i = 1,…,k, when θ0 = θ1 = ⋯ = θk. The criterion used for local optimality amounts to maximizing the efficiency, in a certain sense, of the rule in picking out the superior populations for specific configurations of θ = (θ0,…,θk) in a neighborhood of an equiparameter configuration. The general result is then applied to the following special cases: (a) normal means comparison — common known variance, (b) normal means comparison — common unknown variance, (c) gamma scale parameters comparison — known (unequal) shape parameters, and (d) comparison of regression slopes. In all these cases, the rule is obtained based on samples of unequal sizes.

4.
5.
The problem of simultaneously selecting two non-empty subsets, SL and SU, of k populations which contain the lower extreme population (LEP) and the upper extreme population (UEP), respectively, is considered. Unknown parameters θ1,…,θk characterize the populations π1,…,πk, and the populations associated with θ[1] = min θi and θ[k] = max θi are called the LEP and the UEP, respectively. It is assumed that the underlying distributions possess the monotone likelihood ratio property and that the prior distribution of θ = (θ1,…,θk) is exchangeable. The Bayes rule with respect to a general loss function is obtained. The Bayes rule with respect to a semi-additive and non-negative loss function is also determined, and it is shown to be minimax and admissible. When the selected subsets are required to be disjoint, it is shown that the Bayes rule with respect to a specific loss function can be obtained by comparing certain computable integrals. Application to normal distributions with unknown means θ1,…,θk and a common known variance is also considered.

6.
Let π1, π2,…,πp be p independent Poisson populations with means λ1,…,λp, respectively. Let {X1,…,Xp} denote the set of observations, where Xi is from πi. Suppose a subset of populations is selected using Gupta and Huang's (1975) selection rule, which selects πi if and only if Xi + 1 ≥ cX(1), where X(1) = max{X1,…,Xp} and 0 < c < 1. In this paper, the simultaneous estimation of the Poisson means associated with the selected populations is considered for the k-normalized squared error loss function. It is shown that the natural estimator is positively biased. A class of estimators that are better than the natural estimator is obtained by solving certain difference inequalities over the sample space. A class of estimators which dominate the UMVUE is also obtained. Monte Carlo simulations are used to assess the percentage improvements, and an application to a real-life example is discussed.
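The selection step is easy to state in code. A minimal sketch of Gupta and Huang's rule as quoted above, with the garbled inequality read as `>=` (an assumption about the original notation):

```python
import numpy as np

def gupta_huang_select(x, c):
    """Gupta and Huang's (1975) rule as quoted in the abstract: select
    population i iff X_i + 1 >= c * X_(1), where X_(1) = max(X), 0 < c < 1.
    (The '>=' is a reading of the garbled inequality.)"""
    x = np.asarray(x)
    return [i for i, xi in enumerate(x) if xi + 1 >= c * x.max()]

# The natural estimator of a selected mean is the observation itself;
# the selection step is what makes it positively biased.
print(gupta_huang_select([2, 3, 10], c=0.5))  # -> [2]
```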

7.
In drug development, non-inferiority tests are often employed to compare two independent binomial proportions. Many test statistics for non-inferiority are based on the frequentist framework; research on non-inferiority in the Bayesian framework is limited. In this paper, we suggest a new Bayesian index τ = P(π1 > π2 − Δ0 | X1, X2), where X1 and X2 denote binomial random variables with sample sizes n1 and n2 and parameters π1 and π2, respectively, and Δ0 > 0 is the non-inferiority margin. We present two calculation methods for τ: an approximate method that uses the normal approximation and an exact method that uses the exact posterior PDF, and we compare the approximate probability with the exact probability. Finally, we present the results of actual clinical trials to show the utility of the index τ. Copyright © 2013 John Wiley & Sons, Ltd.
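As a hedged sketch of the idea behind the exact method, τ can also be approximated by Monte Carlo from the two posterior distributions; the independent Beta(1, 1) priors below are an assumption for illustration (the paper itself works with the exact posterior PDF and a normal approximation):

```python
import numpy as np

def tau_mc(x1, n1, x2, n2, delta0, draws=200_000, seed=1):
    """Monte Carlo version of tau = P(pi1 > pi2 - delta0 | X1, X2) under
    independent Beta(1, 1) priors (a convenience assumption here; the paper
    uses the exact posterior PDF and a normal approximation instead)."""
    rng = np.random.default_rng(seed)
    p1 = rng.beta(x1 + 1, n1 - x1 + 1, draws)   # posterior draws of pi1
    p2 = rng.beta(x2 + 1, n2 - x2 + 1, draws)   # posterior draws of pi2
    return float(np.mean(p1 > p2 - delta0))

# 40/50 successes vs 42/50 with a 10% margin: tau close to 1 supports
# non-inferiority of treatment 1.
print(tau_mc(40, 50, 42, 50, delta0=0.10))
```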

8.
Let π1,…,πp be p independent normal populations with means μ1,…,μp and variances σ1²,…,σp², respectively. Let X(ni) be a simple random sample of size ni from πi, i = 1,…,p. Given the simple random samples X(n1),…,X(np) from π1,…,πp respectively, a test is proposed for the homogeneity of variances, H0: σ1² = ⋯ = σp², against the restricted alternative H1: σ1² ≥ ⋯ ≥ σp², with at least one strict inequality. Some properties of the test are discussed and critical values are tabulated.

9.
In this study, the problem of estimating the proportion πA of people bearing a sensitive attribute A is considered. Three dichotomous unrelated-question mechanisms, alternatives to the well-known Simmons model, are discussed, and their performance is evaluated taking into account both efficiency and respondent privacy protection. The variance of the estimators of πA is compared under equal levels of the confidentiality measures introduced by Lanke (1976) and Leysieffer and Warner (1976).
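For context, the Simmons unrelated-question model against which the three mechanisms are compared leads to a simple moment estimator: if each respondent answers the sensitive question with probability p and an innocuous question with known prevalence πY otherwise, the overall "yes" probability is λ = pπA + (1 − p)πY, which can be inverted for πA. A minimal sketch (the function name is ours):

```python
def simmons_estimate(yes_prop, p, pi_y):
    """Unrelated-question (Simmons) estimator of the sensitive proportion pi_A.
    yes_prop: observed proportion of 'yes' answers; p: probability a respondent
    is assigned the sensitive question; pi_y: known prevalence of 'yes' for the
    innocuous question. Inverts lambda = p*pi_A + (1 - p)*pi_Y."""
    return (yes_prop - (1 - p) * pi_y) / p

# If pi_A = 0.2, p = 0.7 and pi_Y = 0.5, the expected 'yes' rate is
# 0.7*0.2 + 0.3*0.5 = 0.29, and the estimator recovers 0.2.
print(simmons_estimate(0.29, 0.7, 0.5))
```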

10.
11.

Through simulation and regression, we study the alternative distribution of the likelihood ratio test in which the null hypothesis postulates that the data come from a normal distribution after a restricted Box–Cox transformation and the alternative hypothesis postulates that they come from a mixture of two normals after a restricted (possibly different) Box–Cox transformation. The number of observations in the sample is N. The standardized distance between components (after transformation) is D = (μ2 − μ1)/σ, where μ1 and μ2 are the component means and σ² is their common variance. One component contains a fraction π of the observations, and the other the fraction 1 − π. The simulation results demonstrate a dependence of power on the mixing proportion, with power decreasing as the mixing proportion moves away from 0.5. The alternative distribution appears to be a non-central chi-squared with approximately 2.48 + 10N^−0.75 degrees of freedom and non-centrality parameter 0.174N(D − 1.4)² × [π(1 − π)]. At least 900 observations are needed for 95% power in a 5% test when D = 2. For fixed values of D, power, and significance level, substantially more observations are necessary when π ≥ 0.90 or π ≤ 0.10. We give the estimated powers for the alternatives studied and a table of sample sizes needed for 50%, 80%, 90%, and 95% power.
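The fitted approximation quoted above can be turned into a power calculation directly. The formulas below are reconstructed from the abstract with the garbled glyphs read as minus signs, so they should be treated as an assumption rather than a verified transcription:

```python
from scipy.stats import chi2, ncx2

def lrt_power(N, D, pi, alpha=0.05):
    """Approximate power of the normal-vs-mixture LRT using the article's
    fitted formulas (reconstructed; treat as an assumption):
    df  = 2.48 + 10 * N**-0.75
    ncp = 0.174 * N * (D - 1.4)**2 * pi * (1 - pi)."""
    df = 2.48 + 10.0 * N ** -0.75
    ncp = 0.174 * N * (D - 1.4) ** 2 * pi * (1.0 - pi)
    crit = chi2.ppf(1.0 - alpha, df)     # null: central chi-squared cutoff
    return ncx2.sf(crit, df, ncp)        # alternative: non-central chi-squared

print(lrt_power(900, 2.0, 0.5))  # power increases with N and shrinks as pi leaves 0.5
```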

12.
We develop a Bayesian procedure for the problem of testing the homogeneity of r populations using r × s contingency tables. The posterior probability of the homogeneity null hypothesis is calculated using a mixed prior distribution. The methodology consists of choosing an appropriate value π0 for the mass assigned to the null and spreading the remainder, 1 − π0, over the alternative according to a density function. With this method, a theorem is obtained which shows when the same conclusion is reached from both the frequentist and the Bayesian points of view. A sufficient condition is also provided under which the p-value is less than a value α while the posterior probability is less than 0.5.
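A minimal instance of the mixed-prior recipe, reduced to two binomial samples (a 2 × 2 table) with independent Beta(1, 1) priors under the alternative; the priors and the reduction to two populations are assumptions made for illustration, not the paper's general r × s construction:

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def posterior_null_prob(x1, n1, x2, n2, pi0=0.5):
    """Posterior probability of homogeneity H0: p1 = p2 for two binomials:
    mass pi0 on H0 (common p ~ Beta(1,1)), 1 - pi0 spread over the
    alternative via independent Beta(1,1) priors on (p1, p2)."""
    # marginal likelihood under H0 (integrate out the common p)
    m0 = comb(n1, x1) * comb(n2, x2) * exp(
        log_beta(x1 + x2 + 1, n1 + n2 - x1 - x2 + 1))
    # marginal likelihood under H1 (integrate p1 and p2 independently)
    m1 = (comb(n1, x1) * exp(log_beta(x1 + 1, n1 - x1 + 1)) *
          comb(n2, x2) * exp(log_beta(x2 + 1, n2 - x2 + 1)))
    return pi0 * m0 / (pi0 * m0 + (1 - pi0) * m1)

print(posterior_null_prob(10, 20, 11, 20))  # similar proportions favour H0
```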

13.
14.
In this work we examine the ε-contamination model of prior densities Γ = {π : π = (1 − ε)π0(θ) + εq, q ∈ G}, where π0(θ) is the base elicited prior, q is a contamination belonging to some suitable class G, and ε reflects the amount of error in π0(θ). Various classes with shape and/or quantile constraints are analysed, and a robust posterior analysis is carried out. It turns out that quantile restrictions alone do not produce asymptotically rational behaviour, so it is unavoidable to introduce shape constraints as well. The conclusions are in line with those of O'Hagan and Berger (1988). Illustrations related to hypothesis testing and likelihood sets are given.

15.
A Bayes-type estimator is proposed for the worth parameter πi and for the treatment effect parameter ln πi in the Bradley–Terry model for paired comparisons. In contrast to current Bayes estimators, which require iterative numerical calculation, this estimator has a closed-form expression. The estimation technique is also extended to obtain estimators for the Luce multiple comparison model. An application of the technique to a 2³ factorial experiment with paired comparisons is presented.
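For context, the iterative calculation that a closed-form estimator avoids is typified by the standard maximum-likelihood MM iteration for Bradley–Terry worths. The sketch below shows that iterative baseline, not the paper's Bayes-type estimator:

```python
import numpy as np

def bradley_terry_mm(wins, iters=100):
    """Standard MM iteration for Bradley-Terry worths pi_i, where
    P(i beats j) = pi_i / (pi_i + pi_j) and wins[i][j] counts wins of i
    over j. Shown as the iterative baseline the closed-form estimator
    is contrasted with; this is NOT the article's estimator."""
    wins = np.asarray(wins, dtype=float)
    t = wins.shape[0]
    pi = np.ones(t)
    for _ in range(iters):
        for i in range(t):
            num = wins[i].sum()
            den = sum((wins[i, j] + wins[j, i]) / (pi[i] + pi[j])
                      for j in range(t) if j != i)
            pi[i] = num / den
        pi /= pi.sum()          # normalise the worths to sum to 1
    return pi

# Item 0 beats item 1 in 7 of 10 comparisons, so its worth converges to 0.7.
print(bradley_terry_mm([[0, 7], [3, 0]]))
```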

16.
A random vector X = (X1,…,Xn) is negatively associated if and only if for every pair of partitions X1 = (Xπ(1),…,Xπ(k)), X2 = (Xπ(k+1),…,Xπ(n)) of X, P(X1 ∈ A, X2 ∈ B) ≤ P(X1 ∈ A)P(X2 ∈ B) whenever A and B are open upper sets and π is any permutation of {1,…,n}. In this paper, we develop some concepts of negative dependence which are weaker than negative association but stronger than negative orthant dependence, obtained by requiring the above inequality to hold only for some upper sets A and B and applying the arguments in Shaked.

17.
18.
In this note, we focus on estimating the false discovery rate (FDR) of a multiple testing method with a common, non-random rejection threshold under a mixture model. We develop a new class of estimates of the FDR and prove that it is less conservatively biased than the estimate traditionally used. Numerical evidence is presented to show that the mean squared error (MSE) is also often smaller for the present class of estimates, especially in small-scale multiple testing. A similar, less conservatively biased class of estimates of the positive false discovery rate (pFDR) is then proposed. When modified using our estimate of the pFDR and applied to gene-expression data, Storey's q-value method identifies a few more significant genes than his original q-value method at certain thresholds. The BH-like method developed by thresholding our estimate of the FDR is shown to control the FDR in situations where the p-values have the same dependence structure as required by the BH method and, for lack of information about the proportion π0 of true null hypotheses, it is reasonable to assume that π0 is uniformly distributed over (0, 1).
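The "traditionally used" estimate that the new class is compared against is the Storey-type plug-in at a fixed threshold t. A minimal sketch of that baseline (the tuning choice λ = 0.5 is an assumption):

```python
import numpy as np

def fdr_estimate(pvals, t, lam=0.5):
    """Traditional Storey-type FDR estimate at a fixed threshold t
    (the baseline the article improves on, not its new estimator):
    pi0_hat = #{p > lam} / ((1 - lam) * m)
    FDR_hat = pi0_hat * m * t / #{p <= t}."""
    p = np.asarray(pvals)
    m = p.size
    pi0_hat = np.sum(p > lam) / ((1 - lam) * m)
    r = max(int(np.sum(p <= t)), 1)     # avoid division by zero
    return min(pi0_hat * m * t / r, 1.0)

pvals = [0.001, 0.003, 0.04, 0.2, 0.5, 0.62, 0.75, 0.91]
print(fdr_estimate(pvals, t=0.05))  # -> 0.1
```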

19.
The procedure of statistical discrimination is simple in theory but not so simple in practice. An observation x0, possibly multivariate, is to be classified into one of several populations π1,…,πk which have, respectively, the density functions f1(x),…,fk(x). The decision procedure is to evaluate each density function at x0 to see which function gives the largest value fi(x0), and then to declare that x0 belongs to the population corresponding to that largest value. If these densities can be assumed to be normal with equal covariance matrices, then the decision procedure is known as Fisher's linear discriminant function (LDF) method. In the case of unequal covariance matrices the procedure is called the quadratic discriminant function (QDF) method. If the densities cannot be assumed to be normal, then the LDF and QDF might not perform well. Several different procedures have appeared in the literature which offer discriminant procedures for nonnormal data. However, these procedures are generally difficult to use and are not readily available as canned statistical programs.

Another approach to discriminant analysis is to apply some sort of mathematical transformation to the samples so that their distribution is approximately normal, and then use the convenient LDF and QDF methods. One transformation that applies to all distributions equally well is the rank transformation. The result of this transformation is a very simple and easy-to-use procedure. The procedure is quite robust, as is evidenced by comparisons of the rank transform results with several published simulation studies.
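A univariate sketch of the rank-transformation idea, assuming no ties and equal priors (in which case Fisher's LDF with a pooled variance reduces to nearest-mean classification on the rank scale); the function names and the pooling scheme are ours, for illustration:

```python
import numpy as np

def rank_transform(values):
    """Replace each value by its rank in the pooled sample (no ties assumed)."""
    values = np.asarray(values)
    ranks = np.empty(len(values))
    ranks[np.argsort(values)] = np.arange(1, len(values) + 1)
    return ranks

def rank_ldf_classify(train1, train2, x0):
    """Pool both training samples with the new point, rank-transform, then
    classify: with equal priors and a pooled variance, the univariate LDF
    assigns x0 to the class with the nearer mean rank. Returns 1 or 2."""
    n1 = len(train1)
    ranks = rank_transform(np.concatenate([train1, train2, [x0]]))
    r1, r2, r0 = ranks[:n1], ranks[n1:-1], ranks[-1]
    return 1 if abs(r0 - r1.mean()) <= abs(r0 - r2.mean()) else 2

# Heavy-tailed sample: the outlier 50.0 is tamed by ranking before the LDF.
a = [1.0, 1.2, 0.8, 50.0]   # population 1
b = [5.0, 5.5, 6.0, 4.8]    # population 2
print(rank_ldf_classify(a, b, 1.1))  # -> 1
```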

20.
Simulated tempering (ST) is an established Markov chain Monte Carlo (MCMC) method for sampling from a multimodal density π(θ). Typically, ST involves introducing an auxiliary variable k taking values in a finite subset of [0, 1] and indexing a set of tempered distributions, say πk(θ) ∝ π(θ)^k. In this case, small values of k encourage better mixing, but samples from π are obtained only when the joint chain for (θ, k) reaches k = 1. However, the entire chain can be used to estimate expectations under π of functions of interest, provided that importance sampling (IS) weights are calculated. Unfortunately this method, which we call importance tempering (IT), can disappoint. This is partly because the most immediately obvious implementation is naïve and can lead to high-variance estimators. We derive a new optimal method for combining multiple IS estimators and prove that the resulting estimator has a highly desirable property related to the notion of effective sample size. We briefly report on the success of the optimal combination in two modelling scenarios requiring reversible-jump MCMC, where the naïve approach fails.
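For a single temperature k, the IS correction is immediate: a draw targeting πk(θ) ∝ π(θ)^k receives log-weight (1 − k) log π(θ), up to constants that self-normalisation absorbs. A hedged sketch of this single-temperature estimator together with the effective sample size of its weights (the paper's contribution, the optimal combination across temperatures, is not reproduced here):

```python
import numpy as np

def is_estimate_from_tempered(theta, log_pi, k, f):
    """Self-normalised IS estimate of E_pi[f(theta)] from draws targeting the
    tempered density pi_k(theta) ∝ pi(theta)**k: each draw's log-weight is
    (1 - k) * log pi(theta), up to a constant that cancels on normalising."""
    lw = (1.0 - k) * log_pi(theta)
    w = np.exp(lw - lw.max())            # stabilise before normalising
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)           # effective sample size of the weights
    return float(np.sum(w * f(theta))), ess

# Demo: pi = N(0,1) (unnormalised log-density) and k = 0.5, so that
# pi_k = N(0, 1/k) and we can draw from pi_k exactly instead of running ST.
rng = np.random.default_rng(2)
k = 0.5
theta = rng.normal(0.0, np.sqrt(1.0 / k), 100_000)
est, ess = is_estimate_from_tempered(theta, lambda t: -0.5 * t**2, k,
                                     lambda t: t**2)
print(est)   # close to E_pi[theta^2] = 1 for the standard normal target
```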
