Similar Documents
20 similar documents found.
1.
In this paper, we consider the problem wherein one desires to estimate a linear combination of binomial probabilities from k > 2 independent populations. In particular, we create a new family of asymptotic confidence intervals, extending the approach taken by Beal [1987. Asymptotic confidence intervals for the difference between two binomial parameters for use with small samples. Biometrics 43, 941–950] in the two-sample case. One of our new intervals is shown to perform very well when compared to the best available intervals documented in Price and Bonett [2004. An improved confidence interval for a linear function of binomial proportions. Comput. Statist. Data Anal. 45, 449–456]. Furthermore, our interval estimation approach is quite general and could be extended to handle more complicated parametric functions, and even to other discrete probability models in stratified settings. We illustrate our new intervals using two real data examples, one from an ecology study and one from a multicenter clinical trial.
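As a concrete reference point, the sketch below computes the naive large-sample (Wald-type) interval for a linear combination of binomial proportions; it is not Beal's interval or the Price–Bonett adjustment, just the baseline those methods improve on. The function name and example data are illustrative.

```python
import numpy as np
from scipy.stats import norm

def wald_ci_linear_combo(x, n, c, level=0.95):
    """Wald-type CI for sum_i c_i * p_i from k independent binomials.

    x, n, c are arrays of successes, trial counts, and coefficients.
    This is the plain large-sample interval, not Beal's or the
    Price-Bonett adjustment.
    """
    x, n, c = map(np.asarray, (x, n, c))
    p_hat = x / n                                   # per-population estimates
    est = np.sum(c * p_hat)                         # plug-in estimate of the combination
    var = np.sum(c**2 * p_hat * (1 - p_hat) / n)    # independence: variances add
    half = norm.ppf(0.5 + level / 2) * np.sqrt(var)
    return est - half, est + half

# Example: difference of two proportions, c = (1, -1).
print(wald_ci_linear_combo([18, 12], [50, 50], [1, -1]))
```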

2.
In this paper we study the procedures of Dudewicz and Dalal (1975), and the modifications suggested by Rinott (1978), for selecting the largest mean from k normal populations with unknown variances. We look at the case k = 2 in detail, because an optimal allocation scheme exists there. We do not actually allocate the total number of samples between the two groups; rather, we estimate this optimal sample size as well, so as to guarantee a probability of correct selection (written P(CS)) of at least P*, 1/2 < P* < 1. We prove that Rinott's procedure is "asymptotically inefficient" (to be defined below) in the sense of Chow and Robbins (1965) for any k ≥ 2. Next, we propose two-stage procedures having all the properties of Rinott's procedure, together with the highly desirable property of "asymptotic efficiency".
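The following sketch shows the mechanics of a Rinott-style two-stage procedure, assuming the design constant h has already been computed for the desired P(CS); solving for h, and the optimal-allocation refinement the paper studies, are not shown. All numerical settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def rinott_two_stage(populations, n0, h, delta):
    """Rinott-style two-stage selection sketch.

    `populations` is a list of callables returning one observation each.
    `h` is the (assumed pre-computed) Rinott constant for the desired
    P(correct selection); its computation is not shown here.
    """
    means = []
    for draw in populations:
        first = np.array([draw() for _ in range(n0)])        # stage 1
        s2 = first.var(ddof=1)                               # variance estimate
        n_total = max(n0 + 1, int(np.ceil((h / delta) ** 2 * s2)))
        second = np.array([draw() for _ in range(n_total - n0)])  # stage 2
        means.append(np.concatenate([first, second]).mean())
    return int(np.argmax(means))   # index of the selected population

pops = [lambda: rng.normal(0.0, 2.0), lambda: rng.normal(0.5, 1.0)]
print("selected:", rinott_two_stage(pops, n0=20, h=2.5, delta=0.5))
```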

3.
Tong (1978) proposed an adaptive approach as an alternative to the classical indifference-zone formulation of the problems of ranking and selection. With a fixed pre-selected γ* (1/k < γ* < 1), his procedure calls for the termination of vector-at-a-time sampling when the estimated probability of a correct selection exceeds γ* for the first time. The purpose of this note is to show that, for the case of two normal populations with common known variance, the expected number of vector-observations required by Tong's procedure to terminate sampling approaches infinity as the two population means approach equality, for γ* ≥ 0.8413. It is conjectured that this phenomenon persists if the two largest of k ≥ 3 population means approach equality. Since in the typical ranking and selection setting it is usually assumed that the experimenter has no knowledge concerning the differences between the population means, an experimenter who uses Tong's procedure clearly does so at his own risk.
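A minimal simulation of the k = 2 case illustrates the phenomenon: the plug-in estimate of P(CS) after n pairs is Φ(|x̄₁ − x̄₂| / (σ√(2/n))), and Φ(1) ≈ 0.8413 is the threshold cited above. This is a hedged reconstruction of the rule for illustration, not Tong's original procedure code.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def tong_stop_time(mu1, mu2, sigma, gamma_star, max_n=5000):
    """Stop when the plug-in estimate of P(CS) first exceeds gamma*."""
    x = rng.normal(mu1, sigma, max_n)
    y = rng.normal(mu2, sigma, max_n)
    n = np.arange(1, max_n + 1)
    mean_diff = np.abs(np.cumsum(x) - np.cumsum(y)) / n
    est_pcs = norm.cdf(mean_diff / (sigma * np.sqrt(2.0 / n)))
    hit = np.nonzero(est_pcs >= gamma_star)[0]
    return int(hit[0]) + 1 if hit.size else max_n   # max_n means "did not stop"

# Equal means: median stopping time grows sharply near gamma* = Phi(1) = 0.8413.
for g in (0.70, 0.80, 0.8413):
    print(g, int(np.median([tong_stop_time(0, 0, 1, g) for _ in range(50)])))
```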

4.
Sequential analyses in clinical trials have ethical and economic advantages over fixed sample size methods. The sequential probability ratio test (SPRT) is a hypothesis testing procedure that evaluates data as they are collected. The original SPRT was developed by Wald for one-parameter families of distributions and was later extended by Bartlett to handle the case of nuisance parameters. However, Bartlett's SPRT requires independent and identically distributed observations. In this paper we show that Bartlett's SPRT can be applied in generalized linear model (GLM) contexts. We then propose an SPRT analysis methodology for a Poisson generalized linear mixed model (GLMM) that is suitable for our application: the design of a multicenter randomized clinical trial comparing two preventive treatments for surgical site infections. We validate the methodology with a simulation study that includes a comparison to Neyman–Pearson and Bayesian fixed sample size test designs and the Wald SPRT.
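For orientation, here is the classic one-parameter Wald SPRT (for a Poisson rate), the starting point that the Bartlett and GLMM extensions build on; the nuisance-parameter and mixed-model versions from the paper are not reproduced. Boundaries use Wald's standard approximations.

```python
import numpy as np

rng = np.random.default_rng(3)

def wald_sprt_poisson(xs, lam0, lam1, alpha=0.05, beta=0.05):
    """Classic Wald SPRT for H0: lambda = lam0 vs H1: lambda = lam1.

    A textbook sketch only; Bartlett's nuisance-parameter SPRT and the
    paper's Poisson GLMM methodology are not shown here.
    """
    a = np.log(beta / (1 - alpha))        # lower (accept H0) boundary
    b = np.log((1 - beta) / alpha)        # upper (reject H0) boundary
    llr = 0.0
    for t, x in enumerate(xs, start=1):
        # Poisson log-likelihood ratio increment (the x! terms cancel)
        llr += x * np.log(lam1 / lam0) - (lam1 - lam0)
        if llr <= a:
            return "accept H0", t
        if llr >= b:
            return "reject H0", t
    return "continue", len(xs)

print(wald_sprt_poisson(rng.poisson(2.0, 500), lam0=1.0, lam1=2.0))
```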

5.
A subset selection procedure is developed for selecting a subset containing the multinomial population that has the highest value of a certain linear combination of the multinomial cell probabilities; such a population is called the 'best'. The multivariate normal large-sample approximation to the multinomial distribution is used to derive expressions for the probability of a correct selection and for the threshold constant involved in the procedure. The procedure guarantees that the probability of a correct selection is at least a pre-assigned level. The proposed procedure is an extension of Gupta and Sobel's [14] selection procedure for binomials and of Bakir's [2] restrictive selection procedure for multinomials. One illustration of the procedure concerns population income mobility in four countries: Peru, Russia, South Africa and the USA. Analysis indicates that Russia and Peru fall in the selected subset containing the best population with respect to income mobility from poverty to a higher-income status. The procedure is also applied to data concerning the grade distribution of students in a certain freshman class.

6.
The problem of selecting the best population from among a finite number of populations in the presence of uncertainty arises in many scientific investigations and has been studied extensively; many selection procedures have been derived for different selection goals. However, most of these procedures, being frequentist in nature, do not indicate how to incorporate the information in a particular sample to give a data-dependent measure of the correct selection achieved for that sample. They often assign the same decision and probability of correct selection to two different sample values, one of which actually seems intuitively much more conclusive than the other. The methodology of conditional inference offers an approach that achieves both frequentist interpretability and a data-dependent measure of conclusiveness. By partitioning the sample space into a family of subsets, the achieved probability of correct selection is computed by conditioning on the subset in which the sample falls. In this paper, the partition considered is the so-called continuum partition, while the selection rules are both the fixed-size and random-size subset selection rules. Under the assumption of a monotone likelihood ratio, results on the least favourable configuration and alpha-correct selection are established. These results are not only useful in themselves, but are also used to design a new sequential procedure with elimination for selecting the best of k binomial populations. Comparisons between this new procedure and some other sequential selection procedures with regard to total expected sample size and some risk functions are carried out by simulations.

7.
We generalize the factor stochastic volatility (FSV) model of Pitt and Shephard [1999. Time varying covariances: a factor stochastic volatility approach (with discussion). In: Bernardo, J.M., Berger, J.O., Dawid, A.P., Smith, A.F.M. (Eds.), Bayesian Statistics, vol. 6, Oxford University Press, London, pp. 547–570] and Aguilar and West [2000. Bayesian dynamic factor models and variance matrix discounting for portfolio allocation. J. Business Econom. Statist. 18, 338–357] in two important directions. First, we make the FSV model more flexible and able to capture more general time-varying variance–covariance structures by letting the matrix of factor loadings be time-dependent. Second, we entertain FSV models with jumps in the volatilities of the common factors through So, Lam and Li's [1998. A stochastic volatility model with Markov switching. J. Business Econom. Statist. 16, 244–253] Markov switching stochastic volatility model. Novel Markov chain Monte Carlo algorithms are derived for both classes of models. We apply our methodology to two illustrative situations: daily exchange rate returns [Aguilar, O., West, M., 2000. Bayesian dynamic factor models and variance matrix discounting for portfolio allocation. J. Business Econom. Statist. 18, 338–357] and Latin American stock returns [Lopes, H.F., Migon, H.S., 2002. Comovements and contagion in emergent markets: stock indexes volatilities. In: Gatsonis, C., Kass, R.E., Carriquiry, A.L., Gelman, A., Verdinelli, I., Pauler, D., Higdon, D. (Eds.), Case Studies in Bayesian Statistics, vol. 6, pp. 287–302].
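A data-generating sketch of a basic one-factor FSV model (constant loadings, no jumps) may help fix notation; the paper's time-varying loadings, Markov-switching jump component, and MCMC samplers are all omitted. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_fsv(T, beta, mu=-1.0, phi=0.95, sig_eta=0.2, sig_eps=0.1):
    """Simulate a one-factor FSV model with constant loadings.

    f_t ~ N(0, exp(h_t)) with h_t an AR(1) log-volatility;
    y_t = beta * f_t + eps_t. Data-generating process only.
    """
    h = np.empty(T)
    h[0] = mu
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sig_eta * rng.normal()
    f = np.exp(h / 2) * rng.normal(size=T)                        # common factor
    y = np.outer(f, beta) + sig_eps * rng.normal(size=(T, len(beta)))
    return y, f, h

y, f, h = simulate_fsv(1000, beta=np.array([1.0, 0.8, 0.5]))
print(y.shape, np.corrcoef(y.T)[0, 1].round(2))   # factor-induced comovement
```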

8.
A large-sample approximation of the least favorable configuration for a fixed-sample-size selection procedure for negative binomial populations is proposed. A normal approximation of the selection procedure is also presented. Optimal sample sizes required to be drawn from each population, and bounds for the sample sizes, are tabulated. Sample sizes obtained using the approximate least favorable configuration are compared with those obtained using the exact least favorable configuration. An alternative form of the normal approximation to the probability of correct selection is also presented. The relation between the required sample size and the number of populations involved is studied.

9.
This paper presents a selection procedure that combines Bechhofer's indifference zone selection and Gupta's subset selection approaches by using a preference threshold. For normal populations with common known variance, a subset is selected of all populations whose sample sums lie within this threshold of the largest sample sum. We derive the minimal necessary sample size and the value of the preference threshold that satisfy two probability requirements for correct selection, one related to indifference zone selection, the other to subset selection. Simulation studies are used to illustrate the method.
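The selection rule itself is simple to state in code: keep every population whose sample sum lies within the preference threshold d of the largest. Choosing n and d to meet the two probability requirements, which is the paper's contribution, is not shown; names and data are illustrative.

```python
import numpy as np

def threshold_subset(sample_sums, d):
    """Select every population whose sample sum is within d of the largest."""
    sums = np.asarray(sample_sums, dtype=float)
    return np.nonzero(sums >= sums.max() - d)[0]

print(threshold_subset([48.2, 51.9, 52.4, 40.1], d=2.0))   # -> populations 1 and 2
```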

10.
Hedayat et al. [1988a. Sampling plans excluding contiguous units. J. Statist. Plann. Inference 19, 159–170; 1988b. Designs in survey sampling avoiding contiguous units. In: Krishnaiah, P.R., Rao, C.R. (Eds.). Handbook of Statistics, vol. 6. Elsevier, Amsterdam, pp. 575–583] first introduced balanced sampling designs for the exclusion of contiguous units. Sampling plans that excluded the selection of contiguous units within a given sample, while maintaining a constant second-order inclusion probability for non-contiguous units, were investigated for finite populations of N units arranged in a circular, one-dimensional ordering. There remain many open questions about the existence of such plans and their extension to plans excluding adjacent units. We present new generation techniques and new balanced sampling plans for the exclusion of adjacent units under finite, one-dimensional, circularly and linearly ordered populations.
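A small checker makes the two defining properties concrete for a circular ordering: no selected sample contains contiguous units, and all non-contiguous pairs share one second-order inclusion probability. The N = 5, n = 2 plan below is a toy example, not one of the paper's new constructions.

```python
from itertools import combinations

def check_bsa(plan, N):
    """Check a circular plan of equally likely samples (tuples of unit labels):
    no sample holds contiguous units, and all non-contiguous pairs share one
    second-order inclusion probability.
    """
    def contiguous(i, j):
        return (abs(i - j) % N) in (1, N - 1)    # neighbors on the circle
    pi2 = {pair: 0.0 for pair in combinations(range(N), 2)}
    for s in plan:
        for pair in combinations(sorted(s), 2):
            if contiguous(*pair):
                return False, None               # plan fails the exclusion property
            pi2[pair] += 1 / len(plan)
    probs = {p for pair, p in pi2.items() if not contiguous(*pair)}
    return len(probs) == 1, probs                # constant pi_ij for non-contiguous pairs?

# A tiny plan for N = 5, n = 2: all distance-2 pairs on the circle.
plan = [(0, 2), (1, 3), (2, 4), (0, 3), (1, 4)]
print(check_bsa(plan, 5))                        # (True, {0.2})
```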

11.
The methodology for deriving the exact confidence coefficient of some confidence intervals for a binomial proportion is proposed in Wang [2007. Exact confidence coefficients of confidence intervals for a binomial proportion. Statist. Sinica 17, 361–368]. The methodology requires two conditions of confidence intervals: the monotone boundary property and the full coverage property. In this paper, we show that for some confidence intervals for a binomial proportion, the two properties hold for any sample size. Based on the results presented in this paper, the procedure in Wang [2007] can be directly used to calculate the exact confidence coefficients of these confidence intervals for any fixed sample size.
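The two properties matter because they let the confidence coefficient, the infimum of the coverage probability over p, be computed exactly. The sketch below approximates that infimum on a grid for the Wilson interval; Wang's method locates the infimum exactly rather than by grid search.

```python
import numpy as np
from scipy.stats import norm, binom

def wilson_ci(x, n, level=0.95):
    """Wilson score interval for a binomial proportion (vectorized in x)."""
    z = norm.ppf(0.5 + level / 2)
    p = x / n
    den = 1 + z**2 / n
    mid = (p + z**2 / (2 * n)) / den
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / den
    return mid - half, mid + half

def coverage(n, p, ci=wilson_ci, level=0.95):
    """Exact coverage at p: sum the binomial pmf over x whose interval covers p."""
    xs = np.arange(n + 1)
    lo, hi = ci(xs, n, level)
    return binom.pmf(xs[(lo <= p) & (p <= hi)], n, p).sum()

# Grid approximation to the confidence coefficient (infimum over p).
grid = np.linspace(0.001, 0.999, 2000)
print(min(coverage(30, p) for p in grid))
```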

12.
The situation where k populations are partitioned into one inferior group and one superior group is considered. The statistical problem is to select a random-size subset of superior populations while trying to avoid including any inferior populations. A selection procedure is assumed to satisfy the condition that the probability of selecting at least one superior population is bounded below by P1 < 1. The performance of a procedure is measured by the probability of including an inferior population. The asymptotic performance, as k → ∞, of Gupta's traditional maximum-type procedure ψG is considered in the location model. For normally distributed populations, ψG turns out to be asymptotically optimal, provided the size of the inferior group does not become infinitely larger than the size of the superior group.

13.
In this research, we employ Bayesian inference and stochastic dynamic programming approaches to select the binomial population with the largest probability of success from n independent Bernoulli populations, based upon the sample information. To do this, we first define a probability measure, called belief, for the event of selecting the best population. Second, we explain how to model the selection problem using Bayesian inference. Third, we clarify the model by which we improve the beliefs and prove that it converges to selecting the best population. In this iterative approach, we update the beliefs by taking new observations on the populations under study, using Bayes' rule and the prior beliefs. Fourth, we model the problem of making the decision in a predetermined number of decision stages using the stochastic dynamic programming approach. Finally, in order to explain and evaluate the proposed methodology, we provide two numerical examples and a comparison study by simulation. The results of the comparison study show that the proposed method performs better than that of Levin and Robbins [1981. Selecting the highest probability in binomial or multinomial trials. Proc. Nat. Acad. Sci. USA 78, 4663–4666] for some values of the estimated probability of making a correct selection.
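A hedged sketch of the belief-updating step: with independent Beta posteriors (uniform priors assumed here), the "belief" that each population is best can be approximated by Monte Carlo. The paper's exact belief definition and its dynamic-programming stage structure are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

def posterior_best_beliefs(alpha, beta, draws=20_000):
    """Monte Carlo probability that each population has the largest success
    probability, under independent Beta(alpha_i, beta_i) posteriors.
    """
    samples = rng.beta(alpha[:, None], beta[:, None], size=(len(alpha), draws))
    return np.bincount(samples.argmax(axis=0), minlength=len(alpha)) / draws

# Update beliefs after observing (successes, failures) per population.
data = np.array([[12, 8], [15, 5], [9, 11]])
alpha, beta = 1 + data[:, 0], 1 + data[:, 1]    # conjugate Beta update
print(posterior_best_beliefs(alpha, beta))
```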

14.
For a discrete-time, second-order stationary process, the Levinson–Durbin recursion is used to determine best-fitting one-step-ahead linear autoregressive predictors of successively increasing order, best in the sense of minimizing the mean square error. Whittle [1963. On the fitting of multivariate autoregressions, and the approximate canonical factorization of a spectral density matrix. Biometrika 50, 129–134] generalized the recursion to the case of vector autoregressive processes. The recursion defines what is termed a Levinson–Durbin–Whittle sequence, and a generalized Levinson–Durbin–Whittle sequence is also defined. Generalized Levinson–Durbin–Whittle sequences are shown to satisfy summation formulas which generalize summation formulas satisfied by binomial coefficients. The formulas can be expressed in terms of the partial correlation sequence, and they assume simple forms for time-reversible processes. The results extend comparable formulas obtained in Shaman [2007. Generalized Levinson–Durbin sequences, binomial coefficients and autoregressive estimation. Working paper] for univariate processes.
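For reference, the univariate Levinson–Durbin recursion is sketched below; it returns the AR coefficients, the partial correlation (reflection) sequence in which the summation formulas are expressed, and the innovation variance. Whittle's multivariate version replaces the scalar operations with matrix ones and is not shown.

```python
import numpy as np

def levinson_durbin(r):
    """Levinson-Durbin recursion, univariate case.

    Given autocovariances r[0..p], returns the order-p AR coefficients,
    the partial correlations, and the final one-step prediction variance.
    """
    p = len(r) - 1
    a = np.zeros(p + 1)            # a[1..m] hold the current AR coefficients
    pacf, v = np.zeros(p), r[0]    # partial correlations, innovation variance
    for m in range(1, p + 1):
        k = (r[m] - np.dot(a[1:m], r[m - 1:0:-1])) / v   # reflection coefficient
        a[1:m] = a[1:m] - k * a[m - 1:0:-1]              # update lower-order terms
        a[m] = k
        pacf[m - 1] = k
        v *= (1 - k * k)           # one-step MSE shrinks at each order
    return a[1:], pacf, v

# Autocovariances of an AR(1) with phi = 0.6 and unit innovation variance.
phi = 0.6
r = np.array([phi**k for k in range(4)]) / (1 - phi**2)
print(levinson_durbin(r))          # AR coefficients ~ [0.6, 0, 0]
```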

15.
This paper examines the design and performance of sequential experiments where extensive switching is undesirable. Given an objective function to optimize by sampling between Bernoulli populations, two different models are considered. The constraint model restricts the maximum number of switches possible, while the cost model introduces a charge for each switch. Optimal allocation procedures and a new "hyperopic" procedure are discussed and their behavior examined. For the cost model, if one views the costs as control variables then the optimal allocation procedures yield the optimal tradeoff of expected switches vs. expected value of the objective function.

16.
The usual formulation of subset selection due to Gupta (1956) requires a minimum guaranteed probability of a correct selection. The modified formulation of the present paper includes an additional requirement: that the expected number of nonbest populations in the selected subset be bounded above by a specified constant when the best and the next-best populations are 'sufficiently' apart. A class of procedures is defined and the determination of the minimum sample size required is discussed. The specific problems discussed for normal populations include selection in terms of means and variances, and selection in terms of treatment effects in a two-way layout.

17.
Hedayat et al. [1988a. Sampling plans excluding contiguous units. J. Statist. Plann. Inference 19, 159–170; 1988b. Designs in survey sampling avoiding contiguous units. In: Krishnaiah, P.R., Rao, C.R. (Eds.), Handbook of Statistics, vol. 6. Elsevier, Amsterdam, pp. 575–583] first introduced balanced sampling plans for the exclusion of contiguous units. Sampling plans that excluded the selection of contiguous units within a given sample, while maintaining a constant second-order inclusion probability for non-contiguous units, were investigated for finite populations of N units arranged in a circular, one-dimensional ordering. While significant advancements have been made in the identification and generalization of such plans (commonly referred to as BSA sampling plans), little is known concerning the extension of such sampling plans to multi-dimensional populations. This paper presents a review of the pertinent results on one-dimensional BSA sampling plans and a discussion of the properties of two-dimensional BSA sampling plans.

18.
The problem of finding confidence intervals for the success parameter of a binomial experiment has a long history, and a myriad of procedures have been developed. Most exploit the duality between hypothesis testing and confidence regions and are typically based on large-sample approximations. We instead employ a direct approach that attempts to determine the optimal coverage probability function a binomial confidence procedure can have from the exact underlying binomial distributions, which in turn defines the associated procedure. We show that a graphical perspective provides much insight into the problem. Both procedures whose coverage never falls below the declared confidence level and those that achieve that level only approximately are analyzed. We introduce the Length/Coverage Optimal method, a variant of Sterne's procedure that minimizes average length while maximizing coverage among all length-minimizing procedures, and show that it is superior in important ways to existing procedures.

19.
Sequential designs can be used to save computation time in implementing Monte Carlo hypothesis tests. The motivation is to stop resampling if the early resamples provide enough information on the significance of the p-value of the original Monte Carlo test. In this paper, we consider a sequential design called the B-value design, proposed by Lan and Wittes, and construct the sequential design bounding the resampling risk, the probability that the accept/reject decision differs from the decision based on complete enumeration. For the B-value design, whose exact implementation can be carried out using the algorithm proposed in Fay, Kim and Hachey, we first compare the expected resample size for different designs with comparable resampling risk. We show that the B-value design has considerable savings in expected resample size compared to a fixed resample or simple curtailed design, and comparable expected resample size to the iterative push out design of Fay and Follmann. The B-value design is more practical than the iterative push out design in that it is tractable even for small values of the resampling risk, which was a challenge with the iterative push out design. We also propose an approximate B-value design that can be constructed without specially developed software and that provides analytic insight into the choice of parameter values in constructing the exact B-value design.
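For contrast with the B-value design, here is the simple curtailed design it is compared against: resampling stops as soon as the accept/reject decision at level α can no longer change. Names and the toy test statistic are illustrative; this is not the B-value design itself.

```python
import numpy as np

rng = np.random.default_rng(9)

def curtailed_mc_pvalue(t_obs, resample_stat, B=999, alpha=0.05):
    """Simple curtailed Monte Carlo test at level alpha.

    The final p-value is (1 + exceedances) / (B + 1); stop early once the
    decision is determined either way.
    """
    cutoff = int(np.floor(alpha * (B + 1))) - 1   # max exceedances with p <= alpha
    exc = 0
    for b in range(1, B + 1):
        exc += resample_stat() >= t_obs
        if exc > cutoff:                 # p-value must exceed alpha: accept H0
            return "accept H0", b
        if exc + (B - b) <= cutoff:      # exc stays <= cutoff no matter what: reject
            return "reject H0", b
    return ("reject H0" if exc <= cutoff else "accept H0"), B

# Toy example: is the mean of 20 N(0.5, 1) draws extreme under H0: N(0, 1)?
x = rng.normal(0.5, 1.0, 20)
print(curtailed_mc_pvalue(x.mean(), lambda: rng.normal(0, 1, 20).mean()))
```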

20.
Robbins (1956), in his original paper on empirical Bayes methods, suggested a method of estimating a binomial success probability. We give explicit bounds for the empirical Bayes risk of natural variants of the Robbins estimator that show convergence to an optimal risk at rate O(n^{-1/2}). Bounds that yield the same convergence rate are also obtained in the related compound estimation problem.
