Similar Literature (20 results)
1.
The problem of constructing a confidence interval of ‘preassigned width and coverage probability’ considered by Costanza, Hamdy and Son (1986) is further analyzed. Several multi-stage estimation procedures (purely sequential, accelerated sequential, and three-stage procedures) are utilized to deal with the same estimation problem. The relative advantages and disadvantages of these procedures are discussed.
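The abstract does not reproduce the stopping rules it compares. As a hedged illustration of the purely sequential idea only, here is a minimal Chow-Robbins-type sketch for a fixed-width interval for a mean; the function name, defaults, and the 1/n correction term are our own choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def chow_robbins_ci(draw, d=0.5, alpha=0.05, n0=10, rng=None):
    """Purely sequential fixed-width interval for a mean (Chow-Robbins-type).

    Samples until n >= (z/d)^2 * (S_n^2 + 1/n); the 1/n term keeps the
    rule from stopping while the sample variance is still near zero.
    """
    rng = rng or np.random.default_rng()
    z = norm.ppf(1 - alpha / 2)
    x = [draw(rng) for _ in range(n0)]
    while len(x) < (z / d) ** 2 * (np.var(x, ddof=1) + 1 / len(x)):
        x.append(draw(rng))
    m = float(np.mean(x))
    return (m - d, m + d), len(x)

# e.g. (lo, hi), n = chow_robbins_ci(lambda g: g.normal(5.0, 2.0))
```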

2.
This paper extends the ideas in Giommi (Proc. 45th Session of the Internat. Statistical Institute, Vol. 2 (1985) 577–578; Techniques d'enquête 13(2) (1987) 137–144) and in Särndal and Swenson (Bull. Int. Statist. Inst. 15(2) (1985) 1–16; Int. Statist. Rev. 55 (1987) 279–294). Given the parallel between a ‘three-phase sampling’ design and ‘sampling with subsequent unit and item nonresponse’, we apply results from three-phase sampling theory to the nonresponse situation. To handle the practical problem of unknown distributions at the second and third phases of selection (the response mechanisms) in the nonresponse case, we use two approaches to response probability estimation: the response homogeneity groups (RHG) model (Särndal and Swenson, 1985, 1987) and nonparametric estimation (Giommi, 1985, 1987). To motivate the three-phase selection, imputation procedures for item nonresponse are used with the RHG model for unit nonresponse. By means of a Monte Carlo study, we find that the regression-type estimators are the most precise of those studied under either approach to response probability estimation: they have lower bias, mean squared error, and variance, a variance estimator close to the true variance, and achieved coverage rates closer to the nominal levels. The simulation study also shows how poor the variance estimators are under the single-imputation approach currently used to handle the problem of missing values.

3.
We present a unifying approach to multiple testing procedures for sequential (or streaming) data by giving sufficient conditions for a sequential multiple testing procedure to control the familywise error rate (FWER). Together, we call these conditions a ‘rejection principle for sequential tests’, which we then apply to some existing sequential multiple testing procedures to give a simplified understanding of their FWER control. Next, the principle is applied to derive two new sequential multiple testing procedures with provable FWER control, one for testing hypotheses in order and another for closed testing. These new procedures are illustrated by applying them to a chromosome aberration data set and to finding the maximum safe dose of a treatment.

4.
For the Poisson distribution, a posterior distribution for the complete sample size N is derived from an incomplete sample when any specified subset of the classes is missing. Means as well as other posterior characteristics of N are obtained for two examples with various classes removed. For the special case of a truncated ‘missing zero class’ Poisson sample, a simulation experiment is performed for the small-sample situation (N = 25), applying both Bayesian and maximum likelihood methods of estimation.
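For the maximum likelihood side of the ‘missing zero class’ case, the truncated mean identifies the Poisson rate, and the complete sample size N can then be estimated by inflating the observed count. A minimal sketch under an idealized zero-truncated Poisson model (helper name is ours, not the paper's code):

```python
import numpy as np
from scipy.optimize import brentq

def zt_poisson_mle(counts):
    """MLE for a zero-truncated Poisson sample; assumes mean(counts) > 1.

    The truncated mean is E[X | X > 0] = lam / (1 - exp(-lam)), so the
    MLE solves lam / (1 - exp(-lam)) = xbar; N is then estimated by
    inflating the observed count n by the estimated capture probability.
    """
    xbar = np.mean(counts)
    lam = brentq(lambda l: l / (1 - np.exp(-l)) - xbar, 1e-8, 50.0)
    n_hat = len(counts) / (1 - np.exp(-lam))  # estimated complete size N
    return lam, n_hat
```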

5.
This paper uses graphical methods to illustrate and compare the coverage properties of a number of methods for calculating confidence intervals for the difference between two independent binomial proportions. We investigate both small-sample and large-sample properties of both two-sided and one-sided coverage, with an emphasis on asymptotic methods. In terms of aligning the smoothed coverage probability surface with the nominal confidence level, we find that the score-based methods on the whole have the best two-sided coverage, although they have slight deficiencies for confidence levels of 90% or lower. For an easily taught, hand-calculated method, the Brown-Li ‘Jeffreys’ method appears to perform reasonably well, and in most situations it has better one-sided coverage than the widely recommended alternatives. In general, we find that the one-sided properties of many of the available methods are surprisingly poor. In fact, almost none of the existing asymptotic methods achieve equal coverage on both sides of the interval, even with large sample sizes, and consequently, if used as a non-inferiority test, the type I error rate (which is equal to the one-sided non-coverage probability) can be inflated. The only exception is the Gart-Nam ‘skewness-corrected’ method, which we express using modified notation in order to include a bias correction for improved small-sample performance, and an optional continuity correction for those seeking more conservative coverage. Using a weighted average of two complementary methods, we also define a new hybrid method that almost matches the performance of the Gart-Nam interval.
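As a rough, assumption-labelled illustration of the hand-calculable flavour of the Brown-Li ‘Jeffreys’ method, the sketch below shrinks each proportion via the Jeffreys posterior mean (x + 0.5)/(n + 1) before applying a Wald-type formula; the paper's exact construction may differ, so treat this only as an approximation in that spirit.

```python
import numpy as np
from scipy.stats import norm

def jeffreys_diff_ci(x1, n1, x2, n2, alpha=0.05):
    """Jeffreys-style adjusted-Wald interval for p1 - p2 (a sketch).

    Each proportion is shrunk toward 1/2 via the Jeffreys posterior
    mean (x + 0.5) / (n + 1) before the usual Wald standard error.
    """
    z = norm.ppf(1 - alpha / 2)
    p1 = (x1 + 0.5) / (n1 + 1)
    p2 = (x2 + 0.5) / (n2 + 1)
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - z * se, diff + z * se
```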

6.
Ensuring a standard of assessment in situations where marking panels are used is fraught with difficulties, particularly where essay-type responses are to be marked. This paper discusses statistical process control procedures, similar to those used in industry, to help moderate marking quality when ‘double-marking’ or ‘partial double-marking’ is used. When questions are assessed by the same two markers, the scores assigned to responses by each marker may be adjusted so that their assessments are on average equal in terms of location and scale. The paper also discusses methods of controlling sequential assessment, and demonstrates the application of these techniques in evaluating marker consistency, using data from school leaving examinations in geography.
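The paper's moderation procedure is not reproduced here; the following is only a minimal sketch of the location-and-scale alignment idea it describes, with marker A arbitrarily taken as the reference (function name is ours).

```python
import numpy as np

def align_scores(a, b):
    """Map marker B's scores onto marker A's location and scale.

    a, b: arrays of scores the two markers assigned to the same
    responses.  After the transform, B's scores have A's mean and SD.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return (b - b.mean()) / b.std(ddof=1) * a.std(ddof=1) + a.mean()
```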

7.
In this paper, we consider a statistical estimation problem known as atomic deconvolution. Introduced in reliability, this model has a direct application when considering biological data produced by flow cytometers. From a statistical point of view, we aim at inferring the percentage of cells expressing the selected molecule and the probability distribution function associated with its fluorescence emission. We propose an adaptive estimation procedure based on a previous deconvolution procedure introduced by Es, Gugushvili, and Spreij [(2008), ‘Deconvolution for an atomic distribution’, Electronic Journal of Statistics, 2, 265–297] and Gugushvili, Es, and Spreij [(2011), ‘Deconvolution for an atomic distribution: rates of convergence’, Journal of Nonparametric Statistics, 23, 1003–1029]. To estimate both the mixing parameter and the mixing density automatically, we use the Lepskii method based on the optimal choice of a bandwidth using a bias-variance decomposition. We then derive convergence rates that are shown to be minimax optimal (up to log terms) in Sobolev classes. Finally, we apply our algorithm to simulated and real biological data.

8.
The fiducial approach to the random effects model with two variance components developed by Venables and James (1978) is related to the Bayesian approach of Box and Tiao (1973). The operating characteristics, under repeated sampling, of the resulting interval estimators for the “within classes” variance component are investigated, and the behaviour of the two sets of intervals is found to be very similar: the coverage frequency of 95% probability intervals is approximately 91% when the “between classes” variance component is zero, but rises rapidly to 95% as the between-classes component increases. The probability intervals are shown to be shorter on average than a comparable confidence interval based upon the within-classes sum of squares, and to be robust against nonnormality in the class means.

9.
In this paper we have developed tests for bivariate exponentiality against the ‘bivariate decreasing mean residual life (BDMRL)’ and ‘bivariate new better than used in expectation (BNBUE)’ classes of non-exponential probability distributions. We have also obtained a large-sample approximation to make the tests readily applicable.

10.
Pairwise comparisons for proportions estimated by pooled testing
When estimating the prevalence of a rare trait, pooled testing can confer substantial benefits when compared to individual testing. In addition to screening experiments for infectious diseases in humans, pooled testing has also been exploited in other applications such as drug testing, epidemiological studies involving animal disease, plant disease assessment, and screening for rare genetic mutations. Within a pooled-testing context, we consider situations wherein different strata or treatments are to be compared with the goals of assessing significant and practical differences between strata and ranking strata in terms of prevalence. To achieve these goals, we first present two simultaneous pairwise interval estimation procedures for use with pooled data. Our procedures rely on asymptotic results, so we investigate small-sample behavior and compare the two procedures in terms of simultaneous coverage probability and mean interval length. We then present a unified approach to determine pool sizes which deliver desired coverage properties while taking testing costs and interval precision into account. We illustrate our methods using data from an observational HIV study involving heterosexual males who use intravenous drugs.
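The paper's simultaneous interval procedures are not reproduced here, but the basic pooled-testing estimator they build on is standard. A minimal sketch, assuming a perfect assay and equal pool sizes (the function name and the simple delta-method Wald form are ours):

```python
import numpy as np
from scipy.stats import norm

def pooled_prevalence_ci(t, m, s, alpha=0.05):
    """Prevalence estimate and delta-method Wald CI from pooled testing.

    t: positive pools, m: pools tested, s: pool size; assumes 0 < t < m.
    A pool is positive with prob theta = 1 - (1 - p)^s, so the MLE
    inverts theta_hat = t / m.
    """
    theta = t / m
    p = 1 - (1 - theta) ** (1 / s)
    grad = (1 - theta) ** (1 / s - 1) / s      # dp/dtheta
    var = grad**2 * theta * (1 - theta) / m    # delta-method variance
    half = norm.ppf(1 - alpha / 2) * np.sqrt(var)
    return p, max(0.0, p - half), min(1.0, p + half)
```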

11.
Two-stage procedures are introduced to control the width and coverage (validity) of confidence intervals for the estimation of the mean, the between groups variance component and certain ratios of the variance components in one-way random effects models. The procedures use the pilot sample data to estimate an “optimal” group size and then proceed to determine the number of groups by a stopping rule. Such sampling plans give rise to unbalanced data, which are consequently analyzed by the harmonic mean method. Several asymptotic results concerning the proposed procedures are given along with simulation results to assess their performance in moderate sample size situations. The proposed procedures were found to effectively control the width and probability of coverage of the resulting confidence intervals in all cases and were also found to be robust in the presence of missing observations. From a practical point of view, the procedures are illustrated using a real data set and it is shown that the resulting unbalanced designs tend to require smaller sample sizes than is needed in a corresponding balanced design where the group size is arbitrarily pre-specified.

12.
A model for media exposure probabilities is developed which has the joint probability of exposure proportional to the product of the marginal probabilities. The model is a generalization of Goodhardt & Ehrenberg's ‘duplication of viewing law’, with the duplication constant computed from a truncated canonical expansion of the joint exposure probability. The proposed model is compared on the basis of estimation accuracy and computation speed with an accurate and quick ‘approximate’ log-linear model (as noted previously) and the popular Metheringham beta-binomial model. Our model is shown to be more accurate than the approximate log-linear model and four times faster. In addition, it is much more accurate than Metheringham's model.
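The duplication-of-viewing law itself is simple to state in code. A minimal sketch, with an illustrative duplication constant that is not taken from the paper:

```python
def joint_exposure(p_a, p_b, k=1.2):
    """Goodhardt-Ehrenberg duplication-of-viewing approximation.

    Joint exposure is taken proportional to the product of the
    marginals: P(A and B) ~= k * P(A) * P(B), where k is the
    duplication constant (k = 1 would mean independence).  The cap
    at min(p_a, p_b) is our own safeguard, not part of the law.
    """
    return min(k * p_a * p_b, min(p_a, p_b))
```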

13.
This study investigates the small-sample powers of several tests designed against ordered location alternatives in randomized block experiments. The results are intended to aid the researcher in the selection process. Toward this end, the small-sample powers of three classes of rank tests — tests based on ‘within-blocks’ rankings (W-tests), ‘among-blocks’ rankings (A-tests), and ‘ranking after alignment’ within blocks (RAA-tests) — are compared and contrasted with the asymptotic properties given by Pirie (1974) as well as with the empirical powers of competing parametric procedures.

14.
An objective of randomized placebo-controlled preventive HIV vaccine efficacy (VE) trials is to assess the relationship between vaccine effects to prevent HIV acquisition and continuous genetic distances of the exposing HIVs to multiple HIV strains represented in the vaccine. The set of genetic distances, only observed in failures, is collectively termed the ‘mark.’ The objective has motivated a recent study of a multivariate mark-specific hazard ratio model in the competing risks failure time analysis framework. Marks of interest, however, are commonly subject to substantial missingness, largely due to rapid post-acquisition viral evolution. In this article, we investigate the mark-specific hazard ratio model with missing multivariate marks and develop two inferential procedures based on (i) inverse probability weighting (IPW) of the complete cases, and (ii) augmentation of the IPW estimating functions by leveraging auxiliary data predictive of the mark. Asymptotic properties and finite-sample performance of the inferential procedures are presented. This research also provides general inferential methods for semiparametric density ratio/biased sampling models with missing data. We apply the developed procedures to data from the HVTN 502 ‘Step’ HIV VE trial.

15.
In sequential studies, formal interim analyses are usually restricted to a consideration of a single null hypothesis concerning a single parameter of interest. Valid frequentist methods of hypothesis testing and of point and interval estimation for the primary parameter have already been devised for use at the end of such a study. However, the completed data set may warrant a more detailed analysis, involving the estimation of parameters corresponding to effects that were not used to determine when to stop, and yet correlated with those that were. This paper describes methods for setting confidence intervals for secondary parameters in a way which provides the correct coverage probability in repeated frequentist realizations of the sequential design used. The method assumes that information accumulates on the primary and secondary parameters at proportional rates. This requirement will be valid in many potential applications, but only in limited situations in survival analysis.

16.
In a 1965 Decision Theory course at Stanford University, Charles Stein began a digression with “an amusing problem”: is there a proper confidence interval for the mean based on a single observation from a normal distribution with both mean and variance unknown? Stein introduced the interval with endpoints ±c|X| and showed that, for c large enough, the minimum coverage probability (over all values of the mean and variance) could be made arbitrarily close to one. While the problem and coverage calculation appear in the author’s hand-written notes from the course, no optimality result for the interval was developed there. Here, the Hunt–Stein construction plus analysis based on special features of the problem provides a “minimax” rule in the sense that it minimizes the maximum expected length among all procedures with fixed coverage (or, equivalently, maximizes the minimal coverage among all procedures with a fixed expected length). The minimax rule is a mixture of two confidence procedures that are equivariant under scale and sign changes, and it is uniformly better than the classroom example or the natural interval X ± c|X|.
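Stein's coverage claim is easy to probe by simulation. A minimal sketch (our own Monte Carlo, not from the course notes): since coverage of the interval with endpoints ±c|X| depends only on θ = μ/σ, we scan a grid of θ values and report the worst case.

```python
import numpy as np

def min_coverage(c, thetas=np.linspace(0.01, 20.0, 200), n=100_000, seed=0):
    """Monte Carlo minimum coverage of Stein's single-observation interval.

    X ~ N(mu, sigma^2); the interval with endpoints +/- c|X| covers mu
    iff |mu| <= c|X|, i.e. iff |theta| <= c|Z + theta| with Z ~ N(0,1).
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    cover = [np.mean(np.abs(t) <= c * np.abs(z + t)) for t in thetas]
    return min(cover)

# min_coverage(5.0) is already close to 1, in line with Stein's claim
```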

17.
The problem of sequentially estimating a location parameter is considered in the special case when the data arrive at random times. Certain classes of sequential estimation procedures are derived under a location-invariant loss function, with the observation cost determined by a function of the stopping time and the number of observations taken up to that time.

18.
In this paper we derive two likelihood-based procedures for the construction of confidence limits for the common odds ratio in K 2 × 2 contingency tables. We then conduct a simulation study to compare these procedures with a procedure recently proposed by Sato (Biometrics 46 (1990) 71–79), based on the asymptotic distribution of the Mantel-Haenszel estimate of the common odds ratio. We consider the situation in which the number of strata remains fixed (finite), but the sample sizes within each stratum are large. Bartlett's score procedure based on the conditional likelihood is found to be almost identical, in terms of coverage probabilities and average interval lengths, to the procedure recommended by Sato, although the score procedure has a slight edge in some instances in terms of average interval lengths. Thus, for the ‘fixed strata and large sample’ situation, Bartlett's score procedure can be considered an alternative to the procedure proposed by Sato based on the asymptotic distribution of the Mantel-Haenszel estimator of the common odds ratio.
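The Mantel-Haenszel estimator that Sato's procedure is built around has a simple closed form. A minimal sketch (assumes every stratum total is positive):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel estimate of the common odds ratio.

    tables: iterable of K 2x2 tables [[a, b], [c, d]]; each stratum
    contributes a*d/n to the numerator and b*c/n to the denominator.
    """
    num = den = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# e.g. mantel_haenszel_or([[[10, 5], [3, 12]], [[7, 8], [2, 13]]])
```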

19.
This study constructs a simultaneous confidence region for two combinations of coefficients of linear models and their ratios based on the concept of generalized pivotal quantities. Many biological studies, such as those in genetics, assessment of drug effectiveness, and health economics, involve comparing several dose groups with a placebo group, as well as the group ratios. The Bonferroni correction and the plug-in method based on the multivariate-t distribution have been proposed for simultaneous region estimation. However, both are asymptotic procedures, and their performance in finite sample sizes has not been thoroughly investigated. Based on the concept of generalized pivotal quantities, we propose a Bonferroni correction procedure and a generalized variable (GV) procedure to construct the simultaneous confidence regions. To address a genetic concern with the dominance ratio, we conduct a simulation study to empirically investigate the coverage probability and expected length of the methods for various combinations of sample sizes and values of the dominance ratio. The simulation results demonstrate that the simultaneous confidence region based on the GV procedure provides sufficient coverage probability and reasonable expected length, so it can be recommended in practice. Numerical examples using published data sets illustrate the proposed methods.

20.
This article considers the construction of level 1 − α fixed-width 2d confidence intervals for a Bernoulli success probability p, assuming no prior knowledge about p, so that p can be anywhere in the interval [0, 1]. It is shown that some fixed-width 2d confidence intervals that combine the sequential sampling of Hall [Asymptotic theory of triple sampling for sequential estimation of a mean, Ann. Stat. 9 (1981), pp. 1229–1238] with the fixed-sample-size confidence intervals of Agresti and Coull [Approximate is better than ‘exact’ for interval estimation of binomial proportions, Am. Stat. 52 (1998), pp. 119–126], Wilson [Probable inference, the law of succession, and statistical inference, J. Am. Stat. Assoc. 22 (1927), pp. 209–212] and Brown et al. [Interval estimation for binomial proportion (with discussion), Stat. Sci. 16 (2001), pp. 101–133] have confidence level close to 1 − α. These sequential confidence intervals require a much smaller sample size than a fixed-sample-size confidence interval. For the coin jamming example considered, a fixed-sample-size confidence interval requires a sample size of 9457, while a sequential confidence interval requires a sample size that rarely exceeds 2042.
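Hall's triple-sampling scheme is not replicated here, but the flavour of a fixed-width sequential rule built on the Agresti-Coull interval can be sketched as a fully sequential, one-at-a-time simplification (function names and defaults are ours):

```python
import numpy as np
from scipy.stats import norm

def ac_halfwidth(x, n, alpha=0.05):
    """Half-width of the Agresti-Coull interval for a binomial p."""
    z = norm.ppf(1 - alpha / 2)
    n_t = n + z**2
    p_t = (x + z**2 / 2) / n_t
    return z * np.sqrt(p_t * (1 - p_t) / n_t)

def sequential_fixed_width(draw, d=0.01, alpha=0.05, n0=30, rng=None):
    """Sample Bernoulli trials until the AC half-width drops below d.

    A simplification of the paper's scheme, which combines Hall-type
    triple sampling with such intervals rather than sampling one at a time.
    """
    rng = rng or np.random.default_rng()
    x = sum(draw(rng) for _ in range(n0))
    n = n0
    while ac_halfwidth(x, n, alpha) > d:
        x += draw(rng)
        n += 1
    z = norm.ppf(1 - alpha / 2)
    return (x + z**2 / 2) / (n + z**2), n

# e.g. p_hat, n = sequential_fixed_width(lambda g: g.binomial(1, 0.3))
```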
