Similar Documents
Found 20 similar documents; search took 438 ms.
1.
In this article we discuss multistage group screening in which group-factors contain differing numbers of factors. We describe a procedure for grouping the factors in the absence of concrete prior information so that the relative testing cost is minimal. It is shown that under quite general conditions, these designs require fewer runs than equivalent designs in which the group-factors contain the same number of factors.
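For intuition about why the grouping matters, the classic single-stage (Dorfman-type) trade-off with a common group size g is easy to compute: the cost per item is 1/g for the group tests plus the expected retesting cost. A minimal sketch (not the authors' multistage procedure, and with a purely illustrative prevalence):

```python
import numpy as np

def expected_tests_per_item(p, g):
    """Expected number of tests per item for two-stage (Dorfman-type)
    group screening with common group size g and prevalence p:
    one group test shared by g items, plus g retests whenever the
    group is positive, which happens with probability 1 - (1-p)**g."""
    return 1.0 / g + 1.0 - (1.0 - p) ** g

p = 0.02                       # illustrative prevalence, not from the paper
sizes = np.arange(2, 51)
costs = expected_tests_per_item(p, sizes)
g_opt = sizes[np.argmin(costs)]
print(f"optimal group size: {g_opt}, tests per item: {costs.min():.3f}")
```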

2.
Group testing is a method of pooling a number of units together and performing a single test on the resulting group. Group testing is an appealing option when few individual units are thought to be infected and the cost of testing is non-negligible. Overdispersion is the phenomenon of having greater variability than predicted by the random component of the model; this is common when modeling the binomial distribution in group testing. The purpose of this paper is to provide a comparison of several established methods of constructing confidence intervals after adjusting for overdispersion. We evaluate and investigate each method in six different cases of group testing. A method based on the score statistic with a correction for skewness is recommended. We illustrate the methods using two data sets, one from the detection of seed transmission and the other from serological testing for malaria.
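As a baseline for the intervals being compared, even the binomial (no-overdispersion) case requires a back-transformation from the pool level to the individual level. The sketch below applies a Wilson score interval to the pool-level positive probability and maps it to prevalence; it is illustrative only and omits the overdispersion and skewness adjustments that are the paper's subject:

```python
import numpy as np
from scipy import stats

def prevalence_ci(T, n, s, level=0.95):
    """Point estimate and score-type CI for prevalence p from group
    testing: n pools of size s, T of which test positive. A Wilson
    interval for the pool-level positive probability theta is
    back-transformed through p = 1 - (1 - theta)**(1/s)."""
    z = stats.norm.ppf(0.5 + level / 2)
    theta = T / n
    centre = (theta + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(theta * (1 - theta) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    lo, hi = max(centre - half, 0.0), min(centre + half, 1.0)
    back = lambda t: 1 - (1 - t) ** (1 / s)   # monotone, so endpoints map directly
    return back(theta), back(lo), back(hi)

print(prevalence_ci(T=7, n=50, s=10))  # hypothetical pooled-testing data
```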

3.
This paper describes a computer program GTEST for designing group testing experiments for classifying each member of a population of items as “good” or “defective”. The outcome of a test on a group of items is either “negative” (if all items in the group are good) or “positive” (if at least one of the items is defective, but it is not known which). GTEST is based on a Bayesian approach. At each stage, it attempts to (nearly) maximize the expected reduction in “entropy”, which is a quantitative measure of the amount of uncertainty about the state of the items. The user controls the procedure through specification of the prior probabilities of being defective, restrictions on the construction of the test group, and priorities that are assigned to the items. The nominal prior probabilities can be modified adaptively, to reduce the sensitivity of the procedure to the proportion of defectives in the population.
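Under independent priors the entropy criterion has a simple closed form: a group's test outcome is negative with probability q equal to the product of its items' prior probabilities of being good, and because the outcome is a deterministic function of the item states, the expected entropy reduction equals the binary entropy of q. A toy sketch of that greedy choice (ignoring GTEST's restrictions and priorities, with made-up priors):

```python
import numpy as np
from itertools import combinations

def outcome_entropy(priors, group):
    """Entropy (bits) of a single group-test outcome under independent
    prior defect probabilities; this equals the expected reduction in
    uncertainty about the items, the quantity maximized greedily."""
    q = np.prod([1 - priors[i] for i in group])   # P(negative) = P(all good)
    if q in (0.0, 1.0):
        return 0.0
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

priors = [0.05, 0.1, 0.02, 0.2, 0.08]             # hypothetical priors
best = max((g for r in range(1, 6) for g in combinations(range(5), r)),
           key=lambda g: outcome_entropy(priors, g))
print("most informative first test:", best)
```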

4.
The purpose of our study is to propose a procedure for determining the sample size at each stage of repeated group significance tests intended to compare the efficacy of two treatments when the response variable is normal. It is necessary to devise a procedure for reducing the maximum sample size, because large sample sizes are often required in group sequential tests. In order to reduce the sample size at each stage, we construct repeated confidence boundaries which enable us to find which of the two treatments is the more effective at an early stage. We then use recursive numerical-integration formulae to determine the sample size at the intermediate stages. We compare our procedure with Pocock's in terms of maximum sample size and average sample size in simulations.
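For context, the Pocock benchmark referred to above is easy to approximate by simulation: a constant boundary is applied to the standardized statistic at each look, and early stopping pulls the average sample size below the maximum. A rough sketch under illustrative settings (the constant 2.413 is Pocock's published two-sided 5% boundary for five looks; everything else here is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

def average_sample_size(K=5, n_stage=20, c=2.413, delta=0.5, reps=5000):
    """Simulate a Pocock-type group sequential comparison of two
    treatments with normal responses (unit variance per arm): after
    each of K looks, n_stage more pairs have accrued and the
    standardized statistic is compared with the constant boundary c.
    Returns the average number of pairs used."""
    used = np.empty(reps)
    for r in range(reps):
        # per-pair treatment differences x - y ~ N(delta, 2)
        diffs = rng.normal(delta, np.sqrt(2.0), (K, n_stage))
        cum = np.cumsum(diffs.sum(axis=1))
        z = np.abs(cum) / np.sqrt(2.0 * n_stage * np.arange(1, K + 1))
        crossed = z > c
        stop = int(np.argmax(crossed)) if crossed.any() else K - 1
        used[r] = (stop + 1) * n_stage
    return used.mean()

print(f"average pairs used: {average_sample_size():.1f} (maximum: 100)")
```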

5.
The authors propose a procedure for determining the unknown number of components in mixtures by generalizing a Bayesian testing method proposed by Mengersen & Robert (1996). The testing criterion they propose involves a Kullback‐Leibler distance, which may be weighted or not. They give explicit formulas for the weighted distance for a number of mixture distributions and propose a stepwise testing procedure to select the minimum number of components adequate for the data. Their procedure, which is implemented using the BUGS software, exploits a fast collapsing approach which accelerates the search for the minimum number of components by avoiding full refitting at each step. The performance of their method is compared, using both distances, to the Bayes factor approach.

6.
A sequentially rejective (SR) testing procedure introduced by Holm (1979) and modified (MSR) by Shaffer (1986) is considered for testing all pairwise mean comparisons. For such comparisons, both the SR and MSR methods require that the observed test statistics be ordered and compared, each in turn, to appropriate percentiles of Student's t distribution. For the MSR method these percentiles are based on the maximum number of true null hypotheses remaining at each stage of the sequential procedure, given significance at previous stages. A function is developed for determining this number from the number of means being tested and the stage of the test. For a test of all pairwise comparisons, the logical implications which follow the rejection of a null hypothesis render the MSR procedure uniformly more powerful than the SR procedure. Tables of percentiles for comparing K means, 3 < K < 6, using the MSR method are presented. These tables use Sidak's (1967) multiplicative inequality and simplify the use of the MSR procedure. Several modifications to the MSR are suggested as a means of further increasing the power for testing the pairwise comparisons. General use of the MSR and the corresponding function for testing parameters other than the mean is discussed.
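The unmodified SR baseline is straightforward to implement with p-values in place of percentile tables; Shaffer's modification would only change the divisors. A minimal sketch:

```python
import numpy as np

def holm_sr(pvals, alpha=0.05):
    """Holm's (1979) sequentially rejective procedure: compare the
    i-th smallest p-value (0-based i) with alpha / (m - i) and stop at
    the first non-rejection. Shaffer's MSR replaces m - i by the
    maximum number of null hypotheses that can still be true given the
    earlier rejections; that divisor is never larger, which is why MSR
    is uniformly more powerful for pairwise comparisons."""
    pvals = np.asarray(pvals)
    m = pvals.size
    order = np.argsort(pvals)
    reject = np.zeros(m, dtype=bool)
    for i, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - i):
            reject[idx] = True
        else:
            break                      # all later (larger) p-values retained
    return reject

print(holm_sr([0.001, 0.02, 0.04, 0.30]))
```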

7.
When counting the number of chemical parts in air pollution studies or when comparing the occurrence of congenital malformations between a uranium mining town and a control population, we often assume a Poisson distribution for the number of these rare events. Some discussions of sample size calculation under the Poisson model appear elsewhere, but all of these focus on testing equality rather than testing equivalence. We discuss sample size and power calculation on the basis of the exact distribution under Poisson models for testing non-inferiority and equivalence with respect to the mean incidence rate ratio. On the basis of large sample theory, we further develop an approximate sample size calculation formula using the normal approximation of a proposed test statistic for testing non-inferiority, and an approximate power calculation formula for testing equivalence. We find that using these approximation formulae tends to produce an underestimate of the minimum required sample size calculated from the exact test procedure. On the other hand, we find that the power corresponding to the approximate sample sizes can actually be accurate (with respect to Type I error and power) when we apply the asymptotic test procedure based on the normal distribution. We tabulate, for a variety of situations, the minimum mean incidence needed in the standard (or control) population, which can easily be employed to calculate the minimum required sample size for each comparison group for testing non-inferiority and equivalence between two Poisson populations.
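To see the flavor of the normal-approximation route, a textbook-style sample size sketch for the non-inferiority test on the rate ratio follows; it is not necessarily the paper's exact formula, and, as the abstract notes, such approximations can underestimate the sample size required by the exact procedure:

```python
import math
from scipy import stats

def n_per_group(lam0, lam1, delta0, alpha=0.05, power=0.8):
    """Approximate per-group sample size for Poisson non-inferiority on
    the rate ratio via the normal approximation to the log rate ratio.
    H0: lam1/lam0 >= delta0 (inferior) vs H1: lam1/lam0 < delta0, with
    delta0 > 1 the non-inferiority margin. Var(log rate ratio) is
    approximated by (1/lam0 + 1/lam1) / n."""
    za, zb = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
    effect = math.log(delta0) - math.log(lam1 / lam0)
    return math.ceil((za + zb) ** 2 * (1 / lam0 + 1 / lam1) / effect ** 2)

# hypothetical: equal true rates 0.5, margin 1.25
print(n_per_group(lam0=0.5, lam1=0.5, delta0=1.25))
```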

8.
The problem of comparing two independent groups of univariate data in the sense of testing for equivalence is considered in a fully nonparametric setting. The distribution of the data within each group may be a mixture of both a continuous and a discrete component, and no assumptions are made regarding the way in which the distributions of the two groups may differ from each other – in particular, the assumption of a shift model is avoided. The proposed equivalence testing procedure for this scenario refers to the median of the independent difference distribution, i.e. to the median of the differences between independent observations from the test group and the reference group, respectively. The procedure provides an asymptotic equivalence test, which is symmetric with respect to the roles of ‘test’ and ‘reference’. It can be described either as a two one-sided tests (TOST) approach, or equivalently as a confidence interval inclusion rule. A one-sided variant of the approach can be applied analogously to non-inferiority testing problems. The procedure may be generalised to equivalence testing with respect to quantiles other than the median, and is closely related to tolerance-interval-type inference. Copyright © 2009 John Wiley & Sons, Ltd.
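The confidence interval inclusion formulation is easy to mimic: declare equivalence when a (1 − 2α) interval for the median of the independent differences lies inside the equivalence margins. The sketch below uses a bootstrap interval purely for illustration; the paper's test is asymptotic, not bootstrap-based:

```python
import numpy as np

rng = np.random.default_rng(0)

def equivalence_by_ci_inclusion(x, y, eps, level=0.90, B=2000):
    """CI-inclusion rule for equivalence on the median of the
    independent-difference distribution: estimate med(X - Y) by the
    median of all pairwise differences and declare equivalence if a
    (1 - 2*alpha) bootstrap interval lies within (-eps, eps).
    level=0.90 corresponds to TOST at alpha = 0.05."""
    point = np.median(np.subtract.outer(x, y))
    boots = [np.median(np.subtract.outer(rng.choice(x, x.size),
                                         rng.choice(y, y.size)))
             for _ in range(B)]
    lo, hi = np.quantile(boots, [(1 - level) / 2, (1 + level) / 2])
    return point, bool(lo > -eps and hi < eps)

x, y = rng.normal(0.0, 1, 40), rng.normal(0.1, 1, 40)   # toy data
print(equivalence_by_ci_inclusion(x, y, eps=0.5))
```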

9.
In the quantitative group testing problem, the use of the group mean to identify whether the group maximum exceeds a prespecified threshold (an infected group) is analyzed, using n independent and identically distributed individuals. Under these conditions, it is shown that the information in the mean is sufficient to classify each group as infected or healthy with low probability of misclassification when the underlying distribution is a one-sided heavy-tailed distribution.
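A quick Monte Carlo experiment conveys why the mean can stand in for the maximum under one-sided heavy tails: a single extreme observation dominates the group sum. All distributional choices below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def misclassification_rate(n=50, thresh=100.0, reps=20000):
    """Monte Carlo sketch: call a group 'infected' when its maximum
    exceeds thresh, but classify it using only the group mean. With
    one-sided heavy tails an extreme value inflates the mean by about
    max/n, so a mean cutoff separates the two kinds of groups."""
    cutoff = thresh / n + 2.0            # baseline Lomax(1.5) mean is 2
    g = rng.pareto(1.5, (reps, n))       # one-sided heavy-tailed samples
    infected = g.max(axis=1) > thresh    # true group label
    flagged = g.mean(axis=1) > cutoff    # mean-based classification
    return np.mean(infected != flagged)

print(f"estimated misclassification rate: {misclassification_rate():.3f}")
```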

10.
In this paper, we discuss a nonadaptive group testing algorithm that identifies up to two defects. The number of tests required by our algorithm is not optimal, but our search procedure is less complex than that of other well-known algorithms that use fewer tests. We go on to discuss a simple two-stage modification of our algorithm which dramatically reduces the number of tests needed.

11.
This article deals with multistage group screening in which group-factors contain the same number of factors. A usual assumption of this procedure is that the directions of possible effects are known. In practice, however, this assumption is often unreasonable. This paper examines, in the case of no errors in observations, the performance of multistage group screening when this assumption is false. This entails consideration of cancellation effects within group-factors.

12.
We consider the problem of selecting variables in factor analysis models. The $L_1$ regularization procedure is introduced to perform automatic variable selection. In the factor analysis model, each variable is controlled by multiple factors when there is more than one underlying factor. We treat the parameters corresponding to the multiple factors as grouped parameters and then apply the group lasso. Furthermore, the weight of the group lasso penalty is modified to obtain appropriate estimates and improve the performance of variable selection. Crucial issues in this modeling procedure include the selection of the number of factors and of a regularization parameter. Choosing these parameters can be viewed as a model selection and evaluation problem. We derive a model selection criterion for evaluating the factor analysis model via the weighted group lasso. Monte Carlo simulations are conducted to investigate the effectiveness of the proposed procedure. A real data example is also given to illustrate our procedure. The Canadian Journal of Statistics 40: 345–361; 2012 © 2012 Statistical Society of Canada
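The group lasso's selection mechanism is block soft-thresholding: each variable's vector of loadings across factors is shrunk to zero as a unit. A sketch of the weighted proximal step only (the full estimator alternates this step with likelihood updates, which are omitted here):

```python
import numpy as np

def prox_weighted_group_lasso(L, step, weights):
    """Proximal operator of the weighted group lasso penalty applied
    row-wise to a loading matrix L (variables x factors): row j is
    shrunk toward zero as a block, and dropped entirely when its norm
    falls below step * weights[j]. Larger per-variable weights shrink
    harder, mirroring the role of the modified weights in the paper."""
    out = np.zeros_like(L)
    for j, w in enumerate(weights):
        norm = np.linalg.norm(L[j])
        if norm > step * w:                       # block soft-threshold
            out[j] = (1.0 - step * w / norm) * L[j]
    return out

L = np.array([[0.9, 0.1], [0.05, 0.02], [0.4, 0.6]])   # toy loadings
print(prox_weighted_group_lasso(L, step=0.1, weights=[1.0, 1.0, 1.0]))
```

In the printed result the second variable's loadings are zeroed as a block, which is exactly how the penalty performs variable selection.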

13.
In the context of large-scale multiple hypothesis testing, the hypotheses often possess certain group structures based on additional information such as Gene Ontology in gene expression data and phenotypes in genome-wide association studies. It is hence desirable to incorporate such information when dealing with multiplicity problems, to increase statistical power. In this article, we demonstrate the benefit of considering group structure by presenting a p-value weighting procedure which utilizes the relative importance of each group while controlling the false discovery rate under weak conditions. The procedure is easy to implement and is shown to be more powerful than the classical Benjamini-Hochberg procedure in both theoretical and simulation studies. By estimating the proportion of true null hypotheses, the data-driven procedure controls the false discovery rate asymptotically. Our analysis of one breast cancer dataset confirms that the procedure performs favorably compared with the classical method.
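In its generic form, p-value weighting amounts to rescaling each p-value by a mean-one weight before the usual Benjamini-Hochberg step-up; the paper's contribution lies in deriving the group weights, which are taken as given here:

```python
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg: divide each p-value by its
    (mean-one) weight, then apply the standard step-up rule. Groups
    believed to be enriched in signals get weights above one, which
    effectively relaxes their thresholds."""
    w = np.asarray(weights, dtype=float)
    p = np.asarray(pvals, dtype=float) / (w / w.mean())   # normalize weights
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))             # step-up cutoff
        reject[order[:k + 1]] = True
    return reject

# two hypotheses in a prioritized group, two in a deprioritized one
print(weighted_bh([0.001, 0.009, 0.04, 0.2], [2.0, 2.0, 0.5, 0.5]))
```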

14.
This article is devoted to the construction and asymptotic study of adaptive, group‐sequential, covariate‐adjusted randomized clinical trials analysed through the prism of the semiparametric methodology of targeted maximum likelihood estimation. We show how to build, as the data accrue group‐sequentially, a sampling design that targets a user‐supplied optimal covariate‐adjusted design. We also show how to carry out sound statistical inference based on such an adaptive sampling scheme (thereby extending results so far known only in the independent and identically distributed setting), and how group‐sequential testing applies on top of it. The procedure is robust (i.e. consistent even if the working model is mis‐specified). A simulation study confirms the theoretical results and validates the conjecture that the procedure may also be efficient.

15.
Many neuroscience experiments record sequential trajectories in which each trajectory consists of oscillations and fluctuations around zero. Such trajectories can be viewed as zero-mean functional data. When there are structural breaks in higher-order moments, it is not always easy to spot them by mere visual inspection. Motivated by this challenging problem in brain signal analysis, we propose a detection and testing procedure to find the change point in the functional covariance. The detection procedure is based on cumulative sum (CUSUM) statistics. The fully functional testing procedure relies on a null distribution which depends on infinitely many unknown parameters, though in practice only a finite number of these parameters can be included in the hypothesis test for the existence of a change point. This paper provides some theoretical insight into the influence of the number of parameters. Meanwhile, the asymptotic properties of the estimated change point are developed. The effectiveness of the proposed method is numerically validated in simulation studies and in an application investigating changes in rat brain signals following an experimentally induced stroke.
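The CUSUM detection step has a compact form: compare the running sum of the curves' outer products against its proportional share of the total, and locate the maximal deviation. A sketch on a discretized grid, with a simulated variance break (all settings illustrative):

```python
import numpy as np

def cusum_cov_changepoint(X):
    """CUSUM sketch for a change point in the covariance of zero-mean
    functional data. X has shape (n, d): n curves observed on a common
    grid of d points. The running sum of outer products X_i X_i^T is
    compared with its proportional share of the grand total; the
    Frobenius norm of the deviation peaks near a covariance break."""
    n = X.shape[0]
    outer = np.einsum('ij,ik->ijk', X, X)            # per-curve X_i X_i^T
    csum = np.cumsum(outer, axis=0)
    total = csum[-1]
    ks = np.arange(1, n + 1)
    dev = csum - (ks / n)[:, None, None] * total
    stat = np.linalg.norm(dev, axis=(1, 2)) / np.sqrt(n)
    return int(np.argmax(stat[:-1])) + 1, stat.max()  # dev at k = n is zero

rng = np.random.default_rng(2)
X = np.concatenate([rng.normal(0, 1.0, (60, 20)),
                    rng.normal(0, 2.0, (60, 20))])    # variance doubles at 60
print(cusum_cov_changepoint(X))
```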

16.
Determining the group size is a crucial stage before conducting experiments using group testing methods. Accounting for misclassification, we propose a D-criterion and an A-criterion to determine a robust group size for screening multiple infections simultaneously. Extensive simulation shows the advantage of the proposed method when the goal is estimation.

17.
In situations where individuals are screened for an infectious disease or other binary characteristic and where resources for testing are limited, group testing can offer substantial benefits. Group testing, in which subjects are initially tested in groups (pools), has been successfully applied to problems in blood bank screening, public health, drug discovery, genetics, and many other areas. In these applications, the goal is often to identify each individual as positive or negative using initial group tests and subsequent retests of individuals within positive groups. Many group testing identification procedures have been proposed; however, the vast majority of them fail to incorporate heterogeneity among the individuals being screened. In this paper, we present a new approach to identifying positive individuals when covariate information is available on each individual. This covariate information is used to structure how retesting is implemented within positive groups; we therefore call this new approach "informative retesting." We derive closed-form expressions and implementation algorithms for the probability mass functions of the number of tests needed to decode positive groups. These informative retesting procedures are illustrated through a number of examples and are applied to chlamydia and gonorrhea testing in Nebraska for the Infertility Prevention Project. Overall, our work shows compelling evidence that informative retesting can dramatically decrease the number of tests while providing accuracy similar to established non-informative retesting procedures.

18.
This article is concerned with testing multiple hypotheses, one for each of a large number of small data sets. Such data are sometimes referred to as high-dimensional, low-sample-size data. Our model assumes that each observation within a randomly selected small data set follows a mixture of C shifted and rescaled versions of an arbitrary density f. A novel kernel density estimation scheme, in conjunction with clustering methods, is applied to estimate f. The Bayes information criterion and a new criterion, the weighted mean of within-cluster variances, are used to estimate C, the number of mixture components or clusters. These results are applied to the multiple testing problem. The null sampling distribution of each test statistic is determined by f, and hence a bootstrap procedure that resamples from an estimate of f is used to approximate this null distribution.

19.
Group testing procedures, in which groups containing several units are tested without testing each unit, are widely used as cost-effective procedures for estimating the proportion of defective units in a population. A problem arises when we apply these procedures to the detection of genetically modified organisms (GMOs), because the analytical instrument for detecting GMOs has a threshold of detection. If the group size (i.e., the number of units within a group) is large, the GMOs in a group are not detected, owing to dilution, even if the group contains one unit of GMOs. Thus, most people conventionally use a small group size (which we call the conventional group size) so that they can reliably detect the existence of defective units if at least one unit of GMOs is included in the group. However, we show that we can estimate the proportion of defective units for any group size even if a threshold of detection exists; the estimate of the proportion of defective units is easily obtained using functions implemented in a spreadsheet. We then show that the conventional group size is not always optimal for controlling the consumer's risk, because such a group size requires a larger number of groups for testing.
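A spreadsheet-style version of such an estimator solves the pool-level probability equation for p. The sketch below encodes the detection threshold crudely, as "a pool tests positive only if it contains at least d GM units"; the paper's dilution model may differ:

```python
from scipy.stats import binom
from scipy.optimize import brentq

def estimate_p(T, n, k, d=1):
    """Moment-type estimate of the proportion p of GM units from group
    testing with a detection threshold: n groups of k units, T positive
    results, and a group detected only when it contains at least d GM
    units. Solves P(Bin(k, p) >= d) = T/n for p; a spreadsheet can
    reproduce this with BINOM.DIST and goal-seek. Requires 0 < T < n."""
    theta_hat = T / n
    f = lambda p: (1 - binom.cdf(d - 1, k, p)) - theta_hat
    return brentq(f, 1e-12, 1 - 1e-12)

# hypothetical: 12 positive groups of 100, pools of 40 units, detection needs >= 2
print(estimate_p(T=12, n=100, k=40, d=2))
```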

20.
A two-stage group acceptance sampling plan based on a truncated life test is proposed, which can be used regardless of the underlying lifetime distribution when multi-item testers are employed. The decision on lot acceptance can be made in the first or second stage according to the number of failures from each group. The design parameters of the proposed plan, such as the number of groups required and the acceptance number for each of the two stages, are determined independently of the underlying lifetime distribution so as to satisfy the consumer's risk at the specified unreliability. Single-stage group sampling plans are also considered as special cases of the proposed plan and are compared with the proposed plan in terms of the average sample number and the operating characteristics. Some important distributions are considered to explain the procedure developed here.
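For the single-stage special case mentioned above, the consumer's-risk condition reduces to one binomial tail probability, which makes the design search transparent. A sketch with hypothetical plan parameters:

```python
from scipy.stats import binom

def oc_single_stage(p, g, r, c):
    """Operating characteristic of a single-stage group plan: g groups
    of r items are placed on the truncated life test, and the lot is
    accepted if the total number of failures is at most c. With each
    item failing before the test termination time with probability p,
    P(accept) = P(Bin(g*r, p) <= c). Designing the plan amounts to
    finding (g, c) keeping this below the consumer's risk at the
    specified unreliability."""
    return binom.cdf(c, g * r, p)

# acceptance probability at an assumed unreliability of 0.10
print(f"P(accept) = {oc_single_stage(p=0.10, g=5, r=8, c=2):.3f}")
```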
