Similar documents
20 similar documents retrieved (search time: 31 ms)
1.
We consider the optimal configuration of a square array group testing algorithm (denoted A2) to minimize the expected number of tests per specimen. For prevalence greater than 0.2498, individual testing is shown to be more efficient than A2. For prevalence less than 0.2498, closed-form lower and upper bounds on the optimal group sizes for A2 are given. Arrays of dimension 2 × 2, 3 × 3, and 4 × 4 are shown never to be optimal. The results are illustrated by considering the design of a specimen pooling algorithm for detection of recent HIV infections in Malawi.
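The square-array scheme above can be illustrated with a small Monte Carlo sketch: test all n row pools and n column pools, then individually retest every specimen at the intersection of a positive row and a positive column. This assumes a perfect assay and independent infections; the function name and its defaults are illustrative, not from the paper.

```python
import random

def a2_tests_per_specimen(p, n, reps=5000, seed=1):
    """Monte Carlo estimate of the expected number of tests per
    specimen for an n x n square-array (A2) algorithm: 2n pooled
    tests (rows and columns), plus one individual retest for each
    cell lying in both a positive row and a positive column.
    Assumes a perfect assay and independent infections."""
    rng = random.Random(seed)
    total_tests = 0
    for _ in range(reps):
        # True marks an infected specimen
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        pos_rows = sum(1 for row in grid if any(row))
        pos_cols = sum(1 for j in range(n)
                       if any(grid[i][j] for i in range(n)))
        total_tests += 2 * n + pos_rows * pos_cols
    return total_tests / (reps * n * n)
```

Consistent with the abstract's threshold near 0.2498, the estimate falls below 1 test per specimen at low prevalence and exceeds 1 (so individual testing wins) at high prevalence.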

2.
For the multi-stage hierarchical Dorfman group testing problem, in which inspection errors exist, we provide a simple procedure to determine the number of stages, the number of subgroups at each stage, and the sample size of each subgroup simultaneously. The optimal group testing plan established by this procedure minimizes the expected number of tests per item. This result can easily be extended to more general applications.
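For the classical single-stage Dorfman special case with an error-free assay, the quantity being minimized, the expected number of tests per item, has a simple closed form, and the optimal group size can be found by direct search. This is a hedged illustration only, not the paper's multi-stage, error-prone procedure; the function names are ours.

```python
def dorfman_tests_per_item(p, k):
    """Expected tests per item for classical two-stage Dorfman pooling
    with group size k, prevalence p, and an error-free assay: one
    pooled test per group of k, plus k individual tests whenever the
    pool is positive, which happens with probability 1 - (1-p)^k."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def optimal_group_size(p, k_max=100):
    """Group size in 2..k_max minimizing the expected tests per item."""
    return min(range(2, k_max + 1),
               key=lambda k: dorfman_tests_per_item(p, k))
```

At p = 0.01 the optimal group size is 11, with roughly 0.196 expected tests per item; at high prevalence even the best group size exceeds 1 test per item, so pooling no longer pays.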

3.
This paper considers a family of penalized likelihood score tests for group variation. The tests can be indexed by a measure of degrees of freedom. At one extreme, with degrees of freedom one less than the number of groups, is the usual score test for a fixed effects alternative using indicator variables for the groups, while at the other extreme, in the limit as the degrees of freedom approach 0, is a test closely related to a score test based on a random effects alternative. Asymptotic power comparisons are made for the tests in the family. As would be expected, different members of the family are more efficient for different alternatives. Generally, the tests with smaller degrees of freedom appear to have better power than the standard test for alternatives focusing on differences among the larger groups, and lower power for alternatives focusing on differences among the smaller groups. Simulations indicate that the asymptotic approximation to the distribution performs better for the tests with small degrees of freedom.

4.
Summary.  In high-throughput genomic work, a very large number d of hypotheses are tested on the basis of n ≪ d data samples. The large number of tests necessitates an adjustment for false discoveries, in which a true null hypothesis is rejected. The expected number of false discoveries is easy to obtain. Dependences between the hypothesis tests greatly affect the variance of the number of false discoveries. Assuming that the tests are independent gives an inadequate variance formula. The paper presents a variance formula that takes account of the correlations between test statistics. That formula involves O(d²) correlations, and so a naïve implementation has cost O(nd²). A method based on sampling pairs of tests allows the variance to be approximated at a cost that is independent of d.
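The pair-sampling idea can be sketched as follows: rather than computing all O(d²) pairwise correlations among tests, sample random column pairs and average, at a cost independent of d. This is a simplified illustration of the sampling trick (applied here to the mean squared correlation), not the paper's exact variance formula; the names are ours.

```python
import random

def mean_sq_correlation_by_pair_sampling(X, n_pairs=500, seed=0):
    """Approximate the average squared correlation over all column
    pairs of an n x d data matrix X (rows = samples, columns = tests)
    by sampling n_pairs random pairs; the cost does not grow with d."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])

    def corr(j, k):
        # Pearson correlation of columns j and k
        xj = [row[j] for row in X]
        xk = [row[k] for row in X]
        mj, mk = sum(xj) / n, sum(xk) / n
        num = sum((a - mj) * (b - mk) for a, b in zip(xj, xk))
        den = (sum((a - mj) ** 2 for a in xj)
               * sum((b - mk) ** 2 for b in xk)) ** 0.5
        return num / den if den > 0 else 0.0

    return sum(corr(*rng.sample(range(d), 2)) ** 2
               for _ in range(n_pairs)) / n_pairs
```

With perfectly duplicated columns the estimate is 1; with independent columns it is near the null level of about 1/(n−1).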

5.
A diagnostic key defines a hierarchical sequence of tests used to identify a specimen from a set of known taxa. The usual measure of the efficiency of a key, the expected number of tests per identification, may not be appropriate when the responses to tests are not known for all taxon/test pairs. An alternative measure is derived and it is shown that the test selected for use at each point in the sequence should depend on which measure is used. Two suggestions of Gower and Payne (1975), regarding test selection, are shown to be appropriate only to the new measure. Tests are usually selected by calculating the value of some selection criterion function. Functions are reviewed for use in each of the two situations, and new functions are derived. The functions discussed are shown to be interpretable in terms of the number of tests required to complete the key from the current point in the sequence, given that a particular test is selected. This interpretation enables the functions to be extended to select tests with different costs and allows recommendations to be made as to which function to use, depending on how many ‘good’ tests are available.

6.
We consider the optimal configuration of a square array group testing algorithm (denoted A2) to minimize the expected number of tests per specimen. For prevalence greater than 0.2498, individual testing is shown to be more efficient than A2. For prevalence less than 0.2498, closed-form lower and upper bounds on the optimal group sizes for A2 are given. Arrays of dimension 2 × 2, 3 × 3, and 4 × 4 are shown never to be optimal. The results are illustrated by considering the design of a specimen pooling algorithm for detection of recent HIV infections in Malawi.

7.
Grønnesby and Borgan (1996, Lifetime Data Analysis 2, 315–328) propose an omnibus goodness-of-fit test for the Cox proportional hazards model. The test is based on grouping the subjects by their estimated risk score and comparing the observed number of events with a model-based estimate of the expected number of events within each group. We show, using extensive simulations, that even for moderate sample sizes the choice of the number of groups is critical for the test to attain the specified size. In light of these results we suggest a grouping strategy under which the test attains the correct size even for small samples. The power of the test statistic seems to be acceptable when compared with other goodness-of-fit tests.

8.
Testing between hypotheses, when independent sampling is possible, is a well-developed subject. In this paper, we propose hypothesis tests that are applicable when the samples are obtained using Markov chain Monte Carlo. These tests are useful when one is interested in deciding whether the expected value of a certain quantity is above or below a given threshold. We show non-asymptotic error bounds and bounds on the expected number of samples for three types of tests: a fixed sample size test, a sequential test with an indifference region, and a sequential test without an indifference region. Our tests can lead to significant savings in sample size. We illustrate our results on an example of Bayesian parameter inference involving an ODE model of a biochemical pathway.
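For intuition, the fixed-sample-size variant can be sketched with Hoeffding's inequality for samples bounded in [0, 1]: with an indifference region of half-width ε around a threshold θ, n = ln(1/δ)/(2ε²) i.i.d. samples bound both error probabilities by δ. The paper's setting is MCMC (dependent) samples and includes the sequential variants, so treat this as an i.i.d. illustration only; the names are ours.

```python
import math

def fixed_sample_test(sampler, theta, eps, delta):
    """Decide H1: E[X] >= theta + eps versus H0: E[X] <= theta - eps
    for samples in [0, 1], with an indifference region
    (theta - eps, theta + eps) in which either answer is acceptable.
    By Hoeffding's inequality, n = ln(1/delta) / (2 eps^2) i.i.d.
    samples keep both error probabilities below delta.
    Returns (decision, n)."""
    n = math.ceil(math.log(1.0 / delta) / (2.0 * eps ** 2))
    mean = sum(sampler() for _ in range(n)) / n
    return ("H1" if mean > theta else "H0"), n
```

For example, with θ = 0.5, ε = 0.1, and δ = 0.01, the test draws n = 231 samples; the sequential versions in the paper aim to stop earlier when the answer is clear.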

9.
The performance of step-wise group screening, in terms of the expected number of runs and the expected number of incorrect decisions, is considered. A method for obtaining optimal step-wise designs is presented for the cases in which the direction of each defective factor is assumed to be known a priori and the observations are subject to error.

10.
In recent years, immunological science has evolved, and cancer vaccines are available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected. Accordingly, the use of weighted log‐rank tests with the Fleming–Harrington class of weights is proposed for evaluation of survival endpoints. We present a method for calculating the sample size under assumption of a piecewise exponential distribution for the cancer vaccine group and an exponential distribution for the placebo group as the survival model. The impact of delayed effect timing on both the choice of the Fleming–Harrington's weights and the increment in the required number of events is discussed. Copyright © 2014 John Wiley & Sons, Ltd.
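The Fleming–Harrington weighted log-rank statistic referred to above can be sketched directly: at each event time, the group-1 observed-minus-expected event count is weighted by S(t−)^ρ (1 − S(t−))^γ, where S is the pooled Kaplan–Meier estimate; ρ = 0 with γ > 0 down-weights early differences, as suits a delayed treatment effect. This is a plain illustration of the test statistic, not the paper's sample-size method; the function name is ours.

```python
def fh_weighted_logrank(times, events, group, rho=0.0, gamma=1.0):
    """Standardized Fleming-Harrington G(rho, gamma) weighted log-rank
    statistic for two groups (group coded 0/1; events coded 1 = event,
    0 = censored).  Returns Z, approximately N(0, 1) under H0."""
    data = sorted(zip(times, events, group))
    s_left = 1.0                      # pooled Kaplan-Meier S(t-)
    num = var = 0.0
    i, n = 0, len(data)
    while i < n:
        t = data[i][0]
        at_risk = [x for x in data if x[0] >= t]
        n_j = len(at_risk)
        n1_j = sum(1 for x in at_risk if x[2] == 1)
        d_j = sum(1 for x in data if x[0] == t and x[1] == 1)
        d1_j = sum(1 for x in data
                   if x[0] == t and x[1] == 1 and x[2] == 1)
        if d_j > 0:
            w = (s_left ** rho) * ((1.0 - s_left) ** gamma)
            num += w * (d1_j - d_j * n1_j / n_j)
            if n_j > 1:
                var += (w ** 2) * d_j * (n1_j / n_j) \
                       * (1.0 - n1_j / n_j) * (n_j - d_j) / (n_j - 1)
            s_left *= 1.0 - d_j / n_j  # update pooled KM past time t
        while i < n and data[i][0] == t:
            i += 1                     # skip observations tied at t
    return num / var ** 0.5 if var > 0 else 0.0
```

With ρ = γ = 0 this reduces to the standard (unweighted) log-rank statistic.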

11.
Some simple test procedures are considered for comparing several group means with a standard value when the data are in a one-way layout. The underlying distributions are assumed to be normal with possibly unequal variances. The tests are based on a union-intersection formulation and can be applied in a form similar to a Shewhart control chart. Both two-sided and one-sided alternatives are considered. The power of the tests can be obtained from tables of a non-central t distribution. Implementation of the tests is illustrated with a numerical example. The tests help identify any group means different from the standard and might lead to a decision about rejecting the null hypothesis before all the group means are observed. The resulting savings in time and resources might be valuable in applications where the number of groups is large and the cost of acquiring data is high. For situations where the normality assumption is untenable, a non-parametric procedure, based on one-sample sign tests, is considered.

12.
In this paper, three-stage group screening in which group-factors contain differing numbers of factors is discussed. A procedure for grouping the factors in the absence of concrete prior information is described. Formulas for the expected number of runs and the expected number of incorrect decisions are also obtained. These formulas are used to formulate criteria for optimal designs.

13.
Two-stage (double sample) tests of hypotheses are presented for testing linear hypotheses in the general linear model. General and one-sided alternatives are considered. Computational techniques for computing critical points are discussed. Tables of critical points are presented. An example suggests that two-stage tests can achieve the same power as a fixed sample size test while considerably reducing the expected number of observations required for the test.

14.
Using relative utility curves to evaluate risk prediction
Summary.  Because many medical decisions are based on risk prediction models that are constructed from medical history and results of tests, the evaluation of these prediction models is important. This paper makes five contributions to this evaluation: (i) the relative utility curve, which gauges the potential for better prediction in terms of utilities, without the need for a reference level for one utility, while providing a sensitivity analysis for misspecification of utilities; (ii) the relevant region, which is the set of values of prediction performance that are consistent with the recommended treatment status in the absence of prediction; (iii) the test threshold, which is the minimum number of tests that would be traded for a true positive prediction in order for the expected utility to be non-negative; (iv) the evaluation of two-stage predictions that reduce test costs; and (v) connections between various measures of performance of prediction. An application involving the risk of cardiovascular disease is discussed.

15.
Asymptotic tests for multivariate repeated measures are derived under non-normality and an unspecified dependence structure. Notwithstanding their broader scope of application, the methods are particularly useful when a random vector of a large number of repeated measurements is collected from each subject but the number of subjects per treatment group is limited. In some experimental situations, replicating the experiment a large number of times could be expensive or infeasible. Although taking a large number of repeated measurements could be relatively cheap, due to within-subject dependence the number of parameters involved can grow large quickly. Under mild conditions on the persistence of the dependence, we derive asymptotic multivariate tests for the three testing problems in repeated measures analysis. The simulation results provide evidence in favour of the accuracy of the approximations to the null distributions.

16.
Clustered (longitudinal) count data arise in many biostatistical practices in which a number of repeated count responses are observed on a number of individuals. The repeated observations may also represent counts over time from a number of individuals. One important problem that arises in practice is to test homogeneity within clusters (individuals) and between clusters (individuals). As data within clusters are observations of repeated responses, the count data may be correlated and/or over-dispersed. For over-dispersed count data with an unknown over-dispersion parameter, we derive two score tests by assuming a random intercept model within the framework of (i) the negative binomial mixed effects model and (ii) the double extended quasi-likelihood mixed effects model (Lee and Nelder, 2001). These two statistics are much simpler than a statistic derived by Jacqmin-Gadda and Commenges (1995) under the framework of the over-dispersed generalized linear model. The first statistic takes the over-dispersion more directly into the model and is therefore expected to do well when the model assumptions are satisfied, while the other statistic is expected to be robust. Simulations show the superior level properties of the statistics derived under the negative binomial and double extended quasi-likelihood model assumptions. A data set is analyzed and a discussion is given.

17.
Goodness-of-fit tests for discrete data and models with parameters to be estimated are usually based on Pearson's χ² or the likelihood ratio statistic. Both are included in the family of power-divergence statistics SDλ, which are asymptotically χ² distributed for the usual sampling schemes. We derive a limiting standard normal distribution for a standardization Tλ of SDλ under Poisson sampling by considering an approach with an increasing number of cells. In contrast to the χ² asymptotics we do not require an increase of all expected values and thus meet the situation when data are sparse. Our limit result is useful even if a bootstrap test is used, because it implies that the statistic Tλ should be bootstrapped and not the sum SDλ. The peculiarity of our approach is that the models under test only specify associations. Hence we have to deal with an infinite number of nuisance parameters. We illustrate our approach with an application.

18.
Medical and epidemiological studies often involve groups of subjects associated with increasing levels of exposure to a risk factor. Survival of the groups is expected to follow the same order as the level of exposure. Formal tests for this trend fall into the regression framework if one knows what function of exposure to use as a covariate. When unknown, a linear function of exposure level is often used. Jonckheere-type tests for trend have generated continued interest largely because they do not require specification of a covariate. This paper shows that the Jonckheere-type test statistics are special cases of a generalized linear rank statistic with time-dependent covariates which unfortunately depend on the initial group sizes and censoring distributions. Using asymptotic relative efficiency calculations, the Jonckheere tests are compared to standard linear rank tests based on a linear covariate over a spectrum of shapes for the true trend.
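For uncensored data, the classical Jonckheere–Terpstra statistic behind these tests is just a sum of Mann–Whitney counts over ordered pairs of groups, which is why no covariate needs to be specified. A minimal sketch (the censored survival versions studied in the paper are more involved):

```python
from itertools import combinations

def jonckheere_terpstra(groups):
    """Jonckheere-Terpstra statistic for k ordered groups (a list of
    lists of responses).  For every ordered pair of groups (i < j)
    and every pair of observations, count 1 when the later group's
    value is larger and 1/2 on ties; a large J supports the ordered
    alternative that responses increase with group order."""
    J = 0.0
    for ga, gb in combinations(groups, 2):
        for a in ga:
            for b in gb:
                if b > a:
                    J += 1.0
                elif b == a:
                    J += 0.5
    return J
```

J attains its maximum when every later-group observation exceeds every earlier-group observation, and 0 when the ordering is fully reversed.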

19.
Group-testing procedures for minimizing the expected number of tests needed to classify N units as either good or bad are described. The units are assumed to have come independently from a binomial population with common probability p of being defective and q = 1 - p of being good. Special consideration is given to comparing certain halving procedures with the corresponding optimal procedures for the problem of finding one defective if it exists, and the problem of finding all the defectives.
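A halving procedure of the kind compared above can be sketched recursively: test the whole group; on a positive result split it in half and test each half, continuing until singletons are classified. The predicate `test` stands in for an error-free group assay; this is an illustrative halving scheme, not necessarily the paper's exact procedures.

```python
def halving_classify(units, test):
    """Classify every unit as good or defective by halving: test a
    group; if it is positive (contains a defective), split it in two
    and recurse on both halves; singletons are classified directly.
    `test(subset)` returns True iff the subset contains a defective.
    Returns (set_of_defectives, number_of_tests_used)."""
    tests = 0
    defectives = set()
    stack = [list(units)]
    while stack:
        grp = stack.pop()
        tests += 1
        if test(grp):
            if len(grp) == 1:
                defectives.add(grp[0])
            else:
                mid = len(grp) // 2
                stack.append(grp[:mid])
                stack.append(grp[mid:])
    return defectives, tests
```

With N = 8 units and a single defective this uses 7 tests versus 8 for individual testing, and only 1 test when no defective is present.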

20.
In recent years, immunological science has evolved, and cancer vaccines are now approved and available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected and is actually observed in drug approval studies. Accordingly, we propose the evaluation of survival endpoints by weighted log‐rank tests with the Fleming–Harrington class of weights. We consider group sequential monitoring, which allows early efficacy stopping, and determine a semiparametric information fraction for the Fleming–Harrington family of weights, which is necessary for the error spending function. Moreover, we give a flexible survival model in cancer vaccine studies that considers not only the delayed treatment effect but also the long‐term survivors. In a Monte Carlo simulation study, we illustrate that when the primary analysis is a weighted log‐rank test emphasizing the late differences, the proposed information fraction can be a useful alternative to the surrogate information fraction, which is proportional to the number of events. Copyright © 2016 John Wiley & Sons, Ltd.
