Similar Articles
20 similar articles found
1.
The testing of combined bacteriological samples – or “group testing” – was introduced to reduce the cost of identifying defective individuals in populations containing small proportions of defectives. It may also be applied to plants, animals, or food samples to estimate proportions infected, or to accept or reject populations. Given the proportion defective in the population, the number of positive combined samples is approximately binomial when the population is large: we find the exact distribution when groups include the same number of samples. We derive some properties of this distribution, and consider maximum-likelihood and Bayesian estimation of the number defective.
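
The binomial pooling model this abstract describes can be sketched in a few lines; the function names and example values here are ours, not the paper's, and assume a perfectly accurate test on pools of equal size.

```python
def pool_positive_prob(p, k):
    # Probability that a pool of k units tests positive, assuming
    # independent units, each defective with probability p, and an
    # error-free test: the pool is negative only if all k units are good.
    return 1.0 - (1.0 - p) ** k

def mle_prevalence(t, n, k):
    # Maximum-likelihood estimate of p from t positive pools out of
    # n pools of common size k, inverting the relation above.
    return 1.0 - (1.0 - t / n) ** (1.0 / k)

prob = pool_positive_prob(0.02, 10)   # chance a 10-unit pool is positive
p_hat = mle_prevalence(18, 100, 10)   # estimate from 18/100 positive pools
```

Note that the MLE is simply the pool-level sample proportion transformed back to the unit scale, which is why the count of positive pools is the sufficient statistic here.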

2.
In the estimation of a proportion p by group testing (pooled testing), retesting of units within positive groups has received little attention due to the minimal gain in precision compared to testing additional units. If acquisition of additional units is impractical or too expensive, and testing is not destructive, we show that retesting can be a useful option. We propose the retesting of a random grouping of units from positive groups, and compare it with nested halving procedures suggested by others. We develop an estimator of p for our proposed method, and examine its variance properties. Using simulation we compare retesting methods across a range of group testing situations, and show that for most realistic scenarios, our method is more efficient.

3.
Summary.  We present an application of reversible jump Markov chain Monte Carlo sampling from the field of neurophysiology where we seek to estimate the number of motor units within a single muscle. Such an estimate is needed for monitoring the progression of neuromuscular diseases such as amyotrophic lateral sclerosis. Our data consist of action potentials that were recorded from the surface of a muscle in response to stimuli of different intensities applied to the nerve supplying the muscle. During the gradual increase in intensity of the stimulus from the threshold to supramaximal, all motor units are progressively excited. However, at any given submaximal intensity of stimulus, the number of units that are excited is variable, because of random fluctuations in axonal excitability. Furthermore, the individual motor unit action potentials exhibit variability. To account for these biological properties, Ridall and co-workers developed a model of motor unit activation that is capable of describing the response where the number of motor units, N, is fixed. The purpose of this paper is to extend that model so that the possible number of motor units, N, is a stochastic variable. We illustrate the elements of our model, show that the results are reproducible and show that our model can measure the decline in motor unit numbers during the course of amyotrophic lateral sclerosis. Our method holds promise of being useful in the study of neurogenic diseases.

4.
Group-testing procedures for minimizing the expected number of tests needed to classify N units as either good or bad are described. The units are assumed to have come independently from a binomial population with common probability p of being defective and q = 1-p of being good. Special consideration is given to comparing certain halving procedures with the corresponding optimal procedures for the problem of finding one defective if it exists, and the problem of finding all the defectives.
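
For the simplest two-stage plan of this kind (test a pool of k units; if positive, test each unit individually), the expected cost per unit has a closed form, and the best group size can be found by direct search. This is a sketch under the abstract's independence assumption; the function names are ours.

```python
def expected_tests_per_unit(k, q):
    # Two-stage (Dorfman-style) plan on a group of k units: one pooled
    # test, plus k individual tests if the pool is positive.  q is the
    # probability a unit is good, so q**k is the chance the pool passes.
    if k == 1:
        return 1.0          # no pooling: one test per unit
    return 1.0 / k + (1.0 - q ** k)

def best_group_size(q, k_max=100):
    # Exhaustive search for the group size minimizing expected tests/unit.
    return min(range(1, k_max + 1), key=lambda k: expected_tests_per_unit(k, q))

k_star = best_group_size(0.99)  # small defect rates favour large pools
```

At p = 0.01 the search lands on k = 11, with roughly 0.2 expected tests per unit, an almost fivefold saving over unit-by-unit testing.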

5.
Abstract.  Controlling the false discovery rate (FDR) is a powerful approach to multiple testing, with procedures developed for applications in many areas. Dependence among the test statistics is a common problem, and many attempts have been made to extend the procedures to handle it. In this paper, we show that a certain degree of dependence among the test statistics can be allowed, when the number of tests is large, with no need for any correction. We then suggest a way to conservatively estimate the proportion of false nulls, both under dependence and independence, and discuss the advantages of using such estimators when controlling the FDR.
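
The baseline FDR-controlling procedure that work in this area builds on is the Benjamini–Hochberg step-up rule, which is short enough to state in code (a minimal sketch, not the dependence-adjusted method of the abstract):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    # Step-up BH rule: sort the m p-values, find the largest rank i with
    # p_(i) <= alpha * i / m, and reject the i smallest p-values.
    # Returns the set of rejected indices into the original list.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= alpha * rank / m:
            k = rank
    return set(order[:k])

rejected = benjamini_hochberg([0.01, 0.02, 0.03, 0.5])
```

The abstract's point is that, for large m, this kind of procedure tolerates a degree of dependence among the test statistics without modification.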

6.
This paper describes a computer program GTEST for designing group testing experiments for classifying each member of a population of items as “good” or “defective”. The outcome of a test on a group of items is either “negative” (if all items in the group are good) or “positive” (if at least one of the items is defective, but it is not known which). GTEST is based on a Bayesian approach. At each stage, it attempts to maximize (nearly) the expected reduction in the “entropy”, which is a quantitative measure of the amount of uncertainty about the state of the items. The user controls the procedure through specification of the prior probabilities of being defective, restrictions on the construction of the test group, and priorities that are assigned to the items. The nominal prior probabilities can be modified adaptively, to reduce the sensitivity of the procedure to the proportion of defectives in the population.
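
One convenient fact behind this entropy criterion: because a test's outcome is a deterministic function of the item states, the expected entropy reduction from one pooled test equals the entropy of the outcome itself. A hedged sketch (our function names, independent priors assumed, not GTEST's actual implementation):

```python
import math

def binary_entropy(theta):
    # Entropy in bits of a Bernoulli(theta) outcome.
    if theta in (0.0, 1.0):
        return 0.0
    return -theta * math.log2(theta) - (1 - theta) * math.log2(1 - theta)

def expected_entropy_reduction(prior_probs):
    # Expected information gain (bits) from one pooled test on items with
    # the given independent prior defect probabilities.  Since the outcome
    # is determined by the item states, the gain is H(outcome).
    p_negative = 1.0
    for p in prior_probs:
        p_negative *= (1.0 - p)
    return binary_entropy(1.0 - p_negative)
```

The gain is maximized when the group is chosen so the test is positive with probability near one half, which is the greedy rule a program like GTEST can approximate at each stage.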

7.
In this article, we propose a factor-adjusted multiple testing (FAT) procedure based on factor-adjusted p-values in a linear factor model involving some observable and unobservable factors, for the purpose of selecting skilled funds in empirical finance. The factor-adjusted p-values were obtained after extracting the latent common factors by the principal component method. Under some mild conditions, the false discovery proportion can be consistently estimated even if the idiosyncratic errors are allowed to be weakly correlated across units. Furthermore, by appropriately setting a sequence of threshold values approaching zero, the proposed FAT procedure enjoys model selection consistency. Extensive simulation studies and a real data analysis for selecting skilled funds in the U.S. financial market are presented to illustrate the practical utility of the proposed method. Supplementary materials for this article are available online.

8.
The np control chart is used widely in Statistical Process Control (SPC) for attributes. It is difficult to design an np chart that simultaneously satisfies a requirement on false alarm rate and has high detection effectiveness. This is mainly because one is often unable to make the in-control Average Run Length ARL0 of an np chart close to a specified or desired value. This article proposes a new np control chart which is able to overcome the problems suffered by the conventional np chart. It is called the Double Inspection (DI) np chart, because it uses a double inspection scheme to decide the process status (in control or out of control). The first inspection decides the process status according to the number of non-conforming units found in a sample; and the second inspection makes a decision based on the location of a particular non-conforming unit in the sample. The double inspection scheme makes the in-control ARL0 very close to a specified value and the out-of-control Average Run Length ARL1 quite small. As a result, the requirement on a false alarm rate is satisfied and the detection effectiveness also achieves a high level. Moreover, the DI np chart retains the operational simplicity of the np chart to a large degree and achieves the performance improvement without requiring extra inspection (testing whether a unit is conforming or not).
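
The difficulty the abstract mentions is easy to see numerically: the conventional chart's in-control ARL is the reciprocal of a binomial tail probability, so it jumps discretely as the control limit moves. A minimal illustration (our function names and example parameters, not the DI chart itself):

```python
from math import comb

def np_chart_arl0(n, p0, ucl):
    # In-control ARL of a conventional np chart that signals when the
    # count of non-conforming units in a sample of n exceeds ucl:
    # ARL0 = 1 / P(X > ucl), with X ~ Binomial(n, p0).
    tail = sum(comb(n, x) * p0 ** x * (1 - p0) ** (n - x)
               for x in range(ucl + 1, n + 1))
    return float('inf') if tail == 0 else 1.0 / tail

# Because X is integer-valued, ARL0 leaps between candidate limits,
# so a target value such as 370 usually cannot be hit exactly.
arls = [np_chart_arl0(50, 0.02, c) for c in range(2, 6)]
```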

9.
Summary.  The false discovery rate (FDR) is a multiple hypothesis testing quantity that describes the expected proportion of false positive results among all rejected null hypotheses. Benjamini and Hochberg introduced this quantity and proved that a particular step-up p-value method controls the FDR. Storey introduced a point estimate of the FDR for fixed significance regions. The former approach conservatively controls the FDR at a fixed predetermined level, and the latter provides a conservatively biased estimate of the FDR for a fixed predetermined significance region. In this work, we show in both finite sample and asymptotic settings that the goals of the two approaches are essentially equivalent. In particular, the FDR point estimates can be used to define valid FDR controlling procedures. In the asymptotic setting, we also show that the point estimates can be used to estimate the FDR conservatively over all significance regions simultaneously, which is equivalent to controlling the FDR at all levels simultaneously. The main tool that we use is to translate existing FDR methods into procedures involving empirical processes. This simplifies finite sample proofs, provides a framework for asymptotic results and proves that these procedures are valid even under certain forms of dependence.
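
Storey's point estimate for a fixed significance region [0, t] has a compact form, sketched below; the choice of tuning parameter λ and the clipping conventions are ours.

```python
def fdr_estimate(pvals, t, lam=0.5):
    # Storey-style point estimate of the FDR for the region [0, t]:
    #   FDR_hat(t) = pi0_hat * m * t / #{p_i <= t},
    # where pi0_hat = #{p_i > lam} / (m * (1 - lam)) estimates the
    # proportion of true nulls from the flat right tail of the p-values.
    m = len(pvals)
    pi0_hat = sum(p > lam for p in pvals) / (m * (1 - lam))
    n_rej = max(1, sum(p <= t for p in pvals))   # avoid division by zero
    return min(1.0, pi0_hat * m * t / n_rej)
```

The equivalence result the abstract describes says, roughly, that rejecting at the largest t for which this estimate stays below α recovers a valid FDR-controlling procedure.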

10.
In sampling inspection by variables, an item is considered defective if its quality characteristic Y falls below some specification limit L0. We consider switching to a new supplier if we can be sure that the proportion of defective items for the new supplier is smaller than the proportion defective for the present supplier.

Assume that Y has a normal distribution. A test for comparing these proportions is developed. A simulation study of the performance of the test is presented.
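
Under the normality assumption, the proportion defective is a monotone function of the standardized distance from the specification limit, so comparing suppliers reduces to comparing (L0 − μ)/σ. A minimal sketch (our function name):

```python
import math

def defective_proportion(mu, sigma, L0):
    # Under Y ~ N(mu, sigma^2) with defectives defined by Y < L0, the
    # proportion defective is Phi((L0 - mu) / sigma), computed here via
    # the error function.
    z = (L0 - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

In practice μ and σ are replaced by the sample mean and standard deviation for each supplier, which is what makes the comparison a statistical test rather than a plug-in calculation.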

11.
The effects of inspection error on a two-stage procedure for the identification of defective units are studied. The first stage is intended to provide the number of defective units in a group of n units; the second stage consists of individual inspection until the status of all units is (apparently) established.
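
The basic distortion inspection error introduces at the first stage is captured by one line of arithmetic: the rate of units *declared* defective mixes true and false positives. A hypothetical illustration (our function name; sensitivity/specificity parametrization assumed):

```python
def apparent_defective_rate(p, sensitivity, specificity):
    # Probability a unit is declared defective when true defectives are
    # flagged with the given sensitivity and good units pass with the
    # given specificity.
    return p * sensitivity + (1.0 - p) * (1.0 - specificity)

# With p = 0.02, Se = 0.95, Sp = 0.98 the apparent rate is 0.0386,
# nearly double the true rate, because false positives from the large
# good population swamp the small defective one.
rate = apparent_defective_rate(0.02, 0.95, 0.98)
```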

12.
This paper investigates test procedures for testing the homogeneity of proportions in the analysis of clustered binary data in the context of unequal dispersions across the treatment groups. We introduce a simple test procedure based on adjusted proportions using a sandwich estimator of the variance of the proportion estimators obtained by the generalized estimating equations approach of Zeger and Liang (1986) [Biometrics 42, 121-130]. We also extend the existing test procedures for testing the homogeneity of proportions in this context. These test procedures are then compared, by simulations, in terms of size and power. Moreover, we derive the score test for testing the homogeneity of the dispersion parameters among several groups of clustered binary data. An illustrative application of the recommended test procedures is also presented.
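
The key ingredient, a sandwich variance for a proportion estimated from clustered binary data, can be sketched simply: treat cluster totals, not individual units, as the independent replicates. This is a generic cluster-robust sketch, not the paper's exact GEE formulation; names are ours.

```python
def sandwich_proportion(clusters):
    # clusters: list of (successes, size) pairs, one per cluster.
    # Returns (p_hat, robust_variance).  Summing squared cluster-level
    # residuals is what protects the variance estimate against
    # within-cluster correlation (overdispersion).
    total_y = sum(y for y, m in clusters)
    total_m = sum(m for y, m in clusters)
    p_hat = total_y / total_m
    var = sum((y - m * p_hat) ** 2 for y, m in clusters) / total_m ** 2
    return p_hat, var
```

For the overdispersed example in the test below, the robust variance (0.125) is double the naive binomial value p(1−p)/n = 0.0625, which is exactly the kind of discrepancy that invalidates unadjusted homogeneity tests.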

13.
In sample surveys sometimes one encounters a situation where, for many sampling units, one or more variables of interest are valued zero or negligibly low while for some other units they are substantial because of heavy localization of the high-valued units in certain segments. Estimation may then be inaccurate if a chosen sample fails to capture enough of the high-valued units. In such situations, adaptive sampling, as an extension of the initial sample to capture additional high-valued units, may be more serviceable. However, the size of an adaptive sample may often far exceed that of the initial sample. In this paper we present a method to put desirable constraints on the adaptive sample-size to keep the latter in check. To examine the efficacy of this method, we illustrate its application to estimate total numbers of rural earners through specific vocations in a given district in India simultaneously for several vocations.

14.
Tests for unit roots in panel data have become very popular. Two attractive features of panel data unit root tests are the increased power compared to time-series tests, and the often well-behaved limiting distributions of the tests. In this paper we apply Monte Carlo simulations to investigate how well the normal approximation works for a heterogeneous panel data unit root test when there are only a few cross sections in the sample. We find that the normal approximation, which should be valid for large numbers of cross-sectional units, works well, at conventional significance levels, even when the number of cross sections is as small as two. This finding is valuable for the applied researcher since critical values will be easy to obtain and p-values will be readily available.

15.
In this paper, we translate variable selection for linear regression into a multiple testing problem, and select significant variables according to the testing result. New variable selection procedures are proposed based on the optimal discovery procedure (ODP) in multiple testing. Due to the ODP's optimality, for a given number of significant variables included, it will include fewer non-significant variables than marginal p-value based methods. Consistency of our procedures is established in theory and confirmed by simulation. Simulation results suggest that procedures based on multiple testing improve over procedures based on selection criteria, and that our new procedures perform better than marginal p-value based procedures.

16.
If the population of interest in a line transect survey consists of distinct sub-populations, or classes, line transect data can be used to estimate the proportion of the population in each class, or the ratio of the number of individuals in one class to another class. However, if the various classes have unequal overall detectability then the observed ratio or proportion is biased and should be adjusted. Estimators for the desired proportions and ratios are proposed and studied, and procedures for examining the equal detectability hypotheses are discussed.
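
The form of the adjustment is simple: divide each class count by its overall detectability before taking the ratio. A hedged one-line sketch (our function name; detectabilities assumed known here, whereas in practice they are estimated):

```python
def adjusted_class_ratio(n1, n2, d1, d2):
    # Observed counts n1, n2 of two classes on a line transect, with
    # overall detection probabilities d1, d2.  Unequal detectability
    # biases the raw ratio n1/n2; dividing each count by its own
    # detectability removes that bias.
    return (n1 / d1) / (n2 / d2)

# Half-detected class 1 vs fully detected class 2: the raw ratio 0.5
# is corrected back to the true ratio 1.0.
ratio = adjusted_class_ratio(50, 100, 0.5, 1.0)
```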

17.
Group testing is a method of pooling a number of units together and performing a single test on the resulting group. Group testing is an appealing option when few individual units are thought to be infected and the cost of testing is non-negligible. Overdispersion is the phenomenon of having greater variability than predicted by the random component of the model; this is common when modeling group testing data with a binomial distribution. The purpose of this paper is to provide a comparison of several established methods of constructing confidence intervals after adjusting for overdispersion. We evaluate and investigate each method in six different cases of group testing. A method based on the score statistic with correction for skewness is recommended. We illustrate the methods using two data sets, one from the detection of seed transmission and the other from serological testing for malaria.
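
A common starting point for score-based intervals in group testing is to form the Wilson (score) interval for the pool-positivity probability θ = 1 − (1 − p)^k and back-transform its endpoints to the prevalence scale. This sketch is the uncorrected score interval only; the skewness correction and overdispersion adjustment the abstract recommends are omitted.

```python
import math

def group_testing_score_ci(t, n, k, z=1.959963984540054):
    # Wilson score interval for theta = t/n positive pools out of n,
    # back-transformed to p via p = 1 - (1 - theta)^(1/k).
    theta_hat = t / n
    centre = (theta_hat + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(
        theta_hat * (1 - theta_hat) / n + z * z / (4 * n * n))
    lo, hi = max(0.0, centre - half), min(1.0, centre + half)
    to_p = lambda th: 1.0 - (1.0 - th) ** (1.0 / k)
    return to_p(lo), to_p(hi)
```

Because the transform is monotone, the back-transformed endpoints retain the score interval's coverage for θ; the adjustments studied in the paper address the extra variability that the plain binomial model misses.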

18.
The classification between stochastic trend stationarity and deterministic broken trend stationarity is important because incorrect inferences can follow if a stationary series with a broken trend is incorrectly classified as integrated. In this paper, we consider joint tests of the regular and seasonal unit root null hypothesis against broken trend stationarity alternatives where the location of the break is known or unknown. Based on the F-test proposed by Hasza and Fuller (1982, Ann. Statist. 10, 1209–1216), we develop testing procedures for distinguishing these two types of process. The asymptotic distributions of the test statistics are derived as functions of Wiener processes. A response surface regression analysis relating the finite-sample distributions to the break position is presented. Simulation experiments suggest that the power of the test is reasonable. The testing procedure is illustrated by the Canadian consumer price index series.

19.
Estimating the proportion of true null hypotheses, π0, has attracted much attention in the recent statistical literature. Besides its apparent relevance for a set of specific scientific hypotheses, an accurate estimate of this parameter is key for many multiple testing procedures. Most existing methods for estimating π0 in the literature are motivated by the independence assumption of test statistics, which is often not true in reality. Simulations indicate that most existing estimators can perform poorly in the presence of dependence among test statistics, mainly due to the increase of variation in these estimators. In this paper, we propose several data-driven methods for estimating π0 by incorporating the distribution pattern of the observed p-values as a practical approach to address potential dependence among test statistics. Specifically, we use a linear fit to give a data-driven estimate of the proportion of true-null p-values in (λ, 1] over the whole range [0, 1] instead of using the expected proportion at 1 − λ. We find that the proposed estimators may substantially decrease the variance of the estimated true null proportion and thus improve the overall performance.
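
The idea of using the whole p-value tail rather than a single λ can be sketched as follows: under a uniform null distribution the fraction of p-values above λ is about π0(1 − λ), so a least-squares line through the origin fitted over a grid of λ values pools information across the tail. This is in the spirit of the abstract's linear-fit estimator, but the grid and fitting details here are ours.

```python
def pi0_tail_regression(pvals, grid=None):
    # Fit tail fraction #{p > lambda}/m against (1 - lambda) through the
    # origin over a grid of lambda values; the slope estimates pi0.
    if grid is None:
        grid = [i / 20 for i in range(10, 20)]   # lambda = 0.50, ..., 0.95
    m = len(pvals)
    xs = [1.0 - lam for lam in grid]
    ys = [sum(p > lam for p in pvals) / m for lam in grid]
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return min(1.0, slope)
```

Averaging over many λ values is what damps the variance of the single-λ estimator, which matters most under dependence.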

20.
Group testing, in which individuals are pooled together and tested as a group, can be combined with inverse sampling to estimate the prevalence of a disease. Alternatives to the MLE are desirable because of its severe bias. We propose an estimator based on the bias correction method of Firth (1993), which is almost unbiased across the range of prevalences consistent with the group testing design. For equal group sizes, this estimator is shown to be equivalent to that derived by applying the correction method of Burrows (1987), and better than existing methods. For unequal group sizes, the problem has some intractable elements, but under some circumstances our proposed estimator can be found, and we show it to be almost unbiased. Calculation of the bias requires computer‐intensive approximation because of the infinite number of possible outcomes.
