Similar Documents
20 similar documents found.
1.
In the estimation of a proportion p by group testing (pooled testing), retesting of units within positive groups has received little attention due to the minimal gain in precision compared to testing additional units. If acquisition of additional units is impractical or too expensive, and testing is not destructive, we show that retesting can be a useful option. We propose the retesting of a random grouping of units from positive groups, and compare it with nested halving procedures suggested by others. We develop an estimator of p for our proposed method, and examine its variance properties. Using simulation we compare retesting methods across a range of group testing situations, and show that for most realistic scenarios, our method is more efficient.
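For context, a minimal sketch (Python; not the authors' retesting estimator) of the baseline group-testing estimator being improved upon: with n groups of equal size k and T positive groups, the MLE of p is 1 - (1 - T/n)^(1/k). The simulation below estimates its sampling variability, which is the quantity retesting is intended to reduce; all function names and parameter values are illustrative.

```python
import numpy as np

def gt_mle(T, n, k):
    """MLE of prevalence p from n groups of size k with T positive groups."""
    return 1.0 - (1.0 - T / n) ** (1.0 / k)

def simulate_gt(p, n, k, reps=20000, rng=None):
    """Simulate the no-retesting group-testing MLE; return its mean and SD."""
    rng = np.random.default_rng(rng)
    theta = 1.0 - (1.0 - p) ** k          # a group is positive iff it holds >= 1 positive unit
    T = rng.binomial(n, theta, size=reps)
    est = gt_mle(T, n, k)
    return est.mean(), est.std()

if __name__ == "__main__":
    mean_est, sd_est = simulate_gt(p=0.05, n=40, k=10, rng=1)
    print(f"mean of MLE = {mean_est:.4f}, SD = {sd_est:.4f}")
```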

2.
Group testing has long been recognized as an efficient method to classify all the experimental units into two mutually exclusive categories: defective or not defective. In recent years, more attention has been brought to the estimation of the population prevalence rate p of a disease, or of some property, using group testing. In this article, we propose two scaled squared-error loss functions, which improve the Bayesian approach to estimating p by reducing the mean squared error (MSE) of the Bayes estimators of p for small p. We show that the new estimators are preferred over the estimator from the usual squared-error loss function and the maximum likelihood estimator (MLE) for small p.
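The specific scaled losses proposed in the paper are not reproduced here; the sketch below only illustrates the general mechanism under an assumed Beta prior: for a weighted squared-error loss w(p)(p_hat - p)^2, the Bayes rule is E[w(p)p | data] / E[w(p) | data], evaluated on a grid against the binomial group-testing likelihood. The weight w(p) = 1/p^2 is used purely as an example of a scaling that penalizes relative error at small p.

```python
import numpy as np
from scipy.stats import beta, binom

def bayes_estimates(T, n, k, a=1.0, b=1.0, grid=4000):
    """Posterior-mean and weighted-loss Bayes estimates of p from group testing.

    T positive groups out of n groups of size k; Beta(a, b) prior on p.
    Under loss w(p)*(phat - p)^2 the Bayes rule is E[w(p)p|data]/E[w(p)|data];
    here w(p) = 1/p^2 is only an illustrative choice of scaling weight.
    """
    p = np.linspace(1e-6, 1 - 1e-6, grid)
    theta = 1.0 - (1.0 - p) ** k              # P(group positive) given p
    post = beta.pdf(p, a, b) * binom.pmf(T, n, theta)
    post /= np.trapz(post, p)                 # normalise on the grid
    post_mean = np.trapz(p * post, p)         # Bayes rule under ordinary squared error
    w = 1.0 / p**2
    weighted = np.trapz(w * p * post, p) / np.trapz(w * post, p)
    return post_mean, weighted

print(bayes_estimates(T=3, n=50, k=10))
```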

3.
Group testing is the process of combining individual samples and testing them as a group for the presence of an attribute. The use of such testing to estimate proportions is an important statistical tool in many applications. When samples are collected and tested in groups of different size, complications arise in the construction of exact confidence intervals. In this case, the numbers of positive groups have a multivariate distribution, and the difficulty stems from a lack of a natural ordering of the sample points. Exact two-sided intervals such as the equal-tail method based on maximum likelihood estimation, and those based on joint probability or likelihood ratio statistics, have been previously considered. In this paper several new estimators are developed and assessed. We show that the combined tails (or Blaker) method, based on a suitable ordering statistic, is the best choice in this setting. The methods are illustrated using a study involving the infection prevalence of Myxobolus cerebralis among free-ranging fish.
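The paper's setting has groups of different sizes, where the ordering problem arises; the sketch below sidesteps that issue by assuming a single common group size k, purely to show how a combined-tails (Blaker) acceptability function is inverted into an exact interval and then transformed back to the per-unit proportion p. Grid resolution and tolerances are arbitrary choices.

```python
import numpy as np
from scipy.stats import binom

def blaker_accept(theta, t, n):
    """Blaker acceptability of theta given t positive groups out of n."""
    x = np.arange(n + 1)
    probs = binom.pmf(x, n, theta)
    lower = binom.cdf(x, n, theta)            # P(T <= x)
    upper = 1.0 - binom.cdf(x - 1, n, theta)  # P(T >= x)
    gamma = np.minimum(lower, upper)          # tail measure for each outcome x
    return probs[gamma <= gamma[t] + 1e-12].sum()

def blaker_interval_p(t, n, k, alpha=0.05, grid=20001):
    """Combined-tails (Blaker) interval for p with n equal-sized groups of size k."""
    thetas = np.linspace(1e-9, 1 - 1e-9, grid)
    keep = np.array([blaker_accept(th, t, n) > alpha for th in thetas])
    theta_lo, theta_hi = thetas[keep][0], thetas[keep][-1]
    to_p = lambda th: 1.0 - (1.0 - th) ** (1.0 / k)   # back-transform to per-unit p
    return to_p(theta_lo), to_p(theta_hi)

print(blaker_interval_p(t=4, n=30, k=8))
```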

4.
Group testing procedures, in which groups containing several units are tested without testing each unit, are widely used as cost-effective procedures for estimating the proportion of defective units in a population. A problem arises when we apply these procedures to the detection of genetically modified organisms (GMOs), because the analytical instrument for detecting GMOs has a threshold of detection. If the group size (i.e., the number of units within a group) is large, the GMOs in a group may go undetected because of dilution, even if the group contains one unit of GMOs. Thus, a small group size (which we call the conventional group size) is conventionally used so that the presence of defective units is reliably detected whenever at least one GMO unit is included in the group. However, we show that the proportion of defective units can be estimated for any group size even if a threshold of detection exists; the estimate is easily obtained using functions implemented in a spreadsheet. We then show that the conventional group size is not always optimal for controlling the consumer's risk, because such a group size requires a larger number of groups for testing.
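A hedged sketch of one way such an estimate can be computed (the paper's spreadsheet formulation may differ): assume a group of size k is detected as positive only if it contains at least d defective units, where d is a hypothetical stand-in for the instrument's detection threshold. Because the probability that a group is detected is monotone in p, the MLE solves P(detected; p) = T/n by simple root finding.

```python
import numpy as np
from scipy.stats import binom
from scipy.optimize import brentq

def prob_group_positive(p, k, d):
    """P(a group of size k is detected) when at least d defective units are needed."""
    return 1.0 - binom.cdf(d - 1, k, p)

def mle_with_threshold(T, n, k, d):
    """MLE of the proportion p of defective units when the instrument only detects
    groups containing at least d defective units (d = 1 is ordinary group testing).
    Solves P(group detected; p) = T/n by root finding."""
    target = T / n
    if target <= 0.0:
        return 0.0
    if target >= 1.0:
        return 1.0
    return brentq(lambda p: prob_group_positive(p, k, d) - target, 1e-12, 1 - 1e-12)

# e.g. 7 of 60 groups of size 25 detected, hypothetical threshold of 2 defective units
print(mle_with_threshold(T=7, n=60, k=25, d=2))
```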

5.
Group testing, in which individuals are pooled together and tested as a group, can be combined with inverse sampling to estimate the prevalence of a disease. Alternatives to the MLE are desirable because of its severe bias. We propose an estimator based on the bias correction method of Firth (1993), which is almost unbiased across the range of prevalences consistent with the group testing design. For equal group sizes, this estimator is shown to be equivalent to that derived by applying the correction method of Burrows (1987), and better than existing methods. For unequal group sizes, the problem has some intractable elements, but under some circumstances our proposed estimator can be found, and we show it to be almost unbiased. Calculation of the bias requires computer-intensive approximation because of the infinite number of possible outcomes.
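The bias problem the paper addresses can be seen directly by simulation. The sketch below is not the Firth- or Burrows-corrected estimator; it only simulates the naive MLE under inverse sampling with equal group sizes (groups of size k tested until r positive groups are observed) to show how large its bias can be for small r.

```python
import numpy as np

def simulate_inverse_gt_mle(p, k, r, reps=20000, rng=1):
    """Monte Carlo bias of the naive MLE under inverse (negative binomial)
    group-testing sampling: groups of size k are tested until r are positive."""
    rng = np.random.default_rng(rng)
    theta = 1.0 - (1.0 - p) ** k                         # P(a group tests positive)
    # number of negative groups before the r-th positive ~ NegBinomial(r, theta)
    negatives = rng.negative_binomial(r, theta, size=reps)
    N = negatives + r                                    # total groups tested
    mle = 1.0 - (1.0 - r / N) ** (1.0 / k)
    return mle.mean() - p                                # estimated bias

for r in (2, 5, 10):
    print(r, round(simulate_inverse_gt_mle(p=0.02, k=10, r=r), 5))
```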

6.
Nonparametric predictive inference (NPI) is a statistical approach based on few assumptions about probability distributions, with inferences based on data. NPI assumes exchangeability of random quantities, both related to observed data and future observations, and uncertainty is quantified using lower and upper probabilities. In this paper, units from several groups are placed simultaneously on a lifetime experiment and times-to-failure are observed. The experiment may be ended before all units have failed. Depending on the available data and a few further assumptions, we present lower and upper probabilities for selecting the best group, the subset of best groups, and the subset including the best group. We also compare our approach of selecting the best group with some classical precedence selection methods. Throughout, examples are provided to demonstrate our method.
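The paper's selection procedures are not reproduced here; the sketch below only shows the basic NPI building block, the A(n) assumption, which assigns probability 1/(n+1) to each interval between consecutive ordered observations and yields lower and upper probabilities for a future observation. It assumes no ties and that the threshold t does not coincide with an observed value.

```python
import numpy as np

def npi_future_bounds(data, t):
    """Basic NPI (A(n)) lower and upper probabilities that the next observation
    exceeds t, given n observed failure times with no ties.  This is only the
    building block; the selection probabilities in the paper are more involved."""
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    greater = np.sum(x > t)           # intervals lying entirely above t
    lower = greater / (n + 1.0)
    upper = (greater + 1.0) / (n + 1.0)   # adds the interval containing t
    return lower, upper

print(npi_future_bounds([12.1, 15.3, 19.8, 24.4, 30.2], t=20.0))
```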

7.
We suggest a new approach to hypothesis testing for ergodic and stationary processes. In contrast to standard methods, the suggested approach makes it possible to construct tests based on any lossless data compression method, even if the distribution law of the codeword lengths is not known. We apply this approach to the following four problems: goodness-of-fit testing (or identity testing), testing for independence, testing for serial independence, and homogeneity testing, and suggest nonparametric statistical tests for these problems. It is important to note that data compressors used in practice (so-called archivers) can be used for the suggested testing.
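A toy illustration of the idea, not the authors' test statistic: a stationary sequence with serial dependence should compress better than random reorderings of itself, so the length of a general-purpose compressor's output can be calibrated by a permutation distribution. Here zlib stands in for an "archiver"; the sequence, permutation count, and p-value convention are all illustrative choices.

```python
import zlib
import numpy as np

def compressed_len(seq):
    """Length in bytes of the zlib-compressed byte representation of seq (values 0-255)."""
    return len(zlib.compress(bytes(seq), level=9))

def compression_serial_test(seq, n_perm=500, rng=1):
    """Toy permutation test for serial independence driven by a compressor:
    a small p-value indicates the original ordering compresses unusually well."""
    rng = np.random.default_rng(rng)
    seq = list(seq)
    observed = compressed_len(seq)
    null = [compressed_len(rng.permutation(seq).tolist()) for _ in range(n_perm)]
    # one-sided p-value: proportion of shuffles compressing at least as well
    return (1 + sum(l <= observed for l in null)) / (n_perm + 1)

# Markov-type binary sequence with strong serial dependence
rng = np.random.default_rng(0)
x = [0]
for _ in range(2000):
    x.append(x[-1] if rng.random() < 0.9 else 1 - x[-1])
print("p-value:", compression_serial_test(x))
```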

8.
Modelling accelerated life test data by using a Bayesian approach
Summary. Because of the high reliability of many modern products, accelerated life tests are becoming widely used to obtain timely information about their time-to-failure distributions. We propose a general class of accelerated life testing models which are motivated by the actual failure process of units from a limited failure population with a positive probability of not failing during the technological lifetime. We demonstrate a Bayesian approach to this problem, using a new class of models with non-monotone hazard rates that has potential scope for use far beyond accelerated life testing. Our methods are illustrated with the modelling and analysis of a data set on lifetimes of printed circuit boards under humidity accelerated life testing.
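A minimal sketch of the "limited failure population" ingredient only, not the authors' accelerated life testing model or its Bayesian fit: each unit eventually fails with probability pi (here given a Weibull lifetime, an assumed choice) and never fails otherwise, and units still running at the test's end time tau are right-censored. The log-likelihood below could be maximized directly or used inside an MCMC sampler.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import minimize

def lfp_neg_loglik(params, times, failed, tau):
    """Negative log-likelihood of a limited-failure-population model: a unit fails
    eventually with probability pi (Weibull(shape, scale) lifetime) and never fails
    with probability 1 - pi; the test stops at time tau."""
    pi, shape, scale = params
    if not (0 < pi < 1 and shape > 0 and scale > 0):
        return np.inf
    t_fail = times[failed]
    ll_fail = np.log(pi * weibull_min.pdf(t_fail, shape, scale=scale)).sum()
    S_tau = weibull_min.sf(tau, shape, scale=scale)
    ll_cens = np.sum(~failed) * np.log(1 - pi + pi * S_tau)   # survivors at tau
    return -(ll_fail + ll_cens)

# toy data: failure times (used only where failed=True), test censored at tau = 1000
times = np.array([210.0, 340.0, 515.0, 760.0, 0.0, 0.0, 0.0, 0.0])
failed = np.array([True, True, True, True, False, False, False, False])
fit = minimize(lfp_neg_loglik, x0=[0.6, 1.2, 600.0],
               args=(times, failed, 1000.0), method="Nelder-Mead")
print(fit.x)   # estimated (pi, shape, scale)
```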

9.
Group testing has been used in many fields of study to estimate proportions. When groups are of different sizes, the derivation of exact confidence intervals is complicated by the lack of a unique ordering of the event space. An exact interval estimation method is described here, in which outcomes are ordered according to a likelihood ratio statistic. The method is compared with another exact method, in which outcomes are ordered by their associated MLE. Plots of the P-value against the proportion are useful in examining the properties of the methods. Coverage provided by the intervals is assessed using several realistic group-testing procedures. The method based on the likelihood ratio, with a mid-P correction, is shown to give very good coverage in terms of closeness to the nominal level, and is recommended for this type of problem.

10.
Wang Xiaoyan et al., 《统计研究》 (Statistical Research), 2014, 31(9): 107-112
Variable selection is an important part of statistical modelling: choosing suitable variables yields robust models that are structurally simple and predict accurately. Under logistic regression, this paper proposes a new bi-level variable selection penalty, the adaptive Sparse Group Lasso (adSGL), whose distinctive feature is that screening is based on the grouping structure of the variables, so that selection is carried out both within and between groups. The advantage of the method is that individual coefficients and group coefficients are penalized to different degrees, which avoids over-penalizing large coefficients and thereby improves the estimation and prediction accuracy of the model. The main difficulty in fitting the model is that the penalized likelihood is not strictly convex, so the model is solved by group coordinate descent and a criterion for choosing the tuning parameters is established. Simulation studies show that, compared with the representative existing methods Sparse Group Lasso, Group Lasso, and Lasso, adSGL not only improves the accuracy of bi-level selection but also reduces model error. Finally, adSGL is applied to credit-card credit scoring, where it achieves higher classification accuracy and robustness than logistic regression.
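A small sketch of how a penalty of this bi-level type is composed; the paper's adaptive weights, tuning-parameter criterion, and group coordinate descent solver are not reproduced. The weight construction from an initial estimate shown below is a common adaptive-lasso-style choice and is only illustrative.

```python
import numpy as np

def adsgl_penalty(beta, groups, lam, alpha, w_ind, w_grp):
    """Adaptive sparse group lasso penalty of the form
        lam * [ alpha * sum_j w_ind[j]*|beta_j|
                + (1 - alpha) * sum_g w_grp[g]*sqrt(p_g)*||beta_g||_2 ],
    where `groups` maps each coefficient index to its group label.  The adaptive
    weights w_ind, w_grp would typically be built from an initial estimate so
    that large coefficients are penalised less."""
    beta = np.asarray(beta, dtype=float)
    l1 = np.sum(w_ind * np.abs(beta))
    l2 = 0.0
    for g, w in w_grp.items():
        idx = np.flatnonzero(groups == g)
        l2 += w * np.sqrt(idx.size) * np.linalg.norm(beta[idx])
    return lam * (alpha * l1 + (1.0 - alpha) * l2)

# toy example: 5 coefficients in 2 groups, weights from a hypothetical initial fit
beta0 = np.array([1.8, 0.0, 0.3, -0.9, 0.05])
groups = np.array([0, 0, 1, 1, 1])
w_ind = 1.0 / (np.abs(beta0) + 0.1)                 # illustrative adaptive weights
w_grp = {0: 0.5, 1: 1.2}
print(adsgl_penalty(beta0, groups, lam=0.2, alpha=0.6, w_ind=w_ind, w_grp=w_grp))
```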

11.
Inverse binomial sampling is preferred when a quick report is required. It is also recommended when the population proportion is very small, to ensure that at least one positive sample is obtained. Group testing has been discussed extensively under the binomial model, but much less so under the negative binomial model. In this study, we investigate how to determine the group size in inverse binomial group testing. We propose choosing the optimal group size by minimizing the asymptotic variance of the estimator, or the cost relative to the Fisher information. We show the good performance of our estimator by applying it to Chlamydia data.
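A hedged sketch of the optimization idea: under the simpler binomial design (a fixed number of groups rather than inverse sampling), the Fisher information about p per unit tested has a closed form, and the group size maximizing it can be found by a direct search. The paper's criteria for the negative binomial design differ in detail; this is only meant to show the shape of the calculation.

```python
import numpy as np

def info_per_unit(p, k):
    """Fisher information about p per unit tested when groups of size k are used
    and a group is positive iff it contains at least one positive individual."""
    q = 1.0 - p
    theta = 1.0 - q**k                    # P(group positive)
    dtheta = k * q ** (k - 1)             # d theta / d p
    return (dtheta**2 / (theta * (1.0 - theta))) / k

def optimal_group_size(p, k_max=500):
    ks = np.arange(1, k_max + 1)
    return ks[np.argmax([info_per_unit(p, k) for k in ks])]

for p in (0.005, 0.01, 0.05):
    print(p, optimal_group_size(p))
```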

12.
In this study, we consider the construction of confidence intervals for the population proportion when group testing is subject to misclassification. We propose two confidence intervals based on the Cornish-Fisher expansion and a modified Wilson interval based on a newly developed estimator. We investigate the performance of these intervals extensively and also apply the methods to real datasets. Our newly derived methods have competitive performance compared with existing methods.
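One natural construction in this setting, not necessarily the paper's modified Wilson interval: form a Wilson score interval for the probability that a group tests positive, correct the endpoints for an assay with assumed sensitivity Se and specificity Sp, and transform back to the individual-level proportion p (the transformation is monotone when Se + Sp > 1).

```python
import numpy as np
from scipy.stats import norm

def wilson_ci(T, n, alpha=0.05):
    """Wilson score interval for the probability that a group tests positive."""
    z = norm.ppf(1 - alpha / 2)
    phat = T / n
    centre = (phat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

def p_interval_misclass(T, n, k, Se, Sp, alpha=0.05):
    """Transform a Wilson interval on the observed group-positivity rate into an
    interval for the individual-level proportion p, allowing for an assay with
    sensitivity Se and specificity Sp (one natural construction; the paper's
    modified Wilson interval may differ)."""
    lo_obs, hi_obs = wilson_ci(T, n, alpha)
    def to_p(obs):
        theta = (obs - (1 - Sp)) / (Se + Sp - 1)   # true P(group contains a positive)
        theta = min(max(theta, 0.0), 1.0)
        return 1.0 - (1.0 - theta) ** (1.0 / k)
    return to_p(lo_obs), to_p(hi_obs)

print(p_interval_misclass(T=9, n=50, k=10, Se=0.95, Sp=0.98))
```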

13.
In this article, we develop a method for checking the estimating equations used for joint estimation of the regression parameters and the overdispersion parameters, based on a one-dimensional projected covariate. This method differs from general testing methods in that it can be applied to a high-dimensional response, whereas the classical testing methods cannot simply be extended to high-dimensional problems to construct a powerful test. Furthermore, the properties of the test statistics are investigated, and a nonparametric Monte Carlo test (NMCT) is suggested to determine the critical values of the test statistics under the null hypothesis.

14.
The step-stress model is a special case of accelerated life testing that allows for testing of units under different levels of stress, with changes occurring at various intermediate stages of the experiment. Interest then lies in inference for the mean lifetime at each stress level. All step-stress models discussed so far in the literature are based on a single experiment. For the situation when data have been collected from different experiments in which all the test units were exposed to the same levels of stress but with possibly different points of change of stress, we introduce a model that combines the different experiments and facilitates a meta-analysis for the estimation of the mean lifetimes. We then discuss in detail the likelihood inference for the case of simple step-stress experiments under exponentially distributed lifetimes with Type-II censoring.
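As a single-experiment building block (the meta-analytic combination across experiments is the paper's contribution and is not reproduced), the MLE of an exponential mean lifetime under Type-II censoring is the total time on test divided by the number of observed failures:

```python
import numpy as np

def expo_mle_type2(failure_times, n):
    """MLE of the exponential mean lifetime under Type-II censoring: the experiment
    stops at the r-th failure out of n units on test, so the estimate is
    total time on test divided by r."""
    t = np.sort(np.asarray(failure_times, dtype=float))
    r = t.size
    ttt = t.sum() + (n - r) * t[-1]          # total time on test
    return ttt / r

# e.g. first r = 5 failures observed out of n = 12 units at one stress level
print(expo_mle_type2([23.0, 41.0, 66.0, 90.0, 131.0], n=12))
```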

15.
The paper proposes a Bayesian quantile regression method for hierarchical linear models. Existing approaches to hierarchical linear quantile regression are scarce, and most of them are not developed from a Bayesian perspective, which is important for hierarchical models. In this paper, based on Bayesian theory and Markov chain Monte Carlo methods, we introduce asymmetric Laplace distributed errors to simulate the joint posterior distributions of the population parameters and the across-unit parameters, and then derive their posterior quantile inferences. We run a simulation of the proposed method to examine the effects on parameters induced by units and quantile levels; the method is also applied to study the relationship between Chinese rural residents' annual family income and their cultivated areas. Both the simulation and the real data analysis indicate that the method is effective and accurate.
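A much-simplified sketch of the computational idea, not the paper's hierarchical model: with an asymmetric Laplace working likelihood at quantile level tau and a flat prior, a random-walk Metropolis sampler already yields posterior draws for the regression coefficients. The scale sigma is held fixed, and the step size, iteration count, and toy data are arbitrary.

```python
import numpy as np

def ald_loglik(beta, sigma, X, y, tau):
    """Log-likelihood with asymmetric Laplace errors: the working likelihood
    commonly used for Bayesian quantile regression at quantile level tau."""
    u = y - X @ beta
    rho = u * (tau - (u < 0))                  # check (pinball) loss
    return len(y) * np.log(tau * (1 - tau) / sigma) - rho.sum() / sigma

def bayes_qr(X, y, tau=0.5, sigma=1.0, n_iter=20000, step=0.05, rng=1):
    """Random-walk Metropolis sampler for a flat-prior, non-hierarchical
    Bayesian quantile regression (a simplified sketch only)."""
    rng = np.random.default_rng(rng)
    beta = np.zeros(X.shape[1])
    ll = ald_loglik(beta, sigma, X, y, tau)
    draws = []
    for _ in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.size)
        ll_prop = ald_loglik(prop, sigma, X, y, tau)
        if np.log(rng.random()) < ll_prop - ll:
            beta, ll = prop, ll_prop
        draws.append(beta.copy())
    return np.array(draws[n_iter // 2:])       # discard burn-in

# toy data: income-style response with one covariate and heteroscedastic noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 1.0 + 0.8 * x + rng.standard_normal(200) * (0.5 + 0.1 * x)
X = np.column_stack([np.ones_like(x), x])
print(bayes_qr(X, y, tau=0.75).mean(axis=0))    # posterior means at the 0.75 quantile
```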

16.
Group testing has its origin in the identification of syphilis in the U.S. Army during World War II. Much of the theoretical framework of group testing was developed starting in the late 1950s, with continued work into the 1990s. Recently, with the advent of new laboratory and genetic technologies, there has been increasing interest in group testing designs for cost-saving purposes. In this article, we compare different nested designs, including Dorfman, Sterrett, and an optimal nested procedure obtained through dynamic programming. To elucidate these comparisons, we develop closed-form expressions for the optimal Sterrett procedure and provide a concise review of the prior literature on other commonly used procedures. We consider designs where the prevalence of disease is known, and also investigate the robustness of these procedures when the prevalence is incorrectly specified. This article provides a technical presentation that will be of interest to researchers as well as being useful from a pedagogical perspective. Supplementary material for this article is available online.
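For the simplest of the compared procedures, Dorfman's, the expected number of tests per individual when prevalence p is known has the familiar closed form 1/k + 1 - (1 - p)^k, and the optimal pool size follows from a direct search (a textbook calculation, included only as a reference point for the comparisons in the article):

```python
import numpy as np

def dorfman_tests_per_person(p, k):
    """Expected tests per individual under the Dorfman procedure: one pooled test
    per group of size k, plus k individual retests whenever the pool is positive
    (prevalence p assumed known, individuals independent)."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def optimal_dorfman_k(p, k_max=100):
    ks = np.arange(2, k_max + 1)
    cost = np.array([dorfman_tests_per_person(p, k) for k in ks])
    return ks[cost.argmin()], cost.min()

for p in (0.01, 0.05, 0.10):
    k, c = optimal_dorfman_k(p)
    print(f"p = {p:.2f}: optimal pool size {k}, {c:.3f} tests per person")
```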

17.
Traditionally, noninferiority hypotheses have been tested using a frequentist method with a fixed margin. Given that information for the control group is often available from previous studies, it is interesting to consider a Bayesian approach in which information is “borrowed” for the control group to improve efficiency. However, construction of an appropriate informative prior can be challenging. In this paper, we consider a hybrid Bayesian approach for testing noninferiority hypotheses in studies with a binary endpoint. To account for heterogeneity between the historical information and the current trial for the control group, a dynamic P value–based power prior parameter is proposed to adjust the amount of information borrowed from the historical data. This approach extends the simple test-then-pool method to allow a continuous discounting power parameter. An adjusted α level is also proposed to better control the type I error. Simulations are conducted to investigate the performance of the proposed method and to make comparisons with other methods including test-then-pool and hierarchical modeling. The methods are illustrated with data from vaccine clinical trials.
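A sketch of the power-prior mechanics under a Beta-binomial model; the discount function used below (delta set to a power of the p-value comparing the historical and current control arms) is a hypothetical stand-in for the paper's dynamic P value-based parameter, and the adjusted α level is not implemented.

```python
import numpy as np
from scipy.stats import beta, norm

def two_prop_pvalue(y1, n1, y2, n2):
    """Two-sided z-test p-value comparing historical and current control rates."""
    p1, p2 = y1 / n1, y2 / n2
    pool = (y1 + y2) / (n1 + n2)
    se = np.sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    return 2 * norm.sf(abs(p1 - p2) / se)

def power_prior_posterior(y_c, n_c, y_h, n_h, a0=1.0, b0=1.0, gamma=1.0):
    """Posterior Beta parameters for the current control rate when the historical
    data are discounted by delta in [0, 1].  Here delta is a hypothetical function
    of the p-value of the historical-vs-current comparison: more conflict gives a
    smaller p-value and hence less borrowing."""
    pval = two_prop_pvalue(y_h, n_h, y_c, n_c)
    delta = pval ** gamma
    a = a0 + delta * y_h + y_c
    b = b0 + delta * (n_h - y_h) + (n_c - y_c)
    return a, b, delta

a, b, delta = power_prior_posterior(y_c=42, n_c=150, y_h=130, n_h=400)
print(f"delta = {delta:.3f}, posterior mean control rate = {beta.mean(a, b):.3f}")
```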

18.
In situations where individuals are screened for an infectious disease or other binary characteristic and where resources for testing are limited, group testing can offer substantial benefits. Group testing, where subjects are tested in groups (pools) initially, has been successfully applied to problems in blood bank screening, public health, drug discovery, genetics, and many other areas. In these applications, often the goal is to identify each individual as positive or negative using initial group tests and subsequent retests of individuals within positive groups. Many group testing identification procedures have been proposed; however, the vast majority of them fail to incorporate heterogeneity among the individuals being screened. In this paper, we present a new approach to identify positive individuals when covariate information is available on each. This covariate information is used to structure how retesting is implemented within positive groups; therefore, we call this new approach "informative retesting." We derive closed-form expressions and implementation algorithms for the probability mass functions for the number of tests needed to decode positive groups. These informative retesting procedures are illustrated through a number of examples and are applied to chlamydia and gonorrhea testing in Nebraska for the Infertility Prevention Project. Overall, our work shows compelling evidence that informative retesting can dramatically decrease the number of tests while providing accuracy similar to established non-informative retesting procedures.
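The paper's informative retesting algorithms and their probability mass functions are not reproduced; the simulation below only illustrates the underlying intuition with a simplified Sterrett-style retesting rule: within a positive pool, retesting individuals in descending order of covariate-based predicted risk tends to find positives sooner and clear the remainder with fewer tests than retesting in random order. The risk values are hypothetical and the rule omits the classical procedure's free logical inferences.

```python
import numpy as np

def sterrett_retests(statuses):
    """Retests needed to classify everyone in a pool that tested positive, using a
    simplified Sterrett-style rule: test individuals one by one until a positive is
    found, then test the pooled remainder; if that pool is negative stop, otherwise
    repeat on the remainder.  Every classification here uses a test."""
    tests = 0
    remaining = list(statuses)
    while remaining:
        while remaining:                     # individual tests until a positive
            tests += 1
            if remaining.pop(0):
                break
        if remaining:                        # pool the untested remainder
            tests += 1
            if not any(remaining):           # pooled retest negative: all cleared
                break
    return tests

def compare_orders(probs, reps=5000, rng=1):
    """Mean retests within positive pools when individuals are retested in
    descending order of predicted risk versus in random order."""
    rng = np.random.default_rng(rng)
    probs = np.asarray(probs, dtype=float)
    informative, random_order = [], []
    for _ in range(reps):
        status = rng.random(probs.size) < probs
        if not status.any():
            continue                          # pool negative: no retesting needed
        informative.append(sterrett_retests(status[np.argsort(-probs)]))
        random_order.append(sterrett_retests(status[rng.permutation(probs.size)]))
    return np.mean(informative), np.mean(random_order)

# a pool of 8 people with heterogeneous (hypothetical) predicted risks
print(compare_orders([0.20, 0.10, 0.05, 0.03, 0.02, 0.02, 0.01, 0.01]))
```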

19.
In the literature, a systematic method of obtaining group testing designs is not available at present. Weideman and Raghavarao (1987a, b) gave methods for the construction of non-adaptive hypergeometric group testing designs for identifying at most two defectives by using a dual method. In the present investigation we develop a method of constructing group testing designs from (i) hypercubic designs for t ≡ 3 (mod 6) and (ii) balanced incomplete block designs for t ≡ 1 (mod 6) and t ≡ 3 (mod 6). These constructions are accomplished by the use of dual designs. The designs so constructed satisfy specified properties and attain an optimal bound as discussed by Weideman and Raghavarao (1987a, b). It is also shown that the condition for pairwise disjoint sets of a BIBD for t ≡ 1 (mod 6) given by Weideman and Raghavarao (1987b) does not hold for all such designs.

20.
Many cancers and neuro-related diseases display significant phenotypic and genetic heterogeneity across subjects and subpopulations. Characterizing such heterogeneity could transform our understanding of the etiology of these conditions and inspire new approaches to urgently needed prevention, diagnosis, treatment, and prognosis. However, most existing statistical methods face major challenges in delineating such heterogeneity at both the group and individual levels. The aim of this article is to propose a novel statistical disease-mapping (SDM) framework to address some of these challenges. We develop an efficient estimation method for the unknown parameters in SDM and delineate individual and group disease maps. Statistical inference procedures, such as hypothesis tests for the parameters of interest, are also investigated. Both simulation studies and a real data analysis of the ADNI hippocampal surface dataset show that our SDM not only effectively detects diseased regions in each patient but also provides a group disease-mapping analysis of Alzheimer's disease subgroups.
