Similar Documents
20 similar documents found.
1.
In the estimation of a proportion p by group testing (pooled testing), retesting of units within positive groups has received little attention due to the minimal gain in precision compared to testing additional units. If acquisition of additional units is impractical or too expensive, and testing is not destructive, we show that retesting can be a useful option. We propose the retesting of a random grouping of units from positive groups, and compare it with nested halving procedures suggested by others. We develop an estimator of p for our proposed method, and examine its variance properties. Using simulation we compare retesting methods across a range of group testing situations, and show that for most realistic scenarios, our method is more efficient.
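As a point of reference for the group-testing setting above, the following Python sketch computes the classical single-stage estimator of p and its delta-method variance for groups of equal size with a perfect test; it is a textbook estimator, not the retesting estimator proposed in the paper, and the counts in the example are hypothetical.

    import numpy as np

    def group_testing_estimate(n_pos, n_groups, group_size):
        # Classical one-stage group-testing estimator of a proportion p (perfect test assumed);
        # this is NOT the retesting estimator developed in the paper.
        pi_hat = n_pos / n_groups                                   # estimated P(group tests positive)
        p_hat = 1.0 - (1.0 - pi_hat) ** (1.0 / group_size)
        # Delta-method variance approximation
        var_hat = ((1.0 - p_hat) ** (2 - group_size)
                   * (1.0 - (1.0 - p_hat) ** group_size)
                   / (n_groups * group_size ** 2))
        return p_hat, var_hat

    # Hypothetical example: 12 positive groups out of 50 groups of size 10
    p_hat, var_hat = group_testing_estimate(12, 50, 10)
    print(round(p_hat, 4), round(np.sqrt(var_hat), 4))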

2.
Since Dorfman's seminal work on the subject, group testing has been widely adopted in epidemiological studies. In Dorfman's context of detecting syphilis, group testing entails pooling blood samples and testing the pools, as opposed to testing individual samples. A negative pool indicates that all individuals in the pool are free of the syphilis antigen, whereas a positive pool suggests that one or more individuals carry the antigen. With covariate information collected, researchers have considered regression models that allow one to estimate covariate-adjusted disease probability. We study maximum likelihood estimators of covariate effects in these regression models when the group testing response is prone to error. We show that, when compared with inference drawn from individual testing data, inference based on group testing data can be more resilient to response misclassification in terms of bias and efficiency. We provide valuable guidance on designing the group composition to alleviate adverse effects of misclassification on statistical inference.
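To make the misclassified group-testing regression setting concrete, here is a rough Python sketch of a pool-level logistic log-likelihood with known assay sensitivity and specificity; the logit link, the Se/Sp values, the pool size and the simulated data are illustrative assumptions, not the authors' exact model.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    def pooled_neg_loglik(beta, X, pool_ids, Z, Se=0.95, Sp=0.98):
        # Negative log-likelihood for logistic regression fitted to pool-level test results Z,
        # with known assay sensitivity Se and specificity Sp (illustrative values).
        p = expit(X @ beta)                              # individual disease probabilities
        nll = 0.0
        for j in np.unique(pool_ids):
            q = np.prod(1.0 - p[pool_ids == j])          # P(every member of pool j is negative)
            prob_pos = Se * (1.0 - q) + (1.0 - Sp) * q   # P(pool j tests positive)
            nll -= Z[j] * np.log(prob_pos) + (1 - Z[j]) * np.log(1.0 - prob_pos)
        return nll

    # Simulated toy data: 200 individuals in 50 pools of size 4
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    y = rng.binomial(1, expit(X @ np.array([-2.0, 1.0])))
    pool_ids = np.repeat(np.arange(50), 4)
    truly_pos = np.array([y[pool_ids == j].any() for j in range(50)])
    Z = rng.binomial(1, np.where(truly_pos, 0.95, 0.02))   # imperfect pool-level test
    fit = minimize(pooled_neg_loglik, x0=np.zeros(2), args=(X, pool_ids, Z))
    print(fit.x)                                           # estimates of the logistic coefficients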

3.
There are often situations where two or more regression functions are ordered over a range of covariate values. In this paper, we develop efficient constrained estimation and testing procedures for such models. Specifically, necessary and sufficient conditions for ordering generalized linear regressions are given and shown to unify previous results obtained for simple linear regression, for polynomial regression and in the analysis of covariance models. We show that estimating the parameters of ordered linear regressions requires either quadratic programming or semi-infinite programming, depending on the shape of the covariate space. A distance-type test for order is proposed. Simulations demonstrate that the proposed methodology improves the mean square error and power compared with the usual, unconstrained, estimation and testing procedures. Improvements are often substantial. The methodology is extended to order generalized linear models where convex semi-infinite programming plays a role. The methodology is motivated by, and applied to, a hearing loss study.
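The quadratic-programming remark can be illustrated for the simplest case of two straight lines ordered over an interval [L, U], where it suffices to impose the ordering at the two endpoints; the sketch below, with simulated data and hypothetical endpoints, solves that constrained least-squares problem and is not the paper's general procedure.

    import numpy as np
    from scipy.optimize import minimize, LinearConstraint

    # Constrained least squares for two simple linear regressions, line 1 required to lie
    # below line 2 over the covariate interval [L, U]; on an interval the ordering holds
    # everywhere iff it holds at the two endpoints, giving a small quadratic program.
    rng = np.random.default_rng(6)
    L, U = 0.0, 1.0
    x1, x2 = rng.uniform(L, U, 60), rng.uniform(L, U, 60)
    y1 = 1.0 + 0.5 * x1 + rng.normal(scale=0.3, size=60)
    y2 = 1.2 + 1.0 * x2 + rng.normal(scale=0.3, size=60)

    def rss(theta):                                   # theta = (a1, b1, a2, b2)
        a1, b1, a2, b2 = theta
        return np.sum((y1 - a1 - b1 * x1) ** 2) + np.sum((y2 - a2 - b2 * x2) ** 2)

    # (a2 + b2*L) - (a1 + b1*L) >= 0  and  (a2 + b2*U) - (a1 + b1*U) >= 0
    A = np.array([[-1.0, -L, 1.0, L],
                  [-1.0, -U, 1.0, U]])
    fit = minimize(rss, x0=np.zeros(4), method='trust-constr',
                   constraints=[LinearConstraint(A, 0.0, np.inf)])
    print(fit.x)                                      # ordered estimates (a1, b1, a2, b2)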

4.
Testing procedures for ordered covariate effects are developed in the repeated measures experiment. The maximum likelihood estimators of covariate effects under the ordered hypothesis are approximated by the isotonic regression of their unconstrained estimators. The asymptotic null distributions of the test statistics are chi-bar-square distributions, which are mixtures of chi-square distributions. A Monte Carlo simulation reveals that the proposed test for ordered covariate effects is substantially more powerful than the usual chi-square test that neglects the information on the order restriction. These testing methods are applied to analyze the effect of vitamin E diet supplement on the growth rate of animals.
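A generic version of such an order-restricted test can be sketched as follows: the unconstrained estimates are projected by isotonic regression and the resulting chi-bar-square-type statistic is calibrated by Monte Carlo simulation under the null; the inverse-variance (diagonal) weighting and the simulated calibration are simplifications, not the paper's exact procedure, and the example numbers are hypothetical.

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    def ordered_effects_test(beta_hat, cov, n_sim=5000, seed=0):
        # Tests H0: all effects equal against H1: effects non-decreasing, using a
        # chi-bar-square-type statistic based on weighted isotonic regression.
        rng = np.random.default_rng(seed)
        beta_hat = np.asarray(beta_hat, dtype=float)
        w = 1.0 / np.diag(cov)                                 # inverse-variance (diagonal) weights
        x = np.arange(len(beta_hat))
        iso = IsotonicRegression()

        def tstat(b):
            b_iso = iso.fit_transform(x, b, sample_weight=w)   # order-restricted estimate
            bbar = np.sum(w * b) / np.sum(w)
            return np.sum(w * (b_iso - bbar) ** 2)

        t_obs = tstat(beta_hat)
        sims = rng.multivariate_normal(np.zeros(len(beta_hat)), cov, size=n_sim)
        t_null = np.array([tstat(b) for b in sims])            # null distribution by simulation
        return t_obs, float(np.mean(t_null >= t_obs))          # statistic and Monte-Carlo p-value

    # Hypothetical ordered dose effects with independent standard errors of 0.2
    print(ordered_effects_test([0.10, 0.50, 0.90], np.diag([0.04, 0.04, 0.04])))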

5.
Two-phase study designs can reduce cost and other practical burdens associated with large scale epidemiologic studies by limiting ascertainment of expensive covariates to a smaller but informative sub-sample (phase-II) of the main study (phase-I). During the analysis of such studies, however, subjects who are selected at phase-I but not at phase-II remain informative, as they may have partial covariate information. A variety of semi-parametric methods now exist for incorporating such data from phase-I subjects when the covariate information can be summarized into a finite number of strata. In this article, we consider extending the pseudo-score approach proposed by Chatterjee et al. (J Am Stat Assoc 98:158–168, 2003) using a kernel smoothing approach to incorporate information on continuous phase-I covariates. Practical issues and algorithms for implementing the methods using existing software are discussed. A sandwich-type variance estimator based on the influence function representation of the pseudo-score function is proposed. The finite sample performance of the methods is studied using simulated data. The advantage of the proposed smoothing approach over alternative methods that use discretized phase-I covariate information is illustrated using two-phase data simulated within the National Wilms Tumor Study (NWTS).

6.
Current methods of testing the equality of conditional correlations of bivariate data on a third variable of interest (covariate) are limited because they require discretizing the covariate when it is continuous. In this study, we propose a linear model approach for estimation and hypothesis testing of the Pearson correlation coefficient, where the correlation itself can be modeled as a function of continuous covariates. The restricted maximum likelihood method is applied for parameter estimation, and the corrected likelihood ratio test is performed for hypothesis testing. This approach allows for flexible and robust inference and prediction of the conditional correlations based on the linear model. Simulation studies show that the proposed method is statistically more powerful and more flexible in accommodating complex covariate patterns than the existing methods. In addition, we illustrate the approach by analyzing the correlation between the physical component summary and the mental component summary of the MOS SF-36 form across a fair number of covariates in the national survey data.
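A stripped-down version of modelling a correlation as a function of a covariate can be written down directly: assume standardized bivariate normal margins and let the correlation follow a Fisher-z (tanh) link in a linear predictor. The sketch below fits this by maximum likelihood and is only illustrative; it contains none of the paper's REML estimation or corrected likelihood ratio test, and the simulated data and coefficients are made up.

    import numpy as np
    from scipy.optimize import minimize

    def corr_model_nll(gamma, y1, y2, X):
        # Negative log-likelihood for standardized bivariate normal data whose correlation
        # is modelled as rho_i = tanh(X_i @ gamma); means and variances are treated as
        # known (0 and 1) to keep the sketch short.
        rho = np.clip(np.tanh(X @ gamma), -0.999, 0.999)
        quad = (y1**2 - 2 * rho * y1 * y2 + y2**2) / (1.0 - rho**2)
        return np.sum(0.5 * np.log(1.0 - rho**2) + 0.5 * quad)

    # Simulated example: correlation increases with a continuous covariate x
    rng = np.random.default_rng(5)
    n = 500
    x = rng.uniform(-1, 1, n)
    X = np.column_stack([np.ones(n), x])
    rho_true = np.tanh(0.2 + 0.8 * x)
    z1, z2 = rng.normal(size=n), rng.normal(size=n)
    y1 = z1
    y2 = rho_true * z1 + np.sqrt(1.0 - rho_true**2) * z2
    fit = minimize(corr_model_nll, np.zeros(2), args=(y1, y2, X))
    print(fit.x)        # should land reasonably close to the generating values (0.2, 0.8)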

7.
This paper aims to estimate the false negative fraction of a multiple screening test for bowel cancer, where those who give negative results for six consecutive tests do not have their true disease status verified. A subset of these same individuals is given a further screening test, for the sole purpose of evaluating the accuracy of the primary test. This paper proposes a beta heterogeneity model for the probability of a diseased individual ‘testing positive’ on any single test, and it examines the consequences of this model for inference on the false negative fraction. The method can be generalized to the case where selection for further testing is informative, though this did not appear to be the case for the bowel-cancer data.
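Under a beta heterogeneity model the key quantity has a closed form: if the per-test detection probability follows a Beta(a, b) distribution, the chance that a diseased person is missed by n consecutive tests is E[(1 - theta)^n] = B(a, b + n)/B(a, b). The short sketch below evaluates this identity; the parameter values are purely illustrative, not estimates from the bowel-cancer data.

    import numpy as np
    from scipy.special import betaln

    def missed_fraction(a, b, n_tests=6):
        # P(a diseased person is negative on n_tests consecutive screens) when the per-test
        # detection probability theta ~ Beta(a, b):  E[(1 - theta)^n] = B(a, b + n) / B(a, b).
        return np.exp(betaln(a, b + n_tests) - betaln(a, b))

    # Illustrative (hypothetical) heterogeneity parameters
    print(missed_fraction(a=2.0, b=1.0))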

8.
Cox's proportional hazards model is routinely used in many applied fields, sometimes, however, with too little emphasis on the fit of the model. In this paper, we suggest some new tests for investigating whether or not covariate effects vary with time. These tests are a natural and integrated part of an extended version of the Cox model. An important new feature of the suggested test is that time constancy for a specific covariate is examined in a model where some effects of other covariates are allowed to vary with time and some are constant, thus making successive testing of time-dependency possible. The proposed techniques are illustrated with the well-known Mayo liver disease data, and a small simulation study investigates the finite sample properties of the tests.

9.
When estimating a proportion p by group testing and increased precision is desired, it is sometimes impractical to obtain additional individuals, but it is possible to retest groups formed from the individuals within the groups that test positive at the first stage. Hepworth and Watson assessed four methods of retesting, and recommended a random regrouping of individuals from the first stage. They developed an estimator of p for their proposed method, and, because of the analytic complexity, used simulation to examine its variance properties. We now provide an analytical solution to the variance of the estimator, and compare its performance with the earlier simulated results. We show that our solution gives an acceptable approximation in a reasonable range of circumstances.

10.
Several testing procedures are proposed that can detect change-points in the error distribution of non-parametric regression models. Different settings are considered where the change-point either occurs at some time point or at some value of the covariate. Fixed as well as random covariates are considered. Weak convergence of the suggested difference of sequential empirical processes based on non-parametrically estimated residuals to a Gaussian process is proved under the null hypothesis of no change-point. In the case of testing for a change in the error distribution that occurs with increasing time in a model with random covariates, the test statistic is asymptotically distribution-free and the asymptotic quantiles can be used for the test. This special test statistic can also detect a change in the regression function. In all other cases the asymptotic distribution depends on unknown features of the data-generating process, and a bootstrap procedure is proposed in these cases. The small sample performance of the proposed tests is investigated by means of a simulation study, and the tests are applied to a data example.

11.
Some simple test procedures are considered for comparing several group means with a standard value when the data are in a one-way layout. The underlying distributions are assumed to be normal with possibly unequal variances. The tests are based on a union-intersection formulation and can be applied in a form similar to a Shewhart control chart. Both two-sided and one-sided alternatives are considered. The power of the tests can be obtained from tables of a non-central t distribution. Implementation of the tests is illustrated with a numerical example. The tests help identify any group means different from the standard and might lead to a decision about rejecting the null hypothesis before all the group means are observed. The resulting savings in time and resources might be valuable in applications where the number of groups is large and the cost of acquiring data is high. For situations where the normality assumption is untenable, a non-parametric procedure based on one-sample sign tests is considered.
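A minimal version of the union-intersection idea can be coded directly: each group gets a one-sample t test against the standard value, with the per-group level chosen so that the overall level is alpha under independence (a Sidak-type adjustment). The sketch below is illustrative only and does not reproduce the paper's control-chart implementation or non-central t power calculations; the data are simulated.

    import numpy as np
    from scipy import stats

    def union_intersection_test(groups, mu0, alpha=0.05):
        # One-sample t test of each group mean against the standard value mu0; the per-group
        # level is a Sidak-type adjustment so the overall level is alpha under independence.
        k = len(groups)
        alpha_per = 1.0 - (1.0 - alpha) ** (1.0 / k)
        flagged = []
        for i, y in enumerate(groups):
            y = np.asarray(y, dtype=float)
            t = (y.mean() - mu0) / (y.std(ddof=1) / np.sqrt(len(y)))
            crit = stats.t.ppf(1.0 - alpha_per / 2.0, df=len(y) - 1)   # two-sided alternative
            if abs(t) > crit:
                flagged.append((i, round(t, 2)))
        return flagged                                # groups whose mean differs from the standard

    rng = np.random.default_rng(1)
    groups = [rng.normal(10, 1, 8), rng.normal(10, 2, 8), rng.normal(12, 1, 8)]
    print(union_intersection_test(groups, mu0=10.0))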

12.
In the evaluation of efficacy of a vaccine to protect against disease caused by finitely many diverse infectious pathogens, it is often important to assess whether vaccine protection depends on variations of the exposing pathogen. This problem can be formulated under a competing risks model where the endpoint event is the infection and the cause of failure is the infecting strain type, determined after the infection is diagnosed. The strain-specific vaccine efficacy is defined as one minus the cause-specific hazard ratio (vaccine/placebo). This paper develops some simple procedures for testing whether the vaccine affords protection against various strains and whether and how the strain-specific vaccine efficacy depends on the type of exposing strain, adjusting for covariate effects. The Cox proportional hazards model is used to relate the cause-specific outcomes to explanatory variables. The finite sample properties of the proposed tests are studied through simulations and are shown to have good performance. The tests developed are applied to the data collected from an oral cholera vaccine trial.

13.
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution and resorting to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise and a diffuse component. We assess these approaches for the normal case via simulation and show they have some attractive features as compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked, and intuitively one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to the standard approach. Whilst, for any specific study, the operating characteristics of any selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
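For the normal case the two approaches can be sketched in a few lines: the mixture-prior posterior is again a mixture of normals whose weights are updated through the prior predictive densities, and the testing approach amounts to a prior-predictive tail check. All numerical settings below (prior means, standard deviations, mixture weight, observed data) are hypothetical and not taken from the paper.

    import numpy as np
    from scipy import stats

    def mixture_posterior(ybar, se, m_inf, s_inf, m_vague, s_vague, w_inf=0.8):
        # Posterior for a normal mean given observed mean `ybar` with known standard error `se`,
        # under the mixture prior  w_inf * N(m_inf, s_inf^2) + (1 - w_inf) * N(m_vague, s_vague^2).
        comps = [(m_inf, s_inf, w_inf), (m_vague, s_vague, 1.0 - w_inf)]
        post = []
        for m0, s0, w in comps:
            marg = stats.norm.pdf(ybar, loc=m0, scale=np.sqrt(s0**2 + se**2))  # prior predictive density
            prec = 1.0 / s0**2 + 1.0 / se**2
            post.append((w * marg, (m0 / s0**2 + ybar / se**2) / prec, np.sqrt(1.0 / prec)))
        total = sum(p[0] for p in post)
        return [(w / total, m, s) for w, m, s in post]            # (weight, mean, sd) per component

    # Observed mean far from the informative prior: the diffuse component dominates the posterior
    print(mixture_posterior(ybar=3.0, se=0.5, m_inf=0.0, s_inf=0.3, m_vague=0.0, s_vague=10.0))

    # Testing approach: a prior-predictive tail probability as a prior-data conflict check
    pp_sd = np.sqrt(0.3**2 + 0.5**2)
    print(2 * stats.norm.sf(abs(3.0 - 0.0), scale=pp_sd))         # a small value flags conflict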

14.
We extend four tests common in classical regression – Wald, score, likelihood ratio and F tests – to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
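The FPCA reduction can be illustrated with a small simulation: the curves are decomposed into principal component scores, the scalar response is regressed on the leading scores, and an ordinary F test of those coefficients checks for no association. The sketch fixes the number of components rather than letting it diverge, and the simulated curves and coefficient function are made up, so this is only a rough illustration of the approach.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, m, k = 100, 50, 3                              # n curves on an m-point grid, k components kept
    t = np.linspace(0.0, 1.0, m)
    basis = np.array([np.sin(np.pi * t), np.cos(np.pi * t),
                      np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    X = rng.normal(size=(n, 4)) @ basis               # simulated functional covariates
    beta_t = 2.0 * np.sin(np.pi * t)                  # true coefficient function
    y = X @ beta_t / m + rng.normal(scale=0.5, size=n)

    # Functional PCA via the SVD of the centred curves; keep the leading k scores
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T

    # Ordinary F test of "no association" in the working linear model y ~ scores
    Z = np.column_stack([np.ones(n), scores])
    beta = np.linalg.lstsq(Z, y, rcond=None)[0]
    rss1 = np.sum((y - Z @ beta) ** 2)
    rss0 = np.sum((y - y.mean()) ** 2)
    F = ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
    print(F, stats.f.sf(F, k, n - k - 1))             # small p-value: association detected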

15.
The generalized semiparametric mixed varying-coefficient effects model for longitudinal data can accommodate a variety of link functions and flexibly model different types of covariate effects, including time-constant, time-varying and covariate-varying effects. The time-varying effects are unspecified functions of time, and the covariate-varying effects are nonparametric functions of a possibly time-dependent exposure variable. A semiparametric estimation procedure is developed that uses local linear smoothing and profile weighted least squares, which requires smoothing in the two different and yet connected domains of time and the time-dependent exposure variable. The asymptotic properties of the estimators of both nonparametric and parametric effects are investigated. In addition, hypothesis testing procedures are developed to examine the covariate effects. The finite-sample properties of the proposed estimators and testing procedures are examined through simulations, indicating satisfactory performance. The proposed methods are applied to analyze the AIDS Clinical Trial Group 244 clinical trial to investigate the effects of antiretroviral treatment switching in HIV-infected patients before and after developing the T215Y antiretroviral drug resistance mutation. The Canadian Journal of Statistics 47: 352–373; 2019 © 2019 Statistical Society of Canada

16.
Murray and Tsiatis (1996) described a weighted survival estimate that incorporates prognostic time-dependent covariate information to increase the efficiency of estimation. We propose a test statistic based on the statistic of Pepe and Fleming (1989, 1991) that incorporates these weighted survival estimates. As in Pepe and Fleming, the test is an integrated weighted difference of two estimated survival curves. This test has been shown to be effective at detecting survival differences in crossing hazards settings where the logrank test performs poorly. This method uses stratified longitudinal covariate information to get more precise estimates of the underlying survival curves when there is censored information, and this leads to more powerful tests. Another important feature of the test is that it remains valid when informative censoring is captured by the incorporated covariate. In this case, the Pepe-Fleming statistic is known to be biased and should not be used. These methods could be useful in clinical trials with heavy censoring that include collection over time of covariates, such as laboratory measurements, that are prognostic of subsequent survival or capture information related to censoring.
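The basic building block, an integrated difference of two estimated survival curves, can be sketched with plain Kaplan-Meier estimates as below; the covariate-weighted survival estimates of Murray and Tsiatis, the weight function, and the variance needed to standardize the statistic are not reproduced, and the simulated data are purely illustrative.

    import numpy as np

    def km_curve(time, event):
        # Kaplan-Meier estimate: returns the ordered event times and the survival values there.
        t_event = np.sort(np.unique(time[event == 1]))
        surv, s = [], 1.0
        for t in t_event:
            d = np.sum((time == t) & (event == 1))    # deaths at t
            r = np.sum(time >= t)                     # number at risk just before t
            s *= 1.0 - d / r
            surv.append(s)
        return t_event, np.array(surv)

    def km_at(grid, t_event, surv):
        # Evaluate the Kaplan-Meier step function on a time grid.
        idx = np.searchsorted(t_event, grid, side='right')
        return np.concatenate([[1.0], surv])[idx]

    # Simulated two-arm data with independent censoring
    rng = np.random.default_rng(3)
    t1, t2 = rng.exponential(1.0, 100), rng.exponential(1.3, 100)
    c1, c2 = rng.exponential(2.0, 100), rng.exponential(2.0, 100)
    time1, event1 = np.minimum(t1, c1), (t1 <= c1).astype(int)
    time2, event2 = np.minimum(t2, c2), (t2 <= c2).astype(int)

    # Integrated (unweighted) difference of the two survival curves up to tau = 2
    grid = np.linspace(0.0, 2.0, 201)
    s1 = km_at(grid, *km_curve(time1, event1))
    s2 = km_at(grid, *km_curve(time2, event2))
    print(np.sum(s2 - s1) * (grid[1] - grid[0]))      # positive when arm 2 survives longer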

17.
A. Galbete & J. A. Moler, Statistics (2016), 50(2), 418-434
In a randomized clinical trial, response-adaptive randomization procedures use the information gathered, including the previous patients' responses, to allocate the next patient. In this setting, we consider randomization-based inference. We provide an algorithm to obtain exact p-values for statistical tests that compare two treatments with dichotomous responses. This algorithm can be applied to a family of response-adaptive randomization procedures that share the following property: the distribution of the allocation rule depends only on the imbalance between treatments and on the imbalance between successes for treatments 1 and 2 in the previous step. This family includes several well-known response-adaptive randomization procedures. We study a randomization test to contrast the null hypothesis of equivalence of treatments, and we show that this test has a similar performance to that of its parametric counterpart. In addition, we study the effect of a covariate in the inferential process. First, we obtain a parametric test, constructed assuming a logit model which relates responses to treatments and covariate levels, and we give conditions that guarantee its asymptotic normality. Finally, we show that the randomization test, which is free of model specification, performs as well as the parametric test that takes the covariate into account.

18.
A Wald test-based approach for power and sample size calculations has been presented recently for logistic and Poisson regression models using the asymptotic normal distribution of the maximum likelihood estimator, which is applicable to tests of a single parameter. Unlike the previous procedures involving the use of score and likelihood ratio statistics, there is no simple and direct extension of this approach for tests of more than a single parameter. In this article, we present a method for computing sample size and statistical power employing the discrepancy between the noncentral and central chi-square approximations to the distribution of the Wald statistic with unrestricted and restricted parameter estimates, respectively. The distinguishing features of the proposed approach are the accommodation of tests about multiple parameters, the flexibility of covariate configurations and the generality of overall response levels within the framework of generalized linear models. The general procedure is illustrated with some special situations that have motivated this research. Monte Carlo simulation studies are conducted to assess and compare its accuracy with existing approaches under several model specifications and covariate distributions.
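The chi-square machinery itself is easy to sketch: the power of a df-dimensional Wald test is the probability that a noncentral chi-square variable exceeds the central chi-square critical value, and a sample size is the smallest n achieving a target power. How the noncentrality parameter is built from the design and parameter values is model-specific and not shown; the numbers in the example are hypothetical.

    from scipy import stats

    def wald_power(ncp, df, alpha=0.05):
        # Power of a Wald test of `df` parameters:
        # P(noncentral chi-square exceeds the central chi-square critical value).
        crit = stats.chi2.ppf(1.0 - alpha, df)
        return stats.ncx2.sf(crit, df, ncp)

    def wald_sample_size(per_obs_ncp, df, power=0.80, alpha=0.05):
        # Smallest n reaching the target power, assuming ncp = n * per_obs_ncp (illustrative).
        n = 1
        while wald_power(n * per_obs_ncp, df, alpha) < power:
            n += 1
        return n

    print(wald_power(ncp=10.0, df=2))                 # power of a 2-df Wald test
    print(wald_sample_size(per_obs_ncp=0.05, df=2))   # n needed for 80% power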

19.
We present a unifying approach to multiple testing procedures for sequential (or streaming) data by giving sufficient conditions for a sequential multiple testing procedure to control the familywise error rate (FWER). Together, we call these conditions a ‘rejection principle for sequential tests’, which we then apply to some existing sequential multiple testing procedures to give a simplified understanding of their FWER control. Next, the principle is applied to derive two new sequential multiple testing procedures with provable FWER control, one for testing hypotheses in order and another for closed testing. Examples of these new procedures are given by applying them to a chromosome aberration data set and finding the maximum safe dose of a treatment.

20.
Medical and epidemiological studies often involve groups of subjects associated with increasing levels of exposure to a risk factor. Survival of the groups is expected to follow the same order as the level of exposure. Formal tests for this trend fall into the regression framework if one knows what function of exposure to use as a covariate. When unknown, a linear function of exposure level is often used. Jonckheere-type tests for trend have generated continued interest largely because they do not require specification of a covariate. This paper shows that the Jonckheere-type test statistics are special cases of a generalized linear rank statistic with time-dependent covariates, which unfortunately depend on the initial group sizes and censoring distributions. Using asymptotic relative efficiency calculations, the Jonckheere tests are compared to standard linear rank tests based on a linear covariate over a spectrum of shapes for the true trend.
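For orientation, the classical uncensored Jonckheere-Terpstra statistic (with a normal approximation and no tie correction) is sketched below; the censored-data, generalized linear rank formulation discussed in the paper is not reproduced, and the simulated groups are hypothetical.

    import numpy as np
    from scipy import stats

    def jonckheere_terpstra(groups):
        # Classical Jonckheere-Terpstra trend statistic for k ordered groups (uncensored data,
        # normal approximation; ties counted as 1/2 but no tie correction in the variance).
        J = 0.0
        k = len(groups)
        for i in range(k):
            for j in range(i + 1, k):
                xi, xj = np.asarray(groups[i]), np.asarray(groups[j])
                J += np.sum(xj[None, :] > xi[:, None]) + 0.5 * np.sum(xj[None, :] == xi[:, None])
        n = np.array([len(g) for g in groups])
        N = n.sum()
        mean = (N**2 - np.sum(n**2)) / 4.0
        var = (N**2 * (2 * N + 3) - np.sum(n**2 * (2 * n + 3))) / 72.0
        z = (J - mean) / np.sqrt(var)
        return J, z, stats.norm.sf(z)                 # one-sided p-value for an increasing trend

    rng = np.random.default_rng(4)
    groups = [rng.normal(0.0, 1, 15), rng.normal(0.3, 1, 15), rng.normal(0.6, 1, 15)]
    print(jonckheere_terpstra(groups))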
