Similar articles
20 similar articles found (search time: 453 ms)
1.
Parallel individual and ecological analyses of data on residential radon have been performed using information on cases of lung cancer and population controls from a recent study in south-west England. For the individual analysis the overall results indicated that the relative risk of lung cancer at 100 Bq m⁻³, compared with 0 Bq m⁻³, was 1.12 (95% confidence interval (0.99, 1.27)) after adjusting for age, sex, smoking, county of residence and social class. In the ecological analysis substantial bias in the estimated effect of radon was present for one of the two counties involved unless an additional variable, urban–rural status, was included in the model, although this variable was not an important confounder in the individual-level analysis. Most of the methods that have been recommended for overcoming the limitations of ecological studies would not in practice have proved useful in identifying this variable as an appreciable source of bias.

2.
A two-phase design has been widely used in epidemiological studies of dementia. The first phase assesses a large sample with screening tests. The second, based on the screening test results and possibly on other observed patient factors, selects a subset of the study sample for a more definitive disease verification assessment. In comparing the accuracies of two screening tests in a two-phase study of dementia, inferences are commonly made from a sample of verified cases. The omission of non-verified cases can seriously bias comparison results. To correct for this bias, we derive the maximum likelihood (ML) estimators for the accuracies of two screening tests and their corresponding correlation. The p-values and confidence intervals are computed using the asymptotic normality of the ML estimators. Our method is used to compare the accuracies of two screening tests in a two-phase epidemiological study of dementia. We found that, although the sensitivities of the new and standard screening tests in detecting a diseased subject are not different, the new screening test performs better in detecting a non-diseased subject.
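The verification bias described above can be reproduced in a small simulation. This is a hedged sketch, not the ML method of the abstract: it assumes verification depends only on the screening result, uses made-up sensitivity, specificity and verification rates, and corrects the naive estimate by inverse-probability weighting the verified subjects.

```python
import random

random.seed(1)
SENS, SPEC, PREV = 0.85, 0.90, 0.20      # hypothetical test and population
P_VERIFY = {True: 0.95, False: 0.30}     # verification depends on screen result only

counts = {}                              # (screen, diseased, verified) -> count
for _ in range(200_000):
    d = random.random() < PREV
    t = random.random() < (SENS if d else 1 - SPEC)
    v = random.random() < P_VERIFY[t]
    counts[(t, d, v)] = counts.get((t, d, v), 0) + 1

# Naive sensitivity from verified subjects only: biased upward, because
# screen-positive cases are verified far more often than screen-negative ones,
# so false negatives are under-represented among the verified.
tp = counts.get((True, True, True), 0)
fn = counts.get((False, True, True), 0)
naive = tp / (tp + fn)

# Inverse-probability-weighted correction (Begg-Greenes-style): up-weight each
# verified subject by 1 / P(verified | screen result).
tp_w, fn_w = tp / P_VERIFY[True], fn / P_VERIFY[False]
corrected = tp_w / (tp_w + fn_w)
print(f"naive {naive:.3f}  corrected {corrected:.3f}  true {SENS}")
```

With these rates the naive estimate sits far above the true sensitivity, while the weighted estimate lands close to it.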

3.
Reuse of controls in a nested case-control (NCC) study has not been considered feasible since the controls are matched to their respective cases. However, in the last decade or so, methods have been developed that break the matching and allow for analyses where the controls are no longer tied to their cases. These methods can be divided into two groups: weighted partial likelihood (WPL) methods and full maximum likelihood methods. The weights in the WPL can be estimated in different ways, and four estimation procedures are discussed. In addition, we address modifications needed to accommodate left truncation. A full likelihood approach is also presented and we suggest an aggregation technique to decrease the computation time. Furthermore, we generalize calibration for case-cohort designs to NCC studies. We consider a competing risks situation and compare WPL, full likelihood and calibration through simulations and analyses on a real data example.
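One common way to estimate the WPL weights is via Kaplan–Meier-type inclusion probabilities (Samuelsen-style). The sketch below uses a made-up cohort with no ties and no left truncation; it is an illustration of that one weighting scheme, not of the four procedures or the full-likelihood approach discussed in the paper.

```python
import random

random.seed(6)
m = 1  # controls sampled per case in the NCC design

# Hypothetical cohort of (follow-up time, is_case) pairs.
cohort = sorted((random.uniform(0.0, 10.0), random.random() < 0.3)
                for _ in range(30))
times = [t for t, _ in cohort]
case_times = [t for t, c in cohort if c]

def n_at_risk(t):
    return sum(1 for s in times if s >= t)

def inclusion_prob(t_i, is_case):
    """Probability that a subject followed to t_i ever enters the NCC sample:
    cases always do; a control escapes only by missing every draw of m
    controls from the n(t_j) - 1 eligible subjects at each earlier case time."""
    if is_case:
        return 1.0
    prod = 1.0
    for t_j in case_times:
        if t_j < t_i:                       # subject still at risk at t_j
            prod *= 1.0 - m / (n_at_risk(t_j) - 1)
    return 1.0 - prod

# WPL weight = 1 / inclusion probability; subjects that could never be
# sampled (no case occurred while they were at risk) contribute nothing.
weights = [(t, 1.0 / inclusion_prob(t, c))
           for t, c in cohort if inclusion_prob(t, c) > 0]
```

Breaking the matching this way lets every sampled control enter the risk set of every applicable case, weighted to represent the full cohort.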

4.
This paper aims to estimate the false negative fraction of a multiple screening test for bowel cancer, where those who give negative results for six consecutive tests do not have their true disease status verified. A subset of these same individuals is given a further screening test, for the sole purpose of evaluating the accuracy of the primary test. This paper proposes a beta heterogeneity model for the probability of a diseased individual ‘testing positive’ on any single test, and it examines the consequences of this model for inference on the false negative fraction. The method can be generalized to the case where selection for further testing is informative, though this did not appear to be the case for the bowel‐cancer data.
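The key closed form under a beta model is E[(1−p)^k] = B(a, b+k)/B(a, b): the chance that a diseased person slips through k tests when the per-test detection probability p varies as Beta(a, b). The sketch below uses illustrative parameter values (not fitted to the bowel-cancer data) to show that heterogeneity inflates the false negative fraction relative to a constant p with the same mean.

```python
from math import exp, lgamma

def fnf_beta(a, b, k):
    """P(k consecutive negative tests | diseased) when p ~ Beta(a, b):
    E[(1-p)^k] = B(a, b+k) / B(a, b), computed via log-gamma for stability."""
    return exp(lgamma(b + k) + lgamma(a + b) - lgamma(b) - lgamma(a + b + k))

a, b, k = 2.0, 2.0, 6           # hypothetical heterogeneity; mean p = a/(a+b) = 0.5
hetero = fnf_beta(a, b, k)
homog = (1 - a / (a + b)) ** k  # constant p with the same mean
print(f"beta-model FNF {hetero:.4f}  vs  constant-p FNF {homog:.4f}")
```

By Jensen's inequality (x ↦ (1−x)^k is convex), the beta model always gives the larger false negative fraction, which is why ignoring heterogeneity understates the miss rate after repeated screening.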

5.
In studies to assess the accuracy of a screening test, often definitive disease assessment is too invasive or expensive to be ascertained on all the study subjects. Although it may be more ethical or cost effective to ascertain the true disease status with a higher rate in study subjects where the screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design. This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no verification bias correction methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic curves and area under the receiver operating characteristic curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency and robustness to model misspecification. The bias correction estimators proposed are applied to data from a study of screening tests for neonatal hearing loss.
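A sketch of the reweighting idea for a continuous test (an illustrative simulation, not the estimators developed in the paper): when the verification probability depends on the test score, the naive AUC from verified subjects alone is biased, while weighting each verified subject by the inverse of its (here assumed known) verification probability approximately recovers the full-data AUC.

```python
import random
from math import exp

random.seed(2)
def expit(z): return 1 / (1 + exp(-z))

# Simulate a continuous score X, higher among diseased, with verification
# probability increasing in X (so verified subjects over-represent high scores).
data = []
for _ in range(3000):
    d = random.random() < 0.3
    x = random.gauss(1.0 if d else 0.0, 1.0)
    pv = expit(-1 + 2 * x)                  # known verification probability
    data.append((x, d, random.random() < pv, pv))

def wauc(triples):
    """Weighted Mann-Whitney AUC from (score, diseased, weight) triples."""
    dis = [(x, w) for x, d, w in triples if d]
    non = [(x, w) for x, d, w in triples if not d]
    num = sum(wi * wj * (1.0 if xi > xj else 0.5 if xi == xj else 0.0)
              for xi, wi in dis for xj, wj in non)
    return num / (sum(w for _, w in dis) * sum(w for _, w in non))

verified = [row for row in data if row[2]]
naive = wauc([(x, d, 1.0) for x, d, _, _ in verified])      # verified only
ipw = wauc([(x, d, 1.0 / pv) for x, d, _, pv in verified])  # reweighted
full = wauc([(x, d, 1.0) for x, d, _, _ in data])           # oracle benchmark
print(f"naive {naive:.3f}  IPW {ipw:.3f}  full-data {full:.3f}")
```

The oracle `full` value (everyone verified) is only available in simulation; it serves as the benchmark the reweighted estimator should approach.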

6.
Length-biased sampling appears in many observational studies, including epidemiological studies, labor economics and cancer screening trials. To accommodate sampling bias, which can lead to substantial estimation bias if ignored, we propose a class of doubly-weighted rank-based estimating equations under the accelerated failure time model. The general weighting structures considered in our estimating equations allow great flexibility and include many existing methods as special cases. Different approaches for constructing estimating equations are investigated, and the estimators are shown to be consistent and asymptotically normal. Moreover, we propose efficient computational procedures to solve the estimating equations and to estimate the variances of the estimators. Simulation studies show that the proposed estimators outperform the existing estimators. Moreover, real data from a dementia study and a Spanish unemployment duration study are analyzed to illustrate the proposed method.
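The effect of length-biased sampling, and the simplest weighting fix, can be seen in a toy example (this illustrates the bias only; the paper's doubly-weighted rank estimators for the AFT model are more elaborate). Durations are Uniform(1, 3) with mean 2, but longer durations are proportionally more likely to be observed.

```python
import random

random.seed(3)

# Under length-biased sampling the chance of observing a duration is
# proportional to the duration itself; simulate via rejection sampling.
def draw_length_biased():
    while True:
        x = random.uniform(1.0, 3.0)
        if random.random() < x / 3.0:   # accept with prob proportional to x
            return x

sample = [draw_length_biased() for _ in range(50_000)]

naive = sum(sample) / len(sample)                      # biased upward
harmonic = len(sample) / sum(1.0 / x for x in sample)  # weight each obs by 1/x
print(f"naive mean {naive:.3f}  bias-corrected {harmonic:.3f}  true 2.0")
```

Weighting each observation by the inverse of its length undoes the over-sampling of long durations; here the naive mean converges to about 2.17 while the weighted estimator recovers the true mean of 2.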

7.
While randomized controlled trials (RCTs) are the gold standard for estimating treatment effects in medical research, there is increasing use of and interest in using real-world data for drug development. One such use case is the construction of external control arms for evaluation of efficacy in single-arm trials, particularly in cases where randomization is either infeasible or unethical. However, it is well known that treated patients in non-randomized studies may not be comparable to control patients—on either measured or unmeasured variables—and that the underlying population differences between the two groups may result in biased treatment effect estimates as well as increased variability in estimation. To address these challenges for analyses of time-to-event outcomes, we developed a meta-analytic framework that uses historical reference studies to adjust a log hazard ratio estimate in a new external control study for its additional bias and variability. The set of historical studies is formed by constructing external control arms for historical RCTs, and a meta-analysis compares the trial controls to the external control arms. Importantly, a prospective external control study can be performed independently of the meta-analysis using standard causal inference techniques for observational data. We illustrate our approach with a simulation study and an empirical example based on reference studies for advanced non-small cell lung cancer. In our empirical analysis, external control patients had lower survival than trial controls (hazard ratio: 0.907), but our methodology is able to correct for this bias. An implementation of our approach is available in the R package ecmeta.

8.
Statistics in epidemiology: the case-control study
This article presents a general review of the major trends in the conceptualization, development, and success of case-control methods for the study of disease causation and prevention. "Recent work on nested case-control, case-cohort, and two-stage case control designs demonstrates the continuing impact of statistical thinking on epidemiology. The influence of R. A. Fisher's work on these developments is mentioned wherever possible. His objections to the drawing of causal conclusions from observational data on cigarette smoking and lung cancer are used to introduce the problems of measurement error and confounding bias."

9.
We examine the rationale of prospective logistic regression analysis for pair-matched case-control data using explicit, parametric terms for matching variables in the model. We show that this approach can yield inconsistent estimates for the disease-exposure odds ratio, even in large samples. Some special conditions are given under which the bias for the disease-exposure odds ratio is small. It is because these conditions are not too uncommon that this flawed analytic method appears to possess an (unreasonable) effectiveness.
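A quick illustration of why matched pairs call for a matched analysis (a simulation sketch with made-up parameters, separate from the paper's asymptotic argument): with a strong pair-specific effect, the conditional odds ratio from discordant pairs recovers the true exposure odds ratio, while pooling the pairs into one 2×2 table attenuates it.

```python
import random
from math import exp, log

random.seed(8)
def expit(z): return 1 / (1 + exp(-z))

TRUE_OR = 3.0
n_pairs = 20_000
n10 = n01 = exp_case = exp_ctrl = 0
for _ in range(n_pairs):
    u = random.gauss(0, 1.5)                 # pair-specific (matching) effect
    e_case = random.random() < expit(u + log(TRUE_OR))  # exposure odds x3 in cases
    e_ctrl = random.random() < expit(u)
    n10 += e_case and not e_ctrl             # discordant: only the case exposed
    n01 += e_ctrl and not e_case             # discordant: only the control exposed
    exp_case += e_case
    exp_ctrl += e_ctrl

# Conditional (matched) analysis: OR estimated from discordant pairs only
conditional = n10 / n01

# Naive pooled 2x2 table ignoring the pairing: attenuated toward 1
pooled = (exp_case * (n_pairs - exp_ctrl)) / (exp_ctrl * (n_pairs - exp_case))
print(f"conditional OR {conditional:.2f}  pooled OR {pooled:.2f}  true {TRUE_OR}")
```

The discordant-pair ratio n10/n01 is the conditional maximum likelihood estimator for 1:1 matched data, and it is unaffected by the pair-level nuisance effect u.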

10.
In this paper, we consider a mixture of two uniform distributions and derive L-moment estimators of its parameters. Three possible ways of mixing two uniforms, namely with neither overlap nor gap, with overlap, and with gap, are studied. The performance of these L-moment estimators in terms of bias and efficiency is compared to that obtained by means of the conventional method of moments (MM), the modified maximum likelihood (MML) method and the usual maximum likelihood (ML) method. Intensive simulations reveal that the MML estimators are the best in most cases, and that the L-moment estimators are less biased for some mixtures and more efficient in most cases than the conventional MM estimators. The L-moment estimators are, in some cases, more efficient than the ML and MML estimators.
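As a minimal illustration of L-moment estimation (for a single uniform rather than the two-component mixture studied in the paper): for U(a, b) the first two L-moments are λ₁ = (a+b)/2 and λ₂ = (b−a)/6, so matching sample L-moments gives â = l₁ − 3l₂ and b̂ = l₁ + 3l₂.

```python
import random

random.seed(7)

def sample_l_moments(xs):
    """First two sample L-moments via probability-weighted moments:
    l1 = b0, l2 = 2*b1 - b0, with b1 = sum((i-1)/(n-1) * x_(i)) / n."""
    x = sorted(xs)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    return b0, 2 * b1 - b0

data = [random.uniform(2.0, 5.0) for _ in range(100_000)]
l1, l2 = sample_l_moments(data)
a_hat, b_hat = l1 - 3 * l2, l1 + 3 * l2   # method of L-moments for U(a, b)
print(f"a_hat {a_hat:.3f}  b_hat {b_hat:.3f}  (true 2, 5)")
```

Because L-moments are linear in the order statistics, they are less sensitive to extreme observations than the squared and cubed terms of conventional moments, which is the intuition behind their smaller bias in the paper's comparisons.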

11.
A common problem in environmental epidemiology is the estimation and mapping of spatial variation in disease risk. In this paper we analyse data from the Walsall District Health Authority, UK, concerning the spatial distributions of cancer cases compared with controls sampled from the population register. We formulate the risk estimation problem as a nonparametric binary regression problem and consider two different methods of estimation. The first uses a standard kernel method with a cross-validation criterion for choosing the associated bandwidth parameter. The second uses the framework of the generalized additive model (GAM) which has the advantage that it can allow for additional explanatory variables, but is computationally more demanding. For the Walsall data, we obtain similar results using either the kernel method with controls stratified by age and sex to match the age–sex distribution of the cases or the GAM method with random controls but incorporating age and sex as additional explanatory variables. For cancers of the lung or stomach, the analysis shows highly statistically significant spatial variation in risk. For the less common cancers of the pancreas, the spatial variation in risk is not statistically significant.

12.
We consider methods for analysing matched case–control data when some covariates (W) are completely observed but other covariates (X) are missing for some subjects. In matched case–control studies, the complete-record analysis discards completely observed subjects if none of their matching cases or controls are completely observed. We investigate an imputation estimate obtained by solving a joint estimating equation for log-odds ratios of disease and parameters in an imputation model. Imputation estimates for coefficients of W are shown to have smaller bias and mean-square error than do estimates from the complete-record analysis.

13.
Case–control studies allow efficient estimation of the associations of covariates with a binary response in settings where the probability of a positive response is small. It is well known that covariate–response associations can be consistently estimated using a logistic model by acting as if the case–control (retrospective) data were prospective, and that this result does not hold for other binary regression models. However, in practice an investigator may be interested in fitting a non-logistic link binary regression model and this paper examines the magnitude of the bias resulting from ignoring the case–control sample design with such models. The paper presents an approximation to the magnitude of this bias in terms of the sampling rates of cases and controls, as well as simulation results that show that the bias can be substantial.
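The logistic invariance, and its failure for other links, can be checked directly from Bayes' rule without simulation. This sketch uses made-up coefficients and sampling fractions f1 (cases) and f0 (controls); it illustrates the phenomenon, not the paper's bias approximation.

```python
from math import exp, log, erf, sqrt

def expit(z): return 1 / (1 + exp(-z))
def logit(p): return log(p / (1 - p))
def Phi(z): return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF

def Phi_inv(p):
    lo, hi = -10.0, 10.0
    for _ in range(100):                  # bisection; ample precision here
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

f1, f0 = 1.0, 0.01         # sampling fractions: all cases, 1% of controls
def retrospective(p):      # P(case | x, selected into the case-control sample)
    return f1 * p / (f1 * p + f0 * (1 - p))

xs = [-2.0, 0.0, 1.0, 2.5]

# Logistic link: retrospective sampling shifts the intercept by log(f1/f0)
# and leaves the slope untouched, so the logit-scale shift is constant in x.
shifts = [logit(retrospective(expit(-4.0 + 0.8 * x))) - (-4.0 + 0.8 * x)
          for x in xs]

# Probit link: no constant intercept shift reproduces the retrospective
# probabilities -- the probit-scale shift varies with x, i.e. acting as if
# the data were prospective distorts the slope.
shifts_probit = [Phi_inv(retrospective(Phi(-2.0 + 0.8 * x))) - (-2.0 + 0.8 * x)
                 for x in xs]

print([round(s, 6) for s in shifts])         # all equal log(f1/f0)
print([round(s, 3) for s in shifts_probit])  # varies with x
```

The constant shift for the logistic link is exactly log(f1/f0); the spread in the probit shifts is the source of the slope bias the paper quantifies.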

14.
A cure rate model is a survival model incorporating the cure rate with the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient. This contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We used the Cox PH cure model for the breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation‐based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation‐based AUCs were estimable and their biases were negligibly small in many cases, although ordinary AUC could not be estimated. Additionally, we introduced the bias‐correction method of imputation‐based AUCs and found that the bias‐corrected estimate successfully compensated the overestimation in the simulation studies. We also illustrated the estimation of the imputation‐based AUCs using breast cancer data. Copyright © 2014 John Wiley & Sons, Ltd.

15.
In practice, we often need to identify individuals whose longitudinal behaviour differs from that of well-functioning individuals, so that unpleasant consequences (e.g. stroke) can be avoided or detected early. To handle such applications, a new statistical method, called the dynamic screening system, has been developed in the literature. A recent version of this method can analyze correlated data. However, the computation involved is intensive. In this paper, we suggest a fast computing algorithm for the dynamic screening system. The algorithm can improve the effectiveness of the conventional dynamic screening system in certain cases. Numerical results show that the new algorithm works well in different cases.

16.
Randomized clinical trials are designed to estimate the direct effect of a treatment by randomly assigning patients to receive either treatment or control. However, in some trials, patients who discontinued their initial randomized treatment are allowed to switch to another treatment. Therefore, the direct treatment effect of interest may be confounded by subsequent treatment. Moreover, the decision on whether to initiate a second‐line treatment is typically made based on time‐dependent factors that may be affected by prior treatment history. Due to these time‐dependent confounders, traditional time‐dependent Cox models may produce biased estimators of the direct treatment effect. Marginal structural models (MSMs) have been applied to estimate causal treatment effects even in the presence of time‐dependent confounders. However, the occurrence of extremely large weights can inflate the variance of the MSM estimators. In this article, we proposed a new method for estimating weights in MSMs by adaptively truncating the longitudinal inverse probabilities. This method provides balance in the bias variance trade‐off when large weights are inevitable, without the ad hoc removal of selected observations. We conducted simulation studies to explore the performance of different methods by comparing bias, standard deviation, confidence interval coverage rates, and mean square error under various scenarios. We also applied these methods to a randomized, open‐label, phase III study of patients with nonsquamous non‐small cell lung cancer. Copyright © 2015 John Wiley & Sons, Ltd.
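The weight-truncation idea can be sketched as follows: a generic percentile cap on inverse-probability weights, applied to made-up point-treatment data (the paper's method chooses the truncation level adaptively for longitudinal weights, which this sketch does not attempt). Capping the largest weights trades a little bias for a large variance reduction.

```python
import random
from math import exp

random.seed(4)
def expit(z): return 1 / (1 + exp(-z))

def ipw_treated_mean(data, cap_pct=None):
    """Hajek IPW estimate of E[Y(1)], optionally capping weights at a percentile."""
    w_y = [(1 / p, y) for a, p, y in data if a == 1]   # weight = 1 / P(A=1|X)
    ws = sorted(w for w, _ in w_y)
    cap = ws[int(cap_pct * (len(ws) - 1))] if cap_pct is not None else float("inf")
    num = sum(min(w, cap) * y for w, y in w_y)
    den = sum(min(w, cap) for w, _ in w_y)
    return num / den

def one_dataset(n=2000):
    out = []
    for _ in range(n):
        x = random.gauss(0, 1)
        p = expit(2 * x)              # strong confounding -> occasional huge weights
        a = 1 if random.random() < p else 0
        y = x + 1.0 * a + random.gauss(0, 1)   # true E[Y(1)] = 1
        out.append((a, p, y))
    return out

raw, trunc = [], []
for _ in range(200):
    d = one_dataset()
    raw.append(ipw_treated_mean(d))
    trunc.append(ipw_treated_mean(d, cap_pct=0.95))

def sd(v):
    m = sum(v) / len(v)
    return (sum((x - m) ** 2 for x in v) / (len(v) - 1)) ** 0.5

print(f"untruncated: mean {sum(raw)/len(raw):.3f}  sd {sd(raw):.3f}")
print(f"truncated:   mean {sum(trunc)/len(trunc):.3f}  sd {sd(trunc):.3f}")
```

Across replications the untruncated estimator is roughly centred on the truth with a heavy-tailed spread, while the capped version is visibly more stable at the cost of some bias.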

17.
Using logarithmic and integral transformations, we transform a continuous-covariate frailty model into a polynomial regression model with a random effect. The responses of this mixed model can be ‘estimated’ via conditional hazard function estimation. The random error in this model does not have zero mean and its variance is not constant along the covariate; consequently, these two quantities have to be estimated. Since the asymptotic expression for the bias is complicated, the two-large-bandwidth trick is proposed to estimate the bias. The proposed transformation is very useful for clustered incomplete data subject to left truncation and right censoring (and for complex clustered data in general). Indeed, in this case no standard software is available to fit the frailty model, whereas for the transformed model standard software for mixed models can be used to estimate the unknown parameters in the original frailty model. A small simulation study illustrates the good behavior of the proposed method. The method is applied to a bladder cancer data set.

18.
In this article the author investigates empirical-likelihood-based inference for the parameters of the varying-coefficient single-index model (VCSIM). Unlike in the usual cases, without bias correction the asymptotic distribution of the empirical likelihood ratio cannot achieve the standard chi-squared distribution. To this end, a bias-corrected empirical likelihood method is employed to construct confidence regions (intervals) for the regression parameters, which have two advantages over those based on normal approximation: (1) they do not impose prior constraints on the shape of the regions; (2) they do not require the construction of a pivotal quantity, and the regions are range preserving and transformation respecting. A simulation study is undertaken to compare the empirical likelihood with the normal approximation in terms of coverage accuracies and average areas/lengths of confidence regions/intervals. A real data example is given to illustrate the proposed approach. The Canadian Journal of Statistics 38: 434–452; 2010 © 2010 Statistical Society of Canada

19.
The objective of this paper is to describe methods for estimating current incidence rates for human immunodeficiency virus (HIV) that account for follow-up bias. Follow-up bias arises when the incidence rate among individuals in a cohort who return for follow-up is different from the incidence rate among those who do not return. The methods are based on the use of early markers of HIV infection such as p24 antigen. The first method, called the cross-sectional method, uses only data collected at an initial base-line visit. The method does not require follow-up data but does require a priori knowledge of the mean duration of the marker (μ). A confidence interval procedure is developed that accounts for uncertainty in μ. The second method combines the base-line data from all individuals together with follow-up data from those individuals who return for follow-up. This method has the distinct advantage of not requiring prior information about μ. Several confidence interval procedures for the incidence rate are compared by simulation. The methods are applied to a study in India to estimate current HIV incidence. These data suggest that the epidemic is growing rapidly in some subpopulations in India.
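The cross-sectional method reduces to a simple calculation: incidence ≈ (prevalence of the early marker) / (mean marker duration μ), with a confidence interval that also propagates the uncertainty in μ. The sketch below uses a generic delta-method interval on the log scale and entirely hypothetical numbers, not figures from the Indian study.

```python
from math import exp, sqrt

# Hypothetical inputs: of n baseline subjects, k are in the transient
# early-marker window (e.g. p24-antigen-positive, antibody-negative);
# the window lasts mu years on average, with standard error se_mu.
n, k = 5000, 12
mu, se_mu = 0.08, 0.02

p = k / n                  # marker prevalence at the cross-sectional visit
incidence = p / mu         # infections per person-year

# Delta-method 95% CI on the log scale: var(log I) ~ (1-p)/k + (se_mu/mu)^2,
# so uncertainty in mu widens the interval beyond the binomial part alone.
se_log = sqrt((1 - p) / k + (se_mu / mu) ** 2)
lo, hi = incidence * exp(-1.96 * se_log), incidence * exp(1.96 * se_log)
print(f"incidence {incidence:.4f} per person-year, 95% CI ({lo:.4f}, {hi:.4f})")
```

With only 12 marker-positive subjects and a 25% relative SE on μ, the interval spans more than a four-fold range, which is typical of cross-sectional incidence estimates.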

20.
Discretizing continuous distributions can lead to bias in parameter estimates. We present a case study from educational testing that illustrates dramatic consequences of discreteness when discretizing partitions differ across distributions. The percentage of test takers who score above a certain cutoff score (percent above cutoff, or “PAC”) often describes overall performance on a test. Year-over-year changes in PAC, or ΔPAC, have gained prominence under recent U.S. education policies, with public schools facing sanctions if they fail to meet PAC targets. In this article, we describe how test score distributions act as continuous distributions that are discretized inconsistently over time. We show that this can propagate considerable bias to PAC trends, where positive ΔPACs appear negative, and vice versa, for a substantial number of actual tests. A simple model shows that this bias applies to any comparison of PAC statistics in which values for one distribution are discretized differently from values for the other.
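The sign-flip phenomenon is easy to reproduce (an illustrative simulation with made-up score distributions, not actual test data): scores genuinely improve year over year, but because the two years' conversion tables discretize the continuous scale differently, the reported ΔPAC is negative.

```python
import random

random.seed(5)

CUT, N = 10, 100_000
year1 = [random.gauss(9.8, 1.5) for _ in range(N)]
year2 = [random.gauss(10.0, 1.5) for _ in range(N)]   # genuine improvement

# The two years' conversion tables discretize the continuous scale
# differently: rounding to the nearest integer in year 1, truncation in
# year 2, so the effective cutoffs are 9.5 and 10 respectively.
pac1 = sum(round(x) >= CUT for x in year1) / N
pac2 = sum(int(x) >= CUT for x in year2) / N

# PAC computed on the underlying continuous scores, for comparison
true1 = sum(x >= CUT for x in year1) / N
true2 = sum(x >= CUT for x in year2) / N

print(f"continuous PAC change {true2 - true1:+.3f}  (improvement)")
print(f"reported   PAC change {pac2 - pac1:+.3f}  (apparent decline)")
```

The inconsistent discretization moves the effective cutoff by half a score point, which is larger than the real year-over-year gain, so the reported trend reverses sign.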
