Similar Articles
20 similar articles found (search time: 15 ms)
1.
For normally distributed data analyzed with linear models, it is well known that measurement error on an independent variable leads to attenuation of the effect of the independent variable on the dependent variable. However, for time‐to‐event variables such as progression‐free survival (PFS), the effect of the measurement variability in the underlying measurements defining the event is less well understood. We conducted a simulation study to evaluate the impact of measurement variability in tumor assessment on the treatment effect hazard ratio for PFS and on the median PFS time, for different tumor assessment frequencies. Our results show that scan measurement variability can cause attenuation of the treatment effect (i.e. the hazard ratio is closer to one) and that the extent of attenuation may be increased with more frequent scan assessments. This attenuation leads to inflation of the type II error. Therefore, scan measurement variability should be minimized as far as possible in order to reveal a treatment effect that is closest to the truth. In disease settings where the measurement variability is shown to be large, consideration may be given to inflating the sample size of the study to maintain statistical power. Copyright © 2012 John Wiley & Sons, Ltd.
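As a minimal sketch of the attenuation mechanism this abstract builds on (classical measurement error in a linear model, not the authors' PFS simulation; all parameter values are illustrative), the slope of a regression on an error-prone covariate shrinks by the reliability factor sigma_x^2 / (sigma_x^2 + sigma_u^2):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = 2.0                      # true effect of the independent variable
sigma_x, sigma_u = 1.0, 1.0     # SDs of the true signal and of the measurement error

x = rng.normal(0.0, sigma_x, n)        # true covariate
w = x + rng.normal(0.0, sigma_u, n)    # error-prone measurement of x
y = beta * x + rng.normal(0.0, 1.0, n)

# OLS slope of y on the mismeasured covariate
slope = np.cov(w, y)[0, 1] / np.var(w)

# Classical attenuation (reliability) factor
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)
print(slope, beta * lam)   # the estimated slope shrinks toward beta * lambda
```

With these illustrative variances the reliability is 0.5, so the fitted slope lands near 1.0 instead of the true 2.0.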

2.
Random error in a continuous outcome variable does not affect its regression on a predictor. However, when a continuous outcome variable is dichotomised, random measurement error results in a flatter exposure-response relationship with a higher intercept. Although this consequence is similar to the effect of misclassification in a binary outcome variable, it cannot be corrected using techniques appropriate for binary data. Conditional distributions of the measurements of the continuous outcome variable can be corrected if the reliability coefficient of the measurements can be estimated. An unbiased estimate of the exposure-response relationship is then easily calculated. This procedure is demonstrated using data on the relationship between smoking and the development of airway obstruction.
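A small simulation (assuming NumPy and illustrative parameters, not the smoking data from the abstract) reproduces the described effect: after dichotomisation, extra random error in the continuous outcome flattens the exposure-response slope and raises the intercept:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(0.0, 1.0, n)                  # standardized exposure (illustrative)
y_true = x + rng.normal(0.0, 1.0, n)         # underlying continuous outcome
y_noisy = y_true + rng.normal(0.0, 1.0, n)   # extra random measurement error

cut = 1.0
d_true = (y_true > cut).astype(float)    # dichotomized, error-free measurement
d_noisy = (y_noisy > cut).astype(float)  # dichotomized after measurement error

# Linear-probability slope and intercept of the exposure-response relationship
b1, a1 = np.polyfit(x, d_true, 1)
b2, a2 = np.polyfit(x, d_noisy, 1)
print(b1, b2)  # slope is flatter with measurement error
print(a1, a2)  # intercept is higher with measurement error
```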

3.
We consider measurement error models within the time series unobserved component framework. A variable of interest is observed with some measurement error and modelled as an unobserved component. The forecast and the prediction of this variable given the observed values are given by the Kalman filter and smoother, along with their conditional variances. By expressing the forecasts and predictions as weighted averages of the observed values, we investigate the effect of estimation error in the measurement and observation noise variances. We also develop corrected standard errors for prediction and forecasting that account for the fact that the measurement and observation error variances are estimated from the same sample that is used for forecasting and prediction. We apply the theory to the Yellowstone grizzly bears and US index of production datasets.
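The Kalman filter the abstract relies on can be sketched for the simplest unobserved component model, a local level (random walk plus observation noise); this is a generic illustration with made-up variances, not the grizzly bear or production data:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
q, r = 0.1, 1.0   # level (state) and observation noise variances (illustrative)

# Simulate the local level model: mu_t = mu_{t-1} + eta_t,  y_t = mu_t + eps_t
mu = np.cumsum(rng.normal(0.0, np.sqrt(q), T))
y = mu + rng.normal(0.0, np.sqrt(r), T)

# Kalman filter recursion for the local level model
a, p = 0.0, 1e6   # near-diffuse initial state mean and variance
filt = np.empty(T)
for t in range(T):
    p = p + q                 # prediction step: variance of the predicted level
    k = p / (p + r)           # Kalman gain
    a = a + k * (y[t] - a)    # update step: blend prediction with observation
    p = (1.0 - k) * p
    filt[t] = a

mse_obs = np.mean((y - mu) ** 2)
mse_filt = np.mean((filt - mu) ** 2)
print(mse_obs, mse_filt)  # the filtered level tracks mu better than the raw data
```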

4.
Generalized linear models (GLMs) with error-in-covariates are useful in epidemiological research due to the ubiquity of non-normal response variables and inaccurate measurements. The link function in a GLM is chosen by the user depending on the type of response variable, frequently the canonical link function. When covariates are measured with error, incorrect inference can be made, compounded by an incorrect choice of link function. In this article we propose three flexible approaches for handling error-in-covariates and estimating an unknown link simultaneously. The first approach uses a fully Bayesian (FB) hierarchical framework, treating the unobserved covariate as a latent variable to be integrated over. The second and third are approximate Bayesian approaches that use a Laplace approximation to marginalize the variables measured with error out of the likelihood. Our simulation results suggest that the FB approach is often a better choice than the approximate Bayesian approaches for adjusting for measurement error, particularly when the measurement error distribution is misspecified. These approaches are demonstrated on an application with a binary response.

5.
Measurement error is a commonly addressed problem in psychometrics and the behavioral sciences, particularly where gold standard data either do not exist or are too expensive to obtain. The Bayesian approach can be utilized to adjust for the bias that results from measurement error in tests. Bayesian methods offer other practical advantages for the analysis of epidemiological data, including the possibility of incorporating relevant prior scientific information and the ability to make inferences that do not rely on large sample assumptions. In this paper we consider a logistic regression model where both the response and a binary covariate are subject to misclassification. We assume both a continuous measure and a binary diagnostic test are available for the response variable, but no gold standard test is assumed available. We consider a fully Bayesian analysis that affords such adjustments, accounting for the sources of error and correcting estimates of the regression parameters. Based on the results from our example and simulations, the models that account for misclassification produce more statistically significant results than the models that ignore misclassification. A real data example on math disorders is considered.
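A hedged illustration of why such adjustment matters: misclassification in a binary response attenuates the estimated association toward zero. The sketch below uses a simple symmetric flip model with illustrative rates, not the authors' Bayesian machinery:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
x = rng.integers(0, 2, n)                      # binary covariate
p = 1.0 / (1.0 + np.exp(-(-1.0 + 1.5 * x)))    # true logistic model (log-OR = 1.5)
y = (rng.random(n) < p).astype(int)

# Misclassify the response: flip each label with probability 0.1
flip = rng.random(n) < 0.1
y_obs = np.where(flip, 1 - y, y)

def log_or(resp, x):
    """Empirical log odds ratio of the response between x = 1 and x = 0."""
    p1 = resp[x == 1].mean()
    p0 = resp[x == 0].mean()
    return np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))

print(log_or(y, x), log_or(y_obs, x))  # the observed log-OR shrinks toward 0
```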

6.
We present a simulation study and an application showing that inclusion of binary proxy variables related to binary unmeasured confounders improves the estimate of a related treatment effect in binary logistic regression. The simulation study included 60,000 randomly generated parameter scenarios of sample size 10,000 across six different simulation structures. We assessed bias by comparing the probability of finding the expected treatment effect relative to the modeled treatment effect with and without the proxy variable. Inclusion of a proxy variable in the logistic regression model significantly reduced the bias of the treatment or exposure effect when compared to logistic regression without the proxy variable. Including proxy variables in the logistic regression model improves the estimation of the treatment effect at weak, moderate, and strong associations between the unmeasured confounders and the outcome, treatment, or proxy variables. These comparative advantages held for weakly and strongly collapsible situations, as the number of unmeasured confounders increased, and as the number of proxy variables adjusted for increased.
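The benefit of adjusting for a proxy can be sketched in a linear regression (simpler than the logistic setting studied in the paper; the coefficients and proxy construction are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
u = rng.normal(size=n)                                 # unmeasured confounder
t = (rng.random(n) < 1 / (1 + np.exp(-u))).astype(float)  # treatment depends on u
proxy = u + rng.normal(size=n)                         # noisy proxy for u
y = 1.0 * t + 1.0 * u + rng.normal(size=n)             # true treatment effect = 1

def ols_effect(cols):
    """OLS coefficient on treatment t, adjusting for the given columns."""
    X = np.column_stack([np.ones(n)] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = ols_effect([t])              # ignores the confounder entirely
with_proxy = ols_effect([t, proxy])  # partial adjustment via the proxy
print(naive, with_proxy)  # the proxy-adjusted estimate is closer to 1
```

The proxy does not remove confounding bias completely (it measures u with error), but it moves the treatment effect estimate toward the truth, mirroring the abstract's finding.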

7.
We consider data with a nominal grouping variable and a binary response variable. The grouping variable is measured without error, but the response variable is measured using a fallible device subject to misclassification. To achieve model identifiability, we use the double-sampling scheme, which requires obtaining a subsample of the original data or another independent sample. This sample is then classified with respect to the response variable by both the fallible device and an infallible device. We propose two Wald tests for testing the association between the two variables and illustrate the tests using traffic data. The type I error rate and power of the tests are examined using simulations, and a modified Wald test is recommended.

8.
Often in longitudinal data arising out of epidemiologic studies, measurement error in covariates and/or classification errors in binary responses may be present. The goal of the present work is to develop a random effects logistic regression model that corrects for the classification errors in binary responses and/or measurement error in covariates. The analysis is carried out under a Bayesian setup. A simulation study reveals the effect of ignoring measurement error and/or classification errors on the estimates of the regression coefficients.

9.
We study the effect of additive and multiplicative Berkson measurement error in the Cox proportional hazards model. By plotting the true and the observed survivor functions and the true and the observed hazard functions as functions of the exposure, one can assess the effect of this type of error on the estimation of the slope parameter corresponding to the variable measured with error. As an example, we analyze the measurement error in the situation of the German Uranium Miners Cohort Study, both with graphical methods and with a simulation study. We do not see substantial bias in the presence of small measurement error and in the rare disease case. Even the effect of a Berkson measurement error with high variance, which is not unrealistic in our example, is a negligible attenuation of the observed effect. However, this effect is more pronounced for multiplicative measurement error.
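The distinction between Berkson and classical error that underlies this result can be sketched in a linear model (illustrative parameters, not the cohort data): Berkson error leaves the slope essentially unbiased, while classical error attenuates it. The survival setting of the abstract is more subtle, but this is the basic contrast:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
beta = 1.0

# Berkson: the true exposure scatters around the observed (assigned) value
z = rng.normal(0.0, 1.0, n)                 # observed nominal exposure
x_berkson = z + rng.normal(0.0, 0.5, n)     # true exposure
y_b = beta * x_berkson + rng.normal(size=n)
slope_berkson = np.cov(z, y_b)[0, 1] / np.var(z)

# Classical: the observed value scatters around the true exposure
x_true = rng.normal(0.0, 1.0, n)
w = x_true + rng.normal(0.0, 0.5, n)        # observed error-prone exposure
y_c = beta * x_true + rng.normal(size=n)
slope_classical = np.cov(w, y_c)[0, 1] / np.var(w)

print(slope_berkson, slope_classical)  # ~1.0 versus attenuated ~0.8
```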

10.
A predictor variable or dose that is measured with substantial error may possess an error-free milestone, such that it is known with negligible error whether the value of the variable is to the left or right of the milestone. Such a milestone provides a basis for estimating a linear relationship between the true but unknown value of the error-free predictor and an outcome, because the milestone creates a strong and valid instrumental variable. The inferences are nonparametric and robust, and in the simplest cases, they are exact and distribution free. We also consider multiple milestones for a single predictor and milestones for several predictors whose partial slopes are estimated simultaneously. Examples are drawn from the Wisconsin Longitudinal Study, in which a BA degree acts as a milestone for sixteen years of education, and the binary indicator of military service acts as a milestone for years of service.
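The milestone-as-instrument idea can be sketched with a simple Wald-type instrumental variable estimator; the education numbers below are made up for illustration, and the paper's actual inferences are nonparametric rather than this parametric sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
beta = 0.5
x = rng.normal(12.0, 3.0, n)          # true years of education (illustrative)
w = x + rng.normal(0.0, 2.0, n)       # error-prone self-report
z = (x >= 16.0).astype(float)         # milestone known without error (e.g. BA degree)
y = beta * x + rng.normal(size=n)

naive = np.cov(w, y)[0, 1] / np.var(w)        # attenuated by measurement error
iv = np.cov(z, y)[0, 1] / np.cov(z, w)[0, 1]  # milestone used as an instrument
print(naive, iv)  # the IV estimate recovers the true slope
```

Because the milestone indicator depends only on the true value, it is uncorrelated with the measurement error, so the ratio of covariances consistently estimates the slope.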

11.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3150-3161
We consider a new approach to deal with nonignorable nonresponse on an outcome variable, in a causal inference framework. Assuming that a binary instrumental variable for nonresponse is available, we provide a likelihood-based approach to identify and estimate heterogeneous causal effects of a binary treatment on specific latent subgroups of units, named principal strata, defined by the nonresponse behavior under each level of the treatment and of the instrument. We show that, within each stratum, nonresponse is ignorable and respondents can be properly compared by treatment status. In order to assess our method and its robustness when the usually invoked assumptions are relaxed or misspecified, we simulate data to resemble a real experiment conducted on a panel survey comparing different methods of reducing panel attrition.

12.
Summary.  In a linear model, the effect of a continuous explanatory variable may vary across groups defined by a categorical variable, and the variable itself may be subject to measurement error. This suggests a linear measurement error model with slope-by-factor interactions. The variables that are defined by such interactions are neither continuous nor discrete, and hence it is not immediately clear how to fit linear measurement error models when interactions are present. This paper gives a corollary of a theorem of Fuller for the situation of correcting measurement errors in a linear model with slope-by-factor interactions. In particular, the error-corrected estimate of the coefficients and its asymptotic variance matrix are given in a more easily accessible form. Simulation results confirm the asymptotic normality of the coefficients in finite samples. We apply the results to data from the Seychelles Child Development Study at age 66 months, assessing the effects of exposure to mercury through consumption of fish on child development for females and males, for both prenatal and post-natal exposure.

13.
We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; that is, all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including the case where the measurement error distribution is estimated non-parametrically, as well as generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and the analysis of a nutrition data set.

14.
Researchers are increasingly using the standardized difference to compare the distribution of baseline covariates between treatment groups in observational studies. Standardized differences were initially developed in the context of comparing the means of continuous variables between two groups. However, in medical research, many baseline covariates are dichotomous. In this article, we explore the utility and interpretation of the standardized difference for comparing the prevalence of dichotomous variables between two groups. We examined the relationship between the standardized difference and the maximal difference in the prevalence of the binary variable between the two groups, the relative risk relating the prevalence of the binary variable in one group to the prevalence in the other group, and the phi coefficient measuring the correlation between treatment group and the binary variable. We found that a standardized difference of 10% (or 0.1) is equivalent to a phi coefficient of 0.05 (indicating negligible correlation) between treatment group and the binary variable.
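For two equal-sized groups the relationship between the two quantities can be checked directly from their closed forms; the prevalences below are illustrative:

```python
import numpy as np

def std_diff(p1, p2):
    """Standardized difference between two prevalences of a binary variable."""
    return (p1 - p2) / np.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / 2)

def phi(p1, p2):
    """Phi coefficient between group membership (equal group sizes) and the
    binary variable, expressed in terms of the two prevalences."""
    pbar = (p1 + p2) / 2
    return (p1 - p2) / (2 * np.sqrt(pbar * (1 - pbar)))

d = std_diff(0.55, 0.50)
r = phi(0.55, 0.50)
print(d, r)  # d is close to 0.10 while phi is close to 0.05
```

For small prevalence differences, the pooled variance in the two formulas nearly coincides, so phi is approximately half the standardized difference, matching the 0.1-versus-0.05 equivalence in the abstract.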

15.
Summary.  When experimentation on a real system is expensive, data are often collected by using cheaper, lower fidelity surrogate systems. The paper concerns response surface methods in the context of variable fidelity experimentation. We propose the use of generalized least squares to generate the predictions. We also present perhaps the first optimal designs for variable fidelity experimentation, using an extension of the expected integrated mean-squared error criterion. Numerical tests are used to compare the performance of the method with alternatives and to investigate the robustness to incorporated assumptions. The method is applied to automotive engine valve heat treatment process design in which real world data were mixed with data from two types of computer simulation.

16.
One of the objectives of personalized medicine is to take treatment decisions based on a biomarker measurement. Therefore, it is often interesting to evaluate how well a biomarker can predict the response to a treatment. To do so, a popular methodology consists of using a regression model and testing for an interaction between treatment assignment and biomarker. However, the existence of an interaction is not sufficient for a biomarker to be predictive; it is only necessary. Hence, the use of the marker‐by‐treatment predictiveness curve has been recommended. In addition to evaluating how well a single continuous biomarker predicts treatment response, it can further help to define an optimal threshold. This curve displays the risk of a binary outcome as a function of the quantiles of the biomarker, for each treatment group. Methods that assume a binary outcome or rely on a proportional hazards model for a time‐to‐event outcome have been proposed to estimate this curve. In this work, we propose some extensions for censored data. They rely on a time‐dependent logistic model, and we propose to estimate this model via inverse probability of censoring weighting. We present simulation results and three applications to prostate cancer, liver cirrhosis, and lung cancer data. They suggest that a large number of events needs to be observed to define a threshold with sufficient accuracy for clinical usefulness. They also illustrate that when the treatment effect varies with the time horizon that defines the outcome, the optimal threshold also depends on this time horizon.

17.
Combining patient-level data from clinical trials can connect rare phenomena with clinical endpoints, but statistical techniques applied to a single trial may become problematic when trials are pooled. Estimating the hazard of a binary variable unevenly distributed across trials showcases a common pooled-database issue. We studied how an unevenly distributed binary variable can compromise the integrity of fixed effect and random effects Cox proportional hazards (CPH) models. We compared fixed effect and random effects CPH models on a set of simulated datasets inspired by a 17-trial pooled database of patients presenting with ST segment elevation myocardial infarction (STEMI) and non-STEMI undergoing percutaneous coronary intervention. An unevenly distributed covariate can bias hazard ratio estimates, inflate standard errors, raise the type I error rate, and reduce power. While unevenness causes problems for all CPH models, random effects models suffer least: compared to fixed effect models, they show lower bias and trade inflated type I errors for improved power. Contrasting hazard rates between trials prevented accurate estimates from both fixed and random effects models.

18.
The measurement error model (MEM) is an important model in statistics because, in a regression problem, ignoring measurement error in the explanatory variable can seriously distort the statistical inferences. In this paper, we revisit the MEM when both the response and explanatory variables are further subject to rounding errors. Additionally, the use of a normal mixture distribution to increase robustness against misspecification of the distribution of the explanatory variables in measurement error regression is in line with recent developments. This paper proposes a new method for estimating the model parameters. It can be proved that the estimates obtained by the new method possess the properties of consistency and asymptotic normality.

19.
In this paper we consider the impact of both missing data and measurement errors on a longitudinal analysis of participation in higher education in Australia. We develop a general method for handling both discrete and continuous measurement errors that also allows for the incorporation of missing values and random effects in both binary and continuous response multilevel models. Measurement errors are allowed to be mutually dependent and their distribution may depend on further covariates. We show that our methodology works via two simple simulation studies. We then consider the impact of our measurement error assumptions on the analysis of the real data set.

20.
ABSTRACT

The measurement error model with replicated data on both the study and explanatory variables is considered. The measurement error variance associated with the explanatory variable is estimated using the complete data and the grouped data, and this estimate is used in the construction of consistent estimators of the regression coefficient. These estimators are further used in constructing an almost unbiased estimator of the regression coefficient. The large sample properties of these estimators are derived without assuming any distributional form for the measurement errors and the random error component, under the setup of an ultrastructural model.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号