Similar Articles: 16 results found
1.
We study the effect of additive and multiplicative Berkson measurement error in the Cox proportional hazards model. By plotting the true and observed survivor functions and the true and observed hazard functions as functions of exposure, one can gauge the effect of this type of error on the estimation of the slope parameter corresponding to the variable measured with error. As an example, we analyze measurement error in the setting of the German Uranium Miners Cohort Study, both with graphical methods and with a simulation study. We do not see substantial bias in the presence of small measurement error or in the rare-disease case. Even the effect of a Berkson measurement error with high variance, which is not unrealistic in our example, is a negligible attenuation of the observed effect. This attenuation is, however, more pronounced for multiplicative measurement error.
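A minimal simulation sketch of this setup, assuming an exponential (constant baseline hazard) model fitted by maximum likelihood rather than the Cox partial likelihood the paper uses; the parameter values (`beta_true`, `sigma_u`) are invented for illustration. Under additive Berkson error the true exposure scatters around the observed value, and in this loglinear setting the slope estimated from the observed exposure stays close to the truth, matching the "negligible attenuation" finding above:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, beta_true, sigma_u = 5000, 0.5, 0.3

x_obs = rng.normal(0.0, 1.0, n)                # assigned/observed exposure
x_true = x_obs + rng.normal(0.0, sigma_u, n)   # Berkson: truth = observed + error
t = rng.exponential(1.0 / np.exp(beta_true * x_true))  # hazard exp(beta * x_true)

def neg_loglik(params, x, t):
    """Negative log-likelihood of an exponential hazard exp(log_lam0 + beta*x)."""
    log_lam0, beta = params
    lam = np.exp(log_lam0 + beta * x)
    return -(np.log(lam) - lam * t).sum()

fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(x_obs, t))
# slope stays near beta_true; the Berkson error is absorbed by the intercept
print("estimated slope:", fit.x[1])
```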

2.
The revised ICH E14 Questions and Answers (R3) document issued in December 2015 enables pharmaceutical companies to use concentration-QTc (C-QTc) modeling as the primary analysis for assessing the QTc prolongation risk of new drugs. A new approach that incorporates a time effect into the current C-QTc model is introduced. Through a simulation study, we evaluated the performance of C-QTc models with different dependent variables, covariates, and covariance structures. The simulation shows that C-QTc models with ΔQTc as the dependent variable but without a time effect inflate the false negative rate, and that the choice of dependent variable, covariates, and covariance structure affects the control of both the false negative and false positive rates. C-QTc modeling strategies with good control of the false negative and false positive rates are recommended.
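The role of the time effect can be illustrated with a toy dataset (my construction, not the paper's simulation design). The sketch assumes `statsmodels` is available and fits a random-intercept C-QTc model with and without a categorical time effect; because the invented circadian-like time course is confounded with the declining concentration profile, omitting time distorts the concentration slope:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj = 40
times = np.array([1.0, 2.0, 4.0, 8.0])            # hours post-dose
sid = np.repeat(np.arange(n_subj), times.size)
time = np.tile(times, n_subj)
conc = rng.lognormal(0.0, 0.5, n_subj)[sid] * np.exp(-0.2 * time)  # PK decay
time_eff = np.array([0.0, 3.0, 5.0, 2.0])[np.searchsorted(times, time)]
dqtc = (2.0 + 1.5 * conc + time_eff               # true slope 1.5 ms per unit
        + rng.normal(0, 2, n_subj)[sid] + rng.normal(0, 3, sid.size))
df = pd.DataFrame({"dQTc": dqtc, "conc": conc, "time": time, "sid": sid})

with_time = smf.mixedlm("dQTc ~ conc + C(time)", df, groups=df["sid"]).fit()
no_time = smf.mixedlm("dQTc ~ conc", df, groups=df["sid"]).fit()
print("slope with time effect:   ", with_time.params["conc"])
print("slope without time effect:", no_time.params["conc"])  # confounded
```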

3.
Measurement error is well known to cause bias in estimated regression coefficients and a loss of power for detecting associations. Methods commonly used to correct for the bias often require auxiliary data. We develop a solution for investigating associations between the change in an imprecisely measured outcome and precisely measured predictors, adjusting for the baseline value of the outcome, when auxiliary data are not available. The solution requires only the specification of ranges for the reliability or for the measurement error variance; it allows one to investigate the associations for change and to assess the impact of the measurement error.
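One way to implement a reliability-range sensitivity analysis is a generic method-of-moments errors-in-variables correction; the sketch below is that standard device under assumed values, not the authors' estimator. Note that the baseline measurement error also enters the observed change `y = w1 - w0`, which is why the cross-product vector gets an opposite-sign correction:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
base = rng.normal(size=n)                    # true baseline outcome
z = 0.6 * base + rng.normal(0, 0.8, n)       # precisely measured predictor
follow = 0.7 * base + 0.4 * z + rng.normal(0, 0.5, n)  # true z-effect 0.4
w0 = base + rng.normal(0, 0.6, n)            # observed baseline (error sd 0.6)
w1 = follow + rng.normal(0, 0.6, n)          # observed follow-up
y = w1 - w0                                  # observed change

X = np.column_stack([np.ones(n), z, w0])
for rel in (1.00, 0.85, 0.73):               # assumed reliability of w0
    s2u = (1 - rel) * w0.var()               # implied error variance
    XtX = X.T @ X
    XtX[2, 2] -= n * s2u                     # moment correction: error in w0
    Xty = X.T @ y
    Xty[2] += n * s2u                        # same error enters y = w1 - w0
    print(rel, np.linalg.solve(XtX, Xty)[1]) # coefficient of z across the range
```

With the true reliability (about 0.73 here), the corrected coefficient of `z` recovers 0.4; assuming reliability 1.0 reproduces the naive, biased fit.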

4.
Because of the recent regulatory emphasis on issues related to drug-induced cardiac repolarization that can potentially lead to sudden death, QT interval analysis has received much attention in the clinical trial literature. The analysis of QT data is complicated by the fact that the QT interval is correlated with heart rate and other prognostic factors. Several attempts have been made in the literature to derive an optimal method for correcting the QT interval for heart rate; however, the QT correction formulae obtained are not universal because of substantial variability observed across different patient populations. It is demonstrated in this paper that the widely used fixed QT correction formulae do not provide an adequate fit to QT and RR data and bias estimates of the treatment effect. It is also shown that QT correction formulae derived from baseline data in clinical trials are likely to lead to Type I error rate inflation. This paper develops a QT interval analysis framework based on repeated-measures models accommodating the correlation between the QT interval and heart rate and the correlation among QT measurements collected over time. The proposed method of QT analysis controls the Type I error rate and is at least as powerful as traditional QT correction methods with respect to detecting drug-related QT interval prolongation. Copyright © 2003 John Wiley & Sons, Ltd.
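The contrast between fixed and data-driven corrections is easy to see numerically. The sketch below uses invented QT-RR parameters and does not reproduce the paper's repeated-measures framework: it applies the standard Bazett (QT/RR^(1/2)) and Fridericia (QT/RR^(1/3)) formulae and compares them with an exponent estimated by regressing log QT on log RR, checking how much residual QTc-RR correlation each correction leaves:

```python
import numpy as np

rng = np.random.default_rng(3)
n, true_slope = 500, 0.25                     # QT-RR slope on the log scale
rr = rng.uniform(0.6, 1.2, n)                 # RR interval in seconds
qt = 400 * rr ** true_slope * np.exp(rng.normal(0, 0.02, n))  # QT in ms

qtc_bazett = qt / rr ** 0.5                   # fixed correction, exponent 1/2
qtc_fridericia = qt / rr ** (1 / 3)           # fixed correction, exponent 1/3
b = np.polyfit(np.log(rr), np.log(qt), 1)[0]  # data-driven exponent

print("fitted exponent:", b)
# a fixed exponent that mismatches the data leaves residual QTc-RR correlation
print("residual corr (Bazett):    ", np.corrcoef(qtc_bazett, rr)[0, 1])
print("residual corr (Fridericia):", np.corrcoef(qtc_fridericia, rr)[0, 1])
print("residual corr (fitted):    ", np.corrcoef(qt / rr ** b, rr)[0, 1])
```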

5.
In longitudinal studies, missing responses and mismeasured covariates commonly arise from the data collection process. Without caution in the data analysis, inferences from standard statistical approaches may lead to wrong conclusions. To improve estimation in longitudinal data analysis, we propose a doubly robust estimation method for partially linear models that simultaneously accounts for missing responses and mismeasured covariates. Covariate imprecision is corrected by exploiting the independence between replicate measurement errors, and missing responses are handled by doubly robust estimation under the missing-at-random mechanism. The asymptotic properties of the proposed estimators are established under regularity conditions, and simulation studies demonstrate the desired properties. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study.
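A toy sketch of the two ingredients, not the authors' partially linear estimator: replicate covariate measurements are averaged so that their independent errors partially cancel, and an augmented inverse-probability-weighted (AIPW, doubly robust) estimator handles responses missing at random. For simplicity the true response propensity is used as the "known" missingness model:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)                         # true covariate
w = x[:, None] + rng.normal(0, 0.5, (n, 2))    # two replicates, independent errors
xhat = w.mean(axis=1)                          # averaged surrogate for x
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)        # true mean of y is 1.0
p_obs = 1 / (1 + np.exp(-(0.5 + xhat)))        # MAR: missingness depends on xhat
r = rng.uniform(size=n) < p_obs                # r = 1 if y is observed

# outcome model fit on completers, using the averaged surrogate
b = np.polyfit(xhat[r], y[r], 1)
m = np.polyval(b, xhat)                        # predicted y for everyone
y_fill = np.where(r, y, 0.0)                   # unobserved y never used directly
mu_aipw = np.mean(r * y_fill / p_obs - (r - p_obs) / p_obs * m)
print("AIPW mean of y:", mu_aipw, "  truth:", 1.0)
```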

6.
The presence of measurement error may cause bias in parameter estimation and can lead to incorrect conclusions in data analyses. Despite a large body of literature on general measurement error problems, relatively little work exists on Poisson models. In this article we thoroughly study Poisson models with errors in covariates and propose consistent and locally efficient semiparametric estimators. We assess the finite-sample performance of the estimators through extensive simulation studies and illustrate the proposed methodologies by analyzing data from the Stroke Recovery in Underserved Populations Study. The Canadian Journal of Statistics 47: 157–181; 2019 © 2019 Statistical Society of Canada
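The attenuation caused by covariate error in a Poisson model, and one standard fix, can be sketched as follows. This uses regression calibration, a generic correction, rather than the semiparametric estimators the paper proposes; the error variance is assumed known and `statsmodels` is assumed available:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, beta, s2x, s2u = 5000, 0.4, 1.0, 0.5
x = rng.normal(0, np.sqrt(s2x), n)             # true covariate
w = x + rng.normal(0, np.sqrt(s2u), n)         # classical measurement error
y = rng.poisson(np.exp(0.2 + beta * x))        # Poisson outcome

naive = sm.GLM(y, sm.add_constant(w), family=sm.families.Poisson()).fit()
lam = s2x / (s2x + s2u)                        # reliability ratio
x_rc = w.mean() + lam * (w - w.mean())         # E[X | W] under normality
rc = sm.GLM(y, sm.add_constant(x_rc), family=sm.families.Poisson()).fit()
print("naive:", naive.params[1], " calibrated:", rc.params[1], " truth:", beta)
```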

7.
In the literature there are many results on the consequences of mis-specified models for linear models with error in the response only; see, e.g., Seber (1977). There are also discussions of estimation for models with errors both in the response and in the predictor variables, called measurement error models; see, e.g., Fuller (1987). In this paper, we consider the problem of model mis-specification for measurement error models. Only a few special cases have been tackled in the past (Edland, 1996; Carroll and Ruppert, 1996; Lakshminarayanan and Gunst, 1984); here we deal with the situation in some generality. Results have been obtained as follows: (a) When a model is under-fitted, the estimate of the measurement error variance will be asymptotically biased, as will the estimates of the regression coefficients, and these asymptotic biases always exist for under-fitted models; even orthogonality of the variables in the model will not make the biases vanish. (b) For over-fitting, the estimates of the measurement error variances and of the regression coefficients are asymptotically unbiased, but the variances of the estimated regression coefficients increase. Over-fitting causes larger increases in the variances of the estimated parameters in measurement error models than in models without measurement error.
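Point (b) can be checked with a small Monte Carlo experiment; this is my construction, assuming a known error variance and a moment-corrected least squares estimator rather than whatever estimator the paper analyzes. Adding a superfluous error-prone covariate inflates the sampling variability of the corrected slope more than adding an error-free one does:

```python
import numpy as np

rng = np.random.default_rng(6)

def sd_of_corrected_slope(with_error, n=400, reps=1000, s2u=0.5):
    est = []
    for _ in range(reps):
        x = rng.normal(size=(n, 2))               # orthogonal true covariates
        y = 1.0 * x[:, 0] + rng.normal(0, 1, n)   # x2 is superfluous (over-fit)
        w = x + rng.normal(0, np.sqrt(s2u), (n, 2)) if with_error else x
        W = np.column_stack([np.ones(n), w])
        XtX = W.T @ W
        if with_error:                            # moment correction, known s2u
            XtX[1, 1] -= n * s2u
            XtX[2, 2] -= n * s2u
        est.append(np.linalg.solve(XtX, W.T @ y)[1])
    return np.std(est)

print("sd of slope, over-fit, no measurement error:  ", sd_of_corrected_slope(False))
print("sd of slope, over-fit, with measurement error:", sd_of_corrected_slope(True))
```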

8.
9.
A complication that may arise in some bioequivalence studies is that of 'incomplete subject profiles', caused by missing values at one or more sampling points in the concentration–time curve for some study subjects. We assess the impact of incomplete subject profiles on the assessment of bioequivalence in a standard two-period crossover design. The specific aim of the investigation is to assess the impact of four different patterns of missing concentration values on the coverage level of a 90% nominal two-sided confidence interval for the ratio of geometric means, and then to consider the impact on the probability of concluding bioequivalence. An overall conclusion from the results is that random missingness – that is, missingness for reasons unrelated to the bioavailability of the formulation involved or, more generally, to any aspect of the study design and conduct – has a damaging effect on the study conclusions only when the number of missing values is fairly large. On the other hand, a missingness pattern that can be very damaging to the study conclusions arises when values are missing 'late' in the concentration–time curve. Copyright © 2005 John Wiley & Sons, Ltd.
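The 'late missingness' effect can be mimicked with a toy kinetic model. The sketch assumes one-compartment exponential decay and, for brevity, a parallel-group comparison rather than the paper's two-period crossover; dropping the last sampling points truncates the trapezoidal AUC and shifts the 90% confidence interval for the geometric mean ratio:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
t_full = np.array([0.5, 1, 2, 4, 8, 12, 24.0])   # sampling times (h)
n = 24

def log_auc(ke_mean, tpts):
    """Log trapezoidal AUC for n subjects with lognormal elimination rates."""
    ke = rng.lognormal(np.log(ke_mean), 0.2, n)
    conc = 100 * np.exp(-np.outer(ke, tpts)) * np.exp(rng.normal(0, 0.1, (n, len(tpts))))
    return np.log(np.trapz(conc, tpts, axis=1))

for tpts in (t_full, t_full[:-2]):               # full vs. last two points missing
    lt, lr = log_auc(0.10, tpts), log_auc(0.11, tpts)  # test vs. reference
    d = lt.mean() - lr.mean()
    se = np.sqrt(lt.var(ddof=1) / n + lr.var(ddof=1) / n)
    lo, hi = d + np.array([-1, 1]) * stats.t.ppf(0.95, 2 * n - 2) * se
    print(np.exp([lo, d, hi]))                   # 90% CI for the GM ratio
```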

10.
Family-based case–control designs are commonly used in epidemiological studies for evaluating the role of genetic susceptibility and environmental exposure to risk factors in the etiology of rare diseases. Within this framework, it is often reasonable to assume that genetic susceptibility and environmental exposure are conditionally independent of each other within families in the source population. We focus on this setting and explore the situation in which measurement error affects the assessment of environmental exposure. We correct for measurement error through a likelihood-based method, exploiting a conditional likelihood approach to relate the probability of disease to the genetic and environmental risk factors. We show that this approach provides less biased and more efficient results than one based on logistic regression. Regression calibration, by contrast, provides severely biased estimators of the parameters. The correction methods are compared through simulation under common measurement error structures.
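A simplified numerical illustration of the regression calibration problem, using ordinary logistic regression instead of the paper's family-matched conditional likelihood (all parameter values invented): classical error in the environmental exposure distorts the G, E, and G×E log-odds ratios, and plugging in the calibrated exposure repairs them only partially in this nonlinear model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n, s2x, s2u = 20000, 1.0, 1.0
g = rng.binomial(1, 0.3, n)                    # genetic factor, independent of E
e = rng.normal(0, np.sqrt(s2x), n)             # true environmental exposure
w = e + rng.normal(0, np.sqrt(s2u), n)         # error-prone measurement
eta = -2.0 + 0.5 * g + 0.4 * e + 0.6 * g * e   # true log-odds ratios
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

lam = s2x / (s2x + s2u)                        # reliability ratio
e_rc = lam * w                                 # E[E | W] under normality
for label, xcol in (("naive", w), ("regression calibration", e_rc)):
    X = sm.add_constant(np.column_stack([g, xcol, g * xcol]))
    print(label, sm.Logit(y, X).fit(disp=0).params[1:])  # G, E, GxE estimates
```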

11.
In this paper, a simulation study is conducted to systematically investigate the impact of different types of missing data on six statistical analyses, namely four likelihood-based linear mixed effects models and analysis of covariance (ANCOVA) applied to two different data sets, in non-inferiority trial settings for the analysis of longitudinal continuous data. ANCOVA is valid when the missing data are missing completely at random; likelihood-based linear mixed effects model approaches are valid when the missing data are missing at random; and the pattern-mixture model (PMM) was developed to accommodate a non-random missingness mechanism. Our simulations suggest that two linear mixed effects models, one using an unstructured covariance matrix for within-subject correlation with no random effects and one using a first-order autoregressive covariance matrix for within-subject correlation with random coefficient effects, provide good control of the type 1 error (T1E) rate when the missing data are missing completely at random or missing at random. ANCOVA applied to a data set imputed by last observation carried forward (LOCF) is the worst method in terms of bias and T1E rate. PMM does not show much improvement in controlling the T1E rate compared with the other linear mixed effects models when the missing data are not missing at random, and it is markedly inferior when the missing data are missing at random. Copyright © 2009 John Wiley & Sons, Ltd.
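The LOCF failure mode is easy to reproduce. Below is a toy version with missing-at-random dropout driven by the last observed value, not the paper's full simulation design; carrying the last observation forward freezes subjects at early, lower values of an increasing outcome:

```python
import numpy as np

rng = np.random.default_rng(9)
n, visits = 2000, 4
subj = rng.normal(0, 1, n)                   # subject-level random intercepts
y = subj[:, None] + np.arange(1, visits + 1) + rng.normal(0, 1, (n, visits))

last = np.full(n, visits - 1)                # index of last observed visit
for t in range(visits - 1):                  # MAR: high value raises dropout risk
    p_drop = 1 / (1 + np.exp(-(y[:, t] - 3.0)))
    drops = (rng.uniform(size=n) < p_drop) & (last == visits - 1)
    last[drops] = t

locf_final = y[np.arange(n), last]           # LOCF-imputed final-visit value
print("true final-visit mean:      ", y[:, -1].mean())          # about 4
print("LOCF final-visit mean:      ", locf_final.mean())        # biased low
print("completers final-visit mean:", y[last == visits - 1, -1].mean())
```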

12.
We study the benefit of exploiting the gene–environment independence (GEI) assumption for inferring the joint effect of genotype and environmental exposure on disease risk in a case–control study. By transforming the problem into a constrained maximum likelihood estimation problem, we derive the asymptotic distribution of the maximum likelihood estimator under the GEI assumption (MLE-GEI) in closed form. Our approach uncovers a transparent explanation of the efficiency gained by exploiting the GEI assumption in more general settings, thus bridging an important gap in the existing literature. Moreover, we propose an easy-to-implement numerical algorithm for estimating the model parameters in practice. Finally, we conduct simulation studies to compare the proposed method with the traditional prospective logistic regression method and the case-only estimator. The Canadian Journal of Statistics 47: 473–486; 2019 © 2019 Statistical Society of Canada
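The efficiency contrast with the case-only estimator can be sketched directly (parameter values assumed; this is not the constrained-MLE algorithm the paper derives). Under GEI and a rare disease, regressing G on E among cases alone recovers the interaction log-odds ratio, typically with a smaller standard error than the prospective logistic fit:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
N = 200000
g = rng.binomial(1, 0.3, N)                     # genotype
e = rng.binomial(1, 0.4, N)                     # exposure, independent of g (GEI)
eta = -5.0 + 0.3 * g + 0.4 * e + 0.5 * g * e    # rare disease, GxE = 0.5
d = rng.binomial(1, 1 / (1 + np.exp(-eta)))

cases = np.flatnonzero(d == 1)                  # case-control sample, 1:1
ctrls = rng.choice(np.flatnonzero(d == 0), size=cases.size, replace=False)
idx = np.concatenate([cases, ctrls])

X = sm.add_constant(np.column_stack([g[idx], e[idx], g[idx] * e[idx]]))
full = sm.Logit(d[idx], X).fit(disp=0)
case_only = sm.Logit(g[cases], sm.add_constant(e[cases])).fit(disp=0)
print("logistic GxE: ", full.params[3], "+/-", full.bse[3])
print("case-only GxE:", case_only.params[1], "+/-", case_only.bse[1])
```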

13.
This paper is motivated by our attempt to answer a policy question: how is private health insurance take-up in Australia affected by the income threshold at which the Medicare Levy Surcharge (MLS) kicks in? We propose a new difference deconvolution kernel estimator for the location and size of regression discontinuities, together with a bootstrap procedure for estimating a confidence interval for the estimated discontinuity. The performance of the estimator is evaluated by Monte Carlo simulation before it is applied, using contaminated data, to estimate the effect of the MLS income threshold on the take-up of private health insurance in Australia.
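A compact sketch in the spirit of a difference deconvolution estimator, not the authors' exact method: for Laplace(0, b) measurement error, the deconvoluting version of a Gaussian kernel has the closed form K*(u) = (1 - (b/h)^2 (u^2 - 1)) φ(u), and we scan candidate cutoffs for the largest one-sided difference. The crude one-sided split on the contaminated values and all tuning constants are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(11)
n, b, h = 4000, 0.2, 0.25
x = rng.uniform(-1, 1, n)                       # true running variable
w = x + rng.laplace(0, b, n)                    # contaminated observation
y = np.sin(x) + 0.8 * (x > 0.3) + rng.normal(0, 0.3, n)  # jump 0.8 at x = 0.3

phi = lambda u: np.exp(-u * u / 2) / np.sqrt(2 * np.pi)
kdec = lambda u: (1 - (b / h) ** 2 * (u * u - 1)) * phi(u)  # Laplace deconvolution

def one_sided_fit(c, side):
    """Deconvolution-kernel local average of y on one side of candidate cutoff c."""
    u = (w - c) / h
    wt = kdec(u) * ((w > c) if side > 0 else (w < c))
    return (wt * y).sum() / wt.sum()

grid = np.linspace(-0.8, 0.8, 161)
jumps = np.array([one_sided_fit(c, +1) - one_sided_fit(c, -1) for c in grid])
best = np.argmax(np.abs(jumps))
print("estimated cutoff:", grid[best], " estimated jump:", jumps[best])
```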

14.
A version of the nonparametric bootstrap that resamples entire subjects from the original data, called the case bootstrap, has been increasingly used for estimating the uncertainty of parameters in mixed-effects models. It is usually applied to obtain more robust estimates of the parameters and more realistic confidence intervals (CIs). Alternative bootstrap methods, such as the residual bootstrap and the parametric bootstrap that resample both random effects and residuals, have been proposed to better take into account the hierarchical structure of multi-level and longitudinal data. However, few studies have compared these approaches. In this study, we used simulation to evaluate the bootstrap methods proposed for linear mixed-effects models, and we compared the results obtained by maximum likelihood (ML) and restricted maximum likelihood (REML). Our simulation studies demonstrated the good performance of the case bootstrap as well as the bootstraps of both random effects and residuals. On the other hand, the bootstrap methods that resample only the residuals, and the bootstraps combining case and residual resampling, performed poorly. REML and ML provided similar bootstrap estimates of uncertainty, but with ML there was slightly more bias and poorer coverage for the variance parameters in the sparse design. We applied the methods to a real dataset from a study investigating the natural evolution of Parkinson's disease and confirmed that they provide plausible estimates of uncertainty. Given that most real-life datasets tend to exhibit heterogeneity in sampling schedules, the residual bootstraps would be expected to perform better than the case bootstrap. Copyright © 2013 John Wiley & Sons, Ltd.
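The case bootstrap versus the random-effects-plus-residual bootstrap can be contrasted in a few lines. This bare-bones sketch uses a random-intercept model with ordinary least squares point estimates instead of the ML/REML fits compared in the paper, and a balanced design so that resampling whole subjects keeps the time points aligned:

```python
import numpy as np

rng = np.random.default_rng(12)
n_subj, n_obs, B = 60, 5, 500
t = np.tile(np.arange(n_obs), n_subj)            # balanced time points
sid = np.repeat(np.arange(n_subj), n_obs)
u = rng.normal(0, 1.0, n_subj)                   # random intercepts
y = 2.0 + 0.5 * t + u[sid] + rng.normal(0, 0.7, sid.size)

slope = lambda yy, tt: np.polyfit(tt, yy, 1)[0]
fitted = np.polyval(np.polyfit(t, y, 1), t)      # population-level fit
u_hat = np.array([(y - fitted)[sid == i].mean() for i in range(n_subj)])
resid = y - fitted - u_hat[sid]                  # within-subject residuals

case, re_resid = [], []
for _ in range(B):
    pick = rng.integers(0, n_subj, n_subj)       # case bootstrap: whole subjects
    idx = (pick[:, None] * n_obs + np.arange(n_obs)).ravel()
    case.append(slope(y[idx], t[idx]))
    ub = rng.choice(u_hat, n_subj)               # resample random effects...
    eb = rng.choice(resid, sid.size)             # ...and residuals
    re_resid.append(slope(fitted + ub[sid] + eb, t))

print("case bootstrap SE of slope:          ", np.std(case))
print("RE + residual bootstrap SE of slope: ", np.std(re_resid))
```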

15.
Recently, non-uniform sampling has been suggested in microscopy to increase efficiency. More precisely, probability proportional to size (PPS) sampling has been introduced, in which the probability of sampling a unit in the population is proportional to the value of an auxiliary variable. In the microscopy application, the sampling units are fields of view, and the auxiliary variables are easily observed approximations to the variables of interest. Unfortunately, some auxiliary variables often vanish, that is, they are zero-valued; consequently, part of the population is inaccessible under PPS sampling. We propose a modification of the design based on a stratification idea, for which an optimal solution can be found using a model-assisted approach. The new optimal design also applies to the case where 'vanish' refers to missing auxiliary variables, and it is of independent interest in sampling theory. We verify the robustness of the new approach through numerical results and illustrate its applicability with real data.
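A schematic version of the stratification fix, with an ad hoc allocation rather than the paper's optimal model-assisted solution: zero-auxiliary units can never be drawn under PPS, so they are sampled by simple random sampling in their own stratum and the two stratum totals are combined:

```python
import numpy as np

rng = np.random.default_rng(13)
N = 1000
aux = rng.exponential(2.0, N) * rng.binomial(1, 0.7, N)  # ~30% of auxiliaries vanish
y = 1.0 + 3.0 * aux + rng.normal(0, 1, N)                # variable of interest
pos, zero = np.flatnonzero(aux > 0), np.flatnonzero(aux == 0)

n_pos, n_zero = 80, 20                         # ad hoc allocation between strata
p = aux[pos] / aux[pos].sum()                  # single-draw PPS probabilities
k = rng.choice(pos.size, n_pos, p=p)           # PPS with replacement, stratum 1
t_pos = np.mean(y[pos[k]] / p[k])              # Hansen-Hurwitz stratum total
s0 = rng.choice(zero.size, n_zero, replace=False)
t_zero = zero.size * y[zero[s0]].mean()        # SRS expansion, zero stratum
print("estimated total:", t_pos + t_zero, "  true total:", y.sum())
```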

16.
For first-time-in-human studies with small molecules, alternating cross-over designs are often employed, and at study end the data are analyzed using linear models. We discuss the impact of including a period effect in the model on the precision with which dose level contrasts can be estimated, and we quantify the bias of the least squares estimators if a period effect inherent in the data is not accounted for in the model. We also propose two alternative designs that allow more precise estimation of dose level contrasts than the standard design when period effects are included in the model. Copyright © 2010 John Wiley & Sons, Ltd.
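The bias from an ignored period effect shows up whenever dose is unbalanced across periods. The sketch below uses an invented alternating design and effect sizes (not the paper's proposed designs): dose 2 appears only in the last period, so omitting the period terms pushes the period trend into its contrast:

```python
import numpy as np

rng = np.random.default_rng(14)
# invented alternating design: periods 1-2 alternate dose 1 and placebo across
# two cohorts; period 3 has dose 2 in one cohort and placebo in the other
dose = np.array([1, 0, 0, 1, 2, 0] * 10)
period = np.array([1, 1, 2, 2, 3, 3] * 10)
y = 5.0 + 1.0 * dose + 0.8 * period + rng.normal(0, 1, dose.size)

def dose2_contrast(include_period):
    cols = [np.ones_like(y), dose == 1, dose == 2]
    if include_period:
        cols += [period == 2, period == 3]
    X = np.column_stack(cols).astype(float)
    return np.linalg.lstsq(X, y, rcond=None)[0][2]   # dose-2 vs placebo

print("with period terms:   ", dose2_contrast(True))   # near the truth, 2.0
print("without period terms:", dose2_contrast(False))  # absorbs the period trend
```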
