Similar Documents
20 similar documents found.
1.
In this article, we propose a flexible parametric (FP) approach for adjusting for covariate measurement errors in regression that can accommodate replicated measurements on the surrogate (mismeasured) version of the unobserved true covariate, taken either on all of the study subjects or on a sub-sample of them, as error assessment data. We build on the general framework of the FP approach proposed by Hossain and Gustafson in 2009 for adjusting for covariate measurement errors in regression. The FP approach is then compared with existing non-parametric approaches when error assessment data are available on the entire sample of study subjects (complete error assessment data), considering covariate measurement error in a multiple logistic regression model. We also develop the FP approach for the case where error assessment data are available on only a sub-sample of the study subjects (partial error assessment data) and investigate its performance using both simulated and real-life data. Simulation results reveal that, in comparable situations, the FP approach performs as well as or better than the competing non-parametric approaches in eliminating the bias that covariate measurement errors introduce into the estimated regression parameters, and it also yields more efficient parameter estimates. Finally, the FP approach performs adequately in terms of bias correction, confidence coverage, and achieving appropriate statistical power under partial error assessment data.
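As a quick illustration of the kind of correction that replicate error-assessment data make possible, here is a regression-calibration sketch in Python. This is a simpler stand-in for the FP approach, not the authors' estimator, and every data-generating value is an assumption:

```python
# Regression calibration with replicated surrogates, assuming a classical
# additive error model W = X + U. All parameter values are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, k = 2000, 2                        # subjects, replicates per subject
beta0, beta1 = -1.0, 1.0              # true logistic coefficients
x = rng.normal(0, 1, n)               # unobserved true covariate
w = x[:, None] + rng.normal(0, 0.8, (n, k))        # replicated surrogates
y = rng.binomial(1, 1 / (1 + np.exp(-(beta0 + beta1 * x))))

wbar = w.mean(axis=1)
s2_u = np.mean(w.var(axis=1, ddof=1))              # error variance from replicates
s2_x = wbar.var(ddof=1) - s2_u / k                 # variance of true covariate
lam = s2_x / (s2_x + s2_u / k)                     # attenuation factor
x_hat = wbar.mean() + lam * (wbar - wbar.mean())   # approx E[X | Wbar]

naive = sm.Logit(y, sm.add_constant(wbar)).fit(disp=0)
calib = sm.Logit(y, sm.add_constant(x_hat)).fit(disp=0)
print("naive slope:      %.3f" % naive.params[1])  # attenuated toward 0
print("calibrated slope: %.3f" % calib.params[1])  # much less attenuated
```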

2.
Proportional hazards models for survival data, though popular and numerically convenient, suffer from the restrictive assumption that covariate effects are constant over survival time. A number of tests have been proposed to check this assumption. This paper contributes to this area by employing local estimates that allow fitting hazard models in which covariate effects vary smoothly with time. A formal test is derived to check proportional hazards against smooth hazards as the alternative. The test proves to possess omnibus power in that it is powerful against arbitrary but smooth alternatives. Comparative simulations and two data examples accompany the presentation. Extensions are provided to multiple-covariate settings, where the focus of interest is to decide which of the covariate effects vary with time.
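For intuition, the sketch below uses the standard Grambsch-Therneau check based on scaled Schoenfeld residuals from the lifelines package, rather than the authors' local-estimate test. The simulated covariate effect (all values assumed) vanishes after time 5, so a proportionality test should reject:

```python
# Simulate a hazard with a time-varying effect and run a PH check.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(2)
n = 2000
z = rng.binomial(1, 0.5, n).astype(float)
ee = rng.exponential(1.0, n)                 # unit-rate exponentials
# beta(t) = 1.5 for t < 5 and 0 afterwards (for z = 1); baseline hazard 0.1
h1, brk = 0.1 * np.exp(1.5), 5.0
t0 = ee / 0.1                                # event times when z = 0
t1 = np.where(ee < h1 * brk, ee / h1, brk + (ee - h1 * brk) / 0.1)
t = np.where(z == 1, t1, t0)
c = rng.uniform(0, 25, n)                    # hypothetical uniform censoring
df = pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int), "z": z})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
res = proportional_hazard_test(cph, df, time_transform="rank")
res.print_summary()                          # small p-value flags non-proportionality
```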

3.
We propose covariate-adjusted correlation (Cadcor) analysis to target the correlation between two hidden variables that are observed after being multiplied by an unknown function of a common observable confounding variable. The distorting effects of this confounding may alter the correlation relation between the hidden variables. Covariate-adjusted correlation analysis enables consistent estimation of this correlation by targeting the definition of correlation through the slopes of the regressions of the hidden variables on each other and by establishing a connection to varying-coefficient regression. The asymptotic distribution of the resulting adjusted correlation estimate is established. These distribution results, combined with proposed consistent estimates of the asymptotic variance, lead to the construction of approximate confidence intervals and inference for adjusted correlations. We illustrate our approach through an application to the Boston house price data. Finite-sample properties of the proposed procedures are investigated through a simulation study.
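A simple way to see why adjustment can recover the hidden correlation: within a narrow slice of the confounder the multiplicative distortions are roughly constant, and correlation is invariant to positive rescaling. The sketch below exploits this with a binning heuristic; it is not the authors' varying-coefficient estimator, and the distortion functions are assumptions:

```python
# Binning heuristic in the spirit of Cadcor: pool within-bin correlations.
import numpy as np

rng = np.random.default_rng(3)
n, rho = 5000, 0.6
u = rng.uniform(1, 2, n)                          # observed confounder
xy = rng.multivariate_normal([2, 2], [[1, rho], [rho, 1]], n)
x_obs = (0.5 + u**2) * xy[:, 0]                   # assumed distortion psi(u) > 0
y_obs = np.exp(u) * xy[:, 1]                      # assumed distortion phi(u) > 0

print("naive corr:", np.corrcoef(x_obs, y_obs)[0, 1])   # distorted
bins = np.quantile(u, np.linspace(0, 1, 21))      # 20 equal-count bins
idx = np.digitize(u, bins[1:-1])
cors, wts = [], []
for b in range(20):
    m = idx == b
    if m.sum() > 10:                              # within a bin, psi and phi
        cors.append(np.corrcoef(x_obs[m], y_obs[m])[0, 1])  # are ~constant
        wts.append(m.sum())
print("adjusted corr:", np.average(cors, weights=wts))   # close to 0.6
```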

4.
Estimated associations between an outcome variable and misclassified covariates tend to be biased when methods of estimation that ignore the classification error are applied. Available methods to account for misclassification often require a validation sample (i.e. a gold standard). In practice, however, such a gold standard may be unavailable or impractical. We propose a Bayesian approach to adjust for misclassification in a binary covariate in the random-effects logistic model when a gold standard is not available. This Markov chain Monte Carlo (MCMC) approach uses two imperfect measures of a dichotomous exposure under the assumptions of conditional independence and non-differential misclassification. A simulated numerical example and a real clinical example are given to illustrate the proposed approach. Our results suggest that the estimated log odds of inpatient care and the corresponding standard deviation are much larger under our proposed method than under models ignoring misclassification. Ignoring misclassification produces downwardly biased estimates and underestimates uncertainty.

5.
Background: Many exposures in epidemiological studies have nonlinear effects, and the problem is to choose an appropriate functional relationship between such exposures and the outcome. One common approach is to investigate several parametric transformations of the covariate of interest and to select a posteriori the function that fits the data best. However, such an approach may result in an inflated Type I error. Methods: Through a simulation study, we generated data from Cox models with different transformations of a single continuous covariate. We investigated the Type I error rate and the power of the likelihood ratio test (LRT) corresponding to three different procedures that considered the same set of parametric dose-response functions. The first, unconditional approach did not involve any model selection, while the second, conditional approach was based on a posteriori selection of the parametric function. The proposed third approach was similar to the second except that it used a corrected critical value for the LRT to ensure a correct Type I error. Results: The Type I error rate of the second approach was two times higher than the nominal size. For simple monotone dose-response functions, the corrected test had power similar to the unconditional approach, while for non-monotone dose-response functions it had higher power. A real-life application focusing on the effect of body mass index on the risk of coronary heart disease death illustrates the advantage of the proposed approach. Conclusion: Our results confirm that selecting the functional form of the dose-response a posteriori inflates the Type I error. The corrected procedure, which can be applied in a wide range of situations, may provide a good trade-off between Type I error and power.  相似文献   
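A minimal sketch of the correction idea (an assumed simulation design, not the paper's exact setup): under the null, take the maximum LRT over the candidate transformations and use its simulated 95th percentile in place of the chi-square(1) cut-off:

```python
# Simulate the null distribution of the maximum LRT over candidate forms.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
funcs = [lambda v: v, np.log, np.sqrt, np.square]   # candidate dose-response forms

def max_lrt(n=200):
    x = rng.uniform(0.5, 2.0, n)                    # covariate with no true effect
    df = pd.DataFrame({"T": rng.exponential(1, n), "E": 1})
    stats = []
    for f in funcs:
        d = df.assign(fx=f(x))
        cph = CoxPHFitter().fit(d, duration_col="T", event_col="E")
        stats.append(cph.log_likelihood_ratio_test().test_statistic)
    return max(stats)

null = np.array([max_lrt() for _ in range(300)])
print("chi-square(1) 95% critical value: 3.84")
print("corrected critical value: %.2f" % np.quantile(null, 0.95))  # larger
```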

6.
吴浩, 彭非. 《统计研究》(Statistical Research), 2020, 37(4): 114-128
The propensity score is an important tool for estimating average treatment effects. In observational studies, however, imbalance between the covariate distributions of the treatment and control groups often produces extreme propensity scores, i.e. scores very close to 0 or 1. This brings the strong ignorability assumption of causal inference close to violation and, in turn, leads to large bias and variance in the estimated average treatment effect. Li et al. (2018a) proposed covariate balancing weighting, which, under the unconfoundedness assumption, achieves a weighted balance of the covariate distributions and thereby removes the impact of extreme propensity scores. Building on this work, we propose a robust and efficient estimator based on covariate balancing weights, and we improve its robustness in empirical applications by introducing the super learner algorithm. We further extend this method to a robust and efficient covariate-balancing-weighted estimator that, in theory, does not depend on the assumptions of either the outcome regression model or the propensity score model. Monte Carlo simulations show that both proposed methods retain very small bias and variance even when the outcome regression model and the propensity score model are both misspecified. In the empirical analysis, we apply the two methods to right heart catheterization data and find that right heart catheterization increases patient mortality by about 6.3%.  相似文献   

7.
The study of a linear regression model with an interval-censored covariate, motivated by an acquired immunodeficiency syndrome (AIDS) clinical trial, was first proposed by Gómez et al., who developed a likelihood approach, together with a two-step conditional algorithm, to estimate the regression coefficients in the model. However, their method is inapplicable when the interval-censored covariate is continuous. In this article, we propose a novel and fast method for handling a continuous interval-censored covariate. Using logspline density estimation, we impute the interval-censored covariate with a conditional expectation. Then, the ordinary least-squares method is applied to the linear regression model with the imputed covariate. To assess the performance of the proposed method, we compare our imputation with midpoint imputation and the semiparametric hierarchical method via simulations. Furthermore, an application to the AIDS clinical trial is presented.
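A sketch of the conditional-expectation imputation, using a normal density fitted to the interval midpoints in place of the paper's logspline estimate; all data-generating choices are assumptions:

```python
# Impute an interval-censored covariate by E[X | L <= X <= R], then run OLS.
import numpy as np

rng = np.random.default_rng(6)
n = 1000
x = rng.normal(10, 2, n)                       # true covariate, never observed
width = rng.uniform(1, 4, n)
lo = x - rng.uniform(0, 1, n) * width          # observed censoring interval
hi = lo + width
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)        # linear model, true slope 0.5

mid = (lo + hi) / 2
mu, sd = mid.mean(), mid.std(ddof=1)           # crude density fit (not logspline)
grid = np.linspace(lo.min() - 2, hi.max() + 2, 2000)
dens = np.exp(-0.5 * ((grid - mu) / sd) ** 2)

def cond_mean(l, h):                           # E[X | l <= X <= h] under dens
    m = (grid >= l) & (grid <= h)
    return np.trapz(grid[m] * dens[m], grid[m]) / np.trapz(dens[m], grid[m])

x_imp = np.array([cond_mean(l, h) for l, h in zip(lo, hi)])
for lab, xv in [("midpoint", mid), ("cond. expectation", x_imp)]:
    print("%-18s slope: %.3f" % (lab, np.polyfit(xv, y, 1)[0]))
```

Midpoint imputation behaves like a classical measurement error and attenuates the slope, while the conditional-expectation imputation corrects most of that attenuation when the density is fitted reasonably well.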

8.
Investigators often gather longitudinal data to assess changes in responses over time within subjects and to relate these changes to within-subject changes in predictors. Missing data are common in such studies, and predictors can be correlated with subject-specific effects. Maximum likelihood methods for generalized linear mixed models provide consistent estimates when the data are 'missing at random' (MAR) but can produce inconsistent estimates in settings where the random effects are correlated with one of the predictors. On the other hand, conditional maximum likelihood methods (and closely related maximum likelihood methods that partition covariates into between- and within-cluster components) provide consistent estimation when random effects are correlated with predictors but can produce inconsistent covariate effect estimates when data are MAR. Using theory, simulation studies, and fits to example data, this paper shows that decomposition methods using complete covariate information produce consistent estimates. In some practical cases these methods, which ostensibly require complete covariate information, actually involve only the observed covariates. These results offer an easy-to-use approach to simultaneously protect against bias from both cluster-level confounding and MAR missingness in assessments of change.
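A minimal sketch of the between/within covariate decomposition (an assumed data-generating model, without the paper's missing-data component):

```python
# Decompose x into a cluster mean and a within-cluster deviation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
m, ni = 200, 5                                   # clusters, obs per cluster
cid = np.repeat(np.arange(m), ni)
b = rng.normal(0, 1, m)                          # random intercepts ...
xb = 0.8 * b + rng.normal(0, 1, m)               # ... correlated with cluster mean
x = xb[cid] + rng.normal(0, 1, m * ni)
y = 1.0 * x + b[cid] + rng.normal(0, 1, m * ni)  # true within effect = 1

d = pd.DataFrame({"y": y, "x": x, "cid": cid})
d["x_btw"] = d.groupby("cid")["x"].transform("mean")
d["x_wth"] = d["x"] - d["x_btw"]

naive = smf.mixedlm("y ~ x", d, groups=d["cid"]).fit()
decomp = smf.mixedlm("y ~ x_btw + x_wth", d, groups=d["cid"]).fit()
print("naive x:  %.3f" % naive.params["x"])      # biased by cluster confounding
print("within x: %.3f" % decomp.params["x_wth"]) # consistent, near 1.0
```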

9.
In this paper, the asymptotic relative efficiency (ARE) of Wald tests for the Tweedie class of models with log-linear mean is considered when the auxiliary variable is measured with error. Wald test statistics based on the naive maximum likelihood estimator and on a consistent estimator obtained using Nakamura's (1990) corrected score function approach are defined. As shown analytically, the Wald statistics based on the naive and corrected score function estimators are asymptotically equivalent in terms of ARE. On the other hand, the asymptotic relative efficiency of the naive and corrected Wald statistics with respect to the Wald statistic based on the true covariate equals the square of the correlation between the unobserved and the observed covariate. A small-scale Monte Carlo study and an example illustrate the small-sample situation.
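A Monte Carlo check of the ARE claim under an assumed simulation design: the efficiency of the naive Wald test relative to the true-covariate Wald test should approach corr(W, X)^2:

```python
# Compare mean squared Wald statistics under a small true effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n, beta, s2u, reps = 500, 0.15, 1.0, 400
z2_true, z2_naive = [], []
for _ in range(reps):
    x = rng.normal(0, 1, n)
    w = x + rng.normal(0, np.sqrt(s2u), n)       # error-prone surrogate
    y = rng.poisson(np.exp(0.5 + beta * x))      # log-linear Poisson mean
    for obs, store in [(x, z2_true), (w, z2_naive)]:
        fit = sm.GLM(y, sm.add_constant(obs),
                     family=sm.families.Poisson()).fit()
        store.append((fit.params[1] / fit.bse[1]) ** 2)

rho2 = 1.0 / (1.0 + s2u)                         # corr(W, X)^2 when var(X) = 1
# E[z^2] = 1 + noncentrality, so the ratio below approximates the ARE
are = (np.mean(z2_naive) - 1) / (np.mean(z2_true) - 1)
print("corr(W,X)^2 = %.3f, simulated ARE = %.3f" % (rho2, are))
```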

10.
Covariate adjustment for the estimation of treatment effects in randomized controlled trials (RCTs) is a simple approach with a long history, so its pros and cons have been well investigated and published in the literature. It is worthwhile to revisit this topic since there has recently been significant investigation and development of model assumptions and robustness to model mis-specification, in particular regarding the Neyman-Rubin model and the average treatment effect estimand. This paper discusses key results of this investigation and development and their practical implications for pharmaceutical statistics. Accordingly, we recommend that appropriate covariate adjustment be more widely used in RCTs for both hypothesis testing and estimation.
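A small illustration (assumed data-generating values) of the basic gain: with a randomized binary treatment, adjusting for a prognostic baseline covariate leaves the estimand unchanged but shrinks the standard error:

```python
# Unadjusted vs ANCOVA-adjusted treatment effect in a simulated RCT.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 400
a = rng.binomial(1, 0.5, n)                   # randomized treatment
x = rng.normal(0, 1, n)                       # prognostic baseline covariate
y = 0.5 * a + 1.0 * x + rng.normal(0, 1, n)   # true treatment effect = 0.5

unadj = sm.OLS(y, sm.add_constant(a)).fit()
adj = sm.OLS(y, sm.add_constant(np.column_stack([a, x]))).fit()
print("unadjusted: %.3f (SE %.3f)" % (unadj.params[1], unadj.bse[1]))
print("adjusted:   %.3f (SE %.3f)" % (adj.params[1], adj.bse[1]))
```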

11.
Proportional intensity models are widely used for describing the relationship between the intensity of a counting process and associated covariates. A basic assumption in this model is proportionality: each covariate has a multiplicative effect on the intensity. We present and study tests of this assumption based on a score process that is equivalent to cumulative sums of the Schoenfeld residuals. Tests with power, in principle, against any type of departure from proportionality can be constructed from this score process. Among the tests studied, an Anderson-Darling-type test turns out to be particularly useful, having good power properties against general alternatives. A simulation study comparing various tests of proportionality indicates that this test is a good choice for an omnibus test of proportionality.
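A sketch of the score process itself (assumed setup): cumulative sums of Schoenfeld residuals over ordered event times, from which sup- or Anderson-Darling-type functionals can be formed. The paper's critical values come from its asymptotic theory, which this sketch omits:

```python
# Cumulative Schoenfeld residual process for a single covariate.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(10)
n = 800
z = rng.binomial(1, 0.5, n).astype(float)
t = rng.exponential(1 / np.exp(0.7 * z))      # proportional hazards hold here
c = rng.uniform(0, 3, n)
df = pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int), "z": z})
df = df.sort_values("T").reset_index(drop=True)   # event-time order

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
res = cph.compute_residuals(df, kind="schoenfeld")
score = res["z"].cumsum().to_numpy()          # score process over event times
d = df["E"].sum()
sup_stat = np.abs(score).max() / np.sqrt(d)   # Kolmogorov-type functional
ad_stat = np.mean(score**2) / d               # Anderson-Darling-type functional
print("sup statistic: %.3f, AD-type statistic: %.3f" % (sup_stat, ad_stat))
# Under proportionality the process fluctuates around zero; a drift
# signals a time-varying covariate effect.
```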

12.
Current methods for testing the equality of conditional correlations of bivariate data given a third variable of interest (a covariate) are limited because they discretize the covariate when it is continuous. In this study, we propose a linear-model approach for estimation and hypothesis testing of the Pearson correlation coefficient, in which the correlation itself can be modeled as a function of continuous covariates. The restricted maximum likelihood method is applied for parameter estimation, and the corrected likelihood ratio test is performed for hypothesis testing. This approach allows flexible and robust inference and prediction of the conditional correlations based on the linear model. Simulation studies show that the proposed method is statistically more powerful and more flexible in accommodating complex covariate patterns than existing methods. In addition, we illustrate the approach by analyzing the correlation between the physical component summary and the mental component summary of the MOS SF-36 form across a fair number of covariates in national survey data.
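A minimal sketch of modelling the correlation itself, assuming a Fisher-z (tanh) link and standardized margins; the paper uses REML and a corrected LRT, which this sketch replaces with plain maximum likelihood and a standard LRT:

```python
# Bivariate normal likelihood with rho = tanh(g0 + g1 * z).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(11)
n = 1500
zc = rng.uniform(-1, 1, n)                       # covariate driving correlation
rho = np.tanh(0.3 + 0.8 * zc)                    # true correlation model
x = rng.normal(0, 1, n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(0, 1, n)

def negloglik(g, with_cov=True):
    r = np.tanh(g[0] + (g[1] * zc if with_cov else 0.0))
    q = (x**2 - 2 * r * x * y + y**2) / (1 - r**2)
    return 0.5 * np.sum(np.log(1 - r**2) + q)    # constants dropped

full = minimize(negloglik, [0.0, 0.0])
null = minimize(lambda g: negloglik(g, with_cov=False), [0.0, 0.0])
lrt = 2 * (null.fun - full.fun)
print("gamma estimates:", np.round(full.x, 3))   # near (0.3, 0.8)
print("LRT p-value: %.4g" % chi2.sf(lrt, df=1))
```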

13.
In this paper, we examine the performance of Anderson's classification statistic with covariate adjustment in comparison with the usual Anderson's classification statistic without covariate adjustment in a two-population normal covariate classification problem. The same problem has been investigated by several authors using different methods of comparison; see the bibliography. The aim of this paper is to give a direct comparison based on the asymptotic probabilities of misclassification. It is shown that, for large and equal training-sample sizes from each population, Anderson's classification statistic with covariate adjustment and cut-off point equal to zero performs better.

14.
This article deals with parameter estimation in the Cox proportional hazards model when covariates are measured with error. We consider both the classical additive measurement error model and a more general model which represents the mis-measured version of the covariate as an arbitrary linear function of the true covariate plus random noise. Only moment conditions are imposed on the distributions of the covariates and measurement error. Under the assumption that the covariates are measured precisely for a validation set, we develop a class of estimating equations for the vector-valued regression parameter by correcting the partial likelihood score function. The resultant estimators are proven to be consistent and asymptotically normal with easily estimated variances. Furthermore, a corrected version of the Breslow estimator for the cumulative hazard function is developed, which is shown to be uniformly consistent and, upon proper normalization, converges weakly to a zero-mean Gaussian process. Simulation studies indicate that the asymptotic approximations work well for practical sample sizes. The situation in which replicate measurements (instead of a validation set) are available is also studied.

15.
In the evaluation of the efficacy of a vaccine to protect against disease caused by finitely many diverse infectious pathogens, it is often important to assess whether vaccine protection depends on variations of the exposing pathogen. This problem can be formulated under a competing risks model where the endpoint event is the infection and the cause of failure is the infecting strain type determined after the infection is diagnosed. The strain-specific vaccine efficacy is defined as one minus the cause-specific hazard ratio (vaccine/placebo). This paper develops simple procedures for testing whether the vaccine affords protection against various strains and whether and how the strain-specific vaccine efficacy depends on the type of exposing strain, adjusting for covariate effects. The Cox proportional hazards model is used to relate the cause-specific outcomes to explanatory variables. The finite-sample properties of the proposed tests are studied through simulations and are shown to have good performance. The tests developed are applied to data collected from an oral cholera vaccine trial.
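A sketch of strain-specific vaccine efficacy under an assumed two-strain setup: fit one cause-specific Cox model per strain, treating infections by the other strain as censoring, so that VE_j = 1 - exp(beta_vaccine,j):

```python
# Cause-specific Cox models for two competing strains.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(12)
n = 4000
v = rng.binomial(1, 0.5, n).astype(float)        # vaccine vs placebo
age = rng.normal(0, 1, n)                        # adjustment covariate
h1 = 0.02 * np.exp(-1.2 * v + 0.2 * age)         # strain 1: VE ~ 70%
h2 = 0.02 * np.exp(-0.1 * v + 0.2 * age)         # strain 2: VE ~ 10%
t1, t2 = rng.exponential(1 / h1), rng.exponential(1 / h2)
c = np.full(n, 24.0)                             # administrative censoring
t = np.minimum.reduce([t1, t2, c])
cause = np.select([t == t1, t == t2], [1, 2], default=0)
df = pd.DataFrame({"T": t, "v": v, "age": age, "cause": cause})

for j in (1, 2):
    d = df.assign(E=(df["cause"] == j).astype(int)).drop(columns="cause")
    cph = CoxPHFitter().fit(d, duration_col="T", event_col="E")
    ve = 1 - np.exp(cph.params_["v"])
    print("strain %d: estimated VE = %.1f%%" % (j, 100 * ve))
```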

16.
The standard error of the covariate-adjusted mean difference is not always smaller than that of the unadjusted mean difference, despite the fact that adding a covariate is commonly believed to reduce the unexplained error variance. The covariate mean difference between the contrasted treatment conditions can inflate the standard error of the adjusted mean difference. If the covariate is viewed as randomly varying from one study to another, a minimum sample size can be found to attain a desired probability of reducing the standard error and the confidence interval width for the adjusted mean difference.
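A numeric illustration (all values assumed) of the inflation term: the variance of the covariate-adjusted mean difference contains (xbar1 - xbar2)^2 / SSx, so covariate imbalance can make the adjusted SE exceed the unadjusted one, especially when the covariate is weakly prognostic:

```python
# Compare SEs of unadjusted vs ANCOVA-adjusted mean differences.
import numpy as np

s2_y, rho = 1.0, 0.3                  # outcome variance, outcome-covariate corr
n1 = n2 = 20
s2_e = s2_y * (1 - rho**2)            # residual variance after adjustment
ssx = (n1 + n2 - 2) * 1.0             # pooled within-group SS of the covariate
for dx in (0.0, 0.5, 1.0):            # covariate mean difference between arms
    var_unadj = s2_y * (1 / n1 + 1 / n2)
    var_adj = s2_e * (1 / n1 + 1 / n2 + dx**2 / ssx)
    print("dx = %.1f: SE unadj = %.3f, SE adj = %.3f"
          % (dx, np.sqrt(var_unadj), np.sqrt(var_adj)))
```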

17.
This article proposes a simulation-based density estimation technique for time series that exploits information found in covariate data. The method can be paired with a large range of parametric models used in time series estimation. We derive asymptotic properties of the estimator and illustrate attractive finite sample properties for a range of well-known econometric and financial applications.

18.
The proportional hazards assumption of the Cox model sometimes does not hold in practice. An example is a treatment effect that decreases with time. We study a general multiplicative intensity model allowing the influence of each covariate to vary non-parametrically with time. An efficient estimation procedure for the cumulative parameter functions is developed, and its properties are studied using the martingale structure of the problem. Furthermore, we introduce a partly parametric version of the general non-parametric model in which the influence of some covariates varies with time while the effects of the remaining covariates are constant. This semiparametric model has not been studied in detail before. An efficient procedure for estimating both the parametric and the non-parametric components of this model is developed. Again, the martingale structure of the model allows us to describe the asymptotic properties of the suggested estimators. The approach is applied to two different data sets, and a Monte Carlo simulation is presented.
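The paper's model is multiplicative; as a runnable stand-in, the sketch below fits Aalen's additive hazards model, a different but related model that likewise returns cumulative time-varying covariate effects B_j(t), whose slopes reveal how an effect changes over time. All simulation values are assumptions:

```python
# Cumulative time-varying effects via Aalen's additive model.
import numpy as np
import pandas as pd
from lifelines import AalenAdditiveFitter

rng = np.random.default_rng(13)
n = 1500
z = rng.binomial(1, 0.5, n).astype(float)
# treatment lowers the hazard early on; the effect fades after t = 2
ee = rng.exponential(1.0, n)
t0 = ee / 0.5                                  # hazard 0.5 when z = 0
t1 = np.where(ee < 0.4, ee / 0.2, 2 + (ee - 0.4) / 0.5)   # 0.2 then 0.5
t = np.where(z == 1, t1, t0)
c = rng.uniform(0, 6, n)
df = pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int), "z": z})

aaf = AalenAdditiveFitter(coef_penalizer=0.5).fit(
    df, duration_col="T", event_col="E")
B = aaf.cumulative_hazards_                    # cumulative effects B_j(t)
print(B.loc[B.index <= 4, "z"].iloc[::40])     # slope flattens after t ~ 2
```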

19.
Starting from an applied bone marrow transplantation (BMT) study, the problem of "unexpected protectivity" in competing risks models is introduced, which occurs when a covariate shows a protective impact that is not expected from a medical perspective. Current explanations in the statistical literature suggest that unexpected protectivity might be due to a lack of independence between the competing failures. Indeed, in the presence of dependence, the Kaplan-Meier curves are not interpretable, whereas the cumulative incidence curves remain interpretable and therefore seem a candidate for solving the problem. We discuss the particular nature of dependence in a competing risks framework and illustrate how this dependence may be created via a common frailty factor. A Monte Carlo experiment is set up which also accounts for the association between the observable covariates and the frailty factor. The aim of the experiment is to understand whether and how the bias shown by the estimates could be related to the omitted frailty variable. The results show that dependence alone does not cause false protectivity, and that the cumulative incidence curves suffer the same bias as the survival curves and therefore do not seem to be a solution to false protectivity. Conversely, false protectivity may occur depending on the magnitude and the sign of the dependence between the frailty factor and the covariate. The paper ends with some suggestions for empirical research.
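A compact version of the paper's Monte Carlo idea (all parameter values assumed): a shared gamma frailty links two competing causes, and a negative association between the frailty and the covariate makes a truly harmful covariate look protective in a naive cause-specific analysis:

```python
# False protectivity from an omitted frailty correlated with the covariate.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(14)
n = 10000
x = rng.binomial(1, 0.5, n).astype(float)
z = rng.gamma(2.0, np.where(x == 1, 0.5, 1.0))          # frailty lower when x = 1
t1 = rng.exponential(1 / (0.05 * z * np.exp(0.3 * x)))  # cause 1: x truly harmful
t2 = rng.exponential(1 / (0.10 * z))                    # dependent competing cause
t = np.minimum(t1, t2)
df = pd.DataFrame({"T": np.minimum(t, 20.0),
                   "E": ((t1 <= t2) & (t <= 20.0)).astype(int),
                   "x": x})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print("cause-1 log-HR for x: %.3f" % cph.params_["x"])  # negative despite +0.3
```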

20.
A. Galbete & J. A. Moler. Statistics, 2016, 50(2): 418-434
In a randomized clinical trial, response-adaptive randomization procedures use the information gathered, including the previous patients' responses, to allocate the next patient. In this setting, we consider randomization-based inference. We provide an algorithm to obtain exact p-values for statistical tests that compare two treatments with dichotomous responses. This algorithm can be applied to a family of response-adaptive randomization procedures that share the following property: the distribution of the allocation rule depends only on the imbalance between treatments and on the imbalance between successes for treatments 1 and 2 at the previous step. This family includes several well-known response-adaptive randomization procedures. We study a randomization test for the null hypothesis of equivalence of treatments and show that this test performs similarly to its parametric counterpart. In addition, we study the effect of a covariate on the inferential process. First, we obtain a parametric test, constructed under a logit model relating responses to treatments and covariate levels, and we give conditions that guarantee its asymptotic normality. Finally, we show that the randomization test, which is free of model specification, performs as well as the parametric test that takes the covariate into account.  相似文献   
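A Monte Carlo randomization test in the spirit of the paper (the paper computes exact p-values via its algorithm; this sketch approximates them by regenerating the allocation sequence, here under an assumed randomized play-the-winner rule):

```python
# Randomization test for a response-adaptive design.
import numpy as np

rng = np.random.default_rng(15)

def rpw_allocate(responses, rng):
    """Allocate patients by a randomized play-the-winner urn."""
    urn = [1.0, 1.0]                       # initial balls for treatments 0, 1
    alloc = np.empty(len(responses), dtype=int)
    for i, y in enumerate(responses):
        a = int(rng.random() < urn[1] / sum(urn))
        alloc[i] = a
        urn[a if y == 1 else 1 - a] += 1   # success reinforces the same arm
    return alloc

def stat(alloc, responses):
    if alloc.min() == alloc.max():         # degenerate: one arm unused
        return 0.0
    return responses[alloc == 1].mean() - responses[alloc == 0].mean()

# Under H0 a patient's response does not depend on the assigned treatment,
# so the observed responses stay fixed while the allocation is redrawn.
y_obs = rng.binomial(1, 0.6, 60)           # hypothetical trial outcomes
a_obs = rpw_allocate(y_obs, rng)
t_obs = stat(a_obs, y_obs)
t_null = np.array([stat(rpw_allocate(y_obs, rng), y_obs)
                   for _ in range(5000)])
p_val = np.mean(np.abs(t_null) >= abs(t_obs))
print("observed diff = %.3f, randomization p-value = %.3f" % (t_obs, p_val))
```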
