Similar Literature
1.
Biao Zhang. Statistics. 2016;50(5):1173–1194
Missing covariate data arise often in regression analysis. We study methods for estimating the regression coefficients in an assumed conditional mean function when some covariates are completely observed but others are missing for some subjects. We adopt the semiparametric perspective of Robins et al. [Estimation of regression coefficients when some regressors are not always observed. J Amer Statist Assoc. 1994;89:846–866] on regression analyses with missing covariates, in which they pioneered the use of two working models: the working propensity score model and the working conditional score model. A recent approach to missing covariate data analysis is the empirical likelihood method of Qin et al. [Empirical likelihood in missing data problems. J Amer Statist Assoc. 2009;104:1492–1503], which effectively combines unbiased estimating equations. In this paper, we consider an alternative likelihood approach based on the full likelihood of the observed data. This full likelihood-based method enables us to generate estimators for the vector of regression coefficients that are (a) asymptotically equivalent to those of Qin et al. when the working propensity score model is correctly specified, and (b) doubly robust, like the augmented inverse probability weighting (AIPW) estimators of Robins et al. Thus, the proposed full likelihood-based estimators improve on the efficiency of the AIPW estimators when the working propensity score model is correct but the working conditional score model is possibly incorrect, and also improve on the empirical likelihood estimators of Qin, Zhang and Leung when the reverse is true, that is, when the working conditional score model is correct but the working propensity score model is possibly incorrect. In addition, we consider a regression method for estimating the regression coefficients when the working conditional score model is correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. Finally, we compare the finite-sample performance of various estimators in a simulation study.
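The double robustness referenced above can be illustrated in its simplest form: estimating a population mean when the outcome is missing at random. The sketch below is a hypothetical simplification, far removed from the paper's regression setting; the logistic propensity and the deliberately misspecified outcome model are illustrative assumptions, not the authors' construction. It shows the AIPW estimate staying near the truth even though the outcome working model is wrong:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)       # true mean of y is 2.0
pi = 1 / (1 + np.exp(-(0.5 + x)))            # true propensity of observing y
r = rng.binomial(1, pi)                      # missingness indicator

# Working models: propensity correct, outcome model deliberately wrong
pi_correct = pi
m_wrong = np.zeros(n)                        # misspecified outcome model

# AIPW estimator: consistent if either working model is correct
y_obs = np.where(r == 1, y, 0.0)
mu_aipw = np.mean(r * y_obs / pi_correct + (1 - r / pi_correct) * m_wrong)
print(mu_aipw)                               # close to the true mean 2.0
```

Swapping a correct outcome model for a wrong propensity model would likewise leave the estimator consistent; that symmetry is the double robustness discussed in the abstract.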

2.
This article deals with parameter estimation in the Cox proportional hazards model when covariates are measured with error. We consider both the classical additive measurement error model and a more general model which represents the mis-measured version of the covariate as an arbitrary linear function of the true covariate plus random noise. Only moment conditions are imposed on the distributions of the covariates and measurement error. Under the assumption that the covariates are measured precisely for a validation set, we develop a class of estimating equations for the vector-valued regression parameter by correcting the partial likelihood score function. The resultant estimators are proven to be consistent and asymptotically normal with easily estimated variances. Furthermore, a corrected version of the Breslow estimator for the cumulative hazard function is developed, which is shown to be uniformly consistent and, upon proper normalization, converges weakly to a zero-mean Gaussian process. Simulation studies indicate that the asymptotic approximations work well for practical sample sizes. The situation in which replicate measurements (instead of a validation set) are available is also studied.

3.
Recurrent event data arise commonly in medical and public health studies. The analysis of such data has received extensive research attention and various methods have been developed in the literature. Depending on the focus of scientific interest, the methods may be broadly classified as intensity-based counting process methods, mean function-based estimating equation methods, and the analysis of times to events or times between events. These methods and models cover a wide variety of practical applications. However, there is a critical assumption underlying those methods: variables need to be correctly measured. Unfortunately, this assumption is frequently violated in practice. It is quite common that some covariates are subject to measurement error. It is well known that covariate measurement error can substantially distort inference results if it is not properly taken into account. In the literature, there has been extensive research concerning measurement error problems in various settings; with recurrent events, however, there has been little discussion of this topic. The objective of this paper is to address this important issue. We develop inferential methods which account for measurement error in covariates for models with multiplicative intensity functions or rate functions. Both likelihood-based inference and robust inference based on estimating equations are discussed. The Canadian Journal of Statistics 40: 530–549; 2012 © 2012 Statistical Society of Canada

4.
Estimating equations which are not necessarily likelihood-based score equations are becoming increasingly popular for estimating regression model parameters. This paper is concerned with estimation based on general estimating equations when true covariate data are missing for all the study subjects, but surrogate or mismeasured covariates are available instead. The method is motivated by the covariate measurement error problem in marginal or partly conditional regression of longitudinal data. We propose to base estimation on the expectation of the complete data estimating equation conditioned on available data. The regression parameters and other nuisance parameters are estimated simultaneously by solving the resulting estimating equations. The expected estimating equation (EEE) estimator is equal to the maximum likelihood estimator if the complete data scores are likelihood scores and conditioning is with respect to all the available data. A pseudo-EEE estimator, which requires less computation, is also investigated. Asymptotic distribution theory is derived. Small sample simulations are conducted when the error process is a first-order autoregressive (AR(1)) model. Regression calibration is extended to this setting and compared with the EEE approach. We demonstrate the methods on data from a longitudinal study of the relationship between childhood growth and adult obesity.
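Regression calibration, which the abstract extends and compares against the EEE approach, replaces the unobserved true covariate by its conditional expectation given the mismeasured one. A minimal sketch under a classical additive normal error model follows; note that the attenuation factor is computed from the simulated truth here purely for brevity, whereas in practice it would be estimated, e.g. from replicate measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
x = rng.normal(size=n)                       # unobserved true covariate
sigma_u2 = 0.5                               # measurement-error variance
w = x + rng.normal(scale=np.sqrt(sigma_u2), size=n)  # observed surrogate
y = 1.0 * x + rng.normal(size=n)             # true slope is 1.0

# Regression calibration: replace x by E[X | W] = mean + lambda * (W - mean)
lam = x.var() / (x.var() + sigma_u2)         # attenuation factor (from truth, for brevity)
x_hat = w.mean() + lam * (w - w.mean())
beta_rc = (x_hat @ y) / (x_hat @ x_hat)
print(beta_rc)                               # close to the true slope 1.0
```

Regressing y on the raw surrogate w instead would give roughly lam * 1.0, the familiar attenuation bias that calibration removes.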

5.
There has been extensive interest in inference methods for survival data when some covariates are subject to measurement error. It is known that standard inferential procedures produce biased estimation if measurement error is not taken into account. For the Cox proportional hazards model a number of methods have been proposed to correct the bias induced by measurement error, where attention centers on utilizing the partial likelihood function. It is also of interest to understand the impact of mismeasured covariates on estimation of the baseline hazard function. In this paper we employ a weakly parametric form for the baseline hazard function and propose simple unbiased estimating functions for parameter estimation. The proposed method is easy to implement and reveals the connection between the naive method ignoring measurement error and the corrected method with measurement error accounted for. Simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error in covariates. As an illustration we apply the proposed methods to analyze a data set arising from the Busselton Health Study [Knuiman, M.W., Cullen, K.J., Bulsara, M.K., Welborn, T.A., Hobbs, M.S.T., 1994. Mortality trends, 1965 to 1989, in Busselton, the site of repeated health surveys and interventions. Austral. J. Public Health 18, 129–135].

6.
In measurement error problems, two major consistent estimation methods are the conditional score and the corrected score. They are functional methods that require no parametric assumptions on the mismeasured covariates. The conditional score requires that a suitable sufficient statistic for the mismeasured covariate can be found, while the corrected score requires that the objective score function can be estimated without bias. These assumptions limit their ranges of application. The extensively corrected score proposed here is an extension of the corrected score. It yields consistent estimation in many cases where neither the conditional score nor the corrected score is feasible. We demonstrate its construction in generalized linear models and the Cox proportional hazards model, assess its performance through simulation studies, and illustrate its implementation with two real examples.
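In the classical linear model with additive error, the corrected-score idea reduces to a familiar moment correction on the naive normal equations. The toy sketch below assumes a known error variance (unlike the generalized-linear and Cox settings the abstract treats, where the correction is harder to construct), and shows both the attenuation of the naive estimate and its removal:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(size=n)                       # true covariate
sigma_u2 = 0.5                               # known measurement-error variance
w = x + rng.normal(scale=np.sqrt(sigma_u2), size=n)  # mismeasured covariate
y = 1.0 * x + rng.normal(size=n)             # true slope is 1.0

# Naive score uses w as if it were x: attenuated towards zero
beta_naive = (w @ y) / (w @ w)

# Corrected score: subtract the error-variance inflation from w'w
beta_corrected = (w @ y) / (w @ w - n * sigma_u2)
print(beta_naive, beta_corrected)            # ~0.67 (biased) vs ~1.0
```

The subtraction of n * sigma_u2 makes the estimating equation unbiased for the true-covariate score, which is exactly the property the corrected score demands in general models.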

7.
We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal-series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; that is, all tests are proposed within the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including the case where the measurement error distribution is estimated non-parametrically, as well as generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and the analysis of a nutrition data set.

8.
We consider estimating the mode of a response given an error-prone covariate. It is shown that ignoring measurement error typically leads to inconsistent inference for the conditional mode of the response given the true covariate, as well as misleading inference for regression coefficients in the conditional mode model. To account for measurement error, we first employ the Monte Carlo corrected score method (Novick & Stefanski, 2002) to obtain an unbiased score function, based on which the regression coefficients can be estimated consistently. To relax the normality assumption on the measurement error that this method requires, we propose another method in which deconvoluting kernels are used to construct an objective function that is maximized to obtain consistent estimators of the regression coefficients. Besides a rigorous investigation of the asymptotic properties of the new estimators, we study their finite-sample performance via extensive simulation experiments, and find that the proposed methods substantially outperform a naive inference method that ignores measurement error. The Canadian Journal of Statistics 47: 262–280; 2019 © 2019 Statistical Society of Canada

9.
Two different forms of Akaike's information criterion (AIC) are compared for selecting the smooth terms in penalized spline additive mixed models. The conditional AIC (cAIC) has been used traditionally as a criterion for both estimating penalty parameters and selecting covariates in smoothing, and is based on the conditional likelihood given the smooth mean and on the effective degrees of freedom for a model fit. By comparison, the marginal AIC (mAIC) is based on the marginal likelihood from the mixed-model formulation of penalized splines which has recently become popular for estimating smoothing parameters. To the best of the authors' knowledge, the use of mAIC for selecting covariates for smoothing in additive models is new. In the competing models considered for selection, covariates may have a nonlinear effect on the response, with the possibility of group-specific curves. Simulations are used to compare the performance of cAIC and mAIC in model selection settings that have correlated and hierarchical smooth terms. In moderately large samples, both formulations of AIC perform extremely well at detecting the function that generated the data. The mAIC does better for simple functions, whereas the cAIC is more sensitive to detecting a true model that has complex and hierarchical terms.

10.
We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators, and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement, and reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case where error-prone replicates are available on the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial where a genetic marker, c-myc expression level, is subject to measurement error.

11.
Covariate measurement error problems have been extensively studied in the context of right-censored data but less so for current status data. Motivated by the zebrafish basal cell carcinoma (BCC) study, where the occurrence time of BCC was only known to lie before or after a sacrifice time and where the covariate (Sonic hedgehog expression) was measured with error, the authors describe a semiparametric maximum likelihood method for analyzing current status data with mismeasured covariates under the proportional hazards model. They show that the estimator of the regression coefficient is asymptotically normal and efficient and that the profile likelihood ratio test is asymptotically Chi-squared. They also provide an easily implemented algorithm for computing the estimators. They evaluate their method through simulation studies, and illustrate it with a real data example. The Canadian Journal of Statistics 39: 73–88; 2011 © 2011 Statistical Society of Canada

12.
Nakamura (1990) introduced an approach to estimation in measurement error models based on a corrected score function, and claimed that the estimators obtained are consistent for functional models. Proof of the claim essentially assumed the existence of a corrected log-likelihood for which differentiation with respect to model parameters can be interchanged with conditional expectation taken with respect to the measurement error distributions, given the response variables and true covariates. This paper deals with simple yet practical models for which the above assumption is false, i.e. a corrected score function may exist for the model and yet not be obtainable by differentiating a corrected log-likelihood. Alternative regularity conditions with no reference to a log-likelihood are given, under which the corrected score functions yield consistent and asymptotically normal estimators. Application to functional comparative calibration yields interesting results.

13.
We consider statistical inference for unknown parameters in estimating equations (EEs) when some covariates have nonignorably missing values, a situation which is quite common in practice but has rarely been discussed in the literature. When an instrument, a fully observed covariate vector that helps identify parameters under nonignorable missingness, is available, the conditional distribution of the missing covariates given the other covariates can be estimated by the pseudolikelihood method of Zhao and Shao [(2015), 'Semiparametric pseudo likelihoods in generalised linear models with nonignorable missing data', Journal of the American Statistical Association, 110, 1577–1590] and used to construct unbiased EEs. These modified EEs then constitute a basis for valid inference by empirical likelihood. Our method is applicable to a wide range of EEs used in practice. It is semiparametric, since no parametric model for the propensity of missing covariate data is assumed. Asymptotic properties of the proposed estimator and the empirical likelihood ratio test statistic are derived. Some simulation results and a real data analysis are presented for illustration.

14.
Liang and Zeger (1986) introduced a class of estimating equations that gives consistent estimates of regression parameters and of their asymptotic variances in the class of generalized linear models for cluster correlated data. When the independent variables or covariates in such models are subject to measurement errors, the parameter estimates obtained from these estimating equations are no longer consistent. To correct for the effect of measurement errors, an estimator with smaller asymptotic bias is constructed along the lines of Stefanski (1985), assuming that the measurement error variance is either known or estimable. The asymptotic distribution of the bias-corrected estimator and a consistent estimator of its asymptotic variance are also given. The special case of a binary logistic regression model is studied in detail. For this case, methods based on conditional scores and quasilikelihood are also extended to cluster correlated data. Results of a small simulation study on the performance of the proposed estimators and associated tests of hypotheses are reported.

15.
In many biomedical studies, covariates are subject to measurement error. Although it is well known that regression coefficient estimators can be substantially biased if the measurement error is not accommodated, there has been little study of the effect of covariate measurement error on the estimation of the dependence between bivariate failure times. We show that the dependence parameter estimator in the Clayton–Oakes model can be considerably biased if the measurement error in the covariate is not accommodated. In contrast with the typical bias towards the null for marginal regression coefficients, the dependence parameter can be biased in either direction. We introduce a bias reduction technique for the bivariate survival function in copula models, assuming an additive measurement error model and replicated measurements for the covariates, and we study the large- and small-sample properties of the proposed dependence parameter estimator.

16.
In a study comparing the effects of two treatments, the propensity score is the probability of assignment to one treatment conditional on a subject's measured baseline covariates. Propensity-score matching is increasingly being used to estimate the effects of exposures using observational data. In the most common implementation of propensity-score matching, pairs of treated and untreated subjects are formed whose propensity scores differ by at most a pre-specified amount (the caliper width). There has been little research into the optimal caliper width. We conducted an extensive series of Monte Carlo simulations to determine the optimal caliper width for estimating differences in means (for continuous outcomes) and risk differences (for binary outcomes). When estimating differences in means or risk differences, we recommend that researchers match on the logit of the propensity score using calipers of width equal to 0.2 of the standard deviation of the logit of the propensity score. When at least some of the covariates were continuous, this value, or one close to it, minimized the mean square error of the resultant estimated treatment effect. It also eliminated at least 98% of the bias in the crude estimator, and it resulted in confidence intervals with approximately the correct coverage rates. Furthermore, the empirical type I error rate was approximately correct. When all of the covariates were binary, the choice of caliper width had a much smaller impact on the performance of estimation of risk differences and differences in means.
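The recommendation above, calipers of width 0.2 times the standard deviation of the logit of the propensity score, is straightforward to operationalize. Below is a greedy nearest-neighbour matching sketch on simulated, hypothetical propensity scores (the abstract does not prescribe greedy matching specifically; it is used here only as a simple illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000
x = rng.normal(size=n)
ps = 1 / (1 + np.exp(-x))              # hypothetical fitted propensity scores
treated = rng.binomial(1, ps).astype(bool)

logit_ps = np.log(ps / (1 - ps))
caliper = 0.2 * logit_ps.std()         # recommended caliper width

# Greedy 1:1 matching on the logit of the propensity score within the caliper
available = set(np.flatnonzero(~treated).tolist())
pairs = []
for t in np.flatnonzero(treated):
    cands = [c for c in available if abs(logit_ps[t] - logit_ps[c]) <= caliper]
    if cands:
        best = min(cands, key=lambda c: abs(logit_ps[t] - logit_ps[c]))
        pairs.append((t, best))
        available.remove(best)
print(len(pairs), "matched pairs within caliper", round(caliper, 3))
```

Treated subjects with no control inside the caliper are left unmatched, which is the mechanism by which a tight caliper trades bias reduction for sample size.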

17.
A method is proposed for estimating regression parameters from data containing covariate measurement errors by using Stein estimates of the unobserved true covariates. The method produces consistent estimates for the slope parameter in the classical linear errors-in-variables model and applies to a broad range of nonlinear regression problems, provided the measurement error is Gaussian with known variance. Simulations are used to examine the performance of the estimates in a nonlinear regression problem and to compare them with the usual naive ones obtained by ignoring error and with other estimates proposed recently in the literature.

18.
I review the use of auxiliary variables in capture-recapture models for estimation of demographic parameters (e.g. capture probability, population size, survival probability, and recruitment, emigration and immigration numbers). I focus on what has been done in current research and what still needs to be done. Typically in the literature, covariate modelling has made capture and survival probabilities functions of covariates, but there are good reasons to make other parameters functions of covariates as well. The types of covariates considered include environmental covariates that may vary by occasion but are constant over animals, and individual animal covariates that are usually assumed constant over time. I also discuss the difficulties of using time-dependent individual animal covariates and some possible solutions. Covariates are usually assumed to be measured without error, which may not be realistic. For closed populations, one approach to modelling heterogeneity in capture probabilities uses observable individual covariates and is thus related to the primary purpose of this paper. The now standard Huggins-Alho approach conditions on the captured animals and then uses a generalized Horvitz-Thompson estimator to estimate population size. This approach has the advantage of simplicity, in that one does not have to specify a distribution for the covariates, and the disadvantage that it does not use the full likelihood to estimate population size. Alternatively, one could specify a distribution for the covariates and implement a full likelihood approach to inference to estimate the capture function, the covariate probability distribution, and the population size. The general Jolly-Seber open model enables one to estimate capture probability, population sizes, survival rates, and birth numbers.
Much of the focus on modelling covariates in program MARK has been on survival and capture probability in the Cormack-Jolly-Seber model and its generalizations (including tag-return models). These models condition on the number of animals marked and released. A related, but distinct, topic is radio telemetry survival modelling, which typically uses a modified Kaplan-Meier method and a Cox proportional hazards model for auxiliary variables. Recently there has been an emphasis on integrating recruitment into the likelihood, and research on how to implement covariate modelling for recruitment, and perhaps population size, is needed. The combined open and closed 'robust' design model can also benefit from covariate modelling, and some important options have already been implemented in MARK. Many models are usually fitted to one data set. This has necessitated the development of model selection criteria based on the AIC (Akaike information criterion) and the alternative of averaging over reasonable models. The special problems of estimating over-dispersion when covariates are included in the model, and then adjusting for over-dispersion in model selection, could benefit from further research.

20.
We introduce the method of estimating functions to study the class of autoregressive conditional heteroscedasticity (ARCH) models. We derive the optimal estimating functions by combining linear and quadratic estimating functions. The resultant estimators are more efficient than the quasi-maximum likelihood estimator. If the assumption of conditional normality is imposed, the estimator obtained from the theory of estimating functions is identical in finite samples to that obtained by the maximum likelihood method. The relative efficiencies of the estimating function (EF) approach in comparison with the quasi-maximum likelihood estimator are derived. We illustrate the EF approach using a univariate GARCH(1,1) model with conditional normal, Student-t, and gamma distributions. The efficiency benefits of the EF approach relative to the quasi-maximum likelihood approach are substantial for the gamma distribution with large skewness. Simulation analysis shows that the finite-sample properties of the EF estimators are attractive: they tend to display less bias and smaller root mean squared error than the quasi-maximum likelihood estimator, with substantial efficiency gains for highly non-normal distributions. An example demonstrates that implementation of the method is straightforward.
