Similar Literature
20 similar documents found
1.
Binary dynamic fixed and mixed logit models are extensively studied in the literature. These models are developed to examine the effects of certain fixed covariates through a parametric regression function that forms part of the model. However, there are situations where one may wish to include additional covariates whose direct effect is not of interest. In this paper we propose a generalization of the existing binary dynamic logit (BDL) models to the semi-parametric longitudinal setup to accommodate such additional covariates. The regression function in this semi-parametric BDL model contains (i) a parametric linear regression function in some primary covariates, and (ii) a non-parametric function in certain secondary covariates. We use a simple semi-parametric conditional quasi-likelihood approach for consistent estimation of the non-parametric function, and a semi-parametric likelihood approach for the joint estimation of the main regression and dynamic dependence parameters of the model. The finite sample performance of the estimation approaches is examined through a simulation study. The asymptotic properties of the estimators are also discussed. The proposed model and estimation approaches are illustrated by reanalysing a longitudinal infectious disease data set.

2.
The use of the cumulative average model to investigate the association between disease incidence and repeated measurements of exposures in medical follow-up studies dates back to the 1960s (Kahn and Dawber, J Chron Dis 19:611–620, 1966). This model takes advantage of all prior data and thus should provide a statistically more powerful test of disease–exposure associations. Measurement error in covariates is common in medical follow-up studies, and many methods have been proposed to correct for it. To the best of our knowledge, however, no methods have yet been proposed to correct for measurement error in the cumulative average model. In this article, we propose a regression calibration approach to correct relative risk estimates for measurement error. The approach is illustrated with data from the Nurses' Health Study relating incident breast cancer between 1980 and 2002 to time-dependent measures of calorie-adjusted saturated fat intake, controlling for total caloric intake, alcohol intake, and baseline age.
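As a worked illustration of the regression calibration idea (not the authors' actual Nurses' Health Study analysis), the sketch below replaces an error-prone exposure by an estimate of E[X | W] obtained from replicate measurements and then fits the outcome model on the calibrated values; the logistic outcome model, the variable names, and all parameter values are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# True exposure X and two replicate error-prone measurements W1, W2
x = rng.normal(0.0, 1.0, n)
w1 = x + rng.normal(0.0, 0.8, n)
w2 = x + rng.normal(0.0, 0.8, n)
w_bar = (w1 + w2) / 2

# Binary outcome generated from the true exposure (illustrative logistic model)
p = 1 / (1 + np.exp(-(-1.0 + 0.7 * x)))
y = rng.binomial(1, p)

# Regression calibration: estimate E[X | W_bar] from the replicate structure
sigma2_u = np.var(w1 - w2, ddof=1) / 2        # measurement error variance
sigma2_w = np.var(w_bar, ddof=1)              # variance of the mean of replicates
sigma2_x = sigma2_w - sigma2_u / 2            # implied variance of the true exposure
lam = sigma2_x / sigma2_w                     # attenuation / calibration factor
x_calib = w_bar.mean() + lam * (w_bar - w_bar.mean())

# Naive fit (uses w_bar) versus calibrated fit (uses x_calib)
naive = sm.Logit(y, sm.add_constant(w_bar)).fit(disp=0)
calib = sm.Logit(y, sm.add_constant(x_calib)).fit(disp=0)
print("naive slope:     ", naive.params[1])
print("calibrated slope:", calib.params[1])
```

Under this simulation the naive slope is attenuated towards zero, while the calibrated slope should land close to the generating value of 0.7.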

3.
Econometric Reviews, 2012, 31(1): 92–109

This paper provides several new results on identification of the linear factor model. The model allows for correlated latent factors and dependence among the idiosyncratic errors. I also illustrate identification under a dedicated measurement structure and other reduced rank restrictions. I use these results to study identification in a model with both observed covariates and latent factors. The analysis emphasizes the different roles played by restrictions on the error covariance matrix, restrictions on the factor loadings and the factor covariance matrix, and restrictions on the coefficients on covariates. The identification results are simple, intuitive, and directly applicable to many settings.

4.
Two-sample comparison problems are often encountered in practical projects and have been widely studied in the literature. Owing to practical demands, research on this topic under special settings, such as a semiparametric framework, has also attracted great attention. Zhou and Liang (Biometrika 92:271–282, 2005) proposed an empirical likelihood-based semi-parametric inference for the comparison of treatment effects in a two-sample problem with censored data. However, their approach is actually a pseudo-empirical likelihood and may not be fully efficient. In this study, we develop a new empirical likelihood-based inference under a more general framework by using the hazard formulation of censored data for two-sample semi-parametric hybrid models. We demonstrate that our empirical likelihood statistic converges to a standard chi-squared distribution under the null hypothesis. We further illustrate the use of the proposed test by testing the ROC curve with censored data, among other applications. The numerical performance of the proposed method is also examined.

5.
We use the additive risk model of Aalen (1980) as a model for the rate of a counting process. Rather than specifying the intensity, that is, the instantaneous probability of an event conditional on the entire history of the relevant covariates and counting processes, we present a model for the rate function, i.e., the instantaneous probability of an event conditional on only a selected set of covariates. When the rate function for the counting process is of Aalen form, we show that the usual Aalen estimator can be used and gives almost unbiased estimates. The usual martingale-based variance estimator is incorrect, however, and an alternative estimator should be used. We also consider the semi-parametric version of the Aalen model as a rate model (McKeague and Sasieni, 1994) and show that standard errors computed under an intensity assumption are incorrect; we provide an alternative estimator. Finally, we introduce and implement a test statistic for the hypothesis of a time-constant effect in both the non-parametric and semi-parametric models. A small simulation study was performed to evaluate the performance of the new estimator of the standard error.
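Aalen's least-squares estimator referred to above can be computed by regressing, at each ordered event time, the event indicators on the covariates of the subjects still at risk and accumulating the increments. The numpy sketch below illustrates that cumulative regression step on simulated data with an additive hazard; it does not implement the corrected variance estimators discussed in the abstract, and all names and parameter values are illustrative.

```python
import numpy as np

def aalen_additive(T, E, X):
    """Cumulative regression functions B(t) for Aalen's additive model.

    T: event/censoring times, E: event indicators (1 = event),
    X: (n, p) covariate matrix; an intercept (baseline) column is added internally.
    Returns the ordered event times and the cumulative regression functions at them.
    """
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])       # design with baseline column
    times = np.sort(T[E == 1])
    B = np.zeros((len(times), p + 1))
    cum = np.zeros(p + 1)
    for k, t in enumerate(times):
        at_risk = T >= t
        Y = Xd[at_risk]                         # design restricted to the risk set
        dN = ((T == t) & (E == 1))[at_risk].astype(float)
        # least-squares increment: argmin || dN - Y b ||^2
        incr, *_ = np.linalg.lstsq(Y, dN, rcond=None)
        cum = cum + incr
        B[k] = cum
    return times, B

# Simulated example: additive hazard lambda(t | x) = 0.1 + 0.05 * x
rng = np.random.default_rng(1)
n = 500
x = rng.binomial(1, 0.5, n)
haz = 0.1 + 0.05 * x
T = rng.exponential(1 / haz)
C = rng.exponential(10.0, n)                    # independent censoring
E = (T <= C).astype(int)
T = np.minimum(T, C)

times, B = aalen_additive(T, E, x.reshape(-1, 1))
print("cumulative baseline and covariate effects at the last event time:", B[-1])
```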

6.
The aim of this study is to assess the biases of a Food Frequency Questionnaire (FFQ) by comparing total energy intake (TEI) with total energy expenditure (TEE) obtained from the doubly labelled water (DLW) biomarker, after adjusting for measurement error in DLW. We develop several Bayesian hierarchical measurement error models for DLW with different distributional assumptions on TEI to obtain precise bias estimates of TEI. Inference is carried out using MCMC simulation techniques in a fully Bayesian framework, and model comparisons are made via the mean squared predictive error (MSPE). Our results show that the joint model with random effects under the Gamma distribution is the best-fitting model in terms of the MSPE and residual diagnostics, and that under it the bias in TEI is not significant based on the 95% credible interval. The Canadian Journal of Statistics 38: 506–516; 2010 © 2010 Statistical Society of Canada
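The sketch below illustrates a Bayesian hierarchical measurement error model of this general type in PyMC, with a latent true energy expenditure measured with error by the DLW biomarker and an FFQ intake carrying a systematic reporting bias. For brevity it uses normal rather than Gamma components, so it is not the authors' best-fitting model; all priors, variable names, and simulated values are assumptions.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(2)
n = 100
tee_true = rng.normal(2500, 400, n)             # latent true energy expenditure (kcal)
dlw = tee_true + rng.normal(0, 150, n)          # DLW biomarker with measurement error
tei = tee_true - 200 + rng.normal(0, 450, n)    # FFQ intake, under-reported by ~200 kcal

with pm.Model() as model:
    mu = pm.Normal("mu", 2500, 1000)
    tau = pm.HalfNormal("tau", 500)
    tee = pm.Normal("tee", mu, tau, shape=n)    # latent TEE for each subject
    sigma_dlw = pm.HalfNormal("sigma_dlw", 300)
    pm.Normal("dlw_obs", tee, sigma_dlw, observed=dlw)
    bias = pm.Normal("bias", 0, 1000)           # systematic FFQ reporting bias
    sigma_ffq = pm.HalfNormal("sigma_ffq", 600)
    pm.Normal("tei_obs", tee + bias, sigma_ffq, observed=tei)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=2)

# Under this simulation the posterior mean of `bias` should be near -200
print(idata.posterior["bias"].mean().item())
```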

7.
This paper proposes a varying-coefficient single-index measurement error model, in which the index covariates are measured with error. We combine the simulation-extrapolation (SIMEX) technique, local linear regression, and the weighted least-squares method to estimate the unknown components of the model, and derive the asymptotic properties of the resulting estimators under some regularity conditions. A simulation study is conducted to evaluate the proposed methodology, and a real example is also analysed to illustrate it.
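A minimal sketch of the simulation-extrapolation (SIMEX) step mentioned above, applied for simplicity to a plain linear regression rather than the varying-coefficient single-index model: extra measurement error is added at increasing levels lambda, the naive estimate is recomputed, and a quadratic in lambda is extrapolated back to lambda = -1. The error variance is assumed known and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma_u = 1000, 0.6
x = rng.normal(0, 1, n)
w = x + rng.normal(0, sigma_u, n)              # error-prone covariate
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)

def naive_slope(w, y):
    # slope of the ordinary least-squares line of y on w
    return np.polyfit(w, y, 1)[0]

# Simulation step: inflate the measurement error variance by factor (1 + lambda)
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 200
means = []
for lam in lambdas:
    est = [naive_slope(w + np.sqrt(lam) * sigma_u * rng.normal(0, 1, n), y)
           for _ in range(B)]
    means.append(np.mean(est))

# Extrapolation step: fit a quadratic in lambda and evaluate it at lambda = -1
coefs = np.polyfit(lambdas, means, 2)
simex_slope = np.polyval(coefs, -1.0)
print("naive slope:", means[0], " SIMEX slope:", simex_slope, " (true value 2.0)")
```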

8.
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem, and Gibbs sampling is usually not possible. This forces the practitioner to use a Metropolis–Hastings step, which may suffer from unacceptable performance due to poor mixing and may require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree one, the full conditional distribution of the covariates measured with error is available explicitly as a mixture of doubly truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency compared with existing alternatives when using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating that the proposed method is a particularly valuable tool for challenging applications with high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
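The full conditional described above is a mixture of doubly truncated normals, which can be sampled directly. A sketch of one such Gibbs draw using scipy's truncnorm is given below; the mixture weights, means, variances, and truncation bounds are purely illustrative placeholders rather than the actual spline-model conditionals.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

def draw_mixture_truncnorm(weights, means, sds, lowers, uppers, rng):
    """One draw from a mixture of doubly truncated normal components.

    Component k is N(means[k], sds[k]^2) truncated to [lowers[k], uppers[k]];
    weights are the (already normalized) mixture probabilities.
    """
    k = rng.choice(len(weights), p=weights)
    a = (lowers[k] - means[k]) / sds[k]         # standardized truncation bounds
    b = (uppers[k] - means[k]) / sds[k]
    return truncnorm.rvs(a, b, loc=means[k], scale=sds[k], random_state=rng)

# Example full conditional with two components (all values illustrative)
weights = np.array([0.3, 0.7])
means   = np.array([-0.5, 1.2])
sds     = np.array([0.4, 0.6])
lowers  = np.array([-1.0, 0.0])
uppers  = np.array([0.0, 2.5])

draws = np.array([draw_mixture_truncnorm(weights, means, sds, lowers, uppers, rng)
                  for _ in range(5000)])
print(draws.mean(), draws.min(), draws.max())
```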

9.
We consider mixed effects models for longitudinal, repeated measures or clustered data. Unmeasured or omitted covariates in such models may be correlated with the included covariates and create model violations when not taken into account. Previous research and experience with longitudinal data sets suggest a general form of model which should be considered when omitted covariates are likely, such as in observational studies. We derive the marginal model between the response variable and the included covariates, and consider model fitting using the ordinary and weighted least squares methods, which require simple non-iterative computation and no assumptions on the distribution of random covariates or error terms. Asymptotic properties of the least squares estimators are also discussed. The results shed light on the structure of least squares estimators in mixed effects models, and provide large sample procedures for statistical inference and prediction based on the marginal model. We present an example of the relationship between fluid intake and output in very low birth weight infants, where the model is found to have the assumed structure.

10.
In many biomedical studies, covariates are subject to measurement error. Although it is well known that estimators of regression coefficients can be substantially biased if the measurement error is not accommodated, there has been little study of the effect of covariate measurement error on the estimation of the dependence between bivariate failure times. We show that the dependence parameter estimator in the Clayton–Oakes model can be considerably biased if the measurement error in the covariate is not accommodated. In contrast with the typical bias towards the null for marginal regression coefficients, the dependence parameter can be biased in either direction. We introduce a bias reduction technique for the bivariate survival function in copula models, assuming an additive measurement error model with replicated measurements of the covariates, and we study the large and small sample properties of the proposed dependence parameter estimator.
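For readers who want to experiment with the Clayton dependence structure discussed above, the sketch below generates bivariate failure times with a given Clayton dependence parameter by conditional inversion and checks the implied Kendall's tau. It illustrates only the data-generating side, not the proposed bias reduction technique; all rates and parameter values are illustrative.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(5)
theta = 2.0                                     # Clayton dependence parameter
n = 5000

# Conditional-inversion sampler for the Clayton copula:
# U2 = [ U1^{-theta} * (V^{-theta/(theta+1)} - 1) + 1 ]^{-1/theta}
u1 = rng.uniform(size=n)
v = rng.uniform(size=n)
u2 = ((v ** (-theta / (theta + 1)) - 1.0) * u1 ** (-theta) + 1.0) ** (-1.0 / theta)

# Turn the uniforms into exponential failure times with covariate-dependent rates
x = rng.binomial(1, 0.5, n)
rate1 = 0.5 * np.exp(0.3 * x)
rate2 = 0.8 * np.exp(0.3 * x)
t1 = -np.log(u1) / rate1
t2 = -np.log(u2) / rate2

# For the Clayton copula, Kendall's tau = theta / (theta + 2) = 0.5 here
tau, _ = kendalltau(t1, t2)
print("empirical Kendall's tau:", round(tau, 3))
```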

11.
A flexible Bayesian semiparametric accelerated failure time (AFT) model is proposed for analyzing arbitrarily censored survival data with covariates subject to measurement error. Specifically, the baseline error distribution in the AFT model is modeled nonparametrically as a Dirichlet process mixture of normals, and classical measurement error models are imposed for the covariates subject to measurement error. An efficient and easy-to-implement Gibbs sampler, based on the stick-breaking formulation of the Dirichlet process combined with retrospective and slice sampling techniques, is developed for posterior computation. An extensive simulation study illustrates the advantages of our approach.

12.
Semiparametric transformation models have been extensively investigated in the literature; however, they have rarely been applied to survival data with a cure fraction. In this article, we consider a class of semi-parametric transformation models in which an unknown transformation of the survival times with a cure fraction is assumed to be linearly related to the covariates, and the error distributions are parametrically specified as an extreme value distribution with unknown parameters. Estimators of the covariate coefficients are obtained from pseudo Z-estimator procedures that allow for censored observations. We show that the estimators are consistent and asymptotically normal. Bootstrap estimation of the variances of the estimators is also investigated.

13.
We propose a new model for conditional covariances based on predetermined idiosyncratic shocks as well as macroeconomic and own information instruments. The specification ensures positive definiteness by construction, is unique within the class of linear functions for our covariance decomposition, and yields a simple yet rich model of covariances. We introduce a property, invariance to variate order, that assures estimation is not impacted by a simple reordering of the variates in the system. Simulation results using realized covariances show smaller mean absolute errors (MAE) and root mean square errors (RMSE) for every element of the covariance matrix relative to a comparably specified BEKK model with own information instruments. We also find a smaller mean absolute percentage error (MAPE) and root mean square percentage error (RMSPE) for the entire covariance matrix. Supplementary materials for practitioners as well as all Matlab code used in the article are available online.

14.
A fully parametric first-order autoregressive (AR(1)) model is proposed to analyse binary longitudinal data. By using a discretized version of a copula, the modelling approach allows one to construct separate models for the marginal response and for the dependence between adjacent responses. In particular, the transition model considered here discretizes the Gaussian copula in such a way that the marginal is a Bernoulli distribution. A probit link is used to take concomitant information into account in the behaviour of the underlying marginal distribution. Fixed and time-varying covariates can be included in the model. The method is simple and is a natural extension of the AR(1) model for Gaussian series. Since the approach is likelihood-based, it allows interpretations and inferences that are not possible with semi-parametric approaches such as those based on generalized estimating equations. Data from a study designed to reduce children's exposure to the sun are used to illustrate the methods.
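The discretized Gaussian copula construction can be sketched directly: simulate a latent stationary Gaussian AR(1) series and threshold it so that the marginal success probability follows a probit model in the covariate. The short simulation below illustrates this; the correlation, regression coefficients, and series length are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
T_len = 200
rho = 0.6                                       # latent AR(1) correlation
beta0, beta1 = -0.3, 0.8
x = rng.normal(0, 1, T_len)                     # time-varying covariate

# Latent stationary Gaussian AR(1) process with unit marginal variance
z = np.empty(T_len)
z[0] = rng.normal()
for t in range(1, T_len):
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho ** 2) * rng.normal()

# Discretize: since Z_t ~ N(0, 1), P(Y_t = 1) = Phi(beta0 + beta1 * x_t)
y = (z < beta0 + beta1 * x).astype(int)

print("empirical marginal mean:", y.mean())
print("target marginal mean:   ", norm.cdf(beta0 + beta1 * x).mean())
print("lag-1 autocorrelation of Y:", np.corrcoef(y[:-1], y[1:])[0, 1])
```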

15.

We study methods for generating pseudo-random numbers under various special cases of the Cox model with time-dependent covariates, when the baseline hazard function may not be constant and the random variable may equal infinity with positive probability. In our simulation studies computing the partial likelihood estimates, with a moderate sample size the partial likelihood estimate of the regression coefficient is ∞ between 3% and 20% of the time for data from the Cox model. We propose a semi-parametric estimator as a modification for such cases. We present simulation results on the asymptotic properties of the semi-parametric estimator.
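A minimal sketch of the kind of data generation discussed above, for the simplest special case of a constant baseline hazard with a cure fraction so that the survival time equals infinity with positive probability; the inverse-transform step and all parameter values are illustrative assumptions, not the authors' general algorithm for time-dependent covariates.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
beta = 0.5
lam0 = 0.2                                      # constant baseline hazard (illustrative)
cure_prob = 0.3                                 # P(T = infinity): positive mass at infinity

x = rng.binomial(1, 0.5, n)
cured = rng.uniform(size=n) < cure_prob

# Inverse transform for the Cox model with constant baseline hazard:
# S(t | x) = exp(-lam0 * exp(beta * x) * t)  =>  T = -log(U) / (lam0 * exp(beta * x))
u = rng.uniform(size=n)
T = np.where(cured, np.inf, -np.log(u) / (lam0 * np.exp(beta * x)))

# Administrative censoring at tau turns the infinite times into censored observations
tau = 10.0
E = (T <= tau).astype(int)
T_obs = np.minimum(T, tau)
print("events:", E.sum(), " censored:", n - E.sum())
```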

16.
We consider functional measurement error models, i.e. models where covariates are measured with error and no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; that is, all tests are proposed within the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages similar to those of classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including settings where the measurement error distribution is estimated non-parametrically, as well as generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and the analysis of a nutrition data set.

17.
Estimating equations that are not necessarily likelihood-based score equations are becoming increasingly popular for estimating regression model parameters. This paper is concerned with estimation based on general estimating equations when true covariate data are missing for all study subjects, but surrogate or mismeasured covariates are available instead. The method is motivated by the covariate measurement error problem in marginal or partly conditional regression of longitudinal data. We propose to base estimation on the expectation of the complete-data estimating equation conditioned on the available data. The regression parameters and other nuisance parameters are estimated simultaneously by solving the resulting estimating equations. The expected estimating equation (EEE) estimator equals the maximum likelihood estimator if the complete-data scores are likelihood scores and conditioning is with respect to all available data. A pseudo-EEE estimator, which requires less computation, is also investigated. Asymptotic distribution theory is derived, and small-sample simulations are conducted when the error process follows a first-order autoregressive model. Regression calibration is extended to this setting and compared with the EEE approach. We demonstrate the methods on data from a longitudinal study of the relationship between childhood growth and adult obesity.

18.
Nested error linear regression models using survey weights have been studied in small area estimation to obtain efficient model-based and design-consistent estimators of small area means. The covariates in these nested error linear regression models are not subject to measurement errors. In practical applications, however, there are many situations in which the covariates are subject to measurement errors. In this paper, we develop a nested error linear regression model with an area-level covariate subject to functional measurement error. In particular, we propose a pseudo-empirical Bayes (PEB) predictor to estimate small area means. This predictor borrows strength across areas through the model and makes use of the survey weights to preserve design consistency as the area sample size increases. We also employ a jackknife method to estimate the mean squared prediction error (MSPE) of the PEB predictor. Finally, we report the results of a simulation study on the performance of our PEB predictor and the associated jackknife MSPE estimator.

19.
In many practical applications, high-dimensional regression analyses have to take into account measurement error in the covariates. It is thus necessary to extend regularization methods that can handle the situation where the number of covariates p greatly exceeds the sample size n to the case in which the covariates are also mismeasured. A variety of methods are available in this context, but many of them rely on knowledge of the measurement error distribution and the structure of its covariance matrix. In this paper, our goal is to compare some of these methods, focusing on situations relevant to practical applications. In particular, we evaluate these methods in setups in which the measurement error distribution and dependence structure are not known and have to be estimated from data. Our focus is on variable selection, and the evaluation is based on extensive simulations.
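One concrete example of the corrections typically compared in such studies replaces the naive Gram matrix W'W/n by the corrected estimate W'W/n − Σ_u before applying an l1 penalty (a corrected lasso in the spirit of Loh and Wainwright, 2012). The abstract does not name specific methods, so the proximal-gradient sketch below, which assumes a known diagonal error covariance, is an illustrative stand-in rather than one of the procedures actually evaluated.

```python
import numpy as np

def corrected_lasso(W, y, sigma_u2, lam, n_iter=2000):
    """l1-penalized regression with a measurement-error-corrected Gram matrix.

    Gamma = W'W/n - diag(sigma_u2) replaces the naive Gram matrix; the problem is
    solved by proximal gradient with soft-thresholding. sigma_u2 is assumed known
    and diagonal; in high dimensions Gamma may be indefinite and would then need
    an additional constraint, which this sketch ignores.
    """
    n, p = W.shape
    Gamma = W.T @ W / n - np.diag(np.full(p, sigma_u2))
    rho = W.T @ y / n
    step = 1.0 / max(np.abs(np.linalg.eigvalsh(Gamma)).max(), 1e-8)
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = Gamma @ b - rho
        b = b - step * grad
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)   # soft-threshold
    return b

# Small sparse example with additive measurement error in the design
rng = np.random.default_rng(8)
n, p, sigma_u2 = 200, 50, 0.25
X = rng.normal(0, 1, (n, p))
beta = np.zeros(p)
beta[:3] = [1.5, -1.0, 0.8]
y = X @ beta + rng.normal(0, 1, n)
W = X + rng.normal(0, np.sqrt(sigma_u2), (n, p))   # mismeasured design

b_naive = corrected_lasso(W, y, 0.0, lam=0.1)      # sigma_u2 = 0 gives the naive lasso
b_corr = corrected_lasso(W, y, sigma_u2, lam=0.1)
print("selected (naive):    ", np.flatnonzero(np.abs(b_naive) > 1e-6)[:10])
print("selected (corrected):", np.flatnonzero(np.abs(b_corr) > 1e-6)[:10])
```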

20.
Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete-case analysis and single imputation or substitution, suffer from inefficiency and bias: they make strong parametric assumptions or consider only limit-of-detection censoring. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model, using the non-parametric estimate of the covariate distribution or the semi-parametric Cox model estimate in the presence of additional covariates. We evaluate this procedure in simulations and compare its operating characteristics to those of complete-case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete-case approach under heavy and moderate censoring, and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation thus offers a favorable alternative to complete-case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the logistic regression framework.
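A rough sketch of the multiple imputation scheme described above: censored covariate values are imputed by drawing from a Kaplan–Meier estimate of the covariate distribution conditional on exceeding the censoring point, a logistic regression is fitted to each completed data set, and the results are pooled with Rubin's rules. The sampler, variable names, and data-generating values are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)

def km_masses(x_obs, delta):
    """Kaplan-Meier point masses at the uncensored values of a right-censored covariate."""
    order = np.argsort(x_obs)
    x_s, d_s = x_obs[order], delta[order]
    n = len(x_s)
    surv, times, masses = 1.0, [], []
    for i in range(n):
        at_risk = n - i
        if d_s[i] == 1:
            jump = surv / at_risk            # mass placed at this uncensored value
            times.append(x_s[i])
            masses.append(jump)
            surv -= jump
    return np.array(times), np.array(masses)

# Simulated data: covariate X randomly right-censored by C, binary outcome Y
n = 800
x = rng.exponential(2.0, n)
c = rng.exponential(3.0, n)
delta = (x <= c).astype(int)                 # 1 = X observed, 0 = censored at C
x_obs = np.minimum(x, c)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.5 * x))))

times, masses = km_masses(x_obs, delta)

# Multiple imputation: draw censored X from the KM distribution conditional on X > c_i
m, betas, variances = 20, [], []
for _ in range(m):
    x_imp = x_obs.copy()
    for i in np.flatnonzero(delta == 0):
        tail = times > x_obs[i]
        if tail.any():
            p = masses[tail] / masses[tail].sum()
            x_imp[i] = rng.choice(times[tail], p=p)
        # else: no KM mass beyond c_i; keep the censoring value as a crude fallback
    fit = sm.Logit(y, sm.add_constant(x_imp)).fit(disp=0)
    betas.append(fit.params[1])
    variances.append(fit.cov_params()[1, 1])

# Rubin's rules: total variance = within + (1 + 1/m) * between
beta_bar = np.mean(betas)
W = np.mean(variances)
B = np.var(betas, ddof=1)
se = np.sqrt(W + (1 + 1 / m) * B)
print("pooled slope:", round(beta_bar, 3), " pooled SE:", round(se, 3))
```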
