Similar Articles
20 similar articles found.
1.
Abstract. Continuous proportional outcomes arise in many practical studies, where responses are confined to the unit interval (0,1). Utilizing Barndorff‐Nielsen and Jørgensen's simplex distribution, we propose a new type of generalized linear mixed‐effects model for longitudinal proportional data, in which the expected proportion is modelled directly through a logit function of fixed and random effects. We establish statistical inference along the lines of Breslow and Clayton's penalized quasi‐likelihood (PQL) and restricted maximum likelihood (REML) in the proposed model. We derive the PQL/REML using a high‐order multivariate Laplace approximation, which gives satisfactory estimation of the model parameters. The proposed model and inference are illustrated by simulation studies and a data example. The simulation studies show that the fourth‐order approximate PQL/REML performs satisfactorily. The data example shows that Aitchison's technique of fitting a normal linear mixed model to logit‐transformed proportional outcomes is not robust against outliers.
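As a minimal sketch of the ingredients this abstract names (assuming the standard form of the simplex density from Barndorff‐Nielsen and Jørgensen; the regression coefficients and random intercept below are illustrative, not from the paper):

```python
import math

def simplex_pdf(y, mu, sigma2):
    """Density of the simplex distribution on (0, 1) with mean parameter mu
    and dispersion sigma2; mu is what the model links to fixed and random
    effects through a logit function."""
    d = (y - mu) ** 2 / (y * (1 - y) * mu ** 2 * (1 - mu) ** 2)  # unit deviance
    norm = math.sqrt(2 * math.pi * sigma2 * (y * (1 - y)) ** 3)
    return math.exp(-d / (2 * sigma2)) / norm

def logit_mean(x, beta, b):
    """Logit mean model mu_ij = expit(x_ij' beta + b_i), with b_i a
    hypothetical subject-level random intercept."""
    eta = sum(xk * bk for xk, bk in zip(x, beta)) + b
    return 1 / (1 + math.exp(-eta))

# Sanity check: the density integrates to one over (0, 1) (midpoint rule).
n = 100000
total = sum(simplex_pdf((i + 0.5) / n, 0.3, 0.5) for i in range(n)) / n
```

The logit link keeps the fitted mean inside (0,1) for any values of the effects, which is the point of modelling the proportion directly rather than transforming the response.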

2.
Informative identification of the within‐subject correlation is essential in longitudinal studies in order to forecast the trajectory of each subject and improve the validity of inferences. In this paper, we fit this correlation structure by employing a time‐adaptive autoregressive error process. Such a process can automatically accommodate irregular and possibly subject‐specific observation times. Based on the fitted correlation structure, we propose an efficient two‐stage estimator of the unknown coefficient functions using a local polynomial approximation. This procedure does not involve within‐subject covariance matrices and hence circumvents the instability of calculating their inverses. The asymptotic normality of the resulting estimators is established. Numerical experiments are conducted to check the finite‐sample performance of our method, and an application to a medical data set is also presented.
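One simple instance of an autoregressive error structure that adapts to irregular observation times is the continuous-time AR(1), where correlation decays with the time gap; a sketch (this is an illustration of the general idea, not the paper's exact process):

```python
def ar1_cov(times, rho, sigma2):
    """Covariance matrix of a continuous-time AR(1) error process observed
    at possibly irregular, subject-specific times:
    Cov(e_j, e_k) = sigma2 * rho ** |t_j - t_k|."""
    return [[sigma2 * rho ** abs(tj - tk) for tk in times] for tj in times]

# Four irregularly spaced visits for one subject.
cov = ar1_cov([0.0, 0.4, 1.0, 2.5], rho=0.6, sigma2=2.0)
```

Because the correlation depends only on the observed gaps, no balanced design or common measurement grid is required.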

3.
Time‐varying coefficient models are widely used in longitudinal data analysis. These models allow the effects of predictors on the response to vary over time. In this article, we consider a mixed‐effects time‐varying coefficient model that accounts for the within‐subject correlation in longitudinal data. We show that when kernel smoothing is used to estimate the smooth functions in time‐varying coefficient models for sparse or dense longitudinal data, the asymptotic results in the two situations are essentially different, so a subjective choice between the sparse and dense cases might lead to erroneous conclusions in statistical inference. To solve this problem, we establish a unified self‐normalized central limit theorem, based on which a unified inference is proposed without deciding whether the data are sparse or dense. The effectiveness of the proposed unified inference is demonstrated through a simulation study and an analysis of the Baltimore MACS data.
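The kernel smoothing step the abstract refers to can be sketched with a local constant (kernel-weighted least squares) estimator of a varying coefficient; the one-predictor model and bandwidth below are illustrative assumptions:

```python
def epanechnikov(u):
    """Epanechnikov kernel, a common choice for local smoothing."""
    return 0.75 * (1 - u * u) if abs(u) < 1 else 0.0

def local_beta(t, times, x, y, h):
    """Local constant estimate of beta(t) in the varying-coefficient model
    y_i = beta(t_i) * x_i + error: a kernel-weighted least squares fit
    using only observations near time t."""
    num = den = 0.0
    for ti, xi, yi in zip(times, x, y):
        w = epanechnikov((ti - t) / h) / h
        num += w * xi * yi
        den += w * xi * xi
    return num / den

# Toy data with a constant true coefficient beta(t) = 2.
times = [i / 20 for i in range(21)]
x = [1.0 + 0.1 * i for i in range(21)]
y = [2.0 * xi for xi in x]
beta_hat = local_beta(0.5, times, x, y, h=0.3)
```

Whether such an estimator is averaging a few observations per subject (sparse) or many (dense) is exactly what changes its asymptotic behaviour, which motivates the unified self-normalized inference.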

7.
Motivated by the need to analyze the National Longitudinal Surveys data, we propose a new semiparametric longitudinal mean‐covariance model in which the effects of some explanatory variables on the dependent variable are linear while those of others are non‐linear, and the within‐subject correlations are modelled by a non‐stationary autoregressive error structure. We develop an estimation machinery based on least squares, approximating the non‐parametric functions via B‐spline expansions, and establish the asymptotic normality of the parametric estimators as well as the rate of convergence of the non‐parametric estimators. We further advocate a new model selection strategy in the varying‐coefficient model framework for deciding whether a component is significant and, if so, whether it is linear or non‐linear. The proposed method can also be employed to identify the true order of lagged terms consistently. Monte Carlo studies are conducted to examine the finite‐sample performance of our approach, and an application to real data is also illustrated.
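The B-spline expansions used to approximate the non-parametric functions can be built from the Cox–de Boor recursion; a minimal sketch (the knot sequence and evaluation point are illustrative choices, not from the paper):

```python
def bspline_basis(j, k, t, knots):
    """Cox-de Boor recursion for the j-th B-spline basis function of
    order k (degree k - 1) on the given knot sequence.  A non-parametric
    function f(t) is approximated as sum_j c_j * B_{j,k}(t), and the
    coefficients c_j are then estimated by least squares."""
    if k == 1:
        return 1.0 if knots[j] <= t < knots[j + 1] else 0.0
    left = right = 0.0
    if knots[j + k - 1] != knots[j]:
        left = ((t - knots[j]) / (knots[j + k - 1] - knots[j])
                * bspline_basis(j, k - 1, t, knots))
    if knots[j + k] != knots[j + 1]:
        right = ((knots[j + k] - t) / (knots[j + k] - knots[j + 1])
                 * bspline_basis(j + 1, k - 1, t, knots))
    return left + right

# Cubic (order 4) basis on [0, 1] with clamped boundary knots.
knots = [0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1]
vals = [bspline_basis(j, 4, 0.37, knots) for j in range(len(knots) - 4)]
```

The basis functions are non-negative and sum to one at every interior point, which is what makes the least squares problem in the coefficients well conditioned.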

9.
Longitudinal count data with excessive zeros frequently occur in social, biological, medical, and health research. To model such data, zero-inflated Poisson (ZIP) models are commonly used after separating zero and positive responses. As longitudinal count responses are likely to be serially correlated, such separation may destroy the underlying serial correlation structure. To overcome this problem, observation- and parameter-driven modelling approaches have recently been proposed. In the observation-driven model, the response at a specific time point is modelled through the responses at previous time points, incorporating serial correlation. One limitation of the observation-driven model is that it fails to accommodate possible over-dispersion, which frequently occurs in count responses. This limitation is overcome in the parameter-driven model, where the serial correlation is captured through a latent process using random effects. We compare the results obtained by the two models. A quasi-likelihood approach is developed to estimate the model parameters. The methodology is illustrated with analyses of two real-life datasets, and the two models are also compared through a simulation study to examine their performance.
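The ZIP building block itself is simple: with probability pi the response is a structural zero, otherwise it is Poisson. A sketch of the marginal pmf (the parameter values are illustrative):

```python
import math

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson pmf: P(Y=0) = pi + (1-pi)*exp(-lam),
    P(Y=y) = (1-pi) * exp(-lam) * lam**y / y! for y >= 1."""
    poisson = math.exp(-lam) * lam ** y / math.factorial(y)
    return pi + (1 - pi) * poisson if y == 0 else (1 - pi) * poisson

# Standard ZIP moment identities:
#   E[Y]   = (1 - pi) * lam
#   Var[Y] = (1 - pi) * lam * (1 + pi * lam)   (over-dispersed vs Poisson)
mean = sum(y * zip_pmf(y, 2.0, 0.3) for y in range(100))
```

The variance identity shows why zero inflation alone already induces over-dispersion; the serial correlation discussed in the abstract is an additional layer on top of this marginal model.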

10.
Investigators often gather longitudinal data to assess changes in responses over time within subjects and to relate these changes to within‐subject changes in predictors. Missing data are common in such studies, and predictors can be correlated with subject‐specific effects. Maximum likelihood methods for generalized linear mixed models provide consistent estimates when the data are ‘missing at random’ (MAR) but can produce inconsistent estimates in settings where the random effects are correlated with one of the predictors. On the other hand, conditional maximum likelihood methods (and closely related maximum likelihood methods that partition covariates into between‐ and within‐cluster components) provide consistent estimation when random effects are correlated with predictors but can produce inconsistent covariate effect estimates when data are MAR. Using theory, simulation studies, and fits to example data, this paper shows that decomposition methods using complete covariate information produce consistent estimates. In some practical cases these methods, which ostensibly require complete covariate information, actually involve only the observed covariates. These results offer an easy‐to‐use approach that simultaneously protects against bias from both cluster‐level confounding and MAR missingness in assessments of change.
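The between/within partition of a covariate is a mechanical step: split each value into its cluster mean and the deviation from that mean, and enter the two components as separate regressors. A sketch (the toy clusters and values are illustrative):

```python
def decompose(clusters, x):
    """Partition covariate x into a between-cluster component (the cluster
    mean, constant within each cluster) and a within-cluster component
    (the deviation from that mean)."""
    sums, counts = {}, {}
    for c, xi in zip(clusters, x):
        sums[c] = sums.get(c, 0.0) + xi
        counts[c] = counts.get(c, 0) + 1
    means = {c: sums[c] / counts[c] for c in sums}
    between = [means[c] for c in clusters]
    within = [xi - means[c] for c, xi in zip(clusters, x)]
    return between, within

between, within = decompose([1, 1, 1, 2, 2], [1.0, 2.0, 3.0, 10.0, 20.0])
```

Giving the two components separate coefficients is what protects the within-cluster effect from cluster-level confounding, since only the deviation term varies within a subject.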

11.
In applied statistical data analysis, overdispersion is a common feature. It can be addressed using either multiplicative or additive random effects. A multiplicative model for count data incorporates a gamma random effect as a multiplicative factor in the mean, whereas an additive model assumes a normally distributed random effect entered into the linear predictor. Using Bayesian principles, these ideas are applied to longitudinal count data based on the so-called combined model. The performance of the additive and multiplicative approaches is compared using a simulation study.
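The two ways of inducing overdispersion have closed-form marginal moments, which makes the contrast concrete; a sketch under the standard parameterizations (a gamma effect with mean 1 and variance 1/alpha, and a normal effect on the log scale):

```python
import math

def multiplicative_moments(mu, alpha):
    """Marginal mean/variance when Y | u ~ Poisson(mu * u) with a gamma
    random effect u of mean 1 and variance 1/alpha (negative binomial
    marginal): E[Y] = mu, Var[Y] = mu + mu**2 / alpha."""
    return mu, mu + mu ** 2 / alpha

def additive_moments(eta, sigma2):
    """Marginal mean/variance when Y | b ~ Poisson(exp(eta + b)) with a
    normal random effect b ~ N(0, sigma2) in the linear predictor:
    E[Y] = m = exp(eta + sigma2/2), Var[Y] = m + m**2 * (exp(sigma2) - 1)."""
    m = math.exp(eta + sigma2 / 2)
    return m, m + m ** 2 * (math.exp(sigma2) - 1)
```

In both cases the variance exceeds the mean whenever the random effect has positive variance, which is the overdispersion each model is designed to absorb.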

12.
Since Dorfman's seminal work on the subject, group testing has been widely adopted in epidemiological studies. In Dorfman's context of detecting syphilis, group testing entails pooling blood samples and testing the pools, as opposed to testing individual samples. A negative pool indicates that all individuals in the pool are free of the syphilis antigen, whereas a positive pool suggests that one or more individuals carry it. When covariate information is collected, researchers have considered regression models that allow one to estimate covariate‐adjusted disease probability. We study maximum likelihood estimators of covariate effects in these regression models when the group testing response is prone to error. We show that, compared with inference drawn from individual testing data, inference based on group testing data can be more resilient to response misclassification in terms of bias and efficiency. We also provide guidance on designing the group composition to alleviate the adverse effects of misclassification on statistical inference.
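The pool-level response probability under misclassification follows directly from the description above: a pool is truly positive unless every member is negative, and the assay then flips the truth with rates governed by its sensitivity and specificity. A sketch (equal individual probabilities within a pool are an assumption for simplicity; the regression setting lets each member have its own covariate-adjusted probability):

```python
def pool_positive_prob(p, k, se=1.0, sp=1.0):
    """Probability that a pool of k independent samples tests positive when
    each individual is positive with probability p, for an assay with
    sensitivity se and specificity sp.  se = sp = 1 recovers error-free
    Dorfman group testing: P(pool+) = 1 - (1 - p)**k."""
    truly_positive = 1 - (1 - p) ** k
    return se * truly_positive + (1 - sp) * (1 - truly_positive)
```

The pool size k appears inside the likelihood through this expression, which is why the group composition can be designed to dampen the effect of the error rates on the covariate-effect estimates.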

20.
Joint likelihood approaches have been widely used to handle survival data with time-dependent covariates. In constructing the joint likelihood function for the accelerated failure time (AFT) model, the unspecified baseline hazard function is typically assumed to be a piecewise-constant function. However, there are usually no closed-form formulas for the regression parameters, so numerical methods are required in the EM iterations. The nonsmooth step-function assumption also leads to a very spiky likelihood function whose global maximum is hard to find; moreover, because the likelihood is nonsmooth, the maximization must rely on direct search methods, which are inefficient and time-consuming. To overcome these two disadvantages, we propose a kernel-smoothed pseudo-likelihood function that replaces the nonsmooth step-function assumption. The performance of the proposed method is evaluated by simulation studies, and a case study of reproductive egg-laying data demonstrates the usefulness of the new approach.
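The smoothing idea can be sketched by replacing each interval indicator in the piecewise-constant hazard with a Gaussian-kernel bump (this illustrates the general device of kernel-smoothing a step function, not the paper's exact estimator; the cut points, hazard levels, and bandwidth are illustrative):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def smoothed_hazard(t, cuts, lams, h):
    """Kernel smoothing of a piecewise-constant baseline hazard: each
    indicator 1{a_{j-1} <= t < a_j} is replaced by the smooth bump
    Phi((t - a_{j-1}) / h) - Phi((t - a_j) / h).  As h -> 0 the step
    function is recovered; for h > 0 the result is differentiable in t,
    so gradient-based maximization becomes available."""
    return sum(lam * (norm_cdf((t - a) / h) - norm_cdf((t - b) / h))
               for lam, a, b in zip(lams, cuts[:-1], cuts[1:]))

# Three hazard levels on [0, 3); a small bandwidth nearly reproduces the step.
cuts = [0.0, 1.0, 2.0, 3.0]
lams = [0.5, 1.5, 0.8]
val = smoothed_hazard(1.5, cuts, lams, h=0.01)
```

Smoothness is the whole payoff: the spiky, nondifferentiable likelihood that forced direct search is replaced by one amenable to standard numerical optimization.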


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号