Similar Articles
20 similar articles found.
1.
As researchers increasingly rely on linear mixed models to characterize longitudinal data, there is a need for improved techniques for selecting among this class of models, which requires specifying both fixed and random effects via a mean model and a variance-covariance structure. The process is further complicated when fixed and/or random effects are non-nested between models. This paper develops a hypothesis test to compare non-nested linear mixed models based on extensions of the work begun by Sir David Cox. We assess the robustness of this approach by comparing models containing correlated measures of body fat for predicting longitudinal cardiometabolic risk.

2.
The Poisson distribution is a benchmark for modeling count data. Its equidispersion constraint, however, does not accurately represent real data. Most real datasets exhibit overdispersion, so attention in the statistics community has focused on the associated issues. More examples are surfacing, however, that display underdispersion, warranting the need to highlight this phenomenon and bring more attention to models that can better describe such data structures. This work addresses various sources of data underdispersion and surveys several distributions that can model underdispersed data, comparing their performance on applied datasets.
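A quick empirical check for the dispersion issues discussed above is the sample variance-to-mean ratio. The sketch below uses invented count samples purely for illustration:

```python
import numpy as np

def dispersion_index(y):
    """Sample variance-to-mean ratio: ~1 is equidispersed (Poisson-like),
    >1 overdispersed, <1 underdispersed."""
    y = np.asarray(y, dtype=float)
    return y.var(ddof=1) / y.mean()

# Hypothetical count samples (not from any of the papers above)
over = np.array([0, 0, 1, 2, 7, 9, 0, 1, 12, 0])    # spread far exceeds mean
under = np.array([2, 3, 3, 2, 3, 2, 3, 3, 2, 3])    # very regular counts
```

Formal tests of dispersion exist, but this ratio is a useful first look before choosing among the models surveyed in the paper.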

3.
A characterization of GLMs is given. A modification of the Gaussian GEE1 (modified GEE1) is applied to heteroscedastic longitudinal data, to which linear mixed-effects models are usually applied. The modified GEE1 scales multivariate data to homoscedastic data while maintaining the correlation structure, then applies the usual GEE1 to the homoscedastic data, which requires no diagnostics for the diagonal variances. Relationships among multivariate linear regression methods (ordinary/generalized least squares, naïve/modified GEE1, and linear mixed-effects models) are discussed. An application showed that the modified GEE1 gave the most efficient parameter estimation. Correct specification of the main diagonal of the heteroscedastic data variance appears to be important for efficient estimation of the mean parameters.
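The scaling step described above can be illustrated in a few lines: dividing each response dimension by its standard deviation yields homoscedastic data while leaving the correlation matrix unchanged. A minimal sketch (the covariance values are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical responses at 3 time points: very different variances,
# common correlation structure.
cov = np.array([[1.00, 0.50, 0.25],
                [0.50, 4.00, 1.00],
                [0.25, 1.00, 9.00]])
X = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=5000)

# Scale each column to unit variance: now homoscedastic, but the
# correlation matrix is untouched, so usual GEE1 machinery applies.
Z = X / X.std(axis=0, ddof=1)
```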

4.
The omission of important variables is a well-known model specification issue in regression analysis and mixed linear models. The author considers longitudinal data models that are special cases of mixed linear models; in particular, linear models of repeated observations on a subject. Models of omitted variables have origins in both the econometrics and biostatistics literatures. The author describes regression coefficient estimators that are robust to certain types of omitted variables and that provide a basis for detecting their influence. New robust estimators and omitted-variable tests are introduced and illustrated with a case study that investigates the determinants of tax liability.

5.
Analysis of longitudinal data using a general linear mixed model requires the specification of a form for the covariance matrix of within-subject observations. Graphical diagnostics, such as the scatterplot matrix, can be of substantial help in making this specification. I introduce another graphical diagnostic, the Partial-Regression-on-Intervenors Scatterplot Matrix (PRISM), which complements the ordinary scatterplot matrix and which is more useful for identifying certain kinds of correlation structures. PRISMs corresponding to several commonly used correlation structures are displayed. The PRISM's usefulness in model specification is illustrated with an example of longitudinal data from a 100-kilometer road race.

6.
In longitudinal data analysis, efficient estimation of the regression coefficients requires correct specification of the covariance structure, and efficient estimation of the covariance matrix requires correct specification of the mean regression model. In this article, we propose a general semiparametric model for the mean and the covariance simultaneously using the modified Cholesky decomposition. A regression-spline-based approach within the framework of generalized estimating equations is proposed to estimate the parameters in the mean and the covariance. Under regularity conditions, asymptotic properties of the resulting estimators are established. Extensive simulations are conducted to investigate the performance of the proposed estimator, and a real data set is then analysed using the proposed approach.

7.
Longitudinal data often require a combination of flexible time trends and individual-specific random effects. For example, our methodological developments are motivated by a study on longitudinal body mass index profiles of children, collected with the aim of better understanding the factors driving childhood obesity. The high degree of nonlinearity and heterogeneity in these data, together with the complexity of the data set (a large number of observations, long longitudinal profiles, and clusters of observations with specific deviations from the population model), makes the application challenging and prevents the use of standard growth curve models. We propose a fully Bayesian approach based on Markov chain Monte Carlo simulation techniques that allows for the semiparametric specification of both the trend function and the random effects distribution. Bayesian penalized splines are considered for the former, while a Dirichlet process mixture (DPM) specification allows for an adaptive amount of deviation from normality for the latter. The advantages of such DPM prior structures for random effects are investigated in a simulation study to improve understanding of the model specification before analyzing the childhood obesity data.

8.
Efficient estimation of the regression coefficients in longitudinal data analysis requires correct specification of the covariance structure; misspecification may lead to inefficient or biased estimators of the mean parameters. One of the most commonly used methods for handling the covariance matrix is based on modeling its Cholesky decomposition. In this paper, we therefore reparameterize covariance structures in longitudinal data analysis through a modified Cholesky decomposition of the within-subject covariance matrix itself. Under this decomposition, the within-subject covariance matrix is factored into a unit lower triangular matrix involving moving-average coefficients and a diagonal matrix involving innovation variances, which are modeled as linear functions of covariates. We then propose a fully Bayesian inference for joint mean and covariance models based on this decomposition. A computationally efficient Markov chain Monte Carlo method combining the Gibbs sampler and the Metropolis–Hastings algorithm is implemented to obtain the Bayesian estimates of the unknown parameters together with their standard deviation estimates. Finally, several simulation studies and a real example illustrate the proposed methodology.
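The decomposition described above, Σ = L D Lᵀ with L unit lower triangular (moving-average coefficients) and D diagonal (innovation variances), can be obtained directly from the ordinary Cholesky factor by rescaling its columns. A minimal numpy sketch (the AR(1)-type example covariance is ours, for illustration):

```python
import numpy as np

def modified_cholesky(sigma):
    """Factor sigma = L @ D @ L.T, where L is unit lower triangular
    (moving-average coefficients) and D is diagonal (innovation variances)."""
    C = np.linalg.cholesky(sigma)   # sigma = C @ C.T, C lower triangular
    d = np.diag(C)
    L = C / d                       # divide each column by its diagonal entry
    D = np.diag(d ** 2)
    return L, D

# Example: AR(1)-type within-subject covariance over 4 time points
rho, t = 0.6, np.arange(4)
sigma = rho ** np.abs(t[:, None] - t[None, :])
L, D = modified_cholesky(sigma)
```

A useful property of this parameterization is that the entries of L and the log innovation variances are unconstrained, which is what makes linear covariate models for them feasible.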

9.
The purpose of this paper is to develop a new linear regression model for count data, namely the generalized-Poisson Lindley (GPL) linear model. The GPL linear model is obtained by applying the generalized linear model framework to the GPL distribution. The model parameters are estimated by maximum likelihood. We use the GPL linear model to fit two real data sets and compare it with the Poisson, negative binomial (NB) and Poisson-weighted exponential (P-WE) models for count data. The GPL linear model is found to fit over-dispersed count data well, showing the highest log-likelihood and the smallest AIC and BIC values. The GPL regression model is therefore a valuable alternative to the Poisson, NB, and P-WE models.

10.
This paper proposes a method to assess local influence under minor perturbations of a statistical model with incomplete data. The idea is to apply Cook's approach to the conditional expectation of the complete-data log-likelihood function in the EM algorithm. The proposed method is shown to produce analytic results very similar to those obtained from the classical local influence approach based on the observed-data likelihood function, and it has the potential to assess a variety of complicated models that cannot be handled by existing methods. An application to the generalized linear mixed model is investigated. Some illustrative artificial and real examples are presented.

11.
Generalized linear models with random effects and/or serial dependence are commonly used to analyze longitudinal data. However, the computation and interpretation of marginal covariate effects can be difficult. This led Heagerty (1999, 2002) to propose models for longitudinal binary data in which a logistic regression is first used to explain the average marginal response. The model is then completed by introducing a conditional regression that allows for the longitudinal, within-subject, dependence, either via random effects or regressing on previous responses. In this paper, the authors extend the work of Heagerty to handle multivariate longitudinal binary response data using a triple of regression models that directly model the marginal mean response while taking into account dependence across time and across responses. Markov chain Monte Carlo methods are used for inference. Data from the Iowa Youth and Families Project are used to illustrate the methods.

12.
Linear mixed models are regularly applied to animal and plant breeding data to evaluate genetic potential. Residual maximum likelihood (REML) is the preferred method for estimating variance parameters associated with this type of model. Typically an iterative algorithm is required for the estimation of variance parameters. Two algorithms which can be used for this purpose are the expectation-maximisation (EM) algorithm and the parameter expanded EM (PX-EM) algorithm. Both, particularly the EM algorithm, can be slow to converge when compared to a Newton-Raphson type scheme such as the average information (AI) algorithm. The EM and PX-EM algorithms require specification of the complete data, including the incomplete and missing data. We consider a new incomplete data specification based on a conditional derivation of REML. We illustrate the use of the resulting new algorithm through two examples: a sire model for lamb weight data and a balanced incomplete block soybean variety trial. In the cases where the AI algorithm failed, a REML PX-EM based on the new incomplete data specification converged in 28% to 30% fewer iterations than the alternative REML PX-EM specification. For the soybean example a REML EM algorithm using the new specification converged in fewer iterations than the current standard specification of a REML PX-EM algorithm. The new specification integrates linear mixed models, Henderson's mixed model equations, REML and the REML EM algorithm into a cohesive framework.

13.
In clinical practice, the profile of each subject's CD4 response in a longitudinal study may follow a 'broken stick' trajectory, indicating multiple phases of increase and/or decline in response. Such changepoints may be important indicators for quantifying treatment effect and improving the management of patient care. Although it is common practice to analyze complex AIDS longitudinal data using nonlinear mixed-effects (NLME) or nonparametric mixed-effects (NPME) models, estimating a changepoint within NLME or NPME models is challenging because of their complicated model formulations. In this paper, we propose a changepoint mixed-effects model with random subject-specific parameters, including the changepoint, for the analysis of longitudinal CD4 cell counts of HIV-infected subjects following highly active antiretroviral treatment. The longitudinal CD4 data in this study may exhibit departures from symmetry; may contain missing observations for various reasons, which are likely to be non-ignorable in the sense that missingness may be related to the missing values; and may be censored at the time the subject goes off study treatment, a potentially informative dropout mechanism. Inference is complicated dramatically when longitudinal CD4 data with asymmetry (skewness), incompleteness, and informative dropout are observed in conjunction with an unknown changepoint. Our objective is to address the simultaneous impact of skewness, missingness, and informative censoring by jointly modeling the CD4 response and dropout time processes under a Bayesian framework. The method is illustrated using a real AIDS data set to compare candidate models under various scenarios, and some interesting results are presented.
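For intuition, a 'broken stick' trajectory is simply a piecewise-linear mean with a changepoint τ, e.g. E[y(t)] = β0 + β1·t + β2·max(t − τ, 0), so the slope changes by β2 after τ. A minimal sketch with invented parameter values (not estimates from the paper):

```python
import numpy as np

def broken_stick(t, beta0, beta1, beta2, tau):
    """Piecewise-linear trajectory: slope beta1 before the changepoint tau,
    slope beta1 + beta2 after it; continuous at tau."""
    t = np.asarray(t, dtype=float)
    return beta0 + beta1 * t + beta2 * np.maximum(t - tau, 0.0)

t = np.linspace(0, 10, 11)
# Hypothetical CD4-like profile: initial rise, then decline after tau = 4
y = broken_stick(t, beta0=200.0, beta1=50.0, beta2=-80.0, tau=4.0)
```

In the paper's mixed-effects formulation, β0, β1, β2, and τ would all carry subject-specific random effects; this sketch shows only the population-level mean shape.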

14.
Longitudinal count data with excessive zeros frequently occur in social, biological, medical, and health research. To model such data, zero-inflated Poisson (ZIP) models are commonly used after separating zero and positive responses. As longitudinal count responses are likely to be serially correlated, such separation may destroy the underlying serial correlation structure. To overcome this problem, observation-driven and parameter-driven modelling approaches have recently been proposed. In the observation-driven model, the response at a given time point is modelled through the responses at previous time points, incorporating serial correlation. One limitation of the observation-driven model is that it fails to accommodate possible over-dispersion, which frequently occurs in count responses. This limitation is overcome in the parameter-driven model, where the serial correlation is captured through a latent process using random effects. We compare the results obtained by the two models. A quasi-likelihood approach is developed to estimate the model parameters. The methodology is illustrated with the analysis of two real-life datasets, and model performance is also compared through a simulation study.
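For reference, the ZIP probability mass mixes a point mass at zero (probability π) with a Poisson(λ) component: P(Y = 0) = π + (1 − π)e^(−λ) and P(Y = k) = (1 − π)λ^k e^(−λ)/k! for k ≥ 1. A minimal sketch of the marginal pmf (parameter values are invented for illustration):

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson pmf: extra mass pi at zero on top of
    a (1 - pi)-weighted Poisson(lam) component."""
    k = np.asarray(k)
    p = (1 - pi) * poisson.pmf(k, lam)
    return np.where(k == 0, pi + p, p)

k = np.arange(0, 50)
p = zip_pmf(k, pi=0.3, lam=2.5)
```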

15.
Non-Gaussian outcomes are often modeled using members of the so-called exponential family, and the Poisson model for count data falls within this tradition. The family in general, and the Poisson model in particular, are at once convenient, being mathematically elegant, and in need of extension, being often somewhat restrictive. Two of the main rationales for existing extensions are (1) the occurrence of overdispersion, in the sense that the variability in the data is not adequately captured by the model's prescribed mean-variance link, and (2) the accommodation of data hierarchies owing to, for example, repeatedly measuring the outcome on the same subject or recording information from various members of the same family. There is a variety of overdispersion models for count data, such as the negative-binomial model. Hierarchies are often accommodated through the inclusion of subject-specific random effects; conventionally, though not always, such random effects are assumed to be normally distributed. While both issues may occur simultaneously, models accommodating them at once are uncommon. This paper proposes a generalized linear model accommodating overdispersion and clustering through two separate sets of random effects, of gamma and normal type, respectively, in line with the proposal by Booth et al. (Stat Model 3:179-181, 2003). The model extends both classical overdispersion models for count data (Breslow, Appl Stat 33:38-44, 1984), in particular the negative binomial model, and the generalized linear mixed model (Breslow and Clayton, J Am Stat Assoc 88:9-25, 1993). Apart from model formulation, we briefly discuss several estimation options, and then settle on maximum likelihood estimation with either fully analytic integration or a hybrid of analytic and numerical integration; the latter is implemented in the SAS procedure NLMIXED. The methodology is applied to data from a study of epileptic seizures.
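The gamma route to overdispersion mentioned above can be seen in a small simulation: mixing the Poisson mean with a mean-one gamma random effect u ~ Gamma(k, 1/k) yields a marginal negative binomial with Var(Y) = λ + λ²/k > λ. A sketch under invented parameter values:

```python
import numpy as np

rng = np.random.default_rng(42)
lam, k, n = 4.0, 2.0, 200_000

# Gamma random effect with mean 1 and variance 1/k
u = rng.gamma(shape=k, scale=1.0 / k, size=n)
# Poisson conditional on the random effect
y = rng.poisson(lam * u)

# Marginally y is negative binomial: E[Y] = lam, Var(Y) = lam + lam**2 / k
```

The paper's model layers a second, normal random effect on top of this gamma mixing to handle clustering; the simulation above illustrates only the overdispersion mechanism.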

16.
In survival analysis, time-dependent covariates are usually present as longitudinal data collected periodically and measured with error. The longitudinal data can be assumed to follow a linear mixed-effects model, and Cox regression models may be used to model the survival events. The hazard rate of the survival times depends on the underlying error-prone time-dependent covariate, which may be described by random effects. Most existing methods for such models place a parametric distributional assumption on the random effects and specify a normally distributed error term for the linear mixed-effects model. These assumptions may not always be valid in practice. In this article, we propose a new likelihood method for Cox regression models with error-contaminated time-dependent covariates. The proposed method does not require any parametric distributional assumption on the random effects or the random errors. Asymptotic properties of the parameter estimators are provided. Simulation results show that in certain situations the proposed method is more efficient than existing methods.

17.
Partially linear models are extensions of linear models that include a nonparametric function of some covariate. They have been found useful in both cross-sectional and longitudinal studies. This paper provides a convenient means of extending Cook's local influence analysis to the penalized Gaussian likelihood estimator that uses a smoothing spline as the solution to its nonparametric component. Insight is also provided into the interplay of the influence or leverage measures between the linear and nonparametric components of the model. The diagnostics are applied to a mouthwash data set and a longitudinal hormone study, with informative results.

18.
Count data analysis techniques have been developed in biological and medical research areas. In particular, zero-inflated versions of parametric count distributions have been used to model the excessive zeros that are often present in these assays. The most common count distributions for analyzing such data are the Poisson and the negative binomial. However, a Poisson distribution can only handle equidispersed data, and a negative binomial distribution can only cope with overdispersion. In contrast, the Conway–Maxwell–Poisson (CMP) distribution [4] can handle a wide range of dispersion. We show, with an illustrative data set on next-generation sequencing of maize hybrids, that both underdispersion and overdispersion can be present in genomic data. Furthermore, the maize data set consists of clustered observations, and we therefore develop inference procedures for a zero-inflated CMP regression that incorporates a cluster-specific random effect term. Unlike in Gaussian models, the underlying likelihood is computationally challenging; we use a numerical approximation via Gaussian quadrature to circumvent this issue. A test for zero-inflation has also been developed in our setting. Finite-sample properties of our estimators and test have been investigated by extensive simulations. Finally, the statistical methodology is applied to the maize data mentioned above.
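For reference, the CMP pmf is P(Y = k) ∝ λ^k/(k!)^ν, where ν > 1 gives underdispersion, ν < 1 overdispersion, and ν = 1 recovers the Poisson. The normalizing constant has no closed form, but a truncated sum suffices in practice. A minimal sketch (the truncation point and parameter values are ours):

```python
import numpy as np
from scipy.special import gammaln

def cmp_pmf(k, lam, nu, kmax=200):
    """Conway-Maxwell-Poisson pmf P(Y=k) proportional to lam**k / (k!)**nu,
    normalized by a truncated sum over 0..kmax."""
    ks = np.arange(kmax + 1)
    logw = ks * np.log(lam) - nu * gammaln(ks + 1)   # log of lam**k / (k!)**nu
    logw -= logw.max()                               # numerical stabilization
    w = np.exp(logw)
    return w[k] / w.sum()
```

For larger λ or ν close to zero the truncation point kmax would need to grow; the fixed default here is only adequate for small means.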

19.
Bayesian hierarchical models typically involve specifying prior distributions for one or more variance components. This is rather removed from the observed data, so specification based on expert knowledge can be difficult. While there are suggestions for "default" priors in the literature, often a conditionally conjugate inverse-gamma specification is used, despite documented drawbacks of this choice. The authors suggest "conservative" prior distributions for variance components, which deliberately give more weight to smaller values. These are appropriate for investigators who are skeptical about the presence of variability in the second-stage parameters (random effects) and want to particularly guard against inferring more structure than is really present. The suggested priors readily adapt to various hierarchical modelling settings, such as fitting smooth curves, modelling spatial variation and combining data from multiple sites.

20.
The negative binomial (NB) model and the generalized Poisson (GP) model are common alternatives to Poisson models when overdispersion is present in the data. Having accounted for initial overdispersion, we may require further investigation as to whether there is evidence of zero-inflation in the data. Two score statistics are derived from the GP model for testing zero-inflation. Unlike Wald-type test statistics, these statistics do not require fitting the more complex zero-inflated overdispersed models to evaluate zero-inflation. A simulation study illustrates that the developed score statistics reasonably follow a χ2 distribution and maintain the nominal level. Extensive simulation results also indicate that the power behavior differs depending on whether a continuous or a binary variable is included in the zero-inflation (ZI) part of the model. These differences form the basis of our suggestions for real data analysis. Two practical examples are presented in this article. Results from these examples, along with practical experience, lead us to suggest performing the developed score test before fitting a zero-inflated NB model to the data.
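As a simpler point of reference than the GP-based statistics derived in the paper, the analogous score statistic for zero-inflation in an intercept-only Poisson model (van den Broek, 1995, Biometrics) has a closed form. A sketch of that Poisson-case statistic, applied to an invented data set with a clear excess of zeros:

```python
import numpy as np

def zi_score_test(y):
    """van den Broek (1995) score statistic for zero-inflation in an
    intercept-only Poisson model; asymptotically chi-square with 1 df."""
    y = np.asarray(y)
    n, ybar = y.size, y.mean()
    p0 = np.exp(-ybar)            # Poisson P(Y = 0) at the MLE lambda = ybar
    n0 = np.sum(y == 0)           # observed zero count
    num = (n0 / p0 - n) ** 2
    den = n * (1 - p0) / p0 - n * ybar
    return num / den

# Hypothetical counts: 40 zeros among 100 observations, far more than the
# roughly 28 a Poisson with the same mean would predict.
y = np.array([0] * 40 + [1, 2, 3, 2, 1, 4, 2, 3, 1, 2] * 6)
```

Compared against the chi-square(1) 5% critical value of 3.84, a large statistic suggests fitting a zero-inflated model; the paper's GP-based versions extend this idea to overdispersed baselines and covariate-dependent ZI parts.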

