Similar Documents
Found 20 similar documents (search time: 15 ms).
1.
Random effects models play a critical role in modelling longitudinal data. However, there are few studies of kernel-based maximum likelihood methods for semiparametric random effects models. In this paper, based on kernel and likelihood methods, we propose a pooled global maximum likelihood method for partial linear random effects models. The pooled global maximum likelihood method employs local approximations of the nonparametric function at a group of grid points simultaneously, rather than at a single point. Gaussian quadrature is used to approximate the integral of the likelihood with respect to the random effects. The asymptotic properties of the proposed estimators are rigorously studied. Simulation studies demonstrate the performance of the proposed approach. We also apply the method to analyse correlated medical costs in the Medical Expenditure Panel Survey data set.
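The quadrature step described above can be sketched numerically. This is a hypothetical illustration (not the paper's estimator): Gauss-Hermite quadrature approximates the marginal likelihood of a Poisson model with a random intercept b ~ N(0, sigma^2) by integrating the conditional likelihood over the random effect.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import gammaln

def marginal_loglik(y, beta0, sigma, n_nodes=30):
    """Approximate log of integral prod_j Pois(y_j | exp(beta0 + b)) N(b; 0, sigma^2) db."""
    nodes, weights = hermgauss(n_nodes)   # rule for integrals against exp(-x^2)
    b = np.sqrt(2.0) * sigma * nodes      # change of variables b = sqrt(2)*sigma*x
    eta = beta0 + b[:, None]              # linear predictor at each quadrature node
    # Poisson log-likelihood at each node, summed over the subject's observations
    ll = (y * eta - np.exp(eta) - gammaln(y + 1.0)).sum(axis=1)
    return np.log(np.sum((weights / np.sqrt(np.pi)) * np.exp(ll)))
```

With a modest number of nodes this matches brute-force numerical integration closely for smooth conditional likelihoods.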

2.
This paper addresses the problem of simultaneous variable selection and estimation in the random-intercepts model with a first-order lag response. This type of model is commonly used for analyzing longitudinal data obtained through repeated measurements on individuals over time. The model uses random effects to capture the intra-class correlation and the first lagged response to address the serial correlation, the two common sources of dependency in longitudinal data. We demonstrate that a conditional likelihood approach that ignores the correlation between random effects and initial responses can lead to biased regularized estimates. Furthermore, we show that joint modeling of initial responses and subsequent observations within the dynamic random-intercepts model leads to both consistency and oracle properties of the regularized estimators. We present theoretical results in both low- and high-dimensional settings and evaluate the regularized estimators' performance through simulation studies and the analysis of a real dataset. Supporting information is available online.

3.
A random-effects transition model is proposed for the economic activity status of household members. The model accounts for two kinds of correlation: one due to the longitudinal nature of the study, handled by a transition parameter, and the other due to correlation between responses of members of the same household, handled by introducing random coefficients into the model. Results are presented for both homogeneous (parameters constant over time) and non-homogeneous Markov models with random coefficients. A Bayesian approach via Gibbs sampling is used for parameter estimation. Results of the random-effects transition model are compared, using the deviance information criterion, with those of three other models that exclude random effects and/or transition effects. The full model is shown to gain precision by accounting for all aspects of the process that generated the data. To illustrate the utility of the proposed model, a longitudinal data set extracted from the Iranian Labour Force Survey is analysed to explore the simultaneous effect of covariates on current economic activity as a nominal response. Sensitivity analyses are also performed to assess the robustness of the posterior estimates of the transition parameters to perturbations of the prior parameters.

4.
The shared-parameter model and its so-called hierarchical or random-effects extension are widely used joint modeling approaches for combinations of longitudinal continuous, binary, count, missing, and survival outcomes that naturally occur in many clinical and other studies. A random effect is introduced and either shared or allowed to differ between two or more repeated measures or longitudinal outcomes, thereby acting as a vehicle to capture the association between the outcomes in these joint models. It is well known that parameter estimates in a linear mixed model (LMM) for continuous repeated measures or longitudinal outcomes allow a marginal interpretation, even though a hierarchical formulation is employed. This is not the case for the generalized linear mixed model (GLMM), that is, for non-Gaussian outcomes. Joint models formulated for continuous and binary, or for two longitudinal binomial outcomes, using the LMM and GLMM, will therefore have a marginal interpretation for parameters associated with the continuous outcome but a subject-specific interpretation for the fixed-effects parameters relating covariates to the binary outcomes. To derive marginally meaningful parameters for the binary part of a joint model, we adopt the marginal multilevel model (MMM) due to Heagerty [13] and Heagerty and Zeger [14] and formulate a joint MMM for two longitudinal responses. This enables us to (1) capture the association between the two responses and (2) obtain parameter estimates with a population-averaged interpretation for both outcomes. The model is applied to two data sets, and the results are compared with those obtained from existing approaches such as generalized estimating equations, the GLMM, and the model of Heagerty [13]. Estimates were found to be very close to those from separate analyses per outcome, but the joint model yields higher precision and allows the association between outcomes to be quantified.
Parameters were estimated by maximum likelihood. The model is easy to fit using available tools such as the SAS NLMIXED procedure.
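The subject-specific versus population-averaged distinction can be illustrated numerically. This is a hedged sketch (not Heagerty's MMM itself): for a logistic random-intercept model with logit P(Y=1 | x, b) = beta0 + beta1*x + b and b ~ N(0, sigma^2), integrating b out shows that the induced population-averaged slope is attenuated relative to the conditional beta1, which is why GLMM fixed effects lack a marginal interpretation.

```python
import numpy as np

def marginal_prob(x, beta0, beta1, sigma, n=8001):
    """Population-averaged P(Y=1 | x): integrate the random intercept out numerically."""
    b = np.linspace(-8.0 * sigma, 8.0 * sigma, n)
    dens = np.exp(-b ** 2 / (2.0 * sigma ** 2)) / np.sqrt(2.0 * np.pi * sigma ** 2)
    p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x + b)))
    return np.sum(p * dens) * (b[1] - b[0])   # Riemann sum over a fine grid
```

With beta1 = 1 and sigma = 2, the implied marginal slope is roughly beta1 / sqrt(1 + 0.346 sigma^2), about 0.65, a well-known attenuation approximation.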

5.
A locally weighted censored quantile regression approach is proposed for panel data models with fixed effects, allowing for random censoring. The resulting estimators are obtained with the fixed-effects quantile regression method, with weights selected parametrically, semi-parametrically, or non-parametrically. Large-panel asymptotics are used to cope with the incidental parameter problem. The consistency and limiting distribution of the proposed estimator are derived, and the finite-sample performance of the proposed estimators is examined via Monte Carlo simulations.

6.
Random effects models are considered for count data obtained in a crossed or nested classification. The main feature of the proposed models is the use of additive effects on the original scale, in contrast to the commonly used log scale; the rationale behind this approach is given. Estimation of the variance components is based on the usual mean square approach, giving results directly analogous to those from analysis of variance models for continuous data. The usual Poisson dispersion test can be used not only to test for the absence of overall random effects but also to assess the adequacy of the model, and individual variance components can be tested with the usual F-test. To obtain reliable estimates, a large number of factor levels appears to be required.
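The mean square approach can be sketched for the simplest case. This is a minimal illustration for a balanced one-way random effects layout (the crossed/nested count-data setting of the abstract is more general): the between-group variance component is recovered from the usual between and within mean squares by the method of moments.

```python
import numpy as np

def one_way_components(data):
    """Method-of-moments variance component estimates for a balanced one-way layout."""
    data = np.asarray(data, float)        # shape (a, n): a groups, n observations each
    a, n = data.shape
    group_means = data.mean(axis=1)
    grand_mean = data.mean()
    ms_between = n * np.sum((group_means - grand_mean) ** 2) / (a - 1)
    ms_within = np.sum((data - group_means[:, None]) ** 2) / (a * (n - 1))
    sigma2_between = (ms_between - ms_within) / n   # E[MSB] = sigma2_e + n*sigma2_a
    return sigma2_between, ms_within
```

The F-test mentioned in the abstract compares ms_between to ms_within.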

7.
Many research fields increasingly involve analyzing data of a complex structure. Models investigating the dependence of a response on a predictor have moved beyond ordinary scalar-on-vector regression. We propose a regression model for a scalar response and a surface (bivariate function) predictor. The predictor has a random component, and the regression model falls within the framework of linear random effects models. We estimate the model parameters by maximizing the log-likelihood with the ECME (Expectation/Conditional Maximization Either) algorithm. We use the approach to analyze a data set in which the response is the neuroticism score and the predictor is the resting-state brain function image. In our simulations, the approach outperforms two alternatives: a functional principal component regression approach and a smooth scalar-on-image regression approach.

8.
Computation in the multinomial logit mixed effects model is costly, especially when the response variable has a large number of categories, since it involves high-dimensional integration and maximization. Tsodikov and Chefo (2008) developed a stable MLE approach for problems with independent observations, based on generalized self-consistency and the quasi-EM algorithm developed in Tsodikov (2003). In this paper, we apply the idea to clustered multinomial responses to simplify the maximization step. The method transforms the complex multinomial likelihood into a Poisson-type likelihood and hence allows the estimates to be obtained by iteratively solving a set of independent low-dimensional problems. The methodology is applied to real data and studied by simulation. While maximization is simplified, numerical integration remains the dominant challenge to computational efficiency.
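The multinomial-to-Poisson transformation can be shown in a toy form. This is an illustrative sketch of the general "Poissonization" idea only, not the paper's algorithm: maximizing independent Poisson likelihoods over unconstrained log-means and normalizing recovers the multinomial MLE of the category probabilities, so a joint multinomial fit can be replaced by simpler independent Poisson-type fits.

```python
import numpy as np
from scipy.optimize import minimize

def poisson_probs(counts):
    """Estimate multinomial category probabilities via independent Poisson fits."""
    counts = np.asarray(counts, float)

    def nll(log_mu):
        # Poisson negative log-likelihood, up to terms constant in log_mu
        return np.sum(np.exp(log_mu) - counts * log_mu)

    res = minimize(nll, np.zeros(len(counts)), method="BFGS")
    mu = np.exp(res.x)                 # fitted Poisson means (equal counts at optimum)
    return mu / mu.sum()               # normalizing gives the multinomial MLE
```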

9.
The omission of important variables is a well-known model specification issue in regression analysis and mixed linear models. The author considers longitudinal data models that are special cases of mixed linear models; in particular, linear models of repeated observations on a subject. Models of omitted variables have origins in both the econometrics and biostatistics literatures. The author describes regression coefficient estimators that are robust to certain types of omitted variables and that provide the basis for detecting their influence. New robust estimators and omitted variable tests are introduced and illustrated with a case study investigating the determinants of tax liability.

10.
Remote sensing is a helpful tool for crop monitoring and vegetation-growth estimation at a country or regional scale. However, satellite imagery generally involves a compromise between the temporal frequency of observations and their spatial resolution (i.e. pixel size). At high temporal resolution, we must work with kilometre-scale pixels, called mixed pixels, which represent the aggregated responses of multiple land covers. Disaggregation, or unmixing, is then necessary to downscale from the square kilometre to the local dynamics of each theme (crop, wood, meadow, etc.).

Assuming the land use is known, that is, the proportion of each theme within each mixed pixel, we propose to address the downscaling issue by generalizing varying-time regression models for longitudinal and/or functional data through the introduction of random individual effects. The estimators are built by expanding the mixed-pixel trajectories in B-spline functions and maximizing the log-likelihood with a backfitting-ECME algorithm. A BLUP formula then yields the 'best possible' estimates of the local temporal response of each crop from observed mixed-pixel trajectories. We show that this model has many potential applications in remote sensing; an interesting one consists of coupling high and low spatial resolution images to perform temporal interpolation of high spatial resolution images (20 m), increasing knowledge of particular crops at very precise locations.
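The B-spline expansion step can be sketched as follows. This shows only the basis construction with illustrative knot placement; the likelihood maximization and BLUP steps of the paper are not reproduced here.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, interior_knots, k=3):
    """Clamped degree-k B-spline design matrix over a time grid."""
    t = np.concatenate([[interior_knots[0]] * k, interior_knots,
                        [interior_knots[-1]] * k])   # repeat boundary knots
    n_basis = len(t) - k - 1
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        c = np.zeros(n_basis)
        c[j] = 1.0
        B[:, j] = BSpline(t, c, k)(x)   # j-th basis function evaluated at x
    return B
```

A pixel trajectory can then be expanded by regressing its observations on the columns of B (with random coefficients in the full model).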

The unmixing and temporal high-resolution interpolation approaches are illustrated on remote-sensing data acquired over south-western France during 2002.


11.
In this paper, we consider a non-penalty shrinkage estimation method for random effect models with autoregressive errors for longitudinal data, when there are many covariates and some of them may not be active for the response variable. In observational studies, subjects are followed over equally or unequally spaced visits to determine the continuous response and whether the response is associated with the risk factors/covariates. Measurements from the same subject are usually more similar to each other, and are thus correlated with each other but not with observations from other subjects. To analyse these data, we consider a linear model that contains both random effects across subjects and within-subject errors that follow a first-order autoregressive (AR(1)) structure. Treating the subject-specific random effect as a nuisance parameter, we use two competing models: one includes all the covariates, and the other restricts the coefficients on the basis of auxiliary information. We consider a non-penalty shrinkage estimation strategy that shrinks the unrestricted estimator in the direction of the restricted estimator, and discuss the asymptotic properties of the shrinkage estimators using the notions of asymptotic bias and risk. A Monte Carlo simulation study examines the performance of the shrinkage estimators relative to the unrestricted estimator when the shrinkage dimension exceeds two. We also numerically compare the performance of the shrinkage estimators to that of the LASSO estimator. A longitudinal CD4 cell count data set is used to illustrate the usefulness of the shrinkage and LASSO estimators.
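The shrinkage strategy can be sketched in its classical Stein-type form. This is a hedged sketch mirroring the strategy described, not the paper's exact estimator: the unrestricted estimate is moved toward the restricted one by a data-driven factor based on a distance statistic T (e.g. a Wald statistic for the restriction), where p2 is the shrinkage dimension, assumed to exceed two.

```python
import numpy as np

def shrinkage(beta_unres, beta_res, T, p2):
    """Stein-type shrinkage of the unrestricted estimate toward the restricted one."""
    factor = 1.0 - (p2 - 2.0) / T          # large T (restriction implausible) -> ~unrestricted
    return beta_res + factor * (beta_unres - beta_res)

def positive_part_shrinkage(beta_unres, beta_res, T, p2):
    """Positive-part version: never over-shrinks past the restricted target."""
    factor = max(0.0, 1.0 - (p2 - 2.0) / T)
    return beta_res + factor * (beta_unres - beta_res)
```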

12.
This article extends a random preventive maintenance scheme, called the repair alert model, to the case where environmental variables affect system lifetimes. It can be used for implementing age-dependent maintenance policies for engineering devices. In other words, consider a device that performs a job and is subject to failure at a random time X; the maintenance crew can avoid the failure by a possible replacement at some random time Z. The new model is flexible enough to include covariates with both fixed and random effects. The problem of estimating the parameters is investigated in detail. Here, the observations take the form of random signs censoring data (RSCD) with covariates; this article therefore generalizes the statistical inferences previously derived for RSCD without covariates. To do so, it is assumed that the system lifetime distribution belongs to the log-location-scale family of distributions. A real dataset is also analyzed on the basis of the results obtained.

13.
The authors develop a Markov model for the analysis of longitudinal categorical data which facilitates modelling both marginal and conditional structures. A likelihood formulation is employed for inference, so the resulting estimators enjoy optimal properties such as efficiency and consistency, and remain consistent when data are missing at random. Simulation studies demonstrate that the proposed method performs well under a variety of situations. Application to data from a smoking prevention study illustrates the utility of the model and the interpretation of covariate effects. The Canadian Journal of Statistics © 2009 Statistical Society of Canada

14.
Joint models for longitudinal and time-to-event data have recently received considerable attention in clinical and epidemiologic studies. Our interest is in modeling the relationship between event-time outcomes and internal time-dependent covariates. In practice, the longitudinal responses often show nonlinear, fluctuating curves. The main aim of this paper is therefore to use penalized splines with a truncated polynomial basis to parameterize the nonlinear longitudinal process; the linear mixed-effects model is then applied to the subject-specific curves and to control the smoothing. The association between the dropout process and the longitudinal outcomes is modeled through a proportional hazards model. Two types of baseline risk function are considered: a Gompertz distribution and a piecewise constant model. The resulting models, referred to as penalized spline joint models, extend the standard joint models. The expectation conditional maximization (ECM) algorithm is applied to estimate the parameters of the proposed models. To validate the proposed algorithm, extensive simulation studies were conducted, followed by a case study. In summary, the penalized spline joint models provide a new approach that improves on the existing standard joint models.
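The truncated-polynomial parameterization can be sketched in its simplest (linear) form. This is a minimal illustration of the basis named in the abstract, with illustrative knots and penalty; the full joint model with a survival component is not reproduced. The ridge-type penalty lam applies only to the truncated-term coefficients, matching their mixed-model role as random effects.

```python
import numpy as np

def pspline_fit(x, y, knots, lam):
    """Penalized spline fit with a truncated linear basis: 1, x, (x - k)_+."""
    X = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))   # penalize only truncated terms
    coef = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return X @ coef, coef
```

On a truly linear trajectory the truncated coefficients are shrunk to zero and the fit is exact.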

15.
Generalized linear models with random effects and/or serial dependence are commonly used to analyze longitudinal data. However, the computation and interpretation of marginal covariate effects can be difficult. This led Heagerty (1999, 2002) to propose models for longitudinal binary data in which a logistic regression is first used to explain the average marginal response. The model is then completed by introducing a conditional regression that allows for the longitudinal, within-subject dependence, either via random effects or by regressing on previous responses. In this paper, the authors extend the work of Heagerty to handle multivariate longitudinal binary response data using a triple of regression models that directly model the marginal mean response while taking into account dependence across time and across responses. Markov Chain Monte Carlo methods are used for inference. Data from the Iowa Youth and Families Project are used to illustrate the methods.

16.
We propose a joint-modeling likelihood-based approach for studies with repeated measures and informative right censoring. Joint modeling of longitudinal and survival data is a common approach but can yield biased estimates if the proportionality of hazards is violated. To overcome this issue, and given that the exact time of dropout is typically unknown, we modeled the censoring time as the number of follow-up visits and extended it to depend on selected covariates. Longitudinal trajectories for each subject were modeled to provide insight into disease progression and incorporated with the number of follow-up visits in a single likelihood function.

17.
The main difficulty in parametric analysis of longitudinal data lies in specifying the covariance structure. Several covariance structures, usually reflecting a single series of measurements collected over time, have been presented in the literature. However, there is little literature on covariance structures designed for repeated measures specified by more than one repeated factor. In this paper, a new, general method of modelling the covariance structure, based on the Kronecker product of underlying factor-specific covariance profiles, is presented. The method has an attractive interpretation in terms of independent factor-specific contributions to the overall within-subject covariance structure and can easily be adapted to standard software.
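The Kronecker construction is easy to illustrate. In this minimal sketch with illustrative numbers, an AR(1) profile over four time points is crossed with an unstructured 2x2 profile for a second repeated factor to give the overall within-subject covariance.

```python
import numpy as np

def ar1_cov(n, rho, var=1.0):
    """AR(1) covariance profile: var * rho^|i - j|."""
    idx = np.arange(n)
    return var * rho ** np.abs(idx[:, None] - idx[None, :])

time_cov = ar1_cov(4, 0.6)                  # temporal profile
factor_cov = np.array([[1.0, 0.3],
                       [0.3, 2.0]])         # second repeated factor, unstructured
overall = np.kron(factor_cov, time_cov)     # 8x8 within-subject covariance
```

The Kronecker product of two positive definite profiles is itself positive definite, so the overall structure stays a valid covariance matrix.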

18.
In calibrating a near-infrared (NIR) instrument, we regress chemical compositions of interest on their NIR spectra. This presents two immediate challenges: first, the number of variables exceeds the number of observations and, second, the multicollinearity between variables is extremely high. To deal with these challenges, prediction models that produce sparse solutions have recently been proposed. The term 'sparse' means that some model parameters are estimated as exactly zero while the others are estimated naturally away from zero; in effect, variable selection is embedded in the model, potentially achieving better prediction. Many studies have investigated sparse solutions for latent variable models, such as partial least squares and principal component regression, and for direct regression models such as ridge regression (RR). In the latter case, however, sparsity is mainly achieved by adding an L1-norm penalty to the objective function, as in lasso regression. In this study, we investigate new sparse alternatives to RR within a random effects model framework, considering Cauchy and mixture-of-normals distributions on the random effects. The results indicate that the mixture-of-normals model produces a sparse solution with good prediction and better interpretation. We illustrate the methods using NIR spectra datasets from milk and corn specimens.
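Plain RR, the baseline the study starts from, can be sketched for the p >> n collinear setting; the paper's sparse random-effects variants (Cauchy or mixture-of-normals priors on the coefficients) are not reproduced here.

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge regression: unique solution even when variables outnumber observations."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

Increasing lam shrinks the coefficient vector toward zero, but never to exactly zero, which is why ridge alone is not sparse.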

19.
We present a multivariate logistic regression model for the joint analysis of longitudinal multiple-source binary data. Such data arise when repeated binary measurements are obtained from two or more sources, with each source providing a measure of the same underlying variable. Since the number of responses per subject is relatively large, the empirical variance estimator performs poorly and cannot be relied on in this setting. Two methods for obtaining a parsimonious within-subject association structure are considered. An additional complication arises in estimation, since maximum likelihood estimation may not be feasible without making unrealistically strong assumptions about third- and higher-order moments. To circumvent this, we propose a generalized estimating equations approach. Finally, we present an analysis of multiple-informant data obtained longitudinally from a psychiatric interventional trial that motivated the model developed in the paper.

20.
In practice, data are often measured repeatedly on the same individual at several points in time. The main interest often lies in characterizing the way the response changes over time and the predictors of that change. Marginal, mixed, and transition models are frequently considered the main models for continuous longitudinal data analysis. These approaches were proposed primarily for balanced longitudinal designs; however, in clinical studies data are usually unbalanced, and some restrictions are necessary in order to use these models. This paper was motivated by a severely unbalanced data set of longitudinal height measurements in children of HIV-infected mothers, recorded at the university hospital of the Federal University of Minas Gerais, Brazil. The goal of this paper is to assess the application of continuous longitudinal models to the analysis of this unbalanced data set.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号