Similar documents
20 similar documents found (search time: 31 ms)
1.
Dementia patients exhibit considerable heterogeneity in individual trajectories of cognitive decline: some patients decline rapidly following diagnosis, while others decline more slowly or remain stable for several years. Dementia studies often collect longitudinal measures of multiple neuropsychological tests aimed at measuring patients' decline across a number of cognitive domains. We propose a multivariate finite mixture latent trajectory model to identify distinct longitudinal patterns of cognitive decline simultaneously in multiple cognitive domains, each of which is measured by multiple neuropsychological tests. The EM algorithm is used for parameter estimation, and posterior probabilities are used to predict latent class membership. We present results of a simulation study demonstrating adequate performance of the proposed approach and apply the model to the Uniform Data Set from the National Alzheimer's Coordinating Center to identify cognitive decline patterns among dementia patients.
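The EM-with-posterior-classification machinery this abstract describes can be illustrated in a deliberately simplified, univariate form (all data and parameter values below are hypothetical, not from the paper): fitting a two-component Gaussian mixture to per-patient annual decline slopes and classifying patients by their posterior probabilities.

```python
import numpy as np

# Hypothetical sketch: EM for a two-component Gaussian mixture over
# per-patient decline slopes -- a 1-D simplification of the
# multivariate latent trajectory model described in the abstract.
rng = np.random.default_rng(0)
slopes = np.concatenate([rng.normal(-3.0, 0.5, 200),   # rapid decliners
                         rng.normal(-0.5, 0.5, 300)])  # slow/stable

# Initialize mixture weights, class means, and a common variance.
pi, mu, var = np.array([0.5, 0.5]), np.array([-4.0, 0.0]), 1.0
for _ in range(100):
    # E-step: posterior probability of each latent class per patient.
    dens = np.exp(-(slopes[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and variance from responsibilities.
    nk = resp.sum(axis=0)
    pi = nk / len(slopes)
    mu = (resp * slopes[:, None]).sum(axis=0) / nk
    var = (resp * (slopes[:, None] - mu) ** 2).sum() / len(slopes)

labels = resp.argmax(axis=1)  # predicted latent class membership
```

The full model replaces the scalar slope with multivariate trajectories across cognitive domains, but the E/M alternation and the posterior classification rule are the same.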

2.
The topic of heterogeneity in the analysis of recurrent event data has received considerable attention in recent times. Frailty models are widely employed in such situations because they model the heterogeneity through a common random effect. In this paper, we introduce a shared frailty model for gap time distributions of recurrent events with multiple causes. The parameters of the model are estimated using the EM algorithm. An extensive simulation study is used to assess the performance of the method. Finally, we apply the proposed model to a real-life data set.

3.
This paper presents an alternative approach to modeling longitudinal data subject to a lower limit of detection (LOD) and unobserved population heterogeneity. Longitudinal viral load data in HIV/AIDS studies, for instance, show strong positive skewness and left-censoring. Normalizing such data with a logarithmic transformation is often unsuccessful. An alternative is a finite mixture model, which is suitable for analyzing data with skewed or multi-modal distributions. Little work has been done to take these features of longitudinal data into account simultaneously. This paper develops a growth mixture Tobit model that deals with both an LOD and heterogeneity among growth trajectories. The proposed methods are illustrated using simulated and real data from an AIDS clinical study.
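As a minimal, hypothetical illustration of how a detection limit enters the likelihood (not the paper's full growth mixture Tobit model), a left-censored normal log-likelihood can be written as follows; the data values are invented for illustration.

```python
import math

def tobit_loglik(y, lod, mu, sigma):
    """Log-likelihood of normal observations left-censored at a lower
    detection limit (LOD): censored points contribute the normal CDF
    mass below the LOD, observed points the usual density."""
    ll = 0.0
    for v in y:
        if v <= lod:  # below detection limit -> censored contribution
            z = (lod - mu) / sigma
            ll += math.log(0.5 * (1 + math.erf(z / math.sqrt(2))))
        else:         # fully observed contribution
            z = (v - mu) / sigma
            ll += -0.5 * z * z - math.log(sigma * math.sqrt(2 * math.pi))
    return ll

# Hypothetical log viral loads; 0.4 stands in for values at the LOD.
data = [1.2, 0.4, 0.4, 2.1, 3.0]
ll = tobit_loglik(data, lod=0.4, mu=1.5, sigma=1.0)
```

The growth mixture model of the paper additionally mixes several such trajectories over latent classes.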

4.
This paper derives models to analyse a count series of Cannabis offences from New South Wales, Australia. The data display substantial overdispersion (and underdispersion for a subset), trend movement, and population heterogeneity. To describe the trend dynamic in the data, the Poisson geometric process model is first adopted and is then extended to the generalized Poisson geometric process model to capture both over- and underdispersion. By further incorporating a mixture effect, the model accommodates population heterogeneity and enables classification of homogeneous units. The model is implemented using Markov chain Monte Carlo algorithms via the user-friendly WinBUGS software, and its performance is evaluated through a simulation study.
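The over-/underdispersion point can be made concrete with the generalized Poisson distribution underlying the extended model. The sketch below (illustrative parameter values only) uses the standard pmf, whose mean and variance are θ/(1−λ) and θ/(1−λ)³, so λ > 0 gives overdispersion and λ < 0 underdispersion.

```python
import math

def gpoisson_pmf(x, theta, lam):
    """Generalized Poisson pmf: lam > 0 yields overdispersion,
    lam < 0 underdispersion, and lam = 0 recovers the Poisson."""
    return (theta * (theta + x * lam) ** (x - 1)
            * math.exp(-theta - x * lam) / math.factorial(x))

theta, lam = 4.0, 0.3                 # hypothetical values
mean = theta / (1 - lam)              # ~5.71
variance = theta / (1 - lam) ** 3     # ~11.66 > mean: overdispersed
```

Setting lam = 0 collapses the pmf to the ordinary Poisson, which is a quick sanity check on any implementation.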

5.
Unknown or unobservable risk factors in survival analysis cause heterogeneity between individuals. Frailty models are used in survival analysis to account for this unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times, shared frailty models have been suggested. The most common shared frailty model is one in which the frailty acts multiplicatively on the hazard function. In this paper, we introduce the shared gamma frailty model and the inverse Gaussian frailty model with the reversed hazard rate. We propose a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in the models. We present a simulation study to compare the true parameter values with the estimates. We also apply the proposed models to the Australian twin data set, and a better model is suggested.
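For the multiplicative gamma frailty case, the population-level survival function has a well-known closed form, shown below. This is a generic identity for a mean-one gamma frailty acting on the hazard, not the paper's reversed-hazard-rate construction.

```python
import math

def marginal_survival(H, theta):
    """Population survival when a gamma frailty (mean 1, variance
    theta) acts multiplicatively on the hazard: integrating the
    conditional survival exp(-Z * H) over Z ~ Gamma gives
    (1 + theta * H) ** (-1 / theta); theta -> 0 recovers exp(-H)."""
    if theta == 0:
        return math.exp(-H)
    return (1 + theta * H) ** (-1 / theta)
```

The frailty variance theta thus directly controls how far the population survival departs from the no-heterogeneity curve exp(-H).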

6.
We propose a new way of constructing a dynamic decision-making model and a non-likelihood-based statistical approach for analyzing a new data type: longitudinal distribution data. Such longitudinal data record the trajectory of an animal's dynamic decision-making as it continuously exploits a relatively large but closed environment. The ensemble of all hosts contained in the environment is postulated to constitute a manifold of species-specific fitness gains at any time point, which traverses two distinct phases: abundance versus scarcity of pristine hosts. As such a manifold provides the relative potentials of all hosts available for selection, we construct a phase-dependent dynamic decision-making mechanism in the form of a self-adaptive conditional probabilistic model. We devise a Minimum Sum of Chi-squared approach to simultaneously evaluate individual cognitive capability within the two distinct phases and assess the validity of the manifold-based dynamic decision-making model on the longitudinal distribution data. We analyze three real data sets on the seed beetle Callosobruchus maculatus collected from three experimental designs with different degrees of resource heterogeneity. Our statistical inferences successfully resolve the behavioral-ecology question of whether an animal adaptively employs a dynamic decision-making mechanism in response to gradual environmental changes.

7.
The purpose of this paper is to highlight some classic issues in the measurement of change and to show how contemporary solutions can be used to deal with them. Five classic issues are raised here: (1) separating individual changes from group differences; (2) options for incomplete longitudinal data over time; (3) options for nonlinear changes over time; (4) measurement invariance in studies of change over time; and (5) new opportunities for modeling dynamic changes. For each issue we describe the problem and then review some contemporary solutions based on structural equation models (SEMs). We fit these SEMs to existing panel data on cognitive variables from the Health & Retirement Study (HRS). This is not intended as an overly technical treatment, so only a few basic equations are presented, examples are displayed graphically, and fuller references to the contemporary solutions are given throughout.

8.
Li Rui et al. Statistical Research (《统计研究》), 2014, 31(8): 52-58
Drawing on the analytical frameworks of Heckman and Raj, and using administrative data from the Golden Insurance Project (金保工程) in Suzhou Industrial Park, this paper builds a counterfactual micro-evaluation model of the pension system's reform from a fully funded to a partially funded scheme. Given the heterogeneity of insured workers, we adopt a quantile analysis, replacing the "average replacement-rate increase" with the "distribution of replacement-rate increases" to evaluate individual welfare gains and losses from the reform. We then introduce mean regression and quantile regression to build an economic structural model relating the "replacement-rate increase" to factors such as "wages", and estimate Raj's full-information evaluation index to assess the income-redistribution effect of the reform. Methodologically, the paper integrates actuarial models with economic structural models, extending traditional counterfactual policy evaluation; in practice, it provides the government with a paradigm for making full use of Golden Insurance Project administrative data to support social-security decision making.
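The substitution of a "distribution of replacement-rate increases" for the "average replacement-rate increase" can be illustrated with simulated, entirely hypothetical numbers (the paper itself uses quantile regression on the Golden Insurance Project records): a positive average can coexist with a substantial share of welfare losers that only the distribution reveals.

```python
import numpy as np

# Hypothetical replacement-rate increases (percentage points) for a
# heterogeneous population of insured workers: most gain, some lose.
rng = np.random.default_rng(1)
increase = np.concatenate([rng.normal(5.0, 1.0, 800),    # gainers
                           rng.normal(-2.0, 1.0, 200)])  # losers

avg = increase.mean()                                      # the average increase
deciles = np.quantile(increase, np.arange(0.1, 1.0, 0.1)) # its distribution
share_losing = (increase < 0).mean()                       # losers the mean hides
```

Here the mean is clearly positive while the bottom decile is negative, which is exactly the distributional information the quantile analysis is designed to expose.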

9.
The confirmatory factor analysis (CFA) model is a useful multivariate statistical tool for interpreting relationships between latent variables and manifest variables. Statistical results based on a single CFA are often seriously distorted when the data set exhibits heterogeneity. To address heterogeneity in the multivariate responses, we propose a Bayesian semiparametric modeling approach for CFA. The approach relies on a prior over the space of mixing distributions with finitely many components. A blocked Gibbs sampler is implemented for the posterior analysis. Results from a simulation study and a real data set are presented to illustrate the methodology.

10.
Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times (e.g., matched-pairs experiments, twin or family data), shared frailty models were suggested and are now frequently used to model such heterogeneity. The most common shared frailty model is one in which the hazard function is the product of a random factor (the frailty) and a baseline hazard function common to all individuals, under certain assumptions on the baseline distribution and the distribution of the frailty. In this paper, we introduce shared gamma frailty models with the reversed hazard rate. We propose a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in the model. We present a simulation study to compare the true parameter values with the estimates, and we apply the proposed model to the Australian twin data set.

11.
Standard econometric methods can overlook individual heterogeneity in empirical work, generating inconsistent parameter estimates in panel data models. We propose the use of methods that allow researchers to easily identify, quantify, and address estimation issues arising from individual slope heterogeneity. We first characterize the bias in the standard fixed effects estimator when the true econometric model allows for heterogeneous slope coefficients. We then introduce a new test to check whether the fixed effects estimation is subject to heterogeneity bias. The procedure tests the population moment conditions required for fixed effects to consistently estimate the relevant parameters in the model. We establish the limiting distribution of the test and show that it is very simple to implement in practice. Examining firm investment models to showcase our approach, we show that heterogeneity bias-robust methods identify cash flow as a more important driver of investment than previously reported. Our study demonstrates analytically, via simulations, and empirically the importance of carefully accounting for individual specific slope heterogeneity in drawing conclusions about economic behavior.
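The heterogeneity bias being tested can be reproduced in a small simulation (a hypothetical data-generating process, not the paper's test statistic): fixed effects pools unit slopes with weights equal to each unit's within-variation of the regressor, so when slopes covary with that variation the pooled estimate is biased for the average slope, which a mean-group estimator recovers.

```python
import numpy as np

# Hypothetical panel: slope b_i varies across units, and the
# within-variation of x is made to increase with b_i.
rng = np.random.default_rng(2)
N, T = 500, 10
b = rng.uniform(0.0, 2.0, N)              # heterogeneous unit slopes, E[b]=1
sx = 0.5 + b                              # x-variation correlated with b_i
fe_num = fe_den = 0.0
mg = []
for i in range(N):
    x = rng.normal(0.0, sx[i], T)
    y = 1.0 + b[i] * x + rng.normal(0.0, 0.1, T)
    xd, yd = x - x.mean(), y - y.mean()   # within (demeaning) transformation
    fe_num += xd @ yd
    fe_den += xd @ xd
    mg.append((xd @ yd) / (xd @ xd))      # unit-by-unit OLS slope
fe = fe_num / fe_den                      # pooled fixed effects estimate
mean_group = np.mean(mg)                  # consistent for E[b_i]
```

With this design the fixed effects estimate sits well above the true average slope of 1, while the mean-group estimate is close to it, which is the kind of discrepancy the paper's test is built to detect.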

12.
We investigate the effect of unobserved heterogeneity in the context of the linear transformation model for censored survival data in the clinical trials setting. The unobserved heterogeneity is represented by a frailty term, with unknown distribution, in the linear transformation model. The bias of the estimate under the assumption of no unobserved heterogeneity, when it truly is present, is obtained. We also derive the asymptotic relative efficiency of the estimate of treatment effect under the incorrect assumption of no unobserved heterogeneity. Additionally, we investigate the loss of power for clinical trials designed assuming the model without frailty when, in fact, the model with frailty is true. Numerical studies under a proportional odds model show that the loss of efficiency and the loss of power can be substantial when the heterogeneity, as embodied by a frailty, is ignored. An erratum to this article can be found online.

13.
It is well known that heterogeneity between studies in a meta-analysis can be either caused by diversity, for example, variations in populations and interventions, or caused by bias, that is, variations in design quality and conduct of the studies. Heterogeneity that is due to bias is difficult to deal with. On the other hand, heterogeneity that is due to diversity is taken into account by a standard random-effects model. However, such a model generally assumes that heterogeneity does not vary according to study-level variables such as the size of the studies in the meta-analysis and the type of study design used. This paper develops models that allow for this type of variation in heterogeneity and discusses the properties of the resulting methods. The models are fitted using the maximum-likelihood method and by modifying the Paule–Mandel method. Furthermore, a real-world argument is given to support the assumption that the inter-study variance is inversely proportional to study size. Under this assumption, the corresponding random-effects method is shown to be connected with standard fixed-effect meta-analysis in a way that may well appeal to many clinicians.  The models and methods that are proposed are applied to data from two large systematic reviews.
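The asserted connection can be checked numerically. If the inter-study variance is proportional to 1/n_i and the within-study variance is sigma2/n_i, the random-effects inverse-variance weights become proportional to n_i, so the pooled estimate coincides with the standard fixed-effect one. All numbers below are hypothetical.

```python
# Hedged sketch of inverse-variance pooling under the assumption,
# motivated in the abstract, that inter-study variance ~ 1/n_i.
effects = [0.30, 0.10, 0.25, 0.18]       # hypothetical study effect sizes
n = [50, 200, 120, 400]                  # study sizes
sigma2, kappa = 4.0, 1.0                 # within/between variance scales

# Random-effects weights w_i = 1/(v_i + tau2_i) with v_i = sigma2/n_i
# and tau2_i = kappa/n_i: both terms scale as 1/n_i.
w = [1.0 / (sigma2 / ni + kappa / ni) for ni in n]
pooled_re = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)

# Standard fixed-effect weights are proportional to n_i as well.
w_fe = [ni / sigma2 for ni in n]
pooled_fe = sum(wi * ei for wi, ei in zip(w_fe, effects)) / sum(w_fe)
```

Because both weight vectors are proportional to the study sizes, the two pooled estimates agree, which is the clinician-friendly connection the abstract describes.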

14.
Consider repeated event-count data from a sequence of exposures, during each of which a subject can experience some number of events, which is reported at ‘visits’ following each exposure. Within-subject heterogeneity not accounted for by visit-varying covariates is called ‘visit-level’ heterogeneity. Using generalized linear mixed models with log link for longitudinal Poisson regression, I model visit-level heterogeneity by cumulatively adding ‘disturbances’ to the random intercept of each subject over visits to create a ‘disturbed-random-intercept’ model. I also create a ‘disturbed-random-slope’ model, where the slope is over visits, and both intercept and slope are random but only the slope is disturbed. Simulation studies compare fixed-effect estimation for these models in data with 15 visits, large visit-level heterogeneity, and large multiplicative overdispersion. These studies show statistically significant superiority of the disturbed-random-intercept model. Examples with epidemiological data compare results of this model with those from other published models.
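The disturbed-random-intercept idea can be sketched generatively (hypothetical parameter values, not the paper's fitted model): each subject's random intercept accumulates an independent disturbance at every visit, i.e. it follows a random walk, which injects visit-level heterogeneity into log-link Poisson counts.

```python
import numpy as np

# Hypothetical generative sketch of a disturbed random intercept:
# cumulative visit disturbances on top of a subject random intercept.
rng = np.random.default_rng(3)
subjects, visits = 1000, 15
beta0 = 1.0                                   # fixed intercept (log scale)
b = rng.normal(0.0, 0.3, subjects)            # subject random intercepts
d = rng.normal(0.0, 0.2, (subjects, visits))  # per-visit disturbances
eta = beta0 + b[:, None] + np.cumsum(d, axis=1)  # disturbed intercept path
counts = rng.poisson(np.exp(eta))             # longitudinal Poisson counts
```

Because the disturbance variance accumulates over visits, the marginal overdispersion of the counts grows from the first visit to the last, which is the kind of visit-level heterogeneity the model is built to absorb.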

15.
In data sets that consist of a large number of clusters, a frequent goal of the analysis is to detect whether heterogeneity exists between clusters. A standard approach is to model the heterogeneity in the framework of a mixture model and to derive a score test to detect heterogeneity. The likelihood function, from which the score test derives, depends heavily on the assumed density of the response variable. This paper examines the robustness of the heterogeneity test to misspecification of this density function when there is homogeneity and shows that the test size can be far different from the nominal level.
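A concrete instance of such a score test (an illustration, not the paper's general derivation) is the score statistic for extra-Poisson variation in an intercept-only model; it is asymptotically N(0,1) under the homogeneous Poisson null, and, as the abstract warns, its size hinges on that assumed density.

```python
import numpy as np

# Hedged example: score statistic for extra-Poisson variation
# (intercept-only model), asymptotically N(0,1) under homogeneity.
def overdispersion_score(y):
    y = np.asarray(y, dtype=float)
    mu = y.mean()
    return ((y - mu) ** 2 - y).sum() / np.sqrt(2 * len(y) * mu ** 2)

rng = np.random.default_rng(4)
t_homog = overdispersion_score(rng.poisson(3.0, 2000))  # homogeneous null
t_mixed = overdispersion_score(                         # gamma-mixed Poisson
    rng.poisson(rng.gamma(2.0, 1.5, 2000)))
```

Under homogeneity the statistic behaves like a standard normal draw, while latent gamma heterogeneity pushes it far into the rejection region.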

16.
Summary.  A two-level regression mixture model is discussed and contrasted with the conventional two-level regression model. Simulated and real data shed light on the modelling alternatives. The real data analyses investigate gender differences in mathematics achievement from the US National Education Longitudinal Survey. The two-level regression mixture analyses show that unobserved heterogeneity should not be presupposed to exist only at level 2 at the expense of level 1. Both the simulated and the real data analyses show that level 1 heterogeneity in the form of latent classes can be mistaken for level 2 heterogeneity in the form of the random effects that are used in conventional two-level regression analysis. Because of this, mixture models have an important role to play in multilevel regression analyses. Mixture models allow heterogeneity to be investigated more fully, more correctly attributing different portions of the heterogeneity to the different levels.

17.

Time-to-event data often violate the proportional hazards assumption inherent in the popular Cox regression model. Such violations are especially common in biological and medical data, where latent heterogeneity due to unmeasured covariates or time-varying effects is common. A variety of parametric survival models have been proposed in the literature that make more appropriate assumptions on the hazard function, at least for certain applications. One such model is derived from the First Hitting Time (FHT) paradigm, which assumes that a subject's event time is determined by a latent stochastic process reaching a threshold value. Several random effects specifications of the FHT model have also been proposed which allow for better modeling of data with unmeasured covariates. While often appropriate, these methods can display limited flexibility owing to their inability to model a wide range of heterogeneities. To address this issue, we propose a Bayesian model which loosens the assumptions on the mixing distribution inherent in the random effects FHT models currently in use. We demonstrate via a simulation study that the proposed model greatly improves both survival and parameter estimation in the presence of latent heterogeneity. We also apply the proposed methodology to data from a toxicology/carcinogenicity study which exhibits nonproportional hazards and contrast the results with both the Cox model and two popular FHT models.
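Under the basic FHT construction, in which a latent Wiener process with drift first reaches a threshold, the event time follows an inverse Gaussian distribution. A sketch of the resulting survival function is below; this is the standard inverse Gaussian formula (with illustrative parameter values in the checks), not the paper's Bayesian random-effects extension.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def fht_survival(t, mu, lam):
    """Survival function under a basic first-hitting-time model:
    the event time is inverse Gaussian with mean mu and shape lam
    (the first-passage law of a drifting Wiener process)."""
    a = math.sqrt(lam / t)
    cdf = (norm_cdf(a * (t / mu - 1))
           + math.exp(2 * lam / mu) * norm_cdf(-a * (t / mu + 1)))
    return 1.0 - cdf
```

Unlike a proportional-hazards specification, this survival curve arises directly from the latent-process threshold story, which is why FHT models can accommodate nonproportional hazards naturally.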


18.
In this research, we provide a new method to estimate discrete choice models with unobserved heterogeneity that can be used with either cross-sectional or panel data. The method imposes nonparametric assumptions on the systematic subutility functions and on the distributions of the unobservable random vectors and the heterogeneity parameter. The estimators are computationally feasible and strongly consistent. We provide an empirical application of the estimator to a model of store format choice. The key insights from the empirical application are: (1) consumer response to cost and distance contains interactions and nonlinear effects, which implies that a model without these effects tends to bias the estimated elasticities and heterogeneity distribution, and (2) the increase in likelihood for adding nonlinearities is similar to the increase in likelihood for adding heterogeneity, and this increase persists as heterogeneity is included in the model.

19.
With the aim of identifying the age of onset of change in the rate of cognitive decline while accounting for the missing observations, we considered a selection modelling framework. A random change point model was fitted to data from a population-based longitudinal study of ageing (the Cambridge City over 75 Cohort Study) to model the longitudinal process. A missing at random mechanism was modelled using logistic regression. Random effects such as initial cognitive status, rate of decline before and after the change point, and the age of onset of change in rate of decline were estimated after adjustment for risk factors for cognitive decline. Among other possible predictors, the last observed cognitive score was used to adjust the probability of death and dropout. Individuals who experienced less variability in their cognitive scores experienced a change in their rate of decline at older ages than individuals whose cognitive scores varied more.
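The random change point structure amounts to a broken-stick mean trajectory: one slope before a subject-specific change age and a different slope after it. A hypothetical sketch (all parameter values invented for illustration, ages centred at 75) is:

```python
import numpy as np

# Hypothetical broken-stick trajectory: decline at slope_pre before
# the change age and at slope_post after it, continuous at the knot.
def trajectory(age, intercept, slope_pre, slope_post, change_age):
    """Expected cognitive score at a given age under a two-phase
    (broken-stick) model with a change point at change_age."""
    return (intercept + slope_pre * (age - 75)
            + (slope_post - slope_pre) * np.maximum(age - change_age, 0.0))

ages = np.arange(75, 91)
scores = trajectory(ages, 25.0, -0.2, -1.5, 84.0)  # illustrative values
```

In the paper's model the intercept, both slopes, and the change age are all random effects, so each subject has its own knot location and steepness.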

20.
In this article we consider nonparametric estimation of a structural equation model under full additivity constraint. We propose estimators for both the conditional mean and gradient which are consistent, asymptotically normal, oracle efficient, and free from the curse of dimensionality. Monte Carlo simulations support the asymptotic developments. We employ a partially linear extension of our model to study the relationship between child care and cognitive outcomes. Some of our (average) results are consistent with the literature (e.g., negative returns to child care when mothers have higher levels of education). However, as our estimators allow for heterogeneity both across and within groups, we are able to contradict many findings in the literature (e.g., we do not find any significant differences in returns between boys and girls or for formal versus informal child care). Supplementary materials for this article are available online.
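The additivity constraint m(x1, x2) = f1(x1) + f2(x2) is classically estimated by backfitting. The sketch below uses a simple kernel smoother on simulated data; the paper's estimators are oracle efficient and considerably more refined, so this only illustrates the constraint itself.

```python
import numpy as np

# Hedged sketch: backfitting an additive model with a
# Nadaraya-Watson smoother on hypothetical simulated data.
rng = np.random.default_rng(5)
n = 500
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
y = np.sin(3 * x1) + x2 ** 2 + rng.normal(0.0, 0.1, n)

def smooth(x, r, h=0.1):
    """Local (Gaussian-kernel) average of residuals r at each x."""
    w = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * h * h))
    return (w @ r) / w.sum(axis=1)

alpha, f1, f2 = y.mean(), np.zeros(n), np.zeros(n)
for _ in range(10):                        # backfitting iterations:
    f1 = smooth(x1, y - alpha - f2); f1 -= f1.mean()  # update f1, recenter
    f2 = smooth(x2, y - alpha - f1); f2 -= f2.mean()  # update f2, recenter
```

Each component is fitted to the partial residuals of the other, and recentring pins down the decomposition, which is the core mechanism that additive-model estimators refine.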
