Similar Literature
20 similar records retrieved.
1.
A common assumption in fitting panel data models is normality of the stochastic subject effects. This can be extremely restrictive, obscuring most potential features of the true distributions. The objective of this article is to propose a modeling strategy, from a semi-parametric Bayesian perspective, to specify a flexible distribution for the random effects in dynamic panel data models. This is addressed here by assuming a Dirichlet process mixture model, which introduces a Dirichlet process prior for the random-effects distribution. We address the role of initial conditions in dynamic processes, emphasizing the joint modeling of start-up and subsequent responses. We adopt Gibbs sampling techniques to approximate posterior estimates. These important topics are illustrated by a simulation study and also by testing hypothetical models in two empirical contexts drawn from economic studies. We use modified versions of information criteria to compare the fitted models.
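As a rough illustration of the Dirichlet process prior on the random-effects distribution (a sketch only, not the authors' Gibbs sampler), the snippet below draws subject effects from a truncated stick-breaking representation of a DP with a normal base measure; the truncation level K and all other names are illustrative assumptions.

```python
# Sketch: draw subject random effects from G ~ DP(alpha, N(0, base_sd^2))
# via a truncated stick-breaking construction. Not the authors' sampler.
import numpy as np

rng = np.random.default_rng(0)

def dp_random_effects(n_subjects, alpha=1.0, K=50, base_sd=1.0):
    # Stick-breaking weights: v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k}(1 - v_j)
    v = rng.beta(1.0, alpha, size=K)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    atoms = rng.normal(0.0, base_sd, size=K)          # atom locations from the base measure
    labels = rng.choice(K, size=n_subjects, p=w / w.sum())
    return atoms[labels]

b = dp_random_effects(n_subjects=200, alpha=2.0)
print(b[:5])  # discrete-support draws -> clusters of identical random effects
```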

2.
We present a method for fitting parametric probability density models using an integrated square error criterion on a continuum of weighted Lebesgue spaces formed by ultraspherical polynomials. This approach is inherently suitable for creating mixture model representations of complex distributions and allows fully autonomous cluster analysis of high-dimensional datasets. The method is also suitable for extremely large data sets, allowing post facto model selection and analysis even in the absence of the original data. Furthermore, the fitting procedure only requires the parametric model to be pointwise evaluable, making it trivial to fit user-defined models through a generic algorithm.
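A rough, generic sketch of the integrated-square-error idea follows, assuming we simply match a parametric normal density to a kernel density estimate on a grid; the ultraspherical-polynomial expansion and weighted Lebesgue spaces of the paper are not reproduced, and all names are illustrative.

```python
# Generic ISE fit: choose parameters of a normal density to minimise the
# L2 distance to a kernel density estimate. Illustrates the criterion only.
import numpy as np
from scipy.stats import norm, gaussian_kde
from scipy.optimize import minimize

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.5, size=500)

kde = gaussian_kde(data)
grid = np.linspace(data.min() - 3, data.max() + 3, 400)
dx = grid[1] - grid[0]
f_hat = kde(grid)

def ise(params):
    mu, log_sigma = params
    f_theta = norm.pdf(grid, loc=mu, scale=np.exp(log_sigma))
    return np.sum((f_theta - f_hat) ** 2) * dx   # integrated squared error

res = minimize(ise, x0=[0.0, 0.0])
print(res.x[0], np.exp(res.x[1]))   # fitted mean and standard deviation
```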

3.
Consider panel data modelled by a linear random intercept model that includes a time-varying covariate. Suppose that our aim is to construct a confidence interval for the slope parameter. Commonly, a Hausman pretest is used to decide whether this confidence interval is constructed using the random effects model or the fixed effects model. This post-model-selection confidence interval has the attractive features that it (a) is relatively short when the random effects model is correct and (b) reduces to the confidence interval based on the fixed effects model when the data and the random effects model are highly discordant. However, this confidence interval has the drawbacks that (i) its endpoints are discontinuous functions of the data and (ii) its minimum coverage can be far below its nominal coverage probability. We construct a new confidence interval that possesses these attractive features, but does not suffer from these drawbacks. This new confidence interval provides an intermediate between the post-model-selection confidence interval and the confidence interval obtained by always using the fixed effects model. The endpoints of the new confidence interval are smooth functions of the Hausman test statistic, whereas the endpoints of the post-model-selection confidence interval are discontinuous functions of this statistic.
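For concreteness, here is a toy sketch of the pretest procedure criticized above (not the authors' new smooth-transition interval): compute the Hausman statistic for a single slope from fixed-effects and random-effects estimates and report the interval from whichever model the pretest selects. The numerical inputs are placeholders that would come from fitted panel models.

```python
# Toy pretest confidence interval based on the Hausman statistic.
import numpy as np
from scipy.stats import chi2, norm

def hausman_pretest_ci(b_fe, var_fe, b_re, var_re, level=0.95, alpha_test=0.05):
    # Hausman statistic for a single slope: (b_FE - b_RE)^2 / (Var_FE - Var_RE)
    H = (b_fe - b_re) ** 2 / (var_fe - var_re)
    use_fe = H > chi2.ppf(1 - alpha_test, df=1)       # reject RE -> use FE interval
    b, v = (b_fe, var_fe) if use_fe else (b_re, var_re)
    z = norm.ppf(0.5 + level / 2)
    return (b - z * np.sqrt(v), b + z * np.sqrt(v)), use_fe

ci, used_fe = hausman_pretest_ci(b_fe=1.10, var_fe=0.04, b_re=0.95, var_re=0.01)
print(ci, "FE model" if used_fe else "RE model")
# The interval's endpoints jump as H crosses the critical value -- the
# discontinuity the paper's smooth construction avoids.
```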

4.
We present a new class of models to fit longitudinal data, obtained with a suitable modification of the classical linear mixed-effects model. For each sample unit, the joint distribution of the random effect and the random error is a finite mixture of scale mixtures of multivariate skew-normal distributions. This extension allows us to model the data in a more flexible way, taking into account skewness, multimodality and discrepant observations at the same time. The scale mixtures of skew-normal form an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, skew-Student-t, skew-slash and the skew-contaminated normal distributions as special cases, being a flexible alternative to the use of the corresponding symmetric distributions in this type of models. A simple efficient MCMC Gibbs-type algorithm for posterior Bayesian inference is employed. In order to illustrate the usefulness of the proposed methodology, two artificial and two real data sets are analyzed.
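As a minimal sketch of the skewed building block, the snippet below assumes the standard stochastic representation of the skew-normal, Z = δ|U0| + √(1−δ²)U1 with δ = α/√(1+α²); the scale-mixture extension and the full mixed-effects model of the paper are not reproduced.

```python
# Sample skew-normal variates via the standard stochastic representation.
import numpy as np

rng = np.random.default_rng(2)

def rskewnormal(n, loc=0.0, scale=1.0, shape=3.0):
    delta = shape / np.sqrt(1.0 + shape ** 2)
    u0 = np.abs(rng.standard_normal(n))
    u1 = rng.standard_normal(n)
    z = delta * u0 + np.sqrt(1.0 - delta ** 2) * u1
    return loc + scale * z

x = rskewnormal(10_000, shape=5.0)
print(x.mean(), x.std())   # positively skewed draws
```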

5.
Modelling count data with overdispersion and spatial effects
In this paper we consider regression models for count data allowing for overdispersion in a Bayesian framework. We account for unobserved heterogeneity in the data in two ways. On the one hand, we consider more flexible models than a common Poisson model allowing for overdispersion in different ways. In particular, the negative binomial and the generalized Poisson (GP) distribution are addressed, where overdispersion is modelled by an additional model parameter. Further, zero-inflated models in which overdispersion is assumed to be caused by an excessive number of zeros are discussed. On the other hand, extra spatial variability in the data is taken into account by adding correlated spatial random effects to the models. This approach allows for an underlying spatial dependency structure which is modelled using a conditional autoregressive prior based on Pettitt et al. in Stat Comput 12(4):353–367 (2002). In an application the presented models are used to analyse the number of invasive meningococcal disease cases in Germany in the year 2004. Models are compared according to the deviance information criterion (DIC) suggested by Spiegelhalter et al. in J R Stat Soc B 64(4):583–640 (2002) and using proper scoring rules, see for example Gneiting and Raftery in Technical Report no. 463, University of Washington (2004). We observe a rather high degree of overdispersion in the data, which is captured best by the GP model when spatial effects are neglected. While the addition of spatial effects to the models allowing for overdispersion gives no or only little improvement, spatial Poisson models with spatially correlated or uncorrelated random effects are to be preferred over all other models according to the considered criteria.
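A small non-Bayesian sketch of the overdispersion comparison (an assumption about tooling, not the paper's spatial DIC analysis): fit Poisson and negative binomial regressions to over-dispersed counts with statsmodels and compare them by AIC as a simple stand-in for the model comparison above.

```python
# Compare Poisson and negative binomial GLM fits on over-dispersed counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))        # mean mu, variance mu + mu^2/2

X = sm.add_constant(x)
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print("Poisson AIC:", pois.aic, "  NegBin AIC:", negb.aic)
# Note: the NB dispersion parameter alpha is held fixed here rather than estimated.
```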

6.
Tweedie regression models (TRMs) provide a flexible family of distributions to deal with non-negative right-skewed data and can handle continuous data with probability mass at zero. Estimation and inference for TRMs based on the maximum likelihood (ML) method are challenged by the presence of an infinite sum in the probability function and by non-trivial restrictions on the power parameter space. In this paper, we propose two approaches for fitting TRMs, namely quasi-likelihood (QML) and pseudo-likelihood (PML). We discuss their asymptotic properties and perform simulation studies to compare our methods with the ML method. We show that the QML method provides asymptotically efficient estimation of the regression parameters. Simulation studies show that the QML and PML approaches produce estimates, standard errors and coverage rates similar to the ML method. Furthermore, the second-moment assumptions required by the QML and PML methods enable us to extend TRMs to the class of quasi-TRMs in Wedderburn's style. This eliminates the non-trivial restriction on the power parameter space and thus provides a flexible regression model to deal with continuous data. We provide an R implementation and illustrate the application of TRMs using three data sets.
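A minimal sketch of a quasi-likelihood-style Tweedie fit using statsmodels (an assumption about tooling; the paper itself provides an R implementation), with the power parameter held fixed rather than estimated:

```python
# Tweedie GLM on semicontinuous data with exact zeros; Var(Y) = phi * mu^p.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
x = rng.normal(size=n)
mu = np.exp(0.2 + 0.6 * x)
# Semicontinuous outcome with a point mass at zero, mimicking Tweedie-type data
y = np.where(rng.random(n) < 0.3, 0.0, rng.gamma(shape=2.0, scale=mu / 2.0))

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Tweedie(var_power=1.5)).fit()
print(fit.params)   # regression coefficients on the log-mean scale
print(fit.scale)    # dispersion phi estimated from Pearson residuals
```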

7.
In this paper, we discuss fully Bayesian quantile inference using a Markov chain Monte Carlo (MCMC) method for longitudinal data models with random effects. Under the assumption that the error term follows an asymmetric Laplace distribution, we establish a hierarchical Bayesian model and obtain the posterior distribution of the unknown parameters at the τ-th quantile level. We overcome the current computational limitations using two approaches. One is the general MCMC technique with a Metropolis–Hastings algorithm and the other is Gibbs sampling from the full conditional distributions. These two methods outperform the traditional frequentist methods under a wide array of simulated data models and are flexible enough to easily accommodate changes in the number of random effects and in their assumed distribution. We apply the Gibbs sampling method to analyse mouse growth data, and some conclusions differing from those in the literature are obtained.
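A frequentist sketch of the core connection (not the paper's MCMC machinery): maximizing the asymmetric Laplace working likelihood at level τ is equivalent to minimizing the check loss ρ_τ(u) = u(τ − I(u < 0)). Random effects are omitted and all names are illustrative.

```python
# Estimate a conditional quantile by minimizing the check loss.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)
X = np.column_stack([np.ones(n), x])

def check_loss(beta, tau):
    u = y - X @ beta
    return np.sum(u * (tau - (u < 0)))   # rho_tau(u) = u * (tau - I(u < 0))

tau = 0.75
res = minimize(check_loss, x0=np.zeros(2), args=(tau,), method="Nelder-Mead")
print(res.x)   # intercept and slope of the 0.75 conditional quantile
```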

8.
Since the pioneering work by Koenker and Bassett [27], quantile regression models and their applications have become increasingly popular and important for research in many areas. In this paper, a random effects ordinal quantile regression model is proposed for the analysis of longitudinal data with an ordinal outcome of interest. An efficient Gibbs sampling algorithm is derived for fitting the model to the data, based on a location-scale mixture representation of the skewed double-exponential distribution. The proposed approach is illustrated using simulated data and a real data example. This is the first work to discuss quantile regression for the analysis of longitudinal data with an ordinal outcome.

9.
We introduce a flexible marginal modelling approach for statistical inference for clustered and longitudinal data under minimal assumptions. This estimated estimating equations approach is semiparametric and the proposed models are fitted by quasi-likelihood regression, where the unknown marginal means are a function of the fixed effects linear predictor with unknown smooth link, and variance–covariance is an unknown smooth function of the marginal means. We propose to estimate the nonparametric link and variance–covariance functions via smoothing methods, whereas the regression parameters are obtained via the estimated estimating equations. These are score equations that contain nonparametric function estimates. The proposed estimated estimating equations approach is motivated by its flexibility and easy implementation. Moreover, if data follow a generalized linear mixed model, with either a specified or an unspecified distribution of random effects and link function, the model proposed emerges as the corresponding marginal (population-average) version and can be used to obtain inference for the fixed effects in the underlying generalized linear mixed model, without the need to specify any other components of this generalized linear mixed model. Among marginal models, the estimated estimating equations approach provides a flexible alternative to modelling with generalized estimating equations. Applications of estimated estimating equations include diagnostics and link selection. The asymptotic distribution of the proposed estimators for the model parameters is derived, enabling statistical inference. Practical illustrations include Poisson modelling of repeated epileptic seizure counts and simulations for clustered binomial responses.

10.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3278–3300
Under complex survey sampling, in particular when selection probabilities depend on the response variable (informative sampling), the sample and population distributions differ, possibly resulting in selection bias. This article addresses this problem by fitting two statistical models, namely the variance components model (a two-stage model) and the fixed effects model (a single-stage model) for one-way analysis of variance, under a complex survey design, for example two-stage sampling, stratification, and unequal probabilities of selection. The classical theory underlying the use of the two-stage model involves simple random sampling at each of the two stages. In such cases the model holding for the sample, after sample selection, is the same as the model for the population before sample selection. When the selection probabilities are related to the values of the response variable, standard estimates of the population model parameters may be severely biased, possibly leading to false inference. The idea behind the approach is to extract the model holding for the sample data as a function of the population model and of the first-order inclusion probabilities, and then to fit the sample model using analysis of variance, maximum likelihood, and pseudo maximum likelihood methods of estimation. The main feature of the proposed techniques relates to their behaviour in terms of the informativeness parameter. We also show that using the population model while ignoring the informative sampling design yields biased model fitting.
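A toy numerical illustration of the pseudo-likelihood idea (an assumption, not the article's analysis-of-variance models): under an informative design the unweighted sample mean is biased, while weighting by the inverse first-order inclusion probabilities corrects it.

```python
# Informative sampling: naive vs. inverse-probability-weighted estimation.
import numpy as np

rng = np.random.default_rng(6)
N = 100_000
y = rng.normal(10.0, 2.0, size=N)

# Informative design: larger responses are more likely to be selected
pi = np.clip(0.01 + 0.004 * (y - y.min()), 0.0, 1.0)   # first-order inclusion probabilities
sampled = rng.random(N) < pi
ys, pis = y[sampled], pi[sampled]

print("population mean        :", y.mean())
print("naive sample mean      :", ys.mean())                           # biased upward
print("inverse-prob. weighted :", np.sum(ys / pis) / np.sum(1.0 / pis))  # approximately unbiased
```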

11.
Double hierarchical generalized linear models (with discussion)
We propose a class of double hierarchical generalized linear models in which random effects can be specified for both the mean and dispersion. Heteroscedasticity between clusters can be modelled by introducing random effects in the dispersion model, as is heterogeneity between clusters in the mean model. This class will, among other things, enable models with heavy-tailed distributions to be explored, providing robust estimation against outliers. The h-likelihood provides a unified framework for this new class of models and gives a single algorithm for fitting all members of the class. This algorithm does not require quadrature or prior probabilities.

12.
In many financial applications, Poisson mixture regression models are commonly used to analyze heterogeneous count data. When fitting these models, the observed counts are assumed to come from two or more subpopulations, and parameter estimation is typically performed by maximum likelihood via the Expectation–Maximization (EM) algorithm. In this study, we briefly discuss the procedure for fitting Poisson mixture regression models by maximum likelihood, model selection, and goodness-of-fit tests. These models are applied to a real data set for credit-scoring purposes. We aim to reveal the impact of demographic and financial variables in creating different groups of clients and to predict the group to which each client belongs, as well as his expected number of defaulted payments. The model's conclusions are very interesting, revealing that the population consists of three groups, contrasting with the traditional good versus bad categorization approach of credit-scoring systems.
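A compact sketch of the EM iterations for a two-component Poisson mixture without covariates (the regression structure and the three-group credit-scoring model are omitted; starting values and names are illustrative):

```python
# EM for a two-component Poisson mixture.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
y = np.concatenate([rng.poisson(1.0, 600), rng.poisson(6.0, 400)])

w, lam = np.array([0.5, 0.5]), np.array([0.5, 5.0])   # starting values
for _ in range(200):
    # E-step: posterior probability that each count belongs to each component
    dens = w * poisson.pmf(y[:, None], lam)            # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update mixing weights and component means
    w = resp.mean(axis=0)
    lam = (resp * y[:, None]).sum(axis=0) / resp.sum(axis=0)

print("weights:", w, "rates:", lam)
```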

13.
We propose a flexible semiparametric stochastic mixed effects model for bivariate cyclic longitudinal data. The model can handle either single cycle or, more generally, multiple consecutive cycle data. The approach models the mean of responses by parametric fixed effects and a smooth nonparametric function for the underlying time effects, and the relationship across the bivariate responses by a bivariate Gaussian random field and a joint distribution of random effects. The proposed model not only can model complicated individual profiles, but also allows for more flexible within-subject and between-response correlations. The fixed effects regression coefficients and the nonparametric time functions are estimated using maximum penalized likelihood, where the resulting estimator for the nonparametric time function is a cubic smoothing spline. The smoothing parameters and variance components are estimated simultaneously using restricted maximum likelihood. Simulation results show that the parameter estimates are close to the true values. The fit of the proposed model on a real bivariate longitudinal dataset of pre-menopausal women also performs well, both for a single cycle analysis and for a multiple consecutive cycle analysis. The Canadian Journal of Statistics 48: 471–498; 2020
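As a minimal sketch of the nonparametric time-effect component, a cubic smoothing spline fitted to noisy cyclic data with scipy (an assumption about tooling; the penalized-likelihood/REML smoothing-parameter selection and the bivariate random-field structure of the paper are not reproduced, and the smoothing factor is chosen by hand):

```python
# Cubic smoothing spline fit to noisy cyclic data.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(8)
t = np.linspace(0.0, 2.0, 120)                          # two "cycles"
y = np.sin(2 * np.pi * t) + rng.normal(0.0, 0.3, t.size)

spline = UnivariateSpline(t, y, k=3, s=t.size * 0.3 ** 2)  # s chosen near n * noise variance
print(spline(np.array([0.25, 0.75, 1.25])))                # fitted time effects
```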

14.
We investigate mixed analysis of covariance models for the 'one-step' assessment of conditional QT prolongation. Initially, we consider three different covariance structures for the data, where between-treatment covariance of repeated measures is modelled respectively through random effects, random coefficients, and through a combination of random effects and random coefficients. In all three of those models, an unstructured covariance pattern is used to model within-treatment covariance. In a fourth model, proposed earlier in the literature, between-treatment covariance is modelled through random coefficients but the residuals are assumed to be independent identically distributed (i.i.d.). Finally, we consider a mixed model with saturated covariance structure. We investigate the precision and robustness of those models by fitting them to a large group of real data sets from thorough QT studies. Our findings suggest: (i) Point estimates of treatment contrasts from all five models are similar. (ii) The random coefficients model with i.i.d. residuals is not robust; the model potentially leads to both under- and overestimation of standard errors of treatment contrasts and therefore cannot be recommended for the analysis of conditional QT prolongation. (iii) The combined random effects/random coefficients model does not always converge; in the cases where it converges, its precision is generally inferior to the other models considered. (iv) Both the random effects and the random coefficients model are robust. (v) The random effects, the random coefficients, and the saturated model have similar precision and all three models are suitable for the one-step assessment of conditional QT prolongation.

15.
Missing values are an inseparable part of longitudinal studies in epidemiology and in medical and clinical research. For simplicity, researchers usually ignore the missingness mechanism, although ignoring a not-at-random mechanism may lead to misleading results. In this paper, we use a Bayesian paradigm for fitting the Heckman selection model, which allows for non-ignorable missingness in longitudinal data. We also use reversible-jump Markov chain Monte Carlo to allow the model to choose between non-ignorable and ignorable structures for the missingness mechanism, and show how this selection can be incorporated. Some simulation studies are performed to illustrate the proposed approach. The approach is also used to analyze two real data sets.

16.
This paper presents a Bayesian method for the analysis of toxicological multivariate mortality data when the discrete mortality rate for each family of subjects at a given time depends on familial random effects and the toxicity level experienced by the family. Our aim is to model and analyse one set of such multivariate mortality data with large family sizes: the potassium thiocyanate (KSCN) tainted fish tank data of O'Hara Hines. The model used is based on a discretized hazard with additional time-varying familial random effects. A similar previous study (using sodium thiocyanate (NaSCN)) is used to construct a prior for the parameters in the current study. A simulation-based approach is used to compute posterior estimates of the model parameters and mortality rates and several other quantities of interest. Recent tools in Bayesian model diagnostics and variable subset selection have been incorporated to verify important modelling assumptions regarding the effects of time and heterogeneity among the families on the mortality rate. Further, Bayesian methods using predictive distributions are used for comparing several plausible models.

17.
We describe a selection model for multivariate counts, where the association between the primary outcomes and the endogenous selection source is modeled through outcome-specific latent effects which are assumed to be dependent across equations. Parametric specifications of this model already exist in the literature; in this paper, we show how model parameters can be estimated in a finite mixture context. This approach helps us to consider overdispersed counts, while allowing for multivariate association and endogeneity of the selection variable. In this context, attention is focused both on bias in the estimated effects when exogeneity of the selection (treatment) variable is assumed, and on consistent estimation of the association between the random effects in the primary and in the treatment effect models, when the latter is assumed endogenous. The model behavior is investigated through a large-scale simulation experiment. An empirical example on health care utilization data is provided.

18.
Nonlinear mixed-effects models are very useful for analyzing repeated measures data and are used in a variety of applications. Normal distributions for the random effects and residual errors are usually assumed, but such assumptions make inference vulnerable to the presence of outliers. In this work, we introduce an extension of the normal nonlinear mixed-effects model considering a subclass of elliptically contoured distributions for both the random effects and residual errors. This elliptical subclass, the scale mixtures of normal (SMN) distributions, includes heavy-tailed multivariate distributions, such as the Student-t, the contaminated normal and the slash, among others, and represents an interesting alternative for accommodating outliers while maintaining the elegance and simplicity of maximum likelihood theory. We propose an exact estimation procedure to obtain the maximum likelihood estimates of the fixed effects and variance components, using a stochastic approximation of the EM algorithm. We compare the performance of the normal and the SMN models on two real data sets.
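A minimal sketch of the scale-mixture-of-normals representation used above: a Student-t variate arises as X | W ~ N(0, 1/W) with W ~ Gamma(ν/2, rate ν/2). The nonlinear mixed-effects model and the stochastic EM algorithm are not shown.

```python
# Generate Student-t variates as a scale mixture of normals.
import numpy as np

rng = np.random.default_rng(9)

def r_student_t_via_smn(n, nu=4.0):
    w = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)   # mixing weights W
    return rng.standard_normal(n) / np.sqrt(w)              # X | W ~ N(0, 1/W)

x = r_student_t_via_smn(50_000, nu=4.0)
print(x.var())   # should be close to nu / (nu - 2) = 2 for nu = 4
```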

19.
Data sets with excess zeroes are frequently analyzed in many disciplines. A common framework used to analyze such data is the zero-inflated (ZI) regression model. It mixes a degenerate distribution with point mass at zero with a non-degenerate distribution. The estimates from ZI models quantify the effects of covariates on the means of latent random variables, which are often not the quantities of primary interest. Recently, marginal zero-inflated Poisson (MZIP; Long et al. [A marginalized zero-inflated Poisson regression model with overall exposure effects. Stat. Med. 33 (2014), pp. 5151–5165]) and negative binomial (MZINB; Preisser et al., 2016) models have been introduced that model the mean response directly. These models yield covariate effects that have simple interpretations that are, for many applications, more appealing than those available from ZI regression. This paper outlines a general framework for marginal zero-inflated models where the latent distribution is a member of the exponential dispersion family, focusing on common distributions for count data. In particular, our discussion includes the marginal zero-inflated binomial (MZIB) model, which has not been discussed previously. The details of maximum likelihood estimation via the EM algorithm are presented, and the properties of the estimators as well as Wald and likelihood ratio-based inference are examined via simulation. Two examples are presented to illustrate the advantages of the MZIP, MZINB, and MZIB models for practical data analysis.
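A small sketch of the latent-class formulation discussed above: the zero-inflated Poisson probability mass function mixes a point mass at zero (probability ψ) with a Poisson(λ) component, and its marginal mean is (1 − ψ)λ, which is what the marginalized models reparameterize. The MZIP/MZINB/MZIB estimation itself is not reproduced; names are illustrative.

```python
# Zero-inflated Poisson log pmf and its marginal mean.
import numpy as np
from scipy.stats import poisson

def zip_logpmf(y, lam, psi):
    """Log pmf of a zero-inflated Poisson with inflation probability psi."""
    y = np.asarray(y)
    log_p0 = np.log(psi + (1.0 - psi) * np.exp(-lam))       # P(Y = 0)
    log_pk = np.log(1.0 - psi) + poisson.logpmf(y, lam)     # P(Y = k), k >= 1
    return np.where(y == 0, log_p0, log_pk)

y = np.array([0, 0, 0, 1, 2, 5])
print(zip_logpmf(y, lam=2.0, psi=0.4))
print("ZIP marginal mean:", (1 - 0.4) * 2.0)   # (1 - psi) * lambda
```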

20.
Multilevel models have been widely applied to analyze data sets that present a hierarchical structure. In this paper we propose a generalization of normal multilevel models, named elliptical multilevel models. This proposal suggests the use of distributions in the elliptical class, thus covering all symmetric continuous distributions and including the normal distribution as a particular case. Elliptical distributions may have lighter or heavier tails than the normal. When normal error models are faced with outlying observations, heavy-tailed error models may be applied to accommodate such observations. In particular, we discuss some aspects of elliptical multilevel models, such as maximum likelihood estimation and residual analysis to assess features related to the fit and the model assumptions. Finally, two motivating examples analyzed under normal multilevel models are reanalyzed under Student-t and power exponential multilevel models. Comparisons with the normal multilevel model are performed using residual analysis.
