Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Longitudinal data are commonly modeled with normal mixed-effects models. Most modeling methods are based on traditional mean regression, which yields non-robust estimates in the presence of extreme values or outliers. Median regression is also not the best choice for estimation, especially under non-normal errors. Compared with conventional approaches, composite quantile regression provides robust estimates even for non-normal errors. In this paper, based on a so-called pseudo composite asymmetric Laplace distribution (PCALD), we develop a Bayesian treatment of composite quantile regression for mixed-effects models. Using the location-scale mixture representation of the PCALD, we establish a Bayesian hierarchical model and obtain posterior inference for all unknown parameters and latent variables via Markov chain Monte Carlo (MCMC). Finally, the procedure is illustrated with Monte Carlo simulations and a case analysis of an HIV/AIDS clinical data set.
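As a hedged sketch (the PCALD construction itself is specific to the paper), composite quantile regression minimizes check losses across several quantile levels, and the asymmetric Laplace working likelihood admits the location-scale mixture that makes Gibbs sampling tractable:

\[ \hat\beta = \arg\min_\beta \sum_{k=1}^{K}\sum_{i=1}^{n} \rho_{\tau_k}(y_i - a_k - x_i^\top\beta), \qquad \rho_\tau(u) = u\,(\tau - I\{u<0\}), \]

\[ y = \mu + \xi_\tau z + \omega_\tau \sqrt{z}\,\varepsilon, \qquad z \sim \mathrm{Exp}(1),\ \varepsilon \sim N(0,1), \]

where \xi_\tau and \omega_\tau are known constants depending on the quantile level \tau; conditional on z the model is Gaussian, which is what the hierarchical MCMC exploits.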

2.
Zero-inflated Poisson regression is a model commonly used to analyze data with excessive zeros. Although many models have been developed to fit zero-inflated data, most of them depend strongly on special features of the individual data. For example, new models are needed when the data are both truncated and inflated. In this paper, we propose a model flexible enough to handle inflation and truncation simultaneously: a mixture of a multinomial logistic regression, which models the occurrence of the inflated counts, and a truncated Poisson regression, which models the remaining counts under a truncated Poisson distribution. The performance of the proposed model is evaluated through simulation studies, where it attains the smallest mean absolute error and the best model fit. In the empirical example, the data are truncated with inflated values of zero and fourteen, and the results show that our model fits better than the competing models.
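A hedged sketch of the mixture structure, assuming inflation at a set S of counts and truncation to a support set T (notation assumed, not the paper's):

\[ P(Y=y) = \sum_{s\in S} \pi_s\, I\{y=s\} + \Big(1-\sum_{s\in S}\pi_s\Big)\,\frac{\lambda^{y}/y!}{\sum_{t\in T}\lambda^{t}/t!}, \qquad y \in T, \]

with the inflation probabilities \pi_s following a multinomial logit in covariates and \log\lambda a linear predictor. In the paper's empirical example, S = {0, 14}.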

3.
In this paper, we investigate Bayesian generalized nonlinear mixed-effects (NLME) regression models for zero-inflated longitudinal count data. The methodology is motivated by and applied to colony forming unit (CFU) counts in extended bactericidal activity tuberculosis (TB) trials. Furthermore, for model comparisons, we present a generalized method for calculating the marginal likelihoods required to determine Bayes factors. A simulation study shows that the proposed zero-inflated negative binomial regression model has good accuracy, precision, and credibility interval coverage. In contrast, conventional normal NLME regression models applied to log-transformed count data, which handle zero counts as left-censored values, may yield credibility intervals that undercover the true bactericidal activity of anti-TB drugs. We therefore recommend that zero-inflated NLME regression models be fitted to CFU counts on the original scale, as an alternative to conventional normal NLME regression models on the logarithmic scale.
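A hedged sketch of the zero-inflated likelihood the recommendation rests on (notation assumed, not the paper's):

\[ P(Y=0) = \pi + (1-\pi)\, f(0;\mu,\phi), \qquad P(Y=y) = (1-\pi)\, f(y;\mu,\phi), \quad y = 1,2,\dots, \]

where f is a negative binomial pmf and \log\mu follows the nonlinear bactericidal-activity curve over time; the conventional alternative instead models log-transformed CFU and treats zeros as left-censored observations.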

4.
In this paper, we propose the use of Bayesian quantile regression for the analysis of proportion data. We also consider the case where the data exhibit zero-or-one inflation, using a two-part model approach. For the latter scheme, we assume that the response variable is generated by a mixed discrete–continuous distribution with a point mass at zero or one. Quantile regression is then used to explain the conditional distribution of the continuous part between zero and one, while the mixture probability is also modelled as a function of the covariates. We check the performance of these models with two simulation studies. We illustrate the method with data on the proportion of households with access to electricity in Brazil.
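A minimal sketch of the two-part density, with assumed notation:

\[ f(y \mid x, z) = \pi(z)\,\delta_c(y) + \{1-\pi(z)\}\,g(y \mid x), \qquad y \in [0,1],\ c \in \{0,1\}, \]

where \delta_c is the point mass at the inflated value, \mathrm{logit}\,\pi(z) = z^\top\gamma, and the \tau-th quantile of the continuous density g on (0,1) is modelled through the quantile regression x^\top\beta(\tau).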

5.
Multiple imputation is a common approach for dealing with missing values in statistical databases. The imputer fills in missing values with draws from predictive models estimated from the observed data, resulting in multiple, completed versions of the database. Researchers have developed a variety of default routines to implement multiple imputation; however, there has been limited research comparing the performance of these methods, particularly for categorical data. We use simulation studies to compare repeated sampling properties of three default multiple imputation methods for categorical data: chained equations using generalized linear models, chained equations using classification and regression trees, and a fully Bayesian joint model based on Dirichlet process mixtures. We base the simulations on categorical data from the American Community Survey. In the circumstances of this study, the results suggest that default chained equations approaches based on generalized linear models are dominated by the default regression tree and Bayesian mixture model approaches. They also suggest competing advantages for the regression tree and Bayesian mixture model approaches, making both reasonable default engines for multiple imputation of categorical data. Supplementary material for this article is available online.
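Whichever engine produces the m completed data sets, the per-data-set analyses are pooled with Rubin's combining rules. A minimal sketch in Python (the numbers are hypothetical):

    import numpy as np

    def pool_rubin(estimates, variances):
        # Pool m point estimates and their within-imputation variances
        # using Rubin's rules.
        estimates = np.asarray(estimates, dtype=float)
        variances = np.asarray(variances, dtype=float)
        m = len(estimates)
        q_bar = estimates.mean()            # pooled point estimate
        u_bar = variances.mean()            # average within-imputation variance
        b = estimates.var(ddof=1)           # between-imputation variance
        t = u_bar + (1.0 + 1.0 / m) * b     # total variance of the pooled estimate
        return q_bar, t

    # Hypothetical: the same coefficient estimated on m = 5 completed data sets.
    est, var = pool_rubin([0.42, 0.47, 0.40, 0.45, 0.44], [0.01] * 5)
    print(est, var)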

6.
Tweedie regression models (TRMs) provide a flexible family of distributions for non-negative right-skewed data and can handle continuous data with probability mass at zero. Estimation and inference for TRMs based on the maximum likelihood (ML) method are complicated by an infinite sum in the probability function and by non-trivial restrictions on the power parameter space. In this paper, we propose two approaches for fitting TRMs, namely quasi-likelihood (QML) and pseudo-likelihood (PML). We discuss their asymptotic properties and perform simulation studies to compare our methods with the ML method. We show that the QML method provides asymptotically efficient estimation of the regression parameters. Simulation studies show that the QML and PML approaches produce estimates, standard errors and coverage rates similar to those of the ML method. Furthermore, the second-moment assumptions required by the QML and PML methods enable us to extend TRMs to a class of quasi-TRMs in Wedderburn's style. This eliminates the non-trivial restriction on the power parameter space, and thus provides a flexible regression model for continuous data. We provide an R implementation and illustrate the application of TRMs using three data sets.
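A hedged sketch of the moment assumptions involved: the Tweedie family is characterized by a power variance function, and quasi-likelihood estimation requires only the first two moments,

\[ E(Y_i)=\mu_i, \qquad \mathrm{Var}(Y_i)=\phi\,\mu_i^{p}, \qquad \sum_{i=1}^{n}\frac{\partial\mu_i}{\partial\beta}\,\frac{y_i-\mu_i}{\phi\,\mu_i^{p}} = 0, \]

where 1 < p < 2 gives the compound Poisson–gamma case with positive mass at zero. Since the quasi-score never evaluates the Tweedie density, the awkward infinite series and the restriction on p both drop out.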

7.
In statistical analysis, particularly in econometrics, it is common to consider regression models where the dependent variable is censored (limited). In particular, censoring to the left at zero is considered here. In this article, an extension of the classical normal censored model is developed by assuming independent disturbances with an identical Student-t distribution. In the context of maximum likelihood estimation, an expression for the expected information matrix is provided, and an efficient EM-type algorithm for estimating the model parameters is developed. To determine which variables affect the income of housewives, the results and methods are applied to a real data set. A brief review of the normal censored regression, or Tobit, model is also presented.
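A hedged sketch of the model and the device behind the EM step (notation assumed):

\[ y_i^{*} = x_i^\top\beta + \varepsilon_i, \qquad \varepsilon_i \sim t_\nu(0,\sigma^2), \qquad y_i = \max(y_i^{*},\,0), \]

and the Student-t error is a scale mixture of normals, \varepsilon_i \mid u_i \sim N(0,\sigma^2/u_i) with u_i \sim \mathrm{Gamma}(\nu/2,\nu/2), so the E-step reduces to moments of a truncated normal given the latent weights u_i.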

8.
Zero-adjusted regression models are used to fit variables that are discrete at zero and continuous on some interval of the positive real numbers. Diagnostic analysis in these models is usually performed using the randomized quantile residual, which is useful for checking the overall adequacy of a zero-adjusted regression model but may fail to identify some outliers. In this work, we introduce a class of residuals for outlier identification in zero-adjusted regression models. Monte Carlo simulation studies and two applications suggest that one residual in the proposed class has good properties and detects outliers that the randomized quantile residual misses.
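For reference, the randomized quantile residual that the new class is compared against is (a standard construction, with assumed notation)

\[ r_i = \Phi^{-1}(u_i), \qquad u_i \sim \mathrm{Uniform}\big(F(y_i^{-};\hat\theta),\, F(y_i;\hat\theta)\big) \text{ at the mass at zero}, \quad u_i = F(y_i;\hat\theta) \text{ otherwise}, \]

so that under a correctly specified model the r_i are approximately standard normal; the randomization at zero is also why individual outliers there can be masked.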

9.
The additive hazards model is one of the most commonly used regression models in the analysis of failure time data, and many methods have been developed for its inference in various situations. However, no established estimation procedure exists when there are covariates with missing values and the observed responses are interval-censored; both complications arise in settings including demographic, epidemiological, financial, medical and sociological studies. To address this deficiency, we propose several inverse probability weight-based and reweighting-based estimation procedures for the situation where covariate values are missing at random. The resulting estimators of the regression parameters are shown to be consistent and asymptotically normal. The numerical results that we report from a simulation study suggest that the proposed methods work well in practical situations. An application to a childhood cancer survival study is provided. The Canadian Journal of Statistics 48: 499–517; 2020.
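A hedged sketch of the two ingredients (the paper's interval-censoring machinery is more involved):

\[ \lambda(t \mid Z) = \lambda_0(t) + \beta^\top Z(t), \]

and, with R_i indicating whether subject i's covariate is observed and \pi(W_i)=P(R_i=1\mid W_i), inverse probability weighting enters the estimating equations through complete-case weights R_i/\hat\pi(W_i), which restores consistency when covariates are missing at random.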

10.
In the calibration of a near-infrared (NIR) instrument, we regress chemical compositions of interest on their NIR spectra. This presents two immediate challenges: first, the number of variables exceeds the number of observations and, second, the multicollinearity between variables is extremely high. To deal with these challenges, prediction models that produce sparse solutions have recently been proposed. 'Sparse' means that some model parameters are estimated as exactly zero while the others are estimated away from zero. In effect, variable selection is embedded in the model, potentially yielding better prediction. Many studies have investigated sparse solutions for latent variable models, such as partial least squares and principal component regression, and for direct regression models such as ridge regression (RR); in the latter case, sparsity is usually achieved by adding an L1-norm penalty to the objective function, as in lasso regression. In this study, we investigate new sparse alternatives to RR within a random-effects model framework, placing Cauchy and mixture-of-normals distributions on the random effects. The results indicate that the mixture-of-normals model produces a sparse solution with good prediction and better interpretation. We illustrate the methods using NIR spectra datasets from milk and corn specimens.
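A hedged sketch of the random-effects view (the paper's exact priors may differ):

\[ y = X\beta + \varepsilon, \qquad \beta_j \sim w\,N(0,\tau_1^2) + (1-w)\,N(0,\tau_0^2), \qquad \tau_0^2 \ll \tau_1^2, \]

where ridge regression is the single-component case \beta_j \sim N(0,\tau^2); a heavy-tailed Cauchy or a two-component normal mixture instead shrinks small coefficients essentially to zero while leaving large ones nearly unpenalized, which is the source of the sparsity.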

11.
In recent years, there has been considerable interest in regression models based on zero-inflated distributions. These models are commonly encountered in many disciplines, such as medicine, public health, and environmental sciences. The zero-inflated Poisson (ZIP) model has typically been considered for these problems. However, the ZIP model can fail if the non-zero counts are overdispersed relative to the Poisson distribution, in which case the zero-inflated negative binomial (ZINB) model may be more appropriate. In this paper, we present a Bayesian approach for fitting the ZINB regression model. This model allows an observed zero to come either from a point mass at zero or from the negative binomial component. The likelihood function is used not only to compute Bayesian model selection measures, but also to develop Bayesian case-deletion influence diagnostics based on q-divergence measures. The approach can be easily implemented using standard Bayesian software, such as WinBUGS. The performance of the proposed method is evaluated with a simulation study. Further, a real data set is analyzed, where we show that ZINB regression models seem to fit the data better than their Poisson counterparts.
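A hedged sketch of the ZINB probability mass function (a standard parameterization assumed):

\[ P(Y=0)=\pi+(1-\pi)\Big(\tfrac{\phi}{\phi+\mu}\Big)^{\phi}, \qquad P(Y=y)=(1-\pi)\,\frac{\Gamma(y+\phi)}{\Gamma(\phi)\,y!}\Big(\tfrac{\phi}{\phi+\mu}\Big)^{\phi}\Big(\tfrac{\mu}{\mu+\phi}\Big)^{y}, \quad y \ge 1, \]

so the non-inflated component has variance \mu+\mu^2/\phi; overdispersion is absorbed by \phi instead of being forced into spurious extra zeros, which is where ZIP breaks down.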

12.
Shi, Yushu; Laud, Purushottam; Neuner, Joan. Lifetime Data Analysis 2021, 27(1): 156–176.

In this paper, we first propose a dependent Dirichlet process (DDP) model using a mixture of Weibull models, with each mixture component resembling a Cox model for survival data. We then build a Dirichlet process mixture model for competing risks data without regression covariates. Next, we extend this model to a DDP model for competing risks regression data by using a multiplicative covariate effect on subdistribution hazards in the mixture components. Though built on proportional hazards (or subdistribution hazards) models, the proposed nonparametric Bayesian regression models do not require the assumption of a constant hazard (or subdistribution hazard) ratio. An external time-dependent covariate is also considered in the survival model. After describing the model, we discuss how both cause-specific and subdistribution hazard ratios can be estimated from the same nonparametric Bayesian model for competing risks regression. For use with the proposed regression models, we introduce an omnibus prior that is suitable when little external information is available about covariate effects. Finally, we compare the models’ performance with existing methods through simulations. We also illustrate the proposed competing risks regression model with data from a breast cancer study. An R package “DPWeibull” implementing all of the proposed methods is available at CRAN.
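A hedged sketch of the subdistribution hazard being modelled (Fine–Gray-style notation assumed):

\[ \lambda_k(t\mid x) = -\frac{d}{dt}\log\{1-F_k(t\mid x)\}, \qquad \lambda_k(t\mid x)=\lambda_k(t)\exp(\beta_k^\top x), \]

where F_k is the cumulative incidence function of cause k; because the multiplicative effect acts within each Weibull mixture component, the overall subdistribution hazard ratio implied by the DDP mixture may vary over time.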


13.
This paper considers point optimal tests for AR(1) errors in the linear regression model. It is shown that these tests have the same limiting power characteristics as the Durbin-Watson test: the limiting power is zero or one when the regression has no intercept, but lies strictly between these values when an intercept is included.
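For reference, the Durbin-Watson statistic against which the point optimal tests are benchmarked is

\[ d=\frac{\sum_{t=2}^{n}(\hat e_t-\hat e_{t-1})^{2}}{\sum_{t=1}^{n}\hat e_t^{2}} \;\approx\; 2(1-\hat\rho), \]

where the \hat e_t are OLS residuals and \hat\rho the first-order residual autocorrelation; "limiting power" here means the power of the test as the AR(1) coefficient tends to one.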

14.
Studies of perceived risk measured on continuous [0,100] scales were recently introduced in psychometrics; the scores can be transformed to the unit interval, but zeros or ones are then commonly observed. Motivated by this, we introduce a full set of inferential tools for augmented and limited data modeling. We consider parameter estimation, residual analysis, influence diagnostics and model selection for zero-and/or-one augmented beta rectangular (ZOABR) regression models and their nested submodels, based on a new parameterization of the beta rectangular distribution. Unlike other alternatives, we perform maximum-likelihood estimation using a combination of the EM algorithm (for the continuous part) and the Fisher scoring algorithm (for the discrete part). We also consider link functions other than the usual logistic link for modeling the response mean. Using randomized quantile residuals, (local) influence diagnostics and model selection tools, we identify the ZOABR regression model as the best one. Extensive simulation studies indicate that all of the developed tools work properly. Finally, we discuss the use of this type of model for psychometric data, although applications of the developed methods extend beyond psychometrics: they can be useful whenever the response variable is bounded, whether or not the limits are included.
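A hedged sketch of the augmented density (the beta rectangular is a uniform–beta mixture on (0,1); the paper's new parameterization may differ):

\[ f(y)=p_0\, I\{y=0\}+p_1\, I\{y=1\}+(1-p_0-p_1)\big[\eta+(1-\eta)\,b(y;\mu,\phi)\big], \quad 0<y<1, \]

where b is a beta density, \eta is the rectangular (uniform) mixing weight, and the mean of the continuous part is linked to covariates through a logistic or alternative link.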

15.
We compute the limiting power of the Durbin-Watson test in a general linear regression model. Our treatment includes previous results due to W. Krämer and H. Zeisel as well as new results. In particular, we provide new insight into the conditions under which the limiting power is zero or one. Stochastic simulations support our investigations.

16.
Many areas of statistical modeling are plagued by the “curse of dimensionality,” in which there are more variables than observations. This is especially true when developing functional regression models where the predictor data are some type of spectral measurement, such as data from near-infrared spectroscopy. While we could develop a very complex model simply by taking enough samples (so that n > p), this could prove impossible or prohibitively expensive. In addition, a regression model developed this way could turn out to be highly inefficient, as spectral data usually exhibit high multicollinearity. In this article, we propose a two-part algorithm for selecting an effective and efficient functional regression model. Our algorithm begins by evaluating a subset of discrete wavelet transformations, allowing for variation in both wavelet and filter number. Next, we perform an intermediate processing step to remove variables with low correlation to the response data. Finally, we use the genetic algorithm to perform a stochastic search through the subset regression model space, driven by an information-theoretic objective function. We allow our algorithm to develop the regression model for each response variable independently, so as to model each variable optimally. We demonstrate our method on the familiar biscuit dough dataset, which has been used in a similar context by several researchers. Our results demonstrate both the flexibility and the power of our algorithm: for each response variable, a different subset model is selected, and different wavelet transformations are used. The models developed by our algorithm show an improvement, as measured by lower mean error, over results in the published literature.
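A hedged Python sketch of the first two stages on hypothetical data (PyWavelets assumed for the DWT; the GA-driven subset search of the final stage is omitted):

    import numpy as np
    import pywt  # PyWavelets, assumed available

    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(40, 256))   # hypothetical NIR spectra: n = 40, p = 256
    y = rng.normal(size=40)                # hypothetical response variable

    # Stage 1: discrete wavelet transform of each spectrum; the wavelet and
    # decomposition level are candidates the algorithm would vary.
    def dwt_features(x, wavelet="db4", level=3):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        return np.concatenate(coeffs)

    features = np.array([dwt_features(s) for s in spectra])

    # Stage 2: drop coefficients weakly correlated with the response.
    corr = np.array([abs(np.corrcoef(features[:, j], y)[0, 1])
                     for j in range(features.shape[1])])
    reduced = features[:, corr > 0.2]      # threshold chosen for illustration

    # Stage 3 (omitted): genetic-algorithm subset search over `reduced`,
    # scored by an information-theoretic criterion such as AIC.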

17.
Screening procedures play an important role in data analysis, especially in high-throughput biological studies where the datasets consist of more covariates than independent subjects. In this article, a Bayesian screening procedure is introduced for the binary response models with logit and probit links. In contrast to many screening rules based on marginal information involving one or a few covariates, the proposed Bayesian procedure simultaneously models all covariates and uses closed-form screening statistics. Specifically, we use the posterior means of the regression coefficients as screening statistics; by imposing a generalized g-prior on the regression coefficients, we derive the analytical form of their posterior means and compute the screening statistics without Markov chain Monte Carlo implementation. We evaluate the utility of the proposed Bayesian screening method using simulations and real data analysis. When the sample size is small, the simulation results suggest improved performance with comparable computational cost.
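To illustrate why no MCMC is needed, consider the linear-model analogue of the g-prior (a hedged sketch; the paper's generalized g-prior for logit/probit links is analogous but not identical):

\[ \beta \sim N\big(0,\, g\,\sigma^{2}(X^\top X)^{-1}\big) \;\Rightarrow\; E[\beta\mid y]=\frac{g}{1+g}\,(X^\top X)^{-1}X^\top y, \]

a shrunken least-squares estimate available in closed form, so the screening statistic for each covariate is simply the corresponding component of the posterior mean.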

18.
In linear regression the structure of the hat matrix plays an important part in regression diagnostics. In this note we investigate the properties of the hat matrix for regression with censored responses in the presence of one or more explanatory variables observed without censoring. The censored points in the scatterplot are renovated, that is, moved to the positions they would have occupied had they been observed without censoring, via a renovation process based on Buckley-James censored regression estimators. This allows natural links to be established with the structure of ordinary least squares estimators. In particular, we show that the renovated hat matrix may be partitioned in a manner which assists in deciding whether further explanatory variables should be added to the linear model. The added variable plot for regression with censored data is developed as a diagnostic tool for this decision process.
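For reference, the ordinary least squares hat matrix that the renovated version generalizes is

\[ H = X(X^\top X)^{-1}X^\top, \qquad \hat y = Hy, \]

whose diagonal entries h_{ii}=x_i^\top(X^\top X)^{-1}x_i are the leverages; the renovated hat matrix plays the same role once censored responses are replaced by their Buckley-James renovated values.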

19.
In the linear regression model without an intercept, it is known that the limiting power of the Durbin-Watson test (as correlation among errors increases) equals either one or zero, depending on the underlying regressor matrix. This paper considers the limiting power in the model with an intercept, and proves that it will never equal one or zero.

20.
The associations in mortality of adult adoptees and their biological or adoptive parents have been studied in order to separate genetic and environmental influences. The 1003 Danish adoptees born 1924–26 have previously been analysed in a Cox regression model, using dichotomised versions of the parents’ lifetimes as covariates. This model will be referred to as the conditional Cox model, as it analyses the lifetimes of adoptees conditional on parental lifetimes. Shared frailty models may be more satisfactory, as they use the entire observed lifetime of the parents. In a simulation study, sample size, distribution of lifetimes, and truncation and censoring patterns were chosen to reflect aspects of the adoption dataset, and data were generated from the conditional Cox model or from a shared frailty model with gamma-distributed frailties. First, efficiency was compared between the conditional Cox model and a shared frailty model based on the conditional approach. For data with type 1 censoring the models showed no differences, whereas for data with random or no censoring, each model had greater power when the data were generated from it. Secondly, estimation in the shared frailty model by the conditional approach was compared with a two-stage copula approach. Both approaches worked well, with no sign of dependence on the truncation pattern, but some sign of bias depending on the censoring. For frailty parameters close to zero, we found bias when the estimation procedure did not allow negative estimates. Based on this evaluation, we prefer frailty models that allow negative frailty parameter estimates. The conclusions from earlier analyses of the adoption study were confirmed, though without greater precision than the conditional Cox model. Analyses of associations between parental lifetimes are also presented.
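A hedged sketch of the shared gamma frailty model underlying the comparison (standard notation assumed):

\[ \lambda_{ij}(t\mid Z_i)=Z_i\,\lambda_0(t)\exp(\beta^\top x_{ij}), \qquad Z_i\sim\mathrm{Gamma}(1/\theta,\,1/\theta), \]

so E[Z_i]=1 and \mathrm{Var}(Z_i)=\theta measures dependence between lifetimes within a family; \theta=0 is the independence boundary, which is why allowing negative estimates of \theta matters there.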
