Similar literature
20 similar records retrieved.
1.
2.

In statistical practice, inferences on standardized regression coefficients are often required, but they are complicated by the fact that the standardized coefficients are nonlinear functions of the parameters, so standard textbook results are simply wrong. Within the frequentist domain, asymptotic delta methods can be used to construct confidence intervals for the standardized coefficients with proper coverage probabilities. Alternatively, Bayesian methods solve this and other inferential problems by simulating from the posterior distribution of the coefficients. In this paper, we present Bayesian procedures that provide comprehensive solutions for inference on the standardized coefficients. Simple computing algorithms are developed to generate posterior samples with no autocorrelation, based on both noninformative improper and informative proper prior distributions. Simulation studies show that Bayesian credible intervals constructed by our approaches have statistical properties comparable to, and sometimes better than, those of their frequentist counterparts, particularly in the presence of collinearity. In addition, our approaches solve some meaningful inferential problems that are difficult, if not impossible, from the frequentist standpoint, including identifying joint rankings of multiple standardized coefficients and making optimal decisions concerning their sizes and comparisons. We illustrate applications of our approaches through examples and make sample R functions available for implementing the proposed methods.
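As a minimal illustrative sketch (simulated data, a flat prior, and sample standard deviations held fixed; not the authors' released R functions), posterior draws of standardized coefficients can be obtained by sampling (σ², β) from the standard conjugate posterior of the normal linear model, rescaling each draw, and reading off credible intervals:

```r
## Illustrative sketch only: posterior draws of standardized coefficients
## under a flat prior for a normal linear model (simulated data).
set.seed(1)
n  <- 100
x1 <- rnorm(n); x2 <- 0.6 * x1 + rnorm(n)        # mildly collinear predictors
y  <- 1 + 0.5 * x1 + 0.3 * x2 + rnorm(n)
X  <- cbind(1, x1, x2)
fit  <- lm(y ~ x1 + x2)
bhat <- coef(fit)
XtXi <- solve(crossprod(X))
rss  <- sum(residuals(fit)^2)
ndraw <- 5000
## sigma^2 | y ~ RSS / chi^2_{n-p};  beta | sigma^2, y ~ N(bhat, sigma^2 (X'X)^{-1})
sig2 <- rss / rchisq(ndraw, df = n - ncol(X))
beta <- t(sapply(sig2, function(s)
  bhat + drop(t(chol(s * XtXi)) %*% rnorm(ncol(X)))))
## standardized coefficients: beta_j * sd(x_j) / sd(y), sample sds held fixed
std <- sweep(beta[, -1], 2, c(sd(x1), sd(x2)), `*`) / sd(y)
apply(std, 2, quantile, c(0.025, 0.5, 0.975))    # 95% credible intervals
```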

3.
In this study, Bayesian estimation of the parameters of zero-inflated count regression models and computation of the posterior model probabilities of the log-linear models defined for each zero-inflated count regression model are investigated, together with the selection of the most suitable log-linear and regression models. Zero-inflated count regression models cover the zero-inflated Poisson, zero-inflated negative binomial, and zero-inflated generalized Poisson regression models. The classical approach has some problematic points that the Bayesian approach does not share; this work points out the reasons for using the Bayesian approach and lists the advantages and disadvantages of both. As an application, a zoological data set containing both structural and sampling zeros is analyzed in the presence of extra zeros. Fitting a zero-inflated negative binomial regression model is observed to create no problems at all in the Bayesian framework, even though it is known to be the most problematic procedure in the classical approach. Additionally, the best-fitting model is found to be the log-linear model under the negative binomial regression model that does not include three-way interactions of the factors.

4.
The lasso is a popular technique for simultaneous estimation and variable selection in many research areas. When the regression coefficients have independent Laplace priors, their marginal posterior mode is equivalent to the estimate given by the non-Bayesian lasso. Because of the flexibility of its statistical inferences, the Bayesian approach has attracted a growing body of research in recent years. Current approaches either perform a fully Bayesian analysis using a Markov chain Monte Carlo (MCMC) algorithm or use Monte Carlo expectation maximization (MCEM) methods with an MCMC algorithm in each E-step. However, MCMC-based Bayesian methods carry a heavy computational burden and converge slowly. Tan et al. [An efficient MCEM algorithm for fitting generalized linear mixed models for correlated binary data. J Stat Comput Simul. 2007;77:929–943] proposed a non-iterative sampling approach, the inverse Bayes formula (IBF) sampler, for computing posteriors of a hierarchical model within the MCEM structure. Motivated by their paper, we develop this IBF sampler within the MCEM structure to obtain the marginal posterior mode of the regression coefficients for the Bayesian lasso, adjusting the importance sampling weights when the full conditional distribution is not explicit. Simulation experiments show that our EM-based method greatly reduces computing time and behaves comparably with, and sometimes better than, other Bayesian lasso methods in both prediction accuracy and variable selection accuracy, especially when the sample size is relatively large.
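For orientation, the prior-to-penalty correspondence mentioned above can be written out (a standard identity, not specific to the IBF/MCEM scheme of this paper): with Gaussian errors of known variance σ² and independent Laplace priors π(β_j) = (λ/2)e^{-λ|β_j|},

$$
\pi(\beta \mid y) \;\propto\; \exp\!\Big\{-\tfrac{1}{2\sigma^{2}}\lVert y - X\beta\rVert_{2}^{2}\Big\}\prod_{j=1}^{p}\tfrac{\lambda}{2}\,e^{-\lambda\lvert\beta_{j}\rvert},
$$

so the posterior mode minimizes $\tfrac{1}{2\sigma^{2}}\lVert y - X\beta\rVert_{2}^{2} + \lambda\sum_{j}\lvert\beta_{j}\rvert$, that is, the lasso criterion with penalty $2\sigma^{2}\lambda$ on the residual-sum-of-squares scale.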

5.
The paper proposes a Bayesian quantile regression method for hierarchical linear models. Existing approaches to hierarchical linear quantile regression are scarce, and most of them are not developed from a Bayesian perspective, which is important for hierarchical models. In this paper, based on Bayesian theory and Markov chain Monte Carlo methods, we introduce asymmetric Laplace distributed errors to simulate the joint posterior distributions of the population parameters and the across-unit parameters, and then derive their posterior quantile inferences. We run a simulation of the proposed method to examine how the units and the quantile levels affect the parameter estimates, and we also apply the method to study the relationship between Chinese rural residents' family annual income and their cultivated areas. Both the simulation and the real data analysis indicate that the method is effective and accurate.
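A minimal single-level sketch in R (not the paper's hierarchical model): the asymmetric Laplace working likelihood is the exponentiated negative check loss, so a random-walk Metropolis sampler with a flat prior on β and unit scale already illustrates the mechanics; the data, proposal scale, and burn-in below are arbitrary assumptions.

```r
## Illustrative sketch: Bayesian quantile regression via an asymmetric
## Laplace working likelihood (flat prior on beta, scale fixed at 1).
set.seed(2)
tau <- 0.5
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rt(n, df = 3)                  # heavy-tailed errors
X <- cbind(1, x)
rho <- function(u) u * (tau - (u < 0))          # check loss
loglik <- function(b) -sum(rho(y - X %*% b))    # ALD log-likelihood up to a constant
B <- 10000
draws <- matrix(NA, B, 2)
b <- coef(lm(y ~ x)); cur <- loglik(b)
for (t in 1:B) {
  prop <- b + rnorm(2, sd = 0.1)                # random-walk proposal
  new  <- loglik(prop)
  if (log(runif(1)) < new - cur) { b <- prop; cur <- new }
  draws[t, ] <- b
}
apply(draws[-(1:2000), ], 2, quantile, c(0.025, 0.5, 0.975))  # posterior summaries
```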

6.
In this paper we deal with a Bayesian analysis for right-censored survival data suitable for populations with a cure rate. We consider a cure rate model based on the negative binomial distribution, encompassing as a special case the promotion time cure model. Bayesian analysis is based on Markov chain Monte Carlo (MCMC) methods. We also present some discussion on model selection and an illustration with a real data set.

7.
In recent years, there has been considerable interest in regression models based on zero-inflated distributions. Such models are commonly encountered in many disciplines, including medicine, public health, and the environmental sciences. The zero-inflated Poisson (ZIP) model is typically considered for these problems, but it can fail if the non-zero counts are overdispersed relative to the Poisson distribution, in which case the zero-inflated negative binomial (ZINB) model may be more appropriate. In this paper, we present a Bayesian approach for fitting the ZINB regression model, in which an observed zero may come either from a point mass at zero or from the negative binomial component. The likelihood function is used not only to compute Bayesian model selection measures but also to develop Bayesian case-deletion influence diagnostics based on q-divergence measures. The approach can be easily implemented using standard Bayesian software such as WinBUGS. The performance of the proposed method is evaluated in a simulation study. A real data set is also analyzed, where we show that the ZINB regression model seems to fit the data better than its Poisson counterpart.
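For reference, a common parameterization of the ZINB regression model described here (mixing proportion π_i for the point mass at zero, mean μ_i and dispersion κ for the negative binomial part; the link functions shown are the usual choices, not necessarily the authors' exact ones):

$$
\Pr(Y_i=0)=\pi_i+(1-\pi_i)\Big(\tfrac{\kappa}{\kappa+\mu_i}\Big)^{\kappa},\qquad
\Pr(Y_i=y)=(1-\pi_i)\,\frac{\Gamma(y+\kappa)}{\Gamma(\kappa)\,y!}\Big(\tfrac{\kappa}{\kappa+\mu_i}\Big)^{\kappa}\Big(\tfrac{\mu_i}{\kappa+\mu_i}\Big)^{y},\quad y=1,2,\dots,
$$

with $\log\mu_i=x_i^{\top}\beta$ and $\operatorname{logit}\pi_i=z_i^{\top}\gamma$.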

8.
Failure time models are considered when there is a subpopulation of individuals that is immune, or not susceptible, to an event of interest. Such models are of considerable interest in biostatistics. The most common approach is to postulate a proportion p of immunes or long-term survivors and to use a mixture model [5]. This paper introduces the defective inverse Gaussian model as a cure model and examines the use of the Gibbs sampler together with a data augmentation algorithm to study Bayesian inferences both for the cured fraction and the regression parameters. The results of the Bayesian and likelihood approaches are illustrated on two real data sets.
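The mixture formulation mentioned in the abstract, written out for reference (p is the immune proportion and S_0 the survival function of the susceptibles; the defective inverse Gaussian variant instead uses a single defective distribution whose total probability mass is below one, the deficit playing the role of the cure fraction):

$$
S_{\text{pop}}(t) \;=\; p + (1-p)\,S_{0}(t), \qquad \lim_{t\to\infty} S_{\text{pop}}(t) \;=\; p .
$$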

9.
The multinomial logistic regression model (MLRM) can be interpreted as a natural extension of the binomial model with logit link to situations where the response variable has three or more possible outcomes. When the categories of the response are nominal, the MLRM can be expressed in terms of two or more logistic models and analyzed under both frequentist and Bayesian approaches. However, few discussions of post-fitting diagnostics for categorical data models are found in the literature, and those that exist mainly use Bayesian inference. The objective of this work is to present classical and Bayesian diagnostic measures for categorical data models. These measures are applied to a data set on the status of patients undergoing kidney transplantation.
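The MLRM referred to here, in its usual baseline-category form (category 1 taken as the reference; the notation is generic rather than the article's):

$$
\Pr(Y_i=k\mid x_i)=\frac{\exp(x_i^{\top}\beta_k)}{1+\sum_{l=2}^{K}\exp(x_i^{\top}\beta_l)},\;\; k=2,\dots,K,\qquad
\Pr(Y_i=1\mid x_i)=\frac{1}{1+\sum_{l=2}^{K}\exp(x_i^{\top}\beta_l)},
$$

so the model can be read as $K-1$ logistic comparisons of each category against the baseline, as the abstract notes.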

10.
Measurement error is a commonly addressed problem in psychometrics and the behavioral sciences, particularly where gold-standard data either do not exist or are too expensive to obtain. The Bayesian approach can be used to adjust for the bias that measurement error in tests introduces. Bayesian methods offer other practical advantages for the analysis of epidemiological data, including the possibility of incorporating relevant prior scientific information and the ability to make inferences that do not rely on large-sample assumptions. In this paper we consider a logistic regression model in which both the response and a binary covariate are subject to misclassification. We assume that both a continuous measure and a binary diagnostic test are available for the response variable, but no gold-standard test is assumed to be available. We consider a fully Bayesian analysis that affords such adjustments, accounting for the sources of error and correcting the estimates of the regression parameters. In our example and simulations, the models that account for misclassification produce more statistically significant results than the models that ignore misclassification. A real data example on math disorders is considered.
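A standard way to write the response-misclassification part of such a model (Se and Sp are the sensitivity and specificity of the binary diagnostic test, typically given Beta priors; the article additionally handles a misclassified binary covariate and a continuous surrogate, omitted here):

$$
\Pr(Y_i^{*}=1\mid x_i)=\mathrm{Se}\,\pi_i+(1-\mathrm{Sp})(1-\pi_i),\qquad
\pi_i=\Pr(Y_i=1\mid x_i)=\operatorname{logit}^{-1}(x_i^{\top}\beta),
$$

where $Y_i^{*}$ is the observed (error-prone) response and $Y_i$ the true status.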

11.
Panel studies are statistical studies in which two or more variables are observed for two or more subjects at two or more points in time. Cross-lagged panel studies are those in which the variables are continuous and divide naturally into two sets, with interest centering on the effects, or impacts, of each set of variables on the other. If a regression approach is taken, a regression structure is formulated for the cross-lagged model. This structure may assume that the regression parameters are homogeneous across waves and across subpopulations. Under such assumptions the methods of multivariate regression analysis can be adapted to make inferences about the parameters; these inferences are limited to the degree that homogeneity of the parameters is supported by the data. We consider the problem of testing the hypotheses of homogeneity and the problem of making statistical inferences about the cross-effects should there be evidence against one of the homogeneity assumptions. We demonstrate the methods developed by applying them to two panel data sets.
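A two-wave, two-variable cross-lagged regression structure of the kind described (generic notation; β12 and β21 are the cross-effects, and the homogeneity assumptions state that these coefficients are constant across waves and subpopulations):

$$
y_{1,t}= \alpha_1 + \beta_{11}\,y_{1,t-1} + \beta_{12}\,y_{2,t-1} + \varepsilon_{1,t},\qquad
y_{2,t}= \alpha_2 + \beta_{21}\,y_{1,t-1} + \beta_{22}\,y_{2,t-1} + \varepsilon_{2,t}.
$$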

12.
In recent years, a large amount of interest has been devoted to the use of Bayesian methods for deriving parameter estimates in stochastic frontier analysis. Bayesian stochastic frontier analysis (BSFA) appears to be a useful method for assessing efficiency in the energy sector. However, BSFA results do not expose the multiple relationships between the input and output variables and energy efficiency. This study proposes a framework for making inferences about BSFA efficiencies that recognizes the underlying relationships between the variables and efficiency, using a Bayesian network (BN) approach. BN classifiers are proposed as a method to analyze the results obtained from BSFA.

13.
Assessing the absolute risk for a future disease event in presently healthy individuals has an important role in the primary prevention of cardiovascular diseases (CVD) and other chronic conditions. In this paper, we study the use of non‐parametric Bayesian hazard regression techniques and posterior predictive inferences in the risk assessment task. We generalize our previously published Bayesian multivariate monotonic regression procedure to a survival analysis setting, combined with a computationally efficient estimation procedure utilizing case–base sampling. To achieve parsimony in the model fit, we allow for multidimensional relationships within specified subsets of risk factors, determined either on an a priori basis or as part of the estimation procedure. We apply the proposed methods to 10‐year CVD risk assessment in a Finnish population. © 2014 Board of the Foundation of the Scandinavian Journal of Statistics

14.
This article deals with a Bayesian predictive approach to two-stage sequential analyses in clinical trials, applied to both frequentist and Bayesian tests. We propose to make predictive inferences based on the notion of a satisfaction index, using the data accrued so far together with future data. The computations and the simulation results concern an inferential problem related to the binomial model.

15.
We review Bayesian analysis of hierarchical non-standard Poisson regression models, with an emphasis on microlevel heterogeneity and macrolevel autocorrelation. For the former, we confirm that negative binomial regression usually accounts for microlevel heterogeneity (overdispersion) satisfactorily; for the latter, we apply a simple first-order Markov transition model to capture conveniently the macrolevel autocorrelation that often arises in temporal and/or spatial count data, rather than attaching complex random effects directly to the regression parameters. Specifically, we extend the hierarchical (multilevel) Poisson model to negative binomial models with macrolevel autocorrelation, using a restricted gamma mixture with unit mean and a Markov transition covariate created from preceding residuals. We prove a mild sufficient condition for posterior propriety under a flat prior on the fixed effects of interest. Our methodology is illustrated by analyzing the Baltic Sea peracarid diurnal activity data published in the marine biology and ecology literature.
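Schematically, the extension described can be written as a unit-mean gamma mixture of Poissons with a lagged-residual covariate (illustrative notation; the paper's exact construction of the transition covariate from preceding residuals may differ):

$$
y_t\mid \varepsilon_t \sim \text{Poisson}(\varepsilon_t\,\mu_t),\qquad
\varepsilon_t \sim \text{Gamma}(\kappa,\kappa),\qquad
\log\mu_t = x_t^{\top}\beta + \gamma\, r_{t-1},
$$

where $r_{t-1}$ is a residual from the preceding observation, so that marginally $y_t$ is negative binomial (microlevel heterogeneity) while the $\gamma\,r_{t-1}$ term carries the macrolevel autocorrelation.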

16.
This article presents an explicit and detailed theoretical and empirical Bayesian analysis of the well-known Poisson regression model for count data with unobserved individual effects based on the lognormal, rather than the popular negative binomial, distribution. Although the negative binomial distribution leads to analytical expressions for the likelihood function, a Poisson-lognormal model is closer to the concept of regression with normally distributed innovations and accounts for excess zeros as well. Such models have been considered widely in the literature (Winkelmann, 2008). The article also provides the necessary theoretical results regarding the posterior distribution of the model. Given that the likelihood function involves integrals with respect to the latent variables, numerical methods organized around Gibbs sampling with data augmentation are proposed for likelihood analysis of the model. The methods are applied to the patent-R&D relationship of 70 US pharmaceutical and biomedical companies, and the Poisson-lognormal model is found to perform better than Poisson regression or negative binomial regression models.
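For reference, the Poisson-lognormal likelihood contribution of observation i (with latent effect b_i and φ the normal density); the integral has no closed form, which is why Gibbs sampling with data augmentation is used:

$$
p(y_i\mid \beta,\sigma^{2})=\int \frac{e^{-\mu_i}\,\mu_i^{\,y_i}}{y_i!}\,
\phi(b_i;0,\sigma^{2})\,db_i,\qquad \log\mu_i=x_i^{\top}\beta+b_i .
$$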

17.
In this article, we consider Bayesian inference for heteroscedastic nonparametric regression models in which both the mean function and the variance function are unknown. We demonstrate consistency of the posterior distributions for this model using priors induced by B-spline expansions, treating both random and deterministic covariates in a uniform manner.

18.
It is common to fit generalized linear models with binomial or Poisson responses to data whose variability is greater than the theoretical variability assumed by the model. This phenomenon, known as overdispersion, may spoil inferences by declaring significant parameters for variables that have no real effect on the dependent variable. This paper explains some methods for detecting overdispersion and presents and evaluates three well-known methodologies that have proved useful in correcting this problem: random mean models, quasi-likelihood methods, and the double exponential family. In addition, it proposes some new Bayesian model extensions that have also proved useful in correcting overdispersion. Finally, using information from the 2005 National Demographic and Health Survey, the departmental factors that influence the mortality of children under 5 years of age and postnatal screening of women are determined. Based on these results, extensions that generalize some of the aforementioned models are also proposed, and their use is motivated by the data set under study. The results show that the proposed overdispersion models provide a better statistical fit to the data.
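A minimal R sketch of the detection-and-correction step discussed above (simulated counts, not the DHS survey data; quasi-likelihood shown as one of the classical fixes the paper evaluates):

```r
## Illustrative sketch: detect overdispersion in a Poisson GLM and correct
## standard errors with a quasi-likelihood fit (simulated data).
set.seed(5)
n <- 500
x <- rnorm(n)
y <- rnbinom(n, size = 1.5, mu = exp(0.3 + 0.7 * x))   # overdispersed counts
pois  <- glm(y ~ x, family = poisson)
quasi <- glm(y ~ x, family = quasipoisson)
summary(quasi)$dispersion                 # estimated dispersion, well above 1
cbind(poisson = coef(summary(pois))[, "Std. Error"],
      quasi   = coef(summary(quasi))[, "Std. Error"])   # inflated, more honest SEs
```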

19.
The sensitivity of a Bayesian inference to prior assumptions is examined by Monte Carlo simulation for the beta-binomial conjugate family of distributions. Results for the effect on a Bayesian probability interval for the binomial parameter indicate that the Bayesian inference is, for the most part, quite sensitive to misspecification of the prior distribution. The magnitude of the sensitivity depends primarily on the difference between the assigned means and variances and the respective means and variances of the actually-sampled prior distributions. The effect of a disparity in form between the assigned and actually-sampled prior distributions was less important in the cases tested.
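A minimal R sketch of this kind of sensitivity experiment (illustrative parameter values only, not the study's design): draw the binomial parameter from an "actually-sampled" Beta prior, form the conjugate posterior interval under an assigned Beta prior, and check how often the interval covers the true parameter.

```r
## Illustrative sketch: coverage of a 95% Bayesian interval for a binomial p
## when the assigned Beta prior differs from the prior generating p.
set.seed(3)
sim_cover <- function(a_true, b_true, a_assumed, b_assumed, n = 20, reps = 5000) {
  p  <- rbeta(reps, a_true, b_true)                      # actually-sampled prior
  x  <- rbinom(reps, n, p)
  lo <- qbeta(0.025, a_assumed + x, b_assumed + n - x)   # conjugate posterior limits
  hi <- qbeta(0.975, a_assumed + x, b_assumed + n - x)
  mean(lo <= p & p <= hi)                                # coverage of the true p
}
sim_cover(2, 2, 2, 2)   # correctly specified prior: close to 0.95
sim_cover(2, 2, 8, 2)   # prior mean misspecified: coverage can drop noticeably
```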

20.
Overdispersion is a common phenomenon in count data and is usually treated with the negative binomial model. This paper shows that measurement errors in covariates generally also lead to overdispersion in the observed data even when the true data-generating process is a Poisson regression. This kind of overdispersion cannot be treated with the negative binomial model; doing so introduces bias. To provide consistent estimates, we propose a new type of corrected score estimator, assuming that the distribution of the latent variables is known. The consistency and asymptotic normality of the proposed estimator are established. Simulation results show that the estimator has good finite-sample performance. We also illustrate that the Akaike information criterion and the Bayesian information criterion work well for selecting the correct model when the true model is the errors-in-variables Poisson regression.
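A minimal R sketch of the phenomenon described (simulated data; the corrected score estimator itself is not shown): fitting a Poisson regression to an error-prone covariate yields a Pearson dispersion statistic well above one even though the true model is Poisson.

```r
## Illustrative sketch: covariate measurement error induces apparent
## overdispersion when the true data generating process is Poisson.
set.seed(4)
n <- 5000
x <- rnorm(n)
w <- x + rnorm(n, sd = 0.8)          # error-prone surrogate for x
y <- rpois(n, exp(0.5 + 1.0 * x))    # true Poisson regression in x
fit <- glm(y ~ w, family = poisson)  # naive fit using the surrogate
## Pearson dispersion statistic; values well above 1 indicate overdispersion
sum(residuals(fit, type = "pearson")^2) / fit$df.residual
```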
