Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Model uncertainty has become a central focus of policy discussion surrounding the determinants of economic growth. Over 140 regressors have been employed in growth empirics, owing to the proliferation of new growth theories over the past two decades. Recently, Bayesian model averaging (BMA) has been employed to address model uncertainty and to provide clear policy implications by identifying robust growth determinants. These BMA approaches were, however, limited to linear regression models that abstract from possible dependencies embedded in the covariance structures of growth determinants. The recent empirical growth literature has developed jointness measures to highlight such dependencies. We address model uncertainty and covariate dependencies in a comprehensive Bayesian framework that allows for structural learning in linear regressions and Gaussian graphical models. A common prior specification across the entire framework provides consistency. Gaussian graphical models permit a principled analysis of dependency structures, yielding a much more parsimonious set of fundamental growth determinants. Our empirics are based on a prominent growth dataset with 41 potential economic factors that has been utilized in numerous previous analyses to account for model uncertainty as well as jointness.
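The posterior inclusion probabilities at the heart of this kind of BMA exercise can be sketched in a few lines. This is a toy illustration, not the paper's graphical-model framework: the regressor names and marginal likelihood values below are hypothetical, and a uniform model prior is assumed.

```python
# Toy BMA sketch (illustrative regressors and marginal likelihoods,
# not the paper's dataset or framework). With a uniform prior over
# candidate models, posterior model probabilities are proportional to
# the marginal likelihoods; the posterior inclusion probability (PIP)
# of a regressor sums the probabilities of all models containing it.

regressors = ["initial_gdp", "schooling", "investment"]

# hypothetical marginal likelihood for each subset of regressors
marg_lik = {
    (): 0.5,
    ("initial_gdp",): 4.0,
    ("schooling",): 1.0,
    ("investment",): 0.5,
    ("initial_gdp", "schooling"): 3.0,
    ("initial_gdp", "investment"): 0.6,
    ("schooling", "investment"): 0.3,
    ("initial_gdp", "schooling", "investment"): 0.1,
}

total = sum(marg_lik.values())
post = {model: ml / total for model, ml in marg_lik.items()}

def pip(regressor):
    """Posterior inclusion probability of one regressor."""
    return sum(p for model, p in post.items() if regressor in model)
```

A regressor with a PIP close to one across specifications is what the literature would call a robust growth determinant.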

2.
We propose a Bayesian nonparametric instrumental variable approach under additive separability that allows us to correct for endogeneity bias in regression models where the covariate effects enter with unknown functional form. Bias correction relies on a simultaneous equations specification with flexible modeling of the joint error distribution implemented via a Dirichlet process mixture prior. Both the structural and the instrumental variable equation are specified in terms of additive predictors comprising penalized splines for nonlinear effects of continuous covariates. Inference is fully Bayesian, employing efficient Markov chain Monte Carlo simulation techniques. The resulting posterior samples not only provide point estimates but also allow us to construct simultaneous credible bands for the nonparametric effects, including data-driven smoothing parameter selection. In addition, improved robustness properties are achieved due to the flexible error distribution specification. Both these features are challenging in the classical framework, making the Bayesian one advantageous. In simulations, we investigate small sample properties, and an investigation of the effect of class size on student performance in Israel illustrates the proposed approach, which is implemented in the R package bayesIV. Supplementary materials for this article are available online.

3.
We extend the Bayesian Model Averaging (BMA) framework to dynamic panel data models with endogenous regressors using a Limited Information Bayesian Model Averaging (LIBMA) methodology. Monte Carlo simulations confirm the asymptotic performance of our methodology both in BMA and selection, with high posterior inclusion probabilities for all relevant regressors, and parameter estimates very close to their true values. In addition, we illustrate the use of LIBMA by estimating a dynamic gravity model for bilateral trade. Once model uncertainty, dynamics, and endogeneity are accounted for, we find several factors that are robustly correlated with bilateral trade. We also find that applying methodologies that do not account for either dynamics or endogeneity (or both) results in different sets of robust determinants.

4.
We present a Bayesian analysis of a piecewise linear model constructed from basis functions that generalize the univariate linear spline to higher dimensions. Prior distributions are adopted on both the number and the locations of the splines, which leads to a model averaging approach to prediction with predictive distributions that take into account model uncertainty. Conditioning on the data produces a Bayes local linear model with distributions on both predictions and local linear parameters. The method is spatially adaptive, and covariate selection is achieved by using splines of lower dimension than the data.
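The univariate building block that this construction generalizes can be sketched directly: a linear spline is a linear combination of an intercept, a linear term, and truncated lines at the knots. This one-dimensional sketch is illustrative only; the function names and the example knots and coefficients are not from the paper.

```python
def linear_spline_basis(x, knots):
    """Basis for a univariate piecewise linear (linear spline) fit:
    intercept, x, and one truncated line (x - k)_+ per knot.
    A 1-D sketch of the basis-function idea; the paper extends such
    bases to higher dimensions and places priors on the knots."""
    return [1.0, x] + [max(0.0, x - k) for k in knots]

def piecewise_linear(x, coefs, knots):
    """Evaluate the spline: dot product of coefficients and basis."""
    return sum(c * b for c, b in zip(coefs, linear_spline_basis(x, knots)))
```

With one knot at 1.0 and coefficients (0, 1, -2), the fitted function rises with slope 1 up to the knot and then falls with slope -1, i.e. the slope changes by the truncated-line coefficient at each knot.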

5.
In the presence of covariate information, the proportional hazards model is one of the most popular models. In this paper, in a Bayesian nonparametric framework, we use a Markov (Lévy-driven) process to model the baseline hazard rate. Previous Bayesian nonparametric models have been based on neutral to the right processes, which have a number of drawbacks, such as discreteness of the cumulative hazard function. We allow the covariates to be time dependent functions and develop a full posterior analysis via substitution sampling. A detailed illustration is presented.

6.
Summary.  The paper develops a Bayesian hierarchical model for estimating the catch at age of cod landed in Norway. The model includes covariate effects such as season and gear, and can also account for the within-boat correlation. The hierarchical structure allows us to account properly for the uncertainty in the estimates.

7.
Borrowing data from an external control has been an appealing strategy for evidence synthesis when conducting randomized controlled trials (RCTs). Often named hybrid control trials, such designs leverage existing control data from clinical trials or potentially real-world data (RWD), allocate more patients to the novel intervention arm, and improve the efficiency or lower the cost of the primary RCT. Several methods have been established and developed to borrow external control data, among which propensity score methods and the Bayesian dynamic borrowing framework play essential roles. Noting the complementary strengths of propensity score methods and Bayesian hierarchical models, we utilize both in combination to analyze hybrid control studies. In this article, we review methods including covariate adjustment, propensity score matching, and propensity score weighting in combination with dynamic borrowing, and compare the performance of these methods through comprehensive simulations. Different degrees of covariate imbalance and confounding are examined. Our findings suggest that conventional covariate adjustment in combination with the Bayesian commensurate prior model provides the highest power with good type I error control under the investigated settings, and it performs especially well across different degrees of confounding. To estimate efficacy signals in the exploratory setting, the covariate adjustment method in combination with the Bayesian commensurate prior is recommended.

8.
When some explanatory variables in a regression are correlated with the disturbance term, instrumental variable methods are typically employed to make reliable inferences. Furthermore, to avoid difficulties associated with weak instruments, identification-robust methods are often proposed. However, it is hard to assess whether an instrumental variable is valid in practice because instrument validity is based on the questionable assumption that some of them are exogenous. In this paper, we focus on structural models and analyze the effects of instrument endogeneity on two identification-robust procedures, the Anderson–Rubin (1949, AR) and the Kleibergen (2002, K) tests, with or without weak instruments. Two main setups are considered: (1) the level of “instrument” endogeneity is fixed (does not depend on the sample size) and (2) the instruments are locally exogenous, i.e. the parameter which controls instrument endogeneity approaches zero as the sample size increases. In the first setup, we show that both test procedures are in general consistent against the presence of invalid instruments (hence asymptotically invalid for the hypothesis of interest), whether the instruments are “strong” or “weak”. We also describe cases where test consistency may not hold, but the asymptotic distribution is modified in a way that would lead to size distortions in large samples. These include, in particular, cases where the 2SLS estimator remains consistent, but the AR and K tests are asymptotically invalid. In the second setup, we find (non-degenerate) asymptotic non-central chi-square distributions in all cases, and describe cases where the non-centrality parameter is zero and the asymptotic distribution remains the same as in the case of valid instruments (despite the presence of invalid instruments). Overall, our results underscore the importance of checking for the presence of possibly invalid instruments when applying “identification-robust” tests.
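The AR statistic studied here can be sketched in its simplest textbook form for one endogenous regressor and one instrument. This is a minimal illustration under homoskedasticity (an n·R² version), not the paper's general implementation; the function name and data are hypothetical.

```python
def anderson_rubin_stat(y, x, z, beta0):
    """Sketch of the Anderson-Rubin idea with one endogenous regressor
    and one instrument: under H0: beta = beta0, the restricted residual
    y - beta0*x should be uncorrelated with the instrument z, so n times
    the squared sample correlation is asymptotically chi-square(1).
    A toy n*R^2 form under homoskedasticity, not the paper's setup."""
    n = len(y)
    u = [yi - beta0 * xi for yi, xi in zip(y, x)]
    mu, mz = sum(u) / n, sum(z) / n
    suz = sum((ui - mu) * (zi - mz) for ui, zi in zip(u, z))
    suu = sum((ui - mu) ** 2 for ui in u)
    szz = sum((zi - mz) ** 2 for zi in z)
    if suu == 0.0:  # residuals exactly consistent with H0
        return 0.0
    r2 = suz * suz / (suu * szz)
    return n * r2
```

The paper's point is precisely that the chi-square(1) calibration of this statistic breaks down when z itself is correlated with the disturbance, whether instruments are strong or weak.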

9.
In this contribution we aim at improving ordinal variable selection in the context of causal models for credit risk estimation. In this regard, we propose an approach that provides a formal inferential tool to compare the explanatory power of each covariate and, therefore, to select an effective model for classification purposes. Our proposed model is Bayesian nonparametric and thus keeps model specification to a minimum. We consider the case in which information from the covariates is at the ordinal level. A notable instance is the situation in which ordinal variables result from rankings of companies that are to be evaluated according to different macro- and microeconomic aspects, leading to ordinal covariates that correspond to various ratings entailing different probabilities of default. For each given covariate, we suggest partitioning the statistical units into as many groups as the number of observed levels of the covariate. We then assume individual defaults to be homogeneous within each group and heterogeneous across groups. Our aim is to compare and, therefore, select the partition structures resulting from the consideration of different explanatory covariates. The metric we choose for variable comparison is the posterior probability of each partition. The application of our proposal to a European credit risk database shows that it performs well, leading to a coherent and clear method for averaging the estimated default probabilities across variables.

10.
Survival data may include two different sources of variation, namely variation over time and variation over units. If both of these variations are present, neglecting one of them can cause serious bias in the estimations. Here we present an approach for discrete duration data that includes both time-varying and unit-specific effects to model these two variations simultaneously. The approach is a combination of a dynamic survival model with dynamic time-varying baseline and covariate effects and a frailty model measuring unobserved heterogeneity with random effects varying independently over units. Estimation is based on posterior modes, i.e., we maximize the joint posterior distribution of the unknown parameters to avoid the numerical integration and simulation techniques that are necessary in a full Bayesian analysis. Estimation of unknown hyperparameters is achieved by an EM-type algorithm. Finally, the proposed method is applied to data of the Veteran's Administration Lung Cancer Trial.

11.
Survival data obtained from prevalent cohort study designs are often subject to length-biased sampling. Frequentist methods including estimating equation approaches, as well as full likelihood methods, are available for assessing covariate effects on survival from such data. Bayesian methods allow a perspective of probability interpretation for the parameters of interest, and may easily provide the predictive distribution for future observations while incorporating weak prior knowledge on the baseline hazard function. There is a lack of Bayesian methods for analyzing length-biased data. In this paper, we propose Bayesian methods for analyzing length-biased data under a proportional hazards model. The prior distribution for the cumulative hazard function is specified semiparametrically using I-Splines. Bayesian conditional and full likelihood approaches are developed for analyzing simulated and real data.

12.
We present a method for using posterior samples produced by the computer program BUGS (Bayesian inference Using Gibbs Sampling) to obtain approximate profile likelihood functions of parameters or functions of parameters in directed graphical models with incomplete data. The method can also be used to approximate integrated likelihood functions. It is easily implemented and provides a good approximation. The profile likelihood represents an aspect of the parameter uncertainty that does not depend on the specification of prior distributions, and it can be used as a worthwhile supplement to BUGS that enables us to perform both Bayesian and likelihood-based analyses in directed graphical models.
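One crude way to picture how posterior draws can approximate a profile likelihood is to bin the draws of the parameter of interest and record the best log-likelihood attained in each bin. This sketch is only a caricature of the idea; the function names, the binning scheme, and the toy inputs are assumptions, not the paper's algorithm.

```python
def approx_profile_loglik(psi_draws, logliks, edges):
    """Crude sketch: given posterior draws of the parameter of interest
    (psi_draws) with the log-likelihood of each full-parameter draw
    (logliks), bin psi and take the maximum log-likelihood per bin.
    With enough draws, the per-bin maximum traces out an approximate
    profile log-likelihood. Illustrative only, not the paper's method."""
    profile = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        vals = [ll for psi, ll in zip(psi_draws, logliks) if lo <= psi < hi]
        profile.append(max(vals) if vals else float("-inf"))
    return profile
```

Because only likelihood values enter the per-bin maximum, the resulting curve does not depend on the prior, which is the property the abstract highlights.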

13.
Bayesian inference for the superposition of nonhomogeneous Poisson processes is studied. A Markov-chain Monte Carlo method with data augmentation is developed to compute the features of the posterior distribution. For each observed failure epoch, a latent variable is introduced that indicates which component of the superposition model gives rise to the failure. This data-augmentation approach facilitates specification of the transitional kernel in the Markov chain. Moreover, new Bayesian tests are developed for the full superposition model against simpler submodels. Model determination by a predictive likelihood approach is studied. A numerical example based on a real data set is given.

14.
Propensity score analysis (PSA) is a technique to correct for potential confounding in observational studies. Covariate adjustment, matching, stratification, and inverse weighting are the four most commonly used methods involving propensity scores. The main goal of this research is to determine which PSA method performs best in terms of protecting against spurious association detection, as measured by the Type I error rate, while maintaining sufficient power to detect a true association, if one exists. An examination of these PSA methods along with ordinary least squares regression was conducted under two cases: correct PSA model specification and incorrect PSA model specification. PSA covariate adjustment and PSA matching maintain the nominal Type I error rate when the PSA model is correctly specified, but only PSA covariate adjustment achieves adequate power levels. The other methods produced conservative Type I error rates in some scenarios and liberal Type I error rates in others.
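Of the four methods compared, inverse weighting has the most compact form and can be sketched directly. This is a minimal illustration assuming the propensity scores e(x) = P(T=1|X) are already estimated; the function name and toy data are hypothetical.

```python
def ipw_ate(y, treat, ps):
    """Inverse-probability-weighted estimate of the average treatment
    effect: weight treated outcomes by 1/e(x) and control outcomes by
    1/(1 - e(x)), then difference the weighted means. A minimal sketch
    assuming known propensity scores ps, not this study's simulation
    setup."""
    n = len(y)
    treated = sum(yi * ti / p for yi, ti, p in zip(y, treat, ps)) / n
    control = sum(yi * (1 - ti) / (1 - p) for yi, ti, p in zip(y, treat, ps)) / n
    return treated - control
```

Misspecifying the model behind ps is exactly the "incorrect PSA model specification" case the study examines: the weights no longer balance confounders, and the Type I error rate can drift from its nominal level.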

15.
In this paper, we study the identification of Bayesian regression models, when an ordinal covariate is subject to unidirectional misclassification. Xia and Gustafson [Bayesian regression models adjusting for unidirectional covariate misclassification. Can J Stat. 2016;44(2):198–218] obtained model identifiability for non-binary regression models, when there is a binary covariate subject to unidirectional misclassification. In the current paper, we establish the moment identifiability of regression models for misclassified ordinal covariates with more than two categories, based on forms of observable moments. Computational studies are conducted that confirm the theoretical results. We apply the method to two datasets, one from the Medical Expenditure Panel Survey (MEPS), and the other from Translational Research Investigating Underlying Disparities in Acute Myocardial infarction Patients Health Status (TRIUMPH).

16.
We study how different prior assumptions on the spatially structured heterogeneity term of the convolution hierarchical Bayesian model for spatial disease data could affect the results of an ecological analysis when response and exposure exhibit a strong spatial pattern. We show that in this case the estimate of the regression parameter could be strongly biased, both by analyzing the association between lung cancer mortality and education level on a real dataset and by a simulation experiment. The analysis is based on a hierarchical Bayesian model with a time dependent covariate in which we allow for a latency period between exposure and mortality, with time and space random terms and misaligned exposure-disease data.

17.
The authors propose a novel class of cure rate models for right‐censored failure time data. The class is formulated through a transformation on the unknown population survival function. It includes the mixture cure model and the promotion time cure model as two special cases. The authors propose a general form of the covariate structure which automatically satisfies an inherent parameter constraint and includes the corresponding binomial and exponential covariate structures in the two main formulations of cure models. The proposed class provides a natural link between the mixture and the promotion time cure models, and it offers a wide variety of new modelling structures as well. Within the Bayesian paradigm, a Markov chain Monte Carlo computational scheme is implemented for sampling from the full conditional distributions of the parameters. Model selection is based on the conditional predictive ordinate criterion. The use of the new class of models is illustrated with a set of real data involving a melanoma clinical trial.
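The two special cases named in the abstract have standard closed forms that can be written down directly. The formulas below are the textbook definitions of the mixture and promotion time cure models; the function names and the baseline distributions used in any example are assumptions, not taken from the paper.

```python
import math

def mixture_cure_surv(t, cure_frac, s0):
    """Mixture cure model: S(t) = pi + (1 - pi) * S0(t), where pi is
    the cured fraction and S0 is the survival function of the
    susceptible subpopulation."""
    return cure_frac + (1.0 - cure_frac) * s0(t)

def promotion_time_surv(t, theta, f0):
    """Promotion time cure model: S(t) = exp(-theta * F0(t)), where F0
    is a proper distribution function; the cure fraction is exp(-theta),
    approached as F0(t) -> 1."""
    return math.exp(-theta * f0(t))
```

Both survival functions start at 1 and level off at a nonzero plateau (pi and exp(-theta) respectively), which is the defining feature of a cure model; the paper's transformation class interpolates between these two forms.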

18.
We describe a Bayesian model for a scenario in which the population of errors contains many 0s and there is a known covariate. This kind of structure typically occurs in auditing, and we use auditing as the driving application of the method. Our model is based on a categorization of the error population together with a Bayesian nonparametric method of modelling errors within some of the categories. Inference is through simulation. We conclude with an example based on a data set provided by the UK's National Audit Office.

19.
The goal of this paper is to compare several widely used Bayesian model selection methods in practical model selection problems, highlight their differences, and give recommendations about the preferred approaches. We focus on variable subset selection for regression and classification and perform several numerical experiments using both simulated and real-world data. The results show that optimizing a utility estimate such as the cross-validation (CV) score is liable to find overfitted models, due to the relatively high variance of the utility estimates when data are scarce. This can also lead to substantial selection-induced bias and optimism in the performance evaluation of the selected model. From a predictive viewpoint, the best results are obtained by accounting for model uncertainty through the full encompassing model, such as the Bayesian model averaging solution over the candidate models. If the encompassing model is too complex, it can be robustly simplified by the projection method, in which the information in the full model is projected onto the submodels. This approach is substantially less prone to overfitting than selection based on the CV score. Overall, the projection method also appears to outperform the maximum a posteriori model and the selection of the most probable variables. The study further demonstrates that model selection can greatly benefit from using cross-validation outside the search process, both for guiding the model size selection and for assessing the predictive performance of the finally selected model.
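The CV utility estimate whose variance drives the overfitting problem can be sketched with the simplest possible predictor, the sample mean. This is a toy leave-one-out illustration (the function name and data are hypothetical, and the paper's experiments of course use far richer models).

```python
def loo_cv_mse(y):
    """Leave-one-out CV score (mean squared error) for the simplest
    predictive model, the sample mean: each observation is predicted
    by the mean of the remaining ones. A toy version of the CV utility
    estimate whose sampling variance, when data are scarce, makes
    selection by CV score prone to overfitting."""
    n, s = len(y), sum(y)
    return sum((yi - (s - yi) / (n - 1)) ** 2 for yi in y) / n
```

Selecting the model that minimizes such a score over many candidates effectively optimizes over the noise in the estimate, which is the selection-induced bias the paper warns about; hence its recommendation to keep CV outside the search loop.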

20.
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
