Similar Literature
 A total of 20 similar documents were found (search time: 78 ms).
1.
We study the problem of selecting a regularization parameter in penalized Gaussian graphical models. When the goal is to obtain a model with good predictive power, cross-validation is the gold standard. We present a new estimator of Kullback–Leibler loss in Gaussian graphical models which provides a computationally fast alternative to cross-validation. The estimator is obtained by approximating leave-one-out cross-validation. Our approach is demonstrated on simulated data sets for various types of graphs. The proposed formula exhibits superior performance, especially in the typical small-sample-size scenario, compared with other available alternatives to cross-validation, such as Akaike's information criterion and generalized approximate cross-validation. We also show that the estimator can be used to improve the performance of the Bayesian information criterion when the sample size is small.
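The paper's closed-form approximation is not reproduced here; as a minimal, hedged sketch of the quantity being estimated, the snippet below evaluates a Gaussian Kullback–Leibler-type loss of graphical-lasso precision estimates against a held-out sample covariance over a grid of regularization parameters. The use of scikit-learn's graphical_lasso and the function names are assumptions for illustration, not the authors' code.

```python
import numpy as np
from sklearn.covariance import empirical_covariance, graphical_lasso

def gaussian_kl_loss(precision, cov_ref):
    """KL-type loss tr(Theta @ S_ref) - log det(Theta), up to an additive
    constant, between N(0, Theta^{-1}) and a reference covariance S_ref."""
    _, logdet = np.linalg.slogdet(precision)
    return np.trace(precision @ cov_ref) - logdet

def kl_loss_path(X_train, X_held_out, alphas):
    """Held-out KL loss of graphical-lasso fits over a grid of regularization
    parameters (a held-out stand-in for the approximate leave-one-out idea)."""
    S_train = empirical_covariance(X_train)
    S_out = empirical_covariance(X_held_out)
    losses = []
    for a in alphas:
        _, theta = graphical_lasso(S_train, alpha=a)
        losses.append(gaussian_kl_loss(theta, S_out))
    return np.array(losses)
```

The regularization parameter minimizing the returned losses would then be selected.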

2.
Recent literature provides many computational and modeling approaches for covariance matrix estimation in penalized Gaussian graphical models, but relatively little work has been done on the choice of the tuning parameter. This paper tries to fill this gap by focusing on the problem of shrinkage parameter selection when estimating sparse precision matrices using the penalized likelihood approach. Previous approaches typically used K-fold cross-validation in this regard. In this paper, we first derive generalized approximate cross-validation for tuning parameter selection, which is not only a more computationally efficient alternative but also achieves a smaller error rate for model fitting than leave-one-out cross-validation. For consistency in the selection of nonzero entries in the precision matrix, we employ a Bayesian information criterion which provably can identify the nonzero conditional correlations in the Gaussian model. Our simulations demonstrate the general superiority of the two proposed selectors in comparison with leave-one-out cross-validation, 10-fold cross-validation and the Akaike information criterion.
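For concreteness, here is a hedged sketch of a BIC-type selector of the kind described: the degrees of freedom are taken as the number of nonzero off-diagonal entries (edges) of the estimated precision matrix. This is one common convention, assumed for illustration; the paper's exact criterion may differ.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def bic_glasso(S, n, alpha):
    """BIC-type score for a graphical-lasso precision estimate:
    -2 * loglik + log(n) * df, with df = number of estimated edges."""
    _, theta = graphical_lasso(S, alpha=alpha)
    _, logdet = np.linalg.slogdet(theta)
    loglik = 0.5 * n * (logdet - np.trace(S @ theta))
    p = S.shape[0]
    df = np.count_nonzero(theta[np.triu_indices(p, k=1)])
    return -2.0 * loglik + np.log(n) * df
```

The shrinkage parameter minimizing this score over a grid of candidate values would be retained.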

3.
Zero-inflated count models are increasingly employed in many fields when counts exhibit "zero-inflation". In modeling road traffic crashes, they have also been shown to be useful in obtaining a better model fit when zero crash counts are over-represented. However, the general specification of zero-inflated models cannot account for the multilevel data structure in crash data, which may be an important source of over-dispersion. This paper examines zero-inflated Poisson regression with site-specific random effects (REZIP), with comparison to the random-effect Poisson model and the standard zero-inflated Poisson model. A practical and flexible procedure, using Bayesian inference with a Markov chain Monte Carlo algorithm and cross-validation predictive density techniques, is applied for model calibration and suitability assessment. Using crash data in Singapore (1998–2005), the illustrative results demonstrate that the REZIP model may significantly improve the model fit and predictive performance of crash prediction models. This improvement can contribute to traffic safety management and engineering practices such as countermeasure design and safety evaluation of traffic treatments.
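The full REZIP model requires MCMC, but its zero-inflated Poisson building block is easy to state. The sketch below is a hedged illustration rather than the paper's implementation; `lam` and `pi` may be scalars or site-specific arrays.

```python
import numpy as np
from scipy.stats import poisson

def zip_loglik(y, lam, pi):
    """Zero-inflated Poisson log-likelihood:
    P(y=0) = pi + (1 - pi) * exp(-lam),
    P(y=k) = (1 - pi) * Poisson(k; lam) for k >= 1."""
    y = np.asarray(y)
    lam = np.broadcast_to(lam, y.shape).astype(float)
    pi = np.broadcast_to(pi, y.shape).astype(float)
    ll = np.where(
        y == 0,
        np.log(pi + (1.0 - pi) * np.exp(-lam)),
        np.log(1.0 - pi) + poisson.logpmf(y, lam),
    )
    return float(ll.sum())
```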

4.
In its application to variable selection in the linear model, cross-validation is traditionally applied to an individual model contained in a set of potential models. Each model in the set is cross-validated independently of the rest and the model with the smallest cross-validated sum of squares is selected. In such settings, an efficient algorithm for cross-validation must be able to add and to delete single points quickly from a mixed model. Recent work in variable selection has applied cross-validation to an entire process of variable selection, such as Backward Elimination or Stepwise regression (Thall, Simon and Grier, 1992). The cross-validated version of Backward Elimination, for example, divides the data into an estimation and validation set and performs a complete Backward Elimination on the estimation set, while computing the cross-validated sum of squares at each step with the validation set. After doing this process once, a different validation set is selected and the process is repeated. The final model selection is based on the cross-validated sum of squares for all Backward Eliminations. An optimal algorithm for this application of cross-validation need not be efficient in adding and deleting observations from a single model but must be efficient in computing the cross-validation sum of squares from a series of models using a common validation set. This paper explores such an algorithm based on the sweep operator.
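As a hedged sketch of the procedure described (not the sweep-operator algorithm the paper develops), the snippet below runs a full backward elimination on an estimation set while scoring every intermediate model on a common validation set; all function names are illustrative assumptions.

```python
import numpy as np

def val_rss(X_fit, y_fit, cols, X_score, y_score):
    """Fit OLS on (X_fit, y_fit) using columns `cols`; return RSS on the scoring set."""
    beta, *_ = np.linalg.lstsq(X_fit[:, cols], y_fit, rcond=None)
    resid = y_score - X_score[:, cols] @ beta
    return float(resid @ resid)

def cv_backward_elimination(X, y, val_idx):
    """Backward elimination on the estimation rows, recording the
    cross-validated sum of squares at each step on the common validation rows.
    Returns the sequence of (retained columns, validation RSS)."""
    est_idx = np.setdiff1d(np.arange(len(y)), val_idx)
    Xe, ye = X[est_idx], y[est_idx]
    Xv, yv = X[val_idx], y[val_idx]
    cols = list(range(X.shape[1]))
    path = [(tuple(cols), val_rss(Xe, ye, cols, Xv, yv))]
    while len(cols) > 1:
        # drop the column whose removal increases the estimation-set RSS the least
        drop = min(cols, key=lambda j: val_rss(Xe, ye, [c for c in cols if c != j], Xe, ye))
        cols = [c for c in cols if c != drop]
        path.append((tuple(cols), val_rss(Xe, ye, cols, Xv, yv)))
    return path
```

Repeating this over several validation sets and pooling the validation sums of squares gives the cross-validated version of Backward Elimination described above.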

5.
In parametric regression models the sign of a coefficient often plays an important role in its interpretation. One possible approach to model selection in these situations is to consider a loss function that formulates prediction of the sign of a coefficient as a decision problem. Taking a Bayesian approach, we extend this idea of a sign-based loss for selection to more complex situations. In generalized additive models we consider prediction of the sign of the derivative of an additive term at a set of predictors. Being able to predict the sign of the derivative at some point (that is, whether a term is increasing or decreasing) is one approach to selection of terms in additive modelling when interpretation is the main goal. For models with interactions, prediction of the sign of a higher-order derivative can be used similarly. There are many advantages to our sign-based strategy for selection: one can work in a full or encompassing model without the need to specify priors on a model space and without needing to specify priors on parameters in submodels. Also, avoiding a search over a large model space can simplify computation. We consider shrinkage prior specifications on smoothing parameters that allow for good predictive performance in models with large numbers of terms without the need for selection, and a frequentist calibration of the parameter in our sign-based loss function when it is desired to control a false selection rate for interpretation.
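A minimal sketch of the general idea, assuming posterior draws of the derivative of an additive term at a point are already available from an MCMC fit; the fixed threshold here merely stands in for the calibrated loss-function parameter mentioned in the abstract.

```python
import numpy as np

def sign_decision(derivative_draws, threshold=0.95):
    """Report the posterior probability that the derivative is positive and a
    simple sign-based decision: claim 'increasing' or 'decreasing' only when
    that probability is decisive, otherwise make no sign claim for the term."""
    p_pos = float(np.mean(np.asarray(derivative_draws) > 0))
    if p_pos >= threshold:
        return "increasing", p_pos
    if p_pos <= 1.0 - threshold:
        return "decreasing", p_pos
    return "no claim", p_pos
```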

6.
This paper presents a Bayesian method for the analysis of toxicological multivariate mortality data when the discrete mortality rate for each family of subjects at a given time depends on familial random effects and the toxicity level experienced by the family. Our aim is to model and analyse one set of such multivariate mortality data with large family sizes: the potassium thiocyanate (KSCN) tainted fish tank data of O'Hara Hines. The model used is based on a discretized hazard with additional time-varying familial random effects. A similar previous study (using sodium thiocyanate (NaSCN)) is used to construct a prior for the parameters in the current study. A simulation-based approach is used to compute posterior estimates of the model parameters and mortality rates and several other quantities of interest. Recent tools in Bayesian model diagnostics and variable subset selection have been incorporated to verify important modelling assumptions regarding the effects of time and heterogeneity among the families on the mortality rate. Further, Bayesian methods using predictive distributions are used for comparing several plausible models.

7.
To analyse the risk factors of coronary heart disease (CHD), we apply the Bayesian model averaging approach that formalizes the model selection process and deals with model uncertainty in a discrete-time survival model to the data from the Framingham Heart Study. We also use the Alternating Conditional Expectation algorithm to transform the risk factors, such that their relationships with CHD are best described, overcoming the problem of coding such variables subjectively. For the Framingham Study, the Bayesian model averaging approach, which makes inferences about the effects of covariates on CHD based on an average of the posterior distributions of the set of identified models, outperforms the stepwise method in predictive performance. We also show that age, cholesterol, and smoking are nonlinearly associated with the occurrence of CHD and that P-values from models selected from stepwise methods tend to overestimate the evidence for the predictive value of a risk factor and ignore model uncertainty.

8.
We developed a flexible non-parametric Bayesian model for regional disease-prevalence estimation based on cross-sectional data that are obtained from several subpopulations or clusters such as villages, cities, or herds. The subpopulation prevalences are modeled with a mixture distribution that allows for zero prevalence. The distribution of prevalences among diseased subpopulations is modeled as a mixture of finite Polya trees. Inferences can be obtained for (1) the proportion of diseased subpopulations in a region, (2) the distribution of regional prevalences, (3) the mean and median prevalence in the region, (4) the prevalence of any sampled subpopulation, and (5) predictive distributions of prevalences for regional subpopulations not included in the study, including the predictive probability of zero prevalence. We focus on prevalence estimation using data from a single diagnostic test, but we also briefly discuss the scenario where two conditionally dependent (or independent) diagnostic tests are used. Simulated data demonstrate the utility of our non-parametric model over parametric analysis. An example involving brucellosis in cattle is presented.

9.
In this paper we deal with a Bayesian analysis for right-censored survival data suitable for populations with a cure rate. We consider a cure rate model based on the negative binomial distribution, encompassing as a special case the promotion time cure model. Bayesian analysis is based on Markov chain Monte Carlo (MCMC) methods. We also present some discussion on model selection and an illustration with a real data set.

10.
Short-term forecasts of air pollution levels in big cities are now reported in newspapers and other media outlets. Studies indicate that even short-term exposure to high levels of an air pollutant called atmospheric particulate matter can lead to long-term health effects. Data are typically observed at fixed monitoring stations throughout a study region of interest at different time points. Statistical spatiotemporal models are appropriate for modelling these data. We consider short-term forecasting of these spatiotemporal processes by using a Bayesian kriged Kalman filtering model. The spatial prediction surface of the model is built by using the well-known method of kriging for optimum spatial prediction, and the temporal effects are analysed by using the models underlying the Kalman filtering method. The full Bayesian model is implemented by using Markov chain Monte Carlo techniques which enable us to obtain the optimal Bayesian forecasts in time and space. A new cross-validation method based on the Mahalanobis distance between the forecasts and observed data is also developed to assess the forecasting performance of the model implemented.
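The validation idea reduces to a standard distance computation. As a hedged sketch (not the paper's full cross-validation scheme), the Mahalanobis distance between held-out observations and their forecasts, scaled by the forecast covariance, can be computed as follows.

```python
import numpy as np

def mahalanobis_forecast_distance(y_obs, y_forecast, forecast_cov):
    """Mahalanobis distance D = sqrt((y - yhat)' C^{-1} (y - yhat)) between
    observed values and forecasts, using the forecast covariance matrix C;
    smaller values indicate forecasts that are closer after accounting for
    their joint uncertainty."""
    diff = np.asarray(y_obs, float) - np.asarray(y_forecast, float)
    return float(np.sqrt(diff @ np.linalg.solve(forecast_cov, diff)))
```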

11.
In segmentation problems, inference on change-point position and model selection are two difficult issues due to the discrete nature of change-points. In a Bayesian context, we derive exact, explicit and tractable formulae for the posterior distribution of variables such as the number of change-points or their positions. We also demonstrate that several classical Bayesian model selection criteria can be computed exactly. All these results are based on an efficient strategy to explore the whole segmentation space, which is very large. We illustrate our methodology on both simulated data and a comparative genomic hybridization profile.

12.
The generalized cross-validation (GCV) method has been a popular technique for selecting smoothing and penalty tuning parameters, and has been a standard tool for selecting tuning parameters in shrinkage models in recent works. Its computational ease and robustness compared with the cross-validation method make it competitive for model selection as well. It is well known that the GCV method performs well for linear estimators, which are linear functions of the response variable, such as the ridge estimator. However, it may not perform well for nonlinear estimators, since the GCV emphasizes linear characteristics by taking the trace of the projection matrix. This paper aims to explore the GCV for nonlinear estimators and to further extend the results to correlated data in longitudinal studies. We expect that the nonlinear GCV and quasi-GCV developed in this paper will provide similar tools for the selection of tuning parameters in linear penalty models and penalized GEE models.
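For reference, the standard linear-estimator case that GCV is built on can be written down directly. The sketch below computes GCV(lambda) = n * RSS / (n - tr(H))^2 for ridge regression, whose hat matrix H is linear in the response; this is the textbook form, not the nonlinear or quasi-GCV extensions the paper develops.

```python
import numpy as np

def gcv_ridge(X, y, lam):
    """Generalized cross-validation score for ridge regression with penalty
    `lam`: GCV = n * RSS / (n - tr(H))^2, where H = X (X'X + lam I)^{-1} X'."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y
    rss = float(resid @ resid)
    return n * rss / (n - np.trace(H)) ** 2
```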

13.
Bayesian model building techniques are developed for data with a strong time series structure and possibly exogenous explanatory variables that have strong explanatory and predictive power. The emphasis is on determining whether, when the data have a strong time series structure, there are any explanatory variables that should also be included in the model. We use a time series model that is linear in past observations and that can capture both stochastic and deterministic trend, seasonality and serial correlation. We propose plotting absolute predictive error against predictive standard deviation. A series of such plots is utilized to determine which of several nested and non-nested models is optimal in terms of minimizing the dispersion of the predictive distribution and restricting predictive outliers. We apply the techniques to modelling monthly counts of fatal road crashes in Australia, where economic, consumption and weather variables are available, and we find that three such variables should be included in addition to the time series filter. The approach leads to graphical techniques for determining the strength of relationships between the dependent variable and covariates and for detecting model inadequacy, as well as for determining useful numerical summaries.

14.
Kontkanen P., Myllymäki P., Silander T., Tirri H., Grünwald P. Statistics and Computing, 2000, 10(1): 39–54.
In this paper we are interested in discrete prediction problems in a decision-theoretic setting, where the task is to compute the predictive distribution for a finite set of possible alternatives. This question is first addressed in a general Bayesian framework, where we consider a set of probability distributions defined by some parametric model class. Given a prior distribution on the model parameters and a set of sample data, one possible approach for determining a predictive distribution is to fix the parameters to the instantiation with the maximum a posteriori probability. A more accurate predictive distribution can be obtained by computing the evidence (marginal likelihood), i.e., the integral over all the individual parameter instantiations. As an alternative to these two approaches, we demonstrate how to use Rissanen's new definition of stochastic complexity for determining predictive distributions, and show how the evidence predictive distribution with the Jeffreys prior approaches the new stochastic complexity predictive distribution in the limit with increasing amounts of sample data. To compare the alternative approaches in practice, each of the predictive distributions discussed is instantiated in the Bayesian network model family case. In particular, to determine the Jeffreys prior for this model family, we show how to compute the (expected) Fisher information matrix for a fixed but arbitrary Bayesian network structure. In the empirical part of the paper the predictive distributions are compared using the simple tree-structured Naive Bayes model, which is used in the experiments for computational reasons. The experimentation with several public domain classification datasets suggests that the evidence approach produces the most accurate predictions in the log-score sense. The evidence-based methods are also quite robust in the sense that they predict surprisingly well even when only a small fraction of the full training set is used.
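The contrast between the plug-in (MAP) and evidence predictive distributions is easiest to see in the simplest Bernoulli/Beta case, shown below with the Jeffreys prior Beta(1/2, 1/2). This is an illustrative, hedged reduction, not the Bayesian network setting of the paper.

```python
def map_predictive(k, n, a=0.5, b=0.5):
    """Plug-in predictive P(next = 1) using the MAP estimate under a
    Beta(a, b) prior (Jeffreys prior by default); assumes the posterior
    mode is interior, i.e. k + a > 1 and n - k + b > 1."""
    return (k + a - 1.0) / (n + a + b - 2.0)

def evidence_predictive(k, n, a=0.5, b=0.5):
    """Evidence (marginal-likelihood) predictive P(next = 1): the posterior
    mean (k + a) / (n + a + b) under the same Beta(a, b) prior."""
    return (k + a) / (n + a + b)

# With few observations the two differ noticeably; they agree asymptotically,
# echoing the limiting behaviour discussed in the abstract.
print(map_predictive(2, 3), evidence_predictive(2, 3))          # small sample
print(map_predictive(600, 1000), evidence_predictive(600, 1000))
```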

15.
Survival data obtained from prevalent cohort study designs are often subject to length-biased sampling. Frequentist methods, including estimating equation approaches as well as full likelihood methods, are available for assessing covariate effects on survival from such data. Bayesian methods allow a probability interpretation of the parameters of interest, and may easily provide the predictive distribution for future observations while incorporating weak prior knowledge on the baseline hazard function. There is a lack of Bayesian methods for analyzing length-biased data. In this paper, we propose Bayesian methods for analyzing length-biased data under a proportional hazards model. The prior distribution for the cumulative hazard function is specified semiparametrically using I-splines. Bayesian conditional and full likelihood approaches are developed for analyzing simulated and real data.

16.
Many credit risk models are based on the selection of a single logistic regression model on which to base parameter estimation. When many competing models are available, and without enough guidance from economic theory, model averaging represents an appealing alternative to the selection of single models. Although model averaging approaches have been present in statistics for many years, they have only recently begun to receive attention in economics and finance applications. This contribution shows how Bayesian model averaging can be applied to credit risk estimation, a research area that has received a great deal of attention recently, especially in the light of the global financial crisis of the last few years and the related attempts to regulate international finance. The paper considers the use of logistic regression models under the Bayesian model averaging paradigm. We argue that Bayesian model averaging is not only more correct from a theoretical viewpoint, but also slightly superior, in terms of predictive performance, to single selected models.
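A minimal, hedged sketch of the idea: fit one logistic regression per candidate predictor set, approximate posterior model probabilities with BIC weights (an assumption made here for tractability, not necessarily the paper's exact computation), and average the predicted default probabilities.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def bma_logistic(X, y, X_new, feature_sets):
    """Bayesian model averaging sketch over logistic regressions defined by
    `feature_sets` (lists of column indices).  Returns the BIC-based model
    weights and the model-averaged probabilities for X_new."""
    n = len(y)
    bics, preds = [], []
    for cols in feature_sets:
        # large C gives an (almost) unpenalized maximum-likelihood fit
        model = LogisticRegression(C=1e6, max_iter=1000).fit(X[:, cols], y)
        nll = log_loss(y, model.predict_proba(X[:, cols]), normalize=False)
        k = len(cols) + 1                      # coefficients plus intercept
        bics.append(2.0 * nll + k * np.log(n))
        preds.append(model.predict_proba(X_new[:, cols])[:, 1])
    bics = np.asarray(bics)
    weights = np.exp(-0.5 * (bics - bics.min()))
    weights /= weights.sum()
    return weights, np.average(np.vstack(preds), axis=0, weights=weights)
```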

17.
This article explores an 'Edge Selection' procedure to fit an undirected graph to a given data set. Undirected graphs are routinely used to represent, model and analyse associative relationships among the entities on a social, biological or genetic network. Our proposed method combines the computational efficiency of least angle regression and at the same time ensures symmetry of the selected adjacency matrix. Various local and global properties of the edge selection path are explored analytically. In particular, a suitable parameter that controls the amount of shrinkage is identified, and we consider several cross-validation techniques to choose an accurate predictive model on the path. The proposed method is illustrated with a detailed simulation study involving models with various levels of sparsity and variability in the nodal degree distributions. Finally, our method is used to select undirected graphs from various real data sets. We employ it to identify the regulatory network of isoprenoid pathways from gene-expression data and also to identify a genetic network from high-dimensional breast cancer study data.

18.
This article combines the best of both objective and subjective Bayesian inference in specifying priors for inequality- and equality-constrained analysis of variance models. Objectivity can be found in the use of training data to specify a prior distribution; subjectivity can be found in restrictions on the prior used to formulate models. The aim of this article is to find the best model in a set of models specified using inequality and equality constraints on the model parameters. For the evaluation of the models an encompassing prior approach is used. The advantage of this approach is that only a prior for the unconstrained encompassing model needs to be specified. The priors for all constrained models can be derived from this encompassing prior. Different choices for this encompassing prior will be considered and evaluated.
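For inequality-constrained models, the encompassing-prior approach has a well-known Monte Carlo form: the Bayes factor of a constrained model against the encompassing model is the ratio of the posterior to the prior probability that the constraint holds under the unconstrained model. The sketch below assumes draws from both distributions are available; equality constraints require density estimates instead and are not covered here.

```python
import numpy as np

def encompassing_bayes_factor(prior_draws, posterior_draws, constraint):
    """Estimate BF(constrained model vs. encompassing model) as the fraction
    of posterior draws satisfying the inequality constraint divided by the
    fraction of prior draws satisfying it (both sets of draws come from the
    unconstrained, encompassing model)."""
    f_prior = np.mean([bool(constraint(t)) for t in prior_draws])
    f_post = np.mean([bool(constraint(t)) for t in posterior_draws])
    return f_post / f_prior

# Example constraint for three group means ordered as mu1 < mu2 < mu3,
# with each draw an array of the three means:
ordered_means = lambda mu: mu[0] < mu[1] < mu[2]
```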

19.
Predictive criteria, including the adjusted squared multiple correlation coefficient, the adjusted concordance correlation coefficient, and the predictive error sum of squares, are available for model selection in the linear mixed model. These criteria all involve some sort of comparison of observed values and predicted values, adjusted for the complexity of the model. The predicted values can be conditional on the random effects or marginal, i.e., based on averages over the random effects. These criteria have not been investigated for model selection success.

We used simulations to investigate selection success rates for several versions of these predictive criteria as well as several versions of Akaike's information criterion and the Bayesian information criterion, and the pseudo F-test. The simulations involved the simple scenario of selection of a fixed parameter when the covariance structure is known.

Several variance–covariance structures were used. For compound symmetry structures, higher success rates for the predictive criteria were obtained when marginal rather than conditional predicted values were used. Information criteria had higher success rates when a certain term (normally left out in SAS MIXED computations) was included in the criteria. Various penalty functions were used in the information criteria, but these had little effect on success rates. The pseudo F-test performed as expected. For the autoregressive with random effects structure, the results were the same except that success rates were higher for the conditional version of the predictive error sum of squares.

Characteristics of the data, such as the covariance structure, parameter values, and sample size, greatly impacted performance of various model selection criteria. No one criterion was consistently better than the others.
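To make the criteria concrete, here is a hedged sketch that computes textbook versions of the predictive criteria compared above from a vector of observed values and a vector of predicted values (conditional or marginal); the article's adjusted versions may differ in detail.

```python
import numpy as np

def predictive_criteria(y, yhat, n_params):
    """Adjusted R-squared, Lin's concordance correlation coefficient, and a
    prediction sum of squares computed from observations `y` and predictions
    `yhat`; `n_params` is the number of estimated fixed-effect parameters.
    When `yhat` are genuine predictions (deletion or out-of-sample), the
    residual sum of squares plays the role of PRESS."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    n = len(y)
    ss_res = float(np.sum((y - yhat) ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)
    ccc = (2.0 * np.cov(y, yhat, bias=True)[0, 1]
           / (y.var() + yhat.var() + (y.mean() - yhat.mean()) ** 2))
    return {"adjusted_R2": adj_r2, "CCC": float(ccc), "PRESS": ss_res}
```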

20.
First, a comprehensive treatment of the hierarchical-conjugate Bayesian predictive approach to binary survey data is presented, encompassing simple random, stratified, cluster, and two-stage sampling, as well as two-stage sampling within strata. For the case of two-stage sampling within strata when there is more than one variable of stratification, analysis using an unsaturated logit linear model on the prior means is proposed. This allows for cells containing no sampled clusters. Formulas for posterior predictive means, variances, and covariances of the numbers of successes in unsampled portions of clusters are presented in terms of posterior expectations of certain functions of hyperparameters; these may be evaluated by existing methods. The technique is illustrated using a small subset of Canada Youth & AIDS Study data. A sample of students within each of various selected school boards was chosen and interviewed via questionnaire. The boards were stratified/poststratified in two dimensions, but some of the resulting cells contained no data. The additive logit linear model on the prior means produced estimates and posterior variances for boards in all cells. The data showed the additive model to be plausible.
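As a much-reduced, hedged illustration of predicting successes in the unsampled portion of a cluster, the sketch below gives the posterior predictive mean and variance for a single cluster under a conjugate Beta prior on the cluster's success probability; the paper's hierarchical, multi-stage formulas are considerably richer.

```python
def unsampled_predictive(k, n, N, a=1.0, b=1.0):
    """Posterior predictive mean and variance of the number of successes in
    the N - n unsampled units of a cluster, given k successes in n sampled
    units and a Beta(a, b) prior on the success probability (beta-binomial
    predictive distribution)."""
    m = N - n                               # unsampled units
    a_post, b_post = a + k, b + (n - k)     # posterior Beta parameters
    mean = m * a_post / (a_post + b_post)
    var = (m * a_post * b_post * (a_post + b_post + m)
           / ((a_post + b_post) ** 2 * (a_post + b_post + 1)))
    return mean, var
```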
