Similar Literature
20 similar documents found.
1.
As is the case in many studies, the data collected are limited and an exact value is recorded only if it falls within an interval range. Hence, the responses can be left, interval, or right censored. Linear (and nonlinear) regression models are routinely used to analyze these types of data and are based on normality assumptions for the error terms. However, those analyses may not provide robust inference when the normality assumptions are questionable. In this article, we develop a Bayesian framework for censored linear regression models by replacing the Gaussian assumptions for the random errors with scale mixtures of normal (SMN) distributions. The SMN is an attractive class of symmetric heavy-tailed densities that includes the normal, Student-t, Pearson type VII, slash, and contaminated normal distributions as special cases. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo algorithm is introduced to carry out posterior inference. A new hierarchical prior distribution is suggested for the degrees of freedom parameter in the Student-t distribution. The likelihood function is utilized not only to compute Bayesian model selection measures but also to develop Bayesian case-deletion influence diagnostics based on the q-divergence measure. The proposed Bayesian methods are implemented in the R package BayesCR. The newly developed procedures are illustrated with applications using real and simulated data.
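For orientation, the scale-mixture-of-normals construction referred to in this abstract has a standard hierarchical form; the sketch below uses generic notation and is not taken from the paper or the BayesCR package:

```latex
% Generic SMN hierarchy for the regression errors (standard form, notation mine)
y_i = x_i^{\top}\beta + \varepsilon_i, \qquad
\varepsilon_i \mid u_i \sim \mathrm{N}\!\left(0,\; \sigma^{2}/u_i\right), \qquad
u_i \sim H(\cdot \mid \nu).
% Special cases: u_i \equiv 1 gives normal errors;
% u_i \sim \mathrm{Gamma}(\nu/2, \nu/2) gives Student-t errors with \nu degrees of freedom;
% u_i \sim \mathrm{Beta}(\nu, 1) gives the slash distribution.
```

Conditioning on the mixing variables restores a Gaussian likelihood, which is what makes Gibbs-type MCMC updates tractable for this class of models.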

2.
The authors discuss a general class of hierarchical ordinal regression models that includes both location and scale parameters, allows link functions to be selected adaptively as finite mixtures of normal cumulative distribution functions, and incorporates flexible correlation structures for the latent scale variables. Exploiting the well-known correspondence between ordinal regression models and parametric ROC (Receiver Operating Characteristic) curves makes it possible to use a hierarchical ROC (HROC) analysis to study multilevel clustered data in diagnostic imaging studies. The authors present a Bayesian approach to model fitting using Markov chain Monte Carlo methods and discuss HROC applications to the analysis of data from two diagnostic radiology studies involving multiple interpreters.
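The correspondence exploited here is the classical one between a latent-normal (probit-link) ordinal model and the binormal ROC curve; the following is the textbook form with notation of my own choosing, not the paper's:

```latex
% Binormal ROC implied by a latent-normal ordinal model (standard result)
% Latent scores: Y^* \sim N(\mu_0, \sigma_0^2) for non-diseased cases,
%                Y^* \sim N(\mu_1, \sigma_1^2) for diseased cases.
\mathrm{ROC}(t) = \Phi\!\left(a + b\,\Phi^{-1}(t)\right), \qquad
a = \frac{\mu_1 - \mu_0}{\sigma_1}, \qquad b = \frac{\sigma_0}{\sigma_1},
\qquad t \in (0,1).
% Fitting the ordinal regression model therefore yields the ROC parameters (a, b) directly.
```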

3.
The paper proposes two Bayesian approaches to non-parametric monotone function estimation. The first approach uses a hierarchical Bayes framework and a characterization of smooth monotone functions given by Ramsay that allows unconstrained estimation. The second approach uses a Bayesian regression spline model of Smith and Kohn with a mixture distribution of constrained normal distributions as the prior for the regression coefficients to ensure the monotonicity of the resulting function estimate. The small-sample properties of the two function estimators across a range of functions are provided via simulation and compared with existing methods. Asymptotic results are also given that show that Bayesian methods provide consistent function estimators for a large class of smooth functions. An example is provided involving economic demand functions that illustrates the application of the constrained regression spline estimator in the context of a multiple-regression model where two functions are constrained to be monotone.
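Ramsay's characterization mentioned in the first approach expresses any smooth, strictly monotone function through an unconstrained function; a sketch of that representation (standard form, notation mine):

```latex
% Ramsay-type representation of a smooth monotone function
m(x) = \beta_0 + \beta_1 \int_{x_0}^{x} \exp\{W(u)\}\, du
% with W an unconstrained smooth function; m is strictly increasing whenever
% \beta_1 > 0, so W can be estimated without any shape constraints.
```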

4.
Bridge penalized regression has many desirable statistical properties, such as unbiasedness, sparseness, and the 'oracle' property. In the Bayesian framework, the bridge regularization penalty can be implemented through a generalized Gaussian distribution (GGD) prior. In this paper, we incorporate the Bayesian bridge-randomized penalty and its adaptive version into quantile regression (QR) models with autoregressive perturbations to conduct Bayesian penalized estimation. Employing the working likelihood of asymmetric Laplace distribution (ALD) perturbations, Bayesian joint hierarchical models are established. Based on the mixture representations of the ALD and the GGD priors on the coefficients, hybrid algorithms based on the Gibbs sampler and the Metropolis-Hastings sampler are provided to conduct fully Bayesian posterior estimation. Finally, the proposed Bayesian procedures are illustrated by simulation examples and applied to a real data application on electricity consumption.
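The link between the bridge penalty and the GGD prior is the usual penalty-as-negative-log-prior identification; a sketch in generic notation (not the paper's):

```latex
% Bridge penalty <-> generalized Gaussian (GGD) prior, up to normalizing constants
\pi(\beta_j \mid \lambda, \alpha) \propto \exp\!\left(-\lambda |\beta_j|^{\alpha}\right)
\;\;\Longleftrightarrow\;\;
-\log \pi(\beta \mid \lambda, \alpha) = \lambda \sum_j |\beta_j|^{\alpha} + \text{const},
\qquad 0 < \alpha \le 2.
% \alpha = 1 recovers the Laplace/lasso prior and \alpha = 2 the Gaussian/ridge prior;
% the posterior mode under this prior coincides with the bridge-penalized estimate.
```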

5.
This article considers Bayesian inference in the interval constrained normal linear regression model. Whereas much of the previous literature has concentrated on the case where the prior constraint is correctly specified, our framework explicitly allows for the possibility of an invalid constraint. We adopt a non-informative prior, and uncertainty concerning the interval restriction is represented by two prior odds ratios. The sampling-theoretic risk of the resulting Bayesian interval pre-test estimator is derived, illustrated and explored.

6.
This paper considers quantile regression models using an asymmetric Laplace distribution from a Bayesian point of view. We develop a simple and efficient Gibbs sampling algorithm for fitting the quantile regression model based on a location-scale mixture representation of the asymmetric Laplace distribution. It is shown that the resulting Gibbs sampler can be accomplished by sampling from either a normal or a generalized inverse Gaussian distribution. We also discuss some possible extensions of our approach, including the incorporation of a scale parameter, the use of a double exponential prior, and a Bayesian analysis of Tobit quantile regression. The proposed methods are illustrated by both simulated and real data.
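The location-scale mixture representation that drives the Gibbs sampler can be written, in one common unit-scale parameterization (notation mine, not necessarily the paper's), as:

```latex
% Location-scale mixture of the asymmetric Laplace distribution at quantile level p
\varepsilon_i \overset{d}{=} \theta v_i + \tau \sqrt{v_i}\, z_i, \qquad
v_i \sim \mathrm{Exp}(1), \quad z_i \sim \mathrm{N}(0,1),
\qquad
\theta = \frac{1-2p}{p(1-p)}, \qquad \tau^2 = \frac{2}{p(1-p)}.
% Conditional on v_i the model is Gaussian, so \beta has a normal full conditional,
% while each v_i has a generalized inverse Gaussian (GIG) full conditional.
```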

7.
We present a Bayesian analysis framework for matrix-variate normal data with dependency structures induced by rows and columns. This framework of matrix normal models includes prior specifications, posterior computation using Markov chain Monte Carlo methods, evaluation of prediction uncertainty, model structure search, and extensions to multidimensional arrays. Compared with Bayesian probabilistic matrix factorization, which integrates a Gaussian prior for a single row of the data matrix, our proposed model, namely Bayesian hierarchical kernelized probabilistic matrix factorization, imposes Gaussian process priors over multiple rows of the matrix. Hence, the learned model explicitly captures the underlying correlation among the rows and the columns. In addition, our method requires no specific assumptions, such as independence of the latent factors for rows and columns, which provides more flexibility for modeling real data than existing approaches. Finally, the proposed framework can be adapted to a wide range of applications, including multivariate analysis, time series, and spatial modeling. Experiments highlight the superiority of the proposed model in handling model uncertainty and model optimization.
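For reference, the matrix-variate normal family underlying this framework is defined through a Kronecker-structured covariance; this is the standard definition, with notation of my own choosing:

```latex
% Matrix-variate normal: row covariance U (n x n), column covariance V (p x p)
X \sim \mathrm{MN}_{n \times p}(M, U, V)
\;\;\Longleftrightarrow\;\;
\mathrm{vec}(X) \sim \mathrm{N}_{np}\!\left(\mathrm{vec}(M),\; V \otimes U\right).
% Separate row and column covariances are what allow such models to capture
% correlation among rows and among columns of the data matrix simultaneously.
```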

8.
A Bayesian analysis is provided for the Wilcoxon signed-rank statistic (T+). The Bayesian analysis is based on a sign-bias parameter φ on the (0, 1) interval. For the case of a uniform prior probability distribution for φ and for small sample sizes (i.e., 6 ≤ n ≤ 25), values for the statistic T+ are computed that enable probabilistic statements about φ. For larger sample sizes, approximations are provided for the asymptotic likelihood function P(T+|φ) as well as for the posterior distribution P(φ|T+). Power analyses are examined both for properly specified Gaussian sampling and for misspecified non-Gaussian models. The new Bayesian metric has high power efficiency in the range of 0.9–1 relative to a standard t test when there is Gaussian sampling. But if the sampling is from an unknown and misspecified distribution, then the new statistic still has high power; in some cases, the power can be higher than that of the t test (especially for probability mixtures and heavy-tailed distributions). The new Bayesian analysis is thus a useful and robust method for applications where the usual parametric assumptions are questionable. These properties further enable a way to do a generic Bayesian analysis for many non-Gaussian distributions that currently lack a formal Bayesian model.

9.
Bayesian quantile regression for single-index models
Using an asymmetric Laplace distribution, which provides a mechanism for Bayesian inference of quantile regression models, we develop a fully Bayesian approach to fitting single-index models in conditional quantile regression. In this work, we use a Gaussian process prior for the unknown nonparametric link function and a Laplace distribution on the index vector, with the latter motivated by the recent popularity of the Bayesian lasso idea. We design a Markov chain Monte Carlo algorithm for posterior inference. Careful consideration of the singularity of the kernel matrix and the tractability of some of the full conditional distributions leads to a partially collapsed approach where the nonparametric link function is integrated out in some of the sampling steps. Our simulations demonstrate the superior performance of the Bayesian method versus the frequentist approach. The method is further illustrated by an application to the hurricane data.

10.
In this note the problem of nonparametric regression function estimation in a random design regression model with Gaussian errors is considered from the Bayesian perspective. It is assumed that the regression function belongs to a class of functions with a known degree of smoothness. A prior distribution on the given class can be induced by a prior on the coefficients in a series expansion of the regression function through an orthonormal system. The rate of convergence of the resulting posterior distribution is employed to provide a measure of the accuracy of the Bayesian estimation procedure defined by the posterior expected regression function. We show that the Bayes estimator achieves the optimal minimax rate of convergence under mean integrated squared error over the involved class of regression functions, thus being comparable to other popular frequentist regression estimators.
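The optimal minimax rate invoked here is the classical one for α-smooth regression functions under mean integrated squared error; stated in generic form (not taken from the note itself):

```latex
% Classical minimax MISE rate over an \alpha-smooth (e.g., Holder or Sobolev) class F_alpha
\inf_{\hat f}\; \sup_{f \in \mathcal{F}_\alpha}\;
\mathbb{E}_f \int \left(\hat f(x) - f(x)\right)^2 dx \;\asymp\; n^{-2\alpha/(2\alpha+1)}
% so a posterior mean (or any estimator) converging at this rate is rate-optimal
% for the smoothness class under mean integrated squared error.
```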

11.
Empirical estimates of source statistical economic data such as trade flows, greenhouse gas emissions, or employment figures are always subject to uncertainty (stemming from measurement errors or confidentiality) but information concerning that uncertainty is often missing. This article uses concepts from Bayesian inference and the maximum entropy principle to estimate the prior probability distribution, uncertainty, and correlations of source data when such information is not explicitly provided. In the absence of additional information, an isolated datum is described by a truncated Gaussian distribution, and if an uncertainty estimate is missing, its prior equals the best guess. When the sum of a set of disaggregate data is constrained to match an aggregate datum, it is possible to determine the prior correlations among disaggregate data. If aggregate uncertainty is missing, all prior correlations are positive. If aggregate uncertainty is available, prior correlations can be either all positive, all negative, or a mix of both. An empirical example is presented, which reports relative uncertainties and correlation priors for the County Business Patterns database. In this example, relative uncertainties range from 1% to 80% and 20% of data pairs exhibit correlations below −0.9 or above 0.9. Supplementary materials for this article are available online.
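The claim that an isolated datum is described by a truncated Gaussian follows from the maximum entropy principle with first- and second-moment constraints on a bounded support; a sketch of that argument in my own notation:

```latex
% Maximum entropy density on [a,b] with fixed mean and variance:
% maximizing -\int p \log p subject to the two moment constraints gives,
% via Lagrange multipliers,
p(x) \propto \exp\!\left(-\lambda_1 x - \lambda_2 x^2\right), \qquad x \in [a, b]
% i.e., a Gaussian density truncated to [a,b], with \lambda_1, \lambda_2 chosen
% to match the specified mean (best guess) and variance (uncertainty).
```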

12.
This paper deals with Bayes, robust Bayes, and minimax predictions in a subfamily of scale parameters under an asymmetric precautionary loss function. In Bayesian statistical inference, the goal is to obtain optimal rules under a specified loss function and an explicit prior distribution over the parameter space. In practice, however, we may not be able to specify the prior fully; or, when a problem must be solved by two statisticians, they may agree on the choice of the prior but not on the values of the hyperparameters. A common approach to prior uncertainty in Bayesian analysis is to choose a class of prior distributions and compute some functional quantity over it. This is known as robust Bayesian analysis, which represents prior knowledge through a class of priors Γ so as to guard globally against bad choices of hyperparameters. Under a scale-invariant precautionary loss function, we consider robust Bayes predictions of Y based on X. We carry out a simulation study and a real data analysis to illustrate the practical utility of the prediction procedure.

13.
A novel class of hierarchical nonparametric Bayesian survival regression models for time-to-event data with uninformative right censoring is introduced. The survival curve is modeled as a random function whose prior distribution is defined using the beta-Stacy (BS) process. The prior mean of each survival probability and its prior variance are linked to a standard parametric survival regression model. This nonparametric survival regression can thus be anchored to any reference parametric form, such as a proportional hazards or an accelerated failure time model, allowing the predictive survival probabilities to depart substantially from the reference when it is not supported by the data. Also, under this formulation the predictive survival probabilities will be close to the empirical survival distribution near the mode of the reference model, and they will be shrunken towards its probability density in the tails of the empirical distribution.

14.
This paper surveys various shrinkage, smoothing and selection priors from a unifying perspective and shows how to combine them for Bayesian regularisation in the general class of structured additive regression models. As a common feature, all regularisation priors are conditionally Gaussian, given further parameters regularising model complexity. Hyperpriors for these parameters encourage shrinkage, smoothness or selection. It is shown that these regularisation (log-) priors can be interpreted as Bayesian analogues of several well-known frequentist penalty terms. Inference can be carried out with unified and computationally efficient MCMC schemes, estimating regularised regression coefficients and basis function coefficients simultaneously with complexity parameters and measuring uncertainty via corresponding marginal posteriors. For variable and function selection we discuss several variants of spike and slab priors which can also be cast into the framework of conditionally Gaussian priors. The performance of the Bayesian regularisation approaches is demonstrated in a hazard regression model and a high-dimensional geoadditive regression model.
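The reading of regularisation priors as Bayesian analogues of frequentist penalties amounts to identifying the negative log-prior with the penalty term; two familiar instances in generic notation (the paper covers a much broader class):

```latex
% Conditionally Gaussian (ridge-type) prior <-> quadratic penalty
\beta_j \mid \tau^2 \sim \mathrm{N}(0, \tau^2)
\;\Rightarrow\;
-\log \pi(\beta \mid \tau^2) = \frac{1}{2\tau^2} \sum_j \beta_j^2 + \text{const}
% Laplace prior <-> lasso penalty
\beta_j \mid \lambda \sim \mathrm{Laplace}(0, 1/\lambda)
\;\Rightarrow\;
-\log \pi(\beta \mid \lambda) = \lambda \sum_j |\beta_j| + \text{const}
% Hyperpriors on \tau^2 or \lambda then control the amount of shrinkage adaptively.
```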

15.
Multivariate model validation is a complex decision-making problem involving comparison of multiple correlated quantities, based upon the available information and prior knowledge. This paper presents a Bayesian risk-based decision method for validation assessment of multivariate predictive models under uncertainty. A generalized likelihood ratio is derived as a quantitative validation metric based on Bayes' theorem and a Gaussian distribution assumption for the errors between validation data and model prediction. The multivariate model is then assessed based on the comparison of the likelihood ratio with a Bayesian decision threshold, a function of the decision costs and the prior probability of each hypothesis. The probability density function of the likelihood ratio is constructed using the statistics of multiple response quantities and Monte Carlo simulation. The proposed methodology is implemented in the validation of a transient heat conduction model, using a multivariate data set from experiments. The Bayesian methodology provides a quantitative approach to facilitate rational decisions in multivariate model assessment under uncertainty.
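Comparing a likelihood ratio against a cost- and prior-dependent threshold is the standard Bayes decision rule for two hypotheses; a sketch in generic notation (not necessarily the paper's):

```latex
% Bayes decision rule for H_0 vs H_1 with costs c_{ij} of deciding H_i when H_j is true
\Lambda(D) = \frac{p(D \mid H_1)}{p(D \mid H_0)}
\;\;\gtrless\;\;
\lambda^{*} = \frac{(c_{10} - c_{00})\, P(H_0)}{(c_{01} - c_{11})\, P(H_1)}
% decide H_1 if \Lambda(D) > \lambda^{*}, otherwise H_0; the threshold collects the
% prior probabilities and the relative costs of the two types of error.
```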

16.
We consider a general class of prior distributions for nonparametric Bayesian estimation which uses finite random series with a random number of terms. A prior is constructed through distributions on the number of basis functions and the associated coefficients. We derive a general result on adaptive posterior contraction rates for all smoothness levels of the target function in the true model by constructing an appropriate 'sieve' and applying the general theory of posterior contraction rates. We apply this general result to several statistical problems such as density estimation, various nonparametric regressions, classification, spectral density estimation and functional regression. The prior can be viewed as an alternative to the commonly used Gaussian process prior, but properties of the posterior distribution can be analysed by relatively simpler techniques. An interesting approximation property of B-spline basis expansion established in this paper allows a canonical choice of prior on coefficients in a random series and allows a simple computational approach without using Markov chain Monte Carlo methods. A simulation study is conducted to show that the accuracies of the Bayesian estimators based on the random series prior and the Gaussian process prior are comparable. We apply the method on Tecator data using functional regression models.
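The finite random series prior described here follows a simple generic template; a sketch with my own notation, not the paper's:

```latex
% Finite random series prior with a random number of basis terms
f(x) = \sum_{j=1}^{J} \theta_j\, \psi_j(x), \qquad
J \sim \pi_J \ (\text{e.g., geometric or Poisson}), \qquad
\theta_1, \ldots, \theta_J \mid J \sim \pi_\theta \ \text{on } \mathbb{R}^{J}
% with \psi_1, \psi_2, \ldots a fixed basis such as B-splines; adaptation over
% smoothness levels comes from suitable tail conditions on \pi_J and \pi_\theta.
```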

17.
Empirical Bayes estimates of the local false discovery rate can reflect uncertainty about the estimated prior by supplementing their Bayesian posterior probabilities with confidence levels as posterior probabilities. This use of coherent fiducial inference with hierarchical models generates set estimators that propagate uncertainty to varying degrees. Some of the set estimates approach estimates from plug-in empirical Bayes methods for high numbers of comparisons and can come close to the usual confidence sets given a sufficiently low number of comparisons.
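For context, the local false discovery rate being estimated is the usual two-groups posterior probability of a null case; this is the standard definition, not specific to this paper:

```latex
% Local false discovery rate under the two-groups model
f(z) = \pi_0 f_0(z) + (1 - \pi_0) f_1(z), \qquad
\mathrm{lfdr}(z) = P(\text{null} \mid Z = z) = \frac{\pi_0 f_0(z)}{f(z)}
% where \pi_0 is the null proportion, f_0 the null density, and f_1 the alternative
% density; empirical Bayes methods plug in estimates of \pi_0 and f, which is the
% source of the estimation uncertainty the paper propagates.
```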

18.
Normality and independence of error terms are typical assumptions for partial linear models. However, these assumptions may be unrealistic in many fields, such as economics, finance and biostatistics. In this paper, a Bayesian analysis for a partial linear model with first-order autoregressive errors belonging to the class of scale mixtures of normal distributions is studied in detail. The proposed model provides a useful generalization of the symmetrical linear regression model with independent errors, since the distribution of the error term covers both correlated and thick-tailed distributions, and has a convenient hierarchical representation allowing easy implementation of a Markov chain Monte Carlo scheme. In order to examine the robustness of the model against outlying and influential observations, a Bayesian case-deletion influence diagnostic based on the Kullback–Leibler (K–L) divergence is presented. The proposed method is applied to monthly and daily returns of two Chilean companies.

19.
A regression model with skew-normal errors provides a useful extension of ordinary normal regression models when the dataset under consideration involves asymmetric outcomes. In this article, we explore the use of Markov chain Monte Carlo (MCMC) methods to develop a Bayesian analysis for joint location and scale nonlinear models with skew-normal errors, which relax the normality assumption and include the normal model as a special case. The main advantage of this class of distributions is that it has a nice hierarchical representation that allows the implementation of MCMC methods to simulate samples from the joint posterior distribution. Finally, simulation studies and a real example are used to illustrate the proposed methodology.
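The "nice hierarchical representation" of the skew-normal alluded to here is its standard stochastic representation; a sketch in generic notation:

```latex
% Stochastic / hierarchical representation of Y ~ SN(\mu, \sigma^2, \lambda)
Y \overset{d}{=} \mu + \sigma\left(\delta\, |U_0| + \sqrt{1-\delta^2}\; U_1\right), \qquad
U_0, U_1 \overset{iid}{\sim} \mathrm{N}(0,1), \qquad
\delta = \frac{\lambda}{\sqrt{1+\lambda^2}}
% Equivalently, Y \mid T = t \sim N(\mu + \sigma\delta t,\; \sigma^2(1-\delta^2)) with
% T a standard normal truncated to t > 0, which gives conjugate-style MCMC updates.
```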

20.
We study objective Bayesian inference for linear regression models with residual errors distributed according to the class of two-piece scale mixtures of normal distributions. These models allow for capturing departures from the usual assumption of normality of the errors in terms of heavy tails, asymmetry, and certain types of heteroscedasticity. We propose a general non-informative, scale-invariant, prior structure and provide sufficient conditions for the propriety of the posterior distribution of the model parameters, which cover cases when the response variables are censored. These results allow us to apply the proposed models in the context of survival analysis. This paper represents an extension to the Bayesian framework of the models proposed in [16]. We present a simulation study that shows good frequentist properties of the posterior credible intervals as well as of the point estimators associated with the proposed priors. We illustrate the performance of these models with real data in the context of survival analysis of cancer patients.
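The two-piece scale mixtures of normals combine a symmetric SMN density with separate scales on each side of the mode; the standard two-piece construction (notation mine, the paper may use a different parameterization):

```latex
% Two-piece density built from a symmetric density f (here, a scale mixture of normals)
s(y \mid \mu, \sigma_1, \sigma_2) =
\frac{2}{\sigma_1 + \sigma_2}
\left[
f\!\left(\frac{y-\mu}{\sigma_1}\right) \mathbf{1}\{y < \mu\}
+ f\!\left(\frac{y-\mu}{\sigma_2}\right) \mathbf{1}\{y \ge \mu\}
\right]
% \sigma_1 \ne \sigma_2 induces asymmetry, while the SMN choice of f controls tail heaviness.
```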
