Similar Literature
A total of 20 similar documents were found.
1.
The Lasso has sparked interest in the use of penalization of the log‐likelihood for variable selection, as well as for shrinkage. We are particularly interested in the more‐variables‐than‐observations case of characteristic importance for modern data. The Bayesian interpretation of the Lasso as the maximum a posteriori estimate of the regression coefficients, which have been given independent, double exponential prior distributions, is adopted. Generalizing this prior provides a family of hyper‐Lasso penalty functions, which includes the quasi‐Cauchy distribution of Johnstone and Silverman as a special case. The properties of this approach, including the oracle property, are explored, and an EM algorithm for inference in regression problems is described. The posterior is multi‐modal, and we suggest a strategy of using a set of perfectly fitting random starting values to explore modes in different regions of the parameter space. Simulations show that our procedure provides significant improvements on a range of established procedures, and we provide an example from chemometrics.
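As a rough illustration of the Lasso-as-MAP connection described above (not the authors' hyper-Lasso EM algorithm), the sketch below fits a more-variables-than-observations regression by exploiting the equivalence between the MAP estimate under independent Laplace priors and the ordinary Lasso; the data, the prior scale `prior_scale`, and the noise level are invented for illustration.

```python
# Minimal sketch: the MAP estimate under independent double-exponential
# (Laplace) priors on the coefficients is the ordinary Lasso, so we illustrate
# the Bayesian-Lasso correspondence with scikit-learn.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 200                       # more variables than observations
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = [3, -2, 1.5, -1, 2]  # a few non-zero signals
noise_sd = 1.0
y = X @ beta_true + noise_sd * rng.standard_normal(n)

prior_scale = 0.1                    # scale b of the Laplace prior on each coefficient
# MAP of beta with Laplace(0, b) priors <=> Lasso with alpha = noise_sd**2 / (n * b)
alpha = noise_sd**2 / (n * prior_scale)
fit = Lasso(alpha=alpha, max_iter=50_000).fit(X, y)
print("selected (non-zero) coefficients:", np.flatnonzero(fit.coef_))
```

In the hyper-Lasso setting the penalty is non-convex and the posterior multi-modal, which is where the EM algorithm and the multiple perfectly fitting starting values described in the abstract come in; the convex fit above has a single mode.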

2.
Both approximate Bayesian computation (ABC) and composite likelihood methods are useful for Bayesian and frequentist inference, respectively, when the likelihood function is intractable. We propose to use composite likelihood score functions as summary statistics in ABC in order to obtain accurate approximations to the posterior distribution. This is motivated by the use of the score function of the full likelihood, and extended to general unbiased estimating functions in complex models. Moreover, we show that if the composite score is suitably standardised, the resulting ABC procedure is invariant to reparameterisations and automatically adjusts the curvature of the composite likelihood, and of the corresponding posterior distribution. The method is illustrated through examples with simulated data, and an application to modelling of spatial extreme rainfall data is discussed.
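A toy sketch of the general idea, using the score of a full Gaussian likelihood rather than a composite likelihood inside plain rejection ABC; the pilot value, the tolerance `eps`, and the prior are illustrative assumptions, not the paper's standardised construction.

```python
# Rejection ABC where the summary statistic is the likelihood score evaluated
# at a fixed pilot estimate; data are N(theta, 1) as a toy model.
import numpy as np

rng = np.random.default_rng(1)
n = 100
y_obs = rng.normal(2.0, 1.0, size=n)          # observed data, true theta = 2

pilot = y_obs.mean()                          # pilot value at which the score is evaluated

def score_summary(y, theta0=pilot):
    # score of the N(theta, 1) log-likelihood at theta0: sum_i (y_i - theta0)
    return np.sum(y - theta0)

s_obs = score_summary(y_obs)

n_sim, eps = 100_000, 2.0
theta_prior = rng.normal(0.0, 10.0, size=n_sim)            # vague normal prior draws
y_sim = rng.normal(theta_prior[:, None], 1.0, size=(n_sim, n))
s_sim = np.sum(y_sim - pilot, axis=1)                      # score summary for each simulated data set
accepted = theta_prior[np.abs(s_sim - s_obs) <= eps]

print(f"accepted {accepted.size} draws; ABC posterior mean approx {accepted.mean():.3f}")
```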

3.
In this paper, we develop a variable selection framework with the spike-and-slab prior distribution via the hazard function of the Cox model. Specifically, we consider the transformation of the score and information functions for the partial likelihood function evaluated at the given data from the parameter space into the space generated by the logarithm of the hazard ratio. Thereby, we reduce the nonlinear complexity of the estimation equation for the Cox model and allow the utilization of a wider variety of stable variable selection methods. Then, we use a stochastic variable search Gibbs sampling approach via the spike-and-slab prior distribution to obtain the sparsity structure of the covariates associated with the survival outcome. Additionally, we conduct numerical simulations to evaluate the finite-sample performance of our proposed method. Finally, we apply this novel framework on lung adenocarcinoma data to find important genes associated with decreased survival in subjects with the disease.
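Because the abstract maps the Cox estimation problem into a linear-type space before selection, a minimal analogue is stochastic-search Gibbs sampling with a spike-and-slab (two-component normal) prior in an ordinary linear model; the continuous spike, the hyperparameters `tau2`, `c2`, `w`, and the known noise variance below are simplifying assumptions, not the paper's construction.

```python
# Stochastic-search Gibbs sketch for a spike-and-slab prior in a linear model.
import numpy as np

rng = np.random.default_rng(2)
n, p = 120, 10
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0, 0, 0])
sigma2 = 1.0                                        # assumed known noise variance
y = X @ beta_true + np.sqrt(sigma2) * rng.standard_normal(n)

tau2, c2, w = 0.001, 1000.0, 0.2                    # spike variance, slab/spike ratio, prior inclusion prob
gamma = np.ones(p, dtype=int)                       # start with every variable in the slab
n_iter, keep = 2000, []

XtX, Xty = X.T @ X, X.T @ y
for it in range(n_iter):
    # sample beta | gamma, y  (conjugate multivariate normal)
    prior_var = np.where(gamma == 1, c2 * tau2, tau2)
    A = np.linalg.inv(XtX / sigma2 + np.diag(1.0 / prior_var))
    beta = rng.multivariate_normal(A @ (Xty / sigma2), A)
    # sample gamma_j | beta_j  (Bernoulli from the slab-vs-spike density ratio)
    slab = np.exp(-0.5 * beta**2 / (c2 * tau2)) / np.sqrt(c2 * tau2)
    spike = np.exp(-0.5 * beta**2 / tau2) / np.sqrt(tau2)
    gamma = rng.binomial(1, w * slab / (w * slab + (1 - w) * spike))
    if it >= 500:
        keep.append(gamma)

print("posterior inclusion probabilities:", np.mean(keep, axis=0).round(2))
```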

4.
Various exact tests for statistical inference are available for powerful and accurate decision rules provided that corresponding critical values are tabulated or evaluated via Monte Carlo methods. This article introduces a novel hybrid method for computing p‐values of exact tests by combining Monte Carlo simulations and statistical tables generated a priori. To use the data from Monte Carlo generations and tabulated critical values jointly, we employ kernel density estimation within Bayesian‐type procedures. The p‐values are linked to the posterior means of quantiles. In this framework, we present relevant information from the Monte Carlo experiments via likelihood‐type functions, whereas tabulated critical values are used to reflect prior distributions. The local maximum likelihood technique is employed to compute functional forms of prior distributions from statistical tables. Empirical likelihood functions are proposed to replace parametric likelihood functions within the structure of the posterior mean calculations to provide a Bayesian‐type procedure with a distribution‐free set of assumptions. We derive the asymptotic properties of the proposed nonparametric posterior means of quantiles process. Using the theoretical propositions, we calculate the minimum number of Monte Carlo resamples needed for a desired level of accuracy on the basis of distances between actual data characteristics (e.g. sample sizes) and the characteristics of the data used to present the corresponding critical values in a table. The proposed approach makes practical applications of exact tests simple and rapid. Implementations of the proposed technique are easily carried out via the recently developed STATA and R statistical packages.
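For orientation, here is the plain Monte Carlo p-value computation that the hybrid table-plus-simulation method is designed to make cheaper; the test statistic (absolute sample skewness as an ad hoc normality statistic) and the sample are illustrative only, not taken from the paper.

```python
# Baseline Monte Carlo p-value: simulate the null distribution of a statistic
# and count exceedances of the observed value.
import numpy as np

rng = np.random.default_rng(3)

def statistic(x):
    # illustrative statistic: absolute sample skewness of the studentised sample
    z = (x - x.mean()) / x.std(ddof=1)
    return abs(np.mean(z**3))

x_obs = rng.gamma(shape=2.0, size=40)        # observed sample (clearly skewed)
t_obs = statistic(x_obs)

B = 20_000                                   # Monte Carlo replicates under H0: normality
t_null = np.array([statistic(rng.standard_normal(40)) for _ in range(B)])
p_value = (1 + np.sum(t_null >= t_obs)) / (B + 1)
print(f"Monte Carlo p-value: {p_value:.4f}")
```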

5.
Under a Gamma prior distribution, the importance sampling (IS) technique is applied to the Bayesian analysis of the Power Law Process (PLP). Samples of the parameters of the PLP are obtained from IS. Based on these samples, posterior analyses of the parameters, and of functions of the parameters, can be performed conveniently, and single-sample and two-sample predictions can be constructed easily via a double-integral transformation formula. The sensitivity of the posterior mean of the parameter functions in the PLP is studied with respect to the prior moments of the Gamma prior distribution, which can guide the selection of the prior moments. Numerical experiments illustrate the rationality and feasibility of the proposed methods, and an engineering example demonstrates their application.
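A minimal importance-sampling sketch for the time-truncated Power Law Process with independent Gamma priors, using the prior itself as the proposal so that the self-normalised weights reduce to the likelihood; the event times, the window length T, and the prior hyperparameters are invented for illustration.

```python
# Self-normalised importance sampling for the PLP with Gamma priors.
import numpy as np

rng = np.random.default_rng(4)
t = np.array([5.2, 11.0, 17.8, 26.4, 31.1, 39.5, 44.9, 52.3])   # simulated failure times
T, n = 60.0, len(t)                                             # observation window [0, T]
sum_log_t = np.sum(np.log(t))

m = 50_000
beta_s = rng.gamma(2.0, 1.0, m)       # Gamma(2, scale=1) prior/proposal for the shape beta
theta_s = rng.gamma(2.0, 20.0, m)     # Gamma(2, scale=20) prior/proposal for the scale theta

# time-truncated PLP log-likelihood: sum_i log lambda(t_i) - (T/theta)**beta,
# with intensity lambda(u) = (beta/theta) * (u/theta)**(beta - 1)
logw = (n * np.log(beta_s / theta_s)
        + (beta_s - 1.0) * (sum_log_t - n * np.log(theta_s))
        - (T / theta_s) ** beta_s)
w = np.exp(logw - logw.max())
w /= w.sum()

print("posterior mean of beta :", np.round(np.sum(w * beta_s), 3))
print("posterior mean of theta:", np.round(np.sum(w * theta_s), 3))
```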

6.
This paper develops an objective Bayesian analysis method for estimating the unknown parameters of the half-logistic distribution when a sample is available under a progressive Type-II censoring scheme. Noninformative priors such as Jeffreys and reference priors are derived. In addition, the derived priors are checked to determine whether they satisfy probability-matching criteria. The Metropolis–Hastings algorithm is applied to generate Markov chain Monte Carlo samples from these posterior density functions because the marginal posterior density functions of each parameter cannot be expressed in explicit form. Monte Carlo simulations are conducted to investigate the frequentist properties of the estimated models under the noninformative priors. For illustration purposes, a real data set is presented, and the quality of models under the noninformative priors is evaluated through posterior predictive checking.
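A random-walk Metropolis sketch for the half-logistic scale parameter under progressive Type-II censoring; the censored sample (x, R), the flat prior on log σ, and the tuning constants are illustrative assumptions (the paper derives Jeffreys and reference priors rather than assuming a flat one).

```python
# Random-walk Metropolis for the half-logistic scale under progressive Type-II censoring.
import numpy as np

rng = np.random.default_rng(5)
x = np.array([0.3, 0.7, 1.1, 1.6, 2.4, 3.0])   # observed (progressively censored) failure times
R = np.array([2, 0, 1, 0, 2, 1])               # units removed at each failure time

def log_post(log_sigma):
    s = np.exp(log_sigma)
    z = x / s
    log_f = np.log(2.0) - np.log(s) - z - 2.0 * np.log1p(np.exp(-z))   # half-logistic pdf
    log_surv = np.log(2.0) - z - np.log1p(np.exp(-z))                  # 1 - F(x)
    return np.sum(log_f + R * log_surv)          # progressive-censoring likelihood, flat prior on log(sigma)

chain, cur = [], np.log(1.0)
cur_lp = log_post(cur)
for _ in range(20_000):
    prop = cur + 0.3 * rng.standard_normal()     # random-walk proposal on log(sigma)
    prop_lp = log_post(prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    chain.append(cur)

sigma_draws = np.exp(np.array(chain[5000:]))     # discard burn-in
print(f"posterior mean of sigma: {sigma_draws.mean():.3f}")
```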

7.
This paper focuses on computing the Bayesian reliability of components whose performance characteristics (degradation – fatigue and cracks) are observed during a specified period of time. Depending upon the nature of the degradation data collected, we fit a monotone increasing or decreasing function to the data. Since the components are supposed to have different lifetimes, the rate of degradation is assumed to be a random variable. At a critical level of degradation, the time-to-failure distribution is obtained. The exponential and power degradation models are studied, and an exponential density function is assumed for the random variable representing the rate of degradation. The maximum likelihood estimator and Bayesian estimator of the parameter of the exponential density function, the predictive distribution, a hierarchical Bayes approach and the robustness of the posterior mean are presented. The Gibbs sampling algorithm is used to obtain the Bayesian estimates of the parameter. Illustrations are provided for train wheel degradation data.

8.
We use a Bayesian approach to fitting a linear regression model to transformations of the natural parameter for the exponential class of distributions. The usual Bayesian approach is to assume that a linear model exactly describes the relationship among the natural parameters. We assume only that a linear model is approximately in force. We approximate the theta-links by using a linear model obtained by minimizing the posterior expectation of a loss function. While some posterior results can be obtained analytically, considerable generality follows from an exact Monte Carlo method for obtaining random samples of parameter values or functions of parameter values from their respective posterior distributions. The approach that is presented is justified for small samples, requires only one-dimensional numerical integrations, and allows for the use of regression matrices with less than full column rank. Two numerical examples are provided.

9.
Bayesian hierarchical formulations are utilized by the U.S. Bureau of Labor Statistics (BLS) with respondent‐level data for missing item imputation because these formulations are readily parameterized to capture correlation structures. BLS collects survey data under informative sampling designs that assign probabilities of inclusion to be correlated with the response, on which sampling‐weighted pseudo posterior distributions are estimated for asymptotically unbiased inference about population model parameters. Computation is expensive and does not support BLS production schedules. We propose a new method to scale the computation that divides the data into smaller subsets, estimates a sampling‐weighted pseudo posterior distribution, in parallel, for every subset, and combines the pseudo posterior parameter samples from all the subsets through their mean in the Wasserstein space of order 2. We construct conditions on a class of sampling designs under which posterior consistency of the proposed method is achieved. We demonstrate on both synthetic data and in an application to the Current Employment Statistics survey that our method produces results of similar accuracy as the usual approach while offering substantially faster computation.
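In one dimension the mean in the Wasserstein-2 space of the subset pseudo posteriors has a simple form: with an equal number of draws per subset, sort each subset's draws and average them quantile by quantile. The sketch below uses simulated placeholder draws, not BLS survey data.

```python
# One-dimensional Wasserstein-2 barycenter of subset posterior draws
# via quantile averaging of equally sized samples.
import numpy as np

rng = np.random.default_rng(6)
n_subsets, n_draws = 8, 4000
# pretend each subset produced slightly shifted posterior draws for one parameter
subset_draws = [rng.normal(1.0 + 0.1 * rng.standard_normal(), 0.5, n_draws)
                for _ in range(n_subsets)]

sorted_draws = np.sort(np.stack(subset_draws), axis=1)    # shape (n_subsets, n_draws)
barycenter_draws = sorted_draws.mean(axis=0)              # quantile averaging = 1-D W2 barycenter

print("combined posterior mean:", barycenter_draws.mean().round(3))
print("combined posterior sd  :", barycenter_draws.std().round(3))
```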

10.
Suppose some quantiles of the prior distribution of a nonnegative parameter θ are specified. Instead of eliciting just one prior density function, consider the class Γ of all the density functions compatible with the quantile specification. Given a likelihood function, find the posterior upper and lower bounds for the expected value of any real-valued function h(θ), as the density varies in Γ. Such a scheme agrees with a robust Bayesian viewpoint. Under mild regularity conditions about h(θ) and the likelihood, a procedure for finding bounds is derived and applied to an example, after transforming the given functional optimisation problems into finite-dimensional ones.

11.
A nonasymptotic Bayesian approach is developed for analysis of data from threshold autoregressive processes with two regimes. Using the conditional likelihood function, the marginal posterior distribution for each of the parameters is derived along with posterior means and variances. A test for linear functions of the autoregressive coefficients is presented. The approach presented uses a posterior p-value averaged over the values of the threshold. The one-step ahead predictive distribution is derived along with the predictive mean and variance. In addition, equivalent results are derived conditional upon a value of the threshold. A numerical example is presented to illustrate the approach.

12.
The main objective of this paper is to develop convenient Bayesian techniques for estimation and forecasting which can be used to analyze multiple (multivariate) autoregressive moving average processes. Based on the conditional likelihood function and the least squares estimates of the residuals, the marginal posterior distribution of the coefficients of the model is approximated by a matrix t distribution, the marginal posterior distribution of the precision matrix is approximated by a Wishart distribution, and the predictive distribution is approximated by a multivariate t distribution. Some numerical examples are given to demonstrate the idea of using the proposed techniques to analyze different types of multiple ARMA models.

13.
This paper examines Bayesian posterior probabilities as a function of selected elements within the set of data, x, when the prior distribution is assumed fixed. The posterior probabilities considered here are those of the parameter vector lying in a subset of the total parameter space. The theorems of this paper provide insight into the effect of elements within x on this posterior probability. These results have applications, for example, in the study of the impact of outliers within the data and in the isolation of misspecified parameters in a model.

14.
A large number of models have been derived from the two-parameter Weibull distribution, including the inverse Weibull (IW) model, which is found suitable for modeling complex failure data sets. In this paper, we present Bayesian inference for a mixture of two IW models. For this purpose, the Bayes estimates of the parameters of the mixture model, along with their posterior risks, are obtained using informative as well as non-informative priors. These estimates are attained for two cases: (a) when the shape parameter is known and (b) when all parameters are unknown. For the former case, Bayes estimates are obtained under three loss functions, while for the latter case only the squared error loss function is used. A simulation study is carried out in order to explore the numerical aspects of the proposed Bayes estimators. A real-life data set is also presented for both cases, and the parameters obtained in the known-shape-parameter case are tested through a hypothesis-testing procedure.

15.
This paper deals with Bayesian estimation and prediction for the inverse Weibull distribution with shape parameter α and scale parameter λ under general progressive censoring. We prove that the posterior conditional density functions of α and λ are both log-concave based on the assumption that λ has a gamma prior distribution and α follows a prior distribution with log-concave density. Then, we present the Gibbs sampling strategy to estimate under squared-error loss any function of the unknown parameter vector (α, λ) and find credible intervals, as well as to obtain prediction intervals for future order statistics. Monte Carlo simulations are given to compare the performance of Bayesian estimators derived via Gibbs sampling with the corresponding maximum likelihood estimators, and a real data analysis is discussed in order to illustrate the proposed procedure. Finally, we extend the developed methodology to other two-parameter distributions, including the Weibull, Burr type XII, and flexible Weibull distributions, and also to general progressive hybrid censoring.

16.
Suppose that just the lower bound of the probability of a measurable subset K in the parameter space Ω is a priori known, when inferences are to be made about measurable subsets A in Ω. Instead of eliciting a unique prior distribution, consider the class Γ of all the distributions compatible with such a bound. Under mild regularity conditions about the likelihood function, the range of the posterior probability of any A is found, as the prior distribution varies in Γ. Such ranges are analysed according to the robust Bayesian viewpoint. Furthermore, some characterising properties of the extended likelihood sets are proved. The prior distributions in Γ are then considered as a neighbour class of an elicited prior, comparing likelihood sets and HPD sets in terms of robustness.

17.
This paper proposes a hysteretic autoregressive model with a GARCH specification and a skew Student's t error distribution for financial time series. With an integrated hysteresis zone, this model allows both the conditional mean and the conditional volatility switching in a regime to be delayed when the hysteresis variable lies in a hysteresis zone. We perform Bayesian estimation via an adaptive Markov chain Monte Carlo sampling scheme. The proposed Bayesian method allows simultaneous inferences for all unknown parameters, including threshold values and a delay parameter. To implement model selection, we propose a numerical approximation of the marginal likelihoods to compute posterior odds. The proposed methodology is illustrated using simulation studies and two major Asian stock basis series. We conduct a model comparison for variant hysteresis and threshold GARCH models based on the posterior odds ratios, finding strong evidence of the hysteretic effect and some asymmetric heavy-tailedness. Compared with multi-regime threshold GARCH models, this new collection of models is more suitable for describing real data sets. Finally, we employ Bayesian forecasting methods in a Value-at-Risk study of the return series.

18.
Consider the problem of inference about a parameter θ in the presence of a nuisance parameter ν. In a Bayesian framework, a number of posterior distributions may be of interest, including the joint posterior of (θ, ν), the marginal posterior of θ, and the posterior of θ conditional on different values of ν. The interpretation of these various posteriors is greatly simplified if a transformation (θ, h(θ, ν)) can be found so that θ and h(θ, ν) are approximately independent. In this article, we consider a graphical method for finding this independence transformation, motivated by techniques from exploratory data analysis. Some simple examples of the use of this method are given and some of the implications of this approximate independence in a Bayesian analysis are discussed.

19.
We consider an efficient Bayesian approach to estimating integration-based posterior summaries from a separate Bayesian application. In Bayesian quadrature we model an intractable posterior density function f(·) as a Gaussian process, using an approximating function g(·), and find a posterior distribution for the integral of f(·), conditional on a few evaluations of f(·) at selected design points. Bayesian quadrature using normal g(·) is called Bayes-Hermite quadrature. We extend this theory by allowing g(·) to be chosen from two wider classes of functions. One is a family of skew densities and the other is the family of finite mixtures of normal densities. For the family of skew densities we describe an iterative updating procedure to select the most suitable approximation and apply the method to two simulated posterior density functions.
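A simplified numerical sketch of the Bayesian quadrature idea: model f(·) with a Gaussian process fitted at a few design points and integrate the GP posterior mean. The closed-form posterior variance of the integral, and the skew and mixture extensions proposed in the paper, are omitted; the integrand, design points, and kernel are illustrative choices.

```python
# Plug-in Bayesian quadrature sketch: fit a GP to a few evaluations of f and
# integrate the GP posterior mean numerically.
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: norm.pdf(x, loc=0.5, scale=1.2)        # stand-in for an expensive posterior density
design = np.linspace(-4.0, 5.0, 9).reshape(-1, 1)    # a few "expensive" evaluations of f
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-10)
gp.fit(design, f(design.ravel()))

grid = np.linspace(-6.0, 7.0, 2001)
mean_on_grid = gp.predict(grid.reshape(-1, 1))       # GP posterior mean approximating f
estimate = trapezoid(mean_on_grid, grid)             # integral of the fitted approximation
print(f"estimated integral of f: {estimate:.4f} (exact value 1.0)")
```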

20.
Bayesian analyses often take for granted the assumption that the posterior distribution has at least a first moment. They often include computed or estimated posterior means. In this note, the authors show an example of a Weibull distribution parameter where the theoretical posterior mean fails to exist for commonly used proper semi-conjugate priors. They also show that posterior moments can fail to exist with commonly used noninformative priors including Jeffreys, reference and matching priors, despite the fact that the posteriors are proper. Moreover, within a broad class of priors, the predictive distribution also has no mean. The authors illustrate the problem with a simulated example. Their results demonstrate that the unwitting use of estimated posterior means may yield unjustified conclusions.
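A generic illustration of this warning (not the paper's Weibull calculation): when the posterior has no first moment, the running average of posterior draws never stabilises, so any reported "posterior mean" is an artefact of the simulation length. Half-Cauchy draws stand in here for such a heavy-tailed posterior.

```python
# Running mean of heavy-tailed draws with no population mean never settles down.
import numpy as np

rng = np.random.default_rng(7)
draws = np.abs(rng.standard_cauchy(1_000_000))      # heavy-tailed "posterior" draws with no mean
running_mean = np.cumsum(draws) / np.arange(1, draws.size + 1)
for k in (10**3, 10**4, 10**5, 10**6):
    print(f"running mean after {k:>9,d} draws: {running_mean[k - 1]:.2f}")
```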
