Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this note the problem of nonparametric regression function estimation in a random design regression model with Gaussian errors is considered from the Bayesian perspective. It is assumed that the regression function belongs to a class of functions with a known degree of smoothness. A prior distribution on the given class can be induced by a prior on the coefficients in a series expansion of the regression function through an orthonormal system. The rate of convergence of the resulting posterior distribution is employed to provide a measure of the accuracy of the Bayesian estimation procedure defined by the posterior expected regression function. We show that the Bayes estimator achieves the optimal minimax rate of convergence under mean integrated squared error over the involved class of regression functions, thus being comparable to other popular frequentist regression estimators.
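The conjugate setup sketched above can be illustrated in a few lines: under independent Gaussian priors on the series coefficients, the posterior mean is a ridge-regression fit in the basis. This is a minimal sketch; the basis choice, hyperparameters, and function names are illustrative, not taken from the paper.

```python
import numpy as np

def posterior_mean_regression(x, y, n_basis=10, prior_sd=1.0, noise_sd=0.5):
    """Posterior-mean regression estimate under a truncated cosine-series
    expansion with independent N(0, prior_sd^2) priors on the coefficients
    and Gaussian errors (conjugate model). The posterior mean of the
    coefficients coincides with a ridge-regression fit."""
    # Design matrix: constant plus cosine basis functions on [0, 1].
    Phi = np.column_stack(
        [np.ones_like(x)] + [np.cos(np.pi * j * x) for j in range(1, n_basis)]
    )
    lam = (noise_sd / prior_sd) ** 2           # ridge penalty implied by the prior
    A = Phi.T @ Phi + lam * np.eye(n_basis)
    coef = np.linalg.solve(A, Phi.T @ y)       # posterior mean of coefficients
    return Phi @ coef, coef

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)                     # random design
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.5, 200)
fit, coef = posterior_mean_regression(x, y)
```

Increasing `prior_sd` weakens the penalty, recovering an ordinary least-squares fit in the basis.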

2.
In the classical approach to qualitative reliability demonstration, system failure probabilities are estimated based on a binomial sample drawn from the running production. In this paper, we show how to take account of additional available sampling information for some or even all subsystems of a current system under test with serial reliability structure. In that connection, we present two approaches, a frequentist and a Bayesian one, for assessing an upper bound for the failure probability of serial systems under binomial subsystem data. In the frequentist approach, we introduce (i) a new way of deriving the probability distribution for the number of system failures, which might be randomly assembled from the failed subsystems and (ii) a more accurate estimator for the Clopper–Pearson upper bound using a beta mixture distribution. In the Bayesian approach, however, we infer the posterior distribution for the system failure probability on the basis of the system/subsystem testing results and a prior distribution for the subsystem failure probabilities. We propose three different prior distributions and compare their performances in the context of high reliability testing. Finally, we apply the proposed methods to reduce the efforts of semiconductor burn-in studies by considering synergies such as comparable chip layers, among different chip technologies.
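The classical Clopper–Pearson upper bound that the paper refines has a standard closed form via the beta quantile; a minimal sketch for a single binomial sample (the paper's beta-mixture refinement for serial systems is not reproduced here):

```python
from scipy.stats import beta

def clopper_pearson_upper(failures, n, alpha=0.05):
    """One-sided (1 - alpha) Clopper-Pearson upper confidence bound for a
    binomial failure probability, computed from the beta quantile."""
    if failures >= n:
        return 1.0
    return beta.ppf(1 - alpha, failures + 1, n - failures)

# Zero failures in 20 tests: the bound reduces to 1 - alpha**(1/n).
ub = clopper_pearson_upper(0, 20)
```

With zero observed failures the bound matches the well-known "rule of three"-style closed form 1 − α^(1/n), which is a quick sanity check.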

3.
In the Bayesian approach to ill-posed inverse problems, regularization is imposed by specifying a prior distribution on the parameters of interest and Markov chain Monte Carlo samplers are used to extract information about its posterior distribution. The aim of this paper is to investigate the convergence properties of the random-scan random-walk Metropolis (RSM) algorithm for posterior distributions in ill-posed inverse problems. We provide an accessible set of sufficient conditions, in terms of the observational model and the prior, to ensure geometric ergodicity of RSM samplers of the posterior distribution. We illustrate how these conditions can be checked in an application to the inversion of oceanographic tracer data.
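A random-scan random-walk Metropolis sampler of the kind analysed here can be sketched in a few lines; the target and tuning constants below are illustrative (a toy Gaussian posterior, not an inverse problem):

```python
import numpy as np

def rsm_sampler(log_post, x0, n_iter=5000, step=0.5, rng=None):
    """Random-scan random-walk Metropolis: at each iteration pick one
    coordinate uniformly at random and propose a Gaussian step on it."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    chain = np.empty((n_iter, x.size))
    lp = log_post(x)
    for t in range(n_iter):
        i = rng.integers(x.size)                   # random coordinate scan
        prop = x.copy()
        prop[i] += step * rng.normal()             # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

# Toy posterior: independent standard normals in two dimensions.
chain = rsm_sampler(lambda v: -0.5 * np.sum(v ** 2), [3.0, -3.0])
```

Geometric ergodicity, the property the paper gives conditions for, governs how quickly such a chain forgets its starting point (3, −3).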

4.
This paper presents a Bayesian non-parametric approach to survival analysis based on arbitrarily right censored data. The analysis is based on posterior predictive probabilities using a Polya tree prior distribution on the space of probability measures on [0, ∞). In particular we show that the estimate generalizes the classical Kaplan–Meier non-parametric estimator, which is obtained in the limiting case as the weight of prior information tends to zero.
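The Kaplan–Meier limit referenced above is easy to compute directly; a minimal product-limit sketch (assumes distinct event times; the data are illustrative):

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate from right-censored
    data; events[i] is 1 for an observed failure, 0 for censoring.
    Assumes distinct observation times for simplicity."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    surv, t_out, s_out = 1.0, [], []
    for j, (t, d) in enumerate(zip(times, events)):
        if d == 1:
            surv *= 1.0 - 1.0 / (n - j)    # n - j subjects still at risk
            t_out.append(t)
            s_out.append(surv)
    return t_out, s_out

# One censored observation at t = 2.
t, s = kaplan_meier([1.0, 2.0, 3.0, 4.0], [1, 0, 1, 1])
```

The censored subject leaves the risk set without producing a survival-curve step, which is exactly why S drops to 0.375 rather than 0.5 at t = 3.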

5.
Bayesian methods are often used to reduce the sample sizes and/or increase the power of clinical trials. The right choice of the prior distribution is a critical step in Bayesian modeling. If the prior is not completely specified, historical data may be used to estimate it. In the empirical Bayesian analysis, the resulting prior can be used to produce the posterior distribution. In this paper, we describe a Bayesian Poisson model with a conjugate Gamma prior. The parameters of the Gamma distribution are estimated in the empirical Bayesian framework under two estimation schemes. The straightforward numerical search for the maximum likelihood (ML) solution using the marginal negative binomial distribution is occasionally infeasible. We propose a simplification to the maximization procedure. The Markov chain Monte Carlo method is used to create a set of Poisson parameters from the historical count data. These Poisson parameters are used to uniquely define the Gamma likelihood function. Easily computable approximation formulae may be used to find the ML estimates for the parameters of the Gamma distribution. For the sample size calculations, the ML solution is replaced by its upper confidence limit to reflect an incomplete exchangeability of historical trials as opposed to current studies. The exchangeability is measured by the confidence interval for the historical rate of the events. With this prior, the formula for the sample size calculation is completely defined. Published in 2009 by John Wiley & Sons, Ltd.
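The conjugate Gamma–Poisson update underlying this model is one line of algebra; a minimal sketch (prior parameters and counts are illustrative):

```python
def gamma_poisson_posterior(a, b, counts):
    """Conjugate update: with a Gamma(a, b) prior (rate parametrization)
    on a Poisson mean and observed counts, the posterior is
    Gamma(a + sum(counts), b + len(counts))."""
    a_post = a + sum(counts)
    b_post = b + len(counts)
    return float(a_post), float(b_post)

a_post, b_post = gamma_poisson_posterior(2.0, 1.0, [3, 5, 4])
post_mean = a_post / b_post    # posterior mean of the Poisson rate
```

The marginal distribution of the counts under this prior is negative binomial, which is the likelihood the abstract's ML search maximizes.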

6.
Most of the Bayesian literature on statistical techniques in auditing has focused on assessing an appropriate prior density on the parameters of interest, such as the error rate and the mean error amount. Frequently, prior beliefs and mathematical tractability are jointly used to assess prior distributions. As a robust Bayesian approach, we propose to replace the prior distribution with a set of prior distributions compatible with the auditor's beliefs. We show how an auditor may derive the behaviour of the posterior error rate using only partial prior information (quartiles of the prior distribution for the error rate θ; very often, the prior distribution is also assumed to be unimodal). An example is pursued in depth.

7.
Prediction limits for the Poisson distribution are useful in real life when predicting the occurrences of some phenomena, for example, the number of infections from a disease per year among school children, or the number of hospitalizations per year among patients with cardiovascular disease. In order to allocate the right resources and to estimate the associated cost, one would want to know the worst (i.e., an upper limit) and the best (i.e., a lower limit) scenarios. Under the Poisson distribution, we construct the optimal frequentist and Bayesian prediction limits, and assess frequentist properties of the Bayesian prediction limits. We show that the Bayesian upper prediction limit derived from a uniform prior distribution and the Bayesian lower prediction limit derived from a modified Jeffreys noninformative prior coincide with their respective frequentist limits. This is not the case for the Bayesian lower prediction limit derived from a uniform prior and the Bayesian upper prediction limit derived from a modified Jeffreys prior distribution. Furthermore, it is shown that not all Bayesian prediction limits derived from a proper prior can be interpreted in a frequentist context. We state a sufficient condition and show, via a counterexample, that Bayesian prediction limits derived from proper priors satisfying our condition cannot be interpreted in a frequentist context. Analyses of simulated data and of data on Atlantic tropical storm occurrences are presented.
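For a uniform prior on the Poisson rate, the posterior after observing a count x (unit exposure) is Gamma(x + 1, 1), so the predictive distribution of one future count with equal exposure is negative binomial and an upper prediction limit reduces to a quantile lookup. A minimal sketch of that special case, not the paper's exact construction:

```python
from scipy.stats import nbinom

def poisson_upper_prediction_limit(x_obs, alpha=0.05):
    """Upper (1 - alpha) Bayesian prediction limit for one future Poisson
    count under a uniform prior on the rate: the posterior is
    Gamma(x_obs + 1, 1), so the predictive distribution is negative
    binomial with r = x_obs + 1 successes and p = 1/2."""
    r, p = x_obs + 1, 0.5
    return int(nbinom.ppf(1 - alpha, r, p))

limit = poisson_upper_prediction_limit(10)   # observed 10 events this year
```

The limit is the smallest integer whose predictive CDF reaches 1 − α, which is what the test below checks.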

8.
In the life test, predicting failure times beyond the largest observed failure time is an important issue. Although the Rayleigh distribution is a suitable model for analyzing the lifetime of components that age rapidly over time, because its failure rate function is an increasing linear function of time, inference for a two-parameter Rayleigh distribution based on upper record values has not been addressed from the Bayesian perspective. This paper provides Bayesian analysis methods by proposing a noninformative prior distribution to analyze survival data, using a two-parameter Rayleigh distribution based on record values. In addition, we provide a pivotal quantity and an algorithm based on the pivotal quantity to predict the behavior of future survival records. We show that the proposed method is superior to the frequentist counterpart in terms of mean-squared error and bias through Monte Carlo simulations. For illustrative purposes, survival data on lung cancer patients are analyzed, and it is shown that the proposed model can be a good alternative when prior information is not available.
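Upper record values, the data structure this analysis is built on, can be extracted in a single pass; a minimal sketch with simulated Rayleigh lifetimes (the scale parameter and sample size are illustrative):

```python
import numpy as np

def upper_records(sample):
    """Extract the sequence of upper record values: observations that
    strictly exceed every earlier observation in the sequence."""
    records = []
    current_max = -np.inf
    for v in sample:
        if v > current_max:
            records.append(v)
            current_max = v
    return records

rng = np.random.default_rng(1)
# Rayleigh lifetimes: increasing linear hazard, scale sigma = 2.
data = rng.rayleigh(scale=2.0, size=50)
recs = upper_records(data)
```

By construction the record sequence is strictly increasing and ends at the sample maximum, which is why predicting values beyond the largest record is the natural prediction problem here.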

9.
We investigate the posterior rate of convergence for wavelet shrinkage using a Bayesian approach in general Besov spaces. Instead of studying the Bayesian estimator related to a particular loss function, we focus on the posterior distribution itself from a nonparametric Bayesian asymptotics point of view and study its rate of convergence. We obtain the same rate as in Abramovich et al. (2004) where the authors studied the convergence of several Bayesian estimators.
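Wavelet shrinkage of the kind studied here can be illustrated with one-level Haar soft-thresholding; this is a generic stand-in for the idea (shrinking small, noise-dominated detail coefficients toward zero), not the specific Bayesian rule analysed in the paper:

```python
import numpy as np

def haar_shrink(y, threshold):
    """One-level Haar transform, soft-threshold the detail coefficients,
    then invert. Small details are set to zero; large ones shrink by
    `threshold`. Input length must be even."""
    y = np.asarray(y, float)
    approx = (y[0::2] + y[1::2]) / np.sqrt(2)   # pairwise averages
    detail = (y[0::2] - y[1::2]) / np.sqrt(2)   # pairwise differences
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    out = np.empty_like(y)
    out[0::2] = (approx + detail) / np.sqrt(2)  # inverse Haar step
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

smoothed = haar_shrink([1.0, 1.1, 4.0, 4.2, 1.0, 0.9, 0.0, 0.1], 0.5)
```

Here every detail coefficient is below the threshold, so each pair collapses to its average while the jump structure of the signal is preserved.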

10.
We propose a fully Bayesian model with a non-informative prior for analyzing misclassified binary data with a validation substudy. In addition, we derive a closed-form algorithm for drawing all parameters from the posterior distribution and making statistical inference on odds ratios. Our algorithm draws each parameter from a beta distribution, avoids the specification of initial values, and does not have convergence issues. We apply the algorithm to a data set and compare the results with those obtained by other methods. Finally, the performance of our algorithm is assessed using simulation studies.
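The appeal of direct beta draws (no initial values, no MCMC convergence issues) can be illustrated on the simpler problem of an odds ratio without misclassification; a minimal sketch under independent uniform priors (the data are illustrative, and this omits the paper's misclassification and validation-substudy structure):

```python
import numpy as np

def odds_ratio_draws(x1, n1, x0, n0, n_draws=10000, rng=None):
    """Posterior draws of the odds ratio for two binomial proportions
    under independent Beta(1, 1) priors: each proportion has a Beta
    posterior, so sampling is direct -- no Markov chain is needed."""
    rng = rng or np.random.default_rng(0)
    p1 = rng.beta(1 + x1, 1 + n1 - x1, n_draws)   # exposed-group proportion
    p0 = rng.beta(1 + x0, 1 + n0 - x0, n_draws)   # control-group proportion
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

draws = odds_ratio_draws(30, 100, 15, 100)
ci = np.percentile(draws, [2.5, 97.5])   # equal-tailed credible interval
```

Because each draw is independent, credible intervals come straight from empirical percentiles of the draws.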

11.
In this article we consider the sample size determination problem in the context of robust Bayesian parameter estimation of the Bernoulli model. Following a robust approach, we consider classes of conjugate Beta prior distributions for the unknown parameter. We assume that inference is robust if posterior quantities of interest (such as point estimates and limits of credible intervals) do not change too much as the prior varies in the selected classes of priors. For the sample size problem, we consider criteria based on predictive distributions of the lower bound, upper bound and range of the posterior quantity of interest. The sample size is selected so that, before observing the data, one is confident to observe a small value for the posterior range and, depending on design goals, a large (small) value of the lower (upper) bound of the quantity of interest. We also discuss relationships with, and comparisons to, non-robust and non-informative Bayesian methods.
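The robustness idea, tracking how a posterior quantity moves as the prior ranges over a class, can be illustrated for the posterior mean of a Bernoulli parameter; a minimal sketch with an illustrative finite class of Beta priors (not the paper's classes):

```python
def posterior_mean_range(x, n, prior_class):
    """Range of the Bernoulli posterior mean as the Beta(a, b) prior
    varies over a finite class: with x successes in n trials, the
    posterior mean under Beta(a, b) is (a + x) / (a + b + n)."""
    means = [(a + x) / (a + b + n) for a, b in prior_class]
    return min(means), max(means)

# Class of conjugate priors compatible with beliefs centred near 0.5.
priors = [(1, 1), (2, 2), (5, 5), (2, 3), (3, 2)]
lo, hi = posterior_mean_range(12, 40, priors)
```

Inference is "robust" in the abstract's sense when the range hi − lo is small; larger n shrinks the range for any fixed class of priors.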

12.
Within the context of non-parametric Bayesian inference, Dykstra and Laud (1981) define an extended gamma (EG) process and use it as a prior on increasing hazard rates. The attractive features of the EG process, among them its capability to index distribution functions that are absolutely continuous, are offset by the intractable nature of the computation that needs to be performed. Sampling-based approaches such as the Gibbs sampler can alleviate these difficulties, but EG processes then give rise to the problem of efficient random variate generation from a class of distributions called D-distributions. In this paper, we describe a novel technique for sampling from such distributions, thereby providing an efficient computation procedure for non-parametric Bayesian inference with a rich class of priors for hazard rates.

13.
Rates of convergence of Bayesian nonparametric procedures are expressed as the maximum between two rates: one is determined via suitable measures of concentration of the prior around the “true” density f0, and the other is related to the way the mass is spread outside a neighborhood of f0. Here we provide a lower bound for the former in terms of the usual notion of prior concentration and in terms of an alternative definition of prior concentration. Moreover, we determine the latter for two important classes of priors: the infinite-dimensional exponential family, and the Pólya trees.

14.
Hazard rate estimation is an alternative to density estimation for positive variables that is of interest when variables are times to event. In particular, it is here shown that hazard rate estimation is useful for seismic hazard assessment. This paper suggests a simple, but flexible, Bayesian method for non-parametric hazard rate estimation, based on building the prior hazard rate as the convolution mixture of a Gaussian kernel with an exponential jump-size compound Poisson process. Conditions are given for a compound Poisson process prior to be well-defined and to select smooth hazard rates, an elicitation procedure is devised to assign a constant prior expected hazard rate while controlling prior variability, and a Markov chain Monte Carlo approximation of the posterior distribution is obtained. Finally, the suggested method is validated in a simulation study, and some Italian seismic event data are analysed.
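A prior hazard path of the stated form, a Gaussian kernel convolved with an exponential jump-size compound Poisson process, can be simulated directly; a minimal sketch (all hyperparameters are illustrative, not elicited as in the paper):

```python
import numpy as np

def sample_prior_hazard(t_grid, rate=2.0, jump_mean=0.5, bandwidth=0.3,
                        horizon=10.0, rng=None):
    """Draw one hazard-rate path from a prior built as a Gaussian-kernel
    smoothed compound Poisson process with exponential jump sizes:
    h(t) = sum_j w_j * phi((t - tau_j) / bandwidth) / bandwidth."""
    rng = rng or np.random.default_rng(0)
    n_jumps = rng.poisson(rate * horizon)          # number of jumps
    taus = rng.uniform(0, horizon, n_jumps)        # jump locations
    weights = rng.exponential(jump_mean, n_jumps)  # exponential jump sizes
    t = np.asarray(t_grid, float)[:, None]
    kern = np.exp(-0.5 * ((t - taus) / bandwidth) ** 2)
    return (weights * kern).sum(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

h = sample_prior_hazard(np.linspace(0, 10, 101))
```

Each draw is a nonnegative smooth function, which is the point of convolving the jumps with a Gaussian kernel.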

15.
When statisticians are uncertain as to which parametric statistical model to use to analyse experimental data, they will often resort to a non-parametric approach. The purpose of this paper is to provide insight into a simple approach to take when the appropriate parametric model is unclear and a Bayesian analysis is planned. I introduce an approximate, or substitution, likelihood, first proposed by Harold Jeffreys in 1939, and show how to implement the approach combined with both a non-informative and an informative prior to provide a random sample from the posterior distribution of the median of the unknown distribution. The first example I use to demonstrate the approach is a within-patient bioequivalence design; I then show how to extend the approach to a parallel group design.
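Jeffreys's substitution likelihood for the median depends on the data only through the number of observations below each candidate value; a minimal grid-based sketch with a flat prior (the data and grid are illustrative, and the paper's bioequivalence application is not reproduced):

```python
import numpy as np
from math import comb

def median_posterior(data, grid):
    """Jeffreys substitution likelihood for the median: with s(m) the
    number of observations below m, L(m) is proportional to
    C(n, s(m)) * (1/2)^n. Combined with a flat prior on the grid this
    yields a normalized posterior over candidate medians."""
    data = np.sort(np.asarray(data, float))
    n = len(data)
    s = np.searchsorted(data, grid)     # observations below each grid point
    lik = np.array([comb(n, int(k)) for k in s], float)
    return lik / lik.sum()

data = [2.1, 3.4, 1.8, 5.0, 4.2, 3.9, 2.7]
grid = np.linspace(0, 6, 121)
post = median_posterior(data, grid)
map_m = grid[np.argmax(post)]           # posterior mode
```

The likelihood peaks where about half the observations fall on each side of the candidate value, so the posterior concentrates around the sample median.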

16.
S. Huet, Statistics, 2015, 49(2): 239–266
We propose a procedure to test that the expectation of a Gaussian vector is linear against a nonparametric alternative. We consider the case where the covariance matrix of the observations has a block diagonal structure. This framework encompasses regression models with autocorrelated errors, heteroscedastic regression models, mixed-effects models and growth curves. Our procedure does not depend on any prior information about the alternative. We prove that the test is asymptotically of the nominal level and consistent. We characterize the set of vectors on which the test is powerful and prove the classical √(log log n/n) convergence rate over directional alternatives. We propose a bootstrap version of the test as an alternative to the initial one and provide a simulation study in order to evaluate both procedures for small sample sizes when the purpose is to test goodness of fit in a Gaussian mixed-effects model. Finally, we illustrate the procedures using a real data set.

17.
We consider estimation of the upper boundary point F⁻¹(1) of a distribution function F with finite upper boundary or 'frontier' in deconvolution problems, primarily focusing on deconvolution models where the noise density is decreasing on the positive half-line. Our estimates are based on the (non-parametric) maximum likelihood estimator (MLE) F̂ of F. We show that F̂⁻¹(1) is asymptotically never too small. If the convolution kernel has bounded support, the estimator F̂⁻¹(1) can generally be expected to be consistent. In this case, we establish a relation between the extreme value index of F and the rate of convergence of F̂⁻¹(1) to the upper support point for the 'boxcar' deconvolution model. If the convolution density has unbounded support, F̂⁻¹(1) can be expected to overestimate the upper support point. We define consistent estimators F̂⁻¹(1 − βₙ), for appropriately chosen vanishing sequences (βₙ), and study these in a particular case.

18.
We study a semiparametric generalized additive coefficient model (GACM), in which linear predictors in the conventional generalized linear models are generalized to unknown functions depending on certain covariates, and approximate the non-parametric functions by using polynomial splines. The asymptotic expansion with optimal rates of convergence for the estimators of the non-parametric part is established. A semiparametric generalized likelihood ratio test is also proposed to check whether a non-parametric coefficient can be simplified to a parametric one. A conditional bootstrap version is suggested to approximate the distribution of the test under the null hypothesis. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed methods. We further apply the proposed model and methods to a data set from a human visceral Leishmaniasis study conducted in Brazil from 1994 to 1997. In the numerical results, the proposed GACM outperforms the traditional generalized linear model and is preferable.

19.
Implementation of a full Bayesian non-parametric analysis involving neutral to the right processes (apart from the special case of the Dirichlet process) has been difficult for two reasons: first, the posterior distributions are complex and therefore only Bayes estimates (posterior expectations) have previously been presented; secondly, it is difficult to obtain an interpretation for the parameters of a neutral to the right process. In this paper we extend Ferguson & Phadia (1979) by presenting a general method for specifying the prior mean and variance of a neutral to the right process, providing the interpretation of the parameters. Additionally, we provide the basis for a full Bayesian analysis, via simulation, from the posterior process using a hybrid of new algorithms that is applicable to a large class of neutral to the right processes (Ferguson & Phadia only provide posterior means). The ideas are exemplified through illustrative analyses.

20.
Consistency of Bernstein polynomial posteriors
A Bernstein prior is a probability measure on the space of all the distribution functions on [0, 1]. Under very general assumptions, it selects absolutely continuous distribution functions, whose densities are mixtures of known beta densities. The Bernstein prior is of interest in Bayesian nonparametric inference with continuous data. We study the consistency of the posterior from a Bernstein prior. We first show that, under mild assumptions, the posterior is weakly consistent for any distribution function P0 on [0, 1] with continuous and bounded Lebesgue density. With slightly stronger assumptions on the prior, the posterior is also Hellinger consistent. This implies that the predictive density from a Bernstein prior, which is a Bayesian density estimate, converges in the Hellinger sense to the true density (assuming that it is continuous and bounded). We also study a sieve maximum likelihood version of the density estimator and show that it is also Hellinger consistent under weak assumptions. When the order of the Bernstein polynomial, i.e. the number of components in the beta distribution mixture, is truncated, we show that under mild restrictions the posterior concentrates on the set of pseudotrue densities. Finally, we study the behaviour of the predictive density numerically and we also study a hybrid Bayes–maximum likelihood density estimator.
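The beta-mixture density estimate underlying the Bernstein prior is short to code: a mixture of Beta(j, k − j + 1) densities with weights taken from the empirical CDF. A minimal sketch (the order k and the data are illustrative):

```python
import numpy as np
from scipy.stats import beta

def bernstein_density(data, k, x):
    """Bernstein density estimate of order k on [0, 1]: a mixture of
    Beta(j, k - j + 1) densities, j = 1..k, with weights
    F_n(j/k) - F_n((j-1)/k), where F_n is the empirical CDF of data."""
    data = np.asarray(data, float)
    ecdf = lambda u: float(np.mean(data <= u))
    x = np.asarray(x, float)
    dens = np.zeros_like(x)
    for j in range(1, k + 1):
        w = ecdf(j / k) - ecdf((j - 1) / k)    # mixture weight from the ECDF
        dens += w * beta.pdf(x, j, k - j + 1)
    return dens

rng = np.random.default_rng(0)
data = rng.beta(2, 5, 500)                     # "true" density: Beta(2, 5)
x = np.linspace(0, 1, 201)
f_hat = bernstein_density(data, k=10, x=x)
```

The weights telescope to 1, so the estimate is itself a density; larger k trades smoothness for resolution, which is exactly the truncation issue the abstract discusses.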


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号