Similar Articles
20 similar articles found.
1.
A challenge for implementing performance-based Bayesian sample size determination is selecting which of several methods to use. We compare three Bayesian sample size criteria: the average coverage criterion (ACC), which controls the coverage rate of fixed-length credible intervals over the predictive distribution of the data; the average length criterion (ALC), which controls the length of credible intervals with a fixed coverage rate; and the worst outcome criterion (WOC), which ensures the desired coverage rate and interval length over all (or a subset of) possible datasets. For most models, the WOC produces the largest sample size among the three criteria, and the sample sizes obtained by the ACC and the ALC are not the same. For Bayesian sample size determination for normal means and differences between normal means, we investigate, for the first time, the direction and magnitude of the differences between the ACC and ALC sample sizes. For fixed hyperparameter values, we show that the difference between the ACC and ALC sample sizes depends on the nominal coverage, and not on the nominal interval length. There exists a threshold value of the nominal coverage level such that below the threshold the ALC sample size is larger than the ACC sample size, and above the threshold the ACC sample size is larger. Furthermore, the ACC sample size is more sensitive to changes in the nominal coverage. We also show that for fixed hyperparameter values, there exists an asymptotic constant ratio between the WOC sample size and the ALC (ACC) sample size. Simulation studies are conducted to show that similar relationships among the ACC, ALC, and WOC may hold for estimating binomial proportions. We provide a heuristic argument that the results can be generalized to a larger class of models.
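To make the ACC concrete, here is a minimal Monte Carlo sketch for a single normal mean with unknown variance under a conjugate normal-inverse-gamma prior: it averages, over datasets drawn from the prior predictive distribution, the posterior probability of a fixed-length interval centred at the posterior mean, and increases n until the average reaches the target. All hyperparameter values are illustrative assumptions; the paper itself works with closed-form results rather than simulation.

```python
import numpy as np
from scipy import stats

def acc_coverage(n, length, mu0=0.0, kappa0=1.0, alpha0=2.0, beta0=2.0,
                 n_sims=2000, rng=None):
    """Monte Carlo estimate of the average coverage of a fixed-length
    posterior credible interval for a normal mean with unknown variance,
    under a conjugate normal-inverse-gamma prior (hypothetical values)."""
    rng = np.random.default_rng(0) if rng is None else rng
    cover = np.empty(n_sims)
    for i in range(n_sims):
        # Draw (mu, sigma^2) from the prior, then a dataset from the model.
        sigma2 = stats.invgamma.rvs(alpha0, scale=beta0, random_state=rng)
        mu = rng.normal(mu0, np.sqrt(sigma2 / kappa0))
        x = rng.normal(mu, np.sqrt(sigma2), size=n)
        # Conjugate update: the marginal posterior of mu is a scaled t.
        xbar = x.mean()
        kappa_n = kappa0 + n
        mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
        alpha_n = alpha0 + n / 2
        beta_n = (beta0 + 0.5 * ((x - xbar) ** 2).sum()
                  + kappa0 * n * (xbar - mu0) ** 2 / (2 * kappa_n))
        t = stats.t(df=2 * alpha_n, loc=mu_n,
                    scale=np.sqrt(beta_n / (alpha_n * kappa_n)))
        # Posterior probability of the fixed-length interval centred at mu_n.
        cover[i] = t.cdf(mu_n + length / 2) - t.cdf(mu_n - length / 2)
    return cover.mean()

# ACC sample size: smallest n whose average coverage reaches the target.
n = 2
while acc_coverage(n, length=1.0) < 0.95:
    n += 1
print("ACC sample size:", n)
```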

2.
In this paper, we develop a matching prior for the product of means in several normal distributions with unrestricted means and unknown variances. For this problem, properly assigning priors for the product of normal means has been an issue because of the presence of nuisance parameters. Matching priors, which are priors matching the posterior probabilities of certain regions with their frequentist coverage probabilities, are commonly used but difficult to derive in this problem. We develop first-order probability matching priors for this problem; however, the resulting matching priors are improper. Thus, we apply an alternative method and derive a matching prior based on a modification of the profile likelihood. Simulation studies show that the derived matching prior performs better than the uniform prior and Jeffreys' prior in meeting the target coverage probabilities, and meets the target coverage probabilities well even for small sample sizes. In addition, to evaluate the validity of the proposed matching prior, the Bayesian credible interval for the product of normal means using the matching prior is compared to Bayesian credible intervals using the uniform prior and Jeffreys' prior, and to the confidence interval using the method of Yfantis and Flatman.

3.
There exist various methods for providing confidence intervals for unknown parameters of interest on the basis of a random sample. Generally, the bounds are derived from a system of non-linear equations. In this article, we present a general solution for obtaining an unbiased confidence interval with confidence coefficient 1 − α in one-parameter exponential families. We also discuss two Bayesian credible intervals, the highest posterior density (HPD) and relative surprise (RS) credible intervals. Standard criteria such as interval length and coverage probability are used to assess the performance of the HPD and RS credible intervals. Simulation studies and real data applications are presented for illustrative purposes.
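As an illustration of the HPD construction in a one-parameter exponential family, the sketch below computes the shortest interval with a given posterior probability for a unimodal posterior by searching over the lower tail probability. The exponential-data/gamma-prior setup and hyperparameter values are assumptions for the example; the paper's relative surprise intervals are not reproduced here.

```python
import numpy as np
from scipy import stats, optimize

def hpd_interval(dist, alpha=0.05):
    """Numeric HPD interval for a unimodal continuous posterior `dist`
    (a frozen scipy.stats distribution): among all intervals with
    posterior probability 1 - alpha, pick the shortest one."""
    def length(p):  # interval [q(p), q(p + 1 - alpha)]
        return dist.ppf(p + 1 - alpha) - dist.ppf(p)
    res = optimize.minimize_scalar(length, bounds=(1e-9, alpha - 1e-9),
                                   method="bounded")
    return dist.ppf(res.x), dist.ppf(res.x + 1 - alpha)

# Example: exponential data with a conjugate Gamma(a0, b0) prior on the
# rate lambda; the posterior is Gamma(a0 + n, b0 + sum(x)).
rng = np.random.default_rng(1)
x = rng.exponential(scale=1 / 2.0, size=30)           # true rate 2.0
a0, b0 = 1.0, 1.0                                     # illustrative prior
posterior = stats.gamma(a0 + len(x), scale=1 / (b0 + x.sum()))
lo, hi = hpd_interval(posterior)
print(f"95% HPD for lambda: ({lo:.3f}, {hi:.3f})")
equal = posterior.ppf([0.025, 0.975])                 # equal-tailed, for contrast
print(f"95% equal-tailed:   ({equal[0]:.3f}, {equal[1]:.3f})")
```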

4.
We use cumulants to derive Bayesian credible intervals for wavelet regression estimates. The first four cumulants of the posterior distribution of the estimates are expressed in terms of the observed data and integer powers of the mother wavelet functions. These powers are closely approximated by linear combinations of wavelet scaling functions at an appropriate finer scale. Hence, a suitable modification of the discrete wavelet transform allows the posterior cumulants to be found efficiently for any given data set. Johnson transformations then yield the credible intervals themselves. Simulations show that these intervals have good coverage rates, even when the underlying function is inhomogeneous, where standard methods fail. In the case where the curve is smooth, the performance of our intervals remains competitive with established nonparametric regression methods.

5.
This paper describes Bayesian inference and prediction for the two-parameter Weibull distribution when the data are Type-II censored. The aim of this paper is twofold. First, we consider Bayesian inference of the unknown parameters under different loss functions. The Bayes estimates cannot be obtained in closed form, so we use a Gibbs sampling procedure to draw Markov chain Monte Carlo (MCMC) samples, which are used to compute the Bayes estimates and to construct symmetric credible intervals. Second, we consider Bayes prediction of future order statistics based on the observed sample. We consider the posterior predictive density of the future observations and construct a predictive interval with a given coverage probability. Monte Carlo simulations are performed to compare the different methods, and one data analysis is performed for illustration purposes.
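For readers who want a concrete starting point, here is a minimal sketch of posterior sampling for a Weibull model under Type-II censoring. It uses a generic random-walk Metropolis sampler on the log-parameters rather than the paper's Gibbs scheme, with a flat prior on the (shape, scale) pair; the data, prior, and tuning constants are all illustrative assumptions.

```python
import numpy as np

def log_post(theta, x_obs, n):
    """Log posterior for Weibull(shape k, scale lam) under Type-II censoring:
    only the r smallest failure times x_obs (sorted) out of n units are seen.
    A flat prior on (k, lam) is assumed; theta = (log k, log lam), so the
    Jacobian term theta.sum() appears below."""
    k, lam = np.exp(theta)
    r = len(x_obs)
    loglik = (r * (np.log(k) - k * np.log(lam))
              + (k - 1) * np.log(x_obs).sum()
              - ((x_obs / lam) ** k).sum()
              - (n - r) * (x_obs[-1] / lam) ** k)   # survivors beyond x_(r)
    return loglik + theta.sum()

# Random-walk Metropolis on the log scale (a stand-in for the Gibbs sampler).
rng = np.random.default_rng(2)
n, r = 40, 30
x = np.sort(rng.weibull(1.5, size=n) * 2.0)[:r]     # simulated; true (1.5, 2.0)
theta = np.zeros(2)
lp = log_post(theta, x, n)
keep = []
for it in range(20000):
    prop = theta + 0.1 * rng.standard_normal(2)
    lp_prop = log_post(prop, x, n)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it >= 5000:                                  # discard burn-in
        keep.append(np.exp(theta))
keep = np.array(keep)
print("95% credible intervals (shape, scale):")
print(np.percentile(keep, [2.5, 97.5], axis=0))
```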

6.
Recently, the Rayleigh distribution has received considerable attention in the statistical literature. In this article, we consider point and interval estimation of functions of the unknown parameters of a two-parameter Rayleigh distribution. First, we obtain the maximum likelihood estimators (MLEs) of the unknown parameters. The MLEs cannot be obtained in explicit form, and we propose to compute them by maximizing the profile log-likelihood function. We further consider Bayesian inference for the unknown parameters. The Bayes estimates and the associated credible intervals cannot be obtained in closed form, so we use the importance sampling technique to approximate them. For comparison purposes, we also use the exact method to compute the Bayes estimates and the corresponding credible intervals. Monte Carlo simulations are performed to compare the performances of the proposed methods. We further consider the Bayes prediction problem based on the observed samples and provide the appropriate predictive intervals. A data example is provided for illustrative purposes.
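The profile-likelihood step can be sketched in a few lines. Assuming the two-parameter Rayleigh density f(x; μ, λ) = 2λ(x − μ) exp(−λ(x − μ)²) for x > μ (the parameterisation is an assumption here), the scale parameter has the conditional MLE λ̂(μ) = n / Σ(x_i − μ)², so only μ needs a one-dimensional search:

```python
import numpy as np
from scipy import optimize

def profile_loglik(mu, x):
    """Profile log-likelihood of the two-parameter Rayleigh density
    f(x; mu, lam) = 2*lam*(x - mu)*exp(-lam*(x - mu)^2), x > mu,
    with lam profiled out at its conditional MLE n / sum((x - mu)^2)."""
    d = x - mu
    lam_hat = len(x) / np.sum(d ** 2)
    return (len(x) * (np.log(2.0) + np.log(lam_hat)) + np.log(d).sum()
            - lam_hat * np.sum(d ** 2))

rng = np.random.default_rng(3)
x = 1.0 + rng.rayleigh(scale=0.5, size=50)   # true mu = 1, lam = 2 here
# Maximise the profile over mu on (-inf, min(x)); then recover lam.
res = optimize.minimize_scalar(lambda m: -profile_loglik(m, x),
                               bounds=(x.min() - 10, x.min() - 1e-6),
                               method="bounded")
mu_hat = res.x
lam_hat = len(x) / np.sum((x - mu_hat) ** 2)
print(f"MLEs: mu = {mu_hat:.3f}, lam = {lam_hat:.3f}")
```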

7.
The Maxwell (or Maxwell–Boltzmann) distribution was originally developed to solve problems in physics and chemistry. It has also proved useful for analysing lifetime data. For this distribution, we consider point and interval estimation procedures in the presence of Type-I progressively hybrid censored data. We obtain the maximum likelihood estimator of the parameter and provide asymptotic and bootstrap confidence intervals for it. The Bayes estimates and the Bayesian credible and highest posterior density intervals are obtained using an inverted gamma prior. The expression for the expected number of failures in the life testing experiment is also derived. The results are illustrated through a simulation study, and an analysis of a real data set is presented.

8.
This paper presents the results of a study of the robustness of posterior estimators of the factor loading matrix, the factor scores, and the disturbance covariance matrix (the main model parameters) in a Bayesian factor analysis with respect to variations in the values of the parameters of their prior distributions (the hyperparameters). We adopt the ε-contamination model of Berger and Berliner (1986) to generate prior distributions whose hyperparameters reflect small variations in the elements of the uncontaminated hyperparameters, and we use directional derivatives to examine the variation of the uncontaminated estimators with respect to changes in the values of the hyperparameters, in the directions of the main model parameters. Several matrix norms are used to measure the closeness of the resulting values. We illustrate the results with a numerical example.

9.
In this article, the Brier score is used to investigate the importance of clustering for the frailty survival model. For this purpose, two versions of the Brier score are constructed: a “conditional Brier score” and a “marginal Brier score.” The two versions show separately how the clustering effects and the covariate effects affect the predictive ability of the frailty model. Using a Bayesian and a likelihood approach, point estimates and 95% credible/confidence intervals are computed. The estimation properties of both procedures are evaluated in an extensive simulation study for both versions of the Brier score. Further, a validation strategy is developed to calculate an internally validated point estimate and credible/confidence interval. The combined developments are applied to a dental dataset.

10.
In Bayesian analysis, people usually report the highest posterior density (HPD) credible interval as an interval estimate of an unknown parameter. However, when the unknown parameter is a nonnegative normal mean, the Bayesian HPD credible interval under the uniform prior has quite a low minimum frequentist coverage probability. To enhance the minimum frequentist coverage probability of a credible interval, I propose a new method of reporting the Bayesian credible interval. Numerical results show that the newly reported credible interval has a much higher minimum frequentist coverage probability than the HPD credible interval.
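The phenomenon is easy to reproduce. In the simplest version of the setup, a single observation X ~ N(θ, 1) with θ ≥ 0 and a flat prior on [0, ∞), the posterior is a normal truncated to [0, ∞), and the sketch below estimates the frequentist coverage of the 95% HPD interval at several true values of θ. The one-observation model and simulation sizes are assumptions for illustration; the paper's adjusted interval is not implemented here.

```python
import numpy as np
from scipy import stats, optimize

def hpd_truncnorm(x, alpha=0.05):
    """95% HPD interval for theta >= 0 when X ~ N(theta, 1) and the prior
    is flat on [0, inf): the posterior is N(x, 1) truncated to [0, inf)."""
    post = stats.truncnorm(a=-x, b=np.inf, loc=x, scale=1.0)
    def length(p):  # interval [q(p), q(p + 1 - alpha)]
        return post.ppf(p + 1 - alpha) - post.ppf(p)
    res = optimize.minimize_scalar(length, bounds=(1e-9, alpha - 1e-9),
                                   method="bounded")
    return post.ppf(res.x), post.ppf(res.x + 1 - alpha)

# Frequentist coverage of the HPD interval as a function of the true theta.
rng = np.random.default_rng(4)
for theta in [0.0, 0.5, 1.0, 2.0]:
    xs = rng.normal(theta, 1.0, size=2000)
    hits = sum(lo <= theta <= hi for lo, hi in (hpd_truncnorm(x) for x in xs))
    print(f"theta = {theta:.1f}: coverage ~ {hits / len(xs):.3f}")
```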

11.
In this paper, we consider the problem of making statistical inference for a truncated normal distribution under progressive Type-I interval censoring. We obtain maximum likelihood estimators of the unknown parameters using the expectation-maximization algorithm and subsequently compute the corresponding midpoint estimates of the parameters. Estimation based on the probability plot method is also considered. Asymptotic confidence intervals for the unknown parameters are constructed based on the observed Fisher information matrix. We obtain Bayes estimators of the parameters with respect to informative and non-informative prior distributions under squared error and linex loss functions. We compute these estimates using the importance sampling procedure. The highest posterior density intervals of the unknown parameters are constructed as well. We present a Monte Carlo simulation study to compare the performance of the proposed point and interval estimators. Analysis of a real data set is also performed for illustration purposes. Finally, inspection times and optimal censoring plans based on the expected Fisher information matrix are discussed.

12.
This paper obtains Bayes estimators of the Rayleigh parameter and their associated risks based on a conjugate prior (the square-root inverted gamma prior) with respect to both a symmetric loss function (squared error loss) and an asymmetric loss function (the precautionary loss function). We also derive the highest posterior density (HPD) interval for the Rayleigh parameter as well as the HPD prediction intervals for a future observation from this distribution. An illustrative example testing how well the Rayleigh distribution fits a real data set is presented. Finally, Monte Carlo simulations are performed to compare the performances of the Bayes estimates under different conditions.
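Under the common parameterisation f(x; λ) = (x/λ) exp(−x²/(2λ)) with λ = σ², an inverted gamma prior on λ is conjugate, and both Bayes estimates have closed forms: the posterior mean under squared error loss, and √E[λ² | x] under the precautionary loss L(λ, d) = (d − λ)²/d. The sketch below assumes this parameterisation and illustrative hyperparameters; it may differ in detail from the paper's square-root inverted gamma setup.

```python
import numpy as np
from scipy import stats, optimize

# Rayleigh model f(x; lam) = (x / lam) * exp(-x^2 / (2 lam)) with lam = sigma^2.
# An inverted gamma prior IG(a0, b0) on lam is conjugate: the posterior is
# IG(a0 + n, b0 + sum(x^2)/2).  (This parameterisation is an assumption.)
rng = np.random.default_rng(5)
x = rng.rayleigh(scale=1.5, size=40)          # true lam = 1.5^2 = 2.25
a0, b0 = 2.0, 2.0                             # illustrative hyperparameters
a_n, b_n = a0 + len(x), b0 + 0.5 * np.sum(x ** 2)
post = stats.invgamma(a_n, scale=b_n)

# Bayes estimate under squared error loss: the posterior mean.
lam_se = b_n / (a_n - 1)
# Under precautionary loss L(lam, d) = (d - lam)^2 / d: sqrt(E[lam^2 | x]).
lam_prec = b_n / np.sqrt((a_n - 1) * (a_n - 2))

# HPD interval: the shortest interval with 95% posterior probability.
def length(p, alpha=0.05):
    return post.ppf(p + 1 - alpha) - post.ppf(p)
res = optimize.minimize_scalar(length, bounds=(1e-9, 0.05 - 1e-9),
                               method="bounded")
lo, hi = post.ppf(res.x), post.ppf(res.x + 0.95)
print(f"SE estimate {lam_se:.3f}, precautionary {lam_prec:.3f}, "
      f"HPD ({lo:.3f}, {hi:.3f})")
```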

13.
In this article we consider the sample size determination problem in the context of robust Bayesian parameter estimation of the Bernoulli model. Following a robust approach, we consider classes of conjugate beta prior distributions for the unknown parameter. We regard inference as robust if posterior quantities of interest (such as point estimates and limits of credible intervals) do not change too much as the prior varies over the selected classes of priors. For the sample size problem, we consider criteria based on the predictive distributions of the lower bound, upper bound, and range of the posterior quantity of interest. The sample size is selected so that, before observing the data, one is confident of observing a small value for the posterior range and, depending on design goals, a large (small) value of the lower (upper) bound of the quantity of interest. We also discuss relationships with, and comparisons to, non-robust and non-informative Bayesian methods.
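As a concrete instance of the range criterion, take the posterior mean of the Bernoulli parameter as the quantity of interest and let the Beta(a, b) prior range over a rectangle of hyperparameters. The posterior mean (a + s)/(a + b + n) is monotone in a and b, so its bounds over the class are available in closed form, and the predictive probability that the range stays below a tolerance can be simulated. The class, design prior, and tolerance below are illustrative assumptions:

```python
import numpy as np

def posterior_mean_range(s, n, a_range=(1.0, 4.0), b_range=(1.0, 4.0)):
    """Bounds on the posterior mean of a Bernoulli parameter as the Beta(a, b)
    prior varies over a rectangle of hyperparameters (an illustrative class):
    mean = (a + s) / (a + b + n), increasing in a and decreasing in b."""
    lo = (a_range[0] + s) / (a_range[0] + b_range[1] + n)   # small a, large b
    hi = (a_range[1] + s) / (a_range[1] + b_range[0] + n)   # large a, small b
    return lo, hi

# Predictive check of the range criterion: simulate datasets of size n under
# a design prior and ask how often the posterior-mean range stays below 0.05.
rng = np.random.default_rng(6)
for n in [25, 50, 100, 200]:
    ok = 0
    for _ in range(4000):
        p = rng.beta(2.0, 2.0)                  # design prior (an assumption)
        s = rng.binomial(n, p)
        lo, hi = posterior_mean_range(s, n)
        ok += (hi - lo) < 0.05
    print(f"n = {n}: P(range < 0.05) ~ {ok / 4000:.3f}")
```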

14.
In this paper, we consider the simple step-stress model for a two-parameter exponential distribution when both parameters are unknown and the data are Type-II censored. It is assumed that under the two different stress levels only the scale parameter changes, while the location parameter remains unchanged. It is observed that the maximum likelihood estimators do not always exist. We obtain the maximum likelihood estimates of the unknown parameters whenever they exist. We provide the exact conditional distributions of the maximum likelihood estimators of the scale parameters. Since the construction of exact confidence intervals from the conditional distributions is very difficult, we propose to use the observed Fisher information matrix for this purpose and also suggest using the bootstrap method for constructing confidence intervals. Bayes estimates and associated credible intervals are obtained using the importance sampling technique. Extensive simulations are performed to compare the performances of the different confidence and credible intervals in terms of their coverage percentages and average lengths. The performance of the bootstrap confidence intervals is quite satisfactory even for small sample sizes.

15.
We consider the classic problem of interval estimation of a proportion p based on binomial sampling. The ‘exact’ Clopper–Pearson confidence interval for p is known to be unnecessarily conservative. We propose coverage adjustments of the Clopper–Pearson interval that incorporate prior or posterior beliefs into the interval. Using heatmap-type plots for comparing confidence intervals, we show that the coverage-adjusted intervals have satisfying coverage and shorter expected lengths than competing intervals found in the literature.
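For reference, the baseline Clopper–Pearson interval has a standard closed form via beta quantiles, and its conservativeness can be verified exactly by summing binomial probabilities. The sketch below (the sample size and the grid of p values are chosen arbitrarily) shows coverage sitting above the nominal 95% level; the paper's coverage adjustments are not implemented here:

```python
import numpy as np
from scipy import stats

def clopper_pearson(x, n, alpha=0.05):
    """'Exact' Clopper-Pearson interval via the beta-quantile representation."""
    lo = stats.beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

def exact_coverage(p, n, alpha=0.05):
    """Exact frequentist coverage at p: sum the binomial probabilities of
    the outcomes x whose interval contains p."""
    xs = np.arange(n + 1)
    contains = np.array([lo <= p <= hi for lo, hi in
                         (clopper_pearson(x, n, alpha) for x in xs)])
    return stats.binom.pmf(xs[contains], n, p).sum()

# The interval is conservative: coverage sits above the nominal 95% level.
for p in [0.1, 0.3, 0.5]:
    print(f"p = {p}: coverage = {exact_coverage(p, 50):.4f}")
```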

16.
In response surface methodology, one is usually interested in estimating the optimal conditions based on a small number of experimental runs designed to optimally sample the experimental space. Typically, regression models are constructed from the experimental data and interrogated to provide a point estimate of the independent variable settings predicted to optimize the response. Unfortunately, these point estimates are rarely accompanied by uncertainty intervals. Though classical frequentist confidence intervals can be constructed for unconstrained quadratic models, higher-order, constrained, or nonlinear models are often encountered in practice. Existing techniques for constructing uncertainty estimates in such situations have not been implemented widely, due in part to the need to set adjustable parameters or because of limited or difficult applicability to constrained or nonlinear problems. To address these limitations, a Bayesian method of determining credible intervals for response surface optima was developed. The approach shows good coverage probabilities on two test problems, is straightforward to implement, and is readily applicable to the kind of constrained and/or nonlinear problems that frequently appear in practice.

17.
For the hierarchical Poisson and gamma model, we calculate the Bayes posterior estimator of the parameter of the Poisson distribution under Stein's loss function, which penalizes gross overestimation and gross underestimation equally, and the corresponding posterior expected Stein's loss (PESL). We also obtain the Bayes posterior estimator of the parameter under squared error loss and the corresponding PESL. Moreover, we obtain the empirical Bayes estimators of the parameter of the Poisson distribution with a conjugate gamma prior by two methods. In numerical simulations, we illustrate the two inequalities of the Bayes posterior estimators and the PESLs; that the moment estimators and the maximum likelihood estimators (MLEs) are consistent estimators of the hyperparameters; and the goodness-of-fit of the model to the simulated data. The numerical results indicate that the MLEs are better than the moment estimators when estimating the hyperparameters. Finally, we use attendance data on 314 high school juniors from two urban high schools to illustrate our theoretical studies.
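Both estimators have closed forms for a gamma posterior. Under Stein's loss L(θ, d) = d/θ − log(d/θ) − 1, the Bayes rule is d = 1/E[1/θ | x], which for a Gamma(shape a, rate b) posterior equals (a − 1)/b, strictly below the posterior mean a/b used under squared error loss. The sketch below (hyperparameters and data are illustrative assumptions) checks both the estimator inequality and the corresponding PESL inequality by Monte Carlo:

```python
import numpy as np
from scipy import stats

# Poisson(theta) likelihood with a conjugate Gamma(a0, rate b0) prior:
# the posterior is Gamma(a0 + sum(x), rate b0 + n).
rng = np.random.default_rng(7)
x = rng.poisson(3.0, size=25)
a0, b0 = 2.0, 1.0                       # illustrative hyperparameters
a_n, b_n = a0 + x.sum(), b0 + len(x)

# Bayes estimator under Stein's loss: d = 1 / E[1/theta | x] = (a_n - 1) / b_n
# for a gamma posterior; under squared error it is the posterior mean a_n / b_n.
d_stein = (a_n - 1) / b_n
d_se = a_n / b_n
print(f"Stein estimate {d_stein:.4f} < posterior mean {d_se:.4f}")

# Posterior expected Stein's loss (PESL) of each estimator, by Monte Carlo.
theta = stats.gamma.rvs(a_n, scale=1 / b_n, size=100000, random_state=rng)

def pesl(d):
    return np.mean(d / theta - np.log(d / theta) - 1)

print(f"PESL(Stein) {pesl(d_stein):.5f} <= PESL(mean) {pesl(d_se):.5f}")
```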

18.
This paper considers the statistical analysis of a competing risks model under Type-I progressively hybrid censoring from a Weibull distribution. We derive the maximum likelihood estimates and the approximate maximum likelihood estimates of the unknown parameters. We then use the bootstrap method to construct confidence intervals. Based on a non-informative prior, a sampling algorithm using the acceptance–rejection method is presented to obtain the Bayes estimates, and the Monte Carlo method is employed to construct the highest posterior density credible intervals. Simulation results are provided to show the effectiveness of all the methods discussed here, and one data set is analyzed.

19.
Approximate Bayesian computation (ABC) is an approach to sampling from an approximate posterior distribution in the presence of a computationally intractable likelihood function. A common implementation is based on simulating model, parameter, and dataset triples from the prior, and then accepting as samples from the approximate posterior those model and parameter pairs for which the corresponding dataset, or a summary of that dataset, is ‘close’ to the observed data. Closeness is typically determined through a distance measure and a kernel scale parameter. Appropriate choice of that parameter is important in producing a good-quality approximation. This paper proposes diagnostic tools for the choice of the kernel scale parameter based on assessing the coverage property, which asserts that credible intervals have the correct coverage levels in appropriately designed simulation settings. We provide theoretical results on coverage for both model and parameter inference, and adapt these into diagnostics for the ABC context. We re-analyse a study on human demographic history to determine whether the adopted posterior approximation was appropriate. Code implementing the proposed methodology is freely available in the R package abctools.
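To fix ideas, here is a minimal ABC rejection sampler (a generic sketch, unrelated to the abctools implementation) for a normal mean, using the sample mean as the summary statistic. Shrinking the kernel scale eps tightens the approximation at the cost of fewer accepted draws, which is exactly the trade-off the paper's diagnostics target; the model, prior, and eps values are assumptions for illustration:

```python
import numpy as np

def abc_rejection(x_obs, eps, n_draws=200000, rng=None):
    """Minimal ABC rejection sampler for the mean of N(mu, 1) data with a
    N(0, 4) prior on mu.  The summary statistic is the sample mean; since its
    sampling distribution is N(mu, 1/n), we simulate it directly instead of
    simulating full datasets (a shortcut valid only for this toy model)."""
    rng = np.random.default_rng(8) if rng is None else rng
    s_obs = x_obs.mean()
    mu = rng.normal(0.0, 2.0, size=n_draws)             # draws from the prior
    s_sim = rng.normal(mu, 1.0 / np.sqrt(len(x_obs)))   # simulated summaries
    return mu[np.abs(s_sim - s_obs) < eps]              # accept if 'close'

rng = np.random.default_rng(9)
x = rng.normal(1.0, 1.0, size=50)
for eps in [0.5, 0.1, 0.02]:       # the kernel scale drives the approximation
    post = abc_rejection(x, eps, rng=rng)
    lo, hi = np.percentile(post, [2.5, 97.5])
    print(f"eps = {eps}: {post.size} accepted, "
          f"95% interval ({lo:.3f}, {hi:.3f})")
```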

20.
The paper develops objective priors for the correlation coefficient of the bivariate normal distribution. The criterion used is the asymptotic matching of the coverage probabilities of Bayesian credible intervals with the corresponding frequentist coverage probabilities. The paper uses various matching criteria, namely quantile matching, highest posterior density matching, and matching via inversion of test statistics. Each matching criterion leads to a different prior for the parameter of interest. We evaluate their performance by comparing credible intervals through simulation studies. In addition, inference through several likelihood-based methods is discussed.
