Similar Literature (20 results)
1.
It is often of interest to find the maximum or near maxima among a set of vector‐valued parameters in a statistical model; in the case of disease mapping, for example, these correspond to relative‐risk “hotspots” where public‐health intervention may be needed. The general problem is one of estimating nonlinear functions of the ensemble of relative risks, but biased estimates result if posterior means are simply substituted into these nonlinear functions. The authors obtain better estimates of extrema from a new, weighted ranks squared error loss function. The derivation of these Bayes estimators assumes a hidden‐Markov random‐field model for relative risks, and their behaviour is illustrated with real and simulated data.

2.
We consider point and interval estimation of the unknown parameters of a generalized inverted exponential distribution in the presence of hybrid censoring. The maximum likelihood estimates are obtained using the EM algorithm. We then compute the Fisher information matrix using the missing value principle. Bayes estimates are derived under squared error and general entropy loss functions. Furthermore, approximate Bayes estimates are obtained using the Tierney–Kadane method as well as an importance sampling approach. Asymptotic and highest posterior density intervals are also constructed. The proposed estimates are compared numerically using Monte Carlo simulations, and a real data set is analyzed for illustrative purposes.
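Both loss functions require posterior expectations that have no closed form here, which is why simulation approximations such as importance sampling are used. A minimal sketch of the importance sampling step, substituting a hypothetical one-parameter exponential-rate posterior for the intractable GIED posterior (the prior, the proposal, and all settings below are illustrative assumptions, not the paper's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def log_post(theta, data):
    # Hypothetical unnormalized log-posterior: exponential likelihood
    # with a vague gamma prior, standing in for the GIED posterior
    # under hybrid censoring.
    if theta <= 0:
        return -np.inf
    return (stats.gamma.logpdf(theta, a=0.1, scale=10.0)
            + np.sum(stats.expon.logpdf(data, scale=1.0 / theta)))

data = rng.exponential(scale=0.5, size=30)

# Gamma importance proposal centred near the MLE of the rate.
proposal = stats.gamma(a=30, scale=(1.0 / data.mean()) / 30)
draws = proposal.rvs(size=20_000, random_state=rng)
logw = np.array([log_post(t, data) for t in draws]) - proposal.logpdf(draws)
w = np.exp(logw - logw.max())
w /= w.sum()

# Squared error loss -> posterior mean; general entropy loss with
# shape q -> (E[theta^(-q) | data])^(-1/q).
q = 1.0
print("SEL estimate:", np.sum(w * draws))
print("GEL estimate:", np.sum(w * draws ** (-q)) ** (-1.0 / q))
```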

3.
For a normal model with a conjugate prior, we provide an in-depth examination of the effects of the hyperparameters on the long-run frequentist properties of posterior point and interval estimates. Under an assumed sampling model for the data-generating mechanism, we examine how hyperparameter values affect the mean-squared error (MSE) of posterior means and the true coverage of credible intervals. We develop two types of hyperparameter optimality: MSE-optimal hyperparameters minimize the MSE of posterior point estimates, and credible-interval-optimal hyperparameters yield credible intervals of minimum length that still retain nominal coverage. A poor choice of hyperparameters has worse consequences for credible interval coverage than for the MSE of posterior point estimates. We give an example to demonstrate how our results can be used to evaluate the potential consequences of hyperparameter choices.
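The trade-off is easy to see by simulation. A minimal sketch for the known-variance normal model with a conjugate normal prior, computing the frequentist MSE of the posterior mean and the true coverage of the equal-tailed 95% credible interval for one hypothetical choice of hyperparameters (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# X_i ~ N(theta, sigma^2) with sigma known; conjugate N(mu0, tau0^2)
# prior on theta. The posterior mean is a precision-weighted average
# of the prior mean and the sample mean.
theta_true, sigma, n = 2.0, 1.0, 10
mu0, tau0 = 0.0, 1.0                      # hyperparameters under study

reps = 100_000
xbar = rng.normal(theta_true, sigma / np.sqrt(n), size=reps)

post_prec = 1 / tau0**2 + n / sigma**2
post_mean = (mu0 / tau0**2 + n * xbar / sigma**2) / post_prec
post_sd = np.sqrt(1 / post_prec)

mse = np.mean((post_mean - theta_true) ** 2)            # frequentist MSE
lo, hi = post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd
coverage = np.mean((lo < theta_true) & (theta_true < hi))

print(f"MSE of posterior mean: {mse:.4f} (sample mean gives {sigma**2/n:.4f})")
print(f"true coverage of 95% credible interval: {coverage:.3f}")
```

When the prior mean sits far from theta_true, shrinkage inflates the MSE and the credible interval undercovers; sweeping mu0 and tau0 over a grid reproduces the qualitative behaviour the paper studies.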

4.
We use cumulants to derive Bayesian credible intervals for wavelet regression estimates. The first four cumulants of the posterior distribution of the estimates are expressed in terms of the observed data and integer powers of the mother wavelet functions. These powers are closely approximated by linear combinations of wavelet scaling functions at an appropriate finer scale. Hence, a suitable modification of the discrete wavelet transform allows the posterior cumulants to be found efficiently for any given data set. Johnson transformations then yield the credible intervals themselves. Simulations show that these intervals have good coverage rates, even when the underlying function is inhomogeneous, where standard methods fail. In the case where the curve is smooth, the performance of our intervals remains competitive with established nonparametric regression methods.

5.
In this paper, we argue that replacing the expectation of the loss in statistical decision theory with the median of the loss leads to a viable and useful alternative to conventional risk minimization, particularly because it can be used with heavy-tailed distributions. We investigate three possible definitions for such medloss estimators and derive examples of them in several standard settings. The medloss definition based on the posterior distribution is preferable to the other two, which do not permit optimization over large classes of estimators. Median-loss-minimizing estimates often yield improved performance, are as resistant to outliers as the usual robust estimates, and are insensitive to the specific loss used to form them. In simulations with the posterior medloss formulation, we show how the estimates can be obtained numerically and that they can have better robustness properties than estimates derived from risk minimization.
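Given posterior draws, a posterior medloss estimate can be computed by direct search: minimize the median of the loss over a grid of candidate actions. A minimal sketch with a hypothetical heavy-tailed posterior and squared error loss (all settings illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical heavy-tailed posterior draws (shifted Student-t, df=2).
post_draws = rng.standard_t(df=2, size=5_000) + 3.0

def median_loss(a):
    # Posterior median of the squared error loss at action a.
    return np.median((post_draws - a) ** 2)

grid = np.linspace(np.quantile(post_draws, 0.01),
                   np.quantile(post_draws, 0.99), 2_001)
medloss_est = grid[np.argmin([median_loss(a) for a in grid])]

print("posterior mean  :", post_draws.mean())   # risk-minimizing estimate
print("medloss estimate:", medloss_est)
```

With df=2 the posterior mean exists but is noisy from draw to draw; the medloss estimate is much less affected by the tails.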

6.
Empirical Bayes estimates of the local false discovery rate can reflect uncertainty about the estimated prior by supplementing their Bayesian posterior probabilities with confidence levels as posterior probabilities. This use of coherent fiducial inference with hierarchical models generates set estimators that propagate uncertainty to varying degrees. Some of the set estimates approach estimates from plug-in empirical Bayes methods when the number of comparisons is large, and can come close to the usual confidence sets when the number of comparisons is sufficiently small.

7.

Bayesian analysis often concerns an evaluation of models with different dimensionality, as is necessary in, for example, model selection or mixture models. To facilitate this evaluation, transdimensional Markov chain Monte Carlo (MCMC) relies on sampling a discrete indexing variable to estimate the posterior model probabilities. However, little attention has been paid to the precision of these estimates. If only a few switches occur between the models in the transdimensional MCMC output, precision may be low and assessment based on the assumption of independent samples misleading. Here, we propose a new method to estimate the precision based on the observed transition matrix of the model-indexing variable. Assuming a first-order Markov model, the method samples from the posterior of the stationary distribution. This allows assessment of the uncertainty in the estimated posterior model probabilities, model ranks, and Bayes factors. Moreover, the method provides an estimate of the effective sample size of the MCMC output. In two model selection examples, we show that the proposed approach provides a good assessment of the uncertainty associated with the estimated posterior model probabilities.
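The construction can be sketched in a few lines: tabulate the observed transitions of the model index, place independent Dirichlet posteriors on the rows of the transition matrix, and push each draw through to its stationary distribution. A toy version for two models (the stand-in chain, the Dirichlet weights, and all settings are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

K = 2
chain = rng.choice(K, size=5_000, p=[0.7, 0.3])   # stand-in model indices
counts = np.zeros((K, K))
for a, b in zip(chain[:-1], chain[1:]):
    counts[a, b] += 1                              # observed transitions

def stationary(P):
    # Left eigenvector of P for eigenvalue 1, normalized to sum to 1.
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# Posterior draws of the transition matrix (Dirichlet rows), each
# mapped to its stationary distribution.
draws = np.array([
    stationary(np.vstack([rng.dirichlet(counts[k] + 1.0 / K)
                          for k in range(K)]))
    for _ in range(2_000)
])
p0 = draws[:, 0]
print(f"P(model 0 | data) approx {p0.mean():.3f} +/- {p0.std():.3f}")
```

The spread of the stationary-distribution draws, rather than a naive binomial standard error, quantifies the Monte Carlo uncertainty when model switches are rare.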


8.
Bayesian statistical inference relies on the posterior distribution. Depending on the model, the posterior can be more or less difficult to derive. In recent years, there has been much interest in complex settings where the likelihood is analytically intractable. In such situations, approximate Bayesian computation (ABC) provides an attractive way of carrying out Bayesian inference. To obtain reliable posterior estimates, however, it is important to keep the approximation errors in ABC small, and the choice of an appropriate set of summary statistics plays a crucial role in this effort. Here, we report the development of a new algorithm, based on least angle regression, for choosing summary statistics. In two population genetic examples, the performance of the new algorithm is better than that of a previously proposed approach based on partial least squares.
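For context, the summary statistics enter ABC through the acceptance step. A minimal rejection-ABC sketch (this illustrates plain rejection ABC with hand-picked summaries, not the least angle regression selection itself; the model and tolerance are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

obs = rng.normal(3.0, 1.0, size=50)              # "observed" data
s_obs = np.array([obs.mean(), obs.std()])        # candidate summaries

def simulate(theta):
    return rng.normal(theta, 1.0, size=50)

n_sim, eps = 100_000, 0.2
theta_prior = rng.uniform(-10, 10, size=n_sim)   # prior draws
kept = []
for th in theta_prior:
    x = simulate(th)
    s = np.array([x.mean(), x.std()])
    if np.linalg.norm(s - s_obs) < eps:          # accept if summaries close
        kept.append(th)

kept = np.array(kept)
print(f"accepted {kept.size} draws; ABC posterior mean approx {kept.mean():.2f}")
```

Adding uninformative or redundant summaries inflates the distance and degrades the approximation, which is exactly the problem the summary-selection algorithm addresses.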

9.
The term ‘small area’ or ‘small domain’ commonly denotes a small geographical area or a small subpopulation of people within a large area. Small area estimation is an important topic in survey sampling because of the growing demand for better statistical inference for small areas in public or private surveys. In small area estimation problems the focus is on how to borrow strength across areas in order to develop a reliable estimator that makes use of available auxiliary information. Some traditional methods for small area problems, such as empirical best linear unbiased prediction, borrow strength through linear models that provide links to related areas, which may not be appropriate for some survey data. In this article, we propose a stepwise Bayes approach that borrows strength through an objective posterior distribution. This approach results in a generalized constrained Dirichlet posterior estimator when auxiliary information is available for small areas. The objective posterior distribution is based only on the assumption of exchangeability across related areas and does not make any explicit model assumptions. The form of our posterior distribution allows us to assign a weight to each member of the sample; these weights can then be used in a straightforward fashion to make inferences about the small area means. Theoretically, the stepwise Bayes character of the posterior allows one to prove the admissibility of the point estimators, suggesting that inferential procedures based on this approach will tend to have good frequentist properties. Numerically, we demonstrate in simulations that the proposed stepwise Bayes approach can have substantial strengths compared with traditional methods.

10.
In this paper, we propose a new Bayesian inference approach for classification based on the traditional hinge loss used for classical support vector machines, which we call the Bayesian Additive Machine (BAM). Unlike existing approaches, the new model has a semiparametric discriminant function in which some feature effects are nonlinear and others are linear; this separation of features is achieved automatically during model fitting, without user pre-specification. Following the literature on sparse regression in high-dimensional models, we can also identify the irrelevant features. By introducing spike-and-slab priors with two sets of indicator variables, these multiple goals are achieved simultaneously and automatically, without any parameter tuning such as cross-validation. An efficient partially collapsed Markov chain Monte Carlo algorithm is developed for posterior exploration, based on a data augmentation scheme for the hinge loss. Our simulations and three real data examples demonstrate that the new approach is a strong competitor to recently proposed methods for challenging high-dimensional classification problems.

11.
With reference to the problem of estimating the mixing proportions in a finite mixture distribution with known components, closed-form expressions for the posterior means and variances are obtained under a Dirichlet prior. To avoid the difficulties in computing the estimates, an approximation procedure is introduced. Numerical studies carried out for normal mixtures indicate the closeness of the approximations and their superiority over the maximum likelihood estimates, at least in the case of small samples.
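In modern practice the same posterior is usually explored by data augmentation rather than closed-form expansion. A minimal Gibbs-style sketch for a two-component normal mixture with known component densities and a Dirichlet prior on the weights (not the paper's approximation procedure; all settings illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Data from a 0.7/0.3 mixture of N(0,1) and N(4,1); components known.
x = np.concatenate([rng.normal(0, 1, 70), rng.normal(4, 1, 30)])
dens = np.column_stack([stats.norm.pdf(x, 0, 1), stats.norm.pdf(x, 4, 1)])
alpha = np.ones(2)                       # Dirichlet(1, 1) prior on weights

p = np.array([0.5, 0.5])
keep = []
for it in range(3_000):
    resp = dens * p
    resp /= resp.sum(axis=1, keepdims=True)             # responsibilities
    z = (rng.random(len(x)) > resp[:, 0]).astype(int)   # latent labels
    counts = np.bincount(z, minlength=2)
    p = rng.dirichlet(alpha + counts)                   # conjugate update
    if it >= 500:
        keep.append(p[0])

print(f"posterior mean of first mixing proportion: {np.mean(keep):.3f}")
```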

12.
For noninformative nonparametric estimation of finite population quantiles under simple random sampling, estimation based on the Polya posterior is similar to estimation based on the Bayesian approach developed by Ericson (J. Roy. Statist. Soc. Ser. B 31 (1969) 195) in that the Polya posterior distribution is the limit of Ericson's posterior distributions as the weight placed on the prior distribution diminishes. Furthermore, Polya posterior quantile estimates can be shown to be admissible under certain conditions. We demonstrate the admissibility of the sample median as an estimate of the population median under such a set of conditions. As with Ericson's Bayesian approach, Polya posterior-based interval estimates for population quantiles are asymptotically equivalent to the interval estimates obtained from standard frequentist approaches. In addition, for small to moderate sized populations, Polya posterior-based interval estimates for quantiles of a continuous characteristic of interest tend to agree with the standard frequentist interval estimates.
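The Polya posterior itself is simple to simulate: complete the unobserved part of the population with a Polya urn seeded by the sample, then read the quantile off each completed population. A minimal sketch for the median (the population size, sample, and interval level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

sample = rng.lognormal(mean=0.0, sigma=1.0, size=25)   # hypothetical SRS
N = 200                                                # population size

def polya_completion(sample, N, rng):
    # Draw a unit from the urn uniformly and replicate it until the
    # urn reaches the population size.
    urn = list(sample)
    while len(urn) < N:
        urn.append(urn[rng.integers(len(urn))])
    return np.array(urn)

medians = np.array([np.median(polya_completion(sample, N, rng))
                    for _ in range(2_000)])
lo, hi = np.quantile(medians, [0.025, 0.975])
print(f"sample median {np.median(sample):.2f}; "
      f"95% Polya interval ({lo:.2f}, {hi:.2f})")
```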

13.
In numerous situations, ranked data are used to exhibit the preferences of a group of respondents towards a set of items. While assigning ranks, judges may consider several factors contributing to the overall ranks of items. In this study, an attempt is made to model the factors influencing the judges' evaluations of items through mixture models for preference data. Both the probabilistic features of the mixture distribution and the inferential and computational issues arising in maximum likelihood estimation are addressed. Moreover, empirical evidence from an observed dataset confirming the plausibility of the proposed model for preference data is provided.

14.
The data that are used in constructing empirical Bayes estimates can properly be regarded as arising in a two-stage sampling scheme. In this setting it is possible to modify the conventional parameter estimates so that a reduction in expected squared error is effected. In the empirical Bayes approach this is done through the use of Bayes's theorem. The alternative approach proposed in this paper specifies a class of modified estimates and then seeks to identify that member of the class which yields the minimum squared error. One advantage of this approach relative to the empirical Bayes approach is that certain problems involving multiple parameters are easily overcome. Further, it permits the use of relatively efficient methods of non-parametric estimation, such as those based on quantiles or ranks; this has not been achieved by empirical Bayes methods.

15.
Implementation of a full Bayesian non-parametric analysis involving neutral to the right processes (apart from the special case of the Dirichlet process) has been difficult for two reasons: first, the posterior distributions are complex and therefore only Bayes estimates (posterior expectations) have previously been presented; second, it is difficult to obtain an interpretation for the parameters of a neutral to the right process. In this paper we extend Ferguson & Phadia (1979) by presenting a general method for specifying the prior mean and variance of a neutral to the right process, providing the interpretation of the parameters. Additionally, we provide the basis for a full Bayesian analysis, via simulation, from the posterior process using a hybrid of new algorithms that is applicable to a large class of neutral to the right processes (Ferguson & Phadia only provide posterior means). The ideas are exemplified through illustrative analyses.

16.
In this paper, the Bayesian approach is applied to the estimation problem for step-stress partially accelerated life tests with two stress levels and type-I censoring. The Gompertz distribution is considered as a lifetime model. The posterior means and posterior variances are derived under the squared-error loss function. The Bayes estimates cannot be obtained in explicit form, so approximate Bayes estimates are computed using the method of Lindley [D.V. Lindley, Approximate Bayesian methods, Trabajos Estadistica 31 (1980), pp. 223–237]. The approximate Bayes estimates obtained under the assumption of non-informative priors are compared with their maximum likelihood counterparts using Monte Carlo simulation, and the advantages of the proposed method are shown.
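For reference, Lindley's approximation in the single-parameter case expands the ratio of integrals defining a posterior expectation around the MLE. In its standard form, for a smooth function u, log-likelihood ℓ, and log-prior ρ (stated here from the general literature, not specialized to the Gompertz model of the paper):

$$
E\bigl[u(\theta)\mid x\bigr] \;\approx\; u(\hat\theta)
+ \tfrac{1}{2}\bigl[u''(\hat\theta) + 2\,u'(\hat\theta)\,\rho'(\hat\theta)\bigr]\sigma^{2}
+ \tfrac{1}{2}\,u'(\hat\theta)\,\ell'''(\hat\theta)\,\sigma^{4},
\qquad
\sigma^{2} = \bigl[-\ell''(\hat\theta)\bigr]^{-1},
$$

where $\hat\theta$ is the MLE and all derivatives are evaluated at $\hat\theta$.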

17.
In this article, we consider the multiple step-stress model based on the cumulative exposure model assumption. It is assumed that, for a given stress level, the lifetimes of the experimental units follow an exponential distribution, and that the expected lifetime decreases as the stress level increases. We focus mainly on order-restricted inference for the unknown parameters of the lifetime distributions. First we consider the order-restricted maximum likelihood estimators (MLEs) of the model parameters. It is well known that the order-restricted MLEs cannot be obtained in explicit form; we propose an algorithm that stops in a finite number of steps and provides the MLEs. We further consider the Bayes estimates and the associated credible intervals under the squared error loss function. Because the Bayes estimates also lack an explicit form, we propose to use an importance sampling technique to compute them. We provide an extensive simulation study for the case of three stress levels to assess the performance of the proposed methods. Finally, one real data set is analyzed for illustrative purposes.

18.
In this article, a Bayesian approach is proposed for the estimation of log odds ratios and intraclass correlations over a two-way contingency table that includes intraclass-correlated cells. The required likelihood functions of the log odds ratios are obtained, and the determination of prior structures is discussed. Hypothesis testing for log odds ratios and intraclass correlations using posterior simulations is outlined. Because the proposed approach involves no asymptotic theory, it is useful for the estimation and hypothesis testing of log odds ratios in the presence of certain intraclass correlation patterns. A family health status and limitations data set is analyzed with the proposed approach to assess the impact of intraclass correlations on the estimates and hypothesis tests of log odds ratios. Although the intraclass correlations in the data set are small, we find that even small intraclass correlations can significantly affect the estimates and test results, and that our approach is useful for the estimation and testing of log odds ratios in the presence of intraclass correlations.
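The posterior simulation step is straightforward once a prior on the cell probabilities is fixed. A minimal sketch for a single 2x2 table, ignoring the paper's intraclass-correlation structure and using a Jeffreys-type Dirichlet prior (the counts and prior are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

table = np.array([30, 10, 20, 25])                # cells (11, 12, 21, 22)
draws = rng.dirichlet(table + 0.5, size=20_000)   # posterior of cell probs
log_or = np.log(draws[:, 0] * draws[:, 3] / (draws[:, 1] * draws[:, 2]))

lo, hi = np.quantile(log_or, [0.025, 0.975])
print(f"posterior mean log OR: {log_or.mean():.2f}")
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```

A posterior probability such as P(log OR > 0 | data) then serves as the basis for the hypothesis tests described above.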

19.
We develop and apply an approach to the spatial interpolation of a vector-valued random response field. The Bayesian approach we adopt enables uncertainty about the underlying models to be represented when expressing the accuracy of the resulting interpolants. The methodology is particularly relevant in environmetrics, where vector-valued responses are only observed at designated sites at successive time points. The theory allows space-time modelling at the second level of the hierarchical prior model, so that uncertainty about the model parameters is fully expressed at the first level. In this way, we avoid unduly optimistic estimates of inferential accuracy. Moreover, the prior model can be updated with any available new data, while past data can be used in a systematic way to fit model parameters. The theory is based on the multivariate normal and related joint distributions. Our hierarchical prior models lead to posterior distributions that are robust with respect to the choice of the prior (hyperparameters). We illustrate our theory with an example involving monitoring stations in southern Ontario, where monthly average levels of ozone, sulphate, and nitrate are available and between-station response triplets are interpolated. In this example we use a recently developed method for interpolating spatial correlation fields.

20.
Tmax is the time at which the maximum serum or plasma drug concentration is achieved following a dose. While Tmax is continuous in theory, it is usually discrete in practice because it is equated to a nominal sampling time in the noncompartmental pharmacokinetic approach. For a 2-treatment crossover design, a Hodges-Lehmann method exists for a confidence interval on treatment differences. For appropriately designed crossover studies with more than two treatments, a new median-scaling method is proposed to obtain estimates and confidence intervals for treatment effects. A simulation study compared this new method with two previously described rank-based nonparametric methods: a stratified-ranks method and a signed-ranks method due to Ohrvik. A Normal-theory method, a nonparametric confidence interval approach without adjustment for periods, and a nonparametric bootstrap method were also compared. Results show that less dense sampling and period effects increase confidence interval length. The Normal-theory method can be liberal (i.e., less than nominal coverage) if there is a true treatment effect. The nonparametric methods tend to be conservative with regard to coverage probability, and among them the median-scaling method is the least conservative and has the shortest confidence intervals. The stratified-ranks method was the most conservative and had very long confidence intervals. The bootstrap method was generally less conservative than the median-scaling method, but it tended to have longer confidence intervals. Overall, the median-scaling method had the best combination of coverage and confidence interval length. All methods performed adequately with respect to bias.
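For the 2-treatment case mentioned above, the Hodges-Lehmann point estimate is the median of the Walsh averages of the treatment differences. A minimal sketch with a crude quantile-based interval (the exact interval instead picks order statistics of the Walsh averages from signed-rank critical values; the data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)

# Discrete Tmax treatment differences from a hypothetical 2x2 crossover.
t_a = rng.choice([0.5, 1.0, 1.5, 2.0], size=16)
t_b = rng.choice([1.0, 1.5, 2.0, 3.0], size=16)
d = t_a - t_b

# Walsh averages: (d_i + d_j)/2 for all i <= j.
n = len(d)
walsh = np.array([(d[i] + d[j]) / 2
                  for i in range(n) for j in range(i, n)])

hl = np.median(walsh)                        # Hodges-Lehmann estimate
lo, hi = np.quantile(walsh, [0.025, 0.975])  # crude interval (illustrative)
print(f"HL estimate {hl:.2f}; approx 95% CI ({lo:.2f}, {hi:.2f})")
```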
