Found 20 similar documents (search time: 0 ms)
1.
The Bayesian shrinkage estimation of a measure of dispersion with known mean is studied for the inverse Gaussian distribution. An optimum choice of the shrinkage factor and the properties of the proposed Bayesian shrinkage estimators are studied. These estimators are shown to have smaller risk than the usual estimator of the reciprocal measure of dispersion.
2.
Let X have a p-dimensional normal distribution with mean vector θ and identity covariance matrix I. In a compound decision problem consisting of squared-error estimation of θ, Strawderman (1971) placed a Beta(α, 1) prior distribution on a normal class of priors to produce a family of Bayes minimax estimators. We propose an incomplete Gamma(α, β) prior distribution on the same normal class of priors to produce a larger family of Bayes minimax estimators. We present the results of a Monte Carlo study demonstrating the reduced risk of our estimators relative to the Strawderman estimators when θ is away from the zero vector.
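The risk reduction from shrinkage can be illustrated with a small Monte Carlo sketch. This is not Strawderman's estimator itself; it uses the closely related James–Stein rule (shrinking toward the origin) as a stand-in, with hypothetical settings p = 10 and unit-variance normal samples.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_sim = 10, 20000  # hypothetical dimension and replication count

def risks(theta):
    """Monte Carlo risk of the MLE X and of the James-Stein rule at a given theta."""
    X = rng.normal(theta, 1.0, size=(n_sim, p))
    norm2 = np.sum(X**2, axis=1, keepdims=True)
    js = (1.0 - (p - 2) / norm2) * X          # shrink each sample toward the origin
    r_mle = np.mean(np.sum((X - theta) ** 2, axis=1))
    r_js = np.mean(np.sum((js - theta) ** 2, axis=1))
    return r_mle, r_js

r_mle, r_js = risks(np.zeros(p))
print(r_mle, r_js)  # MLE risk is close to p; the shrinkage risk is far smaller at the origin
```

As θ moves away from the origin the two risks converge, mirroring the comparison reported in the abstract.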
3.
4.
5.
For the restricted parameter space (0, 1), we propose Zhang's loss function, which satisfies all seven properties of a good loss function on (0, 1). We then calculate the Bayes rule (estimator), the posterior expectation, the integrated risk, and the Bayes risk of the parameter in (0, 1) under Zhang's loss function. We also calculate the usual Bayes estimator under the squared error loss function, and this estimator is proved to underestimate the Bayes estimator under Zhang's loss function. Finally, numerical simulations and a real data example of monthly magazine exposure data exemplify our theoretical studies of the two size relationships between the Bayes estimators and the posterior expected Zhang's losses (PEZLs).
6.
Fernando A. Otero Helcio R. Barreto Orlande Gloria L. Frontini 《Journal of applied statistics》2015,42(5):994-1016
In this article, static light scattering (SLS) measurements are processed to estimate the particle size distribution of particle systems, incorporating prior information obtained from an alternative experimental technique: scanning electron microscopy (SEM). For this purpose we propose two Bayesian schemes (one parametric and one non-parametric) to solve the stated light scattering problem, and we use the results to summarize some features of the Bayesian approach in the context of inverse problems. The features presented in this article include the improvement of the results when useful prior information from an alternative experiment is considered instead of a non-informative prior, as occurs in deterministic maximum likelihood estimation. This improvement is shown in terms of the accuracy and precision of the corresponding results, and also in terms of minimizing the effect of multiple minima by including significant information in the optimization. Both Bayesian schemes are implemented using Markov chain Monte Carlo methods. They have been developed on the basis of the Metropolis–Hastings (MH) algorithm using Matlab® and are tested on simulated and experimental examples of concentrated and semi-concentrated particles. In the simulated examples, SLS measurements were generated using a rigorous model, while the inversion stage was solved using an approximate model in both schemes, and also using the rigorous model in the parametric scheme. Priors from SEM micrographs were both simulated and obtained experimentally; the simulated ones were generated using a Monte Carlo routine. In addition to these features of the Bayesian approach, some other topics are discussed, such as regularization and some implementation issues of the proposed schemes, among which we highlight the selection of the parameters used in the MH algorithm.
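The Metropolis–Hastings machinery that the authors implement in Matlab® can be sketched generically in a few lines. The toy target below (a normal mean with a vague normal prior) and all of its settings are illustrative assumptions, not the light-scattering posterior.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=50)   # hypothetical observations

def log_post(mu):
    # N(0, 10^2) prior on mu, N(mu, 1) likelihood
    return -mu**2 / 200.0 - 0.5 * np.sum((data - mu) ** 2)

chain, mu = [], 0.0
for _ in range(20000):
    prop = mu + rng.normal(0.0, 0.5)                        # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                                           # accept; otherwise keep mu
    chain.append(mu)

post_mean = np.mean(chain[5000:])                           # discard burn-in
print(post_mean)
```

With a nearly flat prior, the posterior mean should land close to the sample mean of the data; the proposal scale (0.5 here) is the kind of tuning parameter whose selection the article discusses.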
7.
Ying-Ying Zhang 《Communications in Statistics - Theory and Methods》2017,46(14):7125-7133
For the variance parameter of the hierarchical normal and inverse gamma model, we analytically calculate the Bayes rule (estimator) with respect to an IG(α, β) prior distribution under Stein's loss function. This estimator minimizes the posterior expected Stein's loss (PESL). We also analytically calculate the Bayes rule and the PESL under the squared error loss. Finally, numerical simulations exemplify that the PESLs depend only on α and the number of observations. The Bayes rules and PESLs under Stein's loss are uniformly smaller than those under the squared error loss.
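The ordering of the two Bayes rules can be made concrete under the standard conjugate setup (known mean, IG(α, β) prior on the variance), which is an assumption of this sketch rather than the paper's full hierarchical model: the posterior of σ² is IG(a, b), the squared-error rule is E[σ² | x] = b/(a − 1), and the Stein's-loss rule is 1/E[σ⁻² | x] = b/a, which is always the smaller of the two.

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, alpha, beta = 30, 0.0, 3.0, 2.0   # hypothetical sample size, known mean, prior
x = rng.normal(mu, 1.5, size=n)

# Conjugate update: sigma^2 | x ~ IG(a, b)
a = alpha + n / 2
b = beta + np.sum((x - mu) ** 2) / 2

delta_stein = b / a          # Bayes rule under Stein's loss: 1 / E[1/sigma^2 | x]
delta_se = b / (a - 1)       # Bayes rule under squared error loss: E[sigma^2 | x]
print(delta_stein, delta_se)  # the Stein's-loss rule is the smaller of the two
```

Since a > 1 whenever α > 1, b/a < b/(a − 1) holds for every data set, matching the abstract's claim that the Stein's-loss rules are uniformly smaller.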
8.
Drug delivery devices are required to have excellent technical specifications to deliver drugs accurately; in addition, the devices should provide a satisfactory experience to patients, because this can have a direct effect on drug compliance. To compare patients' experience with two devices, cross-over studies with patient-reported outcomes (PRO) as response variables are often used. Because of the strength of cross-over designs, each subject can directly compare the two devices through the PRO variables, and variables indicating preference (preferring A, preferring B, or no preference) can be easily derived. Traditionally, frequentist methods are used to analyze such preference data, but they have some limitations. Recently, Bayesian methods have come to be considered acceptable by the US Food and Drug Administration for designing and analyzing device studies. In this paper, we propose a Bayesian statistical method to analyze data from preference trials. We demonstrate that the new Bayesian estimator enjoys some optimal properties relative to the frequentist estimator.
9.
In the problem of selecting the best of k populations, Olkin, Sobel, and Tong (1976) introduced the idea of estimating the probability of correct selection. In an attempt to improve on their estimator, we consider an empirical Bayes approach. We compare the two estimators via analytic results and a simulation study.
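The quantity being estimated can be pinned down with a direct Monte Carlo sketch: for hypothetical normal populations, "correct selection" means the population with the largest true mean also produces the largest sample mean. The means, sample size, and selection rule below are illustrative assumptions, not the Olkin–Sobel–Tong estimator or the empirical Bayes improvement.

```python
import numpy as np

rng = np.random.default_rng(3)
k, n, n_sim = 4, 20, 10000
means = np.array([0.0, 0.2, 0.4, 1.0])   # hypothetical population means

# Sample means have standard deviation 1/sqrt(n) under unit-variance populations
xbar = rng.normal(means, 1.0 / np.sqrt(n), size=(n_sim, k))
p_cs = np.mean(np.argmax(xbar, axis=1) == np.argmax(means))
print(p_cs)   # Monte Carlo probability of correct selection
```

The estimators compared in the paper target exactly this probability from a single observed sample, where the true means are unknown.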
10.
Longhai Li 《Communications in Statistics - Simulation and Computation》2013,42(3):655-667
Problems involving bounded parameter spaces, for example Γ-minimax and minimax estimation of bounded parameters, have received much attention in recent years, and the existing literature is rich. In this paper we consider Γ-minimax estimation of a multivariate bounded normal mean by affine rules, and discuss the loss of efficiency due to the use of such rules instead of optimal, unrestricted rules. We also investigate the behavior of 'probability restricted' affine rules, i.e., rules that have a guaranteed large probability of lying in the bounded parameter space of the problem.
11.
Anurag Pathak Manoj Kumar Sanjay Kumar Singh Umesh Singh 《Journal of applied statistics》2022,49(4):926
This article focuses on the parameter estimation of experimental items/units from a Weibull–Poisson model under progressive type-II censoring with binomial removals (PT-II CBRs). The expectation–maximization algorithm has been used to obtain maximum likelihood estimators (MLEs). The MLEs and Bayes estimators have been obtained under symmetric and asymmetric loss functions. The performance of the competing estimators has been studied through their simulated risks. One-sample Bayes prediction and the expected experiment time have also been studied. Furthermore, the suitability of the considered model and the proposed methodology has been illustrated through a real bladder cancer data set.
12.
George E. Kokolakis 《Communications in Statistics - Theory and Methods》2013,42(4):927-935
A martingale approach to the performance of Bayesian classifiers with increasing feature dimensionality is applied here. Martingale limit theorems are also used to demonstrate that the expected probability of correct classification tends monotonically to unity for two general classification problems.
13.
Zuofeng Shang Murray K. Clayton 《Journal of statistical planning and inference》2011,141(11):3463-3474
Linear models with a growing number of parameters have been widely used in modern statistics. An important problem for this kind of model is variable selection. Bayesian approaches, which provide a stochastic search for informative variables, have gained popularity. In this paper, we study the asymptotic properties of Bayesian model selection when the model dimension p grows with the sample size n. We consider p≤n and provide sufficient conditions under which: (1) with large probability, the posterior probability of the true model (from which samples are drawn) uniformly dominates the posterior probability of any incorrect model; and (2) the posterior probability of the true model converges to one in probability. Both (1) and (2) guarantee that the true model will be selected under a Bayesian framework. We also demonstrate several situations where (1) holds but (2) fails, which illustrates the difference between these two properties. Finally, we generalize our results to include g-priors, and provide simulation examples to illustrate the main results.
14.
Alessio Farcomeni Alessandra Nardi Elena Fabrizi 《Journal of applied statistics》2011,38(11):2627-2646
Precarious employment is a serious social problem, especially in countries, such as Italy, where social security benefits are limited. We investigate this phenomenon by analysing the initial part of the careers of employees who start with unstable contracts, for a panel of Italian workers. Our aim is to estimate the probability of getting a stable job and to detect the factors influencing both this probability and the duration of precariousness. To answer these questions, we use an ad hoc mixture cure rate model in a Bayesian framework.
15.
Analysing the interevent time distribution to identify seismicity phases: a Bayesian nonparametric approach to the multiple-changepoint problem
Antonio Pievatolo & Renata Rotondi 《Journal of the Royal Statistical Society. Series C, Applied statistics》2000,49(4):543-562
In the study of earthquakes, several aspects of the underlying physical process, such as the time non-stationarity of the process, are not yet well understood, because we lack clear indications about its evolution in time. Taking as our point of departure the theory that the seismic process evolves in phases with different activity patterns, we have attempted to identify these phases through the variations in the interevent time probability distribution within the framework of the multiple-changepoint problem. In a nonparametric Bayesian setting, the distribution under examination has been considered a random realization from a mixture of Dirichlet processes, the parameter of which is proportional to a generalized gamma distribution. In this way we could avoid making precise assumptions about the functional form of the distribution. The number and location in time of the phases are unknown and are estimated at the same time as the interevent time distributions. We have analysed the sequence of main shocks that occurred in Irpinia, a particularly active area in southern Italy: the method consistently identifies changepoints at times when strong stress releases were recorded. The estimation problem can be solved by stochastic simulation methods based on Markov chains, the implementation of which is improved, in this case, by the good analytical properties of the Dirichlet process.
16.
17.
《Journal of Statistical Computation and Simulation》2012,82(12):2456-2478
Finite mixture of regression (FMR) models are aimed at characterizing subpopulation heterogeneity stemming from different sets of covariates that impact different groups in a population. We address the contemporary problem of simultaneously conducting covariate selection and determining the number of mixture components from a Bayesian perspective that can incorporate prior information. We propose a Gibbs sampling algorithm with reversible jump Markov chain Monte Carlo implementation to accomplish concurrent covariate selection and mixture component determination in FMR models. Our Bayesian approach contains innovative features compared to previously developed reversible jump algorithms. In addition, we introduce component-adaptive weighted g priors for regression coefficients, and illustrate their improved performance in covariate selection. Numerical studies show that the Gibbs sampler with reversible jump implementation performs well, and that the proposed weighted priors can be superior to non-adaptive unweighted priors.
18.
Agustín Hernández Bastida José María Pérez Sánchez 《Journal of applied statistics》2009,36(8):853-869
The distribution of the aggregate claims in one year plays an important role in actuarial statistics for computing, for example, insurance premiums when both the number and the size of the claims must be incorporated into the model. When the number of claims follows a Poisson distribution, the aggregate distribution is called the compound Poisson distribution. In this article we assume that the claim size follows an exponential distribution, and we then make an extensive study of this model by assuming a bivariate prior distribution, with gamma marginals, for the parameters of the Poisson and exponential distributions. This study leads to expressions for net premiums and for marginal and posterior distributions in terms of some well-known special functions used in statistics. A Bayesian robustness study of this model is then carried out. Bayesian robustness for bivariate models was treated in depth in the 1990s, producing numerous results, but few applications dealing with this problem can be found in the literature.
19.
This paper is concerned with the problem of constructing a good predictive distribution relative to the Kullback–Leibler information in a linear regression model. The problem is equivalent to the simultaneous estimation of regression coefficients and error variance in terms of a complicated risk, which yields a new challenging issue in a decision-theoretic framework. An estimator of the variance is incorporated here into a loss for estimating the regression coefficients. Several estimators of the variance and of the regression coefficients are proposed and shown to improve on usual benchmark estimators both analytically and numerically. Finally, the prediction problem of a distribution is noted to be related to an information criterion for model selection like the Akaike information criterion (AIC). Thus, several AIC variants are obtained based on proposed and improved estimators and are compared numerically with AIC as model selection procedures.
20.
S. Islam S. Anand M. McQueen J. Hamid L. Thabane S. Yusuf 《Journal of applied statistics》2018,45(2):210-224
We have developed a new approach to determine the threshold of a biomarker that maximizes the classification accuracy of a disease. We consider a Bayesian estimation procedure for this purpose and illustrate the method using a real data set. In particular, we determine the threshold for Apolipoprotein B (ApoB), Apolipoprotein A1 (ApoA1) and the ratio for the classification of myocardial infarction (MI). We first conduct a literature review and construct prior distributions. We then develop classification rules based on the posterior distribution of the location and scale parameters for these biomarkers. We identify the threshold for ApoB and ApoA1, and the ratio as 0.908 (gram/liter), 1.138 (gram/liter) and 0.808, respectively. We also observe that the threshold for disease classification varies substantially across different age and ethnic groups. Next, we identify the most informative predictor for MI among the three biomarkers. Based on this analysis, ApoA1 appeared to be a stronger predictor than ApoB for MI classification. Given that we have used this data set for illustration only, the results will require further investigation for use in clinical applications. However, the approach developed in this article can be used to determine the threshold of any continuous biomarker for a binary disease classification.
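The final step, choosing the threshold that maximizes classification accuracy, can be sketched with simulated data. The two normal biomarker distributions below are hypothetical, not the ApoB/ApoA1 measurements, and a simple grid search stands in for the posterior-based rule.

```python
import numpy as np

rng = np.random.default_rng(4)
controls = rng.normal(0.8, 0.15, size=500)   # hypothetical biomarker values, non-diseased
cases = rng.normal(1.2, 0.20, size=500)      # hypothetical biomarker values, diseased

def accuracy(t):
    # classify as diseased when the biomarker exceeds the threshold t
    return 0.5 * np.mean(cases > t) + 0.5 * np.mean(controls <= t)

grid = np.linspace(0.5, 1.6, 1101)
acc = np.array([accuracy(t) for t in grid])
t_star = grid[np.argmax(acc)]
print(t_star, acc.max())
```

The best threshold sits near where the two group densities cross; in the Bayesian version described above, the group distributions are replaced by posterior predictive distributions built from the location and scale parameters.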