Similar Documents
1.
Bayesian methods are often used to reduce the sample sizes and/or increase the power of clinical trials. The right choice of the prior distribution is a critical step in Bayesian modeling. If the prior is not completely specified, historical data may be used to estimate it. In the empirical Bayesian analysis, the resulting prior can be used to produce the posterior distribution. In this paper, we describe a Bayesian Poisson model with a conjugate Gamma prior. The parameters of the Gamma distribution are estimated in the empirical Bayesian framework under two estimation schemes. The straightforward numerical search for the maximum likelihood (ML) solution using the marginal negative binomial distribution is occasionally infeasible. We propose a simplification of the maximization procedure. The Markov chain Monte Carlo method is used to create a set of Poisson parameters from the historical count data. These Poisson parameters are used to uniquely define the Gamma likelihood function. Easily computable approximation formulae may be used to find the ML estimates of the parameters of the Gamma distribution. For the sample size calculations, the ML solution is replaced by its upper confidence limit to reflect the incomplete exchangeability of historical trials with the current study. The exchangeability is measured by the confidence interval for the historical rate of the events. With this prior, the formula for the sample size calculation is completely defined. Published in 2009 by John Wiley & Sons, Ltd.
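To make the empirical Bayes step concrete, here is a minimal sketch in Python. It assumes hypothetical historical counts y over exposures t, replaces the paper's MCMC step with a single draw of one plausible Poisson rate per trial from a vague Gamma posterior, and then fits a Gamma prior to those draws using the standard closed-form approximation to the Gamma ML estimates (not necessarily the specific approximation formulae derived in the paper).

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_gamma_ml_approx(rates):
    """Closed-form approximation to the ML estimates of a Gamma(shape, rate)
    distribution fitted to positive values (here, Poisson rates), using the
    standard formula shape ~ (3 - s + sqrt((s - 3)^2 + 24 s)) / (12 s),
    where s = log(mean) - mean(log)."""
    rates = np.asarray(rates, dtype=float)
    m = rates.mean()
    s = np.log(m) - np.log(rates).mean()
    shape = (3.0 - s + np.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)
    rate = shape / m
    return shape, rate

# Hypothetical historical count data: y_i events over exposure t_i in each trial.
y = np.array([12, 9, 15, 7, 11])
t = np.array([100.0, 80.0, 120.0, 60.0, 90.0])

# Instead of the paper's MCMC step, draw one plausible Poisson rate per trial
# from a vague Gamma posterior (shape = y + 0.5, rate = t); these draws play the
# role of the "set of Poisson parameters" defining the Gamma likelihood.
lam_draws = rng.gamma(shape=y + 0.5, scale=1.0 / t)

shape_hat, rate_hat = fit_gamma_ml_approx(lam_draws)
print(f"Empirical-Bayes Gamma prior: shape={shape_hat:.3f}, rate={rate_hat:.3f}")
```

The fitted shape and rate would then enter the sample size formula, with the ML solution replaced by an upper confidence limit as described above.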

2.
李小胜  王申令 《统计研究》2016,33(11):85-92
This paper first constructs the sample likelihood function of a multivariate linear regression model under linear constraints and justifies it by the Lagrange method. Next, the influence of the linear constraints on the model parameters is discussed from the viewpoint of the likelihood function, and Bayesian and empirical Bayesian refinements of the parameter estimates obtained from the traditional theory are proposed. For the Bayesian refinement, the matrix normal-Wishart distribution is taken as the joint conjugate prior for the model parameters and the precision matrix; combining it with the constructed likelihood function yields the posterior distribution, from which the Bayesian estimates are computed. For the empirical Bayesian refinement, the sample is divided into groups, the influence of the subsample-based estimates on the full-sample estimates is examined from the viewpoint of variance, and the empirical Bayesian estimates are computed. Finally, simulations are run on random matrices generated in Matlab. The results show that both refined estimates are more accurate than those from the traditional theory, with a smaller error ratio of the fit and higher credibility, and the method computes faster in large-data settings.

3.
In this paper, we consider the estimation of reliability in the multicomponent stress-strength (MSS) model when both the stress and the strengths are drawn from the Topp-Leone (TL) distribution. The maximum likelihood (ML) and Bayesian methods are used in the estimation procedure. Bayesian estimates are obtained using Lindley's approximation and Gibbs sampling, since they cannot be obtained in explicit form under the TL distribution. Asymptotic confidence intervals are constructed based on the ML estimators, and Bayesian credible intervals are constructed using Gibbs sampling. The reliability estimates are compared via an extensive Monte Carlo simulation study. Finally, a real data set is analysed for illustrative purposes.

4.
This paper considers multiple change-point estimation for the exponential distribution with truncated and censored data by Gibbs sampling. After all the missing data of interest are filled in by sampling methods such as rejection sampling, the complete-data likelihood function is obtained. The full conditional distributions of all parameters are derived. The means of the Gibbs samples are taken as the Bayesian estimates of the parameters, and the implementation steps of the Gibbs sampler are described in detail. Finally, a random simulation study is carried out; the results show that the Bayesian estimates are fairly accurate.
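A minimal sketch of such a Gibbs sampler for the simplest special case: complete (neither truncated nor censored) exponential data with a single change point, conjugate Gamma priors on the two rates and a uniform prior on the change-point location. The data-augmentation step for truncated and censored observations described in the paper is omitted, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: the rate changes from 2.0 to 0.5 after observation 60.
n, true_k = 100, 60
x = np.concatenate([rng.exponential(1 / 2.0, true_k),
                    rng.exponential(1 / 0.5, n - true_k)])

a, b = 1.0, 1.0                          # Gamma(a, b) priors on both rates
cumsum = np.concatenate([[0.0], np.cumsum(x)])
total = cumsum[-1]

n_iter, burn = 5000, 1000
k = n // 2
draws = np.zeros((n_iter, 3))

for it in range(n_iter):
    # Full conditionals of the two rates given the change point k
    lam1 = rng.gamma(a + k, 1.0 / (b + cumsum[k]))
    lam2 = rng.gamma(a + n - k, 1.0 / (b + total - cumsum[k]))
    # Full conditional of k given the rates (uniform prior on 1..n-1)
    ks = np.arange(1, n)
    logp = (ks * np.log(lam1) + (n - ks) * np.log(lam2)
            - lam1 * cumsum[ks] - lam2 * (total - cumsum[ks]))
    p = np.exp(logp - logp.max())
    k = rng.choice(ks, p=p / p.sum())
    draws[it] = [k, lam1, lam2]

post_mean = draws[burn:].mean(axis=0)
print("Posterior means (k, lambda1, lambda2):", np.round(post_mean, 3))
```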

5.
In this paper we present a simulation study comparing different methods for estimating the prediction error rate in a discrimination problem. We consider the cross-validation, bootstrap and Bayesian bootstrap methods for this problem, and also elaborate on both the simple and the Bayesian bootstrap methods through smoothing techniques. We observe that the smoothing procedure leads to improvements in the estimation of the true error rate of the discrimination rule, especially in the case of the smooth Bayesian bootstrap estimator, whose reduction in MSE results from the high positive correlation between the true error rate and its estimates based on this method.
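For orientation only, the sketch below estimates the error rate of a simple nearest-class-mean discrimination rule by leave-one-out cross-validation and by the ordinary (unsmoothed) bootstrap; the smoothed and Bayesian bootstrap estimators studied in the paper are not implemented, and the data-generating setup is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest_mean_rule(train_x, train_y, test_x):
    """Classify each row of test_x to the class with the closer training mean."""
    m0 = train_x[train_y == 0].mean(axis=0)
    m1 = train_x[train_y == 1].mean(axis=0)
    d0 = ((test_x - m0) ** 2).sum(axis=1)
    d1 = ((test_x - m1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

# Two Gaussian classes in 2-D
n_per = 30
x = np.vstack([rng.normal(0.0, 1.0, (n_per, 2)), rng.normal(1.0, 1.0, (n_per, 2))])
y = np.repeat([0, 1], n_per)
n = len(y)

# Leave-one-out cross-validation estimate of the error rate
loo_err = np.mean([
    nearest_mean_rule(np.delete(x, i, 0), np.delete(y, i), x[i:i + 1])[0] != y[i]
    for i in range(n)
])

# Ordinary bootstrap: refit the rule on resampled data, evaluate on the original sample
B = 200
boot_err = np.mean([
    np.mean(nearest_mean_rule(x[idx], y[idx], x) != y)
    for idx in (rng.integers(0, n, n) for _ in range(B))
])

print(f"LOO-CV error: {loo_err:.3f}   bootstrap error: {boot_err:.3f}")
```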

6.
Hidden Markov random field models provide an appealing representation of images and other spatial problems. The drawback is that inference is not straightforward for these models as the normalisation constant for the likelihood is generally intractable except for very small observation sets. Variational methods are an emerging tool for Bayesian inference and they have already been successfully applied in other contexts. Focusing on the particular case of a hidden Potts model with Gaussian noise, we show how variational Bayesian methods can be applied to hidden Markov random field inference. To tackle the obstacle of the intractable normalising constant for the likelihood, we explore alternative estimation approaches for incorporation into the variational Bayes algorithm. We consider a pseudo-likelihood approach as well as the more recent reduced dependence approximation of the normalisation constant. To illustrate the effectiveness of these approaches we present empirical results from the analysis of simulated datasets. We also analyse a real dataset and compare results with those of previous analyses as well as those obtained from the recently developed auxiliary variable MCMC method and the recursive MCMC method. Our results show that the variational Bayesian analyses can be carried out much faster than the MCMC analyses and produce good estimates of model parameters. We also found that the reduced dependence approximation of the normalisation constant outperformed the pseudo-likelihood approximation in our analysis of real and synthetic datasets.

7.
Generalized linear models (GLMs) with errors in covariates are useful in epidemiological research due to the ubiquity of non-normal response variables and inaccurate measurements. The link function in a GLM is chosen by the user depending on the type of response variable, frequently the canonical link. When covariates are measured with error, incorrect inference can be made, compounded by an incorrect choice of link function. In this article we propose three flexible approaches for handling errors in covariates while simultaneously estimating an unknown link. The first uses a fully Bayesian (FB) hierarchical framework, treating the unobserved covariate as a latent variable to be integrated over. The second and third are approximate Bayesian approaches that use a Laplace approximation to marginalize the variables measured with error out of the likelihood. Our simulation results support the FB approach as often being a better choice than the approximate Bayesian approaches for adjusting for measurement error, particularly when the measurement error distribution is misspecified. These approaches are demonstrated on an application with a binary response.

8.
In this paper we apply the empirical likelihood method to error density estimators in first-order autoregressive models under some mild conditions. The log-likelihood ratio statistic is shown to be asymptotically chi-squared distributed at a fixed point. In simulations, we show that the empirical likelihood produces confidence intervals whose coverage accuracy is better than that of the normal approximation.

9.
This article applies and investigates a number of logistic ridge regression (RR) parameters that are estimable by the maximum likelihood (ML) method. By conducting an extensive Monte Carlo study, the performances of ML and logistic RR are investigated in the presence of multicollinearity and under different conditions. The simulation study evaluates a number of methods for estimating the RR parameter k that have recently been developed for use in linear regression analysis. The results from the simulation study show that there is at least one RR estimator with a lower mean squared error (MSE) than the ML method in all of the evaluated situations.
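A minimal sketch of a logistic ridge fit on a collinear design, assuming one Hoerl-Kennard-type choice of the ridge parameter, k = p / (beta_ML' beta_ML); this is just one illustrative estimator of k and not necessarily among those evaluated in the article, and the design is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def logistic_ml(X, y, n_iter=25):
    """Maximum likelihood fit of a logistic regression by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        beta = beta + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return beta, p * (1.0 - p)

# Collinear design: x2 is nearly a copy of x1
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)
X = np.column_stack([np.ones(n), x1, x2])
eta = 0.5 + 1.0 * x1 + 1.0 * x2
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

beta_ml, W = logistic_ml(X, y)

# Hoerl-Kennard-type ridge parameter and the ridge-adjusted ML estimator
k = X.shape[1] / (beta_ml @ beta_ml)
XtWX = X.T @ (W[:, None] * X)
beta_rr = np.linalg.solve(XtWX + k * np.eye(X.shape[1]), XtWX @ beta_ml)

print("ML :", np.round(beta_ml, 3))
print("RR :", np.round(beta_rr, 3), f"(k = {k:.3f})")
```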

10.
In this paper, the problem of estimating the unknown parameters of a two-parameter Kumaraswamy-Exponential (Kw-E) distribution is considered based on a progressively Type-II censored sample. The maximum likelihood (ML) estimators of the parameters are obtained. Bayes estimates are also obtained under different loss functions such as squared error, LINEX and general entropy, and Lindley's approximation method is used to evaluate them. Monte Carlo simulation is used for numerical comparison of the various estimates developed in this paper.
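The loss functions mentioned above admit simple Bayes-estimate formulas once a posterior is available. The sketch below evaluates them by Monte Carlo from posterior draws rather than by Lindley's approximation, and uses an exponential-likelihood/Gamma-posterior toy example as a stand-in for the Kw-E posterior; both choices are simplifications made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Posterior draws for a rate parameter theta: an exponential likelihood with a
# Gamma(1, 1) prior gives a Gamma(n + 1, 1 + sum(x)) posterior.
x = rng.exponential(scale=1 / 1.5, size=40)          # data with true rate 1.5
theta = rng.gamma(shape=len(x) + 1, scale=1 / (1 + x.sum()), size=100_000)

def bayes_sel(draws):
    """Bayes estimate under squared-error loss: the posterior mean."""
    return draws.mean()

def bayes_linex(draws, c):
    """Bayes estimate under LINEX loss: -(1/c) * log E[exp(-c * theta)]."""
    return -np.log(np.mean(np.exp(-c * draws))) / c

def bayes_gen_entropy(draws, q):
    """Bayes estimate under general entropy loss: (E[theta^-q])^(-1/q)."""
    return np.mean(draws ** (-q)) ** (-1.0 / q)

print("SEL   :", round(bayes_sel(theta), 4))
print("LINEX :", round(bayes_linex(theta, c=1.0), 4))   # c > 0 penalises overestimation
print("GE    :", round(bayes_gen_entropy(theta, q=1.0), 4))
```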

11.
We propose Bayesian methods with five types of priors to estimate cell probabilities in an incomplete multi-way contingency table under nonignorable nonresponse. In this situation, the maximum likelihood (ML) estimates often fall on the boundary of the parameter space, which makes them unstable. To deal with such a multi-way table, we present an EM algorithm that generalizes the algorithm previously used for incomplete one-way tables. Three of the five priors were introduced previously, while the other two are newly proposed to reflect different response patterns between respondents and nonrespondents. Data analysis and simulation studies show that, contrary to previous studies, Bayesian estimates based on the three existing priors can be worse than the ML estimates regardless of whether a boundary solution occurs. The Bayesian estimates from the two new priors are most preferable when a boundary solution occurs. We provide an illustrative example using data from a study of the relationship between a mother's smoking and her newborn's weight.

12.
In this paper, we consider the estimation of the probability density function and the cumulative distribution function of the inverse Rayleigh distribution. The following estimators are considered: the uniformly minimum variance unbiased estimator, the maximum likelihood (ML) estimator, the percentile estimator, the least squares estimator and the weighted least squares estimator. Analytical expressions are derived for the mean integrated squared error. As the results of simulation studies and real data applications indicate, the ML estimator performs better than the others when the sample size is not very small.
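A minimal simulation sketch, assuming the one-parameter inverse Rayleigh form F(x) = exp(-theta/x^2): it computes the ML estimate theta_hat = n / sum(1/x_i^2) and approximates the MISE of the plug-in density by Monte Carlo and a simple Riemann sum, rather than using the paper's analytical MISE expressions.

```python
import numpy as np

rng = np.random.default_rng(11)

# Inverse Rayleigh: cdf F(x) = exp(-theta/x^2), pdf f(x) = (2*theta/x^3) * exp(-theta/x^2)
theta_true, n, n_rep = 2.0, 50, 2000

def ir_pdf(x, theta):
    return 2.0 * theta / x**3 * np.exp(-theta / x**2)

def rvs(theta, size):
    # Inverse-cdf sampling: x = sqrt(-theta / log(u)) for u ~ Uniform(0, 1)
    u = rng.uniform(size=size)
    return np.sqrt(-theta / np.log(u))

# Grid for numerically integrating the squared error of the plug-in density
grid = np.linspace(0.2, 10.0, 800)
dx = grid[1] - grid[0]
true_pdf = ir_pdf(grid, theta_true)

ise = np.empty(n_rep)
for r in range(n_rep):
    x = rvs(theta_true, n)
    theta_ml = n / np.sum(1.0 / x**2)          # ML estimator of theta
    ise[r] = np.sum((ir_pdf(grid, theta_ml) - true_pdf) ** 2) * dx

print(f"theta_true = {theta_true}, Monte Carlo MISE of the ML plug-in pdf: {ise.mean():.5f}")
```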

13.
Econometric techniques for estimating output supply systems, factor demand systems and consumer demand systems often require estimating a nonlinear system of equations with an additive error structure when written in reduced form. To calculate the covariance matrix of the ML estimate of this nonlinear system, one can either invert the Hessian of the concentrated log-likelihood function, or invert the matrix obtained by pre-multiplying and post-multiplying the inverted MLE of the disturbance covariance matrix by the Jacobian of the reduced-form model. Malinvaud has shown that the latter method gives the covariance matrix of the actual limiting distribution, while Barnett has shown that the former is only an approximation.

In this paper, we use a Monte Carlo simulation study to determine how these two covariance matrices differ with respect to the nonlinearity of the model, the number of observations in the dataset, and the residual process. We find that the covariance matrix calculated from the Hessian of the concentrated likelihood function produces Wald statistics that are distributed above those calculated with the other covariance matrix. This difference becomes insignificant as the sample size increases to one hundred or more observations, suggesting that the asymptotics of the two covariance matrices are reached quickly.
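A sketch of the two covariance computations for a single-equation nonlinear regression (a deliberate simplification of the multi-equation systems considered here): (a) inverting the negative finite-difference Hessian of the concentrated log-likelihood, and (b) the Jacobian-based formula sigma_hat^2 (J'J)^{-1}. The model and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Nonlinear regression y = b0 * (1 - exp(-b1 * x)) + e,  e ~ N(0, sigma^2)
n, b_true, sigma = 40, np.array([2.0, 0.8]), 0.3
x = np.linspace(0.1, 5.0, n)
y = b_true[0] * (1 - np.exp(-b_true[1] * x)) + rng.normal(0, sigma, n)

def g(b):
    return b[0] * (1 - np.exp(-b[1] * x))

def jac(b):
    return np.column_stack([1 - np.exp(-b[1] * x), b[0] * x * np.exp(-b[1] * x)])

# Gauss-Newton iterations for the ML (= least squares) estimate
b = np.array([1.0, 1.0])
for _ in range(100):
    J, r = jac(b), y - g(b)
    b = b + np.linalg.solve(J.T @ J, J.T @ r)

sig2_ml = np.sum((y - g(b)) ** 2) / n

def conc_loglik(bb):
    # Concentrated log-likelihood (sigma^2 profiled out), up to a constant
    return -0.5 * n * np.log(np.sum((y - g(bb)) ** 2) / n)

# (a) Covariance from the numerical Hessian of the concentrated log-likelihood
h = 1e-4
H = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        e_i, e_j = np.eye(2)[i] * h, np.eye(2)[j] * h
        H[i, j] = (conc_loglik(b + e_i + e_j) - conc_loglik(b + e_i - e_j)
                   - conc_loglik(b - e_i + e_j) + conc_loglik(b - e_i - e_j)) / (4 * h * h)
cov_hessian = np.linalg.inv(-H)

# (b) Covariance from the Jacobian and the ML estimate of the error variance
J = jac(b)
cov_jacobian = sig2_ml * np.linalg.inv(J.T @ J)

print("Hessian-based covariance:\n", cov_hessian)
print("Jacobian-based covariance:\n", cov_jacobian)
```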

14.
Based on a progressively Type-I interval censored sample, the problem of estimating the unknown parameters of a two-parameter generalized half-normal (GHN) distribution is considered. Different methods of estimation are discussed, including maximum likelihood estimation, the midpoint approximation method, approximate maximum likelihood estimation, the method of moments, and estimation based on the probability plot. Several Bayesian estimates with respect to symmetric and asymmetric loss functions such as squared error, LINEX, and general entropy are calculated, with Lindley's approximation method applied to determine them. Monte Carlo simulations are performed to compare the performances of the different methods. Finally, an analysis is carried out for a real dataset.

15.
Parametric incomplete data models defined by ordinary differential equations (ODEs) are widely used in biostatistics to describe biological processes accurately. Their parameters are estimated on approximate models whose regression functions are evaluated by a numerical integration method. Accurate and efficient estimation of these parameters is a critical issue. This paper proposes parameter estimation methods involving either a stochastic approximation EM algorithm (SAEM) for maximum likelihood estimation, or a Gibbs sampler for the Bayesian approach. Both algorithms involve the simulation of non-observed data from conditional distributions using Hastings–Metropolis (H–M) algorithms. A modified H–M algorithm, including an original local linearization scheme to solve the ODEs, is proposed to reduce the computational time significantly. The convergence of all these algorithms on the approximate model is proved, and the errors induced by the numerical solving method on the conditional distribution, the likelihood and the posterior distribution are bounded. The Bayesian and maximum likelihood estimation methods are illustrated on a simulated pharmacokinetic nonlinear mixed-effects model defined by an ODE. Simulation results illustrate the ability of these algorithms to provide accurate estimates.

16.
The maximum likelihood (ML) estimation of the location and scale parameters of an exponential distribution based on singly and doubly censored samples is given. When the sample is multiply censored (some middle observations being censored), however, the ML method does not admit explicit solutions. In this case we present a simple approximation to the likelihood equation and derive explicit estimators which are linear functions of order statistics. Finally, we present some examples to illustrate this method of estimation.

17.
The maximum likelihood and Bayesian approaches to parameter estimation and prediction of future record values are considered for the two-parameter Burr Type XII distribution, based on record values with the number of trials following the record values (inter-record times). First, the Bayes estimates are obtained based on a joint bivariate prior for the shape parameters. In this case, the Bayes estimates are developed using Lindley's approximation and the Markov chain Monte Carlo (MCMC) method, owing to the lack of explicit forms under the squared error and the linear-exponential loss functions. The MCMC method is also used to construct highest posterior density credible intervals. Second, the Bayes estimates are obtained with respect to a discrete prior for the first shape parameter and a conjugate prior for the other shape parameter. The Bayes and maximum likelihood estimates are compared in terms of the estimated risk by Monte Carlo simulations. We further consider non-Bayesian and Bayesian prediction of future lower records arising from the Burr Type XII distribution based on record data, and the derived predictors are compared using Monte Carlo simulations. A real data set is analysed for illustrative purposes.

18.
In this paper, we study the E-Bayesian and hierarchical Bayesian estimation of the parameter of the Pareto distribution under different loss functions. The definition of the E-Bayesian estimation of the parameter is provided. Moreover, for the Pareto distribution with known scale parameter, formulas for the E-Bayesian and hierarchical Bayesian estimations of the shape parameter are given under the different loss functions. Properties of the E-Bayesian estimation are established: (i) the relationships among the E-Bayesian estimations under different loss functions, and (ii) the relationships between the E-Bayesian and hierarchical Bayesian estimations under the same loss function. A simulation example using the Monte Carlo method is given. Finally, the methods are applied to a practical problem involving golfers' income data; the results show that the proposed method is feasible and convenient to apply.
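For the known-scale case under squared-error loss, the E-Bayesian estimate is simply the Bayes estimate averaged over a hyperprior on the Gamma hyperparameters. A minimal sketch, assuming a Gamma(a, b) prior on the shape with a uniform hyperprior on b over (0, c); the hyperprior and all constants are illustrative rather than those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Pareto(alpha, theta) data with known scale theta; the shape alpha is the target.
alpha_true, theta, n = 2.5, 1.0, 30
x = theta * (1 - rng.uniform(size=n)) ** (-1 / alpha_true)   # inverse-cdf sampling
T = np.sum(np.log(x / theta))

a, c = 1.0, 4.0    # Gamma(a, b) prior on alpha; hyperprior b ~ Uniform(0, c)

def bayes_sel(b):
    """Bayes estimate of alpha under squared-error loss for a Gamma(a, b) prior:
    the posterior is Gamma(n + a, b + T), so the posterior mean is (n + a)/(b + T)."""
    return (n + a) / (b + T)

# E-Bayesian estimate: expectation of the Bayes estimate over the hyperprior on b
b_grid = np.linspace(0.0, c, 10_001)
e_bayes_numeric = np.mean(bayes_sel(b_grid))
e_bayes_closed = (n + a) / c * np.log((c + T) / T)

print(f"ML estimate          : {n / T:.4f}")
print(f"E-Bayes (numeric)    : {e_bayes_numeric:.4f}")
print(f"E-Bayes (closed form): {e_bayes_closed:.4f}")
```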

19.
Approximate Bayesian computation (ABC) has become a popular technique for facilitating Bayesian inference from complex models. In this article we present an ABC approximation designed to perform biased filtering for a hidden Markov model when the likelihood function is intractable. We use a sequential Monte Carlo (SMC) algorithm to both fit and sample from our ABC approximation of the target probability density. This approach is shown, empirically, to be more accurate with respect to the original filter than competing methods. The theoretical bias of our method is investigated; it is shown that the bias goes to zero at the expense of increased computational effort. Our approach is illustrated on a constrained sequential lasso for portfolio allocation to 15 constituents of the FTSE 100 share index.
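A minimal ABC-filter sketch on a toy linear-Gaussian state-space model (whose likelihood is of course tractable; it is used only so the output can be checked against the latent state). It uses an indicator ABC kernel and multinomial resampling, and is a simplification of, not a reproduction of, the SMC scheme in the article; all settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(13)

# Linear-Gaussian state-space model used only to generate data; the ABC filter below
# never evaluates the observation density, it only simulates pseudo-observations.
T_len, phi, sx, sy = 100, 0.9, 0.5, 0.5
x = np.zeros(T_len)
y = np.zeros(T_len)
for t in range(1, T_len):
    x[t] = phi * x[t - 1] + rng.normal(0, sx)
    y[t] = x[t] + rng.normal(0, sy)

N, eps = 2000, 0.3            # number of particles and ABC tolerance
particles = rng.normal(0, 1, N)
filt_mean = np.zeros(T_len)

for t in range(1, T_len):
    # Propagate particles through the state equation
    particles = phi * particles + rng.normal(0, sx, N)
    # Simulate pseudo-observations and weight by the ABC kernel 1{|y_sim - y_t| < eps}
    y_sim = particles + rng.normal(0, sy, N)
    w = (np.abs(y_sim - y[t]) < eps).astype(float)
    if w.sum() == 0:          # degenerate step: fall back to equal weights
        w[:] = 1.0
    w /= w.sum()
    filt_mean[t] = np.sum(w * particles)
    # Multinomial resampling
    particles = particles[rng.choice(N, size=N, p=w)]

rmse = np.sqrt(np.mean((filt_mean[1:] - x[1:]) ** 2))
print(f"ABC filter RMSE against the latent state: {rmse:.3f}")
```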

