Similar Literature
A total of 20 similar documents were found.
1.
This article deals with Bayesian inference and prediction for M/G/1 queueing systems. The general service time density is approximated with a class of Erlang mixtures, which are phase-type distributions. Given this phase-type approximation, explicit evaluation of measures such as the stationary queue size, waiting time and busy period distributions can be obtained. Given arrival and service data, a Bayesian procedure based on reversible jump Markov chain Monte Carlo methods is proposed to estimate the system parameters and predictive distributions.
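To make the payoff of the phase-type approximation concrete, the sketch below computes standard M/G/1 performance measures from an Erlang-mixture service distribution via the Pollaczek–Khinchine formula. The mixture weights, shapes, rates and the arrival rate are illustrative assumptions, not values from the article.

```python
import numpy as np

# Illustrative Erlang-mixture approximation of a service density:
# weights w[j] on Erlang(k[j], theta[j]) components (shape k, rate theta).
w = np.array([0.3, 0.7])        # mixture weights (assumed)
k = np.array([2, 5])            # Erlang shapes (assumed)
theta = np.array([4.0, 6.0])    # Erlang rates (assumed)
lam = 0.8                       # Poisson arrival rate (assumed)

# Mixture moments: E[S] = sum w_j k_j/theta_j, E[S^2] = sum w_j k_j(k_j+1)/theta_j^2
ES = np.sum(w * k / theta)
ES2 = np.sum(w * k * (k + 1) / theta ** 2)

rho = lam * ES                  # traffic intensity; stability requires rho < 1
assert rho < 1, "queue is unstable"

# Pollaczek-Khinchine mean waiting time in queue, and queue length by Little's law
Wq = lam * ES2 / (2.0 * (1.0 - rho))
Lq = lam * Wq
print(f"rho={rho:.3f}  E[Wq]={Wq:.3f}  E[Lq]={Lq:.3f}")
```

Once posterior draws of the mixture parameters are available, applying the same computation draw by draw yields posterior distributions of these performance measures.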

2.

In time series analysis, a signal extraction model (SEM) is used to estimate an unobserved signal component from observed time series data. Since the parameters of the components in an SEM are often unknown in practice, a common approach is to estimate the unobserved signal component by plugging in the maximum likelihood estimates (MLEs) of those parameters. This paper explores an alternative way to estimate the unobserved signal component when the parameters are unknown. The suggested method uses importance sampling (IS) with Bayesian inference. The basic idea is to treat the parameters of the components in the SEM as a random vector and compute their posterior density by Bayesian inference. The IS method is then applied to integrate out the parameters, yielding an estimate of the unobserved signal component that is unconditional on the parameters. The method is illustrated with a real time series. A Monte Carlo study with four different types of time series models then compares its performance with that of the plug-in MLE approach. The study shows that the IS method with Bayesian inference is computationally feasible and robust, and more efficient in terms of mean square error (MSE) than the plug-in approach.
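A minimal sketch of the idea, under assumptions not made in the paper: the SEM is a toy local-level model (random-walk signal plus white observation noise), the conditional signal estimate is the Kalman-filtered mean, the IS proposal is the prior on the two variances, and the gamma priors are illustrative. With the prior as proposal, the IS weights reduce to the likelihood, and the final estimate averages the conditional signal estimates over the weighted parameter draws.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy local-level SEM: signal s_t = s_{t-1} + eta_t, data y_t = s_t + eps_t
T, q_true, r_true = 100, 0.1, 1.0
s = np.cumsum(rng.normal(0.0, np.sqrt(q_true), T))
y = s + rng.normal(0.0, np.sqrt(r_true), T)

def kalman(y, q, r):
    """Filtered signal means and log-likelihood for given variances (q, r)."""
    m, P, ll = 0.0, 10.0, 0.0
    means = np.empty(len(y))
    for t, yt in enumerate(y):
        P += q                            # predict
        F, v = P + r, yt - m              # innovation variance and innovation
        ll += -0.5 * (np.log(2 * np.pi * F) + v ** 2 / F)
        K = P / F                         # Kalman gain
        m, P = m + K * v, (1.0 - K) * P   # update
        means[t] = m
    return means, ll

# IS with the prior as proposal: the weights are proportional to the likelihood
M = 500
qs = rng.gamma(1.0, 0.5, M)               # illustrative prior on q
rs = rng.gamma(1.0, 1.0, M)               # illustrative prior on r
out = [kalman(y, qs[i], rs[i]) for i in range(M)]
logw = np.array([o[1] for o in out])
w = np.exp(logw - logw.max()); w /= w.sum()

# signal estimate unconditional on the parameters
signal_hat = w @ np.array([o[0] for o in out])
print("MSE of IS signal estimate:", np.mean((signal_hat - s) ** 2))
```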

3.
During recent years, analysts have been relying on approximate methods of inference to estimate multilevel models for binary or count data. In an earlier study of random-intercept models for binary outcomes we used simulated data to demonstrate that one such approximation, known as marginal quasi-likelihood, leads to a substantial attenuation bias in the estimates of both fixed and random effects whenever the random effects are non-trivial. In this paper, we fit three-level random-intercept models to actual data for two binary outcomes, to assess whether refined approximation procedures, namely penalized quasi-likelihood and second-order improvements to marginal and penalized quasi-likelihood, also underestimate the underlying parameters. The extent of the bias is assessed by two standards of comparison: exact maximum likelihood estimates, based on a Gauss–Hermite numerical quadrature procedure, and a set of Bayesian estimates, obtained from Gibbs sampling with diffuse priors. We also examine the effectiveness of a parametric bootstrap procedure for reducing the bias. The results indicate that second-order penalized quasi-likelihood estimates provide a considerable improvement over the other approximations, but all the methods of approximate inference result in a substantial underestimation of the fixed and random effects when the random effects are sizable. We also find that the parametric bootstrap method can eliminate the bias but is computationally very intensive.
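For reference, the "exact maximum likelihood" standard of comparison evaluates each cluster's marginal likelihood by Gauss–Hermite quadrature. A minimal sketch for one cluster of a random-intercept logit, on toy data (the function and variable names are ours, not the paper's):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import expit

def cluster_loglik(yj, Xj, beta, sigma_u, n_nodes=20):
    """Log marginal likelihood of one cluster of a random-intercept logit,
    with the N(0, sigma_u^2) intercept integrated out by Gauss-Hermite
    quadrature: integral ~ (1/sqrt(pi)) * sum_k w_k f(sqrt(2)*sigma_u*x_k)."""
    nodes, weights = hermgauss(n_nodes)
    u = np.sqrt(2.0) * sigma_u * nodes            # change of variables
    p = expit((Xj @ beta)[:, None] + u[None, :])  # (n_obs, n_nodes)
    lik = np.prod(np.where(yj[:, None] == 1, p, 1.0 - p), axis=0)
    return np.log(lik @ weights / np.sqrt(np.pi))

rng = np.random.default_rng(0)
Xj = rng.normal(size=(8, 2))                      # toy cluster design
yj = rng.integers(0, 2, 8)                        # toy binary outcomes
print(cluster_loglik(yj, Xj, beta=np.array([0.5, -0.3]), sigma_u=1.0))
```

Summing these terms over clusters gives the full marginal log-likelihood to maximize over the fixed effects and variance components.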

4.
The lognormal distribution is used extensively to describe the distribution of positive random variables, especially in occupational health and other biological data, and a common inferential target is the lognormal mean. Zou et al. (2009) proposed procedures based on the so-called "method of variance estimates recovery" (MOVER), while an alternative simulation-based approach is the generalized confidence interval of Krishnamoorthy and Mathew (2003). In this paper we compare the coverage of the MOVER-based confidence intervals and the generalized confidence interval procedure with that of Bayesian credible intervals obtained under a variety of prior distributions, to assess the appropriateness of each. An extensive simulation study evaluates the coverage accuracy and interval width of the methods. For the Bayesian approach both equal-tail and highest posterior density (HPD) credible intervals are presented. Several priors (the independence Jeffreys prior; the Jeffreys-rule prior, namely the square root of the determinant of the Fisher information matrix; and reference and probability-matching priors) are compared to determine which gives the best coverage with the most efficient interval width. The simulation studies show that the constructed Bayesian intervals have satisfactory coverage probabilities and in some cases outperform the MOVER and generalized confidence interval results. The Bayesian inference procedures (hypothesis tests and confidence intervals) are also extended to the difference between two lognormal means, to the case of zero-valued observations, and to confidence intervals for the lognormal variance. In the last section of this paper the bivariate lognormal distribution is discussed and Bayesian confidence intervals are obtained for the difference between two correlated lognormal means, as well as for the ratio of lognormal variances, using nine different priors.
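A sketch of the MOVER construction for a single lognormal mean may help fix ideas: on the log scale the target is θ = μ + σ²/2; a t-interval for μ and a chi-square interval for σ²/2 are combined by the MOVER recovery formulas, and the limits are exponentiated. The simulated data and confidence level are illustrative, and this follows the standard Zou et al. (2009) construction rather than any particular variant in the paper.

```python
import numpy as np
from scipy import stats

def mover_ci_lognormal_mean(x, alpha=0.05):
    """MOVER confidence interval for the lognormal mean exp(mu + sigma^2/2)."""
    y = np.log(x)                        # lognormal data -> normal scale
    n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)

    # component CIs: t-based for mu, chi-square-based for sigma^2/2
    tq = stats.t.ppf(1 - alpha / 2, n - 1)
    l1, u1 = ybar - tq * np.sqrt(s2 / n), ybar + tq * np.sqrt(s2 / n)
    chi_lo = stats.chi2.ppf(alpha / 2, n - 1)
    chi_hi = stats.chi2.ppf(1 - alpha / 2, n - 1)
    l2, u2 = (n - 1) * s2 / (2 * chi_hi), (n - 1) * s2 / (2 * chi_lo)

    # MOVER recovery for the sum theta = mu + sigma^2/2
    theta = ybar + s2 / 2
    L = theta - np.sqrt((ybar - l1) ** 2 + (s2 / 2 - l2) ** 2)
    U = theta + np.sqrt((u1 - ybar) ** 2 + (u2 - s2 / 2) ** 2)
    return np.exp(L), np.exp(U)

x = stats.lognorm.rvs(s=0.8, scale=np.exp(1.0), size=50, random_state=0)
print(mover_ci_lognormal_mean(x))        # true mean is exp(1 + 0.8^2/2) ~ 3.74
```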

5.
A Bayesian nonparametric estimate of the survival distribution is derived under a particular sampling scheme for grouped data that includes the possibility of censoring. The estimate uses the prior information to smooth the data, giving an estimate which is continuous. As special cases survival estimates for life tables are obtained and the estimate of Susarla and Van Ryzin (1976) is derived. As the weight of the prior information tends to zero, the Bayesian estimate reduces to a continuous version of the nonparametric maximum-likelihood estimate. An empirical Bayes modification of the procedure is illustrated on a data set from Cutler and Ederer (1958).
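As a loose illustration of how prior information smooths grouped survival data, the sketch below gives the posterior-mean survival probabilities under a Dirichlet prior on the interval probabilities, for uncensored grouped data only; this is a simplified special case, not the paper's full estimator with censoring. As the prior weight alpha tends to zero, the estimate reduces to the empirical (maximum-likelihood) survival fractions, mirroring the limiting behavior described above.

```python
import numpy as np

def bayes_grouped_survival(deaths, alpha, S0):
    """Posterior-mean survival probabilities past each interval boundary for
    grouped, uncensored data under a Dirichlet(alpha * p0) prior on the
    interval probabilities; S0[j] is the prior survival past boundary j."""
    n = deaths.sum()
    survivors = n - np.cumsum(deaths)          # still alive past each boundary
    return (alpha * S0 + survivors) / (alpha + n)

deaths = np.array([5, 12, 20, 8, 5])           # toy grouped death counts
S0 = np.exp(-0.5 * np.arange(1, 6))            # smooth exponential prior guess
print(bayes_grouped_survival(deaths, alpha=10.0, S0=S0))   # prior-smoothed
print(bayes_grouped_survival(deaths, alpha=1e-8, S0=S0))   # ~ empirical MLE
```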

6.
This paper presents a Bayesian solution to the problem of time series forecasting for the case in which the generating process is an autoregressive process of order one with a normal random coefficient. The proposed procedure is based on the predictive density of the future observation. Conjugate priors are used for some parameters, while improper vague priors are used for others.
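The predictive density has a simple Monte Carlo form: draw the model parameters (here standing in for posterior draws, with illustrative distributions), draw the random coefficient, and propagate one step ahead. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# random-coefficient AR(1): y_t = phi_t * y_{t-1} + eps_t,
# phi_t ~ N(phi_bar, tau^2), eps_t ~ N(0, sigma^2)
y_T = 1.5                               # last observed value (assumed)
M = 10_000
# draws of (phi_bar, tau, sigma) standing in for their posterior (assumed)
phi_bar = rng.normal(0.6, 0.05, M)
tau = np.abs(rng.normal(0.1, 0.02, M))
sigma = np.abs(rng.normal(1.0, 0.1, M))

# one-step-ahead predictive draws: the random coefficient is integrated out
phi = rng.normal(phi_bar, tau)
y_next = phi * y_T + rng.normal(0.0, sigma)

print("predictive mean:", y_next.mean())
print("90% predictive interval:", np.quantile(y_next, [0.05, 0.95]))
```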

7.
The two-parameter generalized exponential (GE) distribution was introduced by Gupta and Kundu [Gupta, R.D. and Kundu, D., 1999, Generalized exponential distribution. Australian and New Zealand Journal of Statistics, 41(2), 173–188]. The GE distribution can be used in situations where a skewed distribution for a nonnegative random variable is needed. In this article, Bayesian estimation and prediction for the GE distribution under informative priors are considered. Importance sampling is used to estimate the parameters as well as the reliability function, and Gibbs and Metropolis samplers are used to predict the behavior of further observations from the distribution. Two data sets are used to illustrate the Bayesian procedure.
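The importance-sampling step can be sketched as follows for a complete GE sample, using (assumed) gamma priors as the proposal so that the weights are proportional to the likelihood; the prior hyperparameters and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate GE(alpha, lam) data by inversion: F(x) = (1 - exp(-lam*x))^alpha
a_true, lam_true, n = 2.0, 1.5, 100
u = rng.uniform(size=n)
x = -np.log(1.0 - u ** (1.0 / a_true)) / lam_true

# importance sampling with the priors as proposal
M = 20_000
a_s = rng.gamma(2.0, 1.0, M)                   # prior draws for alpha (assumed)
l_s = rng.gamma(2.0, 1.0, M)                   # prior draws for lambda (assumed)
# GE log-likelihood: n*log(a*l) - l*sum(x) + (a-1)*sum(log(1 - exp(-l*x)))
S = np.log1p(-np.exp(-np.outer(l_s, x))).sum(axis=1)
logw = n * np.log(a_s * l_s) - l_s * x.sum() + (a_s - 1.0) * S
w = np.exp(logw - logw.max()); w /= w.sum()

t = 1.0                                        # reliability R(t) = 1 - F(t)
print("E[alpha | x]  ~", w @ a_s)
print("E[lambda | x] ~", w @ l_s)
print("E[R(1) | x]   ~", w @ (1.0 - (1.0 - np.exp(-l_s * t)) ** a_s))
```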

8.
Kalman filtering techniques are widely used by engineers to recursively estimate random signal parameters which are essentially coefficients in a large-scale time series regression model. These Bayesian estimators depend on the values assumed for the mean and covariance parameters associated with the initial state of the random signal. This paper considers a likelihood approach to estimation and tests of hypotheses involving the critical initial means and covariances. A computationally simple convergent iterative algorithm is used to generate estimators which depend only on standard Kalman filter outputs at each successive stage. Conditions are given under which the maximum likelihood estimators are consistent and asymptotically normal. The procedure is illustrated using a typical large-scale data set involving 10-dimensional signal vectors.

9.
This note compares a Bayesian Markov chain Monte Carlo approach implemented by Watanabe with a maximum likelihood (ML) approach based on an efficient importance sampling procedure to estimate dynamic bivariate mixture models. In these models, stock price volatility and trading volume are jointly directed by the unobservable number of price-relevant information arrivals, which is specified as a serially correlated random variable. It is shown that the efficient importance sampling technique is extremely accurate and that it produces results that differ significantly from those reported by Watanabe.

10.
A new procedure is proposed for deriving variable bandwidths in univariate kernel density estimation, based upon likelihood cross-validation and an analysis of a Bayesian graphical model. The procedure admits bandwidth selection which is flexible in terms of the amount of smoothing required. In addition, the basic model can be extended to incorporate local smoothing of the density estimate. The method is shown to perform well in both theoretical and practical situations, and we compare our method with those of Abramson (The Annals of Statistics 10: 1217–1223) and Sain and Scott (Journal of the American Statistical Association 91: 1525–1534). In particular, we note that in certain cases, the Sain and Scott method performs poorly even with relatively large sample sizes. We compare various bandwidth selection methods using standard mean integrated square error criteria to assess the quality of the density estimates. We study situations where the underlying density is assumed both known and unknown, and note that in practice, our method performs well when sample sizes are small. In addition, we also apply the methods to real data, and again we believe our methods perform at least as well as existing methods.
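For orientation, a sketch of the likelihood cross-validation baseline that the procedure builds on: a global bandwidth maximizing the leave-one-out log-likelihood, followed by Abramson-type local bandwidths from a pilot estimate as one way to introduce variable smoothing. This is the classical construction, not the paper's Bayesian graphical model.

```python
import numpy as np
from scipy import stats

def loo_log_likelihood(x, h):
    """Leave-one-out log-likelihood of a Gaussian kernel density estimate."""
    n = len(x)
    K = stats.norm.pdf((x[:, None] - x[None, :]) / h)
    np.fill_diagonal(K, 0.0)                 # leave x_i out of its own estimate
    fi = K.sum(axis=1) / ((n - 1) * h)
    return np.log(fi).sum()

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(1, 1.0, 150)])

hs = np.linspace(0.05, 1.0, 60)
scores = [loo_log_likelihood(x, h) for h in hs]
h_cv = hs[int(np.argmax(scores))]
print("likelihood-CV bandwidth:", h_cv)

# Abramson-style local bandwidths h_i = h * (f_pilot(x_i)/g)^(-1/2),
# with g the geometric mean of the pilot density values
pilot = stats.gaussian_kde(x, bw_method=h_cv / x.std(ddof=1))(x)
h_local = h_cv * np.sqrt(np.exp(np.mean(np.log(pilot))) / pilot)
```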

11.
Generalized additive mixed models are proposed for overdispersed and correlated data, which arise frequently in studies involving clustered, hierarchical and spatial designs. This class of models allows flexible functional dependence of an outcome variable on covariates by using nonparametric regression, while accounting for correlation between observations by using random effects. We estimate the nonparametric functions by using smoothing splines and jointly estimate the smoothing parameters and variance components by using marginal quasi-likelihood. Because numerical integration is often required to maximize the objective functions, double penalized quasi-likelihood is proposed for approximate inference. Frequentist and Bayesian inferences are compared. A key feature of the proposed method is that it allows systematic inference on all model components within a unified parametric mixed-model framework, and it can be easily implemented by fitting a working generalized linear mixed model with existing statistical software. A bias correction procedure is also proposed to improve the performance of double penalized quasi-likelihood for sparse data. We illustrate the method with an application to infectious disease data and we evaluate its performance through simulation.

12.
We study how different prior assumptions on the spatially structured heterogeneity term of the convolution hierarchical Bayesian model for spatial disease data could affect the results of an ecological analysis when response and exposure exhibit a strong spatial pattern. We show that in this case the estimate of the regression parameter could be strongly biased, both by analyzing the association between lung cancer mortality and education level on a real dataset and by a simulation experiment. The analysis is based on a hierarchical Bayesian model with a time dependent covariate in which we allow for a latency period between exposure and mortality, with time and space random terms and misaligned exposure-disease data.

13.
In general, the precise date of onset of pregnancy is unknown and can only be estimated from ultrasound biometric measurements of the embryo. We want to estimate the density of the random variable corresponding to the interval between the last menstrual period and the true onset of pregnancy. The observations correspond to the variable of interest up to additive noise. We suggest an estimation procedure based on deconvolution. It requires knowledge of the noise density, which is not available; however, we have at our disposal another specific sample with replicate observations from twin pregnancies. This allows us both to estimate the noise density and to improve the deconvolution step. Convergence rates of the final estimator are studied and compared with other settings. Our estimator involves a cut-off parameter, for which we propose a cross-validation-type procedure. Lastly, we estimate the target density in spontaneous pregnancies with an estimate of the noise obtained from replicate observations in twin pregnancies.
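A toy sketch of the two-sample deconvolution idea, with Gaussian signal and noise as stand-ins: differences of replicate observations cancel the signal, so the empirical characteristic function of the differences estimates |φ_ε|², and (assuming symmetric noise) a Fourier-inversion estimator with a cut-off bandwidth recovers the target density. The distributions, sample sizes and cut-off are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy version: interval X ~ N(2, 1), measurement noise eps ~ N(0, 0.5^2)
n, m = 2000, 1000
y_obs = rng.normal(2.0, 1.0, n) + rng.normal(0.0, 0.5, n)   # singleton sample
x_tw = rng.normal(2.0, 1.0, m)
y1 = x_tw + rng.normal(0.0, 0.5, m)                         # replicate pair
y2 = x_tw + rng.normal(0.0, 0.5, m)                         # (twin pregnancies)

t = np.linspace(-6.0, 6.0, 481)
# differences cancel the signal: y1 - y2 = eps1 - eps2, so
# E[cos(t*(y1-y2))] = |phi_eps(t)|^2; assume symmetric noise (phi_eps real >= 0)
phi_eps = np.sqrt(np.clip(np.cos(np.outer(t, y1 - y2)).mean(axis=1), 1e-3, None))
phi_y = np.exp(1j * np.outer(t, y_obs)).mean(axis=1)        # ecf of the data

h = 0.25                                   # cut-off bandwidth (tune, e.g. by CV)
keep = (np.abs(t) <= 1.0 / h).astype(float)
grid = np.linspace(-1.0, 5.0, 200)
dt = t[1] - t[0]
# Fourier inversion: f(x) = (1/2pi) * integral of exp(-itx) phi_y/phi_eps over |t|<=1/h
fhat = np.exp(-1j * np.outer(grid, t)) @ (phi_y / phi_eps * keep)
fhat = np.clip(fhat.real * dt / (2.0 * np.pi), 0.0, None)   # density estimate
```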

14.
This paper deals with Bayesian estimation and prediction for the inverse Weibull distribution with shape parameter α and scale parameter λ under general progressive censoring. We prove that the posterior conditional density functions of α and λ are both log-concave, based on the assumption that λ has a gamma prior distribution and α follows a prior distribution with log-concave density. Then, we present the Gibbs sampling strategy to estimate, under squared-error loss, any function of the unknown parameter vector (α, λ) and to find credible intervals, as well as to obtain prediction intervals for future order statistics. Monte Carlo simulations are given to compare the performance of the Bayesian estimators derived via Gibbs sampling with the corresponding maximum likelihood estimators, and a real data analysis is discussed in order to illustrate the proposed procedure. Finally, we extend the developed methodology to other two-parameter distributions, including the Weibull, Burr type XII, and flexible Weibull distributions, and also to general progressive hybrid censoring.
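A sketch of the sampler for the simplest case, a complete (uncensored) inverse Weibull sample with independent gamma priors: λ has a conjugate gamma full conditional, while α, whose full conditional is log-concave, is updated here by a random-walk Metropolis step on the log scale (adaptive rejection sampling would exploit the log-concavity directly). Hyperparameters and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# complete-sample simplification (no progressive censoring here):
# inverse Weibull F(x) = exp(-lam * x^(-alpha)); simulate by inversion
a_true, lam_true, n = 2.0, 1.5, 80
x = (-np.log(rng.uniform(size=n)) / lam_true) ** (-1.0 / a_true)
slx = np.log(x).sum()
a0, b0, c0, d0 = 1.0, 1.0, 1.0, 1.0            # gamma prior hyperparameters

def log_cond_alpha(a, lam):
    """Log full conditional of alpha under a Gamma(c0, d0) prior."""
    return ((c0 - 1.0) * np.log(a) - d0 * a
            + n * np.log(a) - (a + 1.0) * slx - lam * np.sum(x ** (-a)))

draws, a_cur, lam_cur = [], 1.0, 1.0
for it in range(5000):
    # lambda | alpha, x is conjugate: Gamma(a0 + n, b0 + sum x^(-alpha))
    lam_cur = rng.gamma(a0 + n, 1.0 / (b0 + np.sum(x ** (-a_cur))))
    # log-scale random-walk Metropolis for alpha (with Jacobian correction)
    a_prop = a_cur * np.exp(0.2 * rng.normal())
    logr = (log_cond_alpha(a_prop, lam_cur) - log_cond_alpha(a_cur, lam_cur)
            + np.log(a_prop) - np.log(a_cur))
    if np.log(rng.uniform()) < logr:
        a_cur = a_prop
    draws.append((a_cur, lam_cur))

post = np.array(draws)[1000:]                  # discard burn-in
print("posterior means (alpha, lambda):", post.mean(axis=0))
```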

15.
We consider a general class of prior distributions for nonparametric Bayesian estimation which uses finite random series with a random number of terms. A prior is constructed through distributions on the number of basis functions and the associated coefficients. We derive a general result on adaptive posterior contraction rates for all smoothness levels of the target function in the true model by constructing an appropriate ‘sieve’ and applying the general theory of posterior contraction rates. We apply this general result on several statistical problems such as density estimation, various nonparametric regressions, classification, spectral density estimation and functional regression. The prior can be viewed as an alternative to the commonly used Gaussian process prior, but properties of the posterior distribution can be analysed by relatively simpler techniques. An interesting approximation property of B‐spline basis expansion established in this paper allows a canonical choice of prior on coefficients in a random series and allows a simple computational approach without using Markov chain Monte Carlo methods. A simulation study is conducted to show that the accuracy of the Bayesian estimators based on the random series prior and the Gaussian process prior are comparable. We apply the method on Tecator data using functional regression models.

16.
Zero-inflated count models are increasingly employed in many fields where "zero-inflation" is present. In modeling road traffic crashes, they have also been shown to yield a better model fit when zero crash counts are over-represented. However, the general specification of the zero-inflated model cannot account for the multilevel structure of crash data, which may be an important source of over-dispersion. This paper examines zero-inflated Poisson regression with site-specific random effects (REZIP), in comparison with the random-effect Poisson model and the standard zero-inflated Poisson model. A practical and flexible procedure, using Bayesian inference with a Markov chain Monte Carlo algorithm and cross-validation predictive density techniques, is applied for model calibration and suitability assessment. Using crash data from Singapore (1998–2005), the illustrative results demonstrate that the REZIP model may significantly improve the fit and predictive performance of crash prediction models. This improvement can contribute to traffic safety management and engineering practices such as countermeasure design and the safety evaluation of traffic treatments.
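As background for the model comparison, the zero-inflated Poisson likelihood that all three models build on can be written and maximized in a few lines. The sketch below fits a plain ZIP by maximum likelihood on toy data; the REZIP itself adds site-specific random effects and is calibrated by MCMC, which is beyond this sketch.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson:
    P(0) = p + (1-p)*exp(-lam);  P(k) = (1-p)*Pois(k; lam) for k >= 1."""
    p, lam = expit(params[0]), np.exp(params[1])   # unconstrained -> (0,1), >0
    ll0 = np.log(p + (1.0 - p) * np.exp(-lam))
    llk = np.log1p(-p) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.where(y == 0, ll0, llk).sum()

rng = np.random.default_rng(0)
# toy crash counts: 30% structural zeros, Poisson(2) otherwise
y = np.where(rng.uniform(size=500) < 0.3, 0, rng.poisson(2.0, 500))

fit = minimize(zip_negloglik, x0=np.zeros(2), args=(y,), method="BFGS")
print("p-hat:", expit(fit.x[0]), " lambda-hat:", np.exp(fit.x[1]))
```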

17.
Multivariate density estimation plays an important role in investigating the mechanism of high-dimensional data. This article describes a nonparametric Bayesian approach to the estimation of multivariate densities. A general procedure is proposed for constructing Feller priors for multivariate densities and their theoretical properties as nonparametric priors are established. A blocked Gibbs sampling algorithm is devised to sample from the posterior of the multivariate density. A simulation study is conducted to evaluate the performance of the procedure.

18.
This paper addresses the investment decisions of 373 large Brazilian firms from 1997 to 2004 in the presence of financial constraints, using panel data. A Bayesian econometric model with ridge regression was used to handle multicollinearity among the variables in the model. Prior distributions are assumed for the parameters, classifying the model into random or fixed effects. We used a Bayesian approach to estimate the parameters, considering normal and Student-t distributions for the errors, and assumed that the initial values of the lagged dependent variable are not fixed but generated by a random process. The recursive predictive density criterion was used for model comparison. Twenty models were tested, and the results indicate that multicollinearity does influence the values of the estimated parameters. Controlling for capital intensity, financial constraints are found to be more important for capital-intensive firms, probably because of their lower profitability indexes, higher fixed costs and higher degree of property diversification.

19.
Bayesian marginal inference via candidate's formula
Computing marginal probabilities is an important and fundamental issue in Bayesian inference. We present a simple computational method which arises from a likelihood identity. The identity, called Candidate's formula, expresses the marginal probability as the ratio of the prior times the likelihood to the posterior density. Based on Markov chain Monte Carlo output simulated from the posterior distribution, a nonparametric kernel estimate is used for the posterior density appearing in that ratio. The resulting nonparametric Candidate's estimate requires only one evaluation of the posterior density estimate, at a single point. The optimal point for this evaluation can be chosen to minimize the expected mean square relative error. The results show that the best point is not necessarily the posterior mode, but rather a point that compromises between high density and low Hessian. For high-dimensional problems, we introduce a variance reduction approach to ease the difficulties caused by data sparseness. A simulation study is presented.
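The method is easy to sketch on a conjugate toy model where the exact marginal likelihood is available as a check. The posterior draws below are i.i.d. stand-ins for MCMC output, and the evaluation point is simply the posterior mean rather than the optimal point discussed above; all numerical settings are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# conjugate toy model so the answer is checkable:
# y_i ~ N(theta, 1), theta ~ N(0, 1)  =>  theta | y ~ N(sum(y)/(n+1), 1/(n+1))
y = rng.normal(1.0, 1.0, 30)
n = len(y)

# i.i.d. draws stand in for MCMC output from the posterior
theta_draws = rng.normal(y.sum() / (n + 1), np.sqrt(1.0 / (n + 1)), 5000)

theta0 = theta_draws.mean()              # evaluation point (posterior mean here)
kde = stats.gaussian_kde(theta_draws)    # nonparametric posterior density
log_post = np.log(kde(theta0)[0])        # the single evaluation required

# Candidate's formula: log m(y) = log f(y|theta0) + log pi(theta0) - log pi(theta0|y)
log_marg = (stats.norm.logpdf(y, theta0, 1.0).sum()
            + stats.norm.logpdf(theta0, 0.0, 1.0) - log_post)

# exact marginal for this conjugate model, as a check: y ~ N(0, I + 11')
exact = stats.multivariate_normal.logpdf(y, np.zeros(n), np.eye(n) + np.ones((n, n)))
print("Candidate's estimate:", log_marg, " exact:", exact)
```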

20.
The Lasso has sparked interest in the use of penalization of the log‐likelihood for variable selection, as well as for shrinkage. We are particularly interested in the more‐variables‐than‐observations case of characteristic importance for modern data. The Bayesian interpretation of the Lasso as the maximum a posteriori estimate of the regression coefficients, which have been given independent, double exponential prior distributions, is adopted. Generalizing this prior provides a family of hyper‐Lasso penalty functions, which includes the quasi‐Cauchy distribution of Johnstone and Silverman as a special case. The properties of this approach, including the oracle property, are explored, and an EM algorithm for inference in regression problems is described. The posterior is multi‐modal, and we suggest a strategy of using a set of perfectly fitting random starting values to explore modes in different regions of the parameter space. Simulations show that our procedure provides significant improvements on a range of established procedures, and we provide an example from chemometrics.
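To illustrate the flavor of an EM-type algorithm for penalized regression with multiple random starts, the sketch below computes the MAP estimate under the plain double-exponential (Lasso) prior via an iteratively reweighted ridge update, restarting from several random points and keeping the best objective value. It is a generic stand-in, not the paper's hyper-Lasso algorithm, and all data and tuning values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 50, 100                               # more variables than observations
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:5] = [3, -2, 1.5, -1, 2]
y = X @ beta_true + rng.normal(0.0, 0.5, n)

def map_lasso(X, y, lam, beta0, n_iter=200, eps=1e-8):
    """EM-style reweighted ridge for the double-exponential-prior MAP:
    each iteration solves a ridge problem with weights lam/|beta_j|."""
    beta = beta0.copy()
    for _ in range(n_iter):
        w = lam / np.maximum(np.abs(beta), eps)
        beta = np.linalg.solve(X.T @ X + np.diag(w), X.T @ y)
    beta[np.abs(beta) < 1e-6] = 0.0          # clean up numerical zeros
    return beta

# several random starting values to explore different modes
starts = [rng.normal(scale=2.0, size=p) for _ in range(5)]
fits = [map_lasso(X, y, lam=5.0, beta0=b0) for b0 in starts]
obj = [0.5 * np.sum((y - X @ b) ** 2) + 5.0 * np.abs(b).sum() for b in fits]
best = fits[int(np.argmin(obj))]
print("selected (nonzero) coefficients:", np.flatnonzero(best))
```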
