Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Consider a random sample X1, X2, …, Xn from a normal population with unknown mean and standard deviation. Only the sample size, mean and range are recorded, and it is necessary to estimate the unknown population mean and standard deviation. In this paper the estimation of the mean and standard deviation is carried out from a Bayesian perspective, using a Markov chain Monte Carlo (MCMC) algorithm to simulate samples from the intractable joint posterior distribution of the mean and standard deviation. The proposed methodology is applied to simulated and real data. The real data refer to the sugar content (°Brix level) of orange juice produced in different countries.
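As a rough illustration of learning about a normal mean and standard deviation from only the sample size, mean and range, the following Python sketch uses ABC-style rejection rather than the paper's MCMC algorithm; the observed summaries, priors and tolerances are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed summaries (hypothetical values for illustration)
n, obs_mean, obs_range = 10, 11.2, 3.5

# Draw (mu, sigma) from diffuse priors, simulate a sample of size n for each draw,
# and keep the draws whose simulated mean and range are close to the observed ones.
n_sim = 200_000
mu = rng.normal(obs_mean, 5.0, size=n_sim)       # diffuse prior centred near the data
sigma = rng.uniform(0.1, 10.0, size=n_sim)       # flat prior on the standard deviation
x = rng.normal(mu[:, None], sigma[:, None], size=(n_sim, n))
sim_mean = x.mean(axis=1)
sim_range = x.max(axis=1) - x.min(axis=1)

keep = (np.abs(sim_mean - obs_mean) < 0.2) & (np.abs(sim_range - obs_range) < 0.5)
mu_post, sigma_post = mu[keep], sigma[keep]
print(len(mu_post), mu_post.mean(), sigma_post.mean())
```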

2.
Data with censored initiating and terminating times arise quite frequently in acquired immunodeficiency syndrome (AIDS) epidemiologic studies. Analysis of such data involves a complicated bivariate likelihood, which is difficult to handle computationally. Bayesian analysis, on the other hand, presents added complexities that have yet to be resolved. By exploiting the simple form of a complete-data likelihood and utilizing the power of a Markov chain Monte Carlo (MCMC) algorithm, this paper presents a methodology for fitting Bayesian regression models to such data. The proposed methods extend the work of Sinha (1997), who considered non-parametric Bayesian analysis of this type of data. The methodology is illustrated with an application to a cohort of HIV-infected hemophiliac patients.

3.
We propose a new iterative algorithm, called the model walking algorithm, for the Bayesian model averaging method applied to longitudinal regression models with AR(1) random errors within subjects. The Markov chain Monte Carlo method is employed together with the model walking algorithm. The proposed method is successfully applied to predict the progression rates in a myopia intervention trial in children.

4.
This article considers Bayesian inference, posterior and predictive, in the context of a start-up demonstration test procedure in which rejection of a unit occurs when a pre-specified number of failures is observed before the number of consecutive successes required for acceptance is obtained. The method developed for implementing Bayesian inference in this article is a Markov chain Monte Carlo (MCMC) method incorporating data augmentation. This method permits the analysis to proceed even when the results of the start-up test procedure are not completely recorded or reported. An illustrative example is included.

5.
Using simultaneous Bayesian modeling, an attempt is made to analyze data on the size of lymphedema occurring in the arms of breast cancer patients after breast cancer surgery (as the longitudinal data) and the time interval for disease progression (as the time-to-event outcome). A model based on a multivariate skew-t distribution is shown to provide the best fit. This finding was also confirmed by simulation studies.

6.
In this article we consider the problem of detecting changes in level and trend in a time series model in which the number of change-points is unknown. The approach of Bayesian stochastic search model selection is introduced to detect the configuration of changes in a time series. The number and positions of change-points are determined by a sequence of change-dependent parameters. The sequence is estimated from its posterior distribution via maximum a posteriori (MAP) estimation. A Markov chain Monte Carlo (MCMC) method is used to estimate the posterior distributions of the parameters. Several real data examples, including a time series of traffic accidents and two hydrological time series, are analyzed.

7.
GARCH models capture most of the stylized facts of financial time series and have been widely used to analyse discrete-time financial series. In recent years, continuous-time models based on discrete GARCH models have also been proposed to deal with non-equally spaced observations, such as the COGARCH model based on Lévy processes. In this paper, we propose to use the data cloning methodology to obtain estimators of GARCH and COGARCH model parameters. Data cloning uses a Bayesian approach to obtain approximate maximum likelihood estimators, avoiding numerical maximization of the pseudo-likelihood function. After a simulation study for both GARCH and COGARCH models using data cloning, we apply this technique to model the behaviour of some NASDAQ time series.

8.
We present a new class of models to fit longitudinal data, obtained with a suitable modification of the classical linear mixed-effects model. For each sample unit, the joint distribution of the random effect and the random error is a finite mixture of scale mixtures of multivariate skew-normal distributions. This extension allows us to model the data in a more flexible way, taking into account skewness, multimodality and discrepant observations at the same time. The scale mixtures of skew-normal distributions form an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, skew-Student-t, skew-slash and skew-contaminated normal distributions as special cases, and provide a flexible alternative to the corresponding symmetric distributions in this type of model. A simple and efficient Gibbs-type MCMC algorithm is employed for posterior Bayesian inference. To illustrate the usefulness of the proposed methodology, two artificial and two real data sets are analyzed.

9.
The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
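A minimal sketch of how the N-mixture likelihood can be marginalized over the latent abundances and explored with a random-walk Metropolis step is given below; the counts, truncation point and (implicitly flat) priors on the transformed parameters are illustrative assumptions, not the settings of the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical replicated counts: rows = sites, columns = repeated visits.
y = np.array([[3, 2, 4], [1, 0, 2], [5, 4, 6], [2, 3, 1]])
N_MAX = 50  # truncation point for the latent abundances

def log_lik(lam, p, y, n_max=N_MAX):
    """Marginal log-likelihood of an N-mixture model:
    N_i ~ Poisson(lam), y_it | N_i ~ Binomial(N_i, p)."""
    if lam <= 0 or not (0 < p < 1):
        return -np.inf
    total = 0.0
    for counts in y:
        n_vals = np.arange(counts.max(), n_max + 1)
        log_terms = stats.poisson.logpmf(n_vals, lam)
        for c in counts:
            log_terms = log_terms + stats.binom.logpmf(c, n_vals, p)
        total += np.logaddexp.reduce(log_terms)   # sum over the latent N_i
    return total

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random-walk Metropolis on (log lam, logit p); flat priors on this scale (an assumption).
theta = np.array([np.log(5.0), 0.0])
cur = log_lik(np.exp(theta[0]), expit(theta[1]), y)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.2, size=2)
    new = log_lik(np.exp(prop[0]), expit(prop[1]), y)
    if np.log(rng.uniform()) < new - cur:
        theta, cur = prop, new
    samples.append((np.exp(theta[0]), expit(theta[1])))

lam_draws, p_draws = np.array(samples[1000:]).T
print("posterior means:", lam_draws.mean(), p_draws.mean())
```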

10.
Interval-censored data arise when a failure time, say T, cannot be observed directly but can only be determined to lie in an interval obtained from a series of inspection times. The frequentist approach for analysing interval-censored data has been developed for some time now. Due to the unavailability of software, it is very common in biological, medical and reliability studies to simplify the interval-censoring structure of the data to the more standard right-censoring situation by imputing the midpoints of the censoring intervals. In this paper, we apply the Bayesian approach by employing Lindley's (1980) and Tierney and Kadane's (1986) numerical approximation procedures when the survival data under consideration are interval-censored. The Bayesian approach to interval-censored data has barely been discussed in the literature. The essence of this study is to explore and promote Bayesian methods when the survival data being analysed are interval-censored. We consider only a parametric approach, assuming that the survival data follow a log-logistic distribution. We illustrate the proposed methods with two real data sets. A simulation study is also carried out to compare the performance of the methods.
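The Tierney and Kadane (1986) approximation mentioned above can be sketched as follows for a log-logistic model with interval-censored data; the intervals, the flat priors on the log scale, and the use of the BFGS inverse Hessian as a rough curvature estimate are all illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical interval-censored data: each failure time T lies in (L, R].
intervals = np.array([[2.0, 5.0], [1.0, 3.0], [4.0, 8.0], [0.5, 2.5], [3.0, 6.0]])

def log_logistic_surv(t, alpha, beta):
    # Log-logistic survival function S(t) = 1 / (1 + (t / alpha)**beta)
    return 1.0 / (1.0 + (t / alpha) ** beta)

def log_post(theta):
    # theta = (log alpha, log beta); flat priors on the log scale (an assumption)
    alpha, beta = np.exp(theta)
    s_left = log_logistic_surv(intervals[:, 0], alpha, beta)
    s_right = log_logistic_surv(intervals[:, 1], alpha, beta)
    return np.sum(np.log(s_left - s_right))   # interval-censored contribution S(L) - S(R)

def neg(f):
    # Safe negation for the optimizer: penalize non-finite values.
    return lambda th: -f(th) if np.isfinite(f(th)) else 1e10

def tk_posterior_mean(g):
    """Tierney-Kadane approximation to E[g(theta) | data]:
    sqrt(|Sigma*| / |Sigma|) * exp(h*(m*) - h(m)), where h is the log posterior,
    h* = h + log g, and Sigma, Sigma* are inverse Hessians at the respective modes."""
    fit = minimize(neg(log_post), x0=np.log([3.0, 1.5]))
    fit_star = minimize(neg(lambda th: log_post(th) + np.log(g(th))), x0=fit.x)
    ratio = np.linalg.det(fit_star.hess_inv) / np.linalg.det(fit.hess_inv)
    return np.sqrt(ratio) * np.exp(-fit_star.fun + fit.fun)

print("Approximate posterior mean of alpha:", tk_posterior_mean(lambda th: np.exp(th[0])))
```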

11.
In this paper, we discuss the inference problem for the Box-Cox transformation model when one faces left-truncated and right-censored data, which often occur in studies involving, for example, a cross-sectional sampling scheme. It is well known that the Box-Cox transformation model includes many commonly used models as special cases, such as the proportional hazards model and the additive hazards model. For inference, a Bayesian estimation approach is proposed in which a piecewise function is used to approximate the baseline hazard function. The conditional marginal prior, whose marginal part is free of any constraints, is employed to deal with the many computational challenges caused by the constraints on the parameters, and an MCMC sampling procedure is developed. A simulation study is conducted to assess the finite-sample performance of the proposed method and indicates that it works well in practical situations. We apply the approach to a set of data arising from a retirement center.

12.
In this paper, we introduce a Bayesian analysis for the Block and Basu bivariate exponential distribution using Markov chain Monte Carlo (MCMC) methods, considering lifetimes in the presence of covariates and censored data. Posterior summaries of interest are obtained using the popular WinBUGS software. Numerical illustrations are provided using a medical data set on the recurrence times of infection for kidney patients and a medical data set on bone marrow transplantation for leukemia.

13.
In this paper, we propose a Bayesian partition model for lifetime data in the presence of a cure fraction, by considering a local structure generated by a tessellation which depends on covariates. In this model we can include information from nominal qualitative variables with more than two categories or from ordinal qualitative variables. The proposed model is based on a promotion time cure model structure, but assumes that the number of competing causes follows a geometric distribution. It is an alternative modeling strategy to the conventional survival regression modeling generally used for lifetime data in the presence of a cure fraction, which models the cure fraction through a (generalized) linear model of the covariates. An advantage of our approach is its ability to capture the effects of covariates in a local structure. The flexibility of having a local structure is crucial for capturing local effects and features of the data. The model is illustrated on two real melanoma data sets.
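For concreteness, one common way to write a promotion-time structure with a geometric number of competing causes M (with P(M = 0) = theta) gives the population survival S_pop(t) = E[S(t)^M] = theta / (1 - (1 - theta) S(t)), whose limit theta is the cure fraction; the short sketch below uses this parameterization and a Weibull baseline purely as assumptions, not necessarily the paper's formulation.

```python
import numpy as np

def pop_survival(t, theta, base_surv):
    """Improper population survival for a promotion-time structure in which the
    number of competing causes M is geometric with P(M = 0) = theta:
    S_pop(t) = E[S(t)^M] = theta / (1 - (1 - theta) * S(t)); the cure fraction is theta."""
    s = base_surv(t)
    return theta / (1.0 - (1.0 - theta) * s)

# Example with a Weibull baseline for the latent promotion times (an assumption).
weibull_surv = lambda t, shape=1.5, scale=2.0: np.exp(-(t / scale) ** shape)
t = np.linspace(0.0, 20.0, 5)
print(pop_survival(t, theta=0.3, base_surv=weibull_surv))  # tends to 0.3, the cure fraction
```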

14.
In this article we propose mixtures of distributions belonging to the biparametric exponential family, considering joint modeling of the mean and variance (or dispersion) parameters. As special cases we consider mixtures of normal and gamma distributions. A novel Bayesian methodology, using Markov chain Monte Carlo (MCMC) methods, is proposed to obtain the posterior summaries of interest. We include simulations and real data examples to illustrate the performance of the proposal.

15.
The concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that the data are normally distributed. This assumption, however, does not hold for skewed data sets. While methods for estimating the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods have been lacking. In this study, we propose a Bayesian method for estimating the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages: it tends to outperform the best previous method when the variation in the data comes mainly from the random subject effect rather than from error, and it allows greater flexibility in application by enabling the incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real-life biomarker data sets from an electroencephalography clinical study. The implementation of the Bayesian method is available through the Comprehensive R Archive Network.
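For reference, Lin's sample CCC has a simple closed form, sketched below; the paired readings are hypothetical.

```python
import numpy as np

def ccc(x, y):
    """Sample concordance correlation coefficient (Lin, 1989):
    CCC = 2 * cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical paired biomarker readings from two measurement methods
x = np.array([10.1, 12.3, 9.8, 14.0, 11.5])
y = np.array([10.4, 12.0, 10.1, 13.5, 11.9])
print(ccc(x, y))
```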

16.
Based on an ordered ranked set sample, Bayesian estimation of the model parameter as well as prediction of unobserved data from the Rayleigh distribution are studied. The Bayes estimates of the parameter involved are obtained under both squared error and asymmetric loss functions. The Bayesian prediction approach is considered for predicting the unobserved lifetimes in a two-sample prediction problem. A real-life data set and a simulation study are used to illustrate our procedures.
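A minimal sketch of conjugate Bayes estimation for the Rayleigh rate parameter under squared error and LINEX (asymmetric) loss is shown below; it uses an ordinary random sample and assumed Gamma prior hyperparameters rather than the ordered ranked set sample of the paper.

```python
import numpy as np

# Hypothetical lifetimes; the paper uses an ordered ranked set sample,
# while this sketch uses a plain random sample for illustration.
x = np.array([1.2, 0.8, 2.1, 1.5, 0.9, 1.7])

# Rayleigh written as f(x; lam) = 2 * lam * x * exp(-lam * x^2), with lam = 1 / (2 sigma^2).
# With a Gamma(a, b) prior (rate b) on lam, the posterior is Gamma(a + n, b + sum(x^2)).
a, b = 1.0, 1.0                      # assumed prior hyperparameters
a_post = a + x.size
b_post = b + np.sum(x ** 2)

# Bayes estimate under squared error loss: the posterior mean.
lam_sel = a_post / b_post

# Bayes estimate under LINEX loss with shape c: -(1/c) * log E[exp(-c * lam)],
# which for a Gamma posterior equals (a_post / c) * log((b_post + c) / b_post).
c = 1.0
lam_linex = (a_post / c) * np.log((b_post + c) / b_post)

print("squared error:", lam_sel, "LINEX:", lam_linex)
```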

17.
When statisticians are uncertain as to which parametric statistical model to use to analyse experimental data, they will often resort to a non-parametric approach. The purpose of this paper is to provide insight into a simple approach to take when the appropriate parametric model is unclear but a Bayesian analysis is planned. I introduce an approximate, or substitution, likelihood, first proposed by Harold Jeffreys in 1939, and show how to implement the approach combined with both a non-informative and an informative prior to provide a random sample from the posterior distribution of the median of the unknown distribution. The first example used to demonstrate the approach is a within-patient bioequivalence design; I then show how to extend the approach to a parallel-group design.
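A hedged sketch of the substitution-likelihood idea for the median is given below; the simulated data, the vague normal prior, and the exact form used for the likelihood (a binomial coefficient times 2 to the power -n, based on the count of observations below the candidate median) are assumptions made for illustration, not necessarily the paper's implementation.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(2)
x = rng.normal(loc=0.4, scale=1.0, size=20)   # hypothetical within-patient log-ratios

def log_substitution_lik(m, x):
    """Jeffreys-style substitution log-likelihood for the median:
    log C(n, r) - n * log(2), where r = #{x_i <= m}."""
    n, r = x.size, int(np.sum(x <= m))
    return gammaln(n + 1) - gammaln(r + 1) - gammaln(n - r + 1) - n * np.log(2.0)

def log_post(m):
    # Vague N(0, 10^2) prior on the median (an assumption, not the paper's prior)
    return log_substitution_lik(m, x) - 0.5 * (m / 10.0) ** 2

# Random-walk Metropolis to draw from the posterior of the median.
m = float(np.median(x))
cur, draws = log_post(m), []
for _ in range(10_000):
    prop = m + rng.normal(scale=0.5)
    new = log_post(prop)
    if np.log(rng.uniform()) < new - cur:
        m, cur = prop, new
    draws.append(m)

print("Posterior mean of the median:", np.mean(draws[2000:]))
```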

18.
Parametric incomplete data models defined by ordinary differential equations (ODEs) are widely used in biostatistics to describe biological processes accurately. Their parameters are estimated on approximate models, whose regression functions are evaluated by a numerical integration method. Accurate and efficient estimation of these parameters is a critical issue. This paper proposes parameter estimation methods involving either a stochastic approximation EM algorithm (SAEM) for maximum likelihood estimation, or a Gibbs sampler for the Bayesian approach. Both algorithms involve the simulation of non-observed data from conditional distributions using Hastings–Metropolis (H–M) algorithms. A modified H–M algorithm, including an original local linearization scheme to solve the ODEs, is proposed to reduce the computational time significantly. The convergence of all these algorithms on the approximate model is proved. The errors induced by the numerical solving method on the conditional distribution, the likelihood and the posterior distribution are bounded. The Bayesian and maximum likelihood estimation methods are illustrated on a simulated pharmacokinetic nonlinear mixed-effects model defined by an ODE. Simulation results illustrate the ability of these algorithms to provide accurate estimates.

19.
Pharmacokinetic (PK) data often contain concentration measurements below the quantification limit (BQL). While specific values cannot be assigned to these observations, the observed BQL data are nevertheless informative and are known to lie below the lower limit of quantification (LLQ). Treating BQL values as missing data violates the usual missing at random (MAR) assumption applied to statistical methods, and therefore leads to biased or less precise parameter estimates. By definition, these data lie within the interval [0, LLQ] and can be considered censored observations. Statistical methods that handle censored data, such as maximum likelihood and Bayesian methods, are thus useful in the modelling of such data sets. The main aim of this work was to investigate the impact of the amount of BQL observations on the bias and precision of parameter estimates in population PK models (non-linear mixed-effects models in general) under the maximum likelihood method as implemented in SAS and NONMEM, and under a Bayesian approach using Markov chain Monte Carlo (MCMC) as implemented in WinBUGS. A second aim was to compare these different methods for dealing with BQL or censored data in a practical situation. The evaluation was illustrated by simulation based on a simple PK model, where a number of data sets were simulated from a one-compartment first-order elimination PK model. Several quantification limits were applied to each simulated data set to generate data sets with given amounts of BQL data, with the average percentage of BQL ranging from 25% to 75%. Their influence on the bias and precision of all population PK model parameters, such as clearance and volume of distribution, under each estimation approach was explored and compared.
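A minimal sketch of the censored-likelihood treatment of BQL values (often referred to as the M3 method in the PK literature) under a simple log-normal model is shown below; the concentrations, LLQ and number of BQL records are hypothetical, and the model is a deliberately stripped-down stand-in for a full population PK model.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Hypothetical concentration data: values below the LLQ are only known to be censored.
LLQ = 1.0
obs = np.array([3.2, 2.5, 1.8, 4.1, 2.2])   # quantified concentrations
n_bql = 3                                    # number of BQL observations

def neg_log_lik(theta):
    """Log-normal model: quantified values contribute the density,
    BQL values contribute P(C < LLQ), i.e. a censored (CDF) term."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll_obs = stats.norm.logpdf(np.log(obs), mu, sigma) - np.log(obs)
    ll_bql = n_bql * stats.norm.logcdf(np.log(LLQ), mu, sigma)
    return -(ll_obs.sum() + ll_bql)

fit = minimize(neg_log_lik, x0=[1.0, 0.0])
print("mu, sigma:", fit.x[0], np.exp(fit.x[1]))
```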

20.
Efficient estimation of the regression coefficients in longitudinal data analysis requires a correct specification of the covariance structure. If misspecification occurs, it may lead to inefficient or biased estimators of the mean parameters. One of the most commonly used methods for handling the covariance matrix is based on simultaneous modeling of its Cholesky decomposition. In this paper, we therefore reparameterize covariance structures in longitudinal data analysis through a modified Cholesky decomposition. Based on this decomposition, the within-subject covariance matrix is decomposed into a unit lower triangular matrix involving moving average coefficients and a diagonal matrix involving innovation variances, which are modeled as linear functions of covariates. We then propose a fully Bayesian inference for joint mean and covariance models based on this decomposition. A computationally efficient Markov chain Monte Carlo method which combines the Gibbs sampler and the Metropolis–Hastings algorithm is implemented to simultaneously obtain the Bayesian estimates of the unknown parameters, as well as their standard deviation estimates. Finally, several simulation studies and a real example are presented to illustrate the proposed methodology.
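The decomposition described here, a unit lower triangular matrix of moving-average coefficients and a diagonal matrix of innovation variances, can be sketched in a few lines; the AR(1)-style covariance matrix used for the check is just an assumed example.

```python
import numpy as np

def modified_cholesky(sigma):
    """Decompose a covariance matrix as sigma = L @ D @ L.T, with L unit lower
    triangular (moving-average coefficients) and D diagonal (innovation variances)."""
    c = np.linalg.cholesky(sigma)     # ordinary lower Cholesky factor
    d_sqrt = np.diag(c)
    L = c / d_sqrt                    # scale each column by its diagonal entry -> unit diagonal
    D = np.diag(d_sqrt ** 2)
    return L, D

# Example: an AR(1)-like within-subject covariance matrix (hypothetical numbers).
rho, times = 0.6, np.arange(4)
sigma = rho ** np.abs(np.subtract.outer(times, times))
L, D = modified_cholesky(sigma)
print(np.allclose(L @ D @ L.T, sigma))   # True: the decomposition reproduces sigma
```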
