Similar Articles
20 similar articles found (search time: 15 ms)
1.
In this paper, we consider a model for repeated count data with within-subject correlation and/or overdispersion. It extends both the generalized linear mixed model and the negative-binomial model. This model, proposed in a likelihood context by Molenberghs et al. [17,18], is placed in a Bayesian inferential framework. An important contribution takes the form of Bayesian model assessment based on pivotal quantities, rather than the often less adequate DIC. By means of a real biological data set, we also discuss some Bayesian model selection aspects, using a pivotal quantity proposed by Johnson [12].
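The overdispersion mechanism this model builds on can be sketched as a Poisson–gamma mixture (which marginally yields a negative binomial): a gamma-distributed rate inflates the variance of the counts beyond the mean. The hyperparameter values below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Negative binomial as a Poisson-gamma mixture: a gamma-distributed
# rate induces overdispersion (variance > mean) in the counts.
shape, scale = 2.0, 3.0                     # illustrative gamma hyperparameters
rates = rng.gamma(shape, scale, size=100_000)
counts = rng.poisson(rates)

# For this mixture, E[Y] = shape*scale and Var[Y] = E[Y] + E[Y]^2/shape > E[Y].
print(counts.mean(), counts.var())
```

A plain Poisson sample would instead have variance roughly equal to its mean, which is what the extended random-effects approach relaxes.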

2.
Traditional Item Response Theory models assume the distribution of the abilities of the population under study to be Gaussian. However, this may not always be a reasonable assumption, which motivates the development of more general models. This paper presents a generalized approach for the distribution of the abilities in dichotomous 3-parameter Item Response models. A mixture of normal distributions is considered, allowing for features like skewness, multimodality and heavy tails. A solution is proposed to deal with model identifiability issues without compromising the flexibility and practical interpretation of the model. Inference is carried out under the Bayesian paradigm through a novel MCMC algorithm. The algorithm is designed to favour good mixing and convergence properties and is also suitable for inference in traditional IRT models. The efficiency and applicability of our methodology are illustrated in simulated and real examples.
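The dichotomous 3-parameter (3PL) item response function underlying this class of models is standard; a minimal sketch follows, with illustrative values for the discrimination a, difficulty b, and guessing parameter c.

```python
import math

def p_correct(theta, a, b, c):
    """3-parameter logistic IRT item response function:
    P(correct | theta) = c + (1 - c) * logistic(a * (theta - b))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Probability rises from the guessing floor c toward 1 as ability theta grows.
print(p_correct(-3.0, a=1.2, b=0.0, c=0.2))  # near the floor c
print(p_correct(0.0, a=1.2, b=0.0, c=0.2))   # at theta = b: c + (1 - c)/2 = 0.6
print(p_correct(3.0, a=1.2, b=0.0, c=0.2))   # approaching 1
```

The paper's generalization concerns the distribution placed on theta (a normal mixture rather than a single Gaussian); the item response function itself is unchanged.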

3.
Quantitative model validation is playing an increasingly important role in performance and reliability assessment of complex systems whenever computer modelling and simulation are involved. This paper pursues a Bayesian probabilistic approach to quantitative model validation with non-normal data, accounting for data uncertainty, and investigates the impact of the normality assumption on validation accuracy. The Box–Cox transformation method is employed to convert the non-normal data, with the purpose of facilitating the overall validation assessment of computational models with higher accuracy. Explicit expressions for the interval hypothesis testing-based Bayes factor are derived for the transformed data in the univariate and multivariate cases. A Bayesian confidence measure is presented based on the Bayes factor metric. A generalized procedure is proposed to implement the proposed probabilistic methodology for model validation of complicated systems. The classical hypothesis testing method is employed to conduct a comparison study. The impact of the data normality assumption and decision threshold variation on model assessment accuracy is investigated using both classical and Bayesian approaches. The proposed methodology and procedure are demonstrated with a univariate stochastic damage accumulation model, a multivariate heat conduction problem and a multivariate dynamic system.
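The Box–Cox transform used here has the familiar closed form (x^λ − 1)/λ, with the log as its λ → 0 limit; a minimal sketch, independent of the paper's validation pipeline:

```python
import math

def box_cox(x, lam):
    """Box-Cox transform: (x**lam - 1)/lam for lam != 0, log(x) for lam == 0.
    Defined for positive x; lam is typically chosen to make data more normal."""
    if lam == 0.0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

print(box_cox(10.0, 1.0))   # lam = 1 is a pure shift: 9.0
print(box_cox(10.0, 0.0))   # the log limit: log(10)
print(box_cox(10.0, 1e-8))  # small lam approaches the log limit
```

In practice λ is estimated from the data (e.g. by maximum likelihood, as `scipy.stats.boxcox` does) before the Bayes factor is computed on the transformed values.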

4.
Ordinary differential equations (ODEs) are widely used to model dynamic processes in applied sciences such as biology, engineering, physics, and many other areas. In these models, the parameters are usually unknown, and thus they are often specified artificially or empirically. Alternatively, a feasible method is to estimate the parameters based on observed data. In this study, we propose a Bayesian penalized B-spline approach to estimate the parameters and initial values for ODEs used in epidemiology. We evaluated the efficiency of the proposed method based on simulations using the Markov chain Monte Carlo algorithm for the Kermack–McKendrick model. The proposed approach is also illustrated based on a real application to the transmission dynamics of hepatitis C virus in mainland China.
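The Kermack–McKendrick (SIR) model referenced above is a small ODE system; a forward-Euler sketch with illustrative parameter values (not estimates from the paper) shows the dynamics being fit:

```python
# Forward-Euler integration of the Kermack-McKendrick SIR model:
#   dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
def sir(beta, gamma, s0, i0, r0, dt=0.1, steps=1000):
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r
    for _ in range(steps):
        new_inf = beta * s * i / n * dt   # new infections this step
        new_rec = gamma * i * dt          # new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

s, i, r = sir(beta=0.3, gamma=0.1, s0=990, i0=10, r0=0)
print(s, i, r)  # total population s + i + r stays at 1000
```

In the paper's setting, beta, gamma, and the initial values are the unknowns; the B-spline approximates the solution curves so the MCMC sampler never has to solve the ODE exactly.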

5.
We propose a new iterative algorithm, called the model walking algorithm, for Bayesian model averaging on longitudinal regression models with AR(1) random errors within subjects. The Markov chain Monte Carlo method is employed together with the model walking algorithm. The proposed method is successfully applied to predict progression rates in a myopia intervention trial in children.

6.
Abrupt changes often occur in environmental and financial time series, most often due to human intervention. Change point analysis is a statistical tool used to analyze sudden changes in observations along a time series. In this paper, we propose a Bayesian model for extreme values for environmental and economic datasets that present typical change point behavior. The model addresses the situation in which more than one change point can occur in a time series. Because maxima are analyzed, the distribution of each regime is a generalized extreme value distribution. The change points are unknown and treated as parameters to be estimated. Simulations of extremes with two change points showed that the proposed algorithm can recover the true values of the parameters and detect the true change points in different configurations. The number of change points is itself unknown, and the Bayesian estimation correctly identifies it in each application. Environmental and financial data were analyzed; the results showed the importance of accounting for the change points and revealed that the change of regime brought about an increase in the return levels, increasing the number of floods in cities along the rivers. The stock market series required a model with three different regimes.
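The generalized extreme value (GEV) distribution assigned to each regime has a standard CDF; a minimal sketch (independent of the paper's change point machinery) with the Gumbel case as the ξ = 0 limit:

```python
import math

def gev_cdf(x, mu, sigma, xi):
    """Generalized extreme value CDF with location mu, scale sigma,
    shape xi; xi == 0 is the Gumbel limit exp(-exp(-z))."""
    z = (x - mu) / sigma
    if xi == 0.0:
        return math.exp(-math.exp(-z))
    t = 1.0 + xi * z
    if t <= 0.0:                       # x is outside the support
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

# At x = mu the Gumbel CDF equals exp(-1).
print(gev_cdf(0.0, mu=0.0, sigma=1.0, xi=0.0))
```

In the multiple change point model, each regime gets its own (mu, sigma, xi), and the change point locations are sampled jointly with these parameters.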

7.
A novel framework is proposed for the estimation of multiple sinusoids from irregularly sampled time series. This spectral analysis problem is addressed as an under-determined inverse problem, where the spectrum is discretized on an arbitrarily thin frequency grid. As we focus on line spectra estimation, the solution must be sparse, i.e. the amplitude of the spectrum must be zero almost everywhere. Such prior information is taken into account within the Bayesian framework. Two models are used to account for the prior sparseness of the solution, namely a Laplace prior and a Bernoulli–Gaussian prior, associated with optimization and stochastic sampling algorithms, respectively. Such approaches are efficient alternatives to the usual sequential prewhitening methods, especially when strong sampling aliases perturb the Fourier spectrum. Both methods should be intensively tested on real data sets by physicists.
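For the Laplace-prior variant, the MAP estimate is an L1-penalized (LASSO-type) problem whose scalar building block is the soft-thresholding operator, which zeroes out small amplitudes and thus produces the sparse line spectrum; a minimal sketch:

```python
def soft_threshold(x, lam):
    """Scalar proximal operator of the L1 penalty (MAP under a Laplace
    prior for a Gaussian observation): shrink toward zero by lam,
    clipping small values to exactly zero."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Amplitudes below the threshold become exactly zero (sparsity).
print(soft_threshold(3.0, 1.0))   # 2.0
print(soft_threshold(0.5, 1.0))   # 0.0
print(soft_threshold(-2.0, 1.0))  # -1.0
```

Coordinate-descent optimizers for the full problem apply this operator to one grid frequency at a time; the Bernoulli–Gaussian variant instead samples the on/off indicators stochastically.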

8.
9.
In this paper, we study a new Bayesian approach for the analysis of linearly mixed structures. In particular, we consider the case of hyperspectral images, which have to be decomposed into a collection of distinct spectra, called endmembers, and a set of associated proportions for every pixel in the scene. This problem, often referred to as spectral unmixing, is usually considered on the basis of the linear mixing model (LMM). In unsupervised approaches, the endmember signatures have to be calculated by an endmember extraction algorithm, which generally relies on the supposition that there are pure (unmixed) pixels contained in the image. In practice, this assumption may not hold for highly mixed data, and consequently the extracted endmember spectra differ from the true ones. A way out of this dilemma is to consider the problem under the normal compositional model (NCM). Contrary to the LMM, the NCM treats the endmembers as random Gaussian vectors rather than deterministic quantities. Existing Bayesian approaches for estimating the proportions under the NCM are restricted to the case where the covariance matrix of the Gaussian endmembers is a multiple of the identity matrix. Consequently, that model is not suitable when the variance differs from one spectral channel to another, which is a common phenomenon in practice. In this paper, we first propose a Bayesian strategy for the estimation of the mixing proportions under the assumption of varying variances in the spectral bands. Then we generalize this model to handle the case of a completely unknown covariance structure. For both algorithms, we present Gibbs sampling strategies and compare their performance with other state-of-the-art unmixing routines on synthetic as well as real hyperspectral fluorescence spectroscopy data.

10.
In this article, we develop a Bayesian analysis of the autoregressive model with explanatory variables. When σ² is known, we consider a normal prior and give the Bayesian estimator for the regression coefficients of the model. When σ² is unknown, another Bayesian estimator is given for all unknown parameters under a conjugate prior. The Bayesian model selection problem is also considered under double-exponential priors. Using the convergence of ρ-mixing sequences, the consistency and asymptotic normality of the Bayesian estimators of the regression coefficients are proved. Simulation results indicate that our Bayesian estimators are not strongly dependent on the priors, and are robust.
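For the known-σ² case with a normal prior, the posterior over regression coefficients is available in closed form; a minimal numpy sketch on simulated data (the prior and noise settings are illustrative, not the paper's):

```python
import numpy as np

def normal_posterior(X, y, sigma2, m0, V0):
    """Conjugate posterior for regression coefficients with known noise
    variance sigma2 and prior N(m0, V0); returns posterior mean and cov:
      Vn = (V0^-1 + X'X / sigma2)^-1,  mn = Vn (V0^-1 m0 + X'y / sigma2)."""
    V0_inv = np.linalg.inv(V0)
    Vn = np.linalg.inv(V0_inv + X.T @ X / sigma2)
    mn = Vn @ (V0_inv @ m0 + X.T @ y / sigma2)
    return mn, Vn

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
beta_true = np.array([1.0, -2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=200)

mn, Vn = normal_posterior(X, y, sigma2=0.25, m0=np.zeros(2), V0=np.eye(2))
print(mn)  # close to the true coefficients with this much data
```

With autoregressive errors, X would include lagged responses, which is where the ρ-mixing arguments for consistency come in.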

11.
In this paper, we discuss the inference problem for the Box-Cox transformation model when one faces left-truncated and right-censored data, which often occur, for example, in studies involving cross-sectional sampling schemes. It is well known that the Box-Cox transformation model includes many commonly used models as special cases, such as the proportional hazards model and the additive hazards model. For inference, a Bayesian estimation approach is proposed in which a piecewise function is used to approximate the baseline hazard function. A conditional marginal prior, whose marginal part is free of any constraints, is employed to deal with the many computational challenges caused by the constraints on the parameters, and an MCMC sampling procedure is developed. A simulation study is conducted to assess the finite sample performance of the proposed method and indicates that it works well in practical situations. We apply the approach to a set of data arising from a retirement center.

12.
Data on the hours individuals spend on daily activities are compositional and include many zeros, because individuals do not pursue all activities every day; such data must therefore be handled with caution in empirical analyses. The Bayesian method offers several advantages for analyzing compositional data. In this study, we analyze the time allocation of Japanese married couples using a Bayesian model. Based on Bayes factors, we compare models that do and do not account for the correlations between married couples' time use. The model that accounts for the correlation shows superior performance. We show that the Bayesian method can adequately take into account the correlations between wives' and husbands' living hours, facilitating the calculation of the partial effects that activity variables have on living hours. These partial effects are easily calculated from the posterior results of the model that accounts for the couples' correlated time use.

13.
This paper presents a comprehensive review and comparison of five computational methods for Bayesian model selection, based on MCMC simulations from posterior model parameter distributions. We apply these methods to a well-known and important class of models in financial time series analysis, namely GARCH and GARCH-t models for conditional return distributions (assuming normal and t-distributions, respectively). We compare their performance with the more common maximum likelihood-based model selection on simulated and real market data. All five MCMC methods proved reliable in the simulation study, although they differed in their computational demands. Results on simulated data also show that for large degrees of freedom (where the t-distribution becomes more similar to a normal one), Bayesian model selection decides in favor of the true model more often than maximum likelihood. Results on market data show the instability of the harmonic mean estimator and the reliability of the advanced model selection methods.
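The GARCH(1,1) recursion under comparison is simple to simulate; a minimal numpy sketch with normal innovations and illustrative parameter values:

```python
import numpy as np

def simulate_garch11(omega, alpha, beta, n, seed=0):
    """Simulate GARCH(1,1) returns with normal innovations:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    rng = np.random.default_rng(seed)
    sigma2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    r = np.empty(n)
    for t in range(n):
        r[t] = rng.normal() * np.sqrt(sigma2)
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
    return r

r = simulate_garch11(omega=0.1, alpha=0.1, beta=0.8, n=50_000)
print(r.var())  # near the unconditional variance omega/(1 - alpha - beta) = 1.0
```

A GARCH-t variant would replace the normal innovation with a scaled Student-t draw; the model selection methods reviewed in the paper then compare such candidate likelihoods via their MCMC output.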

14.
In most practical applications, the quality of count data is often compromised by errors-in-variables (EIVs). In this paper, we apply a Bayesian approach to reduce bias in estimating the parameters of count data regression models that have mismeasured independent variables. Furthermore, the exposure model is specified with a flexible distribution, so our approach remains robust against departures from normality in the true underlying exposure distribution. The proposed method is also useful in realistic situations, as the variance of the EIVs is estimated rather than assumed known, in contrast with other bias-correction methods for count data EIV regression models. We conduct simulation studies on synthetic data sets using Markov chain Monte Carlo techniques to investigate the performance of our approach. Our findings show that the flexible Bayesian approach estimates the true regression parameters consistently and accurately.

15.
Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times (e.g., matched pairs experiments, twin or family data), shared frailty models were suggested. In this article, we introduce shared gamma frailty models with the reversed hazard rate. We develop a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters of the model. We present a simulation study comparing the true parameter values with the estimates, and we apply the model to a real-life bivariate survival dataset.
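To illustrate how a gamma frailty changes population-level survival, here is a minimal sketch for the standard (non-reversed) hazard formulation, where integrating a gamma frailty Z with variance theta out of the conditional survival exp(-Z·H(t)) gives a closed form; the reversed-hazard construction in the paper is analogous but distinct.

```python
import math

def marginal_survival(H, theta):
    """Population survival after integrating out a mean-1 gamma frailty
    with variance theta: S(t) = (1 + theta * H(t))**(-1/theta),
    where H(t) is the baseline cumulative hazard."""
    if theta == 0.0:
        return math.exp(-H)        # no heterogeneity: plain exponential form
    return (1.0 + theta * H) ** (-1.0 / theta)

print(marginal_survival(1.0, 0.0))  # exp(-1), the no-frailty baseline
print(marginal_survival(1.0, 0.5))  # heavier tail: frail subjects die early,
                                    # survivors are increasingly robust
```

In the shared model, the two members of a pair share the same Z, which induces the dependence between their survival times.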

16.
Motivated by the Singapore Longitudinal Aging Study (SLAS), we propose a Bayesian approach for the estimation of semiparametric varying-coefficient models for longitudinal continuous and cross-sectional binary responses. These models have proved to be more flexible than simple parametric regression models. Our development is a new contribution towards their Bayesian solution, one that eases the computational burden. We also consider adapting several familiar statistical strategies to address the missing-data issue in the SLAS. Our simulation results indicate that a Bayesian imputation (BI) approach performs better than complete-case (CC) and available-case (AC) approaches, especially under small sample designs, and may provide more useful results in practice. In the real data analysis of the SLAS, the results for longitudinal outcomes from BI are similar to those from the AC analysis, but differ from those of the CC analysis.

17.
Very often in psychometric research, as in educational assessment, it is necessary to analyze item responses from clustered respondents. The multiple group item response theory (IRT) model proposed by Bock and Zimowski [12] provides a useful framework for analyzing this type of data. In this model, the selected groups of respondents are of specific interest, so group-specific population distributions need to be defined. The usual assumption for parameter estimation in this model, that the latent traits are random variables following different symmetric normal distributions, has been questioned in many works in the IRT literature; when it does not hold, misleading inference can result. In this paper, we let the latent traits of each group follow different skew-normal distributions under the centered parameterization. We call it the skew multiple group IRT model. This modeling extends the works of Azevedo et al. [4], Bazán et al. [11] and Bock and Zimowski [12] with respect to the latent trait distribution. Our approach ensures that the model is identifiable. We propose and compare, with regard to convergence, two Markov chain Monte Carlo (MCMC) algorithms for parameter estimation. A simulation study was performed to evaluate parameter recovery for the proposed model and the convergence of the selected algorithm. Results reveal that the proposed algorithm properly recovers all model parameters. Furthermore, we analyzed a real data set that presents asymmetry in the latent trait distribution, and the results obtained with our approach confirmed the presence of negative asymmetry for some latent trait distributions.
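The skew-normal family used for the latent traits has the standard density 2·φ(x)·Φ(αx); a minimal sketch of the standardized form, where α = 0 recovers the symmetric normal:

```python
import math

def skew_normal_pdf(x, alpha):
    """Standard skew-normal density: f(x) = 2 * phi(x) * Phi(alpha * x),
    where phi/Phi are the standard normal pdf/cdf and alpha controls skew."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * x / math.sqrt(2.0)))
    return 2.0 * phi * Phi

# Crude Riemann sum over [-8, 8]: the density integrates to 1 for any alpha.
total = sum(skew_normal_pdf(-8.0 + 0.01 * k, alpha=4.0) * 0.01
            for k in range(1601))
print(total)
```

The paper works with the centered parameterization of this family (mean, variance, and skewness as parameters), which is what makes the model identifiable.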

18.
This article considers Bayesian inference, posterior and predictive, in the context of a start-up demonstration test procedure in which rejection of a unit occurs when a pre-specified number of failures is observed prior to obtaining the number of consecutive successes required for acceptance. The method developed for implementing Bayesian inference in this article is a Markov chain Monte Carlo (MCMC) method incorporating data augmentation. This method permits the analysis to go forth, even when the results of the start-up test procedure are not completely recorded or reported. An illustrative example is included.
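The acceptance/rejection rule described above is easy to state in code; a minimal sketch of the test procedure itself (the Bayesian data-augmentation machinery of the paper sits on top of sequences like these):

```python
def startup_test(outcomes, k, f):
    """Run a start-up demonstration test on a sequence of 0/1 outcomes:
    accept on k consecutive successes, reject once f total failures
    have been observed, whichever happens first."""
    run, failures = 0, 0
    for success in outcomes:
        if success:
            run += 1
            if run == k:
                return "accept"
        else:
            run = 0                 # a failure breaks the success run
            failures += 1
            if failures == f:
                return "reject"
    return "inconclusive"           # sequence ended before either criterion

print(startup_test([1, 1, 0, 1, 1, 1], k=3, f=2))  # accept
print(startup_test([0, 1, 0], k=3, f=2))           # reject
```

The "inconclusive" branch corresponds to the incompletely recorded test results that the paper's data augmentation is designed to handle.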

19.
In a 2 × 2 contingency table, when the sample size is small, a number of cells may contain few or no observations, a situation usually referred to as sparse data. In such cases, a common recommendation in conventional frequentist methods is to add a small constant to every cell of the observed table when estimating the unknown parameters. However, this approach is based on asymptotic properties of the estimates and may work poorly for small samples. An alternative is to use Bayesian methods, which provide better insight into the problem of sparse data coupled with few centers, for which the analysis would otherwise be difficult to carry out. In this article, an attempt has been made to apply a hierarchical Bayesian model to multicenter data on the effect of a surgical treatment with standard foot care among leprosy patients with posterior tibial nerve damage, summarized as seven 2 × 2 tables. Markov chain Monte Carlo (MCMC) techniques are applied to estimate the parameters of interest in this sparse data setup.
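The add-a-constant recommendation the abstract contrasts against is typically the Haldane–Anscombe +0.5 correction for the odds ratio; a minimal sketch (the hierarchical Bayesian model in the paper replaces exactly this kind of ad hoc fix):

```python
def odds_ratio(a, b, c, d, correction=0.5):
    """Odds ratio (a*d)/(b*c) for a 2x2 table, applying the common
    add-a-constant (Haldane-Anscombe) correction when any cell is zero."""
    if min(a, b, c, d) == 0:
        a, b, c, d = (x + correction for x in (a, b, c, d))
    return (a * d) / (b * c)

print(odds_ratio(10, 5, 4, 8))  # no zero cells: plain cross-product ratio, 4.0
print(odds_ratio(5, 0, 3, 7))   # zero cell: +0.5 added everywhere to avoid
                                # division by zero
```

With several sparse tables from different centers, the hierarchical model instead pools information across the seven tables through shared priors rather than perturbing the counts.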

20.
Bayesian analysis of panel data using an MTAR model
Bayesian analysis of panel data using a class of momentum threshold autoregressive (MTAR) models is considered. Posterior estimation of the parameters of the MTAR models is carried out with a simple Markov chain Monte Carlo (MCMC) algorithm. Selection of appropriate differenced variables and tests for asymmetry and unit roots are recast as model selection problems, and a simple way of computing posterior probabilities of the candidate models is proposed. The proposed method is applied to the yearly unemployment rates of 51 US states, and the results show strong evidence of stationarity and asymmetry.
