Similar Articles
20 similar articles found (search time: 31 ms)
1.
Quantitative Trait Loci (QTL) mapping is a growing field in statistical genetics. However, analyzing this type of data from a statistical perspective is often challenging. In this paper we extend and apply a Markov chain Monte Carlo Model Composition (MC3) technique to a data set from the plant Arabidopsis thaliana in order to locate the QTL associated with cotyledon opening. The posterior model probabilities, as well as the marginal posterior probability of each locus belonging to the model, are presented. Furthermore, we show how the MC3 method can handle the situation where the sample size is smaller than the number of parameters in the model, using a restricted model space approach.

2.
Time-varying parameter models with stochastic volatility are widely used to study macroeconomic and financial data. These models are almost exclusively estimated using Bayesian methods. A common practice is to focus on prior distributions that themselves depend on relatively few hyperparameters, such as the scaling factor for the prior covariance matrix of the residuals governing time variation in the parameters. The choice of these hyperparameters is crucial because their influence is sizeable for standard sample sizes. In this article, we treat the hyperparameters as part of a hierarchical model and propose a fast, tractable, easy-to-implement, and fully Bayesian approach to estimate them jointly with all other parameters in the model. We show via Monte Carlo simulations that, in this class of models, our approach can drastically improve on the fixed hyperparameter values previously proposed in the literature. Supplementary materials for this article are available online.

3.
As is the case in many studies, the data collected are limited and an exact value is recorded only if it falls within an interval range; hence, the responses can be left-, interval-, or right-censored. Linear (and nonlinear) regression models are routinely used to analyze these types of data and are based on normality assumptions for the error terms. However, such analyses might not provide robust inference when the normality assumptions are questionable. In this article, we develop a Bayesian framework for censored linear regression models by replacing the Gaussian assumption for the random errors with scale mixtures of normal (SMN) distributions. The SMN is an attractive class of symmetric heavy-tailed densities that includes the normal, Student-t, Pearson type VII, slash, and contaminated normal distributions as special cases. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo algorithm is introduced to carry out posterior inference. A new hierarchical prior distribution is suggested for the degrees-of-freedom parameter of the Student-t distribution. The likelihood function is used to compute Bayesian model selection measures and to develop Bayesian case-deletion influence diagnostics based on the q-divergence measure. The proposed Bayesian methods are implemented in the R package BayesCR. The newly developed procedures are illustrated with applications using real and simulated data.
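To make the data-augmentation idea concrete, here is a minimal Gibbs sketch for the special case of normal errors and left-censoring at zero; it is an illustration under those assumptions, not the BayesCR implementation, and the simulated data and prior choices are hypothetical.

# Minimal Gibbs sketch for Bayesian censored linear regression with normal errors.
set.seed(1)
n <- 100
X <- cbind(1, rnorm(n))                       # design matrix with intercept
y_star <- X %*% c(1, 2) + rnorm(n)            # latent responses
cens <- y_star < 0                            # left-censoring at 0
y <- ifelse(cens, 0, y_star)                  # observed responses

rtnorm_upper <- function(m, mean, sd, upper) {
  # inverse-CDF draw from N(mean, sd^2) truncated to (-Inf, upper)
  qnorm(runif(m) * pnorm(upper, mean, sd), mean, sd)
}

n_iter <- 2000
beta <- c(0, 0); sigma2 <- 1
draws <- matrix(NA, n_iter, 3)
for (it in 1:n_iter) {
  # 1. Impute the latent responses of censored cases from truncated normals
  mu <- X %*% beta
  y_lat <- y
  y_lat[cens] <- rtnorm_upper(sum(cens), mu[cens], sqrt(sigma2), 0)
  # 2. Update beta from its conjugate normal conditional (flat prior on beta)
  XtX_inv <- solve(crossprod(X))
  beta_hat <- XtX_inv %*% crossprod(X, y_lat)
  beta <- as.vector(beta_hat + t(chol(sigma2 * XtX_inv)) %*% rnorm(2))
  # 3. Update sigma2 from its inverse-gamma conditional (Jeffreys-type prior)
  resid <- y_lat - X %*% beta
  sigma2 <- 1 / rgamma(1, shape = n / 2, rate = sum(resid^2) / 2)
  draws[it, ] <- c(beta, sqrt(sigma2))
}
colMeans(draws[-(1:500), ])                   # posterior means after burn-in

Replacing the normal error with another SMN member amounts to adding one further Gibbs step for the mixing variables.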

4.
The semiparametric reproductive dispersion mixed model (SPRDMM) is a natural extension of the reproductive dispersion model and the semiparametric mixed model. In this paper, we relax the normality assumption on the random effects in the SPRDMM by using a truncated and centred Dirichlet process prior to specify the random effects, and we use a Bayesian P-spline to approximate the unknown smooth function. A hybrid algorithm combining the block Gibbs sampler and the Metropolis–Hastings algorithm is implemented to sample observations from the posterior distribution. We also develop a Bayesian case-deletion influence measure for the SPRDMM based on the φ-divergence and present computationally feasible formulas for it. Several simulation studies and a real example illustrate the proposed methodologies.

5.
In this article, we develop a Bayesian analysis of an autoregressive model with explanatory variables. When σ² is known, we consider a normal prior and give the Bayesian estimator of the regression coefficients of the model. For the case where σ² is unknown, another Bayesian estimator is given for all unknown parameters under a conjugate prior. The Bayesian model selection problem is also considered under double-exponential priors. Using the convergence of ρ-mixing sequences, the consistency and asymptotic normality of the Bayesian estimators of the regression coefficients are proved. Simulation results indicate that our Bayesian estimators are not strongly dependent on the priors and are robust.
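As a worked illustration of the known-variance case (a sketch under simple assumptions, not the article's exact model or priors): with a normal prior beta ~ N(mu0, V0) and known error variance sigma^2, the posterior of the coefficients is normal with covariance (V0^-1 + X'X/sigma^2)^-1 and mean equal to that covariance times (V0^-1 mu0 + X'y/sigma^2). The R sketch below applies this to an AR(1) model with one regressor; the simulated data and prior settings are hypothetical.

# Conjugate Bayesian estimator for an AR(1) model with one regressor, sigma^2 known.
set.seed(2)
n <- 200; sigma2 <- 1
z <- rnorm(n)                               # explanatory variable
y <- numeric(n)
for (t in 2:n) y[t] <- 0.5 * y[t - 1] + 1.5 * z[t] + rnorm(1, 0, sqrt(sigma2))
X  <- cbind(y[1:(n - 1)], z[2:n])           # lagged response and regressor
yy <- y[2:n]

mu0 <- c(0, 0); V0 <- diag(10, 2)           # normal prior beta ~ N(mu0, V0)
V_post  <- solve(solve(V0) + crossprod(X) / sigma2)
mu_post <- V_post %*% (solve(V0) %*% mu0 + crossprod(X, yy) / sigma2)
round(drop(mu_post), 3)                     # Bayesian estimator of (rho, gamma)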

6.
In this paper, a multivariate Bayesian variable sampling interval (VSI) control chart is designed for the economic design and optimization of its statistical parameters. Based on the VSI sampling strategy of a multivariate Bayesian control chart with dual control limits, the optimal expected cost function is constructed. The proposed model allows the determination of the scheme parameters that minimize the expected cost per unit time of the process. The effectiveness of the Bayesian VSI chart is assessed through economic comparisons with the Bayesian fixed sampling interval chart and Hotelling's T2 chart. This is an in-depth study of a Bayesian multivariate control chart with variable parameters, and it is shown that significant cost improvement may be realized through the new model.

7.
In event history analysis, the problem of modeling two interdependent processes is still not completely solved. In a frequentist framework, the two most general approaches are the causal approach and the system approach. The recent growth of interest in Bayesian statistics has produced some interesting work on survival models and event history analysis from a Bayesian perspective. In this work we present a possible solution for the analysis of dynamic interdependence from a Bayesian perspective within a graphical duration model framework, using marked point processes. The main results of the Bayesian approach, and a comparison with the frequentist one, are illustrated with a real example: the analysis of the dynamic relationship between fertility and female employment.

8.
In recent years, there has been considerable interest in regression models based on zero-inflated distributions. These models are commonly encountered in many disciplines, such as medicine, public health, and environmental sciences, among others. The zero-inflated Poisson (ZIP) model has typically been considered for these types of problems. However, the ZIP model can fail if the non-zero counts are overdispersed relative to the Poisson distribution, in which case the zero-inflated negative binomial (ZINB) model may be more appropriate. In this paper, we present a Bayesian approach for fitting the ZINB regression model. This model assumes that an observed zero may come either from a point mass at zero or from the negative binomial component. The likelihood function is used to compute Bayesian model selection measures and to develop Bayesian case-deletion influence diagnostics based on q-divergence measures. The approach can be easily implemented using standard Bayesian software, such as WinBUGS. The performance of the proposed method is evaluated with a simulation study. Further, a real data set is analyzed, where we show that the ZINB regression model seems to fit the data better than its Poisson counterpart.
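The mixture structure of the likelihood is easy to write down; the following R sketch of the ZINB log-likelihood is illustrative only (hypothetical inputs, not the article's WinBUGS code). In a regression setting, mu would typically be exp(X beta) and the zero-inflation probability p a logistic function of covariates.

# ZINB log-likelihood: each zero arises either from the point mass (probability p)
# or from the negative binomial component; positive counts come only from the NB part.
zinb_loglik <- function(y, mu, size, p) {
  nb_dens <- dnbinom(y, mu = mu, size = size)
  ll <- ifelse(y == 0,
               log(p + (1 - p) * nb_dens),
               log(1 - p) + dnbinom(y, mu = mu, size = size, log = TRUE))
  sum(ll)
}
zinb_loglik(c(0, 0, 3, 1, 0, 7), mu = 2, size = 1.5, p = 0.3)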

9.
In the Bayesian analysis of a multiple-recapture census, different diffuse prior distributions can lead to markedly different inferences about the population size N. Through consideration of the Fisher information matrix, it is shown that the number of captures in each sample typically provides little information about N. This suggests that if there is no prior information about the capture probabilities, then knowledge of just the sample sizes, and not the number of recaptures, should leave the distribution of N unchanged. A prior model that has this property is identified and the posterior distribution is examined. In particular, asymptotic estimates of the posterior mean and variance are derived. Differences between Bayesian and classical point and interval estimators are illustrated through examples.

10.
In this article, the problem of parameter estimation and variable selection in the Tobit quantile regression model is considered. A Tobit quantile regression with the elastic net penalty is proposed from a Bayesian perspective. Independent gamma priors are placed on the l1-norm penalty parameters. A novel aspect of the Bayesian elastic net Tobit quantile regression is that the hyperparameters of the gamma priors are treated as unknowns and estimated from the data along with the other parameters. A Bayesian Tobit quantile regression with the adaptive elastic net penalty is also proposed. A Gibbs sampling scheme is adapted to simulate the parameters from their posterior distributions. The proposed methods are demonstrated with both simulated and real data examples.

11.
We present a Bayesian analysis framework for matrix-variate normal data with dependency structures induced by rows and columns. This framework of matrix normal models includes prior specifications, posterior computation using Markov chain Monte Carlo methods, evaluation of prediction uncertainty, model structure search, and extensions to multidimensional arrays. Compared with Bayesian probabilistic matrix factorization, which integrates a Gaussian prior for a single row of the data matrix, our proposed model, namely Bayesian hierarchical kernelized probabilistic matrix factorization, imposes Gaussian process priors over multiple rows of the matrix. Hence, the learned model explicitly captures the underlying correlation among the rows and the columns. In addition, our method requires no specific assumptions such as independence of the latent factors for rows and columns, which provides more flexibility for modeling real data than existing work. Finally, the proposed framework can be adapted to a wide range of applications, including multivariate analysis, time series, and spatial modeling. Experiments highlight the superiority of the proposed model in handling model uncertainty and model optimization.

12.
A Bayesian analysis is provided for the Wilcoxon signed-rank statistic (T+). The Bayesian analysis is based on a sign-bias parameter φ on the (0, 1) interval. For the case of a uniform prior probability distribution for φ and for small sample sizes (i.e., 6 ≤ n ≤ 25), values of the statistic T+ are computed that enable probabilistic statements about φ. For larger sample sizes, approximations are provided for the asymptotic likelihood function P(T+|φ) as well as for the posterior distribution P(φ|T+). Power analyses are examined both for properly specified Gaussian sampling and for misspecified non-Gaussian models. The new Bayesian metric has high power efficiency, in the range of 0.9–1 relative to a standard t test, when there is Gaussian sampling. But if the sampling is from an unknown and misspecified distribution, then the new statistic still has high power; in some cases, the power can be higher than that of the t test (especially for probability mixtures and heavy-tailed distributions). The new Bayesian analysis is thus a useful and robust method for applications where the usual parametric assumptions are questionable. These properties further enable a generic Bayesian analysis for many non-Gaussian distributions that currently lack a formal Bayesian model.
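The exact small-sample computation is straightforward under the sign-bias model: if each rank i = 1, ..., n is positive independently with probability phi, then P(T+ = t | phi) follows from the generating function prod_i (1 - phi + phi z^i), and a grid posterior under a uniform prior is simply the normalized likelihood. The R sketch below illustrates the idea with a hypothetical n and observed T+; it is not the authors' code.

# Exact pmf of T+ given phi via polynomial convolution, then a grid posterior for phi.
tplus_pmf <- function(n, phi) {
  pmf <- c(1, rep(0, n * (n + 1) / 2))        # support t = 0, ..., n(n+1)/2
  for (i in 1:n) {
    shifted <- c(rep(0, i), pmf[1:(length(pmf) - i)])
    pmf <- (1 - phi) * pmf + phi * shifted    # rank i is positive with probability phi
  }
  pmf
}

n <- 10; t_obs <- 42                          # hypothetical observed value of T+
phi_grid <- seq(0.005, 0.995, by = 0.005)
lik  <- sapply(phi_grid, function(p) tplus_pmf(n, p)[t_obs + 1])
post <- lik / sum(lik)                        # uniform prior: posterior proportional to likelihood
c(post_mean = sum(phi_grid * post), prob_phi_gt_half = sum(post[phi_grid > 0.5]))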

13.
The Box–Jenkins methodology for modeling and forecasting from univariate time series models has long been considered a standard to which other forecasting techniques have been compared. To a Bayesian statistician, however, the method lacks an important facet: a provision for modeling uncertainty about parameter estimates. We present a technique called sampling the future for including this feature in both the estimation and forecasting stages. Although it is relatively easy to use Bayesian methods to estimate the parameters in an autoregressive integrated moving average (ARIMA) model, there are severe difficulties in producing forecasts from such a model. The multiperiod predictive density does not have a convenient closed form, so approximations are needed. In this article, exact Bayesian forecasting is approximated by simulating the joint predictive distribution. First, parameter sets are randomly generated from the joint posterior distribution. These are then used to simulate future paths of the time series. This bundle of many possible realizations is used to project the future in several ways. Highest probability forecast regions are formed and portrayed with computer graphics. The predictive density's shape is explored. Finally, we discuss a method that allows the analyst to subjectively modify the posterior distribution on the parameters and produce alternate forecasts.
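A minimal sketch of the sampling-the-future idea for an AR(1) model follows; it is illustrative only, with the asymptotic normal approximation standing in for the joint posterior, the innovation variance held at its estimate, and simulated data.

# "Sampling the future": draw parameters, then simulate future paths and summarize them.
set.seed(3)
y <- arima.sim(list(ar = 0.7), n = 150)

fit <- arima(y, order = c(1, 0, 0))
theta_hat <- coef(fit); V <- vcov(fit)        # approximate (normal) posterior for (phi, mean)

H <- 12; n_draws <- 2000
paths <- matrix(NA, n_draws, H)
L <- t(chol(V))
for (d in 1:n_draws) {
  th <- theta_hat + L %*% rnorm(length(theta_hat))   # one draw of (phi, mean)
  phi <- th[1]; mu <- th[2]; sd_e <- sqrt(fit$sigma2)
  y_prev <- tail(as.numeric(y), 1)
  for (h in 1:H) {
    y_prev <- mu + phi * (y_prev - mu) + rnorm(1, 0, sd_e)   # simulate one step ahead
    paths[d, h] <- y_prev
  }
}
# Pointwise 90% forecast bands from the simulated joint predictive distribution
apply(paths, 2, quantile, probs = c(0.05, 0.5, 0.95))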

14.
This paper addresses the investment decisions of 373 large Brazilian firms from 1997 to 2004 in the presence of financial constraints, using panel data. A Bayesian econometric model with ridge regression was used to deal with multicollinearity problems among the variables in the model. Prior distributions are assumed for the parameters, classifying the model into random or fixed effects. We used a Bayesian approach to estimate the parameters, considering normal and Student t distributions for the errors, and assumed that the initial values of the lagged dependent variable are not fixed but generated by a random process. The recursive predictive density criterion was used for model comparisons. Twenty models were tested, and the results indicated that multicollinearity does influence the values of the estimated parameters. Controlling for capital intensity, financial constraints are found to be more important for capital-intensive firms, probably due to their lower profitability indexes, higher fixed costs, and higher degree of property diversification.

15.
The fused lasso penalizes a loss function by the L1 norm of both the regression coefficients and their successive differences, to encourage sparsity in both. In this paper, we propose a Bayesian generalized fused lasso model based on a normal-exponential-gamma (NEG) prior distribution. The NEG prior is placed on the differences of successive regression coefficients. The proposed method enables us to construct a more versatile sparse model than the ordinary fused lasso by using a flexible regularization term. Simulation studies and real data analyses show that the proposed method outperforms the ordinary fused lasso.
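For reference, the ordinary fused lasso objective referred to here can be written, in standard notation (not necessarily the paper's), as

\hat{\beta} = \operatorname*{arg\,min}_{\beta} \; \frac{1}{2}\sum_{i=1}^{n}\Bigl(y_i - \sum_{j=1}^{p} x_{ij}\beta_j\Bigr)^{2} + \lambda_1 \sum_{j=1}^{p} \lvert\beta_j\rvert + \lambda_2 \sum_{j=2}^{p} \lvert\beta_j - \beta_{j-1}\rvert ,

where the first penalty encourages sparsity of the coefficients and the second sparsity of their successive differences; the Bayesian generalized variant described above replaces the second penalty with a NEG prior on those differences.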

16.
Various regression models based on sib-pair data have been developed for mapping quantitative trait loci (QTL) in humans since the seminal paper published in 1972 by Haseman and Elston. Fulker and Cardon [D.W. Fulker, L.R. Cardon, A sib-pair approach to interval mapping of quantitative trait loci, Am. J. Hum. Genet. 54 (1994) 1092–1103] adapted the idea of interval mapping [E.S. Lander, D. Botstein, Mapping Mendelian factors underlying quantitative traits using RFLP linkage maps, Genetics 121 (1989) 185–199] to the Haseman–Elston regression model in order to increase the power of QTL mapping. However, in the interval mapping approach of Fulker and Cardon, the statistic for testing QTL effects does not follow classical statistical theory, and hence critical values of the test cannot be appropriately determined. In this article, we consider a new interval mapping approach based on a general sib-pair regression model. A modified Wald test is proposed for testing QTL effects. The asymptotic distribution of the modified Wald test statistic is provided, so the critical values or p-values of the test can be properly determined. Simulation studies are carried out to verify the validity of the modified Wald test and to demonstrate its desirable power.

17.
We consider a Bayesian method for analyzing paired survival data using the bivariate exponential model proposed by Moran (1967, Biometrika 54:385–394). Important features of Moran's model are that the marginal distributions are exponential and that the correlation coefficient ranges between 0 and 1. These features contrast with the popular exponential model with gamma frailty. Despite these nice properties, statistical analysis with Moran's model has been hampered by the lack of a closed-form likelihood function. In this paper, we introduce a latent variable to circumvent this difficulty in the Bayesian computation. We also consider a model checking procedure using the predictive Bayesian P-value.

18.
Inference on the whole biological system is a recent focus in bioscience. Different biomarkers, although they may seem to function separately, can actually control some event(s) of interest simultaneously. This fundamental biological principle has motivated researchers to develop joint models that can explain the biological system efficiently. Because of advances in biotechnology, huge amounts of biological information can now be obtained easily, so dimension reduction is one of the major issues in current biological research. In this article, we propose a Bayesian semiparametric approach for jointly modeling observed longitudinal trait and event-time data. A sure independence screening procedure based on the distance correlation, together with a modified version of the Bayesian Lasso, is used for dimension reduction. The traditional Cox proportional hazards model is used for modeling the event time. Our proposed model is applied to detect marker genes controlling the biomass and first flowering time of soybean plants. Simulation studies are performed to assess the practical usefulness of the proposed model, which can also be used for the joint analysis of traits and diseases in humans, animals, and plants.

19.
Liu M, Lu W, Shao Y. Lifetime Data Analysis 2006, 12(4): 421–440
When censored time-to-event data are used to map quantitative trait loci (QTL), the existence of nonsusceptible subjects entails extra challenges. If the heterogeneous susceptibility is ignored or inappropriately handled, we may either fail to detect the responsible genetic factors or find spuriously significant locations. In this article, an interval mapping method based on parametric mixture cure models is proposed that takes nonsusceptible subjects into consideration. The proposed model can be used to detect QTL that are responsible for differential susceptibility and/or the time-to-event trait distribution. In particular, we propose a likelihood-based testing procedure with genome-wide significance levels calculated using a resampling method. The performance of the proposed method, and the importance of accounting for heterogeneous susceptibility, are demonstrated by simulation studies and by an application to survival data from an experiment on mice infected with Listeria monocytogenes.

20.
Bayesian optimal designs have received increasing attention in recent years, especially in biomedical and clinical trials. Bayesian design procedures can use the available prior information about the unknown parameters so that a better design can be achieved. With this in mind, this article considers Bayesian A- and D-optimal designs for the two- and three-parameter gamma regression model. We first obtain the Fisher information matrix of the proposed model and then calculate the Bayesian A- and D-optimal designs assuming various prior distributions, such as the normal, half-normal, gamma, and uniform distributions, for the unknown parameters. All of the numerical calculations are handled in R. The results of this article are useful in medical and industrial research.
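As a rough illustration of how such a criterion can be evaluated numerically (a sketch only, not the article's computations): assuming a two-parameter gamma regression with the canonical inverse link, mu = 1/(b0 + b1 x), the per-observation Fisher information for (b0, b1) is proportional to x x' / (b0 + b1 x)^2, and the Bayesian D-criterion of a candidate design can be approximated by averaging the log-determinant of its information matrix over prior draws. The prior and design points below are hypothetical.

# Monte Carlo evaluation of a Bayesian D-optimality criterion for candidate designs.
set.seed(4)
bayes_D <- function(x_pts, w, prior_draws) {
  crit <- apply(prior_draws, 1, function(b) {
    M <- matrix(0, 2, 2)
    for (k in seq_along(x_pts)) {
      xk  <- c(1, x_pts[k])
      eta <- sum(xk * b)                     # linear predictor b0 + b1 * x
      M   <- M + w[k] * (xk %o% xk) / eta^2  # weighted information contribution
    }
    log(det(M))
  })
  mean(crit)                                 # expected log-determinant over the prior
}

prior_draws <- cbind(rnorm(500, 2, 0.2), abs(rnorm(500, 1, 0.2)))  # draws of (b0, b1)
design_A <- c(0.5, 2.0)                      # two support points, equal weights
design_B <- c(0.5, 1.0, 2.0)                 # three support points, equal weights
c(A = bayes_D(design_A, rep(1, 2) / 2, prior_draws),
  B = bayes_D(design_B, rep(1, 3) / 3, prior_draws))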

