Similar Documents
 20 similar documents found (search time: 671 ms)
1.
Abstract

In internet financial transactions and reliability engineering, data often contain an excess of both zero and one observations simultaneously, which lies beyond what conventional count models can fit. This paper therefore proposes a zero-and-one-inflated geometric distribution regression model. By introducing Pólya-Gamma latent variables into the Bayesian inference, posterior sampling of the high-dimensional parameters is decomposed into sampling the latent variables and sampling lower-dimensional parameters, which circumvents Metropolis-Hastings sampling and yields samples with higher sampling efficiency. A simulation study is conducted to assess the performance of the proposed estimator for various sample sizes. Finally, a doctoral dissertation data set is analyzed to illustrate the practicability of the proposed method; the results show that the zero-and-one-inflated geometric regression model with Pólya-Gamma latent variables achieves a better fit.
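A minimal Python sketch of the zero-and-one-inflated geometric mass function underlying this model; the function name and parameterization (extra masses `p0`, `p1`, geometric success probability `theta`) are illustrative, not taken from the paper:

```python
# Hypothetical sketch: pmf of a zero-and-one-inflated geometric (ZOIG)
# distribution.  p0 and p1 are the extra point masses at 0 and 1; theta is
# the success probability of the baseline geometric on {0, 1, 2, ...}.
def zoig_pmf(y, p0, p1, theta):
    geom = theta * (1.0 - theta) ** y               # baseline geometric mass
    extra = p0 if y == 0 else (p1 if y == 1 else 0.0)
    return extra + (1.0 - p0 - p1) * geom

# The masses over a long prefix of the support should sum to (almost) 1.
total = sum(zoig_pmf(y, 0.3, 0.2, 0.4) for y in range(200))
```

In a regression version, `p0`, `p1`, and `theta` would each be linked to covariates, which is where the Pólya-Gamma augmentation becomes useful.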

2.
The multivariate regression model is considered with p regressors. A latent vector with p binary entries serves to identify one of two types of regression coefficients: those close to 0 and those not. Specializing our general distributional setting to the linear model with Gaussian errors and using natural conjugate prior distributions, we derive the marginal posterior distribution of the binary latent vector. Fast algorithms aid its direct computation, and in high dimensions these are supplemented by a Markov chain Monte Carlo approach to sampling from the known posterior distribution. Problems with hundreds of regressor variables become quite feasible. We give a simple method of assigning the hyperparameters of the prior distribution. The posterior predictive distribution is derived and the approach illustrated on compositional analysis of data involving three sugars with 160 near infrared absorbances as regressors.
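A hedged sketch of the subset-enumeration idea behind such variable selection. The paper derives exact conjugate marginal likelihoods; for brevity this toy version scores each binary latent vector with the cruder BIC approximation to the log marginal likelihood:

```python
import itertools
import numpy as np

# Toy variable selection: enumerate all binary latent vectors for p = 3
# regressors and weight each by exp(BIC-type score).  The BIC score is a
# stand-in for the exact conjugate marginal likelihood used in the paper.
rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + rng.normal(size=n)          # only regressor 0 is active

def bic_score(active):
    cols = [j for j in range(p) if active[j]]
    Xa = X[:, cols] if cols else np.zeros((n, 0))
    resid = y - Xa @ np.linalg.lstsq(Xa, y, rcond=None)[0] if cols else y
    rss = float(resid @ resid)
    return -0.5 * n * np.log(rss / n) - 0.5 * len(cols) * np.log(n)

models = list(itertools.product([0, 1], repeat=p))
scores = np.array([bic_score(m) for m in models])
post = np.exp(scores - scores.max())
post /= post.sum()                               # posterior model weights
best = models[int(post.argmax())]
```

With hundreds of regressors, this enumeration over 2^p vectors is infeasible, which is why the paper supplements direct computation with MCMC over the latent vector.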

3.
In spatial generalized linear mixed models (SGLMMs), statistical inference encounters problems, since the random effects in the model imply high-dimensional integrals in the marginal likelihood function. In this article, we temporarily treat the parameters as random variables and express the marginal likelihood function as a posterior expectation. Hence, the marginal likelihood function is approximated using samples obtained from the posterior density of the latent variables and parameters given the data. However, in this setting, misspecification of the prior distribution of the correlation function parameter and convergence problems of Markov chain Monte Carlo (MCMC) methods can adversely affect the likelihood approximation. To avoid these challenges, we utilize an empirical Bayes approach to estimate the prior hyperparameters. We also use a computationally efficient hybrid algorithm combining the inverse Bayes formula (IBF) and Gibbs sampler procedures. A simulation study is conducted to assess the performance of our method. Finally, we illustrate the method by applying it to a dataset of standard penetration tests of soil in an area in southern Iran.

4.
Bayesian dynamic linear models (DLMs) are useful in time series modelling because of the flexibility they offer for obtaining a good forecast. They are based on a decomposition of the relevant factors which explain the behaviour of the series through a series of state parameters. Nevertheless, the DLM as developed by West and Harrison depends on additional quantities, such as the variance of the system disturbances, which in practice are unknown; these are referred to here as 'hyperparameters' of the model. In this paper, DLMs with autoregressive components are used to describe time series that show cyclic behaviour. The marginal posterior distribution for the state parameters can be obtained by weighting the conditional distribution of the state parameters by the marginal distribution of the hyperparameters. In most cases, the joint distribution of the hyperparameters can be obtained analytically, but the marginal distributions of its components cannot, so numerical integration is required. We propose to obtain samples of the hyperparameters by a variant of the sampling importance resampling method. A few applications are shown with simulated and real data sets.
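The sampling importance resampling (SIR) step for hyperparameters can be sketched as follows. This is a generic SIR illustration on a toy normal-mean problem, not the authors' DLM-specific variant:

```python
import numpy as np

# Generic SIR sketch: draw candidates from a proposal (here, the prior),
# weight them by the likelihood, then resample in proportion to the weights.
rng = np.random.default_rng(1)
data = rng.normal(loc=1.5, scale=1.0, size=50)          # toy observations

candidates = rng.normal(loc=0.0, scale=3.0, size=5000)  # prior draws for mu

def log_lik(mu):
    return -0.5 * np.sum((data - mu) ** 2)              # known unit variance

logw = np.array([log_lik(m) for m in candidates])
w = np.exp(logw - logw.max())                           # stabilized weights
w /= w.sum()
posterior_draws = rng.choice(candidates, size=2000, replace=True, p=w)
```

The resampled draws approximate the posterior of `mu`; in the DLM setting, the same mechanism would target the marginal posterior of the hyperparameters.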

5.
We present a Bayesian model selection approach to estimate the intrinsic dimensionality of a high-dimensional dataset. To this end, we introduce a novel formulation of the probabilistic principal component analysis model based on a normal-gamma prior distribution. In this context, we exhibit a closed-form expression of the marginal likelihood which allows one to infer an optimal number of components. We also propose a heuristic based on the expected shape of the marginal likelihood curve in order to choose the hyperparameters. In nonasymptotic frameworks, we show on simulated data that this exact dimensionality selection approach is competitive with both Bayesian and frequentist state-of-the-art methods.

6.
ABSTRACT

In this paper, we consider an effective Bayesian inference for the censored Student-t linear regression model, which is a robust alternative to the usual censored normal linear regression model. Based on the mixture representation of the Student-t distribution, we propose a non-iterative Bayesian sampling procedure to obtain approximately independent and identically distributed samples from the observed posterior distributions, in contrast to iterative Markov chain Monte Carlo algorithms. We conduct model selection and influence analysis using the posterior samples to choose the best-fitting model and to detect latent outliers. We illustrate the performance of the procedure through simulation studies and, finally, apply it to two real data sets: insulation life data with right censoring and wage rate data with left censoring, obtaining some interesting results.
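The mixture representation of the Student-t distribution that such samplers exploit can be illustrated directly. This sketch only demonstrates the representation itself (normal divided by the square root of a scaled chi-square), not the censored-regression sampler:

```python
import numpy as np

# Scale-mixture representation: if Z ~ N(0,1) and G ~ chi2(nu)/nu are
# independent, then Z / sqrt(G) follows a Student-t distribution with nu
# degrees of freedom.
rng = np.random.default_rng(2)
nu = 5.0                                    # degrees of freedom
z = rng.normal(size=100_000)
g = rng.chisquare(nu, size=100_000) / nu
t_draws = z / np.sqrt(g)                    # distributed as Student-t(nu)
sample_var = float(t_draws.var())
# Theoretical variance of t(nu) is nu / (nu - 2) = 5/3 here.
```

Conditioning on the mixing variable `g` reduces the t model to a normal model, which is what makes conjugate, non-iterative sampling tractable.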

7.
Optimal design theory deals with the assessment of the optimal joint distribution of all independent variables prior to data collection. In many practical situations, however, covariates are involved whose distribution is not determined in advance. The optimal design problem may then be reformulated in terms of finding the optimal marginal distribution for a specific set of variables. In general, the optimal solution may depend on the unknown (conditional) distribution of the covariates. This article discusses the D_A-maximin procedure to account for the uncertain distribution of the covariates. Sufficient conditions are given under which the uniform design of a subset of independent discrete variables is D_A-maximin. The sufficient conditions are formulated for generalized linear mixed models with an arbitrary number of quantitative and qualitative independent variables and random effects.

8.
Consider a J-component series system which is put on an Accelerated Life Test (ALT) involving K stress variables. First, a general formulation of ALT is provided for the log-location-scale family of distributions. A general stress translation function of the location parameter of the component log-lifetime distribution is proposed, which can accommodate standard forms such as the Arrhenius, power-rule, and log-linear models as special cases. Later, the component lives are assumed to be independent Weibull random variables with a common shape parameter. A full Bayesian methodology is then developed by letting only the scale parameters of the Weibull component lives depend on the stress variables through the general stress translation function. Priors on all the parameters, namely the stress coefficients and the Weibull shape parameter, are assumed to be log-concave and independent of each other; this assumption facilitates Gibbs sampling from the joint posterior. The samples thus generated from the joint posterior are then used to obtain Bayesian point and interval estimates of the system reliability at the usage condition.

9.
Data sets with excess zeroes are frequently analyzed in many disciplines. A common framework used to analyze such data is the zero-inflated (ZI) regression model. It mixes a degenerate distribution with point mass at zero with a non-degenerate distribution. The estimates from ZI models quantify the effects of covariates on the means of latent random variables, which are often not the quantities of primary interest. Recently, marginal zero-inflated Poisson (MZIP; Long et al. [A marginalized zero-inflated Poisson regression model with overall exposure effects. Stat. Med. 33 (2014), pp. 5151–5165]) and negative binomial (MZINB; Preisser et al., 2016) models have been introduced that model the mean response directly. These models yield covariate effects that have simple interpretations that are, for many applications, more appealing than those available from ZI regression. This paper outlines a general framework for marginal zero-inflated models where the latent distribution is a member of the exponential dispersion family, focusing on common distributions for count data. In particular, our discussion includes the marginal zero-inflated binomial (MZIB) model, which has not been discussed previously. The details of maximum likelihood estimation via the EM algorithm are presented, and the properties of the estimators as well as Wald and likelihood ratio-based inference are examined via simulation. Two examples illustrate the advantages of MZIP, MZINB, and MZIB models for practical data analysis.
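A hedged sketch of EM estimation for the plain zero-inflated Poisson model without covariates; the marginalized (MZIP/MZINB/MZIB) models in the paper use a different parameterization, but the E- and M-steps below show the basic machinery:

```python
import numpy as np

# EM for a covariate-free zero-inflated Poisson: each zero is either a
# "structural" zero (probability pi) or a Poisson zero.
rng = np.random.default_rng(3)
n, true_pi, true_lam = 5000, 0.3, 2.0
structural = rng.random(n) < true_pi
y = np.where(structural, 0, rng.poisson(true_lam, size=n))

pi_hat, lam_hat = 0.5, 1.0                      # crude starting values
for _ in range(200):
    # E-step: posterior probability that each observed zero is structural
    tau = np.where(y == 0,
                   pi_hat / (pi_hat + (1 - pi_hat) * np.exp(-lam_hat)),
                   0.0)
    # M-step: closed-form updates given the responsibilities tau
    pi_hat = float(tau.mean())
    lam_hat = float(((1 - tau) * y).sum() / (1 - tau).sum())
```

In the covariate case, each M-step becomes a weighted regression, but the structure of the algorithm is the same.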

10.
Looking at predictive accuracy is a traditional method for comparing models. A natural way to approximate out-of-sample predictive accuracy is leave-one-out cross-validation (LOOCV): we alternately hold out each case from the full dataset, train a Bayesian model using Markov chain Monte Carlo without the held-out case, and finally evaluate the posterior predictive distribution of each case at its actual observation. However, exact LOOCV is time-consuming. This paper introduces two methods, iIS and iWAIC, for approximating LOOCV using only Markov chain samples simulated from the posterior based on the full dataset. iIS and iWAIC aim to improve the approximations given by importance sampling (IS) and WAIC in Bayesian models with possibly correlated latent variables. In iIS and iWAIC, we first integrate the predictive density over the distribution of the latent variables associated with the held-out case, without reference to its observation, and then apply the IS and WAIC approximations to the integrated predictive density. We compare iIS and iWAIC with other approximation methods in three kinds of models: finite mixture models, models with correlated spatial effects, and a random effect logistic regression model. Our empirical results show that iIS and iWAIC give substantially better approximations than non-integrated IS and WAIC and other methods.
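The non-integrated importance-sampling approximation to LOOCV that iIS improves upon can be sketched on a toy normal-mean model (the conjugate setup and all names are illustrative):

```python
import math
import numpy as np

# IS-LOO: given draws from the full-data posterior, the leave-one-out
# predictive density for case i is a weighted average with importance
# weights 1 / p(y_i | theta_s), which retarget the draws at the LOO posterior.
rng = np.random.default_rng(4)
y = rng.normal(loc=0.5, size=40)
n = len(y)

# Conjugate posterior for a normal mean (known unit variance, flat prior):
theta_draws = rng.normal(loc=y.mean(), scale=1.0 / math.sqrt(n), size=4000)

def lpd(yi, theta):
    return -0.5 * math.log(2 * math.pi) - 0.5 * (yi - theta) ** 2

loo_lpd = 0.0
for yi in y:
    dens = np.exp([lpd(yi, t) for t in theta_draws])   # p(y_i | theta_s)
    w = 1.0 / dens                                     # IS weights
    loo_lpd += math.log(float(np.sum(w * dens) / np.sum(w)))
```

These raw `1/p` weights can have heavy tails; the paper's integrated variants replace `p(y_i | theta_s)` with a density integrated over the held-out case's latent variables, which stabilizes the weights.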

11.
We consider importance sampling (IS) type weighted estimators based on Markov chain Monte Carlo (MCMC) targeting an approximate marginal of the target distribution. In the context of Bayesian latent variable models, the MCMC typically operates on the hyperparameters, and the subsequent weighting may be based on IS or sequential Monte Carlo (SMC), but allows for multilevel techniques as well. The IS approach provides a natural alternative to delayed acceptance (DA) pseudo-marginal/particle MCMC, and has many advantages over DA, including a straightforward parallelization and additional flexibility in MCMC implementation. We detail minimal conditions which ensure strong consistency of the suggested estimators, and provide central limit theorems with expressions for asymptotic variances. We demonstrate how our method can make use of SMC in the state space models context, using Laplace approximations and time-discretized diffusions. Our experimental results are promising and show that the IS-type approach can provide substantial gains relative to an analogous DA scheme, and is often competitive even without parallelization.

12.
Approximate Bayesian computation (ABC) methods permit approximate inference for intractable likelihoods when it is possible to simulate from the model. However, they perform poorly for high-dimensional data and in practice must usually be used in conjunction with dimension reduction methods, resulting in a loss of accuracy which is hard to quantify or control. We propose a new ABC method for high-dimensional data based on rare event methods which we refer to as RE-ABC. This uses a latent variable representation of the model. For a given parameter value, we estimate the probability of the rare event that the latent variables correspond to data roughly consistent with the observations. This is performed using sequential Monte Carlo and slice sampling to systematically search the space of latent variables. In contrast, standard ABC can be viewed as using a more naive Monte Carlo estimate. We use our rare event probability estimator as a likelihood estimate within the pseudo-marginal Metropolis–Hastings algorithm for parameter inference. We provide asymptotics showing that RE-ABC has a lower computational cost for high-dimensional data than standard ABC methods. We also illustrate our approach empirically, on a Gaussian distribution and an application in infectious disease modelling.
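For contrast, the "naive Monte Carlo" baseline that standard ABC corresponds to can be sketched as plain rejection ABC on a toy example (the summary statistic and tolerance are chosen for illustration only):

```python
import numpy as np

# Rejection ABC: draw a parameter from the prior, simulate data from the
# model, and accept the draw if the simulated summary is within a tolerance
# epsilon of the observed summary.
rng = np.random.default_rng(5)
y_obs = rng.normal(loc=2.0, scale=1.0, size=100)
s_obs = float(y_obs.mean())               # summary statistic

accepted = []
for _ in range(20_000):
    mu = rng.uniform(-5.0, 5.0)           # prior draw
    s_sim = rng.normal(loc=mu, scale=1.0, size=100).mean()
    if abs(s_sim - s_obs) < 0.05:         # tolerance epsilon
        accepted.append(mu)
abc_mean = float(np.mean(accepted))
```

The acceptance rate collapses as data dimension grows or epsilon shrinks, which is the inefficiency RE-ABC's rare-event estimator is designed to address.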

13.
Summary. We consider the construction of perfect samplers for posterior distributions associated with mixtures of exponential families and conjugate priors, starting with a perfect slice sampler in the spirit of Mira and co-workers. The methods rely on a marginalization akin to Rao–Blackwellization and illustrate the duality principle of Diebolt and Robert. A first approximation embeds the finite support distribution on the latent variables within a continuous support distribution that is easier to simulate by slice sampling, but we later demonstrate that the approximation can be very poor. We conclude by showing that an alternative perfect sampler based on a single backward chain can be constructed. This alternative can handle much larger sample sizes than the slice sampler first proposed.

14.
This article presents a novel and simple approach to the estimation of a marginal likelihood, in a Bayesian context. The estimate is based on a Markov chain output which provides samples from the posterior distribution and an additional latent variable. It is the mean of this latent variable which provides the estimate for the value of the marginal likelihood.
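The paper's latent-variable estimator is not reproduced here; as a hedged baseline for comparison, the simplest Monte Carlo estimate of a marginal likelihood averages the likelihood over prior draws, p(y) ≈ (1/S) Σ_s p(y | θ_s) with θ_s drawn from the prior:

```python
import math
import numpy as np

# Naive prior-sampling estimate of the marginal likelihood for a normal-mean
# model with known unit variance and a N(0, 4) prior on mu.
rng = np.random.default_rng(6)
y = rng.normal(loc=1.0, scale=1.0, size=20)

def log_lik(mu):
    return float(-0.5 * len(y) * math.log(2 * math.pi)
                 - 0.5 * np.sum((y - mu) ** 2))

prior_draws = rng.normal(loc=0.0, scale=2.0, size=50_000)   # N(0, 4) prior
ll = np.array([log_lik(m) for m in prior_draws])
log_ml = float(np.log(np.mean(np.exp(ll - ll.max()))) + ll.max())
```

This estimator is notoriously inefficient when the posterior is much narrower than the prior, which is precisely the regime where posterior-sample-based estimators such as the paper's are attractive.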

15.
Empirical Bayes is a versatile approach to "learn from a lot" in two ways: first, from a large number of variables and, second, from a potentially large amount of prior information, for example, stored in public repositories. We review applications of a variety of empirical Bayes methods to several well-known model-based prediction methods, including penalized regression, linear discriminant analysis, and Bayesian models with sparse or dense priors. We discuss "formal" empirical Bayes methods that maximize the marginal likelihood, but also more informal approaches based on other data summaries. We contrast empirical Bayes with cross-validation and full Bayes and discuss hybrid approaches. To study the relation between the quality of an empirical Bayes estimator and p, the number of variables, we consider a simple empirical Bayes estimator in a linear model setting. We argue that empirical Bayes is particularly useful when the prior contains multiple parameters, which model a priori information on variables, termed "co-data". In particular, we present two novel examples that allow for co-data: first, a Bayesian spike-and-slab setting that facilitates inclusion of multiple co-data sources and types and, second, a hybrid empirical Bayes-full Bayes ridge regression approach for estimation of the posterior predictive interval.

16.
In this article, we utilize a scale mixture of Gaussian random field as a tool for modeling spatial ordered categorical data with non-Gaussian latent variables. In fact, we assume a categorical random field is created by truncating a Gaussian Log-Gaussian latent variable model to accommodate heavy tails. Since the traditional likelihood approach for the considered model involves high-dimensional integrations which are computationally intensive, the maximum likelihood estimates are obtained using a stochastic approximation expectation–maximization algorithm. For this purpose, Markov chain Monte Carlo methods are employed to draw from the posterior distribution of latent variables. A numerical example illustrates the methodology.

17.
Bivariate count data arise in several different disciplines (epidemiology, marketing, and sports statistics, to name a few), and the bivariate Poisson distribution, being a generalization of the Poisson distribution, plays an important role in modelling such data. In the present paper we present a Bayesian estimation approach for the parameters of the bivariate Poisson model and provide the posterior distributions in closed form. It is shown that the joint posterior distributions are finite mixtures of conditionally independent gamma distributions, whose full form can be easily deduced by a recursive updating scheme. Thus, the need to apply computationally demanding MCMC schemes for Bayesian inference in such models is removed, since direct sampling from the posterior becomes available, even in cases where the posterior distribution of functions of the parameters is not available in closed form. In addition, we define a class of prior distributions that possess an interesting conjugacy property which extends the typical notion of conjugacy, in the sense that both prior and posterior belong to the same family of finite mixture models but with different numbers of components. Extensions to certain other models, including multivariate models or models with other marginal distributions, are discussed.
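The bivariate Poisson model itself can be illustrated by simulation via trivariate reduction; this sketch shows the model, not the paper's conjugate Bayesian analysis:

```python
import numpy as np

# Trivariate reduction: X1 = Z1 + Z0 and X2 = Z2 + Z0 with independent
# Poisson variables Z0, Z1, Z2, so that marginally X1 ~ Poisson(lam1 + lam0),
# X2 ~ Poisson(lam2 + lam0), and Cov(X1, X2) = lam0.
rng = np.random.default_rng(7)
lam0, lam1, lam2 = 1.0, 2.0, 3.0
size = 200_000
z0 = rng.poisson(lam0, size)
z1 = rng.poisson(lam1, size)
z2 = rng.poisson(lam2, size)
x1, x2 = z1 + z0, z2 + z0
emp_cov = float(np.cov(x1, x2)[0, 1])     # should be close to lam0
```

The shared component `z0` is exactly the latent structure that makes conditionally independent gamma mixtures appear in the posterior.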

18.
Pairwise likelihood is a limited-information estimation method that has also been used for estimating the parameters of latent variable and structural equation models. It is a special case of composite likelihood methods, which maximize a weighted sum of lower-order conditional or marginal log-likelihoods instead of the full log-likelihood. Weighting has been proposed for increasing efficiency, but the choice of weights is not straightforward in most applications. Furthermore, the importance of leaving out higher-order scores, to avoid duplicating lower-order marginal information, has been pointed out. In this paper, we approach the problem of weighting from a sampling perspective. More specifically, we propose a sampling method for selecting pairs based on their contribution to the total variance from all pairs. The sampling approach does not aim to increase efficiency but to decrease estimation time, especially in models with a large number of observed categorical variables. We demonstrate the performance of the proposed methodology using simulated examples and a real application.

19.
For observable indicators with ordered categories, one can assume underlying latent variables that follow certain marginal distributions. Transforming the latent variables changes their marginal distributions but not the observable qualitative indicators. The joint distribution of the latent variables can be constructed from the marginal distributions. There is a broad class of multivariate distributions for which the observable indicators are equivalent. By choosing the multivariate normal distribution from this class, we can analyse a linear relationship between the transformed latent variables. This leads to latent structural equation models. Estimation of these latter models is therefore more general than the distributional assumption might initially suggest. Robustness of the estimation procedure is also discussed for deviations from this distribution family. Using ordinal business survey data of the German Ifo Institute, we test the efficiency of firms' price expectations implied by the rational expectation hypothesis.

20.
Models that involve an outcome variable, covariates, and latent variables are frequently the target for estimation and inference. The presence of missing covariate or outcome data presents a challenge, particularly when missingness depends on the latent variables. This missingness mechanism is called latent ignorable or latent missing at random and is a generalisation of missing at random. Several authors have previously proposed approaches for handling latent ignorable missingness, but these methods rely on prior specification of the joint distribution for the complete data. In practice, specifying the joint distribution can be difficult and/or restrictive. We develop a novel sequential imputation procedure for imputing covariate and outcome data for models with latent variables under latent ignorable missingness. The proposed method does not require a joint model; rather, we use results under a joint model to inform imputation with less restrictive modelling assumptions. We discuss identifiability and convergence-related issues, and simulation results are presented in several modelling settings. The method is motivated and illustrated by a study of head and neck cancer recurrence.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号