Similar Documents
20 similar documents found.
1.
In this paper we propose a novel procedure for the estimation of semiparametric survival functions. The proposed technique adapts penalized likelihood survival models to the context of lifetime value modeling. The method extends the classical Cox model by introducing a smoothing parameter that can be estimated by means of penalized maximum likelihood procedures. Markov chain Monte Carlo methods are employed to estimate this smoothing parameter effectively, using an algorithm which combines Metropolis–Hastings and Gibbs sampling. Our proposal is contextualized and compared with conventional models, with reference to a marketing application that involves the prediction of customer lifetime value.

2.
Markov chain Monte Carlo (MCMC) is an important computational technique for generating samples from non-standard probability distributions. A major challenge in the design of practical MCMC samplers is to achieve efficient convergence and mixing properties. One way to accelerate convergence and mixing is to adapt the proposal distribution in light of previously sampled points, thus increasing the probability of acceptance. In this paper, we propose two new adaptive MCMC algorithms based on the Independent Metropolis–Hastings algorithm. In the first, we adjust the proposal to minimize an estimate of the cross-entropy between the target and proposal distributions, using the experience of pre-runs. This approach provides a general technique for deriving natural adaptive formulae. The second approach uses multiple parallel chains, and involves updating chains individually, then updating a proposal density by fitting a Bayesian model to the population. An important feature of this approach is that adapting the proposal does not change the limiting distributions of the chains. Consequently, the adaptive phase of the sampler can be continued indefinitely. We include results of numerical experiments indicating that the new algorithms compete well with traditional Metropolis–Hastings algorithms. We also demonstrate the method for a realistic problem arising in Comparative Genomics.
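To make the first adaptation strategy concrete, here is a minimal sketch (not taken from the paper): with a Gaussian proposal family, minimizing the estimated cross-entropy between target and proposal over pre-run samples reduces to matching the sample mean and covariance of the pre-run, after which the fitted Gaussian is used in an independence Metropolis–Hastings sampler. The target density, pre-run length, and step size below are placeholder choices.

```python
import numpy as np
from scipy import stats

def log_target(x):
    # Placeholder target: a correlated bivariate Gaussian.
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return stats.multivariate_normal(mean=[0.0, 0.0], cov=cov).logpdf(x)

def random_walk_prerun(x0, n, step=0.5, rng=None):
    """Plain random-walk Metropolis pre-run used only to collect adaptation samples."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    out = []
    for _ in range(n):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out.append(x.copy())
    return np.array(out)

def independent_mh(x0, n, prop_mean, prop_cov, rng=None):
    """Independence Metropolis-Hastings with a proposal adapted on the pre-run."""
    rng = rng or np.random.default_rng(1)
    prop = stats.multivariate_normal(mean=prop_mean, cov=prop_cov)
    x = np.asarray(x0, dtype=float)
    lp, lq = log_target(x), prop.logpdf(x)
    out, accepted = [], 0
    for _ in range(n):
        y = prop.rvs(random_state=rng)
        lp_y, lq_y = log_target(y), prop.logpdf(y)
        # Independence-sampler acceptance ratio: pi(y) q(x) / (pi(x) q(y)).
        if np.log(rng.uniform()) < (lp_y - lp) + (lq - lq_y):
            x, lp, lq = y, lp_y, lq_y
            accepted += 1
        out.append(x.copy())
    return np.array(out), accepted / n

pre = random_walk_prerun(np.zeros(2), n=2000)
# For a Gaussian proposal family, minimizing the estimated cross-entropy to the
# target over the pre-run samples reduces to moment matching (mean and covariance).
mu_hat, cov_hat = pre.mean(axis=0), np.cov(pre.T) + 1e-6 * np.eye(2)
samples, acc_rate = independent_mh(np.zeros(2), n=5000, prop_mean=mu_hat, prop_cov=cov_hat)
print("acceptance rate after adaptation:", round(acc_rate, 3))
```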

3.
Different strategies have been proposed to improve the mixing and convergence properties of Markov chain Monte Carlo algorithms. These are mainly concerned with customizing the proposal density in the Metropolis–Hastings algorithm to the specific target density, and require a detailed exploratory analysis of the stationary distribution and/or some preliminary experiments to determine an efficient proposal. Various Metropolis–Hastings algorithms have been suggested that make use of previously sampled states in defining an adaptive proposal density. Here we propose a general class of adaptive Metropolis–Hastings algorithms based on Metropolis–Hastings-within-Gibbs sampling. For the case of a one-dimensional target distribution, we present two novel algorithms using mixtures of triangular and trapezoidal densities. These can also be seen as improved versions of the all-purpose adaptive rejection Metropolis sampling (ARMS) algorithm for sampling from non-log-concave univariate densities. Using a variety of examples, we demonstrate their properties and efficiencies and point out their advantages over ARMS and other adaptive alternatives such as the Normal Kernel Coupler.

4.
New Metropolis–Hastings algorithms using directional updates are introduced in this paper. Each iteration of a directional Metropolis–Hastings algorithm consists of three steps: (i) generate a line by sampling an auxiliary variable, (ii) propose a new state along the line, and (iii) accept/reject according to the Metropolis–Hastings acceptance probability. We consider two classes of directional updates. The first uses a point in ℝ^n as the auxiliary variable, the second an auxiliary direction vector. The proposed algorithms generalize previous directional updating schemes, since we allow the distribution of the auxiliary variable to depend on properties of the target at the current state. By letting the proposal distribution along the line depend on the density of the auxiliary variable, we identify proposal mechanisms that give unit acceptance rate. When we use a direction vector as the auxiliary variable, we obtain the advantageous effect of large moves in the Markov chain, and hence the autocorrelation length of the samples is small. We apply the directional Metropolis–Hastings algorithms to a Gaussian example, a mixture of Gaussian densities, and a Bayesian model for seismic data.
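A minimal sketch of the three-step structure described above, under the simplest possible assumptions: the auxiliary direction is drawn uniformly on the unit sphere, independently of the current state, and the proposal along the line is a symmetric Gaussian, so the state-dependent auxiliary distributions and unit-acceptance-rate constructions of the paper are not reproduced. The target density is a placeholder.

```python
import numpy as np

def log_target(x):
    # Placeholder target: a strongly correlated bivariate Gaussian.
    cov_inv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
    return -0.5 * x @ cov_inv @ x

def directional_mh(x0, n_iter, step=1.0, rng=None):
    """Directional Metropolis-Hastings with a state-independent auxiliary direction."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    chain = np.empty((n_iter, x.size))
    for i in range(n_iter):
        # (i) generate a line: draw a direction uniformly on the unit sphere.
        u = rng.standard_normal(x.size)
        u /= np.linalg.norm(u)
        # (ii) propose a new state along the line x + t*u with t ~ N(0, step^2).
        t = step * rng.standard_normal()
        y = x + t * u
        lp_y = log_target(y)
        # (iii) accept/reject; the proposal is symmetric, so the acceptance
        # probability reduces to the ratio of target densities.
        if np.log(rng.uniform()) < lp_y - lp:
            x, lp = y, lp_y
        chain[i] = x
    return chain

chain = directional_mh(np.zeros(2), n_iter=10000)
print("sample means:", chain.mean(axis=0).round(3))
```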

5.
Using a multivariate latent variable approach, this article proposes some new general models to analyze correlated bounded continuous and categorical (nominal and/or ordinal) responses with and without non-ignorable missing values. First, we discuss regression methods for jointly analyzing continuous, nominal, and ordinal responses, motivated by the analysis of data from studies of toxicity development. Second, using the beta and Dirichlet distributions, we extend the models so that bounded continuous responses can be used in place of continuous responses. The joint distribution of the bounded continuous, nominal, and ordinal variables is decomposed into a marginal multinomial distribution for the nominal variable and a conditional multivariate joint distribution for the bounded continuous and ordinal variables given the nominal variable. We estimate the regression parameters under the new general location models using the maximum likelihood method. A sensitivity analysis is also performed to study the influence of small perturbations of the parameters of the missingness mechanisms of the model on the maximal normal curvature. The proposed models are applied to two data sets: BMI, steatosis and osteoporosis data, and Tehran household expenditure budgets.

6.
The present study deals with the estimation of the parameters of a k-component load-sharing parallel system model in which each component's failure time distribution is assumed to be geometric. The maximum likelihood estimates of the load-share parameters, with their standard errors, are obtained. (1 − γ)100% joint, Bonferroni simultaneous, and two bootstrap confidence intervals for the parameters are constructed. Further, recognizing that life testing experiments are time consuming, it seems realistic to consider the load-share parameters to be random variables. Therefore, Bayes estimates of the parameters, along with their standard errors, are obtained by assuming Jeffreys' invariant and gamma priors for the unknown parameters. Since the Bayes estimators cannot be found in closed form, Tierney and Kadane's approximation method is used to compute the Bayes estimates and standard errors of the parameters. A Markov chain Monte Carlo technique, the Gibbs sampler, is also used to obtain Bayes estimates and highest posterior density credible intervals of the load-share parameters, with the Metropolis–Hastings algorithm used to generate samples from the posterior distributions of the unknown parameters.

7.
This paper evaluates different aspects of the Monte Carlo expectation–maximization (MCEM) algorithm for estimating heavy-tailed mixed logistic regression (MLR) models. As a novelty, it also proposes a multiple-chain Gibbs sampler to generate from the distributions of the latent variables, thus obtaining independent samples. In heavy-tailed MLR models, the analytical forms of the full conditional distributions of the random effects are unknown. Four different Metropolis–Hastings algorithms are considered for generating from them. We also discuss stopping rules in order to obtain more efficient algorithms for heavy-tailed MLR models. The algorithms are compared through the analysis of simulated data and Ascaris suum data.

8.
We evaluate MCMC sampling schemes for a variety of link functions in generalized linear models with Dirichlet process random effects. First, we find that there is a large amount of variability in the performance of MCMC algorithms, with the slice sampler typically being less desirable than either a Kolmogorov–Smirnov mixture representation or a Metropolis–Hastings algorithm. Second, in fitting the Dirichlet process, handling the precision parameter has troubled model specification in the past. Here we find that incorporating this parameter into the MCMC sampling scheme is not only computationally feasible, but also results in a more robust set of estimates, in that the parameter is marginalized over rather than conditioned upon. Applications are provided with social science problems in areas where the data can be difficult to model, and we find that the nonparametric nature of the Dirichlet process priors for the random effects leads to improved analyses with more reasonable inferences.

9.
In likelihood analysis of categorized data, it is well known that within a restricted class of log-linear models the likelihood kernels for multinomial and product multinomial sampling distributions are identical. In practical terms the estimation procedure for one is appropriate for the other. There does not appear to be a widespread realization that a similar result holds for a wide class of the Grizzle, Starmer, and Koch (1969) weighted least squares techniques. In this report such a fundamental relationship is explicitly presented and illustrated through two analyses of Bartlett's (1935) data.
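For reference, the identity behind the multinomial/product-multinomial result can be sketched in standard notation for an I × J table of counts; this derivation is generic and not taken from the report itself.

```latex
% Multinomial sampling (grand total n fixed), I x J table with counts n_{ij}:
L_{M}(\mathbf{p}) \propto \prod_{i=1}^{I}\prod_{j=1}^{J} p_{ij}^{\,n_{ij}}
% Product-multinomial sampling (row totals n_{i+} fixed), with \pi_{j|i} = p_{ij}/p_{i+}:
L_{PM}(\boldsymbol{\pi}) \propto \prod_{i=1}^{I}\prod_{j=1}^{J} \pi_{j\mid i}^{\,n_{ij}}
% Substituting p_{ij} = p_{i+} \pi_{j|i} into L_M:
L_{M}(\mathbf{p}) \propto \Bigl(\prod_{i=1}^{I} p_{i+}^{\,n_{i+}}\Bigr) L_{PM}(\boldsymbol{\pi})
```

Within the class of log-linear models that contain the row-margin terms, the leading factor does not affect the kernel in the remaining parameters, so both sampling schemes lead to the same estimation procedure.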

10.
The present article discusses alternative regression models and estimation methods for dealing with multivariate fractional response variables. Both conditional mean models, estimable by quasi-maximum likelihood, and fully parametric models (Dirichlet and Dirichlet-multinomial), estimable by maximum likelihood, are considered. A new parameterization is proposed for the parametric models, which accommodates the most common specifications for the conditional mean (e.g., multinomial logit, nested logit, random parameters logit, dogit). The text also discusses at some length the specification analysis of fractional regression models, proposing several tests that can be performed through artificial regressions. Finally, an extensive Monte Carlo study evaluates the finite sample properties of most of the estimators and tests considered.

11.
Sampling from the posterior distribution in generalized linear mixed models
Generalized linear mixed models provide a unified framework for treatment of exponential family regression models, overdispersed data and longitudinal studies. These problems typically involve the presence of random effects and this paper presents a new methodology for making Bayesian inference about them. The approach is simulation-based and involves the use of Markov chain Monte Carlo techniques. The usual iterative weighted least squares algorithm is extended to include a sampling step based on the Metropolis–Hastings algorithm thus providing a unified iterative scheme. Non-normal prior distributions for the regression coefficients and for the random effects distribution are considered. Random effect structures with nesting required by longitudinal studies are also considered. Particular interests concern the significance of regression coefficients and assessment of the form of the random effects. Extensions to unknown scale parameters, unknown link functions, survival and frailty models are outlined.

12.
Albert and Chib introduced a fully Bayesian method to analyze data arising from the generalized linear model, using a Gibbs sampling algorithm facilitated by latent variables. Recently, Cowles proposed an alternative algorithm to accelerate the convergence of the Albert–Chib algorithm. The novelty in this latter algorithm is achieved by using a Hastings algorithm to generate latent variables and bin boundary parameters jointly, instead of individually from their respective full conditionals. In the same spirit, we reparameterize the cumulative-link generalized linear model to accelerate the convergence of Cowles' algorithm even further. One important advantage of our method is that for the three-bin problem it does not require the Hastings algorithm. In addition, for problems with more than three bins, while the Hastings algorithm is required, we provide a proposal density based on the Dirichlet distribution which is more natural than the truncated normal density used in the competing algorithm. Also, using diagnostic procedures recommended in the literature for Markov chain Monte Carlo algorithms (both single and multiple runs), we show that our algorithm is substantially better than the recently proposed one: it provides faster convergence and smaller autocorrelations between the iterates. Using the probit link function, extensive results are obtained for the three-bin and five-bin multinomial ordinal data problems.
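For context, the Albert–Chib data-augmentation scheme that both algorithms build on can be sketched for the simplest (binary probit) case, with a flat prior on the coefficients and synthetic data; this is a sketch of the baseline scheme only, not of the accelerated cumulative-link algorithm proposed here.

```python
import numpy as np
from scipy import stats

def albert_chib_probit(y, X, n_iter=2000, rng=None):
    """Albert-Chib data-augmentation Gibbs sampler for a binary probit model.
    Latent z_i ~ N(x_i' beta, 1), truncated to (0, inf) if y_i = 1 and to
    (-inf, 0) if y_i = 0; given z, beta has a normal full conditional
    (a flat prior on beta is assumed here)."""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        mu = X @ beta
        # Sample the latent variables from the appropriate truncated normals
        # by inverting the standard normal CDF.
        u = rng.uniform(size=n)
        lower = stats.norm.cdf(-mu)  # P(z_i < 0 | mu_i)
        z = np.where(
            y == 1,
            mu + stats.norm.ppf(lower + u * (1.0 - lower)),  # truncated to (0, inf)
            mu + stats.norm.ppf(u * lower),                   # truncated to (-inf, 0)
        )
        # Full conditional of beta under a flat prior: N((X'X)^{-1} X'z, (X'X)^{-1}).
        beta_hat = XtX_inv @ X.T @ z
        beta = rng.multivariate_normal(beta_hat, XtX_inv)
        draws[it] = beta
    return draws

# Tiny synthetic example.
rng = np.random.default_rng(42)
X = np.column_stack([np.ones(500), rng.standard_normal(500)])
true_beta = np.array([-0.5, 1.0])
y = (X @ true_beta + rng.standard_normal(500) > 0).astype(int)
draws = albert_chib_probit(y, X)
print("posterior means:", draws[500:].mean(axis=0).round(2), "truth:", true_beta)
```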

13.
Particle Markov Chain Monte Carlo methods are used to carry out inference in nonlinear and non-Gaussian state space models, where the posterior density of the states is approximated using particles. Current approaches usually perform Bayesian inference using either a particle marginal Metropolis–Hastings (PMMH) algorithm or a particle Gibbs (PG) sampler. This paper shows how the two ways of generating variables mentioned above can be combined in a flexible manner to give sampling schemes that converge to a desired target distribution. The advantage of our approach is that the sampling scheme can be tailored to obtain good results for different applications. For example, when some parameters and the states are highly correlated, such parameters can be generated using PMMH, while all other parameters are generated using PG because it is easier to obtain good proposals for the parameters within the PG framework. We derive some convergence properties of our sampling scheme and also investigate its performance empirically by applying it to univariate and multivariate stochastic volatility models and comparing it to other PMCMC methods proposed in the literature.

14.
Markov chain Monte Carlo (MCMC) methods, including the Gibbs sampler and the Metropolis–Hastings algorithm, are very commonly used in Bayesian statistics for sampling from complicated, high-dimensional posterior distributions. A continuing source of uncertainty is how long such a sampler must be run in order to converge approximately to its target stationary distribution. A method has previously been developed to compute rigorous theoretical upper bounds on the number of iterations required to achieve a specified degree of convergence in total variation distance by verifying drift and minorization conditions. We propose the use of auxiliary simulations to estimate the numerical values needed in this theorem. Our simulation method makes it possible to compute quantitative convergence bounds for models for which the requisite analytical computations would be prohibitively difficult or impossible. On the other hand, although our method appears to perform well in our example problems, it cannot provide the guarantees offered by analytical proof.

15.
The estimation of multinomial logit models today is routine. With this increased use has also come a need for testing. A test to determine whether choices can be combined is important. This paper presents a likelihood ratio test for combining choices in multinomial logit models. The use of the test is demonstrated with a simple example.
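One concrete version of such a test, Cramer and Ridder's pooling test (which may differ in detail from the test presented in this paper), can be sketched as follows: the unrestricted multinomial logit fit is compared with a fit in which the two alternatives are merged, and the restricted log-likelihood is corrected by the constant split of the merged cell implied by the null hypothesis of equal slopes. The sketch assumes statsmodels is available; the data and function names are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def lr_pool_test(y, x, cat_a, cat_b):
    """Likelihood-ratio test of the hypothesis that alternatives cat_a and cat_b
    of a multinomial logit model can be combined (equal slope coefficients,
    free intercepts), in the spirit of Cramer and Ridder's pooling test."""
    X = sm.add_constant(x, has_constant="add")
    # Unrestricted model: all categories kept distinct.
    ll_u = sm.MNLogit(y, X).fit(disp=0).llf
    # Restricted model: refit with the two categories merged ...
    y_pooled = np.where(y == cat_b, cat_a, y)
    ll_pooled = sm.MNLogit(y_pooled, X).fit(disp=0).llf
    # ... plus the constant split of the merged cell implied by the null.
    n_a, n_b = np.sum(y == cat_a), np.sum(y == cat_b)
    ll_r = ll_pooled + n_a * np.log(n_a / (n_a + n_b)) + n_b * np.log(n_b / (n_a + n_b))
    lr = 2.0 * (ll_u - ll_r)
    df = X.shape[1] - 1  # number of slope coefficients restricted to be equal
    return lr, df, stats.chi2.sf(lr, df)

# Small synthetic example with three choices and one covariate; choices 1 and 2
# share the same slope, so the null of poolability holds.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
util = np.column_stack([np.zeros(1000), 0.5 + 1.2 * x, -0.3 + 1.2 * x])
probs = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])
lr, df, pval = lr_pool_test(y, x, cat_a=1, cat_b=2)
print(f"LR = {lr:.2f}, df = {df}, p-value = {pval:.3f}")
```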

16.
We propose a new type of stochastic ordering which imposes a monotone tendency in the differences between one multinomial probability and a known standard one. An estimation procedure is proposed for the constrained maximum likelihood estimate, and the asymptotic null distribution is then derived for the likelihood ratio test statistic for testing equality of two multinomial distributions against the new stochastic ordering. An alternative test based on the Neyman modified minimum chi-square estimator is also discussed. These tests are illustrated with a set of heart disease data.

17.
Because spatial regression models incorporate spatial (geographic) information, their parameter estimation becomes complicated. Since maximum likelihood is the method mainly used, it is commonly believed that least squares plays no role in estimating the parameters of spatial regression models. An analysis of the parameter-estimation techniques for spatial regression models shows, however, that least squares and maximum likelihood are used to estimate different parameters of the model, and only by combining the two can the full set of parameters be estimated quickly and efficiently. Mathematical argument shows that the least squares estimators of the spatial regression model's regression parameters are best linear unbiased estimators. Significance tests of the regression parameters can be carried out under the condition that the estimators are normally distributed, whereas the spatial-effect parameters cannot be tested in this way.

18.
Conventional parametric multinomial logit models are in general not sufficient for capturing the complex structure of electorates. In this paper, we use a semiparametric multinomial logit model to analyze party preferences along individuals' characteristics, using a sample of the German electorate in 2006. Germany is a particularly strong case for more flexible nonparametric approaches in this context: owing to reunification and the preceding different political histories, the composition of the electorate is very complex and nuanced. Our analysis reveals strong interactions of the covariates age and income, and highly nonlinear shapes of the factor impacts on each party's likelihood of being supported. Notably, we develop and provide a smoothed likelihood estimator for semiparametric multinomial logit models, which can also be applied in other fields, such as marketing.

19.
We consider exact and approximate Bayesian computation in the presence of latent variables or missing data. Specifically, we explore the application of a posterior predictive distribution formula derived in Sweeting and Kharroubi (2003), which is a particular form of Laplace approximation, both as an importance function and as a proposal distribution. We show that this formula provides a stable importance function for use within poor man's data augmentation schemes and that it can also be used as a proposal distribution within a Metropolis–Hastings algorithm for models that are not analytically tractable. We illustrate both uses in the case of a censored regression model and a normal hierarchical model, with both normal and Student-t distributed random effects. Although the predictive distribution formula is motivated by regular asymptotic theory, it is not necessary that the likelihood have a closed form or possess a local maximum.
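A minimal sketch of the importance-sampling use, assuming a generic Laplace-type proposal built from a numerical mode search and the BFGS inverse-Hessian approximation rather than the Sweeting–Kharroubi predictive formula itself; the target log-posterior is a placeholder.

```python
import numpy as np
from scipy import optimize, stats

def log_post(t):
    # Placeholder unnormalized log-posterior: a mildly banana-shaped 2-D density.
    return -0.125 * t[0] ** 2 - 0.5 * (t[1] - t[0] ** 2) ** 2

def laplace_importance_sampling(log_post, theta0, n_draws=5000, df=5, rng=None):
    """Importance sampling with a Laplace-style proposal: a multivariate Student-t
    centred at the posterior mode, with scale taken from the (approximate)
    inverse Hessian returned by BFGS."""
    rng = rng or np.random.default_rng(0)
    res = optimize.minimize(lambda t: -log_post(t), theta0, method="BFGS")
    mode, scale = res.x, res.hess_inv  # BFGS inverse-Hessian approximation
    prop = stats.multivariate_t(loc=mode, shape=scale, df=df)
    draws = prop.rvs(size=n_draws, random_state=rng)
    log_w = np.apply_along_axis(log_post, 1, draws) - prop.logpdf(draws)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                    # self-normalised importance weights
    ess = 1.0 / np.sum(w ** 2)      # effective sample size diagnostic
    return draws, w, ess

draws, w, ess = laplace_importance_sampling(log_post, theta0=np.array([1.0, 0.5]))
post_mean = (w[:, None] * draws).sum(axis=0)
print("importance-sampling posterior mean:", post_mean.round(3), " ESS:", round(ess))
```

A Student-t proposal rather than a normal one is used so that the proposal tails dominate those of the target, which helps keep the importance weights stable.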

20.
Data sets with excess zeroes are frequently analyzed in many disciplines. A common framework used to analyze such data is the zero-inflated (ZI) regression model, which mixes a degenerate distribution with point mass at zero with a non-degenerate distribution. The estimates from ZI models quantify the effects of covariates on the means of latent random variables, which are often not the quantities of primary interest. Recently, marginalized zero-inflated Poisson (MZIP; Long et al. [A marginalized zero-inflated Poisson regression model with overall exposure effects. Stat. Med. 33 (2014), pp. 5151–5165]) and negative binomial (MZINB; Preisser et al., 2016) models have been introduced that model the mean response directly. These models yield covariate effects that have simple interpretations which are, for many applications, more appealing than those available from ZI regression. This paper outlines a general framework for marginalized zero-inflated models where the latent distribution is a member of the exponential dispersion family, focusing on common distributions for count data. In particular, our discussion includes the marginalized zero-inflated binomial (MZIB) model, which has not been discussed previously. The details of maximum likelihood estimation via the EM algorithm are presented, and the properties of the estimators as well as Wald and likelihood ratio-based inference are examined via simulation. Two examples illustrate the advantages of MZIP, MZINB, and MZIB models for practical data analysis.
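As an illustration of the marginalized parameterization for the Poisson case, the sketch below puts the zero-inflation part on the logit scale and the marginal mean on the log scale, so the conditional Poisson mean is the marginal mean divided by the probability of the non-degenerate component, and fits the model by direct numerical maximum likelihood rather than the EM algorithm described in the paper. The data and names are synthetic and illustrative.

```python
import numpy as np
from scipy import optimize, special

def mzip_negloglik(params, y, X, Z):
    """Negative log-likelihood of a marginalized zero-inflated Poisson model:
    logit(psi_i) = z_i' gamma (zero-inflation part) and log(nu_i) = x_i' alpha,
    where nu_i is the *marginal* mean, so the Poisson mean of the non-degenerate
    component is mu_i = nu_i / (1 - psi_i)."""
    p = X.shape[1]
    alpha, gamma = params[:p], params[p:]
    nu = np.exp(X @ alpha)             # marginal (overall) mean
    psi = special.expit(Z @ gamma)     # zero-inflation probability
    mu = nu / (1.0 - psi)              # conditional Poisson mean
    log_pois = -mu + y * np.log(mu) - special.gammaln(y + 1)
    # Zeros may come either from the point mass or from the Poisson component.
    ll_zero = np.log(psi + (1.0 - psi) * np.exp(-mu))
    ll_pos = np.log(1.0 - psi) + log_pois
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

# Synthetic example (no safeguards against extreme parameter values are included).
rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
Z = X.copy()
alpha_true, gamma_true = np.array([0.5, 0.4]), np.array([-1.0, 0.3])
psi = special.expit(Z @ gamma_true)
nu = np.exp(X @ alpha_true)
y = np.where(rng.uniform(size=n) < psi, 0, rng.poisson(nu / (1 - psi)))
res = optimize.minimize(mzip_negloglik, np.zeros(4), args=(y, X, Z), method="BFGS")
print("alpha-hat:", res.x[:2].round(2), " gamma-hat:", res.x[2:].round(2))
```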
