Similar Articles
20 similar articles found (search time: 15 ms)
1.
We establish weak and strong posterior consistency of Gaussian process priors studied by Lenk [1988. The logistic normal distribution for Bayesian, nonparametric, predictive densities. J. Amer. Statist. Assoc. 83 (402), 509–516] for density estimation. Weak consistency is related to the support of a Gaussian process in the sup-norm topology, which is explicitly identified for many covariance kernels. In fact we show that this support is the space of all continuous functions when the usual covariance kernels are chosen and an appropriate prior is used on the smoothing parameters of the covariance kernel. We then show that a large class of Gaussian process priors achieves weak as well as strong posterior consistency (under some regularity conditions) at true densities that are either continuous or piecewise continuous.
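Lenk's construction turns a Gaussian process W into a random density f = exp(W) / ∫ exp(W). A minimal sketch of one prior draw, assuming a squared-exponential covariance kernel and a grid approximation of the normalizing integral (both choices are ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid on [0, 1] and a squared-exponential covariance kernel
# (the kernel choice is illustrative; the construction allows many kernels).
x = np.linspace(0.0, 1.0, 200)
ell, tau = 0.1, 1.0
K = tau**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)
K += 1e-8 * np.eye(len(x))           # jitter for numerical stability

# One draw W from the Gaussian process prior, evaluated on the grid
W = rng.multivariate_normal(np.zeros(len(x)), K)

# Logistic-normal transform: a random density f = exp(W) / integral(exp(W))
unnorm = np.exp(W)
f = unnorm / np.trapz(unnorm, x)
```

Each call produces one random density; repeating the draw illustrates the spread of the prior over continuous densities.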

2.
Generalized linear models with random effects and/or serial dependence are commonly used to analyze longitudinal data. However, the computation and interpretation of marginal covariate effects can be difficult. This led Heagerty (1999, 2002) to propose models for longitudinal binary data in which a logistic regression is first used to explain the average marginal response. The model is then completed by introducing a conditional regression that allows for the longitudinal, within-subject dependence, either via random effects or by regressing on previous responses. In this paper, the authors extend the work of Heagerty to handle multivariate longitudinal binary response data, using a triple of regression models that directly model the marginal mean response while taking into account dependence across time and across responses. Markov chain Monte Carlo methods are used for inference. Data from the Iowa Youth and Families Project are used to illustrate the methods.

3.
The estimation of random effects in frailty models is an important problem in survival analysis. Testing for the presence of random effects can be essential to improving model efficiency. Posterior consistency in dispersion parameters and coefficients of the frailty model was demonstrated in theory and simulations using the posterior induced by Cox’s partial likelihood and simple priors. We also conducted simulation studies to test for the presence of random effects; the proposed method performed well in several simulations. Data analysis was also conducted. The proposed method is easily tractable and can be used to develop various methods for Bayesian inference in frailty models.

4.
Various nonparametric approaches for Bayesian spectral density estimation of stationary time series have been suggested in the literature, mostly based on the Whittle likelihood approximation. A generalization of this approximation involving a nonparametric correction of a parametric likelihood has been proposed in the literature with a proof of posterior consistency for spectral density estimation in combination with the Bernstein–Dirichlet process prior for Gaussian time series. In this article, we will extend the posterior consistency result to non-Gaussian time series by employing a general consistency theorem for dependent data and misspecified models. As a special case, posterior consistency for the spectral density under the Whittle likelihood is also extended to non-Gaussian time series. Small sample properties of this approach are illustrated with several examples of non-Gaussian time series.
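The Whittle approximation at the core of these methods scores a candidate spectral density f against the periodogram I at the nonzero Fourier frequencies, via −Σ_j [log f(λ_j) + I(λ_j)/f(λ_j)]. A sketch under an AR(1) spectral density (the parametric family here is our illustrative choice, not the article's nonparametric correction):

```python
import numpy as np

def whittle_loglik(ts, spec_density):
    """Whittle log-likelihood: minus the sum over nonzero Fourier frequencies
    of log f(lambda_j) + I(lambda_j) / f(lambda_j), I being the periodogram."""
    n = len(ts)
    freqs = 2 * np.pi * np.arange(1, (n - 1) // 2 + 1) / n
    dft = np.fft.fft(ts)[1:(n - 1) // 2 + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)        # periodogram
    f = spec_density(freqs)
    return -np.sum(np.log(f) + I / f)

def ar1_spec(phi, sigma2):
    # Spectral density of an AR(1) process with coefficient phi
    return lambda lam: sigma2 / (2 * np.pi * (1 - 2 * phi * np.cos(lam) + phi**2))

rng = np.random.default_rng(1)
n, phi = 2000, 0.6
eps = rng.standard_normal(n)
ts = np.empty(n)
ts[0] = eps[0]
for t in range(1, n):
    ts[t] = phi * ts[t - 1] + eps[t]

# The Whittle likelihood should prefer the true coefficient over a poor one
ll_true = whittle_loglik(ts, ar1_spec(0.6, 1.0))
ll_bad = whittle_loglik(ts, ar1_spec(-0.5, 1.0))
```

The nonparametrically corrected likelihood in the article replaces the fixed parametric f with a Bernstein–Dirichlet adjustment of it, but the frequency-domain scoring above is the common building block.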

5.

Clustered observations such as longitudinal data are often analysed with generalized linear mixed models (GLMM). Approximate Bayesian inference for GLMMs with normally distributed random effects can be done using integrated nested Laplace approximations (INLA), which is in general known to yield accurate results. However, INLA is known to be less accurate for GLMMs with binary response. For longitudinal binary response data it is common that patients do not change their health state during the study period. In this case the grouping covariate perfectly predicts a subset of the response, which implies a monotone likelihood with diverging maximum likelihood (ML) estimates for cluster-specific parameters. This is known as quasi-complete separation. In this paper we demonstrate, based on longitudinal data from a randomized clinical trial and two simulations, that the accuracy of INLA decreases with increasing degree of cluster-specific quasi-complete separation. Comparing parameter estimates by INLA, Markov chain Monte Carlo sampling and ML shows that INLA increasingly deviates from the other methods in such a scenario.
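Quasi-complete separation can be seen in miniature: when an indicator perfectly predicts the binary response, the logistic log-likelihood is monotone in the corresponding coefficient, so the ML estimate diverges. A small numerical illustration (toy data, not the trial data used in the paper):

```python
import numpy as np

# Perfectly separated data: z = 1 always gives y = 1, z = 0 always gives y = 0.
z = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def loglik(beta, intercept):
    # Bernoulli log-likelihood of the logistic regression y ~ intercept + beta*z
    eta = intercept + beta * z
    return np.sum(y * eta - np.log1p(np.exp(eta)))

# Along the path (intercept, beta) = (-b/2, b) the log-likelihood keeps
# increasing toward 0: there is no finite maximiser.
lls = [loglik(b, intercept=-b / 2) for b in (1.0, 5.0, 10.0, 50.0)]
```

This monotone-likelihood geometry is exactly what makes the Laplace approximations inside INLA (and plain ML) unstable for nearly separated clusters.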

6.
The Bayes factor is a key tool in hypothesis testing. Nevertheless, the important issue of which priors should be used to develop objective Bayes factors remains open. The authors consider this problem in the context of the one-way random effects model. They use concepts such as orthogonality, predictive matching and invariance to justify a specific form of the priors for the common parameters, and derive the intrinsic and divergence-based priors for the new parameter. The authors show that both the intrinsic and the divergence-based priors produce consistent Bayes factors. They illustrate the methods and compare them with other proposals.

7.
The random effects in a gamma process are introduced through its scale parameter. However, the scale parameter affects both the mean and the variance of the process, so both the variation of the degradation rates across units and the variation of the degradation increments within units are expected to be large. For some products, the random effects affect only the rate, or only the volatility, of the process. Two modifications of the parameter structure of the gamma process are therefore proposed: one in which the random effects affect only the volatility, and one in which they affect only the rate. A Bayesian estimation approach is provided and implemented in two case studies.
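A sketch of the baseline formulation the abstract starts from: gamma-process degradation paths whose unit-specific scale parameter carries the random effect, so that both the rate and the volatility vary across units. The inverse-gamma form of the random effect and all parameter values are our illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def gamma_paths(n_units, times, shape_rate, scale_a, scale_b):
    """Simulate gamma-process degradation paths X(t) with independent
    increments X(t) - X(s) ~ Gamma(shape_rate * (t - s), theta_i), where the
    unit-specific scale theta_i is a random effect (drawn here as an
    inverse-gamma variate, purely for illustration)."""
    dt = np.diff(times, prepend=0.0)
    paths = np.empty((n_units, len(times)))
    for i in range(n_units):
        theta = 1.0 / rng.gamma(scale_a, 1.0 / scale_b)   # random scale
        incr = rng.gamma(shape_rate * dt, theta)          # gamma increments
        paths[i] = np.cumsum(incr)
    return paths

times = np.linspace(0.1, 10.0, 100)
paths = gamma_paths(n_units=50, times=times, shape_rate=2.0,
                    scale_a=5.0, scale_b=4.0)
```

Because theta enters both the mean (shape * theta per unit time) and the variance (shape * theta^2), drawing it per unit spreads both quantities, which is the feature the paper's two modified parameterizations decouple.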

8.
In this paper we consider the regression problem for random sets of the Boolean-model type. Regression modelling of Boolean random sets using explanatory variables is classified, according to the type of these variables, into propagation, growth, or propagation-growth models. Maximum likelihood estimation of the parameters of the propagation model is explained in detail for some specific link functions using three methods. These three estimation methods are also compared in a simulation study.

9.
The two-sample problem of inferring whether two random samples have equal underlying distributions is formulated within the Bayesian framework as a comparison of two posterior predictive inferences rather than as a problem of model selection. The suggested approach is argued to be particularly advantageous in problems where the objective is to evaluate evidence in support of equality, along with being robust to the priors used and being capable of handling improper priors. Our approach is contrasted with the Bayes factor in a normal setting and finally, an additional example is considered where the observed samples are realizations of Markov chains.

10.
Previous approaches to establishing posterior consistency of Bayesian regression problems have used general theorems that involve verifying sufficient conditions for posterior consistency. In this article, we take a direct approach: we compute the posterior density explicitly and evaluate its asymptotic behaviour. For this purpose, we use a sample-size-dependent prior based on a truncated regression function and evaluate the asymptotic properties of the resulting posterior as the sample size increases. We study posterior consistency through a concept called posterior density consistency. As an application, we show that the posterior density of an orthogonal semiparametric regression model is consistent.

11.
A new estimator is proposed for linear models with equi-correlated random errors. Consistency properties of the proposed estimator and of the ordinary least squares estimator are studied. It is shown that, under some mild conditions, the new estimator has smaller variance than the usual least squares estimator. In addition, the new estimator tends to be weakly consistent in many cases where the usual least squares estimator is not.

12.
Min Wang & Xiaoqian Sun, Statistics, 2013, 47(5): 1104–1115
In practical situations, most experimental designs often yield unbalanced data which have different numbers of observations per unit because of cost constraints, missing data, etc. In this paper, we consider the Bayesian approach to hypothesis testing or model selection under the one-way unbalanced fixed-effects analysis-of-variance (ANOVA) model. We adopt Zellner's g-prior with the beta-prime distribution for g, which results in an explicit closed-form expression of the Bayes factor without integral representation. Furthermore, we investigate the model selection consistency of the Bayes factor under three different asymptotic scenarios: either the number of units goes to infinity, the number of observations per unit goes to infinity, or both go to infinity. The results presented extend some existing ones of the Bayes factor for the balanced ANOVA models in the literature.
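As background for the g-prior construction: with a *fixed* g, Zellner's g-prior yields a closed-form Bayes factor for a Gaussian linear model with k covariates against the intercept-only null. The paper's contribution is to integrate g against a beta-prime prior (and to treat the unbalanced ANOVA case); the fixed-g building block can be sketched as:

```python
import numpy as np

def g_prior_bf10(n, k, r2, g):
    """Bayes factor (model with k covariates and coefficient of determination
    r2, vs. intercept-only null) under Zellner's g-prior with a fixed g.
    The paper instead places a beta-prime prior on g, which adds one more
    (closed-form, per the abstract) integration over g."""
    return (1 + g) ** ((n - 1 - k) / 2) / (1 + g * (1 - r2)) ** ((n - 1) / 2)

# With a strong fit the Bayes factor favours the full model; with no fit it
# favours the null (it reduces to (1+g)^(-k/2) when r2 = 0).
bf_strong = g_prior_bf10(n=50, k=3, r2=0.6, g=50)
bf_null = g_prior_bf10(n=50, k=3, r2=0.0, g=50)
```

The penalty factor (1+g)^(-k/2) at r2 = 0 is what drives model selection consistency arguments of the kind the paper studies under its three asymptotic regimes.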

13.
For right-censored data, Zeng et al. [Semiparametric transformation models with random effects for clustered data. Statist. Sin. 2008;18:355–377] proposed a class of semiparametric transformation models with random effects to formulate the effects of possibly time-dependent covariates on clustered failure times. In this article, we demonstrate that the approach of Zeng et al. can be extended to analyse clustered doubly censored data. The asymptotic properties of the nonparametric maximum likelihood estimators of the model parameters are derived. A simulation study is conducted to investigate the performance of the proposed estimators.

14.
Practical Bayesian data analysis involves manipulating and summarizing simulations from the posterior distribution of the unknown parameters. By manipulation we mean computing posterior distributions of functions of the unknowns, and generating posterior predictive distributions. The results need to be summarized both numerically and graphically. We introduce, and implement in R, an object-oriented programming paradigm based on a random variable object type that is implicitly represented by simulations. This makes it possible to define vector and array objects that may contain both random and deterministic quantities, and syntax rules that allow these objects to be treated like any numeric vectors or arrays, providing a solution to various problems in Bayesian computing that involve posterior simulations. We illustrate the use of this new programming environment with examples of Bayesian computing, demonstrating missing-value imputation, nonlinear summary of regression predictions, and posterior predictive checking.
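The idea of a simulation-backed random variable object can be sketched outside R as well. Below is a Python analogue (the class name and its small API are ours, not the authors' R implementation): arithmetic on the objects propagates through the underlying posterior simulations, so posterior summaries of nonlinear functions come out by ordinary syntax.

```python
import numpy as np

class Rv:
    """A random variable implicitly represented by posterior simulations."""

    def __init__(self, sims):
        self.sims = np.asarray(sims, dtype=float)

    def _lift(self, other):
        # Allow mixing Rv objects with plain (deterministic) numbers
        return other.sims if isinstance(other, Rv) else other

    def __add__(self, other):
        return Rv(self.sims + self._lift(other))

    def __radd__(self, other):
        return self.__add__(other)

    def __mul__(self, other):
        return Rv(self.sims * self._lift(other))

    def __pow__(self, p):
        return Rv(self.sims ** p)

    def mean(self):
        return float(self.sims.mean())

    def quantile(self, q):
        return float(np.quantile(self.sims, q))

rng = np.random.default_rng(3)
theta = Rv(rng.normal(2.0, 0.1, size=10_000))   # posterior sims of theta

# Posterior of a nonlinear function of theta, via ordinary arithmetic syntax
phi = theta ** 2 + 1
```

Every operation returns another `Rv`, so `phi.mean()` and `phi.quantile(...)` summarize the induced posterior of theta^2 + 1 without any explicit loop over simulations.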

15.
Bayesian random effects models may be fitted using Gibbs sampling, but the Gibbs sampler can be slow mixing due to what might be regarded as lack of model identifiability. This slow mixing substantially increases the number of iterations required during Gibbs sampling. We present an analysis of data on immunity after Rubella vaccinations which results in a slow-mixing Gibbs sampler. We show that this problem of slow mixing can be resolved by transforming the random effects and then, if desired, expressing their joint prior distribution as a sequence of univariate conditional distributions. The resulting analysis shows that the decline in antibodies after Rubella vaccination is relatively shallow compared to the decline in antibodies which has been shown after Hepatitis B vaccination.

16.
This article extends a random preventive maintenance scheme, called the repair alert model, to the case where environmental variables affect system lifetimes. It can be used for implementing age-dependent maintenance policies on engineering devices. In other words, consider a device that works at a job and is subject to failure at a random time X; the maintenance crew can avoid the failure by a possible replacement at some random time Z. The new model is flexible enough to include covariates with both fixed and random effects. The problem of estimating the parameters is also investigated in detail. Here, the observations are in the form of random signs censoring data (RSCD) with covariates. This article therefore generalizes the statistical inferences derived in past literature on the basis of RSCD without covariates. To do this, it is assumed that the system lifetime distribution belongs to the log-location-scale family of distributions. A real dataset is also analyzed on the basis of the results obtained.

17.
We develop a hierarchical Bayesian approach for inference in random coefficient dynamic panel data models. Our approach allows for the initial values of each unit's process to be correlated with the unit-specific coefficients. We impose a stationarity assumption for each unit's process by assuming that the unit-specific autoregressive coefficient is drawn from a logit-normal distribution. Our method is shown to have favorable properties compared to the mean group estimator in a Monte Carlo study. We apply our approach to analyze energy and protein intakes among individuals from the Philippines.

18.
By means of the Hausdorff α-entropy introduced by Xing and Ranneby (2009) [Sufficient conditions for Bayesian consistency. J. Statist. Plann. Inference 139: 2479–2489], we give two theorems on rates of in-probability convergence of posterior distributions. The result is applied in the study of the Bernstein polynomial priors.

19.
Likelihood-based marginalized models using random effects have become popular for analyzing longitudinal categorical data. These models permit direct interpretation of the marginal mean parameters and characterize the serial dependence of longitudinal outcomes using random effects [12,22]. In this paper, we propose a model that expands the use of previous models to accommodate longitudinal nominal data. Random effects with a new covariance matrix built from a Kronecker product composition are used to explain serial and categorical dependence. A quasi-Newton algorithm is developed for estimation. The proposed methods are illustrated with a real data set and compared with other standard methods.
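A Kronecker-product covariance keeps serial and categorical dependence separable: the covariance between occasion t, category c and occasion t′, category c′ factors as A[t, t′] · B[c, c′]. A sketch with illustrative choices of A (AR(1)-type serial correlation) and B (across-category covariance), neither taken from the paper:

```python
import numpy as np

# Serial correlation across T time points (AR(1)-type, illustrative)
T, rho = 4, 0.5
A = rho ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))

# Dependence across C = 2 response categories (illustrative values)
B = np.array([[1.0, 0.3],
              [0.3, 1.0]])

# Kronecker composition: one covariance over all T*C outcomes, with
# separable serial and categorical structure
Sigma = np.kron(A, B)
```

The payoff is parsimony: Sigma has T*C rows but only the free parameters of A and B, and positive definiteness of both factors guarantees positive definiteness of the composition.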

20.
A question of fundamental importance for meta-analysis of heterogeneous multidimensional data studies is how to form a best consensus estimator of common parameters, and what uncertainty to attach to the estimate. This issue is addressed for a class of unbalanced linear designs which include classical growth curve models. The solution obtained is similar to the popular DerSimonian and Laird (1986) method for a simple meta-analysis model. By using almost unbiased variance estimators, an estimator of the covariance matrix of this procedure is derived. The combination of these methods is illustrated with two examples and compared via simulation.
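The DerSimonian and Laird (1986) method referenced above has a short closed form in the simple (univariate) meta-analysis case: estimate the between-study variance τ² from Cochran's Q, then reweight. A sketch of that simple case (the paper's multivariate, growth-curve extension is more involved):

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird estimate for a simple random-effects meta-analysis:
    y are study-level estimates, v their within-study variances."""
    w = 1.0 / v
    ybar_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar_fixed) ** 2)           # Cochran's Q
    k = len(y)
    # Method-of-moments between-study variance, truncated at zero
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                       # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return mu, se, tau2

# Toy data: five study estimates with their within-study variances
y = np.array([0.4, 0.1, 0.7, 0.3, 0.5])
v = np.array([0.04, 0.05, 0.03, 0.06, 0.04])
mu, se, tau2 = dersimonian_laird(y, v)
```

Heterogeneity (tau2 > 0) inflates the standard error relative to the fixed-effect analysis, which is the behaviour the paper's covariance-matrix estimator generalizes to the multidimensional setting.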


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)