Similar Literature
20 similar documents found.
1.
We show how to make inferences about a finite population proportion using data from a possibly biased sample. In the absence of any selection bias or survey weights, a simple ignorable selection model, which assumes that the binary responses are independent and identically distributed Bernoulli random variables, is not unreasonable. However, this ignorable selection model is inappropriate when there is a selection bias in the sample. We assume that the survey weights (or their reciprocals, which we call ‘selection’ probabilities) are available, but that there is no simple relation between the binary responses and the selection probabilities. To capture the selection bias, we assume that there is some correlation between the binary responses and the selection probabilities (e.g., there may be a somewhat higher/lower proportion of positive responses among the sampled units than among the nonsampled units). We use a Bayesian nonignorable selection model to accommodate the selection mechanism, and fit it using Markov chain Monte Carlo methods. We illustrate our method using numerical examples obtained from 1995 NHIS data.
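For reference, a minimal sketch of the classical design-weighted (Hájek) estimator of a proportion, which uses the survey weights alone and ignores any residual nonignorable selection; the function name and interface are illustrative, not from the paper:

```python
import numpy as np

def hajek_proportion(y, w):
    """Design-weighted (Hajek) estimate of a population proportion:
    sum(w_i * y_i) / sum(w_i), where w_i is the survey weight
    (the reciprocal of the selection probability)."""
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    return np.sum(w * y) / np.sum(w)

# e.g. hajek_proportion([1, 0, 1], [10.0, 2.5, 4.0])
```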

2.
We extend the Bayesian Model Averaging (BMA) framework to dynamic panel data models with endogenous regressors using a Limited Information Bayesian Model Averaging (LIBMA) methodology. Monte Carlo simulations confirm the asymptotic performance of our methodology in both model averaging and selection, with high posterior inclusion probabilities for all relevant regressors and parameter estimates very close to their true values. In addition, we illustrate the use of LIBMA by estimating a dynamic gravity model for bilateral trade. Once model uncertainty, dynamics, and endogeneity are accounted for, we find several factors that are robustly correlated with bilateral trade. We also find that applying methodologies that do not account for either dynamics or endogeneity (or both) results in different sets of robust determinants.

3.
The purpose of this paper is threefold. First, we obtain the asymptotic properties of the modified model selection criteria proposed by Hurvich et al. (1990, Improved estimators of Kullback-Leibler information for autoregressive model selection in small samples, Biometrika 77, 709–719) for autoregressive models. Second, we provide some insight into the better performance of these modified criteria. Third, we extend the modification introduced by these authors to model selection criteria commonly used in the class of self-exciting threshold autoregressive (SETAR) time series models. We show the improvements of the modified criteria in their finite-sample performance. In particular, for small and medium sample sizes the frequency of selecting the true model improves for the consistent criteria, and the root mean square error (RMSE) of prediction improves for the efficient criteria. These results are illustrated via simulation with SETAR models in which we assume that the threshold and the parameters are unknown.
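To make the flavour of the correction concrete, a minimal sketch of bias-corrected AIC for AR order selection, using the generic small-sample correction AICc = AIC + 2k(k+1)/(n-k-1); the exact variant studied in the paper may differ, and the conditional least-squares fitting routine is illustrative:

```python
import numpy as np

def fit_ar(x, p):
    """Fit an AR(p) model with intercept by conditional least squares;
    return the residual variance and the parameter count k."""
    n = len(x)
    if p == 0:
        resid = x - x.mean()
        return resid @ resid / n, 1
    X = np.column_stack([np.ones(n - p)] +
                        [x[p - j - 1:n - j - 1] for j in range(p)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid / len(y), p + 1

def aicc(x, p):
    """AIC with the generic small-sample correction (assumed form):
    AICc = AIC + 2k(k+1)/(n - k - 1)."""
    sigma2, k = fit_ar(x, p)
    n = len(x)
    aic = n * np.log(sigma2) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)
```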

4.
A Bayesian approach is presented for model selection in nonparametric regression with Gaussian errors and in binary nonparametric regression. A smoothness prior is assumed for each component of the model and the posterior probabilities of the candidate models are approximated using the Bayesian information criterion. We study the model selection method by simulation and show that it has excellent frequentist properties and gives improved estimates of the regression surface. All the computations are carried out efficiently using the Gibbs sampler.
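A minimal sketch of the standard BIC approximation to posterior model probabilities used above, p(M_k | y) ≈ exp(-BIC_k/2) p(M_k), renormalized over the candidate set; the helper name is illustrative:

```python
import numpy as np

def posterior_model_probs(bics, prior=None):
    """Approximate posterior model probabilities from BIC values:
    p(M_k | y) is proportional to exp(-BIC_k / 2) * p(M_k).
    Subtracting the minimum BIC first avoids numerical underflow."""
    bics = np.asarray(bics, dtype=float)
    prior = np.ones_like(bics) if prior is None else np.asarray(prior, dtype=float)
    w = np.exp(-(bics - bics.min()) / 2.0) * prior
    return w / w.sum()
```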

5.
We implement a joint model for mixed multivariate longitudinal measurements, applied to the prediction of time until lung transplant or death in idiopathic pulmonary fibrosis. Specifically, we formulate a unified Bayesian joint model for the mixed longitudinal responses and time-to-event outcomes. For the longitudinal model of continuous and binary responses, we investigate multivariate generalized linear mixed models using shared random effects. Longitudinal and time-to-event data are assumed to be independent conditional on available covariates and shared parameters. A Markov chain Monte Carlo algorithm, implemented in OpenBUGS, is used for parameter estimation. To illustrate practical considerations in choosing a final model, we fit 37 different candidate models using all possible combinations of random effects and employ the deviance information criterion to select a best-fitting model. We demonstrate the prediction of future event probabilities within a fixed time interval for patients, using baseline data, post-baseline longitudinal responses, and the time-to-event outcome. The performance of our joint model is also evaluated in simulation studies.
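A minimal sketch of how the deviance information criterion is computed from MCMC output, DIC = Dbar + pD with pD = Dbar - D(theta_bar); inputs are assumed to be posterior deviance draws and the deviance evaluated at the posterior mean:

```python
import numpy as np

def dic(deviance_draws, deviance_at_posterior_mean):
    """Deviance information criterion from MCMC output:
    DIC = Dbar + pD, where Dbar is the posterior mean deviance and
    pD = Dbar - D(theta_bar) is the effective number of parameters."""
    dbar = float(np.mean(deviance_draws))
    pd = dbar - deviance_at_posterior_mean
    return dbar + pd, pd
```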

6.
In recent years, numerous statisticians have focused their attention on the Bayesian analysis of different paired comparison models. Among these, the Davidson model is one of the best-known paired comparison models in the literature. In this article, we introduce an amendment to the Davidson model that accommodates the option of not distinguishing between the effects of two treatments when they are compared pairwise. Having made this amendment, we perform the Bayesian analysis of the Amended Davidson model using noninformative (uniform and Jeffreys’) and informative (Dirichlet–gamma–gamma) priors. To study the model and to perform the Bayesian analysis with the help of an example, we obtain the joint and marginal posterior distributions of the parameters, their posterior estimates, graphical presentations of the marginal densities, preference and predictive probabilities, and the posterior probabilities for comparing the treatment parameters.
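For context, a minimal sketch of the original Davidson (1970) model's probabilities for a single pairwise comparison, with worth parameters v_i, v_j and tie parameter nu; the amended model in the paper modifies this structure:

```python
import numpy as np

def davidson_probs(v_i, v_j, nu):
    """Davidson (1970) probabilities for one comparison of treatments i and j:
    P(i wins), P(j wins), P(tie), with worth parameters v_i, v_j > 0 and
    tie parameter nu >= 0 (nu = 0 recovers the Bradley-Terry model)."""
    tie = nu * np.sqrt(v_i * v_j)
    denom = v_i + v_j + tie
    return v_i / denom, v_j / denom, tie / denom
```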

7.
This article proposes a Bayesian approach that can simultaneously obtain Bayesian estimates of the unknown parameters and random effects in nonlinear reproductive dispersion mixed models (NRDMMs) for longitudinal data with nonignorable missing covariates and responses. A logistic regression model is employed to model the missing-data mechanisms for the missing covariates and responses. A hybrid sampling procedure combining the Gibbs sampler and the Metropolis-Hastings algorithm is presented to draw observations from the conditional distributions. Because the missing-data mechanism is not testable, we develop the logarithm of the pseudo-marginal likelihood, the deviance information criterion, the Bayes factor, and the pseudo-Bayes factor to compare several competing missing-data mechanism models for the NRDMMs considered here with nonignorable missing covariates and responses. Three simulation studies and a real example taken from the paediatric AIDS Clinical Trials Group (ACTG) are used to illustrate the proposed methodologies. Empirical results show that our proposed methods are effective in selecting missing-data mechanism models.
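A minimal sketch of the kind of logistic missing-data mechanism described above, in which the probability of missingness may depend on the possibly unobserved response itself (this dependence is what makes the mechanism nonignorable); the parameterization is illustrative, not the paper's:

```python
import numpy as np

def missing_prob(y, x, phi):
    """Logistic missing-data mechanism (illustrative parameterization):
    logit P(R = 1 | y, x) = phi0 + phi1 * y + phi2 * x.
    Dependence on the possibly unobserved response y is what makes
    the mechanism nonignorable."""
    eta = phi[0] + phi[1] * y + phi[2] * x
    return 1.0 / (1.0 + np.exp(-eta))
```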

8.
We propose Bayesian methods with five types of priors to estimate cell probabilities in an incomplete multi-way contingency table under nonignorable nonresponse. In this situation, the maximum likelihood (ML) estimates often fall on the boundary of the parameter space, which makes them unstable. To deal with such a multi-way table, we present an EM algorithm which generalizes the algorithm previously used for incomplete one-way tables. Three of the five types of priors were previously introduced, while the other two are newly proposed to reflect different response patterns between respondents and nonrespondents. Data analysis and simulation studies show that Bayesian estimates based on the three older priors can be worse than the ML estimates regardless of whether a boundary solution occurs, contrary to previous studies. The Bayesian estimates from the two new priors are most preferable when a boundary solution occurs. We provide an illustrative example using data from a study of the relationship between a mother's smoking and her newborn's weight.

9.
Measurement error is a commonly addressed problem in psychometrics and the behavioral sciences, particularly where gold-standard data either do not exist or are too expensive to obtain. The Bayesian approach can be utilized to adjust for the bias that results from measurement error in tests. Bayesian methods offer other practical advantages for the analysis of epidemiological data, including the possibility of incorporating relevant prior scientific information and the ability to make inferences that do not rely on large-sample assumptions. In this paper we consider a logistic regression model where both the response and a binary covariate are subject to misclassification. We assume both a continuous measure and a binary diagnostic test are available for the response variable, but no gold-standard test is assumed available. We consider a fully Bayesian analysis that affords such adjustments, accounting for the sources of error and correcting estimates of the regression parameters. Based on the results from our example and simulations, the models that account for misclassification produce more statistically significant results than the models that ignore misclassification. A real data example on math disorders is considered.
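To illustrate the basic mechanics of misclassification bias (not the paper's full regression model), a minimal sketch of how sensitivity and specificity distort an observed proportion, together with the classical Rogan-Gladen inversion:

```python
def apparent_prevalence(p, se, sp):
    """Proportion testing positive under misclassification:
    P(test+) = Se * p + (1 - Sp) * (1 - p)."""
    return se * p + (1.0 - sp) * (1.0 - p)

def rogan_gladen(p_hat, se, sp):
    """Classical correction inverting the relation above;
    requires Se + Sp > 1 for the denominator to be positive."""
    return (p_hat + sp - 1.0) / (se + sp - 1.0)
```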

10.
We consider surveys with one or more callbacks and use a series of logistic regressions to model the probabilities of nonresponse at the first contact and at subsequent callbacks. These probabilities are allowed to depend on covariates as well as on the categorical variable of interest, so the nonresponse mechanism is nonignorable. Explicit formulae for the score functions and information matrices are given for some important special cases to facilitate implementation of the method of scoring for obtaining maximum likelihood estimates of the model parameters. For estimating finite population quantities, we suggest the imputation and prediction approaches as alternatives to weighting adjustment. Simulation results suggest that the proposed methods work well in reducing the bias due to nonresponse. In our study, the imputation and prediction approaches perform better than weighting adjustment, and they continue to perform quite well in simulations involving misspecified response models.
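A minimal sketch of the sequential structure such callback models imply: a per-attempt logistic response probability that depends on covariates and on the variable of interest, chained into the probability of first responding at each attempt. The coefficient layout is an illustrative assumption, not the paper's parameterization:

```python
import numpy as np

def response_pattern_probs(x, y, betas):
    """Probability of first responding at each of K contact attempts when the
    per-attempt logistic response probability depends on covariates x and on
    the (numerically coded) variable of interest y, making nonresponse
    nonignorable. betas is a list of one coefficient triple per attempt."""
    probs, still_out = [], 1.0
    for b0, b1, b2 in betas:
        p_k = 1.0 / (1.0 + np.exp(-(b0 + b1 * x + b2 * y)))
        probs.append(still_out * p_k)   # first response at attempt k
        still_out *= 1.0 - p_k          # remains a nonrespondent
    return np.array(probs), still_out   # still_out: never responds
```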

11.
Selection of a parsimonious model as a basis for statistical inference from capture-recapture data is critical, especially when using open models in the analysis of multiple, interrelated data sets (e.g. males and females, with two to three age classes, over three to five areas and 10-15 years). The global (i.e. most general) model for such data sets might contain hundreds of survival and recapture parameters. Here, we focus on a series of nested models of the Cormack-Jolly-Seber type, wherein the likelihood arises from products of multinomial distributions whose cell probabilities are reparameterized in terms of survival ($\phi$) and mean capture ($p$) probabilities. This paper presents numerical results on two information-theoretic methods for model selection when the capture probabilities are heterogeneous over individual animals: Akaike's Information Criterion (AIC) and a dimension-consistent criterion (CAIC), derived from a Bayesian viewpoint. Quality of model selection was evaluated based on the relative Euclidean distance between the standardized $\hat{\theta}$ and $\theta$ (the parameter $\theta$ is vector-valued and contains the survival ($\phi$) and mean capture ($p$) probabilities); this quantity, $\mathrm{RSS} = \sum_i \{(\hat{\theta}_i - \theta_i)/\theta_i\}^2$, is a sum of squared bias and variance. Thus, the quality of inference (RSS) was judged by comparing the performance of the two information criteria, and of the true model (used to generate the data), relative to the model that provided the smallest RSS. We found that heterogeneity in the capture probabilities had a negligible effect on model selection using AIC or CAIC. Model size increased as sample size increased with both AIC- and CAIC-selected models.
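For concreteness, a minimal sketch of the two criteria compared above; the CAIC form shown, $-2\log L + k(\log n + 1)$, is the usual dimension-consistent variant and is assumed here to match the paper's:

```python
import numpy as np

def aic(loglik, k):
    """Akaike's information criterion."""
    return -2.0 * loglik + 2.0 * k

def caic(loglik, k, n):
    """Dimension-consistent criterion: CAIC = -2 logL + k (log n + 1)."""
    return -2.0 * loglik + k * (np.log(n) + 1.0)
```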

12.
13.
A Bayesian approach is developed for analysing item response models with nonignorable missing data. The model for the observed data is estimated jointly with a model for the missing-data process. Since the approach is fully Bayesian, it can easily be generalized to more complicated and realistic models, such as models with covariates. Furthermore, the proposed approach is illustrated with item response data modelled by multidimensional graded response models. Finally, a simulation study is conducted to assess the extent to which the bias caused by ignoring the missing-data mechanism can be reduced.
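A minimal sketch of the category probabilities in a (multidimensional) graded response model, the measurement model mentioned above; a logistic link and the usual cumulative parameterization are assumed:

```python
import numpy as np

def graded_response_probs(theta, a, b):
    """Category probabilities P(Y = k | theta), k = 0..K, in a graded
    response model: P(Y >= k) = logistic(a' theta - b_k) with b_1 < ... < b_K,
    theta the latent trait vector and a the discrimination vector."""
    cum = 1.0 / (1.0 + np.exp(-(np.dot(a, theta) - np.asarray(b, dtype=float))))
    upper = np.concatenate(([1.0], cum))   # P(Y >= 0) = 1, then P(Y >= k)
    lower = np.concatenate((cum, [0.0]))   # P(Y >= k+1), ending at 0
    return upper - lower
```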

14.
In studies to assess the accuracy of a screening test, definitive disease assessment is often too invasive or expensive to be ascertained for all the study subjects. Although it may be more ethical or cost-effective to ascertain the true disease status at a higher rate in study subjects for whom the screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design. This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no verification bias correction methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic curves, and the area under the receiver operating characteristic curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency, and robustness to model misspecification. The proposed bias correction estimators are applied to data from a study of screening tests for neonatal hearing loss.
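A minimal sketch of the reweighting idea for a continuous test: only verified subjects contribute, each weighted by the inverse of its estimated verification probability. The interface is illustrative and assumes the verification probabilities have been estimated elsewhere:

```python
import numpy as np

def ipw_rates(test, disease, verified, verify_prob, cutoff):
    """Verification-bias-corrected true/false positive rates at a cutoff via
    inverse probability weighting. disease need only be observed where
    verified is True; verify_prob holds estimated P(verified)."""
    m = verified.astype(bool)
    w = 1.0 / verify_prob[m]
    pos = (test[m] >= cutoff).astype(float)
    d = disease[m].astype(float)
    tpr = np.sum(w * pos * d) / np.sum(w * d)
    fpr = np.sum(w * pos * (1.0 - d)) / np.sum(w * (1.0 - d))
    return tpr, fpr
```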

15.
Statistical inference in the wavelet domain remains a vibrant area of contemporary statistical research because of the desirable properties of wavelet representations and the need of the scientific community to process, explore, and summarize massive data sets. Prime examples are biomedical, geophysical, and internet-related data. We propose two new approaches to wavelet shrinkage/thresholding.

In the spirit of Efron and Tibshirani's recent work on the local false discovery rate, we propose the Bayesian Local False Discovery Rate (BLFDR), where the underlying model on wavelet coefficients does not assume known variances. This approach to wavelet shrinkage is shown to be connected with shrinkage based on Bayes factors. The second proposal, the Bayesian False Discovery Rate (BaFDR), is based on ordering the posterior probabilities that the true wavelet coefficients are null, in Bayesian testing of multiple hypotheses.

We demonstrate that both approaches result in competitive shrinkage methods by contrasting them with some popular shrinkage techniques.
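A minimal sketch of a BaFDR-style selection rule as described above: order coefficients by their posterior probability of being null and keep the largest set whose running average stays below a target rate q. The specific stopping rule is an assumption for illustration, not the paper's exact procedure:

```python
import numpy as np

def bafdr_select(post_null_probs, q=0.05):
    """Order wavelet coefficients by posterior probability of being null and
    keep the largest prefix whose running mean null probability is <= q;
    returns indices of coefficients declared non-null."""
    post_null_probs = np.asarray(post_null_probs, dtype=float)
    order = np.argsort(post_null_probs)
    running = np.cumsum(post_null_probs[order]) / np.arange(1, len(order) + 1)
    return order[running <= q]
```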

16.
In this article, we develop a Bayesian variable selection method for choosing covariates in the Poisson change-point regression model with both discrete and continuous candidate covariates. Ranging from a null model with no selected covariates to a full model including all covariates, the Bayesian variable selection method searches the entire model space, estimates posterior inclusion probabilities of the covariates, and obtains model-averaged estimates of the covariate coefficients, while simultaneously estimating a time-varying baseline rate due to change-points. For posterior computation, a Metropolis-Hastings within partially collapsed Gibbs sampler is developed to efficiently fit the Poisson change-point regression model with variable selection. We illustrate the proposed method using simulated and real datasets.
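A minimal sketch of how posterior inclusion probabilities and model-averaged coefficients are typically read off MCMC output in such samplers; the array layout is an illustrative assumption:

```python
import numpy as np

def pip_and_bma(gamma_draws, beta_draws):
    """Posterior inclusion probabilities and model-averaged coefficients from
    MCMC draws: gamma_draws is a (draws x covariates) 0/1 inclusion matrix,
    beta_draws the matching coefficient draws."""
    pip = gamma_draws.mean(axis=0)                 # P(covariate included | data)
    bma = (gamma_draws * beta_draws).mean(axis=0)  # zeros when excluded
    return pip, bma
```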

17.
Computation of normalizing constants is a fundamental mathematical problem in various disciplines, particularly in Bayesian model selection problems. A sampling-based technique known as bridge sampling (Meng and Wong, Stat Sin 6(4):831–860, 1996) has been found to produce accurate estimates of normalizing constants and is shown to possess good asymptotic properties. For small to moderate sample sizes (as in situations with limited computational resources), we demonstrate that the (optimal) bridge sampler produces biased estimates. Specifically, when one density (denoted $p_2$) is constructed to be close to the target density (denoted $p_1$) using the method of moments, our simulation-based results indicate that the correlation-induced bias through the moment-matching procedure is non-negligible. More crucially, the bias amplifies as the dimensionality of the problem increases. Thus, a series of theoretical as well as empirical investigations is carried out to identify the nature and origin of the bias. We then examine the effect of sample size allocation on the accuracy of bridge sampling estimates and find that one possibility for reducing both the bias and the standard error, with a small increase in computational effort, is to draw extra samples from the moment-matched density $p_2$ (which we assume is easy to sample from), provided that the evaluation of $p_1$ is not too expensive. We proceed to show how a simple adaptive approach we term “splitting” manages to alleviate the correlation-induced bias at the expense of a higher standard error, irrespective of the dimensionality involved. We also slightly modify the strategy suggested by Wang et al. (Warp bridge sampling: the next generation, preprint, 2019, arXiv:1609.07690) to address the increase in standard error due to splitting, which is later generalized to further improve the efficiency. We conclude the paper by offering our insights into the application of a combination of these adaptive methods to improve the accuracy of bridge sampling estimates in Bayesian applications (where posterior samples are typically expensive to generate), based on the preceding investigations, with an application to a practical example.
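For readers unfamiliar with the baseline method, a minimal sketch of the iterative optimal bridge sampling estimator of Meng and Wong (1996) for $r = c_1/c_2$; inputs are the unnormalized density ratios $q_1/q_2$ evaluated at draws from $p_1$ and from $p_2$:

```python
import numpy as np

def bridge_estimate(l1, l2, n_iter=100, tol=1e-10):
    """Iterative optimal bridge sampling estimator of r = c1/c2.
    l1: q1/q2 evaluated at draws from p1; l2: q1/q2 at draws from p2."""
    n1, n2 = len(l1), len(l2)
    s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)
    r = 1.0
    for _ in range(n_iter):
        num = np.mean(l2 / (s1 * l2 + s2 * r))   # over draws from p2
        den = np.mean(1.0 / (s1 * l1 + s2 * r))  # over draws from p1
        r_new = num / den
        if abs(r_new - r) < tol * abs(r):
            return r_new
        r = r_new
    return r
```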

18.
We discuss the analysis of data from single-nucleotide polymorphism arrays comparing tumour and normal tissues. The data consist of sequences of indicators for loss of heterozygosity (LOH) and involve three nested levels of repetition: chromosomes for a given patient, regions within chromosomes, and single-nucleotide polymorphisms nested within regions. We propose to analyse these data by using a semiparametric model for multilevel repeated binary data. At the top level of the hierarchy we assume a sampling model for the observed binary LOH sequences that arises from a partial exchangeability argument. This implies a mixture of Markov chains model. The mixture is defined with respect to the Markov transition probabilities. We assume a non-parametric prior for the random mixing measure. The resulting model takes the form of a semiparametric random-effects model with the matrix of transition probabilities being the random effects. The model includes appropriate dependence assumptions for the two remaining levels of the hierarchy, i.e. for regions within chromosomes and for chromosomes within a patient. We use the model to identify regions of increased LOH in a data set coming from a study of treatment-related leukaemia in children with an initial cancer diagnosis. The model successfully identifies the desired regions and performs well compared with other available alternatives.
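A minimal sketch of the building block at the bottom of such a model: the log-likelihood of one binary LOH sequence under a two-state Markov chain (the mixture over transition matrices described above sits on top of this):

```python
import numpy as np

def markov_loglik(seq, P, pi0):
    """Log-likelihood of one binary LOH sequence (coded 0/1) under a two-state
    Markov chain with transition matrix P and initial distribution pi0."""
    ll = np.log(pi0[seq[0]])
    for a, b in zip(seq[:-1], seq[1:]):
        ll += np.log(P[a, b])
    return ll
```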

19.
We consider several alternatives to the continuous exponential-Poisson distribution in order to accommodate the occurrence of zeros. Three of these are modifications of the exponential-Poisson model, one of which remains a fully continuous model. The other models we consider are all semi-continuous, each with a discrete point mass at zero and a continuous density on the positive values. All of the models are applied to two environmental data sets concerning precipitation, and their Bayesian analyses using MCMC are discussed. This discussion covers convergence of the MCMC simulations as well as model selection procedures and considerations.
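A minimal sketch of the semi-continuous construction described above: a point mass pi0 at zero mixed with a continuous density on the positive half-line, the latter passed in as a log-pdf callable (e.g. an exponential-Poisson log-density):

```python
import numpy as np

def semicontinuous_logpdf(y, pi0, logpdf_pos):
    """Log-density of a semi-continuous model: point mass pi0 at zero plus a
    continuous density on y > 0 (given as a log-pdf callable), the positive
    part carrying weight 1 - pi0."""
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    zero = y == 0.0
    out[zero] = np.log(pi0)
    out[~zero] = np.log1p(-pi0) + logpdf_pos(y[~zero])
    return out
```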

20.
The Bayesian information criterion (BIC) is widely used for variable selection. We focus on the regression setting, for which several variations of the BIC have been proposed. A version that includes the Fisher information matrix of the predictor variables performed best in one published study. In this article, we extend that evaluation, introduce a performance measure based on how closely posterior probabilities are approximated, and conclude that the version that includes the Fisher information often favors regression models with more predictors, depending on the scale and correlation structure of the predictor matrix. In the image analysis application that we describe, we therefore prefer the standard BIC approximation because of its relative simplicity and competitive performance at approximating the true posterior probabilities.
