Similar Articles
20 similar articles found.
1.
This article considers misclassification of categorical covariates in the context of regression analysis; if unaccounted for, such errors usually result in mis-estimation of model parameters. In the presence of additional covariates, we exploit the fact that explicitly modelling non-differential misclassification with respect to the response leads to a mixture regression representation. Under the mixture-of-experts framework, we allow the reclassification probabilities to vary with other covariates, a situation commonly caused by misclassification that is differential on certain covariates and/or by dependence between the misclassified and additional covariates. Using Bayesian inference, the mixture approach combines learning from data with external information on the magnitude of errors when it is available. In addition to proving the theoretical identifiability of the mixture-of-experts approach, we study the amount of efficiency loss resulting from covariate misclassification and the usefulness of external information in mitigating such loss. The method is applied to adjust for misclassification of self-reported cocaine use in the Longitudinal Studies of HIV-Associated Lung Infections and Complications.

2.
Measurement error is a commonly addressed problem in psychometrics and the behavioral sciences, particularly where gold standard data either do not exist or are too expensive to obtain. The Bayesian approach can be used to adjust for the bias that results from measurement error in tests. Bayesian methods offer other practical advantages for the analysis of epidemiological data, including the possibility of incorporating relevant prior scientific information and the ability to make inferences that do not rely on large-sample assumptions. In this paper we consider a logistic regression model in which both the response and a binary covariate are subject to misclassification. We assume that both a continuous measure and a binary diagnostic test are available for the response variable, but no gold standard test is assumed available. We consider a fully Bayesian analysis that affords such adjustments, accounting for the sources of error and correcting estimates of the regression parameters. Based on the results from our example and simulations, the models that account for misclassification produce more statistically significant results than the models that ignore it. A real data example on math disorders is considered.
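The bias this kind of analysis corrects can be sketched with the classical Rogan–Gladen prevalence correction, which assumes the sensitivity and specificity of the fallible test are known; this is a frequentist illustration with made-up numbers, not the paper's fully Bayesian model.

```python
# Rogan-Gladen correction: recover the true prevalence pi from the
# apparent prevalence of a fallible binary test with known sensitivity
# (se) and specificity (sp). Illustrative only; the paper's approach is
# fully Bayesian and does not treat se and sp as exactly known.

def corrected_prevalence(p_apparent: float, se: float, sp: float) -> float:
    """Invert E[p_apparent] = se * pi + (1 - sp) * (1 - pi) for pi."""
    if se + sp <= 1.0:
        raise ValueError("test must be better than chance (se + sp > 1)")
    pi = (p_apparent + sp - 1.0) / (se + sp - 1.0)
    return min(1.0, max(0.0, pi))  # clamp to the parameter space

# Example: 30% appear positive under a test with se = 0.90, sp = 0.85;
# the implied true prevalence is 0.20.
pi_hat = corrected_prevalence(0.30, se=0.90, sp=0.85)
```

The Bayesian version replaces the plug-in se and sp with priors and propagates their uncertainty into the posterior for pi.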

3.
Estimated associations between an outcome variable and misclassified covariates tend to be biased when estimation methods that ignore the classification error are applied. Available methods to account for misclassification often require a validation sample (i.e. a gold standard). In practice, however, such a gold standard may be unavailable or impractical. We propose a Bayesian approach to adjust for misclassification in a binary covariate in the random effect logistic model when a gold standard is not available. This Markov chain Monte Carlo (MCMC) approach uses two imperfect measures of a dichotomous exposure under the assumptions of conditional independence and non-differential misclassification. A simulated numerical example and a real clinical example are given to illustrate the proposed approach. Our results suggest that the estimated log odds of inpatient care and the corresponding standard deviation are much larger under our proposed method than under models that ignore misclassification. Ignoring misclassification produces downwardly biased estimates and underestimates uncertainty.
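The downward bias from ignoring non-differential misclassification can be shown with a small sketch: passing true exposure status through an error-prone classifier attenuates the odds ratio toward 1. The prevalences, sensitivity, and specificity below are assumed for illustration and are not from the paper.

```python
# Attenuation of an odds ratio under non-differential exposure
# misclassification. All parameter values are illustrative.

def observed_exposure_prob(pi_exp: float, se: float, sp: float) -> float:
    """P(classified exposed) given true exposure prevalence pi_exp."""
    return se * pi_exp + (1.0 - sp) * (1.0 - pi_exp)

def observed_odds_ratio(p1: float, p0: float, se: float, sp: float) -> float:
    """Odds ratio after exposure is misclassified identically in both groups.
    p1, p0: true exposure prevalence among cases and controls."""
    q1 = observed_exposure_prob(p1, se, sp)
    q0 = observed_exposure_prob(p0, se, sp)
    return (q1 / (1.0 - q1)) / (q0 / (1.0 - q0))

true_or = (0.4 / 0.6) / (0.2 / 0.8)                 # true OR ~ 2.67
naive_or = observed_odds_ratio(0.4, 0.2, se=0.85, sp=0.90)  # attenuated
```

The naive estimate sits strictly between 1 and the true odds ratio, which is the downward bias the abstract describes; a misclassification-adjusted model recovers the larger true log odds at the cost of wider uncertainty intervals.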

4.

Motivated by a longitudinal oral health study, the Signal-Tandmobiel® study, a Bayesian approach has been developed to model misclassified ordinal response data. Two regression models have been considered to incorporate misclassification in the categorical response. Specifically, probit and logit models have been developed. The computational difficulties have been avoided by using data augmentation. This idea is exploited to derive efficient Markov chain Monte Carlo methods. Although the method is proposed for ordered categories, it can also be implemented for unordered ones in a simple way. The model performance is shown through a simulation-based example and the analysis of the motivating study.

5.
Since Dorfman's seminal work on the subject, group testing has been widely adopted in epidemiological studies. In Dorfman's context of detecting syphilis, group testing entails pooling blood samples and testing the pools rather than the individual samples. A negative pool indicates that all individuals in the pool are free of the syphilis antigen, whereas a positive pool suggests that one or more individuals carry it. With covariate information collected, researchers have considered regression models that allow one to estimate covariate-adjusted disease probability. We study maximum likelihood estimators of covariate effects in these regression models when the group testing response is prone to error. We show that, compared with inference drawn from individual testing data, inference based on group testing data can be more resilient to response misclassification in terms of bias and efficiency. We provide guidance on designing the group composition to alleviate the adverse effects of misclassification on statistical inference.
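The pooling arithmetic behind Dorfman-style group testing with an error-prone assay can be sketched in a few lines; the prevalence, pool size, and pool-level sensitivity and specificity below are assumed values, and the paper's maximum likelihood regression analysis is not reproduced here.

```python
# Probability that a pool tests positive, and the expected number of
# tests per person under two-stage Dorfman retesting (test the pool,
# then test individuals only in pools that tested positive).
# All parameter values are illustrative.

def pool_positive_prob(p: float, k: int, se: float, sp: float) -> float:
    """P(pool of size k tests positive) when each member is independently
    positive with probability p and the assay has pool-level sensitivity
    se and specificity sp."""
    p_truly_pos = 1.0 - (1.0 - p) ** k        # at least one positive member
    return se * p_truly_pos + (1.0 - sp) * (1.0 - p_truly_pos)

def expected_tests_per_person(p: float, k: int, se: float, sp: float) -> float:
    """One pooled test per k people, plus k follow-up tests per positive pool."""
    return 1.0 / k + pool_positive_prob(p, k, se, sp)
```

At low prevalence the expected number of tests per person falls well below 1, which is the efficiency gain that motivates group testing in the first place.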

6.
The authors consider the Bayesian analysis of multinomial data in the presence of misclassification. Misclassification of the multinomial cell entries leads to problems of identifiability which are categorized into two types. The first type, referred to as the permutation‐type nonidentifiabilities, may be handled with constraints that are suggested by the structure of the problem. Problems of identifiability of the second type are addressed with informative prior information via Dirichlet distributions. Computations are carried out using a Gibbs sampling algorithm.

7.
Generalized linear models are used to describe the dependence of data on explanatory variables when the binary outcome is subject to misclassification. Both probit and t-link regressions for misclassified binary data are proposed under a Bayesian methodology. Computational difficulties are avoided by data augmentation: a framework with two types of latent variables is exploited to derive efficient Gibbs sampling and expectation–maximization algorithms. Moreover, this formulation yields the probit model as a particular case of the t-link model. Simulation examples illustrate the model performance in comparison with standard methods that do not account for misclassification. To show the potential of the proposed approaches, a real data problem arising in the study of hearing loss caused by exposure to occupational noise is analysed.

8.
Shi, Yushu; Laud, Purushottam; Neuner, Joan. Lifetime Data Analysis, 2021, 27(1): 156–176.

In this paper, we first propose a dependent Dirichlet process (DDP) model using a mixture of Weibull models with each mixture component resembling a Cox model for survival data. We then build a Dirichlet process mixture model for competing risks data without regression covariates. Next we extend this model to a DDP model for competing risks regression data by using a multiplicative covariate effect on subdistribution hazards in the mixture components. Though built on proportional hazards (or subdistribution hazards) models, the proposed nonparametric Bayesian regression models do not require the assumption of constant hazard (or subdistribution hazard) ratio. An external time-dependent covariate is also considered in the survival model. After describing the model, we discuss how both cause-specific and subdistribution hazard ratios can be estimated from the same nonparametric Bayesian model for competing risks regression. For use with the regression models proposed, we introduce an omnibus prior that is suitable when little external information is available about covariate effects. Finally we compare the models’ performance with existing methods through simulations. We also illustrate the proposed competing risks regression model with data from a breast cancer study. An R package “DPWeibull” implementing all of the proposed methods is available at CRAN.


9.
We consider data with a nominal grouping variable and a binary response variable. The grouping variable is measured without error, but the response variable is measured with a fallible device subject to misclassification. To achieve model identifiability, we use a double-sampling scheme, which requires obtaining a subsample of the original data or another independent sample. This sample is then classified with respect to the response variable by both the fallible device and an infallible device. We propose two Wald tests for the association between the two variables and illustrate them using traffic data. The Type-I error rate and power of the tests are examined via simulation, and a modified Wald test is recommended.

10.
In this paper, we examine the performance of Anderson's classification statistic with covariate adjustment in comparison with the usual Anderson's classification statistic without covariate adjustment in a two-population normal covariate classification problem. Several authors have investigated the same problem using different methods of comparison (see the bibliography). The aim of this paper is to give a direct comparison based upon the asymptotic probabilities of misclassification. It is shown that, for large and equal training-sample sizes from each population, Anderson's classification statistic with covariate adjustment and cut-off point equal to zero performs better.

11.
This article considers multinomial data subject to misclassification in the presence of covariates which affect both the misclassification probabilities and the true classification probabilities. A subset of the data may be subject to a secondary measurement according to an infallible classifier. Computations are carried out in a Bayesian setting where it is seen that the prior has an important role in driving the inference. In addition, a new and less problematic definition of nonidentifiability is introduced and is referred to as hierarchical nonidentifiability.

12.
In this contribution we aim at improving ordinal variable selection in the context of causal models for credit risk estimation. We propose an approach that provides a formal inferential tool to compare the explanatory power of each covariate and, therefore, to select an effective model for classification purposes. Our proposed model is Bayesian nonparametric and thus keeps the amount of model specification to a minimum. We consider the case in which information from the covariates is at the ordinal level. A notable instance is the situation in which ordinal variables result from rankings of companies that are to be evaluated according to different macro- and micro-economic aspects, leading to ordinal covariates that correspond to various ratings and entail different magnitudes of the probability of default. For each given covariate, we suggest partitioning the statistical units into as many groups as there are observed levels of the covariate. We then assume individual defaults to be homogeneous within each group and heterogeneous across groups. Our aim is to compare, and thereby select among, the partition structures resulting from different explanatory covariates. The metric we choose for variable comparison is the posterior probability of each partition. Application of our proposal to a European credit risk database shows that it performs well, leading to a coherent and clear method for variable averaging of the estimated default probabilities.

13.
The most popular goodness-of-fit test for a multinomial distribution is the chi-square test, but this test is generally biased if observations are subject to misclassification. In this paper we discuss how to define a new test procedure when double-sample data are obtained from the true and fallible devices. An adjusted chi-square test based on the imputation method and the likelihood ratio test are considered. Asymptotically, the two procedures are equivalent; however, an example and simulation results show that the former is not only computationally simpler but also more powerful in finite-sample situations.
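The double-sampling idea can be sketched as follows: the subsample classified by both devices estimates a misclassification matrix, which is then inverted to correct the counts from the fallible-only sample. All numbers below are made up, and the adjusted chi-square and likelihood-ratio tests themselves are not implemented here.

```python
# Matrix correction of fallible two-category counts via double sampling.
# Illustrative numbers only; not the paper's test statistics.

def invert_2x2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Double sample: rows = true class (infallible device), cols = fallible class.
double = [[90, 10],   # true class 0: 90 read correctly, 10 misread as class 1
          [20, 80]]   # true class 1: 20 misread as class 0, 80 read correctly

# M[i][j] = estimated P(fallible device reads j | true class is i)
M = [[cell / sum(row) for cell in row] for row in double]

# Expected fallible counts = M^T @ true counts, so invert M^T to correct.
fallible_counts = [130.0, 70.0]   # larger sample seen only by the fallible device
Mt = [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]
Minv = invert_2x2(Mt)
corrected = [Minv[i][0] * fallible_counts[0] + Minv[i][1] * fallible_counts[1]
             for i in range(2)]   # estimated true class counts
```

The corrected counts preserve the sample total, and either test in the abstract can then be built on top of this kind of correction together with the sampling variability of the estimated matrix.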

14.
Model misspecification and noisy covariate measurements are two common sources of inference bias. There is considerable literature on the consequences of each problem in isolation. In this paper, however, the author investigates their combined effects. He shows that in the context of linear models, the large‐sample error in estimating the regression function may be partitioned into two terms quantifying the impact of these sources of bias. This decomposition reveals trade‐offs between the two biases in question in a number of scenarios. After presenting a finite‐sample version of the decomposition, the author studies the relative impacts of model misspecification, covariate imprecision, and sampling variability, with reference to the detectability of the model misspecification via diagnostic plots.

15.
Finite mixture of regression (FMR) models are aimed at characterizing subpopulation heterogeneity stemming from different sets of covariates that impact different groups in a population. We address the contemporary problem of simultaneously conducting covariate selection and determining the number of mixture components from a Bayesian perspective that can incorporate prior information. We propose a Gibbs sampling algorithm with reversible jump Markov chain Monte Carlo implementation to accomplish concurrent covariate selection and mixture component determination in FMR models. Our Bayesian approach contains innovative features compared to previously developed reversible jump algorithms. In addition, we introduce component-adaptive weighted g priors for regression coefficients, and illustrate their improved performance in covariate selection. Numerical studies show that the Gibbs sampler with reversible jump implementation performs well, and that the proposed weighted priors can be superior to non-adaptive unweighted priors.

16.
We formulate closed-form Bayesian estimators for two complementary Poisson rate parameters using double sampling with data subject to misclassification and error free data. We also derive closed-form Bayesian estimators for two misclassification parameters in the modified Poisson model we assume. We use our results to determine credible sets for the rate and misclassification parameters. Additionally, we use MCMC methods to determine Bayesian estimators for three or more rate parameters and the misclassification parameters. We also perform a limited Monte Carlo simulation to examine the characteristics of these estimators. We demonstrate the efficacy of the new Bayesian estimators and highest posterior density regions with examples using two real data sets.

17.
In the analysis of correlated ordered data, mixed-effect models are frequently used to control for subject heterogeneity. A common assumption in fitting these models is normality of the random effects, which in many cases is unrealistic and makes the estimation results unreliable. This paper considers several flexible models for the random effects and investigates their properties in model fitting. We adopt a proportional odds logistic regression model and incorporate skewed versions of the normal, Student's t and slash distributions for the effects. Stochastic representations for the various flexible distributions are then proposed based on a mixing-strategy approach, which reduces the computational burden of the MCMC technique. Furthermore, this paper addresses the identifiability restrictions and suggests a procedure to handle this issue. We analyze a real data set taken from an ophthalmic clinical trial. Model selection is performed with suitable Bayesian model selection criteria.

18.
The joint distribution of the true and observed values of a variable that is subject to measurement error is bivariate normal. An important special case occurs when we want the joint probability that the true value is below a cutoff point and the observed value above it. In that case the required integral can be evaluated using a Gaussian quadrature formula simple enough to compute on a calculator. This formula is used to estimate the probabilities of misclassification of participants in screening programs for hypertension. It shows that basing a diagnosis on a single measurement made at a single visit leads to a very high risk of misclassification: the probability of a subject having a blood pressure below the cutoff point, given that the observed pressure is above it, would be 0.45. Increasing the number of visits to three, and measuring the blood pressure twice at each visit, as advocated by Rosner and Polk (1979), would bring the probability down to 0.29.
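The computation described here can be reproduced numerically: integrate over the true value and use the conditional normal distribution of the observed value. The cutoff and error standard deviation below are assumed illustrative values on a standardized scale, not the blood-pressure parameters behind the 0.45 and 0.29 figures; trapezoidal quadrature stands in for the Gaussian quadrature formula of the paper.

```python
import math

# P(true value < cutoff | observed value > cutoff) for a standardized
# true value X ~ N(0, 1) and an observed value that is the mean of n
# replicate measurements Y_i = X + e_i, e_i ~ N(0, sigma_e^2).
# Parameter values are illustrative.

def Phi(z: float) -> float:
    """Standard normal CDF via math.erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_misclassified(cutoff: float, sigma_e: float, n_measurements: int,
                    grid: int = 4000, lo: float = -8.0) -> float:
    s = sigma_e / math.sqrt(n_measurements)   # sd of the averaged error
    # Joint probability P(X < cutoff, Ybar > cutoff) by trapezoidal rule:
    # integrate phi(x) * P(Ybar > cutoff | X = x) for x below the cutoff.
    h = (cutoff - lo) / grid
    joint = 0.0
    for i in range(grid + 1):
        x = lo + i * h
        w = 0.5 if i in (0, grid) else 1.0
        phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        joint += w * phi * (1.0 - Phi((cutoff - x) / s)) * h
    # Marginal P(Ybar > cutoff), with Ybar ~ N(0, 1 + s^2).
    marg = 1.0 - Phi(cutoff / math.sqrt(1.0 + s * s))
    return joint / marg

single = p_misclassified(cutoff=1.0, sigma_e=0.8, n_measurements=1)
six    = p_misclassified(cutoff=1.0, sigma_e=0.8, n_measurements=6)
```

Averaging more measurements shrinks the error variance of the observed value, so the conditional misclassification probability falls, mirroring the drop from 0.45 to 0.29 reported for the six-measurement protocol.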

19.
In discriminant analysis it is often desirable to find a small subset of the variables measured on the individuals of known origin to be used for classifying individuals of unknown origin. In this paper a Bayesian approach to variable selection is used that includes an additional subset of variables for future classification if the additional measurement costs for this subset are lower than the resulting reduction in expected misclassification costs.

20.
We propose a mixture model for data with an ordinal outcome and a longitudinal covariate that is subject to missingness. Data from a tailored, telephone-delivered smoking cessation intervention for construction laborers are used to illustrate the method, which takes as its outcome a categorical measure of smoking cessation and evaluates the effectiveness of the motivational telephone interviews on this outcome. We propose two model structures for the longitudinal covariate: one for the case when the missing data are missing at random, and one when the missing-data mechanism is non-ignorable. A generalized EM algorithm is used to obtain maximum likelihood estimates.

