Similar Documents
20 similar documents found (search time: 750 ms)
1.
Latent class models (LCMs) are used increasingly for addressing a broad variety of problems, including sparse modeling of multivariate and longitudinal data, model-based clustering, and flexible inferences on predictor effects. Typical frequentist LCMs require estimation of a single finite number of classes, which does not increase with the sample size, and have a well-known sensitivity to parametric assumptions on the distributions within a class. Bayesian nonparametric methods have been developed to allow an infinite number of classes in the general population, with the number represented in a sample increasing with sample size. In this article, we propose a new nonparametric Bayes model that allows predictors to flexibly impact the allocation to latent classes, while limiting sensitivity to parametric assumptions by allowing class-specific distributions to be unknown subject to a stochastic ordering constraint. An efficient MCMC algorithm is developed for posterior computation. The methods are validated using simulation studies and applied to the problem of ranking medical procedures in terms of the distribution of patient morbidity.

2.
The assessment of a binary diagnostic test requires knowledge of the disease status of all the patients in the sample through the application of a gold standard. In practice, the gold standard is not always applied to all of the patients, which leads to the problem of partial verification of the disease. When the accuracy of the diagnostic test is assessed using only those patients whose disease status has been verified with the gold standard, the resulting estimators, known as naïve estimators, may be biased. In this study, we obtain explicit expressions for the bias of the naïve estimators of the sensitivity and specificity of a binary diagnostic test. We also carry out simulation experiments in order to study the effect of the verification probabilities on the naïve estimators of sensitivity and specificity.
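A minimal simulation, with hypothetical values for the prevalence, test accuracy, and verification probabilities (none of them from the paper), illustrates the bias described above: when verification is more likely after a positive test, the naïve sensitivity estimate is inflated.

```python
import random

random.seed(1)

# Hypothetical true values, for illustration only.
prev, se, sp = 0.2, 0.85, 0.90           # prevalence, sensitivity, specificity
p_verify_pos, p_verify_neg = 0.9, 0.3    # verification depends on the test result

n = 200_000
tp = fn = 0  # counts among *verified* diseased subjects only
for _ in range(n):
    d = random.random() < prev                       # true disease status
    t = random.random() < (se if d else 1 - sp)      # test result
    v = random.random() < (p_verify_pos if t else p_verify_neg)
    if v and d:
        if t:
            tp += 1
        else:
            fn += 1

naive_se = tp / (tp + fn)   # sensitivity computed from verified patients only
print(f"true sensitivity = {se}, naive estimate = {naive_se:.3f}")
```

With these values the naïve estimate concentrates around se·0.9 / (se·0.9 + (1 − se)·0.3) ≈ 0.94, well above the true 0.85.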

3.
Multiple comparisons for two or more mean vectors are considered when the dimension of the vectors may exceed the sample size, the design may be unbalanced, populations need not be normal, and the true covariance matrices may be unequal. Pairwise comparisons, including comparisons with a control, and their linear combinations are considered. Under fairly general conditions, the asymptotic multivariate distribution of the vector of test statistics is derived whose quantiles can be used in multiple testing. Simulations are used to show the accuracy of the tests. Real data applications are also demonstrated.

4.
The paper develops empirical Bayes (EB) confidence intervals for population means with distributions belonging to the natural exponential family with quadratic variance function (NEF-QVF) when the sample size for a particular population is moderate or large. The basis for this development is to find an interval centred around the posterior mean which meets the target coverage probability asymptotically, and then to show that the difference between the coverage probabilities of the Bayes and EB intervals is negligible up to a certain order. The approach taken is Edgeworth expansion, so that the sample sizes from the different populations need not be very large. The proposed intervals meet the target coverage probabilities asymptotically and are easy to construct. We illustrate the use of these intervals in the context of small area estimation through both real and simulated data. The proposed intervals are different from bootstrap intervals. The latter can be applied quite generally, but the order of accuracy of those intervals in meeting the desired coverage probability is unknown.

5.
In this article, we consider the Bayes and empirical Bayes problem of estimating the current population mean of a finite population when sample data are available from (m-1) other, similar finite populations. We investigate a general class of linear estimators and obtain the optimal linear Bayes estimator of the finite population mean under a squared error loss function that incorporates the cost of sampling. The optimal linear Bayes estimator and the sample size are obtained as functions of the parameters of the prior distribution. The corresponding empirical Bayes estimates are obtained by replacing the unknown hyperparameters with their respective consistent estimates. A Monte Carlo study is conducted to evaluate the performance of the proposed empirical Bayes procedure.

6.
Models for multiple-test screening data generally require the assumption that the tests are independent conditional on disease state. This assumption may be unreasonable, especially when the biological basis of the tests is the same. We propose a model that allows for correlation between two diagnostic test results. Since models that incorporate test correlation involve more parameters than can be estimated with the available data, posterior inferences will depend more heavily on prior distributions, even with large sample sizes. If we have reasonably accurate information about one of the two screening tests (perhaps the standard currently used test) or the prevalences of the populations tested, accurate inferences about all the parameters, including the test correlation, are possible. We present a model for evaluating dependent diagnostic tests and analyse real and simulated data sets. Our analysis shows that, when the tests are correlated, a model that assumes conditional independence can perform very poorly. We recommend that, if the tests are only moderately accurate and measure the same biological responses, researchers use the dependence model for their analyses.
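The kind of dependence model referred to above can be sketched by adding covariance terms to the conditional-independence cell probabilities. The function and all parameter values below are illustrative assumptions, not the authors' model or data.

```python
def cell_probs(prev, se1, se2, sp1, sp2, cov_d, cov_h):
    """Joint probabilities of the four (T1, T2) outcomes when the two tests
    have covariance cov_d within the diseased group and cov_h within the
    healthy group (cov_d = cov_h = 0 recovers conditional independence)."""
    probs = {}
    for t1 in (0, 1):
        for t2 in (0, 1):
            p1d = se1 if t1 else 1 - se1
            p2d = se2 if t2 else 1 - se2
            p1h = 1 - sp1 if t1 else sp1
            p2h = 1 - sp2 if t2 else sp2
            sign = 1 if t1 == t2 else -1     # agreement cells gain probability
            probs[t1, t2] = (prev * (p1d * p2d + sign * cov_d)
                             + (1 - prev) * (p1h * p2h + sign * cov_h))
    return probs

dep = cell_probs(0.3, 0.9, 0.8, 0.95, 0.85, 0.02, 0.01)
indep = cell_probs(0.3, 0.9, 0.8, 0.95, 0.85, 0.0, 0.0)
print(dep[1, 1], indep[1, 1])   # dependence raises the double-positive cell
```

Fitting the conditional-independence model to data generated from the dependent one is what produces the poor performance the abstract warns about.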

7.
A disease prevalence can be estimated by classifying subjects according to whether they have the disease. When gold-standard tests are too expensive to be applied to all subjects, partially validated data can be obtained by double sampling, in which all individuals are classified by a fallible classifier and some of the individuals are then validated by the gold-standard classifier. In practice, however, such an infallible classifier may not be available. In this article, we consider two models in which both classifiers are fallible and propose four asymptotic test procedures for comparing the disease prevalence in two groups. The corresponding sample size formulae and validated ratios given the total sample size are also derived and evaluated. Simulation results show that (i) the score test performs well, and the corresponding sample size formula is accurate in terms of empirical power and size under both models; (ii) the Wald test based on the variance estimator with parameters estimated under the null hypothesis outperforms the others even under small sample sizes in Model II, and the sample size estimated by this test is also accurate; and (iii) the estimated validated ratios based on all tests are accurate. Malaria data are used to illustrate the proposed methodologies.

8.
The comparison of the accuracy of two binary diagnostic tests has traditionally required knowledge of the disease status of all the patients in the sample via the application of a gold standard. In practice, the gold standard is not always applied to all patients in a sample, and the problem of partial verification of the disease arises. The accuracy of a binary diagnostic test can be measured in terms of positive and negative predictive values, which represent the accuracy of a diagnostic test when it is applied to a cohort of patients. In this paper, we derive the maximum likelihood estimators of the predictive values (PVs) of two binary diagnostic tests, and hypothesis tests to compare these measures when, in the presence of partial disease verification, the verification process depends only on the results of the two diagnostic tests. The effect of verification bias on the naïve estimators of the PVs of two diagnostic tests is studied, and simulation experiments are performed to investigate the small-sample behaviour of the hypothesis tests. The hypothesis tests we derive can also be applied when all of the patients are verified with the gold standard. The results are applied to the diagnosis of coronary stenosis.

9.
In a wide variety of biomedical and clinical research studies, sample statistics from diagnostic marker measurements are presented as a means of distinguishing between two populations, such as with and without disease. Intuitively, a larger difference between the mean values of a marker for the two populations, and a smaller spread of values within each population, should lead to more reliable classification rules based on this marker. We formalize this intuitive notion by deriving practical, new, closed-form expressions for the sensitivity and specificity of three different discriminant tests defined in terms of the sample means and standard deviations of diagnostic marker measurements. The three discriminant tests evaluated are based, respectively, on the Euclidean distance and the Mahalanobis distance between means, and a likelihood ratio analysis. Expressions for the effects of measurement error are also presented. Our final expressions assume that the diagnostic markers follow independent normal distributions for the two populations, although it will be clear that other known distributions may be similarly analyzed. We then discuss applications drawn from the medical literature, although the formalism is clearly not restricted to that application.
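Under the independent-normal assumption stated above, the sensitivity and specificity of a simple cut-off rule have closed forms in the normal CDF. The sketch below uses hypothetical group means and a midpoint cut-off (which, for equal variances, coincides with the Euclidean-distance rule); it is not the paper's full set of expressions.

```python
from statistics import NormalDist

# Hypothetical marker distributions for the two populations (illustrative)
healthy = NormalDist(mu=0.0, sigma=1.0)
diseased = NormalDist(mu=2.0, sigma=1.0)

def se_sp(cutoff):
    """Sensitivity/specificity of the rule: classify as diseased if x > cutoff."""
    sensitivity = 1 - diseased.cdf(cutoff)
    specificity = healthy.cdf(cutoff)
    return sensitivity, specificity

se, sp = se_sp(cutoff=1.0)   # midpoint of the two means
print(f"sensitivity = {se:.4f}, specificity = {sp:.4f}")   # both ≈ 0.8413
```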

10.
We study the estimation of the strength of signals corresponding to the high-valued observations in multivariate binary data. These problems can arise in a variety of areas, such as mass spectrometry or functional magnetic resonance imaging (fMRI), where the underlying signals could be interpreted as a proxy for the biochemical or physiological response to a condition or treatment. More specifically, the problem we consider involves estimating the sum of a collection of binomial probabilities corresponding to large values of the associated binomial random variables. We emphasize the case where the dimension is much greater than the sample size, and most of the probabilities of the events of interest are close to zero. Two estimation approaches are proposed: conditional maximum likelihood and nonparametric empirical Bayes. We use these estimators to construct a test of homogeneity for two groups of high-dimensional multivariate binary data. Simulation studies on the size and power of the proposed tests are given, and the tests are demonstrated using mass spectrometry data from a breast cancer study.

11.
Despite the popularity of the general linear mixed model for data analysis, power and sample size methods and software are not generally available for commonly used test statistics and reference distributions. Statisticians resort to simulations with homegrown and uncertified programs or rough approximations which are misaligned with the data analysis. For a wide range of designs with longitudinal and clustering features, we provide accurate power and sample size approximations for inference about fixed effects in the linear models we call reversible. We show that under widely applicable conditions, the general linear mixed-model Wald test has noncentral distributions equivalent to well-studied multivariate tests. In turn, exact and approximate power and sample size results for the multivariate Hotelling–Lawley test provide exact and approximate power and sample size results for the mixed-model Wald test. The calculations are easily computed with a free, open-source product that requires only a web browser to use. Commercial software can be used for a smaller range of reversible models. Simple approximations allow accounting for modest amounts of missing data. A real-world example illustrates the methods. Sample size results are presented for a multicenter study on pregnancy. The proposed study, an extension of a funded project, has clustering within clinic. Exchangeability among the participants allows averaging across them to remove the clustering structure. The resulting simplified design is a single-level longitudinal study. Multivariate methods for power provide an approximate sample size. All proofs and inputs for the example are in the supplementary materials (available online).

12.
The rapid increase in the number of AIDS cases during the 1980s and the spread of the disease from the high-risk groups into the general population have created widespread concern. In particular, assessing the accuracy of the screening tests used to detect antibodies to the HIV (AIDS) virus in donated blood and determining the prevalence of the disease in the population are fundamental statistical problems. Because the prevalence of AIDS varies widely by geographic region and data on the number of infected blood donors are published regularly, Bayesian methods, which utilize prior results and update them as new data become available, are quite useful. In this paper we develop a Bayesian procedure for estimating the prevalence of a rare disease, the sensitivity and specificity of the screening tests, and the predictive value of a positive or negative screening test. We apply the procedure to data on blood donors in the United States and in Canada. Our results augment those described in Gastwirth (1987) using classical methods. Indeed, we show that even the inclusion of sound prior knowledge into the statistical analysis does not yield sufficiently precise estimates of the predictive value of a positive test; hence confirmatory testing is needed to obtain reliable estimates. The emphasis of the Bayesian predictive paradigm on prediction intervals for future data yields a valuable insight: we demonstrate that using such intervals might have detected a decline in the specificity of the most frequently used screening test earlier than it actually was detected.
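The conclusion that confirmatory testing is needed follows directly from Bayes' theorem: for a rare disease, even a highly specific screen has a low positive predictive value. A quick check with hypothetical numbers (not the blood-donor data analysed in the paper):

```python
def ppv_npv(prev, se, sp):
    """Positive and negative predictive values by Bayes' theorem."""
    ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
    npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
    return ppv, npv

# Illustrative values: a rare disease and a quite accurate screening test
ppv, npv = ppv_npv(prev=0.001, se=0.99, sp=0.99)
print(f"PPV = {ppv:.3f}, NPV = {npv:.5f}")   # PPV is only about 0.09
```

Even at 99% sensitivity and specificity, roughly nine of ten positive screens are false positives at this prevalence, which is why a confirmatory test is essential.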

13.
The introduction of shape parameters into statistical distributions has provided flexible models that produce a better fit to experimental data. The Weibull and gamma families are prime examples, wherein shape parameters produce more reliable statistical models than standard exponential models in lifetime studies. In the presence of many independent gamma populations, one may wish to test the equality (or homogeneity) of the shape parameters. In this article, we develop two tests for shape parameters of gamma distributions using chi-square distributions, stochastic majorization, and Schur convexity. The first tests hypotheses on the shape parameter of a single gamma distribution. We numerically examine the performance of this test and find that it controls the Type I error rate for small samples. To compare the shape parameters of a set of independent gamma populations, we develop a test that is unbiased in the sense of Schur convexity. These tests are motivated by the need for simple, easy-to-use tests and accurate procedures in the case of small samples. We illustrate the new tests using three real datasets taken from engineering and environmental science. In addition, we investigate the Bayes factor in this context and conclude that for small samples, the frequentist approach performs better than the Bayesian approach.

14.
In this paper, regressive models are proposed for modeling a sequence of transitions in longitudinal data. These models are employed to predict the future status of the outcome variable for individuals on the basis of their underlying background characteristics or risk factors. The estimation of parameters, together with estimates of conditional and unconditional probabilities, is shown for repeated measures. Goodness-of-fit tests based on the deviance and the Hosmer–Lemeshow procedures are extended and generalized to repeated measures. In addition, to measure the suitability of the proposed models for predicting disease status, we extend the ROC curve approach to repeated measures. The procedure is shown for conditional models of any order as well as for the unconditional model, to predict the outcome at the end of the study. Test procedures are also suggested. For testing the differences between areas under the ROC curves in subsequent follow-ups, two different test procedures are employed, one of which is based on a permutation test. An unconditional model is proposed on the basis of conditional models for the progression of depression among the elderly population in the USA, using the Health and Retirement Survey data collected longitudinally. The illustration shows that the disease progression observed conditionally can be employed to predict the outcome, and that the roles of selected variables and previous outcomes can be utilized for predictive purposes. The results show that the percentage of correct predictions of the disease is quite high, and the measures of sensitivity and specificity are also reasonably good. The extended measures of area under the ROC curve show that the models provide a reasonably good fit in terms of predicting disease status over a long period of time.
This procedure will have extensive applications in longitudinal data analysis where the objective is to obtain estimates of unconditional probabilities from a series of conditional transitional models.
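The area under an empirical ROC curve, the quantity compared across follow-ups above, is simply the Mann–Whitney probability that a randomly chosen case outscores a randomly chosen control. A minimal sketch with made-up scores:

```python
def auc(cases, controls):
    """Empirical AUC: P(case score > control score) + 0.5 * P(tie)."""
    wins = sum((c > d) + 0.5 * (c == d) for c in cases for d in controls)
    return wins / (len(cases) * len(controls))

# Hypothetical predicted probabilities at one follow-up
print(auc([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2]))   # 11 of 12 pairs -> 0.9167
```

Permutation tests for differences between follow-up AUCs, as in the paper, recompute this statistic after shuffling group labels.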

15.
Drug-induced organ toxicity (DIOT) that leads to the removal of marketed drugs or the termination of candidate drugs has been a leading concern for regulatory agencies and pharmaceutical companies. In safety studies, genomic assays are conducted after treatment, so drug-induced adverse effects may already have occurred. Two types of biomarkers are observed: biomarkers of susceptibility and biomarkers of response. This paper presents a statistical model to distinguish the two types of biomarkers, along with procedures to identify susceptible subpopulations. The biomarkers identified are used to develop a classification model for identifying the susceptible subpopulation. Two methods for identifying susceptibility biomarkers were evaluated in terms of predictive performance in subpopulation identification, including sensitivity, specificity, and accuracy. Method 1 considers the traditional linear model with a variable-by-treatment interaction term, and Method 2 fits a single-predictor-variable model using only the treatment data. Monte Carlo simulation studies were conducted to evaluate the performance of the two methods and the impact of subpopulation prevalence, probability of DIOT, and sample size on predictive performance. Method 2 outperformed Method 1, owing to the lack of power for testing the interaction effect. Important statistical issues and challenges regarding the identification of preclinical DIOT biomarkers are discussed. In summary, the identification of predictive biomarkers for treatment determination depends strongly on the subpopulation prevalence. When the proportion of the susceptible subpopulation is 1% or less, a very large sample size is needed to ensure that a sufficient number of DIOT responses is observed for biomarker and/or subpopulation identification.

16.

Engineers who conduct reliability tests need to choose the sample size when designing a test plan. The model parameters and quantiles are the typical quantities of interest. The large-sample procedure relies on the property that the distribution of the t-like quantities is close to the standard normal in large samples. In this paper, we use a new procedure based on both simulation and asymptotic theory to determine the sample size for a test plan. Unlike the complete-data case, the t-like quantities are not in general pivotal when data are time censored. However, we show that the distribution of the t-like quantities depends only on the expected proportion failing, and we obtain the distributions by simulation for both the complete and time-censored cases when the data follow a Weibull distribution. We find that the large-sample procedure usually underestimates the sample size, even when the resulting size is 200 or more. The sample size given by the proposed procedure ensures the requested nominal accuracy and confidence of the estimation when the test plan results in complete or time-censored data. Some useful figures displaying the required sample size for the new procedure are also presented.
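The point that large-sample rules can be optimistic is easy to reproduce by simulating a t-like quantity. The sketch below is not the paper's censored-Weibull setting: it substitutes an exponential model (a Weibull with known shape), where the studentized log-MLE of the scale is a convenient t-like quantity, so the numbers are purely illustrative.

```python
import math
import random

random.seed(0)

def t_like(n, theta=1.0):
    """Studentized log-MLE of an exponential scale: sqrt(n)(log xbar - log theta)."""
    xbar = sum(random.expovariate(1 / theta) for _ in range(n)) / n
    return math.sqrt(n) * (math.log(xbar) - math.log(theta))

def sim_quantile(n, q, reps=20_000):
    """Monte Carlo q-quantile of the t-like quantity at sample size n."""
    draws = sorted(t_like(n) for _ in range(reps))
    return draws[int(q * reps)]

# At n = 10 the simulated 5% quantile sits well below the normal -1.645;
# by n = 100 it has moved much closer to the normal value.
q10, q100 = sim_quantile(10, 0.05), sim_quantile(100, 0.05)
print(q10, q100)
```

Using the normal quantile where the simulated one is more extreme is exactly how a purely large-sample calculation ends up recommending too few units.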

17.
In this article, we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo expectation–maximization (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate the parameters of interest, namely the sensitivity, the specificity, and the prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of the estimated parameters, the missing information principle is applied to adjust the information matrix estimates. We compare the adjusted information matrix-based standard error estimates with the bootstrap standard error estimates, both obtained using the fast MCEM algorithm, through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard errors similarly to the bootstrap method under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group study of significant cervical lesion diagnosis in women with atypical glandular cells of undetermined significance, to compare the diagnostic accuracy of a histology-based evaluation, a carbonic anhydrase-IX biomarker-based test and a human papillomavirus DNA test.

18.
Implementation of the Gibbs sampler for estimating the accuracy of multiple binary diagnostic tests in one population is investigated. This method, proposed by Joseph, Gyorkos and Coupal, uses a Bayesian approach and is applied in the absence of a gold standard to estimate the prevalence and the sensitivity and specificity of medical diagnostic tests. The expressions that allow this method to be implemented for an arbitrary number of tests are given. Using the convergence diagnostics procedure of Raftery and Lewis, the relation between the number of iterations of the Gibbs sampler and the precision of the estimated quantiles of the posterior distributions is derived. An example concerning a data set of gastro-esophageal reflux disease patients, collected to evaluate the accuracy of the water siphon test compared with 24 h pH-monitoring, endoscopy and histology tests, is presented. The main message that emerges from our analysis is that implementation of the Gibbs sampler to estimate the parameters of multiple binary diagnostic tests can be critical, and convergence diagnostics are advised for this method. The factors which affect the convergence of the chains to the posterior distributions, and those that influence the precision of their quantiles, are analyzed.
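A stripped-down version of the sampler, for a single test with beta priors (the paper handles multiple tests), shows the data-augmentation structure: impute the latent disease statuses, then draw the parameters from conjugate beta updates. All counts and prior parameters here are hypothetical; with one test the model is identified only through the priors, which is part of why convergence must be monitored.

```python
import random

random.seed(42)

a, b = 80, 20   # hypothetical observed test-positives / test-negatives
# Beta(alpha, beta) priors on prevalence, sensitivity, specificity (assumed)
prior = {"prev": (2, 2), "se": (18, 2), "sp": (18, 2)}

prev, se, sp = 0.5, 0.9, 0.9   # initial values
draws = []
for it in range(3000):
    # 1. Impute latent disease statuses given the current parameters:
    #    diseased among test-positives, and among test-negatives.
    p_d_pos = prev * se / (prev * se + (1 - prev) * (1 - sp))
    p_d_neg = prev * (1 - se) / (prev * (1 - se) + (1 - prev) * sp)
    y_pos = sum(random.random() < p_d_pos for _ in range(a))  # true positives
    y_neg = sum(random.random() < p_d_neg for _ in range(b))  # false negatives
    # 2. Conjugate Beta updates given the completed data.
    prev = random.betavariate(prior["prev"][0] + y_pos + y_neg,
                              prior["prev"][1] + a + b - y_pos - y_neg)
    se = random.betavariate(prior["se"][0] + y_pos, prior["se"][1] + y_neg)
    sp = random.betavariate(prior["sp"][0] + b - y_neg,
                            prior["sp"][1] + a - y_pos)
    if it >= 1000:   # discard burn-in
        draws.append(prev)

print("posterior mean prevalence:", sum(draws) / len(draws))
```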

19.
Bayesian sample size estimation for equivalence and non-inferiority tests for diagnostic methods is considered. The goal of the study is to test whether a new screening test of interest is equivalent to, or not inferior to, the reference test, which may or may not be a gold standard. Sample sizes are chosen by the model performance criteria of average posterior variance, length and coverage probability. In the absence of a gold standard, sample sizes are evaluated by the ratio of marginal probabilities of the two screening tests; in the presence of a gold standard, sample sizes are evaluated by the measures of sensitivity and specificity.

20.
For evaluating the diagnostic accuracy of inherently continuous diagnostic tests/biomarkers, sensitivity and specificity are well-known measures, both of which depend on a diagnostic cut-off that is usually estimated. Sensitivity (specificity) is the conditional probability of testing positive (negative) given the true disease status. However, a more relevant question is "what is the probability of having (not having) a disease if a test is positive (negative)?". Such post-test probabilities are denoted the positive predictive value (PPV) and negative predictive value (NPV). The PPV and NPV at the same estimated cut-off are correlated, so it is desirable to make joint inference on PPV and NPV to account for this correlation. Existing inference methods for PPV and NPV focus on individual confidence intervals; they were developed under a binomial distribution, assuming binary rather than continuous test results. Several approaches are proposed to estimate the joint confidence region as well as the individual confidence intervals of PPV and NPV. Simulation results indicate that the proposed approaches perform well, with satisfactory coverage probabilities for normal and non-normal data, and, additionally, outperform existing methods with improved coverage as well as narrower confidence intervals for PPV and NPV. The Alzheimer's Disease Neuroimaging Initiative (ADNI) data set is used to illustrate the proposed approaches and compare them with the existing methods.
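As a point of contrast with the binary-result methods criticised above, a simple percentile bootstrap that re-estimates the cut-off in every resample captures the extra variability coming from continuous test results. The data, cut-off rule, and sample sizes below are all hypothetical, and this is not the joint-region method the abstract proposes; note that the PPV and NPV here are conditional on the sample's case/control mix.

```python
import random
import statistics

random.seed(7)

# Hypothetical continuous marker data for diseased and healthy groups
dis = [random.gauss(2.0, 1.0) for _ in range(150)]
hea = [random.gauss(0.0, 1.0) for _ in range(350)]

def ppv_npv(d, h):
    """PPV/NPV at a cut-off re-estimated from the (re)sample itself."""
    cut = (statistics.mean(d) + statistics.mean(h)) / 2
    tp = sum(x > cut for x in d); fn = len(d) - tp
    fp = sum(x > cut for x in h); tn = len(h) - fp
    return tp / (tp + fp), tn / (tn + fn)

# Percentile bootstrap: resample each group, recompute cut-off, PPV, NPV
boot = [ppv_npv([random.choice(dis) for _ in dis],
                [random.choice(hea) for _ in hea]) for _ in range(1000)]
ppvs = sorted(p for p, _ in boot)
print("PPV 95% percentile interval:", ppvs[25], ppvs[975])
```

Because both PPV and NPV in each resample share the same re-estimated cut-off, the bootstrap pairs also carry the correlation that motivates joint inference.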
