Similar Articles
20 similar articles found.
1.
This work investigates theoretically whether the length of the support of a continuous variable representing a simple health-related index affects the index's ability to diagnose a binary health outcome. This is done by studying the monotonicity of the index's sensitivity function, a measure of its diagnostic ability, when the index's distribution is either unknown or uniform. The case of a composite health-related index formed as the sum of m component variables is also presented, with the distribution of the component variables either unknown or uniform. It is proved that, under a certain condition, a health-related index's sensitivity is a non-decreasing function of the finite length of its components' support. In addition, similar propositions are presented for the case in which a health-related index is normally distributed, in terms of its distribution parameters.

2.
The sensitivity of a Bayesian inference to prior assumptions is examined by Monte Carlo simulation for the beta-binomial conjugate family of distributions. Results for the effect on a Bayesian probability interval for the binomial parameter indicate that the Bayesian inference is for the most part quite sensitive to misspecification of the prior distribution. The magnitude of the sensitivity depends primarily on the difference of the assigned means and variances from the respective means and variances of the actually sampled prior distributions. A disparity in form between the assigned and actually sampled prior distributions was less important for the cases tested.
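The conjugate setup behind this kind of study can be sketched in a few lines. In the sketch below, the prior parameters, sample size, and number of replications are illustrative assumptions, not values taken from the paper; it simply checks how often a credible interval built from a misspecified Beta prior covers the true binomial parameter:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def posterior_interval(x, n, a, b, level=0.95):
    """Equal-tailed Bayesian interval for a binomial p under a Beta(a, b) prior."""
    alpha = (1 - level) / 2
    lo = stats.beta.ppf(alpha, a + x, b + n - x)
    hi = stats.beta.ppf(1 - alpha, a + x, b + n - x)
    return lo, hi

# True prior generating p versus an intentionally misspecified assigned prior.
true_a, true_b = 2.0, 8.0          # true prior mean 0.2
assigned_a, assigned_b = 8.0, 2.0  # assigned prior mean 0.8
n, reps = 20, 2000
coverage = 0
for _ in range(reps):
    p = rng.beta(true_a, true_b)           # draw p from the actually sampled prior
    x = rng.binomial(n, p)                 # observe binomial data
    lo, hi = posterior_interval(x, n, assigned_a, assigned_b)
    coverage += (lo <= p <= hi)
print(coverage / reps)  # far below the nominal 0.95 when the prior is badly misspecified
```

The empirical coverage collapses because the assigned prior mean (0.8) sits far from the mean of the generating prior (0.2), mirroring the abstract's finding that sensitivity is driven mainly by disparities in assigned versus actual prior moments.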

3.
Hazard rate functions are often used in the modeling of lifetime data. The Exponential Power Series (EPS) family has a monotone hazard rate function. In this article, the influence of input factors such as time and parameters on the variability of the hazard rate function is assessed by local and global sensitivity analysis. Two different indices based on local and global sensitivity indices are presented. Simulation results for two datasets show that the hazard rate functions of the EPS family are sensitive to the input parameters. The results also show that the hazard rate function of the EPS family is more sensitive to the exponential distribution than to the power series distributions.

4.
Missing data pose a serious challenge to the integrity of randomized clinical trials, especially of treatments for prolonged illnesses such as schizophrenia, in which long‐term impact assessment is of great importance but follow‐up rates are often no more than 50%. Sensitivity analysis using Bayesian modeling for missing data offers a systematic approach to assessing the sensitivity of inferences made on the basis of observed data. This paper uses data from an 18‐month study of veterans with schizophrenia to demonstrate this approach. Data were obtained from a randomized clinical trial involving 369 patients diagnosed with schizophrenia that compared long‐acting injectable risperidone with a psychiatrist's choice of oral treatment. Bayesian analysis using a pattern‐mixture modeling approach was used to validate the reported results by detecting bias due to non‐random patterns of missing data. The analysis was applied to several outcomes, including standard measures of schizophrenia symptoms, quality of life, alcohol use, and global mental status. The original study results for several measures were confirmed against a wide range of patterns of non‐random missingness. Robustness of the conclusions was assessed using sensitivity parameters. The missing data in the trial likely did not threaten the validity of the previously reported results. Copyright © 2014 John Wiley & Sons, Ltd.

5.
Proactive evaluation of drug safety with systematic screening and detection is critical to protecting patients' safety, and is important in regulatory approval of new drug indications and in postmarketing communications and label renewals. In recent years, quite a few statistical methodologies have been developed to better evaluate drug safety through the life cycle of product development. Statistical methods for flagging safety signals have been developed in two major areas: one for data collected from spontaneous reporting systems, mostly in the postmarketing setting, and the other for data from clinical trials. To our knowledge, the methods developed for one area have not so far been applied to the other. In this article, we propose to utilize all such methods for flagging safety signals in both areas, regardless of the area for which they were originally developed. We therefore selected eight typical methods for systematic comparison through simulations: proportional reporting ratios, reporting odds ratios, the maximum likelihood ratio test, the Bayesian confidence propagation neural network method, the chi‐square test for rate comparison, the Benjamini and Hochberg procedure, the new double false discovery rate control procedure, and the Bayesian hierarchical mixture model. The Benjamini and Hochberg procedure and the new double false discovery rate control procedure perform best overall in terms of sensitivity and false discovery rate. The likelihood ratio test also performs well when the sample sizes are large. Copyright © 2014 John Wiley & Sons, Ltd.
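Three of the methods named in this abstract have simple textbook forms: the proportional reporting ratio, the reporting odds ratio, and the Benjamini–Hochberg step-up procedure. A minimal sketch follows; the 2×2 table layout, example counts, and the conventional PRR > 2 flagging threshold are illustrative assumptions, not details from the article:

```python
import numpy as np

def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 drug-event table:
    a = target drug & event of interest, b = target drug & other events,
    c = other drugs & event of interest, d = other drugs & other events."""
    return (a / (a + b)) / (c / (c + d))

def ror(a, b, c, d):
    """Reporting odds ratio from the same 2x2 table."""
    return (a / b) / (c / d)

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: boolean mask of rejections at FDR q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()      # largest index meeting its threshold
        reject[order[: k + 1]] = True       # reject everything up to and including it
    return reject

# Toy report counts: 20/400 target-drug reports mention the event vs 10/1000 otherwise.
print(prr(20, 380, 10, 990))  # (20/400)/(10/1000) = 5.0, above the usual PRR > 2 flag
print(ror(20, 380, 10, 990))
print(benjamini_hochberg([0.001, 0.01, 0.8]))
```

Note how the step-up structure makes BH rejections monotone in the ordered p-values, which is what gives the procedure its false discovery rate guarantee under independence.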

6.
7.
8.
The Receiver Operating Characteristic (ROC) curve and the Area Under the Curve (AUC) of the ROC curve are widely used in discovery to compare the performance of diagnostic and prognostic assays. The ROC curve has the advantage that it is independent of disease prevalence. However, in this note, we remind scientists and clinicians that the performance of an assay upon translation to the clinic is critically dependent upon that very same prevalence. Without an understanding of prevalence in the test population, even robust bioassays with excellent ROC characteristics may perform poorly in the clinic. While the exact prevalence in the target population is not always known, simple plots of candidate assay performance as a function of prevalence rate give a better understanding of the likely real‐world performance and a greater understanding of the likely impact of variation in that prevalence on translation to the clinic.
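The prevalence dependence this note warns about is Bayes' theorem applied to sensitivity and specificity. A small sketch, in which the assay's operating characteristics (sensitivity = specificity = 0.95) and the prevalence grid are hypothetical values chosen for illustration:

```python
def ppv(sens, spec, prev):
    """Positive predictive value via Bayes' theorem."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    """Negative predictive value via Bayes' theorem."""
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

# An assay with excellent ROC characteristics still degrades at low prevalence:
for prev in (0.5, 0.1, 0.01):
    print(prev, round(ppv(0.95, 0.95, prev), 3))
```

At 50% prevalence the PPV matches the sensitivity (0.95), but at 1% prevalence most positive calls are false positives (PPV near 0.16), which is exactly the kind of prevalence plot the note recommends producing before clinical translation.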

9.
Variable selection for nonlinear regression is a complex problem, made even more difficult when there are a large number of potential covariates and a limited number of data points. We propose a multi-stage method that combines state-of-the-art techniques at each stage to best discover the relevant variables. In the first stage, an extension of the Bayesian additive regression tree is adopted to reduce the total number of variables to around 30. In the second stage, sensitivity analysis in the treed Gaussian process is adopted to further reduce the total number of variables. Two stopping rules are designed, and sequential design is adopted to make the best use of previous information. We demonstrate our approach on two simulated examples and one real data set.

10.
11.
Suppose that only the lower and upper bounds on the probability of a measurable subset K of the parameter space are known a priori. Instead of eliciting a unique prior probability measure, consider the class Γ of all probability measures compatible with such bounds. Under mild regularity conditions on the likelihood function, both prior and posterior bounds on the expected value of any function of the unknown parameter ω are computed as the prior measure varies in Γ. Such bounds are analysed from the robust Bayesian viewpoint. Furthermore, lower and upper bounds on the Bayes factor are considered. Finally, a local sensitivity analysis is performed, considering the class Γ as a neighbourhood of an elicited prior.

12.
The weighted kappa coefficient of a binary diagnostic test (BDT) is a measure of the performance of a BDT, and is a function of the sensitivity and specificity of the diagnostic test, the disease prevalence, and the weighting index. The weighting index represents the relative loss between false positives and false negatives. In this study, we propose a new measure of the performance of a BDT: the average kappa coefficient. This parameter is the average of the weighted kappa coefficients and does not depend on the weighting index. We have studied three asymptotic confidence intervals (CIs) for the average kappa coefficient, namely Wald, logit, and bias-corrected bootstrap, and carried out simulation experiments to study the asymptotic coverage of each of the three CIs. We have written a program in R, called 'akcbdt', to estimate the average kappa coefficient of a BDT. This program is available as supplementary material. The results were applied to two examples.
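Of the three intervals compared in this abstract, the bias-corrected bootstrap is the least standard to write down. The sketch below is a generic bias-corrected percentile bootstrap for an arbitrary statistic; it is not the authors' 'akcbdt' program, and the agreement data it runs on are simulated toy values:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def bc_bootstrap_ci(data, stat, level=0.95, B=2000):
    """Bias-corrected (BC) percentile bootstrap CI for stat(data)."""
    theta_hat = stat(data)
    boot = np.array([stat(rng.choice(data, size=len(data), replace=True))
                     for _ in range(B)])
    # Bias-correction constant from the fraction of replicates below the estimate.
    z0 = norm.ppf(np.clip(np.mean(boot < theta_hat), 1e-6, 1 - 1e-6))
    z = norm.ppf(1 - (1 - level) / 2)
    # Shift the percentile endpoints by twice the bias correction.
    lo_q, hi_q = norm.cdf(2 * z0 - z), norm.cdf(2 * z0 + z)
    return np.quantile(boot, lo_q), np.quantile(boot, hi_q)

# Toy data: binary agreement indicators between a test and the gold standard.
agree = rng.binomial(1, 0.8, size=200)
lo, hi = bc_bootstrap_ci(agree, np.mean)
print(lo, hi)
```

The bias correction shifts the percentile endpoints when the bootstrap distribution of the statistic is not centered on the estimate, which is the situation that motivates using BC intervals for bounded agreement coefficients such as kappa.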

13.
Missing data in clinical trials is a well‐known problem, and the classical statistical methods used can be overly simple. This case study shows how well‐established missing data theory can be applied to efficacy data collected in a long‐term open‐label trial with a discontinuation rate of almost 50%. Satisfaction with treatment in chronically constipated patients was the efficacy measure assessed at baseline and every 3 months postbaseline. The improvement in treatment satisfaction from baseline was originally analyzed with a paired t‐test ignoring missing data and discarding the correlation structure of the longitudinal data. As the original analysis started from missing completely at random assumptions regarding the missing data process, the satisfaction data were re‐examined, and several missing at random (MAR) and missing not at random (MNAR) techniques resulted in adjusted estimates for the improvement in satisfaction over 12 months. Throughout the different sensitivity analyses, the effect sizes remained significant and clinically relevant. Thus, even for an open‐label trial design, sensitivity analysis, with different assumptions for the nature of dropouts (MAR or MNAR) and with different classes of models (selection, pattern‐mixture, or multiple imputation models), has been found useful and provides evidence towards the robustness of the original analyses; additional sensitivity analyses could be undertaken to further qualify robustness. Copyright © 2012 John Wiley & Sons, Ltd.

14.
Estimation of the mean θ of a spherical distribution with prior knowledge concerning the norm ||θ|| is considered. The best equivariant estimator is obtained for the local problem ||θ|| = λ0, and its risk is evaluated. This yields a sharp lower bound for the risk functions of a large class of estimators. The risk functions of the best equivariant estimator and the best linear estimator are compared under departures from the assumption ||θ|| = λ0.

15.
Ipsilateral breast tumor relapse (IBTR) often occurs in breast cancer patients after breast conservation therapy. The classification of IBTR status (true local recurrence versus new ipsilateral primary tumor) is subject to error, and there is no widely accepted gold standard. Time to IBTR is likely informative for IBTR classification, because a new primary tumor tends to have a longer mean time to IBTR and is associated with improved survival compared with a true local recurrence. Moreover, some patients may die from breast cancer or other causes in a competing risk scenario during the follow-up period. Because the time to death can be correlated with the unobserved true IBTR status and the time to IBTR (if relapse occurs), this terminal mechanism is non-ignorable. In this paper, we propose a unified framework that addresses these issues simultaneously by modeling the misclassified binary outcome without a gold standard and the correlated time to IBTR, subject to dependent competing terminal events. We evaluate the proposed framework in a simulation study and apply it to a real data set consisting of 4477 breast cancer patients. The adaptive Gaussian quadrature tools in the SAS procedure NLMIXED can be conveniently used to fit the proposed model. We expect to see broad applications of our model in other studies with a similar data structure.

16.
The accuracy of a binary diagnostic test is usually measured in terms of its sensitivity and its specificity, or through positive and negative predictive values. Another way to describe the validity of a binary diagnostic test is the risk of error and the kappa coefficient of the risk of error. The risk of error is the average loss that is caused when incorrectly classifying a non-diseased or a diseased patient, and the kappa coefficient of the risk of error is a measure of the agreement between the diagnostic test and the gold standard. In the presence of partial verification of the disease, the disease status of some patients is unknown, and therefore the evaluation of a diagnostic test cannot be carried out through the traditional method. In this paper, we have deduced the maximum likelihood estimators and variances of the risk of error and of the kappa coefficient of the risk of error in the presence of partial verification of the disease. Simulation experiments have been carried out to study the effect of the verification probabilities on the coverage of the confidence interval of the kappa coefficient.
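For the fully verified case, the chance-corrected agreement between a binary diagnostic test and the gold standard has a closed form in sensitivity, specificity, and prevalence. The sketch below shows that textbook Cohen's kappa; it is not this paper's partial-verification estimator, and the operating characteristics used are illustrative:

```python
def kappa_bdt(se, sp, p):
    """Cohen's kappa between a binary diagnostic test and the gold standard,
    expressed through sensitivity (se), specificity (sp), and prevalence (p)."""
    q = 1 - p
    po = p * se + q * sp          # observed agreement
    Q = p * se + q * (1 - sp)     # probability of a positive test result
    pe = p * Q + q * (1 - Q)      # agreement expected by chance
    return (po - pe) / (1 - pe)

print(kappa_bdt(0.9, 0.9, 0.5))  # 0.8: high agreement beyond chance
```

Partial verification breaks the direct use of this formula because po and Q can no longer be estimated from fully observed disease status, which is why the paper derives maximum likelihood estimators that account for the verification probabilities.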

17.
Probabilistic sensitivity analysis (SA) allows background knowledge about the considered input variables to be incorporated more easily than many other existing SA techniques. Such knowledge is incorporated by constructing a joint density function over the input domain. However, it rarely happens that the available knowledge translates directly and uniquely into such a density function. A natural question is then to what extent the choice of density function determines the values of the considered sensitivity measures. In this paper we perform simulation studies to address this question. Our empirical analysis suggests some guidelines, but also offers cautions to practitioners in the field of probabilistic SA.

18.
The Fourier amplitude sensitivity test (FAST) can be used to calculate the relative variance contribution of model input parameters to the variance of predictions made with functional models. It is widely used in the analysis of complicated process modeling systems. This study provides an improved transformation procedure for FAST when non-uniform distributions are used to represent the input parameters. It is proposed that the cumulative probability, rather than the probability density, be used when transforming non-uniform distributions for FAST. This improvement increases the accuracy of the transformation by reducing errors, and makes the transformation more convenient to use in practice. In an evaluation, the improved procedure was demonstrated to have very high accuracy in comparison with the procedure currently in wide use.
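The cumulative-probability route described here amounts to pushing the FAST search curve through an inverse CDF: the standard triangular transformation of the search variable is uniform on (0, 1), so composing it with the target distribution's quantile function yields correctly distributed samples. A minimal sketch, in which the driving frequency, sample size, and normal target distribution are illustrative assumptions:

```python
import numpy as np
from scipy import stats

# FAST explores the input space along a periodic search curve in s.
N, w = 1025, 11  # number of sample points and driving frequency (assumed values)
s = np.linspace(-np.pi, np.pi, N, endpoint=False)

# The triangular transformation of sin(w*s) is uniformly distributed on (0, 1).
u = 0.5 + np.arcsin(np.sin(w * s)) / np.pi
u = np.clip(u, 1e-9, 1 - 1e-9)  # guard the endpoints before inverting the CDF

# Composing with the inverse CDF (percent-point function) gives samples from
# the non-uniform target -- here a normal input parameter as an illustration.
x = stats.norm.ppf(u, loc=10.0, scale=2.0)
print(np.mean(x), np.std(x))
```

The sample mean and standard deviation land close to the target values (10 and 2), whereas a density-based transformation must approximate the same mapping and can accumulate larger errors, which is the accuracy gain the study reports.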

19.
An approximation is presented that can be used to gain insight into the characteristics – such as outlier sensitivity, bias, and variability – of a wide class of estimators, including maximum likelihood and least squares. The approximation relies on a convenient form for an arbitrary order Taylor expansion in a multivariate setting. The implicit function theorem can be used to construct the expansion when the estimator is not defined in closed form. We present several finite-sample and asymptotic properties of such Taylor expansions, which are useful in characterizing the difference between the estimator and the expansion.

20.
A 3‐arm trial design that includes an experimental treatment, an active reference treatment, and a placebo is useful for assessing the noninferiority of an experimental treatment. The inclusion of a placebo arm enables the assessment of assay sensitivity and internal validation, in addition to the testing of the noninferiority of the experimental treatment compared with the reference treatment. In 3‐arm noninferiority trials, various statistical test procedures have been considered to evaluate the following 3 hypotheses: (i) superiority of the experimental treatment over the placebo, (ii) superiority of the reference treatment over the placebo, and (iii) noninferiority of the experimental treatment compared with the reference treatment. However, hypothesis (ii) can be insufficient and may not accurately assess the assay sensitivity for the noninferiority of the experimental treatment compared with the reference treatment. Thus, it can be necessary to demonstrate that the superiority of the reference treatment over the placebo is greater than the noninferiority margin (the nonsuperiority of the reference treatment compared with the placebo). Here, we propose log‐rank statistical procedures for evaluating data obtained from 3‐arm noninferiority trials to assess assay sensitivity with a prespecified margin Δ. In addition, we derive the approximate sample size and the optimal allocation required to minimize, hierarchically, the total sample size and the placebo sample size.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号