Similar Documents
20 similar documents retrieved.
1.
Accurate diagnosis of disease is a critical part of health care. New diagnostic and screening tests must be evaluated based on their abilities to discriminate diseased conditions from non‐diseased conditions. For a continuous‐scale diagnostic test, a popular summary index of the receiver operating characteristic (ROC) curve is the area under the curve (AUC). However, when our focus is on a certain region of false positive rates, we often use the partial AUC instead. In this paper we derive the asymptotic normal distribution for the non‐parametric estimator of the partial AUC, with an explicit variance formula. The empirical likelihood (EL) ratio for the partial AUC is defined, and its limiting distribution is shown to be a scaled chi‐square distribution. Hybrid bootstrap and EL confidence intervals for the partial AUC are proposed using the newly developed EL theory. We also conduct extensive simulation studies to compare the relative performance of the proposed intervals and existing intervals for the partial AUC. A real example is used to illustrate the application of the recommended intervals. The Canadian Journal of Statistics 39: 17–33; 2011 © 2011 Statistical Society of Canada
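
As a concrete illustration of the kind of non‐parametric estimator discussed above, the sketch below computes the empirical partial AUC over false positive rates in [0, f0] in its U‐statistic form. The function name and the tie‐handling convention are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def partial_auc(x_diseased, y_nondiseased, f0=0.2):
    """Empirical partial AUC over false positive rates in [0, f0].

    U-statistic form: a non-diseased score contributes only if it lies
    above its empirical (1 - f0) quantile, which restricts the FPR to
    [0, f0]. Ties receive weight 1/2. With a finite sample the quantile
    cut is only approximately at f0.
    """
    x = np.asarray(x_diseased, dtype=float)
    y = np.asarray(y_nondiseased, dtype=float)
    cut = np.quantile(y, 1.0 - f0)      # FPR <= f0  <=>  Y >= cut
    y_hi = y[y >= cut]
    gt = (x[:, None] > y_hi[None, :]).sum()
    eq = (x[:, None] == y_hi[None, :]).sum()
    return (gt + 0.5 * eq) / (x.size * y.size)
```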

2.
3.
This paper examines the effect of correlation among observations on estimators of a mean that are designed to guard against the possibility of spurious observations (that is, observations generated in a manner not intended). The mean squared error, premium, and protection of these estimators are evaluated and discussed for some specific correlation structures.

4.
Various criteria have been proposed for determining the reliability of noncompartmental pharmacokinetic estimates of the terminal disposition phase half‐life (t1/2) and the extrapolated area under the curve (AUCextrap). This simulation study assessed the performance of two frequently used reportability rules: the terminal disposition phase regression adjusted‐r2 classification rule and the regression data point time span classification rule. Using simulated data, these rules were assessed in relation to the magnitude of the variability in the terminal disposition phase slope, the length of the terminal disposition phase captured in the concentration‐time profile (data span), the number of data points present in the terminal disposition phase, and the type and level of variability in concentration measurement. The accuracy of estimating t1/2 was satisfactory for data spans of 1.5 and longer, given low measurement variability, and for spans of 2.5 and longer, given high measurement variability. Satisfactory accuracy in estimating AUCextrap was only achieved with low measurement variability and spans of 2.5 and longer. Neither of the classification rules improved the identification of accurate t1/2 and AUCextrap estimates. Based on the findings of this study, a strategy is proposed for determining the reportability of estimates of t1/2 and AUCextrap.
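
Both reportability rules are built from a log‐linear fit of the terminal phase. The sketch below computes the quantities they rely on: the adjusted r2 of the regression and the data span in multiples of t1/2. Function names and thresholds are illustrative assumptions, not the paper's recommendation.

```python
import numpy as np

def terminal_phase_metrics(t, conc):
    """Log-linear fit of terminal-phase concentrations (t ascending).

    Returns the half-life estimate, the adjusted r-squared of the
    regression, and the data span expressed in multiples of the
    half-life: the quantities the two classification rules use.
    """
    t = np.asarray(t, dtype=float)
    logc = np.log(np.asarray(conc, dtype=float))
    n = t.size
    slope, intercept = np.polyfit(t, logc, 1)
    resid = logc - (slope * t + intercept)
    r2 = 1.0 - np.sum(resid**2) / np.sum((logc - logc.mean())**2)
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - 2)
    t_half = np.log(2.0) / -slope    # slope < 0 in the terminal phase
    span = (t[-1] - t[0]) / t_half
    return t_half, r2_adj, span

# Illustrative screen, e.g. requiring span >= 2.5 before reporting AUCextrap:
# t_half, r2_adj, span = terminal_phase_metrics([4, 6, 8, 12], [9.1, 5.0, 2.6, 0.8])
```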

5.
6.
In diagnostic trials, clustered data are obtained when several subunits of the same patient are observed. Intracluster correlations need to be taken into account when analyzing such clustered data. A nonparametric method has been proposed by Obuchowski (1997, Biometrics 53(2):567–578) to estimate the area under the receiver operating characteristic curve (AUC) for such clustered data. However, Obuchowski's estimator is not efficient, as it gives equal weight to all pairwise rankings within and between clusters. In this paper, we propose a more efficient nonparametric AUC estimator with two sets of optimal weights. Simulation results show that the loss of efficiency of Obuchowski's estimator for a single AUC or an AUC difference can be substantial when there is moderate intracluster test correlation and the cluster size is large. The efficiency gain of our weighted AUC estimator is further illustrated using the data from a study of screening tests for neonatal hearing.
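
For reference, the equal‐weight point estimate underlying Obuchowski's approach is simply the pooled Mann–Whitney statistic over all diseased/non‐diseased pairs of subunits; a minimal sketch follows. The paper's optimal weights and the cluster‐robust variance are not reproduced here.

```python
import numpy as np

def pooled_auc(x_diseased, y_nondiseased):
    """Equal-weight (Obuchowski-style) AUC point estimate.

    Every diseased/non-diseased pair of subunits, whether within or
    between clusters, contributes with the same weight; this is the
    pooled Mann-Whitney statistic.
    """
    x = np.asarray(x_diseased, dtype=float)
    y = np.asarray(y_nondiseased, dtype=float)
    gt = (x[:, None] > y[None, :]).mean()   # proportion of concordant pairs
    eq = (x[:, None] == y[None, :]).mean()  # ties get half weight
    return gt + 0.5 * eq
```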

7.
Multivariate control charts are used to monitor stochastic processes for changes and unusual observations. Hotelling's T2 statistic is calculated for each new observation, and an out‐of‐control signal is issued if it goes beyond the control limits. However, this classical approach becomes unreliable as the number of variables p approaches the number of observations n, and impossible when p exceeds n. In this paper, we devise an improvement to the monitoring procedure in high‐dimensional settings. We regularise the covariance matrix to estimate the baseline parameter and incorporate a leave‐one‐out re‐sampling approach to estimate the empirical distribution of future observations. An extensive simulation study demonstrates that the new method outperforms the classical Hotelling T2 approach in power while maintaining appropriate false positive rates. We demonstrate the utility of the method using a set of quality control samples collected to monitor a gas chromatography–mass spectrometry apparatus over a period of 67 days.
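
A minimal sketch of the general idea: shrink the sample covariance toward a scaled identity so a T2‐type statistic remains computable when p is close to n, and score held‐out baseline points to build an empirical null. The shrinkage weight and target below are illustrative choices, not the paper's estimator.

```python
import numpy as np

def regularized_t2(baseline, obs, alpha=0.2):
    """Hotelling-type statistic with a shrunken covariance estimate.

    The sample covariance is shrunk toward a scaled identity so its
    inverse exists when p is close to (or exceeds) n. The weight
    `alpha` is an illustrative choice.
    """
    X = np.asarray(baseline, dtype=float)
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    S_reg = (1.0 - alpha) * S + alpha * (np.trace(S) / p) * np.eye(p)
    d = np.asarray(obs, dtype=float) - mu
    return float(d @ np.linalg.solve(S_reg, d))

def loo_null(baseline, alpha=0.2):
    """Leave-one-out resampling: score each baseline observation against
    the rest; empirical quantiles then serve as control limits."""
    X = np.asarray(baseline, dtype=float)
    idx = np.arange(len(X))
    return np.array([regularized_t2(X[idx != i], X[i], alpha) for i in idx])
```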

8.
The area under the receiver operating characteristic (ROC) curve (AUC) is broadly accepted and often used as a diagnostic accuracy index. Moreover, the equality of the predictive capacities of two or more diagnostic systems is frequently checked by comparing their respective AUCs. In paired designs, this comparison is usually performed using only the subjects for whom all the necessary information is available, in the so-called available-case analysis. On the other hand, the presence of missing data is a frequent problem, especially in retrospective and observational studies. The loss of statistical power and the inefficient use of the available information (with the resulting ethical implications) are the main consequences. In this paper a non-parametric method is developed to exploit all available information. In order to approximate the distribution of the proposed statistic, its asymptotic distribution is computed and two different resampling plans are studied. In addition, the methodology is applied to a real-world medical problem. Finally, some technical issues are reported in the Appendix.

9.
Dependent and often incomplete outcomes are commonly found in longitudinal biomedical studies. We develop a likelihood function, which implements the autoregressive process of outcomes, incorporating the limit of detection problem and the probability of drop-out. The proposed approach incorporates the characteristics of longitudinal data in biomedical research, allowing us to carry out powerful tests to detect a difference between study populations in terms of the growth rate and drop-out rate. The formal notation of the likelihood function is developed, making it possible to adapt the proposed method easily to various scenarios in terms of the number of groups to be compared and a variety of growth trend patterns. Useful inferential properties for the proposed method are established, which take advantage of many well-developed theorems regarding the likelihood approach. A broad Monte-Carlo study confirms the asymptotic results and illustrates the good power properties of the proposed method. We apply the proposed method to three data sets obtained from mouse tumor experiments.

10.
11.
In many applications, a finite population contains a large proportion of zero values that make the population distribution severely skewed. An unequal‐probability sampling plan compounds the problem, and as a result the normal approximation to the distribution of various estimators has poor precision. The central‐limit‐theorem‐based confidence intervals for the population mean are hence unsatisfactory. Complex designs also make it hard to pin down useful likelihood functions, so a direct likelihood approach is not an option. In this paper, we propose a pseudo‐likelihood approach. The proposed pseudo‐log‐likelihood function is an unbiased estimator of the log‐likelihood function when the entire population is sampled. Simulations show that when the inclusion probabilities are related to the unit values, the pseudo‐likelihood intervals are superior to existing methods in terms of the coverage probability, the balance of non‐coverage rates on the lower and upper sides, and the interval length. An application with a data set from the Canadian Labour Force Survey‐2000 also shows that the pseudo‐likelihood method performs more appropriately than other methods. The Canadian Journal of Statistics 38: 582–597; 2010 © 2010 Statistical Society of Canada
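
The pseudo‐log‐likelihood idea can be sketched as an inverse‐inclusion‐probability weighted sum of unit‐level log‐likelihood contributions. The population model below, a zero‐inflated exponential, is purely illustrative and is not the model fitted in the paper.

```python
import numpy as np

def pseudo_loglik(theta, y, pi):
    """Pseudo-log-likelihood: inverse-inclusion-probability weighted sum
    of unit-level log-likelihood contributions, design-unbiased for the
    log-likelihood of the full finite population.

    Illustrative population model: point mass at zero with probability
    p, Exponential(mean m) otherwise.
    """
    p, m = theta
    ll = np.where(y == 0.0, np.log(p), np.log1p(-p) - np.log(m) - y / m)
    return np.sum(ll / pi)

# Maximization can be done numerically, e.g. with scipy:
# from scipy.optimize import minimize
# res = minimize(lambda th: -pseudo_loglik(th, y, pi), x0=[0.5, 1.0],
#                bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
```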

12.
The authors develop empirical likelihood (EL) based methods of inference for a common mean using data from several independent but nonhomogeneous populations. For point estimation, they propose a maximum empirical likelihood (MEL) estimator and show that it is √n‐consistent and asymptotically optimal. For confidence intervals, they consider two EL based methods and show that both intervals have approximately correct coverage probabilities in large samples. Finite‐sample performances of the MEL estimator and the EL based confidence intervals are evaluated through a simulation study. The results indicate that, overall, the MEL estimator and the weighted EL confidence interval are superior alternatives to the existing methods.
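
For a single sample, the EL machinery behind such intervals reduces to Owen's empirical likelihood for a mean. The sketch below solves for the Lagrange multiplier by Newton's method and returns −2 log R(μ), which is asymptotically chi‐squared with one degree of freedom; this is the textbook one‐sample version, not the authors' multi‐population extension.

```python
import numpy as np

def el_stat(x, mu, tol=1e-10, max_iter=50):
    """Owen's one-sample empirical likelihood statistic for a mean.

    Solves sum_i z_i / (1 + lam * z_i) = 0 (z_i = x_i - mu) for the
    Lagrange multiplier lam and returns -2 log R(mu). A production
    version would safeguard the Newton step to keep every
    1 + lam * z_i positive.
    """
    z = np.asarray(x, dtype=float) - mu
    if z.min() >= 0.0 or z.max() <= 0.0:
        return np.inf                    # mu outside the data's convex hull
    lam = 0.0
    for _ in range(max_iter):
        denom = 1.0 + lam * z
        g = np.sum(z / denom)            # estimating function in lam
        h = -np.sum((z / denom) ** 2)    # its derivative (negative)
        step = g / h
        lam -= step
        if abs(step) < tol:
            break
    return 2.0 * np.sum(np.log(1.0 + lam * z))
```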

13.
Longitudinal surveys have emerged in recent years as an important data collection tool for population studies where the primary interest is to examine population changes over time at the individual level. Longitudinal data are often analyzed through the generalized estimating equations (GEE) approach. The vast majority of the existing literature on the GEE method, however, is developed under non‐survey settings and is inappropriate for data collected through complex sampling designs. In this paper the authors develop a pseudo‐GEE approach for the analysis of survey data. They show that survey weights must, and can, be appropriately accounted for in the GEE method under a joint randomization framework. The consistency of the resulting pseudo‐GEE estimators is established under the proposed framework. Linearization variance estimators are developed for the pseudo‐GEE estimators when the finite population sampling fractions are small or negligible, a scenario that often holds for large‐scale surveys. Finite sample performances of the proposed estimators are investigated through an extensive simulation study using data from the National Longitudinal Survey of Children and Youth. The results show that the pseudo‐GEE estimators and the linearization variance estimators perform well under several sampling designs and for both continuous and binary responses. The Canadian Journal of Statistics 38: 540–554; 2010 © 2010 Statistical Society of Canada
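
In the simplest special case, a linear marginal mean model with an independence working correlation, a survey‐weighted estimating equation of this kind reduces to weighted least squares, as the sketch below shows. It omits the variance estimation and the general link and correlation structures handled in the paper.

```python
import numpy as np

def pseudo_gee_linear(X, y, w):
    """Survey-weighted estimating equation for a linear marginal mean
    model under an independence working correlation; in this special
    case the solution is weighted least squares."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)   # survey weights, one per unit
    XtW = X.T * w                    # equivalent to X.T @ diag(w)
    return np.linalg.solve(XtW @ X, XtW @ y)
```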

14.
Intent‐to‐treat (ITT) analysis is viewed as the analysis of a clinical trial that provides the least bias, but difficult issues can arise. Common analysis methods such as mixed‐effects and proportional hazards models are usually labeled as ITT analysis, but in practice they can often be inconsistent with a strict interpretation of the ITT principle. In trials where effective medications are available to patients withdrawing from treatment, ITT analysis can mask important therapeutic effects of the intervention studied in the trial. Analysis of on‐treatment data may be subject to bias, but it can address efficacy objectives when combined with careful review of the pattern of withdrawals across treatments, particularly for patients withdrawing due to lack of efficacy or adverse events. Copyright © 2010 John Wiley & Sons, Ltd.

15.
Clinical work is characterised by frequent interruption from external prompts, which cause clinicians to switch from a primary task to deal with an incoming secondary task, a phenomenon associated with negative effects in experimental studies. This is an important yet underexplored aspect of work in safety‐critical settings in general: an increase in task length due to task‐switching implies reduced efficiency, while a decrease in length suggests hastening to compensate for the increased workload brought by the unexpected secondary tasks, which is a potential safety issue. In such observational settings, longer tasks are naturally more likely to contain one or more task‐switching events: a form of length bias. To assess the effect of task‐switching on task completion time, it is necessary to estimate counterfactual task lengths had they not experienced any task‐switching, while also accounting for length bias. This problem appears simple at first but involves several counterintuitive considerations that result in a uniquely constrained solution space. We review the only existing method, which is based on the assumption that task‐switches occur according to a homogeneous Poisson process, and we propose significant extensions to flexibly incorporate heterogeneity that is more representative of task‐switching in real‐world contexts. The techniques are applied to observations of emergency physicians' workflow in two hospital settings.
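
The homogeneous Poisson assumption of the existing method, and the length bias it implies, can be seen in a small simulation: under a constant switch rate λ, a task of length L contains at least one switch with probability 1 − exp(−λL), so interrupted tasks are longer on average. The rate and task‐length distribution below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Switches arrive at a constant rate lam, so the count for a task of
# length L is Poisson(lam * L): longer tasks are more likely to be
# interrupted (length bias).
lam = 0.5
lengths = rng.exponential(scale=10.0, size=5000)
switches = rng.poisson(lam * lengths)

lam_hat = switches.sum() / lengths.sum()   # MLE of the switch rate
print(f"estimated rate: {lam_hat:.3f}")
print(f"mean length, interrupted tasks:   {lengths[switches > 0].mean():.2f}")
print(f"mean length, uninterrupted tasks: {lengths[switches == 0].mean():.2f}")
```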

16.
17.
Formulae are provided that define the ‘bend points’, the beginning and end of the essentially linear dose–response region, for the four‐parameter logistic model. The formulae are expressed in both response and dose units. The derivation of the formulae is shown in order to illustrate the general nature of the methodology. Examples are given that describe how the formulae may be used while planning and conducting bioassays. Copyright © 2003 John Wiley & Sons, Ltd.
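
A sketch of how such bend‐point formulae are typically applied: for the four‐parameter logistic curve, the bend points in dose units take the form c·k^(±1/b) for a fixed constant k (approximately 4.68 in this literature). Treat the exact constant and the parameterisation below as assumptions of this sketch, not a transcription of the paper's formulae.

```python
K = 4.6805  # bend-point constant reported in this literature; treat
            # the exact value as an assumption of this sketch

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a and d are the asymptotes, c the
    mid-point (EC50), and b the slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def bend_points_dose(b, c):
    """Doses bounding the essentially linear region: c * K ** (+/- 1/b)."""
    return c * K ** (-1.0 / b), c * K ** (1.0 / b)
```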

18.
This paper considers estimation of an exponential mean time to failure using a loss function that reflects both goodness of fit and precision of estimation. The admissibility and inadmissibility of a class of linear estimators are studied.

19.
In biomedical studies where the event of interest is recurrent (e.g., hospitalization), it is often the case that the recurrent event sequence is subject to being stopped by a terminating event (e.g., death). In comparing treatment options, the marginal recurrent event mean is frequently of interest. One major complication in the recurrent/terminal event setting is that censoring times are not known for subjects observed to die, which renders standard risk‐set‐based methods of estimation inapplicable. We propose two semiparametric methods for estimating the difference or ratio of treatment-specific marginal mean numbers of events. The first method involves imputing unobserved censoring times, while the second method uses inverse probability of censoring weighting. In each case, imbalances in the treatment-specific covariate distributions are adjusted out through inverse probability of treatment weighting. After the imputation and/or weighting, the treatment-specific means (then their difference or ratio) are estimated nonparametrically. Large-sample properties are derived for each of the proposed estimators, with finite sample properties assessed through simulation. The proposed methods are applied to kidney transplant data.
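
The treatment‐weighting step common to both proposed methods can be sketched as follows: fit a propensity model for treatment on baseline covariates and weight each subject by the inverse of the estimated probability of the arm actually received. The censoring‐time imputation and inverse probability of censoring weights are specific to the paper and are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_weights(Z, treated):
    """Inverse probability of treatment weights from a fitted
    propensity model: each subject is weighted by the inverse of the
    estimated probability of the treatment arm actually received."""
    ps_model = LogisticRegression(max_iter=1000).fit(Z, treated)
    p_treated = ps_model.predict_proba(Z)[:, 1]
    return np.where(treated == 1, 1.0 / p_treated, 1.0 / (1.0 - p_treated))
```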

20.
The problem of spuriosity has been dealt with from a Bayesian perspective by, among others, Box and Tiao (1968) and in several papers by Guttman with various co-authors, beginning with Guttman (1973). The main objective of these papers has been to obtain posterior distributions of parameters and to base inference on these distributions. In the current paper, the Bayesian argument is carried one step further by deriving predictive distributions of future observations. Inferences are then based on these distributions. We obtain predictive results for several models. First, we consider the univariate normal case with one spurious observation; this is then generalized to several spurious observations. The multivariate normal situation is studied next. Finally, we consider the general linear model with normal errors.
