Similar Documents
Found 20 similar documents (search time: 265 ms)
1.
The case–control design for assessing the accuracy of a binary diagnostic test (BDT) is very common in clinical practice. This design consists of applying the diagnostic test to all of the individuals in a sample of those who have the disease and in another sample of those who do not. The sensitivity of the diagnostic test is estimated from the case sample and the specificity from the control sample. Another parameter used to assess the performance of a BDT is the weighted kappa coefficient, which depends on the sensitivity and specificity of the diagnostic test, on the disease prevalence, and on the weighting index. In this article, confidence intervals for the weighted kappa coefficient are studied under a case–control design, and a method is proposed to calculate the sample sizes needed to estimate this parameter. The results are applied to a real example.
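For reference, one common parametrization of the weighted kappa coefficient of a BDT is the following (conventions for the weighting index vary across papers, so this should be read as illustrative notation rather than the article's exact formula):

```latex
\kappa(c) \;=\; \frac{p\,q\,Y}{c\,p\,(1-Q) \;+\; (1-c)\,q\,Q},
\qquad
Y = Se + Sp - 1,\quad
Q = p\,Se + q\,(1 - Sp),\quad
q = 1 - p,
```

where $Se$ and $Sp$ denote sensitivity and specificity, $p$ is the disease prevalence, and the weighting index $c \in [0,1]$ encodes the relative loss attached to false positives versus false negatives. The dependence on all four quantities is why case–control estimation, which identifies $Se$ and $Sp$ but not $p$, requires special treatment.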

2.
Sensitivity and specificity are the classic parameters for assessing the performance of a binary diagnostic test. Another useful parameter is the weighted kappa coefficient, a measure of the classificatory agreement between the binary test and the gold standard. Various confidence intervals are proposed for the weighted kappa coefficient when the binary test and the gold standard are applied to all of the patients in a random sample. The results are applied to the diagnosis of coronary artery disease.
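A minimal sketch of one such interval: a Wald-type confidence interval for $\hat{\kappa}(c)$ computed from a single cross-sectional 2×2 table, using the parametrization shown above and a numerical delta method. The function names and this particular interval construction are ours for illustration; the article studies and compares several interval types.

```python
import numpy as np

def weighted_kappa(se, sp, p, c):
    # One common parametrization of kappa(c); index conventions vary.
    q = 1.0 - p
    y = se + sp - 1.0                 # Youden index
    big_q = p * se + q * (1.0 - sp)   # P(test positive)
    return p * q * y / (c * p * (1.0 - big_q) + (1.0 - c) * q * big_q)

def wald_ci(s11, s10, s01, s00, c):
    """95% Wald CI for kappa(c) from a cross-sectional 2x2 table.
    s11: diseased/test+, s10: diseased/test-,
    s01: non-diseased/test+, s00: non-diseased/test-.
    Variance via a numerical delta method; a sketch, not the article's interval."""
    n = s11 + s10 + s01 + s00
    se, sp, p = s11 / (s11 + s10), s00 / (s01 + s00), (s11 + s10) / n
    theta = np.array([se, sp, p])
    # The three plug-in estimators are asymptotically independent here,
    # since the multinomial likelihood factorizes into three binomials.
    var = np.array([se * (1 - se) / (s11 + s10),
                    sp * (1 - sp) / (s01 + s00),
                    p * (1 - p) / n])
    eps = 1e-6
    grad = np.array([
        (weighted_kappa(*(theta + eps * np.eye(3)[i]), c)
         - weighted_kappa(*(theta - eps * np.eye(3)[i]), c)) / (2 * eps)
        for i in range(3)])
    k = weighted_kappa(se, sp, p, c)
    half = 1.96 * np.sqrt(grad @ (var * grad))
    return k, (k - half, k + half)

print(wald_ci(45, 5, 10, 140, c=0.5))
```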

3.
The weighted kappa coefficient of a binary diagnostic test is a measure of the beyond-chance agreement between the diagnostic test and the gold standard, and it allows us to assess and compare the performance of binary diagnostic tests. In the presence of partial disease verification, the comparison of the weighted kappa coefficients of two or more binary diagnostic tests cannot be carried out by ignoring the individuals with an unknown disease status, since the resulting estimators would be affected by verification bias. In this article, we propose a global hypothesis test based on the chi-square distribution to simultaneously compare the weighted kappa coefficients when, in the presence of partial disease verification, the missing-data mechanism is ignorable. Simulation experiments were carried out to study the type I error and power of the global hypothesis test. The results are applied to the diagnosis of coronary disease.
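Global comparisons of this kind are typically cast as Wald-type statistics. As an illustration of the generic form only (the article's statistic, built on verification-bias-corrected estimators, may differ in its details): with $\hat{\boldsymbol{\kappa}} = (\hat{\kappa}_1,\dots,\hat{\kappa}_J)^{\top}$ the estimated weighted kappa coefficients of the $J$ tests and $\hat{\boldsymbol{\Sigma}}$ their estimated covariance matrix,

```latex
Q \;=\; (\mathbf{A}\hat{\boldsymbol{\kappa}})^{\top}
\left(\mathbf{A}\hat{\boldsymbol{\Sigma}}\mathbf{A}^{\top}\right)^{-1}
(\mathbf{A}\hat{\boldsymbol{\kappa}})
\;\xrightarrow{d}\; \chi^{2}_{J-1}
\quad \text{under } H_{0}\colon \kappa_{1}=\cdots=\kappa_{J},
```

where $\mathbf{A}$ is any full-rank $(J-1)\times J$ contrast matrix of differences.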

4.
The accuracy of a binary diagnostic test is usually measured in terms of its sensitivity and specificity, or through its positive and negative predictive values. Another way to describe the validity of a binary diagnostic test is through the risk of error and the kappa coefficient of the risk of error. The risk of error is the average loss caused by incorrectly classifying a non-diseased or a diseased patient, and the kappa coefficient of the risk of error is a measure of the agreement between the diagnostic test and the gold standard. In the presence of partial verification of the disease, the disease status of some patients is unknown, and therefore the evaluation of a diagnostic test cannot be carried out through the traditional method. In this paper, we derive the maximum likelihood estimators and variances of the risk of error and of the kappa coefficient of the risk of error in the presence of partial verification of the disease. Simulation experiments were carried out to study the effect of the verification probabilities on the coverage of the confidence interval of the kappa coefficient.

5.
The kappa coefficient is a widely used measure for assessing agreement on a nominal scale. Weighted kappa is an extension of Cohen's kappa that is commonly used for measuring agreement on an ordinal scale. In this article, it is shown that weighted kappa can be computed as a function of unweighted kappas, namely the kappa coefficients of the smaller contingency tables obtained by merging categories.

6.
We propose new ensemble approaches to estimating the population mean when the response is missing and auxiliary variables are fully observed. We first compress the working models according to their categories through a weighted average, where the weights are proportional to the squares of the least-squares coefficients from model refitting. Based on the compressed values, we develop two ensemble frameworks: one adjusts the weights in the inverse probability weighting procedure, and the other is built on an additive structure that reformulates the augmented inverse probability weighting function. Asymptotic normality is established for the proposed estimators through the theory of estimating functions with plugged-in nuisance parameter estimates. Simulation studies show that the new proposals have substantial advantages over existing ones for small sample sizes, and an acquired immune deficiency syndrome data example is used for illustration.
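For context, the augmented inverse probability weighting (AIPW) function that the second framework reformulates has a well-known baseline form. A minimal sketch with generic working models follows; the model choices and function name are ours, and this is the textbook estimator, not the article's ensemble version.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_mean(y, r, X):
    """Augmented IPW estimate of E[Y] when Y is missing at random given X.
    y: outcomes (np.nan where missing); r: 1 if Y observed, else 0; X: covariates.
    Generic working models; a textbook baseline, not the article's ensemble."""
    ps = LogisticRegression().fit(X, r).predict_proba(X)[:, 1]   # P(R = 1 | X)
    m = LinearRegression().fit(X[r == 1], y[r == 1]).predict(X)  # E[Y | X]
    y_filled = np.where(r == 1, y, 0.0)  # missing values never enter the IPW term
    return np.mean(r * y_filled / ps + (1.0 - r / ps) * m)
```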

7.
Partially linear varying coefficient models (PLVCMs) with heteroscedasticity are considered in this article. Based on composite quantile regression (CQR), we develop a weighted composite quantile regression (WCQR) estimator of the nonparametric varying coefficient functions and the parametric regression coefficients, where the WCQR is augmented with a data-driven weighting scheme. The asymptotic normality of the proposed estimators of both the parametric and nonparametric parts is studied explicitly. In addition, comparisons of asymptotic relative efficiency, both theoretical and numerical, show that the WCQR method outperforms the CQR method and several other estimation methods. To achieve sparsity with high-dimensional covariates, we develop a variable selection procedure that selects significant parametric components of the PLVCM and prove that it possesses the oracle property. Both simulations and a data analysis are conducted to illustrate the finite-sample performance of the proposed methods.
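As a reminder of the building block, weighted composite quantile regression minimizes a weighted sum of check losses over several quantile levels; shown here for a purely linear part (the article's version also involves the varying coefficient functions and a data-driven choice of the weights):

```latex
\hat{\boldsymbol{\beta}}
\;=\; \operatorname*{arg\,min}_{\boldsymbol{\beta},\,b_{1},\dots,b_{K}}
\sum_{k=1}^{K} w_{k} \sum_{i=1}^{n}
\rho_{\tau_{k}}\!\left(y_{i} - b_{k} - \mathbf{x}_{i}^{\top}\boldsymbol{\beta}\right),
\qquad
\rho_{\tau}(u) = u\left(\tau - \mathbf{1}\{u<0\}\right),
```

with quantile levels $\tau_k = k/(K+1)$. Equal weights $w_k \equiv 1$ recover ordinary CQR; data-driven weights target efficiency under heteroscedasticity.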

8.
This article evaluates the performance of several newly proposed online forecast combination algorithms and compares them with existing ones, including the simple average and that of Bates and Granger (1969). We derive asymptotic results for the new algorithms that justify certain established approaches to forecast combination, including trimming, clustering, weighting, and shrinkage. We also show that, when implemented on unbalanced panels, different combination algorithms implicitly impute missing data differently, so that the performance of the resulting combined forecasts is not comparable. After explicitly imputing the missing observations in the U.S. Survey of Professional Forecasters (SPF) over 1968:IV–2013:I, we find that the equally weighted average continues to be hard to beat, but the new algorithms can deliver superior performance at shorter horizons, especially during periods of volatility clustering and structural breaks.
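For concreteness, the simplest (diagonal) Bates–Granger rule weights each forecaster inversely to its historical mean squared error. A minimal sketch on synthetic data; the online algorithms evaluated in the article update such weights sequentially.

```python
import numpy as np

def bates_granger_weights(errors):
    """Inverse-MSE combination weights of Bates and Granger (1969),
    ignoring error covariances. errors: (T, k) past forecast errors."""
    inv_mse = 1.0 / np.mean(errors**2, axis=0)
    return inv_mse / inv_mse.sum()

# Combined forecast: inverse-MSE weighting vs. the simple-average benchmark.
rng = np.random.default_rng(0)
e = rng.normal(scale=[1.0, 2.0, 0.5], size=(100, 3))  # three forecasters
f = np.array([2.1, 1.8, 2.0])                         # current-period forecasts
w = bates_granger_weights(e)
print(w, f @ w, f.mean())
```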

9.
In this paper, we propose a general kth correlation coefficient between the density function and the distribution function of a continuous variable as a measure of symmetry and asymmetry. We first propose a root-n consistent moment-based estimator of the kth correlation coefficient and present its asymptotic properties. Next, we consider statistical inference for the kth correlation coefficient using the empirical likelihood (EL) method; the EL statistic is shown to be asymptotically standard chi-squared. Finally, we propose a residual-based estimator of the kth correlation coefficient for parametric regression models, in order to test whether the density function of the true model error is symmetric. We present the asymptotic results for the residual-based estimator and construct its EL-based confidence intervals. Simulation studies are conducted to examine the performance of the proposed estimators, which we also apply to an air quality dataset.

10.
The receiver operating characteristic (ROC) curve, which plots the true positive rate against the false positive rate as the classification threshold varies, is an important tool for evaluating biomarkers in diagnostic medicine studies. By definition, the ROC curve is monotone increasing from 0 to 1 and is invariant to any monotone transformation of the test results, and it is often smooth when the test results of diseased and non-diseased subjects follow continuous distributions. Most existing ROC curve estimation methods do not guarantee all of these properties. One exception is Du and Tang (2009), which applies a monotone spline regression procedure to empirical ROC estimates; however, that method does not account for the inherent correlations between empirical ROC estimates, which makes the derivation of asymptotic properties very difficult. In this paper we propose a penalized weighted least-squares estimation method that incorporates the covariance between empirical ROC estimates as a weight matrix. The resulting estimator satisfies all of the aforementioned properties, and we show that it is also consistent. A resampling approach is then used to extend the method to comparisons of two or more diagnostic tests. Our simulations show significantly improved performance over the existing method, especially for steep ROC curves. We apply the proposed method to a cancer diagnostic study that compares several newly developed diagnostic biomarkers to a traditional one.
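The raw input to the procedure is a vector of empirical ROC estimates evaluated at a grid of thresholds. A minimal sketch of computing those estimates follows; the monotone smoothing and covariance weighting that constitute the article's contribution are not reproduced here.

```python
import numpy as np

def empirical_roc(x_nondiseased, x_diseased, thresholds):
    """Empirical (FPR, TPR) pairs over a threshold grid: the correlated
    raw estimates that the penalized weighted least-squares fit smooths."""
    fpr = np.array([(x_nondiseased > t).mean() for t in thresholds])
    tpr = np.array([(x_diseased > t).mean() for t in thresholds])
    return fpr, tpr
```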

11.
In this paper we study the weighted least absolute deviations estimator (WLADE) for periodic ARMA (PARMA) models with unspecified noise, and derive its asymptotic normality. A random weighting approach is proposed to estimate the asymptotic covariance matrix of the WLADE. A simulation study is carried out to examine the performance of the proposed procedure.

12.
We investigate the operating characteristics of the Benjamini–Hochberg false discovery rate procedure for multiple testing. This is a distribution-free method that controls the expected fraction of falsely rejected null hypotheses among those rejected. The paper provides a framework for understanding more about this procedure. We first study the asymptotic properties of the 'deciding point' D that determines the critical p-value, from which we obtain explicit asymptotic expressions for a particular risk function. We introduce the dual notion of false non-rejections and consider a risk function that combines the false discovery rate and false non-rejections. We also consider the optimal procedure with respect to a measure of conditional risk.
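For readers who want the procedure itself, here is a minimal sketch of the BH step-up rule. The index of the largest order statistic falling below the rejection line plays the role of the 'deciding point' studied in the paper (variable names are ours).

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """BH step-up: reject the k smallest p-values, where
    k = max{ i : p_(i) <= i*q/m }."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # last order statistic under the line
        reject[order[:k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))
```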

13.
This article investigates an efficient estimation method for a class of switching regressions based on the characteristic function (CF). We show that, with an exponential weighting function, the CF-based estimator can be obtained by minimizing a closed-form distance measure. Because the analytical structure of the asymptotic covariance is available, an iterative estimation procedure is developed that minimizes a precision measure of the asymptotic covariance matrix. A set of Monte Carlo experiments illustrates the implementation, finite-sample properties, and efficiency of the proposed estimator.

14.
We present families of nonparametric estimators for the conditional tail index of a Pareto-type distribution in the presence of random covariates. These families are constructed from locally weighted sums of power transformations of excesses over a high threshold. The asymptotic properties of the proposed estimators are derived under assumptions on the conditional response distribution, the weight function, and the density function of the covariates. We also introduce bias-corrected versions of the estimators and, in this context, propose a consistent estimator of the second-order tail parameter. The finite-sample performance of some specific members of our classes of estimators is illustrated in a small simulation experiment.
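The simplest member of such a family is a kernel-weighted Hill-type estimator, i.e. the power transformation with exponent one. The sketch below uses a Gaussian kernel and an ad hoc local threshold; both choices, and the function name, are illustrative assumptions rather than the article's specification.

```python
import numpy as np

def local_hill(x0, X, Y, h, tail_frac=0.10):
    """Kernel-weighted Hill-type estimate of the conditional tail index at x0:
    a locally weighted average of log excesses over a high local threshold."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)         # Gaussian kernel weights
    u = np.quantile(Y[w > 1e-3], 1.0 - tail_frac)  # ad hoc local high threshold
    k = Y > u                                      # exceedances of the threshold
    return np.sum(w[k] * np.log(Y[k] / u)) / np.sum(w[k])
```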

15.
This paper deals with the estimation of the error distribution function in a varying coefficient regression model. We propose two estimators and study their asymptotic properties by obtaining uniform stochastic expansions. The first estimator is a residual-based empirical distribution function, studied when the varying coefficients are estimated by under-smoothed local quadratic smoothers. The second estimator exploits the fact that the error distribution has mean zero: it is a weighted residual-based empirical distribution function whose weights are chosen via empirical likelihood methods to enforce the mean-zero property, and it improves on the first estimator. Bootstrap confidence bands based on the two estimators are also discussed.

16.
We investigate the small-sample properties of three alternative generalized method of moments (GMM) estimators of asset-pricing models. The estimators we consider include ones in which the weighting matrix is iterated to convergence and ones in which the weighting matrix changes with each choice of the parameters. Particular attention is devoted to assessing how well the asymptotic theory performs when inferences are based directly on the deterioration of GMM criterion functions.
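A generic sketch of the first variant, iterating the weighting matrix to convergence. The moment function g and the starting values are user-supplied; this is textbook iterated GMM rather than the article's specific asset-pricing implementation.

```python
import numpy as np
from scipy.optimize import minimize

def iterated_gmm(g, theta0, data, tol=1e-8, max_iter=50):
    """Iterated GMM: alternate between minimizing gbar' W gbar and updating
    W = inv(sample second moment of g) until the estimate stabilizes.
    g(theta, data) must return an (n, q) array of moment contributions."""
    theta = np.asarray(theta0, dtype=float)
    W = np.eye(g(theta, data).shape[1])      # first step: identity weighting
    for _ in range(max_iter):
        obj = lambda t: g(t, data).mean(axis=0) @ W @ g(t, data).mean(axis=0)
        theta_new = minimize(obj, theta, method="Nelder-Mead").x
        G = g(theta_new, data)
        W = np.linalg.inv(G.T @ G / len(G))  # update the weighting matrix
        if np.max(np.abs(theta_new - theta)) < tol:
            break
        theta = theta_new
    return theta_new, W
```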

17.
The efficacy and asymptotic relative efficiency (ARE) of a weighted sum of Kendall's taus, a weighted sum of Spearman's rhos, a weighted sum of Pearson's r's, and a weighted sum of z-transformations of Fisher–Yates correlation coefficients, in the presence of a blocking variable, are discussed. A method is proposed for selecting the weighting constants that maximize the efficacy of these four correlation coefficients. Estimates, test statistics, and confidence intervals for the four weighted correlation coefficients are also developed. A simulation study compares the small-sample properties of the four tests. Both the theoretical and simulation results favor the weighted sum of Pearson correlation coefficients with optimal weights, as well as the weighted sum of z-transformed Fisher–Yates correlation coefficients with optimal weights.
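As a point of reference, the classical variance-minimizing weights for combining z-transformed correlations across blocks are $n_j - 3$, since $\operatorname{Var}(z_j) \approx 1/(n_j - 3)$. A minimal sketch follows; these familiar meta-analytic weights need not coincide with the efficacy-maximizing weights derived in the article.

```python
import numpy as np

def combined_fisher_z(rs, ns):
    """Weighted sum of Fisher z-transformed correlations across blocks,
    with variance-minimizing weights n_j - 3 (an illustration only)."""
    z = np.arctanh(np.asarray(rs, dtype=float))
    w = np.asarray(ns, dtype=float) - 3.0
    z_bar = np.sum(w * z) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))        # SE of the weighted mean of z
    return np.tanh(z_bar), z_bar / se    # pooled r, z test statistic

print(combined_fisher_z([0.42, 0.35, 0.51], [30, 45, 28]))
```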

18.
We consider a class of closed multiple test procedures indexed by a fixed weight vector. The class includes the weighted Holm step-down procedure, the closed method using the weighted Fisher combination test, and the closed method using the weighted version of Simes' test. We show how to choose weights to maximize average power, where 'average power' is itself weighted by the importance assigned to the various hypotheses. Numerical computations suggest that the optimal weights for the multiple test procedures tend to certain asymptotic configurations. These configurations offer numerical justification for intuitive multiple comparison methods, such as down-weighting variables found insignificant in preliminary studies, giving primary variables more emphasis, gatekeeping test strategies, pre-determined multiple testing sequences, and pre-determined sequences of families of tests. We establish that such methods fall within the envelope of weighted closed testing procedures, thus providing a unified view of fixed sequences, fixed sequences of families, and gatekeepers within the closed testing paradigm. We also establish that the limiting cases control the familywise error rate (FWE), using well-known results about closed tests along with the dominated convergence theorem.
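The first member of the class is easy to state in code. A minimal sketch of the weighted Holm step-down procedure, which is the closed procedure generated by weighted Bonferroni local tests:

```python
import numpy as np

def weighted_holm(pvals, weights, alpha=0.05):
    """Weighted Holm step-down: at each step, reject the hypothesis
    minimizing p_i / w_i if p_i / w_i <= alpha / (sum of weights of the
    hypotheses still in play); otherwise stop."""
    p = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    active = np.ones(len(p), dtype=bool)
    reject = np.zeros(len(p), dtype=bool)
    while active.any():
        ratios = np.where(active, p / w, np.inf)
        i = int(np.argmin(ratios))
        if p[i] / w[i] <= alpha / w[active].sum():
            reject[i] = True
            active[i] = False
        else:
            break
    return reject

print(weighted_holm([0.004, 0.030, 0.015], weights=[2.0, 1.0, 1.0]))
```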

19.
We propose a test based on Bonferroni's measure of skewness. The test detects asymmetry of a distribution function about an unknown median. We study the asymptotic distribution of the test statistic and provide a consistent estimate of its variance. The asymptotic relative efficiency of the proposed test is computed, along with Monte Carlo estimates of its power, allowing a comparison of the test based on Bonferroni's measure with other tests for symmetry.

20.
Popular diagnostic checking methods for linear time series models are portmanteau tests based on either residual autocorrelation functions (acf) or partial autocorrelation functions (pacf). In this paper, we devise new weighted mixed portmanteau tests by appropriately combining individual tests based on both the acf and the pacf. We derive the asymptotic distribution of the weighted mixed portmanteau statistics and study their size and power. The weighted mixed tests are found to outperform their components when higher-order ARMA models are fitted and diagnostic checks are performed by testing for lack of residual autocorrelation. Simulation results suggest using the proposed tests as a complement to the classical tests found in the literature. An illustrative application demonstrates the usefulness of the mixed test.
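The two ingredients are the Ljung–Box statistic, based on the residual acf, and Monti's (1994) analogue based on the pacf. The sketch below computes both and forms an illustrative convex combination; the article's weighted mixed statistics and their null distribution are specific to the paper and are not reproduced here.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

def mixed_portmanteau(resid, m=10, lam=0.5):
    """Ljung-Box (acf-based) and Monti (pacf-based) statistics up to lag m,
    plus an illustrative convex mixture with weight lam on the acf part.
    Each component is approximately chi-squared with m minus the number of
    fitted ARMA parameters degrees of freedom under the null."""
    n = len(resid)
    rho = acf(resid, nlags=m, fft=False)[1:]  # drop lag 0
    phi = pacf(resid, nlags=m)[1:]
    k = np.arange(1, m + 1)
    q_acf = n * (n + 2) * np.sum(rho**2 / (n - k))
    q_pacf = n * (n + 2) * np.sum(phi**2 / (n - k))
    return q_acf, q_pacf, lam * q_acf + (1 - lam) * q_pacf
```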
