Similar articles
Found 20 similar articles; search took 46 ms
1.
Pretest–posttest studies are an important and popular method for assessing the effectiveness of a treatment or an intervention in many scientific fields. While the treatment effect, measured as the difference between the two mean responses, is of primary interest, testing the difference of the two distribution functions for the treatment and the control groups is also an important problem. The Mann–Whitney test has been a standard tool for testing the difference of distribution functions with two independent samples. We develop empirical likelihood (EL) based methods for the Mann–Whitney test that incorporate the two unique features of pretest–posttest studies: (i) the availability of baseline information for both groups; and (ii) the structure of the data with missingness by design. Our proposed methods combine the standard Mann–Whitney test with the EL method of Huang, Qin and Follmann [(2008), 'Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest–Posttest Study', Journal of the American Statistical Association, 103(483), 1270–1280], the imputation-based empirical likelihood method of Chen, Wu and Thompson [(2015), 'An Imputation-Based Empirical Likelihood Approach to Pretest–Posttest Studies', The Canadian Journal of Statistics, accepted for publication], and the jackknife empirical likelihood method of Jing, Yuan and Zhou [(2009), 'Jackknife Empirical Likelihood', Journal of the American Statistical Association, 104, 1224–1232]. Theoretical results are presented, and the finite-sample performance of the proposed methods is evaluated through simulation studies.
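Many of the entries below build on the classical two-sample Mann–Whitney statistic. As a point of reference only (this is the plain statistic with made-up data, not the empirical-likelihood extension the paper proposes), a minimal sketch:

```python
# Classical Mann-Whitney U statistic: U = #{(i, j): x_i < y_j},
# with tied pairs counted as 1/2.  This is just the building block
# that the empirical-likelihood methods above start from; the
# pretest-posttest extensions are not implemented here.

def mann_whitney_u(x, y):
    """U statistic for two independent samples x and y."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi < yj:
                u += 1.0
            elif xi == yj:
                u += 0.5  # ties count half
    return u

x = [1.2, 2.4, 3.1, 0.7]      # hypothetical control sample
y = [2.9, 3.5, 4.0]           # hypothetical treatment sample
u = mann_whitney_u(x, y)
print(u)                      # 11.0
print(u / (len(x) * len(y)))  # estimates P(X < Y) + P(X = Y)/2
```

Dividing U by the number of pairs estimates the probability that a treatment response exceeds a control response, which is the quantity the distribution-function comparisons above target.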

2.
The authors present an improved ranked set two-sample Mann-Whitney-Wilcoxon test for a location shift between samples from two distributions F and G. They define a function that measures the amount of information provided by each observation from the two samples, given the actual joint ranking of all the units in a set. This information function is used as a guide for improving the Pitman efficacy of the Mann-Whitney-Wilcoxon test. When the underlying distributions are symmetric, observations at their mode(s) must be quantified in order to gain efficiency. Analogous results are provided for asymmetric distributions.

3.
The accuracy of a diagnostic test is typically characterized using the receiver operating characteristic (ROC) curve. Summary indices such as the area under the ROC curve (AUC) are used to compare different tests as well as to measure the difference between two populations. Often additional information is available on some of the covariates which are known to influence the accuracy of such measures. The authors propose nonparametric methods for covariate adjustment of the AUC. Models with normal errors and possibly non-normal errors are discussed and analyzed separately. Nonparametric regression is used for estimating mean and variance functions in both scenarios. In the model that relaxes the assumption of normality, the authors propose a covariate-adjusted Mann–Whitney estimator for AUC estimation which effectively uses available data to construct working samples at any covariate value of interest and is computationally efficient to implement. This provides a generalization of the Mann–Whitney approach for comparing two populations by taking covariate effects into account. The authors derive asymptotic properties for the AUC estimators in both settings, including asymptotic normality, optimal strong uniform convergence rates and mean squared error (MSE) consistency. The MSE of the AUC estimators was also assessed in smaller samples by simulation. Data from an agricultural study were used to illustrate the methods of analysis. The Canadian Journal of Statistics 38:27–46; 2010 © 2009 Statistical Society of Canada
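The unadjusted version of the identity this paper generalizes is worth spelling out: the empirical AUC of a score equals the Mann–Whitney proportion U/(n0·n1), the fraction of (non-diseased, diseased) pairs the score ranks correctly. A sketch with made-up scores (the paper's covariate adjustment is not shown):

```python
# Empirical AUC as a Mann-Whitney proportion: the fraction of
# (negative, positive) score pairs ranked correctly, ties counting
# 1/2.  Covariate adjustment -- the paper's contribution -- is
# deliberately omitted; this is only the classical identity.

def empirical_auc(scores_neg, scores_pos):
    n0, n1 = len(scores_neg), len(scores_pos)
    wins = sum(1.0 if s0 < s1 else 0.5 if s0 == s1 else 0.0
               for s0 in scores_neg for s1 in scores_pos)
    return wins / (n0 * n1)

neg = [0.1, 0.4, 0.35]   # hypothetical scores, non-diseased group
pos = [0.8, 0.9, 0.4]    # hypothetical scores, diseased group
print(empirical_auc(neg, pos))   # 8.5 correct pairs out of 9
```

A perfect classifier scores 1.0 on this scale; a coin flip scores 0.5 on average, which is why the AUC doubles as a two-population comparison measure.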

4.
Conventional analyses of a composite of multiple time-to-event outcomes use the time to the first event. However, the first event may not be the most important outcome. To address this limitation, generalized pairwise comparisons and win statistics (win ratio, win odds, and net benefit) have become popular and have been applied in clinical trial practice. However, the win ratio, win odds, and net benefit have typically been used separately. In this article, we examine the use of these three win statistics jointly for time-to-event outcomes. First, we explain the relation of point estimates and variances among the three win statistics, and the relation between the net benefit and the Mann–Whitney U statistic. Then we explain that the three win statistics are based on the same win proportions, and that they test the same null hypothesis of equal win probabilities in the two groups. We show theoretically that the Z-values of the corresponding statistical tests are approximately equal; therefore, the three win statistics provide very similar p-values and statistical powers. Finally, using simulation studies and data from a clinical trial, we demonstrate that, when there is no (or little) censoring, the three win statistics can complement one another to show the strength of the treatment effect. However, when the amount of censoring is not small, and without adjustment for censoring, the win odds and the net benefit may have an advantage for interpreting the treatment effect; with adjustment (e.g., IPCW adjustment) for censoring, the three win statistics can complement one another to show the strength of the treatment effect. For calculations we use the R package WINS, available on CRAN (the Comprehensive R Archive Network).
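The shared win-proportion basis of the three statistics can be illustrated on a toy uncensored outcome (larger = better is an assumption here, the data are made up, and the IPCW censoring adjustment discussed in the article is not implemented):

```python
# All three win statistics derive from the same pooled win/loss/tie
# proportions over treatment-control pairs:
#   net benefit = p_w - p_l
#   win ratio   = p_w / p_l
#   win odds    = (p_w + p_t/2) / (p_l + p_t/2)
# Toy sketch only: no censoring, single outcome, p_l > 0 assumed.

def win_statistics(treatment, control):
    wins = losses = ties = 0
    for t in treatment:
        for c in control:
            if t > c:          # larger outcome = better (assumption)
                wins += 1
            elif t < c:
                losses += 1
            else:
                ties += 1
    n = wins + losses + ties
    p_w, p_l, p_t = wins / n, losses / n, ties / n
    return {
        "net_benefit": p_w - p_l,
        "win_ratio": p_w / p_l,           # assumes at least one loss
        "win_odds": (p_w + p_t / 2) / (p_l + p_t / 2),
    }

stats = win_statistics([5, 7, 9], [4, 7, 6])
print(stats)   # all three computed from p_w = 6/9, p_l = 2/9, p_t = 1/9
```

Because all three are monotone functions of the same win proportions, a treatment effect that moves p_w up and p_l down moves all three statistics in the favorable direction together, which is the intuition behind their nearly equal Z-values.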

5.
Through random cut-points theory, the author extends inference for ordered categorical data to the unspecified continuum underlying the ordered categories. He shows that a random cut-point Mann-Whitney test yields slightly smaller p-values than the conventional test for most data. However, when at least P% of the data lie in one of the k categories (with P = 80 for k = 2, P = 67 for k = 3, …, P = 18 for k = 30), he also shows that the conventional test can yield much smaller p-values, and hence misleadingly liberal inference for the underlying continuum. The author derives formulas for exact tests; for k = 2, the Mann-Whitney test is but a binomial test.

6.
The authors give the exact coefficient of 1/N in a saddlepoint approximation to the Wilcoxon-Mann-Whitney null distribution. This saddlepoint approximation is obtained from an Edgeworth approximation to the exponentially tilted distribution. Moreover, the rate of convergence of the relative error is uniformly of order O(1/N) in a large deviation interval as defined in Feller (1971). The proposed method for computing the coefficient of 1/N can be used to obtain the exact coefficients of 1/N^i, for any i. The exact formulas for the cumulant generating function and the cumulants, needed for these results, are those of van Dantzig (1947-1950).

7.
By means of a search design one is able to search for and estimate a small set of non-zero elements from the set of higher-order factorial interactions, in addition to estimating the lower-order factorial effects. One may be interested in estimating the general mean and main effects, in addition to searching for and estimating a non-negligible effect in the set of 2- and 3-factor interactions, assuming 4- and higher-order interactions are all zero. Such a search design is called a 'main effect plus one plan' and is denoted by MEP.1. Construction of such a plan, for 2^m factorial experiments, has been considered and developed by several authors and leads to MEP.1 plans for an odd number m of factors. These designs are generally determined by two arrays, one specifying a main effect plan and the other specifying a follow-up. In this paper we develop the construction of search designs for an even number of factors m, m ≠ 6. The new series of MEP.1 plans is a set of single-array designs with a well-structured form. Such a structure allows for flexibility in arriving at an appropriate design with optimum properties for search and estimation.

8.
ABSTRACT

A new method is proposed for identifying clusters in continuous data indexed by time or by space. The scan statistic we introduce is derived from the well-known Mann–Whitney statistic. It is completely nonparametric, as it relies only on the ranks of the marks. This scan test appears to be very powerful against any clustering alternative. These results have applications in various fields, such as the study of climate or socioeconomic data.
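The general idea of a rank-based scan can be sketched as follows: slide a window along the index, compare the ranks of marks inside the window with those outside via the Mann–Whitney statistic, and keep the most extreme window. This is only an illustration of the idea (window width, the no-ties null moments, and the data are all illustrative choices; the paper's exact scan test and its null calibration are not reproduced):

```python
# Rank-based scan sketch: for each window of width w, compute the
# Mann-Whitney statistic of in-window vs out-of-window marks,
# standardized by the no-ties null mean m/2 and variance m*(n+1)/12
# with m = w*(n-w), and report the most extreme window.

def rank_scan(marks, w):
    n = len(marks)
    best = (float("-inf"), None)       # (z-score, window start)
    for start in range(n - w + 1):
        inside = marks[start:start + w]
        outside = marks[:start] + marks[start + w:]
        u = sum(1.0 if o < i else 0.5 if o == i else 0.0
                for i in inside for o in outside)
        m = w * (n - w)
        z = (u - m / 2) / (m * (n + 1) / 12) ** 0.5
        if z > best[0]:
            best = (z, start)
    return best

# Hypothetical series with a cluster of large marks at positions 3-5:
series = [0.2, 0.1, 0.3, 2.5, 2.8, 2.6, 0.2, 0.4]
z, start = rank_scan(series, 3)
print(start)   # window containing the cluster
```

Because only ranks enter the statistic, any monotone transformation of the marks leaves the scan unchanged, which is the sense in which the test is completely nonparametric.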

9.
Consider the problem of estimating a dose with a certain response rate. Many multistage dose-finding designs for this problem were originally developed for oncology studies where the mean dose–response is strictly increasing in dose. In non-oncology phase II dose-finding studies, the dose–response curve often plateaus in the range of interest, and there are several doses with the mean response equal to the target. In this case, it is usually of interest to find the lowest of these doses because higher doses might have higher adverse event rates. It is often desirable to compare the response rate at the estimated target dose with a placebo and/or active control. We investigate which of the several known dose-finding methods developed for oncology phase I trials is the most suitable when the dose–response curve plateaus. Some of the designs tend to spread the allocation among the doses on the plateau. Others, such as the continual reassessment method and the t-statistic design, concentrate allocation at one of the doses, with the t-statistic design selecting the lowest dose on the plateau more frequently. Copyright © 2013 John Wiley & Sons, Ltd.

10.
An alternative to conventional rank tests based on a Euclidean distance analysis space is described. Comparisons based on exact probability values among classical two-sample t-tests and the Wilcoxon–Mann–Whitney test illustrate the advantages of the Euclidean distance analysis space alternative.

11.
We consider a novel univariate nonparametric cumulative sum (CUSUM) control chart for detecting small shifts in the mean of a process, where the nominal value of the mean is unknown but some historical data are available. This chart is based on the Mann–Whitney statistic as well as the change-point model, and no assumption about the underlying distribution of the process is required. Performance comparisons based on simulations show that the proposed control chart is slightly more effective than some other related nonparametric control charts.
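The change-point side of this construction can be sketched in its simplest offline form: for every candidate split point, standardize the Mann–Whitney statistic comparing the observations before the split with those after it, and flag a shift when the maximum exceeds a threshold. This is a toy illustration, not the authors' sequential chart (the threshold h, the no-ties null moments, and the data are illustrative choices):

```python
# Offline Mann-Whitney change-point sketch: for each split t,
# standardize U(first t obs vs last n-t obs) by the no-ties null
# mean t(n-t)/2 and variance t(n-t)(n+1)/12; flag the split with
# the largest |z| if it exceeds threshold h (illustrative value).

def mw_change_point(x, h=3.0):
    n = len(x)
    best_t, best_z = None, 0.0
    for t in range(1, n):
        left, right = x[:t], x[t:]
        u = sum(1.0 if a < b else 0.5 if a == b else 0.0
                for a in left for b in right)
        mu = t * (n - t) / 2.0
        var = t * (n - t) * (n + 1) / 12.0
        z = abs(u - mu) / var ** 0.5
        if z > best_z:
            best_t, best_z = t, z
    return (best_t, best_z) if best_z > h else (None, best_z)

# Hypothetical process with an upward mean shift after observation 8:
x = [0.1, -0.2, 0.05, 0.0, 0.3, -0.1, 0.2, 0.15,
     2.1, 1.9, 2.3, 2.0, 2.2, 1.8, 2.4, 2.05]
print(mw_change_point(x))   # estimated change point and its z-score
```

Because the statistic depends on the data only through ranks, the same threshold calibrates the chart regardless of the (unknown) process distribution, which is the appeal of the distribution-free design.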

12.
The Kolassa method implemented in the nQuery Advisor software has been widely used for approximating the power of the Wilcoxon–Mann–Whitney (WMW) test for ordered categorical data, in which Edgeworth approximation is used to estimate the power of an unconditional test based on the WMW U statistic. When the sample size is small or when the sizes in the two groups are unequal, Kolassa's method may yield quite poor approximations to the power of the conditional WMW test that is commonly implemented in statistical packages. Two modifications of Kolassa's formula are proposed and assessed by simulation studies.

13.
Abstract. Nonparametric regression models have been well studied, including estimation of the conditional mean function, the conditional variance function and the distribution function of the errors. In addition, empirical likelihood methods have been proposed to construct confidence intervals for the conditional mean and variance. Motivated by applications in risk management, we propose an empirical likelihood method for constructing a confidence interval for the pth conditional value-at-risk based on the nonparametric regression model. A simulation study shows the advantages of the proposed method.
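The point estimate underlying such intervals is a conditional quantile of Y given X = x0. As a rough plug-in illustration (not the paper's empirical-likelihood interval; the Gaussian kernel, bandwidth, and data are arbitrary choices), a kernel-weighted empirical quantile can be computed like this:

```python
import math

# Plug-in sketch of the p-th conditional value-at-risk: the p-th
# quantile of the kernel-weighted empirical distribution of Y near
# X = x0 (Nadaraya-Watson-style weights).  Illustration only; the
# paper's EL confidence interval construction is not implemented.

def conditional_var(xs, ys, x0, p, bandwidth):
    """Kernel-weighted empirical p-quantile of Y given X near x0."""
    w = [math.exp(-0.5 * ((xi - x0) / bandwidth) ** 2) for xi in xs]
    total = sum(w)
    cum = 0.0
    pairs = sorted(zip(ys, w))          # walk Y in increasing order
    for y, wi in pairs:
        cum += wi / total
        if cum >= p:                    # first y with weighted CDF >= p
            return y
    return pairs[-1][0]

# Sanity check: with X constant, the weights are uniform and the
# conditional quantile reduces to the ordinary sample quantile.
print(conditional_var([0.0] * 10, list(range(1, 11)), 0.0, 0.5, 1.0))
```

In the risk-management reading, Y is a loss and the estimate is the loss level exceeded with probability 1 − p given the covariate value x0.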

14.
In many applications, the parameters of interest are estimated by solving non-smooth estimating functions with U-statistic structure. Because the asymptotic covariance matrix of the estimator generally involves the underlying density function, resampling methods are often used to bypass the difficulty of nonparametric density estimation. Despite its simplicity, the resulting covariance matrix estimator depends on the nature of the resampling, and the method can be time-consuming when the number of replications is large. Furthermore, the inferences are based on a normal approximation that may not be accurate for practical sample sizes. In this paper, we propose a jackknife empirical likelihood-based inferential procedure for non-smooth estimating functions. Standard chi-square distributions are used to calculate the p-value and to construct confidence intervals. Extensive simulation studies and two real examples are provided to illustrate its practical utility.
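The core jackknife-empirical-likelihood construction (Jing, Yuan and Zhou, 2009) is easy to state: replace the U-statistic by its jackknife pseudo-values V_i = n·U_n − (n−1)·U_{n−1}^{(−i)}, which average exactly to U_n and are then treated as approximately i.i.d. in a standard empirical-likelihood ratio. A sketch of just the pseudo-value step, using Gini's mean difference E|X1 − X2| as a toy degree-2 U-statistic (the EL optimization itself is omitted):

```python
# Jackknife pseudo-values for a degree-2 U-statistic.
# Toy kernel: Gini's mean difference, h(a, b) = |a - b|.
# Key identity (exact, not approximate): mean(V_i) == U_n.

def u_stat(x):
    """U-statistic with kernel |a - b| (Gini's mean difference)."""
    n = len(x)
    return sum(abs(a - b) for i, a in enumerate(x)
               for b in x[i + 1:]) / (n * (n - 1) / 2)

def jackknife_pseudo_values(x):
    """V_i = n*U_n - (n-1)*U_{n-1}^{(-i)}, treated as i.i.d. in JEL."""
    n = len(x)
    un = u_stat(x)
    return [n * un - (n - 1) * u_stat(x[:i] + x[i + 1:])
            for i in range(n)]

x = [1.0, 3.0, 2.0, 7.0, 5.0]   # hypothetical sample
v = jackknife_pseudo_values(x)
print(sum(v) / len(v))           # equals u_stat(x) by construction
```

Applying ordinary empirical likelihood to the V_i then yields a Wilks-type chi-square calibration, which is what lets the paper sidestep both density estimation and resampling.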

15.
The AUC (area under ROC curve) is a commonly used metric to assess discrimination of risk prediction rules; however, standard errors of AUC are usually based on the Mann–Whitney U test that assumes independence of sampling units. For ophthalmologic applications, it is desirable to assess risk prediction rules based on eye-specific outcome variables which are generally highly, but not perfectly, correlated in fellow eyes [e.g. progression of individual eyes to age-related macular degeneration (AMD)]. In this article, we use the extended Mann–Whitney U test (Rosner and Glynn, Biometrics 65:188–197, 2009) for the case where subunits within a cluster may have different progression status and assess discrimination of different prediction rules in this setting. Both data analyses based on progression of AMD and simulation studies show reasonable accuracy of this extended Mann–Whitney U test to assess discrimination of eye-specific risk prediction rules.

16.
Outliers are commonly observed in psychosocial research, generally resulting in biased estimates when comparing group differences using popular mean-based models such as the analysis of variance model. Rank-based methods such as the popular Mann–Whitney–Wilcoxon (MWW) rank sum test are more effective at addressing such outliers. However, available methods for inference are limited to cross-sectional data and cannot be applied to longitudinal studies under missing data. In this paper, we propose a generalized MWW test for comparing multiple groups with covariates within a longitudinal data setting, by utilizing functional response models. Inference is based on a class of U-statistics-based weighted generalized estimating equations, providing consistent and asymptotically normal estimates not only under complete data but under missing data as well. The proposed approach is illustrated with both real and simulated study data.

17.
In this paper, we propose a smoothed Q-learning algorithm for estimating optimal dynamic treatment regimes. In contrast to the Q-learning algorithm in which nonregular inference is involved, we show that, under assumptions adopted in this paper, the proposed smoothed Q-learning estimator is asymptotically normally distributed even when the Q-learning estimator is not, and its asymptotic variance can be consistently estimated. As a result, inference based on the smoothed Q-learning estimator is standard. We derive the optimal smoothing parameter and propose a data-driven method for estimating it. The finite sample properties of the smoothed Q-learning estimator are studied and compared with several existing estimators, including the Q-learning estimator, via an extensive simulation study. We illustrate the new method by analyzing data from the Clinical Antipsychotic Trials of Intervention Effectiveness–Alzheimer's Disease (CATIE-AD) study.

18.
A composite endpoint consists of multiple endpoints combined into one outcome. It is frequently used as the primary endpoint in randomized clinical trials. There are two main disadvantages associated with the use of composite endpoints: a) in conventional analyses, all components are treated as equally important; and b) in time-to-event analyses, the first event considered may not be the most important component. Recently, Pocock et al. (2012) introduced the win ratio method to address these disadvantages. This method has two alternative approaches: the matched pair approach and the unmatched pair approach. In the unmatched pair approach, the confidence interval is constructed based on bootstrap resampling, and the hypothesis testing is based on the non-parametric method of Finkelstein and Schoenfeld (1999). Luo et al. (2015) developed a closed-form variance estimator of the win ratio for the unmatched pair approach, based on a composite endpoint with two components and a specific algorithm determining winners, losers and ties. We extend the unmatched pair approach to provide a generalized analytical solution to both hypothesis testing and confidence interval construction for the win ratio, based on its logarithmic asymptotic distribution. This asymptotic distribution is derived via U-statistics following Wei and Johnson (1985). We perform simulations assessing the confidence intervals constructed based on our approach versus those per the bootstrap resampling and per Luo et al. We have also applied our approach to a liver transplant Phase III study. This application and the simulation studies show that the win ratio can be a better statistical measure than the odds ratio when the importance order among components matters, and that the method per our approach and that by Luo et al., although derived based on large-sample theory, are not limited to large samples but are also good for relatively small sample sizes. Different from Pocock et al. and Luo et al., our approach is a generalized analytical method, which is valid for any algorithm determining winners, losers and ties. Copyright © 2016 John Wiley & Sons, Ltd.

19.
In clinical studies, pairwise comparisons are frequently performed to examine differences in efficacy between treatments. The statistical methods of pairwise comparisons are available when treatment responses are measured on an ordinal scale. The Wilcoxon–Mann–Whitney test and the latent normal model are popular examples. However, these procedures cannot be used to compare treatments in parallel groups (a two-way design) when overall type I error must be controlled. In this paper, we explore statistical approaches to the pairwise testing of treatments that satisfy the requirements of a two-way layout. The results of our simulation indicate that the latent normal approach is superior to the Wilcoxon–Mann–Whitney test. Clinical examples are used to illustrate our suggested testing methods.

20.
In this work, we develop a method of adaptive nonparametric estimation based on 'warped' kernels. The aim is to estimate a real-valued function s from a sample of random couples (X, Y). We deal with transformed data (Φ(X), Y), with Φ a one-to-one function, to build a collection of kernel estimators. The data-driven bandwidth selection is performed with a method inspired by Goldenshluger and Lepski (Ann. Statist., 39, 2011, 1608). The method handles various problems, such as additive and multiplicative regression, conditional density estimation, hazard rate estimation based on randomly right-censored data, and cumulative distribution function estimation from current-status data. The interest is threefold. First, the squared-bias/variance trade-off is automatically realized. Next, non-asymptotic risk bounds are derived. Lastly, the estimator is easily computed thanks to its simple expression; a short simulation study is presented.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号