Similar Literature
Found 20 similar documents (search time: 843 ms)
1.
We studied several test statistics for testing the equality of marginal survival functions of paired censored data. The null distribution of the test statistics was approximated by permutation. These tests do not require explicit modeling or estimation of the within-pair correlation, accommodate both paired data and singletons, and are straightforward to compute with most statistical software. Numerical studies showed that these tests have competitive size and power performance. One test statistic has higher power than previously published test statistics when the two survival functions under comparison cross. We illustrate the use of these tests in a propensity score matched dataset.
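The within-pair permutation idea in this abstract can be sketched generically. The sketch below uses hypothetical data and a simple mean-of-differences statistic in place of the paper's survival-specific statistics; the label-swapping scheme within each pair is the essential ingredient:

```python
import random

def paired_permutation_test(x, y, n_perm=2000, seed=0):
    """Approximate the null distribution of the mean paired difference
    by randomly swapping the two labels within each pair."""
    rng = random.Random(seed)
    n = len(x)
    t_obs = sum(xi - yi for xi, yi in zip(x, y)) / n
    count = 0
    for _ in range(n_perm):
        t = 0.0
        for xi, yi in zip(x, y):
            if rng.random() < 0.5:        # swap labels within the pair
                xi, yi = yi, xi
            t += xi - yi
        if abs(t / n) >= abs(t_obs):
            count += 1
    return t_obs, count / n_perm

# Toy paired data in which the second member is systematically larger
x = [1.0, 1.2, 0.8, 1.1, 0.9, 1.3, 1.0, 1.05]
y = [1.6, 1.9, 1.4, 1.7, 1.5, 2.0, 1.65, 1.7]
t, p = paired_permutation_test(x, y)
```

Singletons could be accommodated by simply leaving unpaired observations fixed during the label swaps, as the abstract notes.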

2.
In this paper we consider the problem of testing hypotheses in parametric models when only the first r (of n) ordered observations are known. Using divergence measures, a procedure to test statistical hypotheses is proposed. Replacing the parameters by suitable estimators in the expression of the divergence measure yields the test statistics. Asymptotic distributions for these statistics are given in several cases when maximum likelihood estimators for truncated samples are considered. Applications of these results to testing statistical hypotheses on the basis of truncated data are presented. The small-sample behavior of the proposed test statistics is analyzed in particular cases, and a comparative study of power values is carried out by computer simulation.

3.
In this paper, we present certain statistical tests under a staggered nested design set-up for the hypotheses that certain variance components are zero. To do so, the particular variance-covariance structure induced by the staggering is exploited and certain results of multivariate analysis are used. In most problems, the test statistics can be easily computed. An example is provided for illustration and some power computations for comparison of the test statistics are shown.

4.
Selecting predictors to optimize the outcome prediction is an important statistical method. However, it usually ignores the false positives among the selected predictors. In this article, we advocate a conventional stepwise forward variable selection method based on the predicted residual sum of squares, and develop a positive false discovery rate (pFDR) estimate for the selected predictor subset, and a local pFDR estimate to prioritize the selected predictors. This pFDR estimate takes account of the existence of non-null predictors and is proved to be asymptotically conservative. In addition, we propose two views of a variable selection process: an overall test and an individual test. An interesting feature of the overall test is that its power of selecting non-null predictors increases with the proportion of non-null predictors among all candidate predictors. Data analysis is illustrated with an example in which genetic and clinical predictors were selected to predict the change in cholesterol level after four months of tamoxifen treatment, and the pFDR was estimated. Our method's performance is evaluated through statistical simulations.
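A single forward step of selection by predicted residual sum of squares (PRESS) can be sketched as follows. This is a deliberately simplified illustration with simulated data and simple (one-predictor) linear regression, not the paper's full stepwise procedure or its pFDR machinery:

```python
import random

def press_simple(x, y):
    """Leave-one-out predicted residual sum of squares for y ~ a + b*x."""
    n = len(x)
    total = 0.0
    for i in range(n):
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        mx = sum(xs) / len(xs); my = sum(ys) / len(ys)
        sxx = sum((v - mx) ** 2 for v in xs)
        sxy = sum((xs[k] - mx) * (ys[k] - my) for k in range(len(xs)))
        b = sxy / sxx
        a = my - b * mx
        total += (y[i] - (a + b * x[i])) ** 2   # predict the held-out point
    return total

# Toy data: the outcome depends on predictor p0 only; p1 is pure noise
rng = random.Random(1)
n = 40
p0 = [rng.gauss(0, 1) for _ in range(n)]
p1 = [rng.gauss(0, 1) for _ in range(n)]
y = [2 * p0[i] + rng.gauss(0, 0.5) for i in range(n)]
scores = {name: press_simple(x, y) for name, x in [("p0", p0), ("p1", p1)]}
best = min(scores, key=scores.get)   # the forward step picks the lowest PRESS
```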

5.
This paper explores how to test the equality of two location vectors in high-dimensional settings. We introduce a rank-based projection test under elliptical symmetry. The optimal projection direction is derived according to asymptotically and locally best power criteria, and a data-splitting strategy is used to estimate the optimal projection and construct the test statistics. The limiting null distribution and power function of the proposed statistics are thoroughly investigated under mild assumptions. The test is shown to control type I error rates well and outperforms several existing methods in a broad range of settings, especially in the presence of strong correlation structures. Simulation studies confirm the asymptotic results, and a real data example demonstrates the advantage of the proposed procedure.

6.
The usual tests for trends are conducted under a null hypothesis. This article presents a test of a non-null hypothesis for linear trends in proportions. A weighted least squares method is used to estimate the regression coefficient of the proportions. The non-null hypothesis sets the expectation of this coefficient equal to a prescribed regression coefficient margin, and its variance is used to construct a basic relationship for linear trends in proportions via the asymptotic normal method. From this follow derivations of the sample size formula, the power function, and the test statistic. The expected power is obtained from the power function, and the observed power is computed by the Monte Carlo method. The test reduces to the classical test for linear trends in proportions when the margin is set to zero, and the agreement between the expected and the observed power is excellent. It is the non-null counterpart of the classical test and can be applied to assess the clinical significance of trends among several proportions, whereas the classical test is restricted to assessing statistical significance. A dataset from a website is used to illustrate the methodology.
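A minimal sketch of a weighted least squares trend test in proportions, with the slope compared against a prescribed margin, might look as follows. The data, group scores, and the pooled-variance form of the standard error are illustrative assumptions, not the paper's exact formulation; setting the margin to zero recovers the classical trend test, as the abstract states:

```python
import math

def wls_trend_test(counts, totals, scores, margin=0.0):
    """WLS slope of group proportions on scores (weights = group sizes),
    with a Z statistic testing the slope against a prescribed margin."""
    props = [c / t for c, t in zip(counts, totals)]
    n = sum(totals)
    pbar = sum(counts) / n                       # pooled proportion
    xbar = sum(t * x for t, x in zip(totals, scores)) / n
    sxx = sum(t * (x - xbar) ** 2 for t, x in zip(totals, scores))
    b = sum(t * (x - xbar) * p
            for t, x, p in zip(totals, scores, props)) / sxx
    se = math.sqrt(pbar * (1 - pbar) / sxx)      # SE under pooled variance
    return b, (b - margin) / se

# Four dose groups with increasing response rates; margin=0 (classical test)
counts = [5, 12, 20, 30]
totals = [50, 50, 50, 50]
b, z = wls_trend_test(counts, totals, [0, 1, 2, 3])
```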

7.
The aim of this article is twofold: on the one hand, to introduce and study some of the statistical properties of an estimator for the Shannon entropy, and on the other hand, to develop a goodness-of-fit test for beta-generated distributions and the distribution of order statistics. Beta-generated distributions are a broad class of univariate distributions which has received great attention during the last 15 years, as it possesses attractive properties and extends the distribution of order statistics. The proposed estimator of the Shannon entropy of beta-generated distributions is motivated by Vasicek's estimator, with the latter tailored here to the class of beta-generated distributions and the distribution of order statistics. The estimator of Shannon entropy is defined and its consistency is studied. It is, moreover, exploited to build a goodness-of-fit test for the beta-generated distribution and the distribution of order statistics. Simulations are performed to examine the small- and moderate-sample properties of the proposed estimator and to compare the power of the proposed test with the power of competitors under a variety of alternatives.
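Vasicek's classical spacing-based entropy estimator, which motivates the abstract's proposal, can be sketched directly. The window size m and the uniform test sample are illustrative choices; the estimator averages the log of scaled m-spacings of the order statistics:

```python
import math
import random

def vasicek_entropy(sample, m):
    """Vasicek's estimator of Shannon entropy:
    (1/n) * sum_i log( n * (X_(i+m) - X_(i-m)) / (2m) ),
    with order-statistic indices clamped to [1, n] at the edges."""
    x = sorted(sample)
    n = len(x)
    total = 0.0
    for i in range(n):
        hi = x[min(i + m, n - 1)]
        lo = x[max(i - m, 0)]
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

rng = random.Random(0)
sample = [rng.random() for _ in range(500)]   # Uniform(0,1): true entropy is 0
h = vasicek_entropy(sample, m=15)
```

The estimator is known to be mildly biased downward in finite samples, which is visible even in this uniform example.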

8.
Multivariate techniques based on O'Brien's OLS and GLS statistics are discussed in the context of their application in clinical trials. We introduce the concept of an operational effect size and illustrate its use in evaluating power. An extension describing how to handle covariates and missing data is developed in the context of mixed models; this extension, which allows adjustment for covariates, is easily programmed in any statistical package, including SAS. Monte Carlo simulation is used for a number of different sample sizes to compare the actual size and power of the tests based on O'Brien's OLS and GLS statistics.

9.
Ye Guang. Statistical Research (《统计研究》), 2011, 28(3): 99-106
For the fully modified ordinary least squares (FMOLS) estimation method, we propose a bootstrap inference procedure for the cointegration parameters and prove that, under the null hypothesis, the bootstrap statistic has the same asymptotic distribution as the test statistic. A study of test power shows that although the restricted bootstrap achieves good empirical size, the distribution of the bootstrap statistic is indeterminate when the null hypothesis fails, so its empirical distribution cannot serve as a valid estimate of the exact distribution of the test statistic. For practical applications we recommend the unrestricted bootstrap, because its bootstrap statistic has the same asymptotic distribution as the null-hypothesis test statistic regardless of whether the observed data satisfy the null hypothesis. Finally, Monte Carlo simulation is used to compare the finite-sample performance of bootstrap inference and asymptotic inference.
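The unrestricted-bootstrap principle described in this abstract can be illustrated in a much simpler setting. The sketch below applies a pairs (unrestricted) bootstrap to an OLS slope t-statistic on simulated i.i.d. data; the FMOLS estimator and the time-series structure of cointegrated data are deliberately omitted. Centering the bootstrap statistics at the observed one is what makes the procedure valid whether or not the null holds:

```python
import math
import random

def slope_t(x, y):
    """t-statistic for the slope in simple OLS regression."""
    n = len(x)
    mx = sum(x) / n; my = sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    b = sum((x[i] - mx) * (y[i] - my) for i in range(n)) / sxx
    a = my - b * mx
    sse = sum((y[i] - a - b * x[i]) ** 2 for i in range(n))
    return b / math.sqrt(sse / (n - 2) / sxx)

def pairs_bootstrap_p(x, y, n_boot=999, seed=0):
    """Unrestricted (pairs) bootstrap: resample (x_i, y_i) pairs and
    center the bootstrap statistics at the observed statistic."""
    rng = random.Random(seed)
    t_obs = slope_t(x, y)
    n = len(x)
    hits = 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        xb = [x[i] for i in idx]; yb = [y[i] for i in idx]
        if abs(slope_t(xb, yb) - t_obs) >= abs(t_obs):
            hits += 1
    return t_obs, (hits + 1) / (n_boot + 1)

rng = random.Random(2)
x = [rng.gauss(0, 1) for _ in range(60)]
y = [1.0 * x[i] + rng.gauss(0, 1) for i in range(60)]  # true slope = 1
t, p = pairs_bootstrap_p(x, y)
```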

10.
Adaptation of clinical trial design generates many issues that have not been resolved for practical applications, even though statistical methodology has advanced greatly. This paper focuses on some methodological issues. In one type of adaptation, such as sample size re-estimation, only the postulated value of a parameter for planning the trial size may be altered. In another type, the originally intended hypothesis for testing may be modified using the internal data accumulated at an interim time of the trial, such as changing the primary endpoint or dropping a treatment arm. For sample size re-estimation, we contrast an adaptive test that weights the two-stage test statistics with the statistical information given by the original design against the original sample mean test with a properly corrected critical value. We point out the difficulty of planning a confirmatory trial based on the crude information generated by exploratory trials. With regard to selecting a primary endpoint, we argue that a selection process that allows switching from one endpoint to the other using the internal data of the trial is unlikely to gain a power advantage over the simple process of selecting one of the two endpoints by testing them with an equal split of alpha (Bonferroni adjustment). For dropping a treatment arm, redistributing the remaining sample size of the discontinued arm to the other treatment arms can substantially improve the statistical power of identifying a superior treatment arm in the design. A common and difficult methodological issue is how to select an adaptation rule at the trial planning stage; pre-specification of the adaptation rule is important for practical reasons. Changing the originally intended hypothesis for testing with the internal data raises serious concerns among clinical trial researchers.

11.
In clinical trials survival endpoints are usually compared using the log-rank test. Sequential methods for the log-rank test and the Cox proportional hazards model are widely reported in the statistical literature. When the proportional hazards assumption is violated, the hazard ratio is ill-defined and the power of the log-rank test depends on the distribution of the censoring times. The average hazard ratio was proposed as an alternative effect measure, which has a meaningful interpretation in the case of non-proportional hazards and is equal to the hazard ratio if the hazards are indeed proportional. In the present work we prove that sequential test statistics based on the average hazard ratio are asymptotically multivariate normal with the independent increments property. This allows for the calculation of group-sequential boundaries using standard methods and existing software. The finite-sample characteristics of the new method are examined in a simulation study in a proportional and a non-proportional hazards setting.

12.
Van Valen's Red Queen hypothesis states that within a homogeneous taxonomic group the age is statistically independent of the rate of extinction. The case of the Red Queen hypothesis being addressed here is when the homogeneous taxonomic group is a group of similar species. Since Van Valen's work, various statistical approaches have been used to address the relationship between taxon age and the rate of extinction. We propose a general class of test statistics that can be used to test for the effect of age on the rate of extinction. These test statistics allow for a varying background rate of extinction and attempt to remove the effects of other covariates when assessing the effect of age on extinction. No model is assumed for the covariate effects. Instead we control for covariate effects by pairing or grouping together similar species. Simulations are used to compare the power of the statistics. We apply the test statistics to data on Foram extinctions and find that age has a positive effect on the rate of extinction. A derivation of the null distribution of one of the test statistics is provided in the supplementary material.

13.
This article proposes a class of weighted differences of averages (WDA) statistics to test for and estimate possible change-points in variance for time series with weakly dependent blocks and for dependent panel data, without specific distributional assumptions. We derive the asymptotic distributions of the test statistics for testing the existence of a single variance change-point under the null and under local alternatives. We also study the consistency of the change-point estimator. Within the proposed class of WDA test statistics, a standardized WDA test is shown to have the best consistency rate and is recommended for practical use. An iterative binary searching procedure is suggested for estimating the locations of possible multiple change-points in variance, whose consistency is also established. Simulation studies compare the detection power and number of false rejections of the proposed procedure with those of a cumulative sum (CUSUM) based test and a likelihood ratio based test. Finally, we apply the proposed method to a stock index dataset and an unemployment rate dataset. Supplementary materials for this article are available online.
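For intuition, a single variance change-point can be located with a plain CUSUM-of-squares scan, the benchmark method the abstract compares against (this is not the paper's WDA statistic; the data are simulated for illustration):

```python
import random

def variance_changepoint(x):
    """Locate a single variance change-point as the maximizer of the
    centered CUSUM of squares: |sum_{i<=k} x_i^2 - (k/n) * sum x_i^2|."""
    n = len(x)
    total = sum(v * v for v in x)
    best_k, best_stat = 0, -1.0
    run = 0.0
    for k in range(1, n):
        run += x[k - 1] * x[k - 1]
        stat = abs(run - k / n * total)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

rng = random.Random(3)
# Standard deviation jumps from 1 to 3 at observation 100
x = [rng.gauss(0, 1) for _ in range(100)] + [rng.gauss(0, 3) for _ in range(100)]
k, stat = variance_changepoint(x)
```

The iterative binary-searching idea in the abstract amounts to re-running such a scan on the segments to the left and right of each detected change-point.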

14.
The authors deal with the problem of comparing receiver operating characteristic (ROC) curves from independent samples. Taking a nonparametric approach, they propose and study three different statistics. Their asymptotic distributions are obtained and a resampling plan is considered. To study the statistical power of the introduced statistics, a simulation study is carried out. The observed results suggest that, for the considered models, the new statistics are more powerful than the usually employed ones (the Venkatraman test and the usual area-under-the-ROC-curve criterion) in non-uniform dominance situations, and perform quite well otherwise.
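The benchmark criterion mentioned in this abstract, the empirical area under the ROC curve, reduces to the Mann-Whitney probability that a positive score exceeds a negative one. A minimal sketch with hypothetical classifier scores:

```python
def auc(pos, neg):
    """Empirical AUC: the fraction of (positive, negative) score pairs
    in which the positive score is larger, counting ties as one half."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical scores from a binary classifier
pos = [0.9, 0.8, 0.75, 0.6, 0.55]
neg = [0.7, 0.5, 0.4, 0.3, 0.2]
a = auc(pos, neg)   # 23 of 25 pairs are correctly ordered
```

Comparing two such AUCs across independent samples is the "usual" criterion that the abstract's proposed statistics aim to outperform when the ROC curves cross.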

15.
Estimating the proportion of true null hypotheses, π0, has attracted much attention in the recent statistical literature. Besides its apparent relevance for a set of specific scientific hypotheses, an accurate estimate of this parameter is key for many multiple testing procedures. Most existing methods for estimating π0 are motivated by the independence assumption on the test statistics, which is often not true in reality. Simulations indicate that most existing estimators can perform poorly in the presence of dependence among test statistics, mainly due to the increased variation of these estimators. In this paper, we propose several data-driven methods for estimating π0 that incorporate the distribution pattern of the observed p-values as a practical way to address potential dependence among test statistics. Specifically, we use a linear fit to give a data-driven estimate of the proportion of true-null p-values in (λ, 1] over the whole range [0, 1], instead of using the expected proportion at 1 − λ. We find that the proposed estimators may substantially decrease the variance of the estimated true null proportion and thus improve the overall performance.
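The baseline λ-threshold estimator that this abstract builds on (a Storey-type estimate) can be sketched in a few lines. The mixture of uniform null p-values and small alternative p-values below is a simulated illustration:

```python
import random

def storey_pi0(pvals, lam=0.5):
    """Storey-type estimate of the proportion of true nulls: the fraction
    of p-values above lam, rescaled by 1/(1 - lam) to the unit interval."""
    n = len(pvals)
    return sum(p > lam for p in pvals) / (n * (1 - lam))

rng = random.Random(4)
# 800 true nulls (uniform p-values) + 200 alternatives (very small p-values)
pvals = ([rng.random() for _ in range(800)]
         + [rng.random() * 0.01 for _ in range(200)])
pi0 = storey_pi0(pvals)   # true pi0 here is 0.8
```

The paper's proposal replaces the single-λ count with a linear fit across the p-value distribution, which is intended to stabilize this estimate under dependence.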

16.
Recent research on finding appropriate composite endpoints for preclinical Alzheimer's disease has focused considerable effort on finding “optimized” weights in the construction of a weighted composite score. In this paper, several proposed methods are reviewed. Our results indicate no evidence that these methods will increase the power of the test statistics, and some of these weights will introduce biases to the study. Our recommendation is to focus on identifying more sensitive items from clinical practice and appropriate statistical analyses of a large Alzheimer's data set. Once a set of items has been selected, there is no evidence that adding weights will generate more sensitive composite endpoints.

17.
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing, but the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.

18.
Several procedures have been proposed for testing the hypothesis that all off-diagonal elements of the correlation matrix of a multivariate normal distribution are equal. If the hypothesis of equal correlation can be accepted, it is then of interest to estimate and perhaps test hypotheses for the common correlation. In this paper, two versions of five different test statistics are compared via simulation in terms of adequacy of the normal approximation, coverage probabilities of confidence intervals, control of Type I error, and power. The results indicate that two test statistics based on the average of the Fisher z-transforms of the sample correlations should be used in most cases. A statistic based on the sample eigenvalues also gives reasonable results for confidence intervals and lower-tailed tests.
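An average-of-Fisher-z estimate of a common correlation, of the kind this abstract recommends, can be sketched as follows. The sample correlations, sample sizes, and the standard n − 3 weighting are illustrative assumptions; the exact weighting in the paper's two recommended statistics may differ:

```python
import math

def common_correlation_z(rs, ns):
    """Weighted average of Fisher z-transforms (weights n_i - 3), with a
    normal test statistic for H0: common correlation equals zero."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]   # Fisher z
    ws = [n - 3 for n in ns]
    zbar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    se = 1 / math.sqrt(sum(ws))            # SE of the weighted average
    rho_hat = math.tanh(zbar)              # back-transform to a correlation
    return rho_hat, zbar / se

# Hypothetical sample correlations from four groups
rs = [0.42, 0.35, 0.51, 0.38]
ns = [40, 55, 35, 60]
rho, stat = common_correlation_z(rs, ns)
```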

19.
We introduce estimation and test procedures through divergence minimization for models satisfying linear constraints with unknown parameter. These procedures extend the empirical likelihood (EL) method and share common features with the generalized empirical likelihood approach. We treat the problems of existence and characterization of the divergence projections of probability distributions on sets of signed finite measures. We give a precise characterization of duality for the proposed class of estimates and test statistics, which is used to derive their limiting distributions (including the EL estimate and the EL ratio statistic) both under the null hypotheses and under alternatives or misspecification. An approximation to the power function is deduced, as well as the sample size which ensures a desired power for a given alternative.

20.
In this paper, we suggest an extension of the cumulative residual entropy (CRE) and call it generalized cumulative entropy. The proposed entropy not only retains attributes of existing uncertainty measures but also possesses the absolute homogeneity property with unbounded support, which the CRE does not have. We demonstrate its mathematical properties, including the entropy of order statistics and the principle of maximum general cumulative entropy. We also introduce the cumulative ratio information as a measure of discrepancy between two distributions and examine its application to a goodness-of-fit test of the logistic distribution. A simulation study shows that the test statistics based on the cumulative ratio information have statistical power comparable to that of competing test statistics.
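The classical CRE that this abstract extends can be estimated empirically from the order statistics. The sketch below uses the standard definition CRE = -∫ S(x) log S(x) dx with the empirical survival function; the exponential test sample (for which the true CRE is 1/λ) is an illustrative choice, and the paper's generalized version is not reproduced here:

```python
import math
import random

def cre(sample):
    """Empirical cumulative residual entropy:
    -sum_i S(x_(i)) * log S(x_(i)) * (x_(i+1) - x_(i)),
    where S is the empirical survival function between order statistics."""
    x = sorted(sample)
    n = len(x)
    total = 0.0
    for i in range(n - 1):
        s = 1 - (i + 1) / n          # survival just after x_(i+1)
        if s > 0:
            total -= s * math.log(s) * (x[i + 1] - x[i])
    return total

rng = random.Random(5)
sample = [rng.expovariate(1.0) for _ in range(5000)]  # Exp(1): true CRE = 1
val = cre(sample)
val2 = cre([2.0 * s for s in sample])  # homogeneity: scaling by a scales CRE by a
```

The exact doubling of `val2` relative to `val` illustrates the homogeneity property the abstract discusses: the empirical CRE is linear in the spacings, so scaling the data by a > 0 scales the estimate by a.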


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号