1.
Peto and Peto (1972) studied rank-invariant tests for comparing two survival curves with right-censored data. We apply their tests, including the logrank test and the generalized Wilcoxon test, to left-truncated and interval-censored data. The significance levels of the tests are approximated by Monte Carlo permutation tests. Simulation studies are conducted to show their size and power under different distributional alternatives. In particular, the logrank test works well under Cox proportional hazards alternatives, as for the usual right-censored data. The methods are illustrated by an analysis of the Massachusetts Health Care Panel Study dataset.
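The Monte Carlo permutation approach described above can be sketched for ordinary right-censored data (the left-truncated, interval-censored extension requires the paper's generalized scores). A minimal illustration, with function names that are ours rather than the authors':

```python
import numpy as np

def logrank_stat(time, event, group):
    """Standardized logrank statistic for two right-censored samples:
    observed-minus-expected events in group 1 over sqrt(hypergeometric variance)."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):          # distinct event times
        at_risk = time >= t
        n = at_risk.sum()                          # total at risk at t
        n1 = (at_risk & (group == 1)).sum()        # at risk in group 1
        d = ((time == t) & (event == 1)).sum()     # events at t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / np.sqrt(var)

def permutation_pvalue(time, event, group, n_perm=1000, seed=0):
    """Approximate the significance level by Monte Carlo permutation of group labels."""
    rng = np.random.default_rng(seed)
    obs = abs(logrank_stat(time, event, group))
    hits = sum(
        abs(logrank_stat(time, event, rng.permutation(group))) >= obs
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)               # add-one to avoid p = 0
```

The same permutation scheme carries over once the score contributions are replaced by ones appropriate to left truncation and interval censoring.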
2.
In a clinical trial with time to an event as the outcome of interest, we may randomize a number of matched subjects, such as litters, to different treatments. The number of treatments equals the number of subjects per litter: two in the case of twins. In this case, the survival times of matched subjects may be dependent. Although the standard rank tests for independent samples, such as the logrank and Wilcoxon tests, may be used to test the equality of the marginal survival distributions, their standard errors should be modified to accommodate possible dependence of survival times between matched subjects. In this paper we propose a method of calculating the standard error of rank tests for paired two-sample survival data. The method extends naturally to K-sample tests under dependence.
3.
We describe a class of rank test procedures for the two-sample problem with right-censored survival data. The class generalizes the linear rank tests directly by assigning each observation a rank according to its corresponding Wilcoxon score. It allows a flexible choice of score functions, in particular ones powerful against scale differences between the two survival distributions. Monte Carlo simulations show that some members of this class have great power for detecting crossing-curve alternatives (alternatives under which the underlying survival curves cross). The class also contains tests essentially equivalent to the Gehan-Wilcoxon and logrank tests.
4.
Sample size determination is essential during the planning phase of a clinical trial. To calculate the required sample size for paired right-censored data, the structure of the within-pair correlation must be pre-specified. In this article, we consider popular parametric copula models, including the Clayton, Gumbel, and Frank families, for the distribution of the joint survival times. Under each copula model, we derive a sample size formula based on the testing framework for rank-based and non-rank-based tests (i.e., the logrank test and the Kaplan–Meier statistic, respectively). We also investigate how the power and the sample size are affected by the choice of testing method and copula model under different alternative hypotheses. In addition, we examine the impact of within-pair correlation, accrual time, follow-up time, and loss-to-follow-up rate on sample size estimation. Finally, two real-world studies illustrate our method, and R code is available to the user.
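The article's copula-based formulas account for within-pair dependence; as a point of comparison, the classical Schoenfeld formula for the required number of events under an independent-sample logrank test can be sketched as follows (the function name is ours, and the paired formulas generally need fewer events as the within-pair correlation grows):

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, p1=0.5):
    """Required number of events for the two-sample logrank test
    (Schoenfeld's formula, independent samples, allocation fraction p1)."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    return ceil((z_a + z_b) ** 2 / (p1 * (1 - p1) * log(hazard_ratio) ** 2))
```

For a hazard ratio of 0.5 at two-sided alpha = 0.05 and 80% power this gives 66 events; the accrual and follow-up assumptions then convert required events into a number of patients.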
5.
For data subject to right censoring, it is suggested that the Wilcoxon ranking procedure can be generalized by scoring observations according to the expected values of order statistics from the uniform distribution subject to the same right censoring. This parallels the logrank scoring procedure, in which scores correspond to the expected values of order statistics from the exponential distribution subject to the same right censoring. A caveat is that in situations where the censoring mechanism has been affected by treatment, the usual permutational analysis of ranking scores would be inappropriate, but a jackknife approach can remedy this.
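In the uncensored case the two sets of scores have simple closed forms, which the censored versions above generalize. A sketch, with helper names that are ours:

```python
def uniform_scores(n):
    """Wilcoxon scores: expected uniform order statistics E[U_(i:n)] = i / (n + 1)."""
    return [i / (n + 1) for i in range(1, n + 1)]

def exponential_scores(n):
    """Logrank (Savage) scores: expected exponential order statistics
    E[X_(i:n)] = sum over j = n-i+1 .. n of 1/j."""
    scores, total = [], 0.0
    for i in range(1, n + 1):
        total += 1.0 / (n - i + 1)
        scores.append(total)
    return scores
```

Under censoring, the expectation is taken over order statistics subject to the same censoring pattern, which no longer has this simple form.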
6.
In tumorigenicity experiments, each animal begins in a tumor-free state and then either develops a tumor or dies before developing one. Animals that develop a tumor die either from the tumor or from other competing causes. All surviving animals are sacrificed at the end of the experiment, normally two years. The two most commonly used statistical tests are the logrank test, for comparing hazards of death from rapidly lethal tumors, and the Hoel-Walburg test, for comparing prevalences of nonlethal tumors. However, the data obtained from a carcinogenicity experiment generally contain a mixture of fatal and incidental tumors. Peto et al. (1980) suggested combining the fatal and incidental tests for a comparison of tumor onset distributions.

Extensive simulations show that the trend test for tumor onset using the Peto procedure has the proper size, under the simulation constraints, when the groups have identical mortality patterns, and that the test with continuity correction tends to be conservative. When the animals in the dosed groups have reduced survival rates, the type I error rate is likely to exceed the nominal level. The continuity correction is recommended for a small reduction in survival time among the dosed groups to ensure the proper size. However, when there is a large reduction in survival times in the dosed groups, the onset test does not have the proper size.
7.

In this paper, ANOVA-type test procedures based on weighted transformations of the cumulative hazard are discussed. These procedures may be applied in situations where the observations are censored and/or truncated. Moreover, the techniques examined are flexible thanks to the choice of different transformations and weight functions. The popular logrank test is used as a yardstick in the performance evaluation.
8.
For time-to-event data, the power of the two-sample logrank test for the comparison of two treatment groups can be greatly influenced by the ratio of the numbers of patients in the treatment groups. Despite the possible loss of power, unequal allocations may be of interest due to a need to collect more data on one of the groups or to considerations related to the acceptability of the treatments to patients. Investigators pursuing such designs may be interested in the cost of the unbalanced design relative to a balanced design with respect to the total number of patients required for the study. We present graphical displays to illustrate the sample size adjustment factor, i.e., the ratio of the sample size required by an unequal allocation to the sample size required by a balanced allocation, for various survival rates, treatment hazard ratios, and sample size allocation ratios. These graphical displays conveniently summarize information in the literature and provide a useful tool for planning sample sizes for the two-sample logrank test. Copyright © 2010 John Wiley & Sons, Ltd.
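When the power of the logrank test depends on the allocation only through the product of the group fractions (a common large-sample approximation), the adjustment factor has a simple closed form. A sketch under that assumption, with a function name that is ours:

```python
def allocation_adjustment(r):
    """Total-sample-size inflation of an r:1 allocation relative to 1:1,
    holding power fixed: (r + 1)**2 / (4 * r)."""
    return (r + 1) ** 2 / (4 * r)
```

Under this approximation a 2:1 allocation costs about 12.5% more patients overall and 3:1 about 33% more, consistent with the general shape of such displays.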
9.
Kendall and Gehan estimating functions are commonly used to estimate the regression parameter in the accelerated failure time model with censored observations in survival analysis. In this paper, we apply the jackknife empirical likelihood method to overcome the computational difficulty of interval estimation. A Wilks' theorem of jackknife empirical likelihood for U-statistic-type estimating equations is established and used to construct confidence intervals for the regression parameter. We carry out an extensive simulation study to compare the Wald-type procedure, the empirical likelihood method, and the jackknife empirical likelihood method. The proposed jackknife empirical likelihood method performs better than the existing methods. A real data set is also used to illustrate the proposed methods.
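The jackknife step can be sketched generically: replace the U-statistic-type estimator by its jackknife pseudo-values and then apply ordinary empirical likelihood to them as if they were i.i.d. observations. The helper below is illustrative (the name is ours), and the empirical likelihood maximization itself is omitted:

```python
import numpy as np

def jackknife_pseudovalues(x, stat):
    """Jackknife pseudo-values: n*T(all) - (n-1)*T(leave-one-out).
    For U-statistic-type estimators these behave asymptotically like
    i.i.d. observations, which is what licenses applying ordinary
    empirical likelihood (and its Wilks' theorem) to them."""
    x = np.asarray(x)
    n = len(x)
    full = stat(x)
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    return n * full - (n - 1) * loo
```

For a linear statistic such as the sample mean the pseudo-values reproduce the observations exactly; for genuine U-statistics they are only asymptotically i.i.d., which the Wilks' theorem of the paper formalizes.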