Similar Documents
20 similar documents found.
1.
In tumorigenicity experiments, each animal begins in a tumor-free state and then either develops a tumor or dies before developing one. Animals that develop a tumor die either from the tumor or from other competing causes. All surviving animals are sacrificed at the end of the experiment, normally two years. The two most commonly used statistical tests are the logrank test for comparing hazards of death from rapidly lethal tumors and the Hoel-Walburg test for comparing prevalences of nonlethal tumors. However, the data obtained from a carcinogenicity experiment generally contain a mixture of fatal and incidental tumors. Peto et al. (1980) suggested combining the fatal and incidental tests for a comparison of tumor onset distributions.

Extensive simulations show that the trend test for tumor onset using the Peto procedure has the proper size, under the simulation constraints, when each group has identical mortality patterns, and that the test with continuity correction tends to be conservative. When the animals in the dosed groups have reduced survival rates, the type I error rate is likely to exceed the nominal level. The continuity correction is recommended for a small reduction in survival time among the dosed groups to ensure the proper size. However, when there is a large reduction in survival times in the dosed groups, the onset test does not have the proper size.
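The fatal-tumor component of such a comparison is the ordinary logrank test. As a rough illustration of the mechanics only (the unweighted two-group form, not the Peto combination of fatal and incidental components; the function name is chosen here for exposition), a minimal pure-Python sketch:

```python
import math

def logrank_test(times1, events1, times2, events2):
    """Two-group logrank chi-square statistic (1 df).

    times*: observation times; events*: 1 = death observed, 0 = censored.
    At each distinct event time, the observed number of deaths in group 1
    is compared with its hypergeometric expectation among subjects at risk.
    """
    data = [(t, e, 0) for t, e in zip(times1, events1)] + \
           [(t, e, 1) for t, e in zip(times2, events2)]
    event_times = sorted({t for t, e, _ in data if e == 1})
    o_minus_e, var = 0.0, 0.0
    for t in event_times:
        at_risk = [(tt, ee, g) for tt, ee, g in data if tt >= t]
        n = len(at_risk)
        n1 = sum(1 for tt, ee, g in at_risk if g == 0)
        d = sum(1 for tt, ee, g in at_risk if ee == 1 and tt == t)
        d1 = sum(1 for tt, ee, g in at_risk if ee == 1 and tt == t and g == 0)
        o_minus_e += d1 - d * n1 / n              # observed minus expected
        if n > 1:                                 # hypergeometric variance
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var
```

With identical groups the statistic is 0; well-separated survival times push it past the 5% chi-square cutoff of 3.84.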

2.
A new statistical approach is developed for estimating the carcinogenic potential of drugs and other chemical substances used by humans. Improved statistical methods are developed for rodent tumorigenicity assays that have interval sacrifices but not cause-of-death data. For such experiments, this paper proposes a nonparametric maximum likelihood estimation method for estimating the distributions of the time to onset of and the time to death from the tumour. The log-likelihood function is optimized using a constrained direct search procedure. Using the maximum likelihood estimators, the number of fatal tumours in an experiment can be imputed. By applying the proposed procedure to a real data set, the effect of calorie restriction is investigated. In this study, we found that calorie restriction significantly delays the onset time of pituitary tumours. The present method can result in substantial economic savings by relieving the need for a case-by-case assignment of the cause of death or context of observation by pathologists. The ultimate goal of the method proposed is to use the imputed number of fatal tumours to modify Peto's International Agency for Research on Cancer test for application to tumorigenicity assays that lack cause-of-death data.

3.
We consider seven exact unconditional testing procedures for comparing adjusted incidence rates between two groups from a Poisson process. Exact tests are always preferable due to the guarantee of test size in small to medium sample settings. Han [Comparing two independent incidence rates using conditional and unconditional exact tests. Pharm Stat. 2008;7(3):195–201] compared the performance of partial maximization p-values based on the Wald test statistic, the likelihood ratio test statistic, the score test statistic, and the conditional p-value. These four testing procedures do not perform consistently, as the results depend on the choice of test statistic for general alternatives. We consider the approach based on estimation and partial maximization, and compare it to the procedures studied by Han (2008) for testing superiority. The procedures are compared with regard to the actual type I error rate and power under various conditions. An example from a biomedical research study is provided to illustrate the testing procedures. The approach based on partial maximization using the score test is recommended due to its comparable performance and computational advantage in large sample settings. Additionally, the approach based on estimation and partial maximization performs consistently for all three test statistics, and is also recommended for use in practice.
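For intuition about two of the underlying test statistics, a small sketch of asymptotic Wald- and score-type statistics for comparing two Poisson rates (the function name is illustrative; the exact unconditional procedures in the paper maximize or estimate over the nuisance rate rather than using these normal approximations directly):

```python
import math

def poisson_two_sample_stats(x1, t1, x2, t2):
    """Wald and score statistics for H0: rate1 == rate2, given counts
    x1, x2 observed over exposure times t1, t2 from Poisson processes."""
    r1, r2 = x1 / t1, x2 / t2
    pooled = (x1 + x2) / (t1 + t2)        # MLE of the common rate under H0
    # Wald: variance estimated without the null constraint
    wald = (r1 - r2) / math.sqrt(x1 / t1 ** 2 + x2 / t2 ** 2)
    # Score: variance estimated under the null (pooled rate)
    score = (r1 - r2) / math.sqrt(pooled * (1 / t1 + 1 / t2))
    return wald, score
```

Both statistics are referred to the standard normal distribution in large samples; they coincide when exposures are equal and rates are balanced around the pooled estimate.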

4.
The importance of the normal distribution for fitting continuous data is well known. However, in many practical situations the data distribution departs from normality. For example, the sample skewness and the sample kurtosis may be far from 0 and 3, respectively, which are properties of normal distributions. So, it is important to have formal tests of normality against any alternative. D'Agostino et al. [A suggestion for using powerful and informative tests of normality, Am. Statist. 44 (1990), pp. 316–321] review four procedures Z²(g₁), Z²(g₂), D and K² for testing departure from normality. The first two of these procedures are tests of normality against departure due to skewness and kurtosis, respectively. The other two are omnibus tests. An alternative to the normal distribution is the class of skew-normal distributions (see [A. Azzalini, A class of distributions which includes the normal ones, Scand. J. Statist. 12 (1985), pp. 171–178]). In this paper, we obtain a score test (W) and a likelihood ratio test (LR) of goodness of fit of the normal regression model against the skew-normal family of regression models. It turns out that the score test is based on the sample skewness and has a very simple form. The performance of these six procedures, in terms of size and power, is compared using simulations. The level properties of the three statistics LR, W and Z²(g₁) are similar and close to the nominal level for moderate to large sample sizes. Their power properties are also similar for small departures from normality due to skewness (γ₁ ≤ 0.4). Of these, the score test statistic has a very simple form and is computationally much simpler than the other two statistics. The LR statistic, in general, has the highest power, although it is computationally more complex as it requires estimates of the parameters under the normal model as well as under the skew-normal model.
So, the score test may be used to test for normality against small departures from normality due to skewness. Otherwise, the likelihood ratio statistic LR should be used, as it detects general departure from normality (due to both skewness and kurtosis) with, in general, the largest power.
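The skewness-based statistic can be illustrated with the classical large-sample approximation, in which the sample skewness g₁ has null standard deviation roughly √(6/n) (a simplification of the exact-moment version used by D'Agostino et al.; the function name and form are illustrative, not the paper's W statistic):

```python
import math

def skewness_score_stat(x):
    """Approximate score-type statistic for testing normality against
    skew alternatives: sample skewness g1 divided by its large-sample
    null standard deviation sqrt(6/n)."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n   # second central moment
    m3 = sum((v - m) ** 3 for v in x) / n   # third central moment
    g1 = m3 / m2 ** 1.5                     # sample skewness
    return g1 * math.sqrt(n / 6)
```

Symmetric samples give a statistic of exactly zero; right-skewed samples give a positive value, referred to the standard normal in large samples.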

5.
We examine the asymptotic and small sample properties of model-based and robust tests of the null hypothesis of no randomized treatment effect based on the partial likelihood arising from an arbitrarily misspecified Cox proportional hazards model. When the distribution of the censoring variable is either conditionally independent of the treatment group given covariates or conditionally independent of covariates given the treatment group, the numerators of the partial likelihood treatment score and Wald tests have asymptotic mean equal to 0 under the null hypothesis, regardless of whether or how the Cox model is misspecified. We show that the model-based variance estimators used in the calculation of the model-based tests are not, in general, consistent under model misspecification, yet using analytic considerations and simulations we show that their true sizes can be as close to the nominal value as tests calculated with robust variance estimators. As a special case, we show that the model-based log-rank test is asymptotically valid. When the Cox model is misspecified and the distribution of censoring depends on both treatment group and covariates, the asymptotic distributions of the resulting partial likelihood treatment score statistic and maximum partial likelihood estimator do not, in general, have a zero mean under the null hypothesis. Here neither the fully model-based tests, including the log-rank test, nor the robust tests will be asymptotically valid, and we show through simulations that the distortion to test size can be substantial.

6.
This article presents methods for testing covariate effect in the Cox proportional hazards model based on Kullback–Leibler divergence and Renyi's information measure. Renyi's measure is referred to as the information divergence of order γ (γ ≠ 1) between two distributions. In the limiting case γ → 1, Renyi's measure becomes Kullback–Leibler divergence. In our case, the two distributions correspond to the baseline and to one possibly modified by a covariate effect. Our proposed statistics are simple transformations of the parameter vector in the Cox proportional hazards model, and are compared with the Wald, likelihood ratio and score tests that are widely used in practice. Finally, the methods are illustrated using two real-life data sets.

7.
In event time data analysis, comparisons between distributions are made with the logrank test. When the data appear to exhibit crossing hazards, nonparametric weighted logrank statistics are usually suggested, accommodating different weight functions to increase power. However, the gain in power from imposing different weights has its limits, since differences before and after the crossing point may balance each other out. In contrast to the weighted logrank tests, we propose a score-type statistic based on the semiparametric heteroscedastic hazards regression model of Hsieh [2001. On heteroscedastic hazards regression models: theory and application. J. Roy. Statist. Soc. Ser. B 63, 63–79], in which the nonproportionality is explicitly modeled. Our score test is based on estimating functions derived from the partial likelihood under the heteroscedastic model considered herein. Simulation results show the benefit of modeling the heteroscedasticity and compare the power of the proposed test with two classes of weighted logrank tests (including Fleming–Harrington's test and Moreau's locally most powerful test), a Renyi-type test, and Breslow's test for acceleration. We also demonstrate the application of this test by analyzing actual data from clinical trials.

8.
This article considers the different methods for determining sample sizes for Wald, likelihood ratio, and score tests for logistic regression. We review some recent methods, report the results of a simulation study comparing each of the methods for each of the three types of test, and provide Mathematica code for calculating sample size. We consider a variety of covariate distributions, and find that a calculation method based on a first order expansion of the likelihood ratio test statistic performs consistently well in achieving a target level of power for each of the three types of test.
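As one commonly used point of reference (a Wald-based approximation for a single standardized continuous covariate, not the first-order likelihood ratio expansion recommended above), a sample size sketch; the function name, beta_star (log odds ratio per SD of the covariate) and p1 (event rate at the covariate mean) are illustrative assumptions:

```python
import math
from statistics import NormalDist

def logistic_sample_size(beta_star, p1, alpha=0.05, power=0.8):
    """Wald-based sample size for detecting a single standardized
    continuous covariate effect in logistic regression.

    beta_star: log odds ratio per SD of the covariate (under H1).
    p1: event probability at the covariate mean.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = NormalDist().inv_cdf(power)           # power quantile
    n = (z_a + z_b) ** 2 / (p1 * (1 - p1) * beta_star ** 2)
    return math.ceil(n)
```

For an odds ratio of 1.5 per SD with a 50% event rate, 5% two-sided alpha and 80% power, this gives 191 subjects; smaller effects require larger samples.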

9.
In many clinical studies where time to failure is of primary interest, patients may fail or die from one of many causes where failure time can be right censored. In some circumstances, it might also be the case that patients are known to die but the cause of death information is not available for some patients. Under the assumption that cause of death is missing at random, we compare the Goetghebeur and Ryan (1995, Biometrika, 82, 821–833) partial likelihood approach with the Dewanji (1992, Biometrika, 79, 855–857) partial likelihood approach. We show that the estimator for the regression coefficients based on the Dewanji partial likelihood is not only consistent and asymptotically normal, but also semiparametric efficient. While the Goetghebeur and Ryan estimator is more robust than the Dewanji partial likelihood estimator against misspecification of proportional baseline hazards, the Dewanji partial likelihood estimator allows the probability of missing cause of failure to depend on covariate information without the need to model the missingness mechanism. Tests for proportional baseline hazards are also suggested and a robust variance estimator is derived.

10.
In this paper, we investigate different procedures for testing the equality of two mean survival times in paired lifetime studies. We consider Owen's M-test and Q-test, a likelihood ratio test, the paired t-test, the Wilcoxon signed rank test and a permutation test based on log-transformed survival times in the comparative study. We also consider the paired t-test, the Wilcoxon signed rank test and a permutation test based on original survival times for the sake of comparison. The size and power characteristics of these tests are studied by means of Monte Carlo simulations under a frailty Weibull model. For less skewed marginal distributions, the Wilcoxon signed rank test based on original survival times is found to be desirable. Otherwise, the M-test and the likelihood ratio test are the best choices in terms of power. In general, one can choose a test procedure based on information about the correlation between the two survival times and the skewness of the marginal survival distributions.
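One of the simpler procedures in the comparison above, the paired t-test on log-transformed survival times, can be sketched in a few lines (the function name is illustrative):

```python
import math
from statistics import mean, stdev

def paired_t_on_logs(x, y):
    """Paired t statistic for two survival times per subject, computed
    on log-transformed times. Returns (t, degrees of freedom); refer t
    to a t distribution with n - 1 df."""
    d = [math.log(a) - math.log(b) for a, b in zip(x, y)]  # paired log differences
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n)), n - 1
```

Swapping the two arms flips the sign of the statistic; systematically longer times in the first arm give a positive t.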

11.
Many procedures exist for testing equality of means or medians to compare several independent distributions. However, the mean or median do not determine the entire distribution. In this article, we propose a new small-sample modification of the likelihood ratio test for testing the equality of the quantiles of several normal distributions. The merits of the proposed test are numerically compared with the existing tests (a generalized p-value method and a likelihood ratio test) with respect to their sizes and powers. The simulation results demonstrate that the proposed method is satisfactory; its actual size is very close to the nominal level. We illustrate these approaches using two real examples.

12.
Score test of homogeneity for survival data
If follow-up is made for subjects grouped into units, such as familial or spatial units, then it may be of interest to test whether the groups are homogeneous (or independent, given explanatory variables). The effect of the groups is modelled as random, and we consider a frailty proportional hazards model which allows adjustment for explanatory variables. We derive the score test of homogeneity from the marginal partial likelihood; it turns out to be the sum of a pairwise correlation term of martingale residuals and an overdispersion term. In the particular case where the sizes of the groups are equal to one, this statistic can be used for testing overdispersion. The asymptotic variance of this statistic is derived using counting process arguments. An extension to the case of several strata is given. The resulting test is computationally simple; its use is illustrated using both simulated and real data. The decomposition of the score statistic can also be exploited: the pairwise correlation term can be used to construct a statistic more robust to departures from the proportional hazards model, and the overdispersion term to construct a test of fit of the proportional hazards model.

13.
Inferences for survival curves based on right-censored continuous or grouped data are studied. Testing homogeneity against an order-restricted alternative and testing the order restriction as the null hypothesis are considered. Under a proportional hazards model, an ordering on the survival curves corresponds to an ordering on the regression coefficients. Approximate likelihood methods are obtained by applying order-restricted procedures to the estimates of the regression coefficients. Ordered analogues of the log rank test based on the score statistics are also considered. Chi-bar-squared distributions, which have been studied extensively, are shown to provide reasonable approximations to the null distributions of these test statistics. Using Monte Carlo techniques, the powers of these two types of tests are compared with those available in the literature.

14.
This paper considers the stratified proportional hazards model with a focus on the assessment of stratum effects. The assessment of such effects is often of interest, for example, in clinical trials. In this case, two relevant tests are the test of stratum interaction with covariates and the test of stratum interaction with baseline hazard functions. For the test of stratum interaction with covariates, one can use the partial likelihood method (Kalbfleisch and Prentice, 1980; Lin, 1994). For the test of stratum interaction with baseline hazard functions, however, there seems to be no formal test available. We consider this problem and propose a class of nonparametric tests. The asymptotic distributions of the tests are derived using martingale theory. The proposed tests can also be used for survival comparisons which need to be adjusted for covariate effects. The method is illustrated with data from a lung cancer clinical trial.

15.
Cure rate models are survival models characterized by improper survivor distributions, which occur when the cumulative distribution function F of the survival times does not reach 1 (i.e. F(+∞) < 1). The first objective of this paper is to provide a general approach to generating data from any improper distribution. An application to time-to-event data randomly drawn from improper distributions with proportional hazards is investigated using the semi-parametric proportional hazards model with cure, obtained as a special case of the nonlinear transformation models in [Tsodikov, Semiparametric models: A generalized self-consistency approach, J. R. Stat. Soc. Ser. B 65 (2003), pp. 759–774]. The second objective of this paper is to show by simulations that the bias, the standard error and the mean square error of the maximum partial likelihood (PL) estimator of the hazard ratio, as well as the statistical power based on the PL estimator, strongly depend on the proportion of subjects in the whole population who will never experience the event of interest.
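The data-generation idea can be sketched for an exponential cure model: with probability equal to the cure fraction the event time is infinite (in practice later right-censored at the end of follow-up), and otherwise it is drawn from the proper conditional distribution. The function name and the exponential choice are illustrative assumptions, not the paper's general construction:

```python
import math
import random

def draw_cure_exponential(cure_prob, rate, rng):
    """Draw one event time from an improper distribution with total mass
    F(+inf) = 1 - cure_prob: with probability cure_prob the subject is
    'cured' (infinite event time, returned as math.inf); otherwise the
    time follows an exponential distribution with the given rate."""
    if rng.random() < cure_prob:
        return math.inf
    return rng.expovariate(rate)
```

Over many draws the fraction of infinite times estimates the cure fraction, so the empirical distribution of the finite times recovers the proper conditional part.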

16.
A Cox-type regression model accommodating heteroscedasticity, with a power factor of the baseline cumulative hazard, is investigated for analyzing data with crossing hazards behavior. Since the approach of partial likelihood cannot eliminate the baseline hazard, an overidentified estimating equation (OEE) approach is introduced in the estimation procedure. Its by-product, a model checking statistic, is presented to test for the overall adequacy of the heteroscedastic model. Further, under the heteroscedastic model setting, we propose two statistics to test the proportional hazards assumption. Implementation of this model is illustrated in a data analysis of a cancer clinical trial.

17.
The analysis of covariance is a technique that is used to improve the power of a k-sample test by adjusting for concomitant variables. If the end point is the time of survival, and some observations are right censored, the score statistic from the Cox proportional hazards model is the method that is most commonly used to test the equality of conditional hazard functions. In many situations, however, the proportional hazards model assumptions are not satisfied. Specifically, the relative risk function is not time invariant or represented as a log-linear function of the covariates. We propose an asymptotically valid k-sample test statistic to compare conditional hazard functions which does not require the assumption of proportional hazards, a parametric specification of the relative risk function or randomization of group assignment. Simulation results indicate that the performance of this statistic is satisfactory. The methodology is demonstrated on a data set in prostate cancer.

18.
Grønnesby and Borgan (1996, Lifetime Data Analysis 2, 315–328) propose an omnibus goodness-of-fit test for the Cox proportional hazards model. The test is based on grouping the subjects by their estimated risk score and comparing, within each group, the observed number of events with a model-based estimate of the expected number. We show, using extensive simulations, that even for moderate sample sizes the choice of the number of groups is critical for the test to attain the specified size. In light of these results we suggest a grouping strategy under which the test attains the correct size even for small samples. The power of the test statistic seems to be acceptable when compared to other goodness-of-fit tests.
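A deliberately simplified observed-versus-expected sketch of the grouping idea (a Hosmer-Lemeshow-style statistic; the actual test derives its reference distribution from the covariance of the grouped score process, which is not reproduced here, and the function name is illustrative):

```python
def grouped_oe_statistic(observed, expected):
    """Sum of (O - E)^2 / E over risk-score groups, where observed[g] is
    the observed number of events in group g and expected[g] the
    model-based expected number. Large values indicate poor fit."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

A perfectly calibrated model gives 0; concentrating events in high-risk groups beyond their expectation inflates the statistic.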

19.
Marshall–Olkin extended distributions offer a wider range of behaviour than the basic distributions from which they are derived and therefore may find applications in modeling lifetime data, especially within proportional odds models, and elsewhere. The present paper carries out a simulation study of likelihood ratio, Wald and score tests for the parameter that distinguishes the extended distribution from the basic one, for the Weibull and exponential cases, allowing for right censored data. The likelihood ratio test is found to perform better than the others. The test is shown to have sufficient power to detect alternatives that correspond to interesting departures from the basic model and can be useful in modeling.

20.
Sample size calculation is a critical issue in clinical trials because a small sample size leads to a biased inference and a large sample size increases the cost. With the development of advanced medical technology, some patients can be cured of certain chronic diseases, and the proportional hazards mixture cure model has been developed to handle survival data with potential cure information. Given the needs of survival trials with potential cure proportions, a corresponding sample size formula based on the log-rank test statistic for binary covariates has been proposed by Wang et al. [25]. However, a sample size formula based on continuous variables has not been developed. Herein, we present sample size and power calculations for the mixture cure model with continuous variables based on the log-rank method, and further modify them using Ewell's method. The proposed approaches were evaluated using simulation studies with synthetic data from exponential and Weibull distributions. A program for calculating the necessary sample size for continuous covariates in a mixture cure model was implemented in R.
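A sketch of the standard Schoenfeld-style calculation that such formulas build on: the required number of events for a two-arm log-rank test, inflated by the overall event probability, which a cure fraction π caps at 1 − π. All names and the simple inflation step are illustrative assumptions, not the modified formula of the paper:

```python
import math
from statistics import NormalDist

def required_events(hazard_ratio, alpha=0.05, power=0.8, alloc=0.5):
    """Schoenfeld's required number of events for a two-arm log-rank test;
    alloc is the proportion randomized to arm 1."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * math.log(hazard_ratio) ** 2)

def required_sample_size(hazard_ratio, event_prob, **kw):
    """Translate events into subjects by dividing by the probability of
    observing an event; with cure fraction pi, event_prob <= 1 - pi."""
    return math.ceil(required_events(hazard_ratio, **kw) / event_prob)
```

For a hazard ratio of 0.5 at 5% two-sided alpha and 80% power, about 66 events are needed; if only half of the subjects ever experience the event (for example because of a cure fraction), the required sample size roughly doubles.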
