Similar Articles
1.
In this paper, we discuss inference for the Box-Cox transformation model with left-truncated and right-censored data, which often arise, for example, in studies with a cross-sectional sampling scheme. It is well known that the Box-Cox transformation model includes many commonly used models as special cases, such as the proportional hazards model and the additive hazards model. For inference, a Bayesian estimation approach is proposed in which a piecewise function is used to approximate the baseline hazard function. A conditional marginal prior, whose marginal part is free of any constraints, is employed to handle the computational challenges caused by the constraints on the parameters, and an MCMC sampling procedure is developed. A simulation study assessing the finite-sample performance of the proposed method indicates that it works well in practical situations. We apply the approach to a set of data arising from a retirement center.
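To make the approximation idea concrete: a common instance of a piecewise baseline-hazard approximation is a step function that is constant between cut points. The sketch below (plain Python; the cut points and levels are hypothetical, and this shows only the approximation device, not the paper's Bayesian procedure) evaluates such a hazard and its cumulative integral:

```python
import bisect

def piecewise_hazard(t, cuts, levels):
    """Step hazard: levels[k] holds on the k-th interval defined by cuts."""
    return levels[bisect.bisect_left(cuts, t)]

def cumulative_hazard(t, cuts, levels):
    """Integrate the step hazard from 0 to t."""
    H, prev = 0.0, 0.0
    for s, lam in zip(cuts, levels):
        if t <= s:
            return H + (t - prev) * lam
        H += (s - prev) * lam
        prev = s
    return H + (t - prev) * levels[-1]
```

With cuts [1.0, 2.0] and levels [0.5, 1.0, 2.0], the hazard is 0.5 up to time 1, then 1.0 up to time 2, then 2.0; in a Bayesian fit the levels would be the quantities sampled by MCMC.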

2.
When the differences between two survival functions occur early in time, the Wilcoxon test is the most powerful choice, whereas when the differences occur late, the log-rank test performs better. A researcher therefore needs a test that is stable across both situations. In this paper, a new distribution-free two-sample test is proposed and studied. It is useful when one must otherwise choose between the log-rank and Wilcoxon tests, since its power is roughly the maximum of the powers of the two.
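For reference, the standardized log-rank statistic that both tests build on can be computed directly from its observed-minus-expected form. A minimal pure-Python sketch (illustrative only, assuming 1 = event and 0 = censored):

```python
def logrank_stat(times1, events1, times2, events2):
    """Two-sample log-rank statistic, standardized (O - E) / sqrt(V)."""
    data = [(t, e, 0) for t, e in zip(times1, events1)] + \
           [(t, e, 1) for t, e in zip(times2, events2)]
    event_times = sorted({t for t, e, _ in data if e == 1})
    O = E = V = 0.0
    for t in event_times:
        n = sum(1 for ti, _, _ in data if ti >= t)              # at risk, both groups
        n1 = sum(1 for ti, _, g in data if ti >= t and g == 0)  # at risk, group 1
        d = sum(1 for ti, e, _ in data if ti == t and e == 1)   # events at t
        d1 = sum(1 for ti, e, g in data if ti == t and e == 1 and g == 0)
        O += d1
        E += d * n1 / n
        if n > 1:
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (O - E) / V ** 0.5 if V > 0 else 0.0
```

The Gehan-Wilcoxon variant is obtained by weighting each event time's contribution to O, E, and V by n, the total number at risk at that time, which is what shifts sensitivity toward early differences.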

3.
This article focuses on estimation of the percentile residual life function with left-truncated and right-censored data. Asymptotic normality of the proposed empirical estimator is established, together with a pointwise confidence interval that does not require estimating the unknown underlying distribution function. Simulation studies and a real-data example illustrate the results.

4.
Length-biased data, often encountered in engineering, economics, and epidemiology studies, are generally subject to right censoring caused by the end of the study or loss to follow-up. The structure of length-biased data is distinct from that of conventional survival data, since the independent-censoring assumption is often violated by the biased sampling. In this paper, a proportional hazards model with varying coefficients is considered for length-biased and right-censored data. A local composite likelihood procedure is put forward for estimating the unknown coefficient functions in the model, and large-sample properties of the proposed estimators are obtained. An extensive simulation study is conducted to assess the finite-sample performance of the proposed method, and a data set from the Academy Awards is analyzed.

5.
A new test of the proportional hazards assumption in the Cox model is proposed. The idea is based on Neyman's smooth tests. The Cox model with proportional hazards (i.e. time-constant covariate effects) is embedded in a model with a smoothly time-varying covariate effect that is expressed as a combination of some basis functions (e.g., Legendre polynomials, cosines). The smooth test is then the score test for significance of these artificial covariates. Furthermore, we apply a modification of Schwarz's selection rule to choose the dimension of the smooth model (the number of basis functions); the score test is then carried out in the selected model. In a simulation study, we compare the proposed tests with standard tests based on the score process.
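The embedding step can be sketched directly: each original covariate x is multiplied by basis functions of (rescaled) time, and the resulting artificial covariates enter the Cox model; under proportional hazards their coefficients are zero, so the score test for them is the smooth test. A minimal sketch with Legendre polynomials (plain Python; rescaling time to [-1, 1] via a hypothetical `t_max` is an illustrative choice, not the paper's exact construction):

```python
def legendre(j, u):
    """Legendre polynomial P_j(u) via Bonnet's recurrence."""
    p0, p1 = 1.0, u
    if j == 0:
        return p0
    for k in range(1, j):
        p0, p1 = p1, ((2 * k + 1) * u * p1 - k * p0) / (k + 1)
    return p1

def smooth_covariates(x, t, t_max, dim):
    """Artificial covariates x * P_j(2t/t_max - 1), j = 1..dim.
    Under proportional hazards their Cox coefficients are all zero."""
    u = 2.0 * t / t_max - 1.0
    return [x * legendre(j, u) for j in range(1, dim + 1)]
```

A selection rule such as the modified Schwarz criterion would then pick `dim` before the score test is applied.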

6.
This paper considers the estimation of the regression coefficients in the Cox proportional hazards model with left-truncated and interval-censored data. Using the approaches of Pan [A multiple imputation approach to Cox regression with interval-censored data, Biometrics 56 (2000), pp. 199–203] and Heller [Proportional hazards regression with interval censored data using an inverse probability weight, Lifetime Data Anal. 17 (2011), pp. 373–385], we propose two estimates of the regression coefficients. The first estimate is based on a multiple imputation methodology. The second estimate uses an inverse probability weight to select event time pairs where the ordering is unambiguous. A simulation study is conducted to investigate the performance of the proposed estimators. The proposed methods are illustrated using the Centers for Disease Control and Prevention (CDC) acquired immunodeficiency syndrome (AIDS) Blood Transfusion Data.

7.
This paper considers two-sample nonparametric comparison of survival functions when data are subject to left truncation and interval censoring. We propose a class of rank-based tests that generalize the weighted log-rank tests for right-censored data. Simulation studies indicate that the proposed tests are appropriate for practical use.

8.
The proportional hazards regression model of Cox (1972) is widely used in analyzing survival data. We examine several goodness-of-fit tests for checking the proportionality of hazards in the Cox model with two-sample censored data, and compare the performance of these tests in a simulation study. The strengths and weaknesses of the tests are pointed out, and the effects of the extent of random censoring on size and power are also examined. The simulation results demonstrate that Gill and Schumacher's test is most powerful against a broad range of monotone departures from the proportional hazards assumption, but it may fail for alternatives with a nonmonotone hazard ratio. For the latter kind of alternative, Andersen's test may detect patterns of irregular changes in hazards.

9.
To estimate model parameters from complex sample data, we apply maximum likelihood techniques to complex sample data from a finite population, which is treated as a sample from an infinite superpopulation. General asymptotic distribution theory is developed and then applied to both logistic regression and discrete proportional hazards models. Data from the Lipid Research Clinics Program are used to illustrate each model, demonstrating the effects on inference of neglecting the sampling design during parameter estimation. These empirical results also shed light on the issue of model-based versus design-based inference.

10.
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing; however, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric form of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
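The core of such a power simulation is generating event times under a specified violation of proportional hazards, typically by inverting the cumulative hazard so that H(T) is standard exponential. A minimal sketch under one hypothetical non-PH scenario, a covariate effect that vanishes after time `tau` (the defaults `lam`, `beta`, `tau` are illustrative, not taken from the article):

```python
import math, random

def cum_hazard(t, x, lam=1.0, beta=0.7, tau=1.0):
    """H(t|x) for h(t|x) = lam*exp(beta*x) on [0, tau) and lam afterwards."""
    early = lam * math.exp(beta * x)
    return early * min(t, tau) + lam * max(0.0, t - tau)

def invert_cum_hazard(target, x, lam=1.0, beta=0.7, tau=1.0):
    """Solve H(t|x) = target for t (closed form for this step scenario)."""
    early = lam * math.exp(beta * x)
    if target <= early * tau:
        return target / early
    return tau + (target - early * tau) / lam

def sample_time(x, rng, **kw):
    """Inverse-transform draw: H(T|x) ~ Exp(1), so T = H^{-1}(-log U)."""
    return invert_cum_hazard(-math.log(rng.random()), x, **kw)
```

A power study would repeatedly draw samples this way, apply the candidate diagnostic each time, and report the rejection rate.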

11.
In longitudinal studies, the proportional hazards model is often used to analyse covariate effects on the duration time, defined as the elapsed time between a first and a second event. In this article, we consider the situation in which the first event is subject to partial interval censoring and the second event to left truncation and right censoring. We propose a two-step procedure for estimating the regression coefficients of the proportional hazards model. A simulation study is conducted to investigate the performance of the proposed estimator.

12.
A popular choice when analyzing ordinal data is to use the cumulative proportional odds model to relate the marginal probabilities of the ordinal outcome to a set of covariates. However, application of this model relies on the condition of identical cumulative odds ratios across the cut-offs of the ordinal outcome: the well-known proportional odds assumption. This paper focuses on the assessment of this assumption while accounting for repeated and missing data. In this respect, we develop a statistical method built on multiple imputation (MI) and generalized estimating equations that allows testing the proportionality assumption under the missing-at-random setting. The performance of the proposed method is evaluated for two MI algorithms for incomplete longitudinal ordinal data. The impact of the two MI methods is compared with respect to type I error rate and power for situations covering various numbers of outcome categories, sample sizes, rates of missingness, and well-balanced and skewed data. A comparison of both MI methods with the complete-case analysis is also provided. We illustrate the use of the proposed methods on quality-of-life data from a cancer clinical trial.

13.
For a censored two-sample problem, Chen and Wang [Y.Q. Chen and M.-C. Wang, Analysis of accelerated hazards models, J. Am. Statist. Assoc. 95 (2000), pp. 608–618] introduced the accelerated hazards model. The scale-change parameter in this model characterizes the association of the two groups. However, its estimator involves the unknown density in the asymptotic variance, so numerically intensive methods are needed to make inference on the parameter. The goal of this article is to propose a simple estimation method in which the estimators are asymptotically normal with a density-free asymptotic variance. Some lack-of-fit tests also follow from this approach. These tests are related to Gill–Schumacher type tests [R.D. Gill and M. Schumacher, A simple test of the proportional hazards assumption, Biometrika 74 (1987), pp. 289–300], in which the estimating functions are evaluated at two different weight functions, yielding two estimators that are close to each other. Numerical studies show that for some weight functions the estimators and tests perform well. The proposed procedures are illustrated in two applications.

14.
The standardized hazard ratio for univariate proportional hazards regression is generalized as a scalar to multivariate proportional hazards regression. Estimators of the standardized log hazard ratio are developed, with corrections for bias and for regression to the mean in high-dimensional analyses. Tests of point and interval null hypotheses and confidence intervals are constructed. Cohort sampling study designs, commonly used in prospective–retrospective clinical genomic studies, are accommodated.

15.
Several omnibus tests of the proportional hazards assumption have been proposed in the literature. In the two-sample case, tests have also been developed against ordered alternatives such as a monotone hazard ratio and a monotone ratio of cumulative hazards. Here we propose a natural extension of these partial orders to the case of continuous and potentially time-varying covariates, and develop tests for the proportional hazards assumption against such ordered alternatives. The work is motivated by applications in biomedicine and economics where covariate effects often decay over the lifetime. The proposed tests do not make restrictive assumptions on the underlying regression model, and are applicable in the presence of time-varying covariates, multiple covariates, and frailty. Small-sample performance and an application to real data highlight the use of the framework and methodology to identify and model the nature of departures from proportionality.

16.
In this article, we propose a non-parametric quantile inference procedure for cause-specific failure probabilities to estimate the lifetime distribution of length-biased and right-censored data with competing risks. We also derive the asymptotic properties of the proposed estimators of the quantile function. Furthermore, the results are used to construct confidence intervals and bands for the quantile function. Simulation studies are conducted to illustrate the method and theory, and an application to unemployment data is presented.
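For orientation only: with ordinary right-censored data (no length bias or competing risks), a survival quantile is obtained by inverting the Kaplan–Meier curve, i.e. taking the smallest time at which the estimated survival drops to 1 - q. The sketch below shows just that inversion idea in plain Python; the article's estimators additionally correct for length bias and cause-specific failure, which this sketch does not attempt:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate; returns the (time, S(time)) step points."""
    pairs = sorted(zip(times, events))
    n, S, steps = len(pairs), 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = at = 0
        while i < len(pairs) and pairs[i][0] == t:
            at += 1           # subjects leaving the risk set at t
            d += pairs[i][1]  # events (1) vs censorings (0) at t
            i += 1
        if d:
            S *= 1 - d / n
            steps.append((t, S))
        n -= at
    return steps

def km_quantile(steps, q):
    """Smallest t with S(t) <= 1 - q, or None if not reached."""
    for t, s in steps:
        if s <= 1 - q + 1e-12:
            return t
    return None
```

With four uncensored times 1, 2, 3, 4 the estimated median is 2, the first time the survival curve falls to 0.5.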

17.
Competing risks data are common in medical research, where the lifetimes of individuals can be classified by cause of failure. In survival and reliability studies, it is common for patients (objects) to be subject to both left censoring and right censoring, which is referred to as double censoring. The analysis of doubly censored competing risks data in the presence of covariates is the objective of this study. We propose a proportional hazards model for the analysis of doubly censored competing risks data, using the hazard rate functions of Gray (1988), while focusing on one major cause of failure. We derive estimators for the regression parameter vector and the cumulative baseline cause-specific hazard rate function, and discuss their asymptotic properties. A simulation study is conducted to assess the finite-sample behavior of the proposed estimators. We illustrate the method using real-life doubly censored competing risks data.

18.
Quantile regression methods have been used to estimate upper and lower quantile reference curves as functions of several covariates. In this article, it is demonstrated that the estimating equation of Zhou [A weighted quantile regression for randomly truncated data, Comput. Stat. Data Anal. 55 (2011), pp. 554–566] can be extended to analyse left-truncated and right-censored data. We evaluate the finite-sample performance of the proposed estimators through simulation studies. The proposed estimator β̂(q) is applied to the Veteran's Administration lung cancer data reported by Prentice [Exponential survival with censoring and explanatory variables, Biometrika 60 (1973), pp. 279–288].

19.
For testing the equality of two survival functions, the weighted log-rank test and the weighted Kaplan–Meier test are the two most widely used methods. Each of these tests has advantages and drawbacks against various alternatives, and the type of survival difference cannot be specified in advance. Hence, how to choose a single test, or combine several competing tests, to detect differences between two survival functions without a substantial loss in power is an important issue. Instead of directly using a particular test that performs well in some situations and poorly in others, we consider a class of tests indexed by a weight parameter for testing the equality of two survival functions. A delete-1 jackknife method is implemented to select the weight that minimizes the variance of the test. Numerical experiments under various alternatives illustrate the superiority of the proposed method. Finally, the proposed testing procedure is applied to two real-data examples.
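The delete-1 jackknife step itself is generic: recompute the statistic on each leave-one-out sample and combine the results into a variance estimate, which the weight selection can then minimize. A minimal sketch (plain Python; `stat` stands for whatever functional of the sample is being stabilized, here illustrated with the sample mean):

```python
def jackknife_variance(stat, sample):
    """Delete-1 jackknife variance of statistic `stat` over `sample`."""
    n = len(sample)
    loo = [stat(sample[:i] + sample[i + 1:]) for i in range(n)]  # leave-one-out values
    mean = sum(loo) / n
    return (n - 1) / n * sum((v - mean) ** 2 for v in loo)
```

For the sample mean of [1, 2, 3, 4] this returns 5/12, matching the classical variance estimate s²/n of the mean.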

20.
The ROC curve is a fundamental evaluation tool in medical research and survival analysis. Estimation of the ROC curve has been studied extensively for complete data and right-censored survival data; however, those methods are not suitable for length-biased and right-censored data. Because such data carry the auxiliary information that the truncation time and the residual time share the same distribution, two new estimators of the ROC curve are proposed that exploit this information to improve estimation efficiency. Numerical simulation studies under different settings and a real-data analysis are conducted.
