1.
For survival endpoints in subgroup selection, a score conversion model is often used to convert the set of biomarkers for each patient into a univariate score, with the median of the univariate scores dividing the patients into biomarker-positive and biomarker-negative subgroups. However, this may lead to bias in patient subgroup identification because of 2 issues: (1) treatment may be equally effective for all patients and/or there may be no subgroup difference; (2) the median of the univariate scores may be an inappropriate cutoff if the sizes of the 2 subgroups differ substantially. We utilize a univariate composite score method to convert each patient's set of candidate biomarkers to a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess homogeneity of the sampled patients. In the context of identifying the subgroup of responders in an adaptive design to demonstrate improvement of treatment efficacy (adaptive power), we suggest that subgroup selection be carried out only if the LRT is significant. For the second issue, we utilize a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that the type I error rate is generally controlled, while the overall adaptive power to detect treatment effects decreases by approximately 4.5% across the simulation designs considered when the LRT is performed; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
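The likelihood-based change-point cutoff described above can be sketched with a toy stand-in. This is not the paper's algorithm: the sketch below assumes a two-group Gaussian mean-shift working model on the univariate scores in place of the survival likelihood, and the function name and `min_size` guard are illustrative choices.

```python
import math

def changepoint_cutoff(scores, min_size=5):
    """Scan all cutoffs on the sorted scores and return the one that
    maximizes a two-group Gaussian mean-shift profile log-likelihood
    (common variance). A stand-in for the survival-model likelihood
    in the paper; min_size guards against tiny subgroups."""
    xs = sorted(scores)
    n = len(xs)
    best_ll, best_cut = -math.inf, None
    for k in range(min_size, n - min_size + 1):
        left, right = xs[:k], xs[k:]
        m1 = sum(left) / k
        m2 = sum(right) / (n - k)
        sse = (sum((x - m1) ** 2 for x in left)
               + sum((x - m2) ** 2 for x in right))
        if sse <= 0:
            continue
        # profile log-likelihood with the pooled MLE variance sse/n
        ll = -0.5 * n * (math.log(2 * math.pi * sse / n) + 1)
        if ll > best_ll:
            best_ll, best_cut = ll, 0.5 * (xs[k - 1] + xs[k])
    return best_cut
```

On two well-separated score clusters of unequal sizes (say 30 and 10), the selected cutoff lands between the clusters, whereas the median would fall inside the larger cluster.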
2.
Takahiro Hasegawa, Saori Misawa, Shintaro Nakagawa, Shinichi Tanaka, Takanori Tanase, Hiroyuki Ugai, Akira Wakana, Yasuhide Yodo, Satoru Tsuchiya, Hideki Suganami, Pharmaceutical Statistics 2020, 19(4):436-453
Many clinical research studies evaluate a time-to-event outcome, illustrate survival functions, and conventionally report estimated hazard ratios to express the magnitude of the treatment effect when comparing groups. However, the hazard ratio may not be straightforward to interpret clinically and statistically when the proportional hazards assumption is invalid. In some recent papers published in clinical journals, the use of the restricted mean survival time (RMST), or τ-year mean survival time, is discussed as one of the alternative summary measures for the time-to-event outcome. The RMST is defined as the expected value of the time to event limited to a specific time point, corresponding to the area under the survival curve up to that time point. This article summarizes the information necessary to conduct statistical analysis using the RMST, including the definition and statistical properties of the RMST, adjusted analysis methods, sample size calculation, the information fraction for the RMST difference, and clinical and statistical meaning and interpretation. Additionally, we discuss how to set the specific time point defining the RMST from two main points of view. We also provide SAS code developed to determine the sample size required to detect an expected RMST difference with appropriate power and to reconstruct individual survival data to estimate an RMST reference value from a reported survival curve.
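Since the RMST is the area under the survival curve up to the truncation point, it can be computed directly from a Kaplan-Meier-type step function. The sketch below is illustrative only (the paper supplies SAS code; this Python helper and its name are assumptions):

```python
def rmst(times, surv, tau):
    """Restricted mean survival time: the area under a right-continuous
    step survival curve up to tau. `times` are the sorted drop times and
    surv[i] is the curve's value from times[i] onward; S(t) = 1 before
    the first drop."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in zip(times, surv):
        if t >= tau:
            break
        area += prev_s * (t - prev_t)  # rectangle up to the next drop
        prev_t, prev_s = t, s
    return area + prev_s * (tau - prev_t)  # final rectangle up to tau
```

For example, with drops to 0.8, 0.5, and 0.2 at times 1, 2, and 3, the RMST at τ = 4 is 1 + 0.8 + 0.5 + 0.2 = 2.5.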
3.
Comparison of the restricted mean survival time with the hazard ratio in superiority trials with a time-to-event end point
With the emergence of novel therapies exhibiting distinct mechanisms of action compared with traditional treatments, departure from the proportional hazards (PH) assumption in clinical trials with a time-to-event end point is increasingly common. In these situations, the hazard ratio may not be a valid statistical measure of the treatment effect, and the log-rank test may no longer be the most powerful statistical test. The restricted mean survival time (RMST) is an alternative robust and clinically interpretable summary measure that does not rely on the PH assumption. We conduct extensive simulations to evaluate the performance and operating characteristics of RMST-based inference against hazard ratio-based inference, under various scenarios and design parameter setups. The log-rank test is generally a powerful test when there is evident separation favoring 1 treatment arm at most time points across the Kaplan-Meier survival curves, and the performance of the RMST test is similar. Under non-PH scenarios where late separation of the survival curves is observed, the RMST-based test performs better than the log-rank test when the truncation time is reasonably close to the tail of the observed curves. Furthermore, when a flat survival tail (or low event rate) in the experimental arm is expected, selecting the minimum of the maximum observed event times as the truncation time point for the RMST is not recommended. In addition, we recommend including an analysis based on the RMST curve over the truncation time in clinical settings where substantial departure from the PH assumption is suspected.
4.
The present work demonstrates an application of the random effects model for analyzing birth intervals that are clustered into geographical regions. Observations from the same cluster are assumed to be correlated because they usually share certain unobserved characteristics. Ignoring the correlations among the observations may lead to incorrect standard errors of the estimates of the parameters of interest. Besides comparing Cox's proportional hazards model with the random effects model for analyzing geographically clustered time-to-event data, this paper also reports important demographic and socioeconomic factors that may affect the length of birth intervals of Bangladeshi women.
5.
6.
The increase in the variance of the estimate of treatment effect that results from omitting a dichotomous or continuous covariate is quantified as a function of censoring. The efficiency of not adjusting for a covariate is measured by the ratio of the variances obtained with and without adjustment for the covariate. The variance is derived using the Weibull proportional hazards model. Under random censoring, the efficiency of not adjusting for a continuous covariate is an increasing function of the percentage of censored observations.
7.
Daniel O. Scharfstein, Anastasios A. Tsiatis, Peter B. Gilbert, Lifetime Data Analysis 1998, 4(4):355-391
The generalized odds-rate class of regression models for time-to-event data is indexed by a non-negative constant ρ and assumes that g_ρ(S(t|Z)) = α(t) + β′Z, where g_ρ(s) = log(ρ^{-1}(s^{-ρ} − 1)) for ρ > 0, g_0(s) = log(−log s), S(t|Z) is the survival function of the time to event for an individual with q×1 covariate vector Z, β is a q×1 vector of unknown regression parameters, and α(t) is some arbitrary increasing function of t. When ρ = 0, this model is equivalent to the proportional hazards model, and when ρ = 1, it reduces to the proportional odds model. In the presence of right censoring, we construct estimators for β and exp(α(t)) and show that they are consistent and asymptotically normal. In addition, we show that the estimator for β is semiparametric efficient in the sense that it attains the semiparametric variance bound.
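The link function of this family is easy to evaluate directly. A small sketch (hypothetical helper name) that also illustrates the proportional hazards limit as ρ → 0:

```python
import math

def g_rho(s, rho):
    """Link of the generalized odds-rate family:
    g_rho(s) = log((s**(-rho) - 1)/rho) for rho > 0,
    g_0(s)   = log(-log(s))  (the proportional hazards limit)."""
    if rho == 0.0:
        return math.log(-math.log(s))
    return math.log((s ** (-rho) - 1.0) / rho)
```

At ρ = 1 this gives log((1 − s)/s), the logit of the failure probability 1 − s (the proportional odds case), while tiny ρ approaches log(−log s).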
8.
The Cox proportional hazards regression model has been widely used to estimate the effect of a prognostic factor on a time-to-event outcome. In a survey of survival analyses in cancer journals, it was found that only 5% of studies using the Cox proportional hazards model attempted to verify the underlying assumption. Usually, an estimate of the treatment effect from fitting a Cox model is reported without validation of the proportionality assumption, and it is not clear how such an estimate should be interpreted if the proportionality assumption is violated. In this article, we show that the estimate of the treatment effect from a Cox regression model can be interpreted as a weighted average of the log-scaled hazard ratio over the duration of the study. A hypothetical example is used to explain the weights.
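The "weighted average" interpretation can be illustrated with a toy calculation. The actual weights in the article depend on the at-risk process; the events-proportional weights below are a simplified assumption for illustration only:

```python
import math

def avg_log_hr(periods):
    """Events-weighted average of period-specific log hazard ratios.
    `periods` is a list of (hazard_ratio, event_count) pairs. This is a
    rough stand-in for the weighting described in the paper; the exact
    Cox weights also involve the at-risk process."""
    total_events = sum(d for _, d in periods)
    return sum(d * math.log(hr) for hr, d in periods) / total_events
```

With a hazard ratio of 0.5 over a period contributing 40 events and 1.0 over a period contributing 60 events, the averaged log hazard ratio is 0.4·log(0.5) ≈ −0.277, i.e. an overall "hazard ratio" of about 0.76 that holds in neither period.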
9.
Weighted log-rank test for time-to-event data in immunotherapy trials with random delayed treatment effect and cure rate
A cancer clinical trial with an immunotherapy often has 2 special features: patients may potentially be cured of the cancer, and the immunotherapy may start to take clinical effect only after a certain delay time. Existing testing methods may be inadequate for immunotherapy clinical trials because they do not appropriately take the 2 features into consideration at the same time, and hence have low power to detect the true treatment effect. In this paper, we propose a piece-wise proportional hazards cure rate model with a random delay time to fit the data, and a new weighted log-rank test to detect the treatment effect of an immunotherapy over a chemotherapy control. We show that the proposed weight is nearly optimal under mild conditions. Our simulation study shows a substantial gain of power in the proposed test over the existing tests and robustness of the test with a misspecified weight. We also introduce a sample size calculation formula to design immunotherapy clinical trials using the proposed weighted log-rank test.
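A generic two-sample weighted log-rank statistic makes the role of the weight explicit. The sketch below is a hypothetical implementation, not the authors' nearly optimal weight; it simply lets the user down-weight the pre-delay window:

```python
import math

def weighted_logrank(times, events, groups, weight):
    """Two-sample weighted log-rank statistic (groups coded 0/1).
    `weight` is a callable t -> w(t); e.g. lambda t: float(t >= t0)
    focuses power on the post-delay window of an immunotherapy trial.
    Returns the standardized Z statistic."""
    data = sorted(zip(times, events, groups))
    n = len(data)
    n_risk = n                                      # at risk overall
    n_risk1 = sum(1 for _, _, g in data if g == 1)  # at risk in group 1
    num = var = 0.0
    i = 0
    while i < n:
        t = data[i][0]
        d = d1 = 0          # deaths at time t, overall and in group 1
        j = i
        while j < n and data[j][0] == t:
            if data[j][1] == 1:
                d += 1
                d1 += data[j][2]
            j += 1
        if d > 0 and n_risk > 1:
            p1 = n_risk1 / n_risk
            e1 = d * p1      # expected group-1 deaths under H0
            v1 = d * p1 * (1 - p1) * (n_risk - d) / (n_risk - 1)
            w = weight(t)
            num += w * (d1 - e1)
            var += w * w * v1
        # remove everyone (events and censored) observed at time t
        while i < j:
            n_risk -= 1
            n_risk1 -= data[i][2]
            i += 1
    return num / math.sqrt(var) if var > 0 else 0.0
```

With all early events in the control arm and all late events in the treatment arm, even the unweighted statistic (weight ≡ 1) is strongly negative; a weight of the form t ≥ t0 concentrates the comparison on the post-delay window.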
10.
We consider the problem of hypothesis testing under a logistic model with two dichotomous independent variables. In particular, we consider the case in which the coefficients β1 and β2 of these variables are known on an a priori basis not to be of opposite sign. For this situation, we show that there exists a simple nonparametric alternative to the likelihood ratio test for testing H0: β1 = β2 = 0 vs. H1: at least one βi ≠ 0. We find the asymptotic relative efficiency of this test and show that it exceeds 0.90 under a wide range of conditions. We also give an example.
11.
Pang Du, Lifetime Data Analysis 2009, 15(2):256-277
Recurrent event data arise in many biomedical and engineering studies when failure events can occur repeatedly over time for each study subject. In this article, we are interested in nonparametric estimation of the hazard function for gap times. A penalized likelihood model is proposed to estimate the hazard as a function of both gap time and a covariate. A method for smoothing parameter selection is developed from subject-wise cross-validation. Confidence intervals for the hazard function are derived using the Bayes model of the penalized likelihood. An eigenvalue analysis establishes the asymptotic convergence rates of the relevant estimates. Empirical studies are performed to evaluate various aspects of the method. The proposed technique is demonstrated through an application to the well-known bladder tumor data.
12.
In the software testing process, the nature of the failure data is affected by many factors, such as the testing environment, testing strategy, and resource allocation. These factors are unlikely to all be kept stable during the entire process of software testing. As a result, the statistical structure of the failure data is likely to experience major changes. Recently, some useful nonhomogeneous Poisson process (NHPP) models with a change-point have been proposed. However, in many realistic situations, whether a change-point exists is unknown; furthermore, some real data seem to have two or more change-points. In this article, we propose test statistics to test for the existence of change-point(s). Experimental results on real data show that our tests perform well.
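The flavor of such a change-point test can be sketched with a simplified stand-in: a likelihood ratio statistic for a single rate change in a sequence of Poisson counts (per-interval failure counts), rather than the NHPP models treated in the article. Function name and setup are illustrative assumptions.

```python
import math

def poisson_changepoint_lrt(counts):
    """LRT statistic for a single change-point in a sequence of Poisson
    counts: max over split points k of 2*[ll(two rates) - ll(one rate)].
    A simplified stand-in for the NHPP change-point tests in the paper."""
    n = len(counts)
    total = sum(counts)

    def ll(s, m):
        # Poisson log-likelihood at the MLE rate s/m, up to a constant
        return s * math.log(s / m) if s > 0 else 0.0

    best = 0.0
    for k in range(1, n):
        s1 = sum(counts[:k])
        s2 = total - s1
        stat = 2.0 * (ll(s1, k) + ll(s2, n - k) - ll(total, n))
        best = max(best, stat)
    return best
```

A flat count sequence yields a statistic near zero, while an abrupt jump in the failure rate yields a large value; in practice the statistic would be compared against its (non-standard) null distribution.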
13.
In this paper, we investigate testing for serial correlation in a linear model with validation data. We apply the empirical likelihood method to construct the test statistic and derive its asymptotic distribution under the null hypothesis. Simulation results show that our method performs well in both size and power at finite sample sizes.
14.
Multivariate failure time data arise when data consist of clusters in which the failure times may be dependent. A popular approach to such data is the marginal proportional hazards model with estimation under the working independence assumption. In this paper, we consider the Clayton–Oakes model with marginal proportional hazards and use the full model structure to improve on efficiency compared with the independence analysis. We derive a likelihood based estimating equation for the regression parameters as well as for the correlation parameter of the model. We give the large sample properties of the estimators arising from this estimating equation. Finally, we investigate the small sample properties of the estimators through Monte Carlo simulations.
15.
Husam Awni Bayoud, Journal of Applied Statistics 2016, 43(7):1322-1334
The problem of testing the similarity of two normal populations is reconsidered in this article from a nonclassical point of view. We introduce a test statistic based on the maximum likelihood estimate of Weitzman's overlapping coefficient. Simulated critical points are provided for the proposed test for various sample sizes and significance levels. Statistical powers of the proposed test are computed via simulation studies and compared with those of existing tests. Furthermore, the type I error robustness of the proposed and existing tests is studied via simulation when the underlying distributions are non-normal. Two data sets are analyzed for illustration purposes. Finally, the proposed test is implemented to assess the bioequivalence of two drug formulations.
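Weitzman's overlapping coefficient, the quantity whose MLE drives the proposed test, is the integral of the pointwise minimum of the two densities. A direct numeric sketch for two normals (hypothetical helper names; the paper works with the MLE-based estimate, not this plug-in integral):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def ovl_normal(mu1, s1, mu2, s2, grid=20000):
    """Weitzman's overlapping coefficient between two normal densities,
    by midpoint integration of min(f, g) over a wide grid."""
    lo = min(mu1 - 8 * s1, mu2 - 8 * s2)
    hi = max(mu1 + 8 * s1, mu2 + 8 * s2)
    h = (hi - lo) / grid
    total = 0.0
    for i in range(grid):
        x = lo + (i + 0.5) * h
        total += min(normal_pdf(x, mu1, s1), normal_pdf(x, mu2, s2)) * h
    return total
```

For equal variances the closed form is 2Φ(−|μ1 − μ2|/(2σ)); identical populations give OVL = 1, and smaller values indicate less similar populations.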
16.
This paper establishes a nonparametric estimator for the treatment effect on censored bivariate data under univariate censoring. The proposed estimator is based on Lin and Ying's (1993) nonparametric bivariate survival function estimator, which is itself a generalized version of Park and Park's (1995) quantile estimator. A Bahadur-type representation of the quantile functions is obtained from the marginal survival distribution estimator of the Lin and Ying model. The asymptotic properties of the estimator are established, and simulation studies are also presented.
17.
Journal of Statistical Computation and Simulation 2012, 82(9):1686-1696
Paired binary data arise naturally when paired body parts are investigated in clinical trials. One of the widely used models for dealing with this kind of data is the equal correlation coefficients model. Before using this model, it is necessary to test whether the correlation coefficients in each group are actually equal. In this paper, three test statistics (the likelihood ratio test, a Wald-type test, and the score test) are derived for this purpose. Simulation results show that the score test statistic maintains the type I error rate and has satisfactory power, and it is therefore recommended among the three methods. The likelihood ratio test is overly conservative in most cases, and the Wald-type statistic is not robust with respect to the empirical type I error rate. Three real examples, including a multi-centre Phase II double-blind randomized placebo-controlled trial, are given to illustrate the three proposed test statistics.
18.
In biomedical studies, interest often focuses on the relationship between patients' characteristics or risk factors and both the quality of life and the survival time of the subjects under study. In this paper, we propose simultaneous modelling of quality of life and survival time using the observed covariates. Moreover, random effects are introduced into the simultaneous models to account for dependence between quality of life and survival time due to unobserved factors. EM algorithms are used to derive point estimates for the parameters in the proposed model, and the profile likelihood function is used to estimate their variances. The asymptotic properties of our proposed estimators are established. Finally, simulation studies are conducted to examine the finite-sample properties of the proposed estimators, and a liver transplantation data set is analyzed to illustrate our approach.
19.
In this study, we propose a unified semiparametric approach to estimate various indices of treatment effect under the density ratio model, which connects two density functions by an exponential tilt. For each index, we construct two estimating functions based on the model and apply the generalized method of moments to improve the estimates. The estimating functions are allowed to be non-smooth with respect to the parameters, which makes the proposed method more flexible. We establish the asymptotic properties of the proposed estimators and illustrate the application with several simulations and two real data sets.
20.
Kang FangYuan, Communications in Statistics - Theory and Methods 2018, 47(8):1901-1912
In this article, an additive rate model is proposed for clustered recurrent event data with a terminal event. The subjects are clustered by some shared property, and for clustered subjects the recurrent event process is precluded by death. An estimating equation is developed for the model parameters and the baseline rate function, and the asymptotic properties of the resulting estimators are established. In addition, a goodness-of-fit test is presented to assess the adequacy of the model. The finite-sample behavior of the proposed estimators is evaluated through simulation studies, and an application to bladder cancer data is presented for illustration.