Similar Documents
20 similar documents found (search time: 15 ms)
1.

There have been many advances in statistical methodology for the analysis of recurrent event data in recent years. Multiplicative semiparametric rate-based models are widely used in clinical trials, as are more general partially conditional rate-based models involving event-based stratification. The partially conditional model provides protection against extra-Poisson variation as well as event-dependent censoring, but conditioning on outcomes post-randomization can induce confounding and compromise causal inference. The purpose of this article is to examine the consequences of model misspecification in semiparametric marginal and partially conditional rate-based analysis through omission of prognostic variables. We do so using estimating function theory and empirical studies.


2.
We propose a profile conditional likelihood approach to handle missing covariates in the general semiparametric transformation regression model. The method estimates the marginal survival function by the Kaplan-Meier estimator, and then estimates the parameters of the survival model and the covariate distribution from a conditional likelihood, substituting the Kaplan-Meier estimator for the marginal survival function in the conditional likelihood. This method is simpler than full maximum likelihood approaches, and yields a consistent and asymptotically normal estimator of the regression parameter when censoring is independent of the covariates. The estimator demonstrates very high relative efficiency in simulations. Compared with complete-case analysis, the proposed estimator can be more efficient when the missing data are missing completely at random and can correct bias when the missing data are missing at random. The potential application of the proposed method to the generalized probit model with missing continuous covariates is also outlined.
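The plug-in step above relies on the standard product-limit (Kaplan-Meier) estimator of the marginal survival function. The following stand-alone sketch is illustrative only, not the authors' code; variable names are invented for the example.

```python
# Minimal Kaplan-Meier (product-limit) estimator sketch; illustrative only.

def kaplan_meier(times, events):
    """Return (time, S(time)) pairs at event times.

    events[i] is 1 for a failure, 0 for a censored observation; censored
    observations at a tied time are treated as still at risk at that time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = 0   # failures at time t
        m = 0   # total observations leaving the risk set at t
        while i < len(data) and data[i][0] == t:
            d += data[i][1]
            m += 1
            i += 1
        if d > 0:
            surv *= 1.0 - d / n_at_risk   # product-limit step
            out.append((t, surv))
        n_at_risk -= m
    return out

# Example: failures at t=1 and t=2, one censored observation at t=1.5.
print(kaplan_meier([1.0, 1.5, 2.0], [1, 0, 1]))
```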

3.

Pairwise likelihood is a limited information estimation method that has also been used for estimating the parameters of latent variable and structural equation models. Pairwise likelihood is a special case of composite likelihood methods that uses lower-order conditional or marginal log-likelihoods instead of the full log-likelihood. The composite likelihood to be maximized is a weighted sum of marginal or conditional log-likelihoods. Weighting has been proposed for increasing efficiency, but the choice of weights is not straightforward in most applications. Furthermore, the importance of leaving out higher-order scores to avoid duplicating lower-order marginal information has been pointed out. In this paper, we approach the problem of weighting from a sampling perspective. More specifically, we propose a sampling method for selecting pairs based on their contribution to the total variance from all pairs. The sampling approach does not aim to increase efficiency but to decrease the estimation time, especially in models with a large number of observed categorical variables. We demonstrate the performance of the proposed methodology using simulated examples and a real application.


4.
A major objective in many clinical trials is to compare several competing treatments in a randomized experiment. In such studies, it is often necessary to adjust for some other important factor that affects the event rates in the treatment groups. When this factor is discrete, one usual approach uses a stratified version of the logrank test. In this article, we consider the problem that arises when the factor giving rise to the strata is missing at random for some of the study subjects. This article proposes a modified version of the stratified logrank test, in which the unobserved stratum indicators are replaced by an estimate of their conditional expectation given available auxiliary covariate measurements. The null asymptotic distribution of the proposed test statistic is investigated. Simulation experiments are also conducted to examine the finite-sample behavior of this test under both null and alternative hypotheses. Simulations indicate that the proposed test performs well, even under some moderate deviations from the missing-at-random assumption.

5.
Summary. Consider a pair of random variables, both subject to random right censoring. New estimators for the bivariate and marginal distributions of these variables are proposed. The estimators of the marginal distributions are not the marginals of the corresponding estimator of the bivariate distribution. Both estimators require estimation of the conditional distribution when the conditioning variable is subject to censoring. Such a method of estimation is proposed. The weak convergence of the estimators proposed is obtained. A small simulation study suggests that the proposed estimators perform well relative to the Kaplan–Meier estimator for the marginal distribution and to the estimators of Pruitt and van der Laan for the bivariate distribution, respectively. The use of the estimators in practice is illustrated by the analysis of a data set.

6.
Stratified randomization based on the baseline value of the primary analysis variable is common in clinical trial design. We illustrate from a theoretical viewpoint the advantage of such a stratified randomization to achieve balance of the baseline covariate. We also conclude that the estimator for the treatment effect is consistent when including both the continuous baseline covariate and the stratification factor derived from the baseline covariate. In addition, the analysis of covariance model including both the continuous covariate and the stratification factor is asymptotically no less efficient than including either only the continuous baseline value or only the stratification factor. We recommend that the continuous baseline covariate should generally be included in the analysis model. The corresponding stratification factor may also be included in the analysis model if one is not confident that the relationship between the baseline covariate and the response variable is linear. In spite of the above recommendation, one should always carefully examine relevant historical data to pre-specify the most appropriate analysis model for a prospective study.

7.
Propensity score methods are increasingly used in medical literature to estimate treatment effect using data from observational studies. Despite many papers on propensity score analysis, few have focused on the analysis of survival data. Even within the framework of the popular proportional hazard model, the choice among marginal, stratified or adjusted models remains unclear. A Monte Carlo simulation study was used to compare the performance of several survival models to estimate both marginal and conditional treatment effects. The impact of accounting or not for pairing when analysing propensity‐score‐matched survival data was assessed. In addition, the influence of unmeasured confounders was investigated. After matching on the propensity score, both marginal and conditional treatment effects could be reliably estimated. Ignoring the paired structure of the data led to an increased test size due to an overestimated variance of the treatment effect. Among the various survival models considered, stratified models systematically showed poorer performance. Omitting a covariate in the propensity score model led to a biased estimation of treatment effect, but replacement of the unmeasured confounder by a correlated one allowed a marked decrease in this bias. Our study showed that propensity scores applied to survival data can lead to unbiased estimation of both marginal and conditional treatment effect, when marginal and adjusted Cox models are used. In all cases, it is necessary to account for pairing when analysing propensity‐score‐matched data, using a robust estimator of the variance. Copyright © 2012 John Wiley & Sons, Ltd.

8.
Sequential regression multiple imputation has emerged as a popular approach for handling incomplete data with complex features. In this approach, imputations for each missing variable are produced based on a regression model using other variables as predictors in a cyclic manner. A normality assumption is frequently imposed for the error distributions in the conditional regression models for continuous variables, even though it rarely holds in real scenarios. We use a simulation study to investigate the performance of several sequential regression imputation methods when the error distribution is flat or heavy-tailed. The methods evaluated include sequential normal imputation and several of its extensions that adjust for non-normal error terms. The results show that all methods perform well for estimating the marginal mean and proportion, as well as the regression coefficient, when the error distribution is flat or moderately heavy-tailed. When the error distribution is strongly heavy-tailed, all methods retain their good performance for the mean, and the adjusted methods perform robustly for the proportion; but all methods can perform poorly for the regression coefficient because they cannot accommodate the extreme values well. We caution against the mechanical use of sequential regression imputation without model checking and diagnostics.
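One cycle of the sequential (chained-equations) approach for a single continuous variable can be sketched as follows; the normal-error simple linear regression, the function name, and the toy data are all illustrative, and a real implementation would cycle over every incomplete variable and iterate to convergence.

```python
import random

# Toy sketch of one imputation step: fit a normal-error regression of y on x
# using the observed cases, then draw imputations for the missing y values
# from the fitted conditional distribution. Illustrative only.

def impute_once(x, y, rng):
    """y entries that are None are imputed from x; observed entries are kept."""
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    n = len(obs)
    mx = sum(xi for xi, _ in obs) / n
    my = sum(yi for _, yi in obs) / n
    sxx = sum((xi - mx) ** 2 for xi, _ in obs)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in obs)
    b = sxy / sxx                      # least-squares slope
    a = my - b * mx                    # least-squares intercept
    resid_var = sum((yi - a - b * xi) ** 2 for xi, yi in obs) / max(n - 2, 1)
    sd = resid_var ** 0.5
    # Draw each missing value from the fitted normal-error model.
    return [yi if yi is not None else rng.gauss(a + b * xi, sd)
            for xi, yi in zip(x, y)]

rng = random.Random(0)
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 1.1, None, 3.0, None]
print(impute_once(x, y, rng))
```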

9.
In longitudinal clinical trials, when outcome variables at later time points are only defined for patients who survive to those times, the evaluation of the causal effect of treatment is complicated. In this paper, we describe an approach that can be used to obtain the causal effect of three treatment arms with ordinal outcomes in the presence of death using a principal stratification approach. We introduce a set of flexible assumptions to identify the causal effect and implement a sensitivity analysis for non-identifiable assumptions which we parameterize parsimoniously. Methods are illustrated on quality of life data from a recent colorectal cancer clinical trial.

10.
Summary.  Sparse clustered data arise in finely stratified genetic and epidemiologic studies and pose at least two challenges to inference. First, it is difficult to model and interpret the full joint probability of dependent discrete data, which limits the utility of full likelihood methods. Second, standard methods for clustered data, such as pairwise likelihood and the generalized estimating function approach, are unsuitable when the data are sparse owing to the presence of many nuisance parameters. We present a composite conditional likelihood for use with sparse clustered data that provides valid inferences about covariate effects on both the marginal response probabilities and the intracluster pairwise association. Our primary focus is on sparse clustered binary data, in which case the method proposed utilizes doubly discordant quadruplets drawn from each stratum to conduct inference about the intracluster pairwise odds ratios.

11.
In this work, a generalization of the Goodman association model to the case of q (q > 2) categorical variables is introduced, based on the idea of marginal modelling discussed by Glonek and McCullagh; the differences between the proposed generalization and two models previously introduced by Becker and Colombi are discussed. The Becker generalization is not a marginal model because it does not imply logit models for the marginal probabilities and because it models the association through the conditional approach. The Colombi model is only partially a marginal model because, although it uses simple logit models for the univariate marginal probabilities, it also models the association through the conditional approach. It is also shown that maximum likelihood estimation of the parameters of the new model is feasible; to compute the maximum likelihood estimates, an algorithm is proposed that is a numerically convenient compromise between the constrained optimization approach of Lang and the straightforward use of the Fisher scoring algorithm suggested by Glonek and McCullagh. Finally, the proposed model is used to analyze a data set concerning work accidents involving workers at some Italian firms during the years 1994–1996.

12.

We present a new estimator of the restricted mean survival time in randomized trials where there is right censoring that may depend on treatment and baseline variables. The proposed estimator leverages prognostic baseline variables to obtain equal or better asymptotic precision compared to traditional estimators. Under regularity conditions and random censoring within strata of treatment and baseline variables, the proposed estimator has the following features: (i) it is interpretable under violations of the proportional hazards assumption; (ii) it is consistent and at least as precise as the Kaplan–Meier and inverse probability weighted estimators, under identifiability conditions; (iii) it remains consistent under violations of independent censoring (unlike the Kaplan–Meier estimator) when either the censoring or survival distributions, conditional on covariates, are estimated consistently; and (iv) it achieves the nonparametric efficiency bound when both of these distributions are consistently estimated. We illustrate the performance of our method using simulations based on resampling data from a completed, phase 3 randomized clinical trial of a new surgical treatment for stroke; the proposed estimator achieves a 12% gain in relative efficiency compared to the Kaplan–Meier estimator. The proposed estimator has potential advantages over existing approaches for randomized trials with time-to-event outcomes, since existing methods either rely on model assumptions that are untenable in many applications, or lack some of the efficiency and consistency properties (i)–(iv). We focus on estimation of the restricted mean survival time, but our methods may be adapted to estimate any treatment effect measure defined as a smooth contrast between the survival curves for each study arm. We provide R code to implement the estimator.
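Whatever estimator produces the survival curve, the restricted mean survival time itself is simply the area under that curve up to a truncation time tau. The sketch below assumes a precomputed list of (event time, survival after that time) steps; it is an illustration of the estimand, not the covariate-leveraging estimator proposed above.

```python
# Restricted mean survival time as the area under a survival step function
# up to truncation time tau. Illustrative sketch; names are invented.

def rmst(km_steps, tau):
    """km_steps: (event_time, survival_after_time) pairs, sorted by time.

    Survival is taken to be 1 before the first event time.
    """
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in km_steps:
        if t >= tau:
            break
        area += prev_s * (t - prev_t)   # rectangle under the current step
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)     # final partial step up to tau
    return area

# Survival drops to 0.8 at t=1 and 0.5 at t=3; up to tau=4 the area is
# 1*1.0 + 2*0.8 + 1*0.5 = 3.1 years of restricted mean survival.
print(rmst([(1.0, 0.8), (3.0, 0.5)], 4.0))
```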


13.
Confirmatory randomized clinical trials with a stratified design may have ordinal response outcomes, i.e., either ordered categories or continuous determinations that are not compatible with an interval scale. Also, multiple endpoints are often collected when a single endpoint does not represent the overall efficacy of the treatment. In addition, random baseline imbalances and missing values can add another layer of difficulty in the analysis plan. Therefore, the development of an approach that provides a consolidated strategy to all issues collectively is essential. For a real case example from a clinical trial comparing a test treatment and a control for pain management in patients with osteoarthritis, which has all the aforementioned issues, multivariate Mann‐Whitney estimators with stratification adjustment are applicable to the strictly ordinal responses with a stratified design. Randomization-based nonparametric analysis of covariance is applied to account for possible baseline imbalances. Several approaches that handle missing values are provided. A global test followed by a closed testing procedure controls the familywise error rate in the strong sense for the analysis of multiple endpoints. Four outcomes indicating joint pain, stiffness, and functional status were analyzed collectively and also individually through the procedures. Treatment efficacy was observed in the combined endpoint as well as in the individual endpoints. The proposed approach is effective in addressing the aforementioned problems simultaneously and straightforward to implement.

14.
Dependence in outcome variables may pose formidable difficulty in analyzing data in longitudinal studies. In the past, most studies attempted to address this problem using marginal models. However, using marginal models alone, it is difficult to specify the measures of dependence in outcomes due to association between outcomes as well as between outcomes and explanatory variables. In this paper, a generalized approach is demonstrated using both conditional and marginal models. This model uses link functions to test for dependence in outcome variables. The estimation and test procedures are illustrated with an application to the mobility index data from the Health and Retirement Survey, and simulations are also performed for correlated binary data generated from bivariate Bernoulli distributions. The results indicate the usefulness of the proposed method.
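Correlated binary pairs of the kind used in such simulations can be drawn directly from a bivariate Bernoulli distribution specified by its four cell probabilities; the sketch below is illustrative, not the paper's simulation code, and the cell probabilities are invented for the example.

```python
import random

# Draw a correlated binary pair (Y1, Y2) from a bivariate Bernoulli
# distribution given its four cell probabilities (which must sum to 1).
# The odds ratio of the resulting 2x2 table measures the dependence.

def draw_pair(p11, p10, p01, p00, rng):
    u = rng.random()
    if u < p11:
        return (1, 1)
    if u < p11 + p10:
        return (1, 0)
    if u < p11 + p10 + p01:
        return (0, 1)
    return (0, 0)

rng = random.Random(1)
# Strong positive dependence: mass concentrated on (1,1) and (0,0).
pairs = [draw_pair(0.4, 0.1, 0.1, 0.4, rng) for _ in range(10000)]
p11_hat = sum(1 for a, b in pairs if a == 1 and b == 1) / len(pairs)
print(p11_hat)  # should be near 0.4
```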

15.
Chronic disease processes often feature transient recurrent adverse clinical events. Treatment comparisons in clinical trials of such disorders must be based on valid and efficient methods of analysis. We discuss robust strategies for testing treatment effects with recurrent events using methods based on marginal rate functions, partially conditional rate functions, and methods based on marginal failure time models. While all three approaches lead to valid tests of the null hypothesis when robust variance estimates are used, they differ in power. Moreover, some approaches lead to estimators of treatment effect which are more easily interpreted than others. To investigate this, we derive the limiting value of estimators of treatment effect from marginal failure time models and illustrate their dependence on features of the underlying point process, as well as the censoring mechanism. Through simulation, we show that methods based on marginal failure time distributions are sensitive to treatment effects delaying the occurrence of the very first recurrences. Methods based on marginal or partially conditional rate functions perform well in situations where treatment effects persist, or in settings where the aim is to summarize long-term data on efficacy.
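A marginal rate-based summary of recurrent events can be sketched as a Nelson-Aalen-type mean cumulative function: at each event time, the number of events is divided by the number of subjects still under observation and added to a running total. The toy implementation below is an illustration of that estimand only, with invented names and data.

```python
# Mean cumulative function (expected cumulative events per subject over time)
# for recurrent-event data. Illustrative sketch; subjects censored at time c
# are counted as at risk at all event times t <= c.

def mean_cumulative_function(event_times_per_subject, censor_times):
    """event_times_per_subject: one list of event times per subject.
    censor_times: end of follow-up for each subject."""
    times = sorted({t for ts in event_times_per_subject for t in ts})
    mcf, out = 0.0, []
    for t in times:
        at_risk = sum(1 for c in censor_times if c >= t)
        d = sum(ts.count(t) for ts in event_times_per_subject)
        mcf += d / at_risk            # increment: events per subject at risk
        out.append((t, mcf))
    return out

# Two subjects followed to t=5; events at t=1,3 (subject 1) and t=2 (subject 2).
print(mean_cumulative_function([[1.0, 3.0], [2.0]], [5.0, 5.0]))
```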

16.
The stratified Cox model is commonly used for stratified clinical trials with time‐to‐event endpoints. The estimated log hazard ratio is approximately a weighted average of corresponding stratum‐specific Cox model estimates using inverse‐variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50-200 subjects per treatment), we propose an alternative approach in which stratum‐specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al. to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails development of a remarkably accurate plug‐in formula for the variance of RGLR‐based estimated log hazard ratios. We demonstrate using simulations that our proposed two‐step RGLR analysis delivers notably better results, through smaller estimation bias and mean squared error and larger power, than the stratified Cox model analysis when there is a treatment‐by‐stratum interaction, with similar performance when there is no interaction. Additionally, our method controls the type I error rate in small samples while the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
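The second step, combining stratum-specific log hazard ratio estimates with sample-size weights, can be sketched as a simple weighted average; in the approach above the stratum estimates and variances would come from the RGLR procedure, which is beyond this illustration, and all numbers below are invented.

```python
# Combine stratum-specific log hazard ratio estimates with sample-size
# weights; the variance formula assumes independent strata. Illustrative only.

def combine_strata(estimates, variances, sizes):
    """Return the sample-size-weighted average estimate and its variance."""
    total = sum(sizes)
    w = [n / total for n in sizes]                      # sample-size weights
    est = sum(wi * ei for wi, ei in zip(w, estimates))
    var = sum(wi ** 2 * vi for wi, vi in zip(w, variances))
    return est, var

# Two strata: log hazard ratios -0.5 and -0.1, with 100 and 50 subjects.
print(combine_strata([-0.5, -0.1], [0.04, 0.09], [100, 50]))
```

Inverse-variance weights (wi proportional to 1/vi) would replace the sample-size weights when a constant hazard ratio across strata is plausible, which is exactly the assumption the proposal above avoids.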

17.
The authors develop a Markov model for the analysis of longitudinal categorical data which facilitates modelling both marginal and conditional structures. A likelihood formulation is employed for inference, so the resulting estimators enjoy optimal properties such as efficiency and consistency, and remain consistent when data are missing at random. Simulation studies demonstrate that the proposed method performs well under a variety of situations. Application to data from a smoking prevention study illustrates the utility of the model and the interpretation of covariate effects. The Canadian Journal of Statistics © 2009 Statistical Society of Canada

18.
In this article, we propose a novel algorithm for sequential design of metamodels in random simulation, which combines the exploration capability of most one-shot space-filling designs with the exploitation feature of common sequential designs. The algorithm continuously maintains a balance between exploration and exploitation throughout the search process in a sequential and adaptive manner. The numerical results indicate that the proposed approach is superior to one of the existing well-known sequential designs in terms of both computational efficiency and the speed of generating efficient experimental designs.

19.
Non-proportional hazards (NPH) have been observed in many immuno-oncology clinical trials. Weighted log-rank tests (WLRT) with suitable weights can be used to improve the power of detecting the difference between survival curves in the presence of NPH. However, it is not easy to choose a proper WLRT in practice. A versatile max-combo test was proposed to achieve a balance of robustness and efficiency, and has received increasing attention recently. Survival trials often warrant interim analyses due to their high cost and long durations. The integration and implementation of max-combo tests in interim analyses often require extensive simulation studies. In this report, we propose a simulation-free approach for group sequential designs with the max-combo test in survival trials. The simulation results support that the proposed method can successfully control the type I error rate and offer excellent accuracy and flexibility in estimating sample sizes, with a light computational burden. Notably, our method displays strong robustness to various model misspecifications and has been implemented in an R package.
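A max-combo statistic takes the largest (in absolute value) of several Fleming-Harrington weighted logrank statistics, each sensitive to a different departure from proportional hazards. The sketch below computes FH(rho, gamma) statistics with the standard hypergeometric variance; it is illustrative, not the report's R package, and the group-sequential calibration is not shown.

```python
import math

# Fleming-Harrington(rho, gamma) weighted logrank Z statistic and a two-sided
# max-combo over several (rho, gamma) weights. Illustrative sketch only.

def fh_logrank_z(times, events, groups, rho, gamma):
    """events[i]: 1 = event, 0 = censored; groups[i] in {0, 1}."""
    data = sorted(zip(times, events, groups))
    n = len(data)                      # total number at risk
    n1 = sum(g for _, _, g in data)    # number at risk in group 1
    surv = 1.0                         # pooled KM left-limit S(t-)
    U = V = 0.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d = d1 = m = m1 = 0
        while i < len(data) and data[i][0] == t:
            _, e, g = data[i]
            d += e          # events at t
            d1 += e * g     # group-1 events at t
            m += 1          # subjects leaving the risk set at t
            m1 += g
            i += 1
        if d > 0 and n > 1:
            w = surv ** rho * (1.0 - surv) ** gamma
            U += w * (d1 - d * n1 / n)                                    # obs - exp
            V += w * w * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)  # hypergeom. var.
        if d > 0:
            surv *= 1.0 - d / n
        n -= m
        n1 -= m1
    return U / math.sqrt(V) if V > 0 else 0.0

def max_combo(times, events, groups, weights=((0, 0), (0, 1))):
    """Largest |Z| over the chosen FH weights: FH(0,0) is the plain logrank,
    FH(0,1) emphasizes late differences (delayed treatment effects)."""
    return max(abs(fh_logrank_z(times, events, groups, r, g)) for r, g in weights)

# Group 0 has early events (t=1,2), group 1 late events (t=3,4), no censoring.
print(max_combo([1.0, 2.0, 3.0, 4.0], [1, 1, 1, 1], [0, 0, 1, 1]))
```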

20.
Two‐stage designs are widely used to determine whether a clinical trial should be terminated early. In such trials, a maximum likelihood estimate is often adopted to describe the difference in efficacy between the experimental and reference treatments; however, this method is known to display conditional bias. To reduce such bias, a conditional mean‐adjusted estimator (CMAE) has been proposed, although the remaining bias may be nonnegligible when a trial is stopped for efficacy at the interim analysis. We propose a new estimator for adjusting the conditional bias of the treatment effect by extending the idea of the CMAE. This estimator is calculated by weighting the maximum likelihood estimate obtained at the interim analysis and the effect size prespecified when calculating the sample size. We evaluate the performance of the proposed estimator through analytical and simulation studies in various settings in which a trial is stopped for efficacy or futility at the interim analysis. We find that the conditional bias of the proposed estimator is smaller than that of the CMAE when the information time at the interim analysis is small. In addition, the mean‐squared error of the proposed estimator is also smaller than that of the CMAE. In conclusion, we recommend the use of the proposed estimator for trials that are terminated early for efficacy or futility.
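The weighting idea above can be sketched as a convex combination of the interim maximum likelihood estimate and the design-stage effect size. The weight function below (simply the information time) is a placeholder for illustration, not the weight derived in the paper.

```python
# Shrink the interim MLE toward the prespecified design effect size.
# The weight here (w = information time) is illustrative, not the paper's.

def adjusted_estimate(mle_interim, prespecified_effect, info_time):
    """info_time in (0, 1]: fraction of planned information at the interim.

    Small info_time -> the noisy interim MLE gets little weight; at full
    information the estimate equals the MLE.
    """
    w = info_time
    return w * mle_interim + (1 - w) * prespecified_effect

# Interim MLE 0.9, design effect size 0.5, interim at 40% information:
print(adjusted_estimate(0.9, 0.5, 0.4))  # 0.4*0.9 + 0.6*0.5, close to 0.66
```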


Copyright © Beijing Qinyun Technology Development Co., Ltd.  ICP License: 京ICP备09084417号