Similar Documents
 20 similar documents found.
1.
Abstract

We consider the problem of assessing the effects of a treatment on duration outcomes using data from a randomized evaluation with noncompliance. For such settings, we derive nonparametric sharp bounds for average and quantile treatment effects addressing three pervasive problems simultaneously: self-selection into the spell of interest, endogenous censoring of the duration outcome, and noncompliance with the assigned treatment. Ignoring any of these issues could yield biased estimates of the effects. Notably, the proposed bounds do not impose the independent censoring assumption—which is commonly used to address censoring but is likely to fail in important settings—or exclusion restrictions to address endogeneity of censoring and selection. Instead, they employ monotonicity and stochastic dominance assumptions. To illustrate the use of these bounds, we assess the effects of the Job Corps (JC) training program on its participants’ last complete employment spell duration. Our estimated bounds suggest that JC participation may increase the average duration of the last complete employment spell before week 208 after randomization by at least 5.6 log points (5.8%) for individuals who comply with their treatment assignment and experience a complete employment spell whether or not they enrolled in JC. The estimated quantile treatment effects suggest the impacts may be heterogeneous, and strengthen our conclusions based on the estimated average effects.

2.
The problem of bandwidth selection for kernel-based estimation of the distribution function (cdf) at a given point is considered. With an appropriate bandwidth, a kernel-based estimator (kdf) is known to outperform the empirical distribution function. However, such a bandwidth is unknown in practice. In pointwise estimation, the appropriate bandwidth depends on the point where the function is estimated. The existing smoothing methods use one common bandwidth to estimate the cdf. The accuracy of the resulting estimates varies substantially depending on the cdf and the point where it is estimated. We propose to select the bandwidth by minimizing a bootstrap estimator of the MSE of the kdf. The resulting estimator performs reliably, irrespective of where the cdf is estimated. It is shown to be consistent under i.i.d. as well as strongly mixing dependence assumptions. Two applications of the proposed estimator are shown in finance and seismology. We report a dataset on the S&P Nifty index values.
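A minimal sketch of the idea in this abstract, not the authors' code: for each candidate bandwidth, estimate the MSE of the kernel distribution function estimator at the point of interest by bootstrap resampling. Using the empirical cdf on the full sample as the reference target is an illustrative assumption here; the names `kdf` and `bootstrap_bandwidth` are hypothetical.

```python
import numpy as np
from math import erf, sqrt

def kdf(x, sample, h):
    # Gaussian-kernel estimate of F(x): the average of Phi((x - X_i) / h)
    u = (x - np.asarray(sample, float)) / h
    return float(np.mean([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in u]))

def bootstrap_bandwidth(sample, x, grid, B=200, seed=0):
    # For each candidate h, estimate the MSE of the kdf at x by resampling,
    # taking the empirical cdf on the full sample as the reference target
    # (an illustrative choice, not necessarily the authors' criterion).
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, float)
    target = float(np.mean(sample <= x))
    best_h, best_mse = None, float("inf")
    for h in grid:
        ests = [kdf(x, rng.choice(sample, size=sample.size, replace=True), h)
                for _ in range(B)]
        mse = float(np.mean((np.asarray(ests) - target) ** 2))
        if mse < best_mse:
            best_h, best_mse = h, mse
    return best_h

data = np.random.default_rng(1).normal(size=100)
h_star = bootstrap_bandwidth(data, x=0.0, grid=[0.1, 0.3, 0.5, 1.0])
```

Because the criterion is evaluated separately at each point `x`, the selected bandwidth can differ across points, which is the pointwise behavior the abstract emphasizes.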

3.
In many areas of application, especially life testing and reliability, it is often of interest to estimate an unknown cumulative distribution function (cdf). A simultaneous confidence band (SCB) of the cdf can be used to assess the statistical uncertainty of the estimated cdf over the entire range of the distribution. Cheng and Iles [1983. Confidence bands for cumulative distribution functions of continuous random variables. Technometrics 25 (1), 77–86] presented an approach to construct an SCB for the cdf of a continuous random variable. For the log-location-scale family of distributions, they gave explicit forms for the upper and lower boundaries of the SCB based on expected information. In this article, we extend the work of Cheng and Iles [1983] in several directions. We study SCBs based on local information, expected information, and estimated expected information for both the “cdf method” and the “quantile method.” We also study the effects of exceptional cases where a simple SCB does not exist. We describe calibration of the bands to provide exact coverage for complete data and Type II censoring and better approximate coverage for other kinds of censoring. We also discuss how to extend these procedures to regression analysis.

4.
The case-cohort study design is widely used to reduce cost when collecting expensive covariates in large cohort studies with survival or competing risks outcomes. A case-cohort study dataset consists of two parts: (a) a random sample and (b) all cases or failures from a specific cause of interest. Clinicians often assess covariate effects on competing risks outcomes. The proportional subdistribution hazards model directly evaluates the effect of a covariate on the cumulative incidence function under the non-covariate-dependent censoring assumption for the full cohort study. However, the non-covariate-dependent censoring assumption is often violated in many biomedical studies. In this article, we propose a proportional subdistribution hazards model for stratified case-cohort data with covariate-adjusted censoring weights. We further propose an efficient estimator when extra information from the other causes is available under case-cohort studies. The proposed estimators are shown to be consistent and asymptotically normal. Simulation studies show that (a) the proposed estimator is unbiased when the censoring distribution depends on covariates and (b) the proposed efficient estimator gains estimation efficiency when using extra information from the other causes. We analyze a bone marrow transplant dataset and a coronary heart disease dataset using the proposed method.

5.
This article studies a general joint model for longitudinal measurements and competing risks survival data. The model consists of a linear mixed effects sub-model for the longitudinal outcome, a proportional cause-specific hazards frailty sub-model for the competing risks survival data, and a regression sub-model for the variance–covariance matrix of the multivariate latent random effects based on a modified Cholesky decomposition. The model provides a useful approach to adjust for non-ignorable missing data due to dropout for the longitudinal outcome, enables analysis of the survival outcome with informative censoring and intermittently measured time-dependent covariates, and allows joint analysis of the longitudinal and survival outcomes. Unlike previously studied joint models, our model allows for heterogeneous random covariance matrices. It also offers a framework to assess the homogeneous covariance assumption of existing joint models. A Bayesian MCMC procedure is developed for parameter estimation and inference. Its performance and frequentist properties are investigated using simulations. A real data example is used to illustrate the usefulness of the approach.

6.
7.
Using data from the AIDS Link to Intravenous Experiences cohort study as an example, an informative censoring model was used to characterize the repeated hospitalization process of a group of patients. Under the informative censoring assumption, the estimators of the baseline rate function and the regression parameters were shown to be related to a latent variable. Hence, it becomes impractical to directly estimate the unknown quantities in the moments of the estimators for the bandwidth selection of a smoothing estimator and the construction of confidence intervals, which are based, respectively, on the asymptotic mean squared errors and the asymptotic distributions of the estimators. To overcome these difficulties, we develop a random weighted bootstrap procedure to select appropriate bandwidths and to construct approximate confidence intervals. From a practical point of view, our method is simple and fast to implement, and it is at least as accurate as other bootstrap methods. A Monte Carlo simulation shows that the proposed method performs well. An application of our procedure is also illustrated with a recurrent event sample of intravenous drug users receiving inpatient care over time.

8.

We present a new estimator of the restricted mean survival time in randomized trials where there is right censoring that may depend on treatment and baseline variables. The proposed estimator leverages prognostic baseline variables to obtain equal or better asymptotic precision compared to traditional estimators. Under regularity conditions and random censoring within strata of treatment and baseline variables, the proposed estimator has the following features: (i) it is interpretable under violations of the proportional hazards assumption; (ii) it is consistent and at least as precise as the Kaplan–Meier and inverse probability weighted estimators, under identifiability conditions; (iii) it remains consistent under violations of independent censoring (unlike the Kaplan–Meier estimator) when either the censoring or survival distributions, conditional on covariates, are estimated consistently; and (iv) it achieves the nonparametric efficiency bound when both of these distributions are consistently estimated. We illustrate the performance of our method using simulations based on resampling data from a completed, phase 3 randomized clinical trial of a new surgical treatment for stroke; the proposed estimator achieves a 12% gain in relative efficiency compared to the Kaplan–Meier estimator. The proposed estimator has potential advantages over existing approaches for randomized trials with time-to-event outcomes, since existing methods either rely on model assumptions that are untenable in many applications, or lack some of the efficiency and consistency properties (i)–(iv). We focus on estimation of the restricted mean survival time, but our methods may be adapted to estimate any treatment effect measure defined as a smooth contrast between the survival curves for each study arm. We provide R code to implement the estimator.

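For reference, the estimand in the abstract above can be illustrated with a small self-contained sketch: the restricted mean survival time up to a truncation time tau is the area under the survival curve, here computed from a basic Kaplan–Meier estimate. This is the traditional estimator the abstract compares against, not the proposed covariate-adjusted one; the helper name `km_rmst` is hypothetical.

```python
import numpy as np

def km_rmst(times, events, tau):
    # Kaplan-Meier curve, then the restricted mean survival time
    # RMST(tau) = integral of S(t) from 0 to tau (area under the step curve).
    times = np.asarray(times, float)
    events = np.asarray(events, int)   # 1 = event observed, 0 = censored
    order = np.argsort(times)
    t, e = times[order], events[order]
    step_t, step_s = [], []
    s, at_risk, i, n = 1.0, len(t), 0, len(t)
    while i < n:
        j, d = i, 0
        while j < n and t[j] == t[i]:  # collect ties at this time
            d += e[j]
            j += 1
        if d > 0:
            s *= 1.0 - d / at_risk     # KM multiplicative update
            step_t.append(t[i])
            step_s.append(s)
        at_risk -= j - i
        i = j
    # integrate the step function S(t) on [0, tau]
    rmst, prev_t, prev_s = 0.0, 0.0, 1.0
    for ti, si in zip(step_t, step_s):
        if ti >= tau:
            break
        rmst += prev_s * (ti - prev_t)
        prev_t, prev_s = ti, si
    return rmst + prev_s * (tau - prev_t)
```

With no censoring, `km_rmst` reduces to the mean of `min(T_i, tau)`, which is a useful sanity check.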

9.
We propose a semiparametric estimator for single‐index models with censored responses due to detection limits. In the presence of left censoring, the mean function cannot be identified without parametric distributional assumptions, but the quantile function is still identifiable at upper quantile levels. To avoid parametric distributional assumptions, we propose to fit censored quantile regression and combine information across quantile levels to estimate the unknown smooth link function and the index parameter. Under some regularity conditions, we show that the estimated link function achieves the non‐parametric optimal convergence rate, and the estimated index parameter is asymptotically normal. The simulation study shows that the proposed estimator is competitive with the omniscient least squares estimator based on the latent uncensored responses for data with normal errors, but much more efficient for heavy‐tailed data under light and moderate censoring. The practical value of the proposed method is demonstrated through the analysis of a human immunodeficiency virus antibody data set.

10.
Conventional analyses of a composite of multiple time-to-event outcomes use the time to the first event. However, the first event may not be the most important outcome. To address this limitation, generalized pairwise comparisons and win statistics (the win ratio, win odds, and net benefit) have become popular and have been applied in clinical trial practice. However, the win ratio, win odds, and net benefit have typically been used separately. In this article, we examine the joint use of these three win statistics for time-to-event outcomes. First, we explain the relation of point estimates and variances among the three win statistics, and the relation between the net benefit and the Mann–Whitney U statistic. Then we explain that the three win statistics are based on the same win proportions and test the same null hypothesis of equal win probabilities in the two groups. We show theoretically that the Z-values of the corresponding statistical tests are approximately equal; therefore, the three win statistics provide very similar p-values and statistical powers. Finally, using simulation studies and data from a clinical trial, we demonstrate that, when there is no (or little) censoring, the three win statistics can complement one another to show the strength of the treatment effect. However, when the amount of censoring is not small and no adjustment for censoring is made, the win odds and the net benefit may have an advantage for interpreting the treatment effect; with adjustment (e.g., IPCW adjustment) for censoring, the three win statistics can complement one another to show the strength of the treatment effect. For the calculations we use the R package WINS, available on CRAN (the Comprehensive R Archive Network).
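The claim that the three win statistics are built from the same win proportions is easy to see in code. The sketch below, assuming fully observed (uncensored) outcomes where larger is better, computes all three from one pass over the pairwise comparisons; for real censored data one would use the WINS package mentioned in the abstract.

```python
from itertools import product

def win_statistics(trt, ctl):
    # All pairwise treatment-vs-control comparisons; larger outcome wins.
    # Assumes fully observed outcomes, so no IPCW adjustment is needed.
    wins = losses = ties = 0
    for a, b in product(trt, ctl):
        if a > b:
            wins += 1
        elif a < b:
            losses += 1
        else:
            ties += 1
    n = wins + losses + ties
    pw, pl, pt = wins / n, losses / n, ties / n   # the shared win proportions
    win_ratio = pw / pl
    win_odds = (pw + 0.5 * pt) / (pl + 0.5 * pt)  # ties split evenly
    net_benefit = pw - pl
    return win_ratio, win_odds, net_benefit

wr, wo, nb = win_statistics([5, 7, 9], [4, 6, 8])
```

All three numbers are deterministic functions of `(pw, pl, pt)`, which is why they test the same null hypothesis of equal win probabilities.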

11.
The random censorship model (RCM) is commonly used in biomedical science for modeling life distributions. The popular non-parametric Kaplan–Meier estimator and some semiparametric models, such as the Cox proportional hazards model, are extensively discussed in the literature. In this paper, we propose to fit the RCM under the assumption that the actual life distribution and the censoring distribution have a proportional odds relationship. The parametric model is defined using Marshall–Olkin's extended Weibull distribution. We use the maximum-likelihood procedure to estimate the model parameters, the survival distribution, the mean residual life function, and the hazard rate. The proportional odds assumption is also justified by a newly proposed bootstrap Kolmogorov–Smirnov type goodness-of-fit test. A simulation study on the MLE of the model parameters and the median survival time is carried out to assess the finite-sample performance of the model. Finally, we apply the proposed model to two real-life data sets.

12.
Papers dealing with measures of predictive power in survival analysis have treated independence of censoring, or unbiasedness of the estimates under censoring, as the most important property. We argue that this property has been wrongly understood. Discussing the so-called measure of information gain, we point out that we cannot have unbiased estimates if all values greater than a given time τ are censored. This is because censoring before τ has a different effect than censoring after τ. Such a τ is often introduced by the design of a study. Independence can only be achieved under the assumption that the model remains valid after τ, which is impossible to verify. But if one is willing to make such an assumption, we suggest using multiple imputation to obtain a consistent estimate. We further show that censoring has different effects on the estimation of the measure for the Cox model than for parametric models, and we discuss them separately. We also give some warnings about the usage of the measure, especially when it comes to comparing essentially different models.

13.
The multiple longitudinal outcomes collected in many clinical trials are often analyzed by multilevel item response theory (MLIRT) models. The normality assumption for the continuous outcomes in MLIRT models can be violated due to skewness and/or outliers. Moreover, patients’ follow-up may be stopped by some terminal events (e.g., death or dropout), which are dependent on the multiple longitudinal outcomes. We propose a joint modeling framework based on the MLIRT model to account for three data features: skewness, outliers, and dependent censoring. Our method development was motivated by a clinical study for Parkinson’s disease.

14.
When observational data are used to compare treatment-specific survivals, regular two-sample tests, such as the log-rank test, need to be adjusted for the imbalance between treatments with respect to baseline covariate distributions. In addition, the standard assumption that survival time and censoring time are conditionally independent given the treatment, required for the regular two-sample tests, may not be realistic in observational studies. Moreover, treatment-specific hazards are often non-proportional, resulting in small power for the log-rank test. In this paper, we propose a set of adjusted weighted log-rank tests and their supremum versions by inverse probability of treatment and censoring weighting to compare treatment-specific survivals based on data from observational studies. These tests are proven to be asymptotically correct. Simulation studies show that, with realistic sample sizes and censoring rates, the proposed tests have the desired Type I error probabilities and are more powerful than the adjusted log-rank test when the treatment-specific hazards differ in non-proportional ways. A real data example illustrates the practical utility of the new methods.

15.
In recent years, significant progress has been made in developing statistically rigorous methods to implement clinically interpretable sensitivity analyses for assumptions about the missingness mechanism in clinical trials for continuous and (to a lesser extent) for binary or categorical endpoints. Studies with time‐to‐event outcomes have received much less attention. However, such studies can be similarly challenged with respect to the robustness and integrity of primary analysis conclusions when a substantial number of subjects withdraw from treatment prematurely prior to experiencing an event of interest. We discuss how the methods that are widely used for primary analyses of time‐to‐event outcomes could be extended in a clinically meaningful and interpretable way to stress‐test the assumption of ignorable censoring. We focus on a ‘tipping point’ approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of primary analysis results can then be assessed based on clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation using parametric, semi‐parametric, and non‐parametric imputation models and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time‐to‐event data and conclude that the method based on a piecewise exponential imputation model of survival has some advantages over the other methods studied here. Copyright © 2016 John Wiley & Sons, Ltd.

16.
The Bradley–Terry model is widely and often beneficially used to rank objects from paired comparisons. The underlying assumption that makes ranking possible is the existence of a latent linear scale of merit or, equivalently, a kind of transitivity of the preference. However, in some situations, such as sensory comparisons of products, this assumption can be unrealistic. In these contexts, although the Bradley–Terry model remains of significant interest, the linear ranking does not make sense. Our aim is to propose a 2-dimensional extension of the Bradley–Terry model that accounts for interactions between the compared objects. From a methodological point of view, this proposal can be seen as a multidimensional scaling approach in the context of a logistic model for binomial data. Maximum likelihood is investigated and asymptotic properties are derived in order to construct confidence ellipses on the diagram of the 2-dimensional scores. An illustrative example based on real sensory data shows how to use the 2-dimensional model to inspect the lack of fit of the Bradley–Terry model.
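As background for the abstract above, here is a hedged sketch of fitting the classical one-dimensional Bradley–Terry model (the baseline the 2-dimensional extension departs from, not the authors' method) via the standard MM iteration. It assumes the comparison graph is connected and every object wins at least once; the function name `fit_bradley_terry` is illustrative.

```python
import numpy as np

def fit_bradley_terry(wins, iters=1000):
    # MM iteration for Bradley-Terry merits p_i (Hunter-style update):
    # p_i <- W_i / sum_j [ n_ij / (p_i + p_j) ], then rescale to sum to 1,
    # where W_i = total wins of i and n_ij = comparisons between i and j.
    # wins[i][j] = number of times object i beat object j.
    W = np.asarray(wins, float)
    games = W + W.T
    total_wins = W.sum(axis=1)
    p = np.full(W.shape[0], 1.0 / W.shape[0])
    for _ in range(iters):
        denom = games / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)   # no self-comparisons
        p = total_wins / denom.sum(axis=1)
        p /= p.sum()                   # fix the arbitrary scale
    return p

merits = fit_bradley_terry([[0, 3, 4],
                            [1, 0, 3],
                            [0, 1, 0]])
```

Under the model, the preference probability is P(i beats j) = p_i / (p_i + p_j), so the fitted merits induce exactly the latent linear ranking whose adequacy the 2-dimensional extension is designed to inspect.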

17.
This work considers goodness-of-fit for life test data with hybrid censoring. An alternative representation of the Kolmogorov–Smirnov (KS) statistic is provided under Type-I censoring. The alternative representation leads us to approximate the limiting distributions of the KS statistic as a functional of the Brownian bridge for Type-II, Type-I hybrid, and Type-II hybrid censored data. The approximated distributions are used to obtain the critical values of the tests in this context. We find that the proposed KS test procedure for Type-II censoring has more power than those available in the literature.
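The abstract's alternative representation and Brownian-bridge approximations are not reproduced here; as an illustration only, the sketch below computes the basic restricted-sup KS-type statistic under Type-I censoring, i.e. the largest gap between the empirical cdf and a hypothesized F0 over [0, T], since failures are only observed up to the cutoff T. The name `ks_type1` is hypothetical.

```python
import numpy as np

def ks_type1(sample, F0, T):
    # KS-type distance between the empirical cdf and F0, restricted to [0, T]
    # (Type-I censoring: only failures occurring before T are observed).
    x = np.sort(np.asarray(sample, float))
    n = x.size
    obs = x[x <= T]
    d = 0.0
    for i, xi in enumerate(obs):
        # the ecdf jumps from i/n to (i+1)/n at xi; check both sides
        d = max(d, abs((i + 1) / n - F0(xi)), abs(i / n - F0(xi)))
    return max(d, abs(obs.size / n - F0(T)))   # also check at the cutoff T
```

Critical values for this restricted statistic differ from the classical KS table, which is exactly why the abstract derives approximate limiting distributions for the censored cases.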

18.
Life table analysis techniques in epidemiology depend upon the asymptotic properties of the statistical test methods employed. In some instances, the statistical procedures indicate highly significant results which are, in reality, unjustified. The phenomenon may occur when the asymptotic methods are applied in situations where the cases of interest are few in number. This situation is illustrated by the 20 multiple myeloma deaths observed in the RERF Life Span Study cohort. A permutation test is applied to the life table data, although the test requires the false assumption that the censoring distribution is independent of the radiation dose. A simulation test is developed which does not require equal censoring, which has the same asymptotics as the usual test methods, and which is less likely to overestimate significance in small samples. It is found that both of these small-sample tests provide reasonable numerical solutions. In addition, the simulation test is recommended in general for analyzing life table data with unequal censoring. Finally, by using the small-sample tests, the frequency of death from multiple myeloma is shown to be positively associated with radiation dose (P<0.01).

19.
We focus on regression analysis of irregularly observed longitudinal data, which often occur in medical follow-up studies and observational investigations. The model for such data involves two processes: a longitudinal response process of interest and an observation process controlling the observation times. Restrictive models and questionable assumptions, such as the Poisson assumption and the independent censoring time assumption, were imposed in previous work on analysing longitudinal data. In this paper, we propose a more general model together with a robust estimation approach for longitudinal data with informative observation times and censoring times, and the asymptotic normality of the proposed estimators is established. Both simulation studies and a real data application indicate that the proposed method is promising.

20.
Consider a life testing experiment in which n units are put on test, successive failure times are recorded, and the observation is terminated either at a specified number r of failures or at a specified time T, whichever is reached first. This mixture of Type I and Type II censoring schemes, called hybrid censoring, is widely used. Under this censoring scheme and the assumption of an exponential life distribution, the distribution of the maximum likelihood estimator of the mean life θ is derived. It is then used to construct an exact lower confidence bound for θ.
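The MLE whose distribution the abstract studies has a simple closed form under the exponential assumption: total time on test divided by the number of observed failures. The sketch below illustrates it on fully simulated lifetimes (so the stopping time can be computed directly); the function name `hybrid_mle` is illustrative, and the exact confidence bound from the abstract is not reproduced.

```python
import numpy as np

def hybrid_mle(lifetimes, r, T):
    # Hybrid censoring: observation stops at min(time of r-th failure, T).
    # Exponential MLE of the mean life: total time on test / failure count.
    t = np.sort(np.asarray(lifetimes, float))
    n = t.size
    stop = min(t[r - 1], T)                 # termination time of the experiment
    observed = t[t <= stop]                 # failures seen before termination
    d = observed.size
    if d == 0:
        raise ValueError("no failures observed before T; the MLE does not exist")
    ttt = observed.sum() + (n - d) * stop   # total time on test
    return ttt / d
```

If the r-th failure arrives first, the rule reduces to the Type II estimator with d = r; if T arrives first, it reduces to the Type I estimator with a random failure count, which is what makes the exact distribution of the estimator nontrivial.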

