Similar Articles
20 similar articles found (search time: 484 ms)
1.
Regression Parameter Estimation from Panel Counts   (Cited: 1; self-citations: 0; citations by others: 1)
This paper considers a study where each subject may experience multiple occurrences of an event and the rate of the event occurrences is of primary interest. Specifically, we are concerned with situations where, for each subject, there are only records of the accumulated counts of event occurrences at a finite number of time points over the study period. Sets of observation times may vary from subject to subject and differ between groups. We model the mean number of event occurrences over time semiparametrically, and estimate the regression parameter. The proposed estimation procedures are illustrated with data from a bladder cancer study (Byar, 1980). Both asymptotics and simulation studies on the estimators are presented.

2.
In oncology, progression-free survival time, which is defined as the minimum of the times to disease progression or death, often is used to characterize treatment and covariate effects. We are motivated by the desire to estimate the progression time distribution on the basis of data from 780 paediatric patients with choroid plexus tumours, which are a rare brain cancer where disease progression always precedes death. In retrospective data on 674 patients, the times to death or censoring were recorded but progression times were missing. In a prospective study of 106 patients, both times were recorded but there were only 20 non-censored progression times and 10 non-censored survival times. Consequently, estimating the progression time distribution is complicated by the problems that, for most of the patients, either the survival time is known but the progression time is not known, or the survival time is right censored and it is not known whether the patient's disease progressed before censoring. For data with these missingness structures, we formulate a family of Bayesian parametric likelihoods and present methods for estimating the progression time distribution. The underlying idea is that estimating the association between the time to progression and subsequent survival time from patients having complete data provides a basis for utilizing covariates and partial event time data of other patients to infer their missing progression times. We illustrate the methodology by analysing the brain tumour data, and we also present a simulation study.
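The underlying idea of borrowing the progression-survival association from complete cases to fill in missing progression times can be sketched with a crude ratio-based imputation. This is an illustrative simplification, not the paper's Bayesian parametric likelihood; the function and variable names are hypothetical:

```python
# Crude sketch: estimate the mean ratio progression_time / survival_time from
# complete cases, then impute progression times for patients whose survival
# time is known but whose progression time is missing.
def impute_progression(complete, survival_only):
    """complete: list of (progression, survival) pairs with progression <= survival.
    survival_only: survival times of patients with missing progression times."""
    ratios = [p / s for p, s in complete]
    mean_ratio = sum(ratios) / len(ratios)
    return [mean_ratio * s for s in survival_only]

# Toy data: progression always precedes death, as in choroid plexus tumours.
complete = [(1.0, 2.0), (2.0, 4.0), (3.0, 4.0)]
imputed = impute_progression(complete, [10.0, 6.0])
```

A full analysis would instead place a parametric model on the joint distribution and propagate the uncertainty of the imputed times, which this point estimate ignores.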

3.
In incident cohort studies, survival data often include subjects who have experienced an initiating event but have not experienced a subsequent event at the calendar time of recruitment. During the follow-up period, subjects may undergo a series of successive events. Since the second/third duration process becomes observable only if the first/second event has occurred, the data are subject to left truncation and dependent censoring. In this article, using the inverse-probability-weighted (IPW) approach, we propose nonparametric estimators of the joint survival function of three successive duration times. The asymptotic properties of the proposed estimators are established. Simple bootstrap methods are used to estimate standard deviations and construct interval estimators. A simulation study is conducted to investigate the finite-sample properties of the proposed estimators.

4.
The product limit or Kaplan-Meier (KM) estimator is commonly used to estimate the survival function in the presence of incomplete time-to-event data. Application of this method inherently assumes that the occurrence of an event is known with certainty. However, the clinical diagnosis of an event is often subject to misclassification due to assay error or adjudication error, by which the event is assessed with some uncertainty. In the presence of such errors, the true distribution of the time to first event would not be estimated accurately using the KM method. We develop a method to estimate the true survival distribution by incorporating negative and positive predictive values into a KM-like method of estimation. This allows us to quantify the bias in the KM survival estimates due to the presence of misclassified events in the observed data. We present an unbiased estimator of the true survival function and its variance. Asymptotic properties of the proposed estimators are provided, and these properties are examined through simulations. We demonstrate our methods using data from the Viral Resistance to Antiviral Therapy of Hepatitis C study.
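For reference, the standard KM estimator that the paper modifies can be sketched as follows. This is plain KM only; the paper's adjustment, replacing the observed event counts with PPV/NPV-weighted counts, is not reproduced here:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimator: returns [(event time, survival just after it)].
    events: 1 = observed event, 0 = right censored."""
    pairs = sorted(zip(times, events))
    n = len(pairs)
    surv, curve, i = 1.0, [], 0
    while i < n:
        t = pairs[i][0]
        leaving = [e for tt, e in pairs[i:] if tt == t]  # subjects leaving at t
        d = sum(leaving)                                  # observed events at t
        if d > 0:
            at_risk = n - i
            surv *= (at_risk - d) / at_risk               # KM product step
            curve.append((t, surv))
        i += len(leaving)
    return curve

curve = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])
```

The censored observation at time 3 contributes to the risk sets before it but produces no step, which is exactly the behavior the misclassification adjustment has to modify when event status is uncertain.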

5.
This paper investigates quantile residual life regression based on semi-competing risks data. Because the terminal event time dependently censors the non-terminal event time, inference on the non-terminal event time is not available without extra assumptions. Therefore, we assume that the non-terminal and terminal event times follow an Archimedean copula. We then apply the inverse probability weighting technique to construct an estimating equation for the quantile residual life regression coefficients. Because the estimating equation may not be continuous in the coefficients, we apply the generalized solution approach to overcome this problem. Since the variance of the proposed estimator is difficult to estimate directly, we use the bootstrap resampling method. Simulations show that the proposed method performs well. Finally, we analyze the Bone Marrow Transplant data for illustration.
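The bootstrap variance step can be illustrated generically: resample subjects with replacement, recompute the statistic on each resample, and take the standard deviation across replicates. This is a generic nonparametric bootstrap sketch, not tied to the paper's estimating equation:

```python
import random

def bootstrap_se(data, statistic, n_boot=500, seed=0):
    """Bootstrap standard error of statistic(data) by resampling with replacement."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]   # resample n items
        reps.append(statistic(sample))
    mean = sum(reps) / n_boot
    return (sum((r - mean) ** 2 for r in reps) / (n_boot - 1)) ** 0.5

data = [2.1, 3.4, 1.7, 4.0, 2.8, 3.1, 2.5, 3.9]
se = bootstrap_se(data, lambda s: sum(s) / len(s))  # SE of the sample mean
```

For the sample mean the bootstrap SE should land near the analytic value s/sqrt(n), which gives a quick correctness check before applying the same machinery to a less tractable estimator.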

6.
The October 2015 precipitation event in the Southeastern United States brought large amounts of rainfall to South Carolina, with particularly heavy amounts in Charleston and Columbia. The subsequent flooding resulted in numerous casualties and hundreds of millions of dollars in property damage. Precipitation levels were so severe that media outlets and government agencies labeled this storm as a 1 in 1000-year event in parts of the state. Two points of discussion emerged as a result of this event. The first was related to understanding the degree to which this event was anomalous; the second was related to understanding whether precipitation extremes in South Carolina have changed over recent time. In this work, 50 years of daily precipitation data at 28 locations are used to fit a spatiotemporal hierarchical model, with the ultimate goal of addressing these two points of discussion. Bayesian inference is used to estimate return levels and to perform a severity-area-frequency analysis, and it is determined that precipitation levels related to this event were atypical throughout much of the state, but were particularly unusual in the Columbia area. This analysis also finds marginal evidence in favor of the claim that precipitation extremes in the Carolinas have become more intense over the last 50 years.
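A T-year return level is the value that the annual maximum exceeds with probability 1/T; under a generalized extreme value (GEV) fit it has a closed form. The following is a standalone formula sketch with made-up parameter values, not the paper's spatiotemporal hierarchical model:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Return level z_T solving P(annual max > z_T) = 1/T under GEV(mu, sigma, xi)."""
    y = -math.log(1.0 - 1.0 / T)        # -log of the non-exceedance probability
    if abs(xi) < 1e-12:                 # Gumbel limit (xi -> 0)
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

z100 = gev_return_level(0.0, 1.0, 0.0, 100)    # 100-year level, standard Gumbel
z1000 = gev_return_level(0.0, 1.0, 0.1, 1000)  # 1000-year level, heavy-tailed case
```

Return levels grow without bound in T for xi >= 0 and approach a finite upper endpoint for xi < 0, which is why the shape parameter dominates statements like "1 in 1000-year event".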

7.
In biostatistical applications interest often focuses on the estimation of the distribution of the time T between two consecutive events. If the initial event time is observed and the subsequent event time is only known to be larger or smaller than an observed monitoring time C, then the data conform to the well understood singly-censored current status model, also known as interval censored data, case I. Additional covariates can be used to allow for dependent censoring and to improve estimation of the marginal distribution of T. Assuming a wrong model for the conditional distribution of T, given the covariates, will lead to an inconsistent estimator of the marginal distribution. On the other hand, the nonparametric maximum likelihood estimator of F_T requires splitting up the sample into several subsamples, each corresponding with a particular value of the covariates, computing the NPMLE for every subsample and then taking an average. With a few continuous covariates the performance of the resulting estimator is typically miserable. In van der Laan and Robins (1996) a locally efficient one-step estimator is proposed for smooth functionals of the distribution of T, assuming nothing about the conditional distribution of T, given the covariates, but assuming a model for censoring, given the covariates. The estimators are asymptotically linear if the censoring mechanism is estimated correctly. The estimator also uses an estimator of the conditional distribution of T, given the covariates. If this estimate is consistent, then the estimator is efficient, and if it is inconsistent, then the estimator is still consistent and asymptotically normal. In this paper we show that the estimators can also be used to estimate the distribution function in a locally optimal way. Moreover, we show that the proposed estimator can be used to estimate the distribution based on interval censored data (T is now known to lie between two observed points) in the presence of covariates. The resulting estimator also has a known influence curve so that asymptotic confidence intervals are directly available. In particular, one can apply our proposal to interval censored data without covariates. In Geskus (1992) the information bound for interval censored data with two uniformly distributed monitoring times, at the uniform distribution for T, has been computed. We show that the relative efficiency of our proposal with respect to this optimal bound equals 0.994, which is also reflected in finite sample simulations. Finally, the good practical performance of the estimator is shown in a simulation study. This revised version was published online in July 2006 with corrections to the Cover Date.

8.
In follow-up studies, survival data often include subjects who have had a certain event at recruitment and may potentially experience a series of subsequent events during the follow-up period. This kind of survival data collected under a cross-sectional sampling criterion is called truncated serial event data. The outcome variables of interest in this paper are serial sojourn times between successive events. To analyze the sojourn times in truncated serial event data, we need to confront two potential sampling biases arising simultaneously from a sampling criterion and induced informative censoring. In this study, nonparametric estimation of the joint probability function of serial sojourn times is developed by using inverse probabilities of the truncation and censoring times as weight functions to accommodate these two sampling biases under various situations of truncation and censoring. Relevant statistical properties of the proposed estimators are also discussed. Simulation studies and two real data examples are presented to illustrate the proposed methods.

9.
Subgroup detection has received increasing attention recently in different fields such as clinical trials, public management and market segmentation analysis. In these fields, people often face time-to-event data, which are commonly subject to right censoring. This paper proposes a semiparametric Logistic-Cox mixture model for subgroup analysis when the interested outcome is event time with right censoring. The proposed method mainly consists of a likelihood ratio-based testing procedure for testing the existence of subgroups. The expectation-maximization iteration is applied to improve the testing power, and a model-based bootstrap approach is developed to implement the testing procedure. When there exist subgroups, one can also use the proposed model to estimate the subgroup effect and construct predictive scores for the subgroup membership. The large sample properties of the proposed method are studied. The finite sample performance of the proposed method is assessed by simulation studies. A real data example is also provided for illustration.

10.
Recurrent event data arise in many biomedical and engineering studies when failure events can occur repeatedly over time for each study subject. In this article, we are interested in nonparametric estimation of the hazard function for gap time. A penalized likelihood model is proposed to estimate the hazard as a function of both gap time and covariate. A method for smoothing parameter selection is developed from subject-wise cross-validation. Confidence intervals for the hazard function are derived using the Bayes model of the penalized likelihood. An eigenvalue analysis establishes the asymptotic convergence rates of the relevant estimates. Empirical studies are performed to evaluate various aspects of the method. The proposed technique is demonstrated through an application to the well-known bladder tumor data.

11.
The recent controversy about the size of crowds at candlelight protests in Korea raises an interesting question regarding the methods used to estimate crowd size. Protest organizers tend to count all participants in the event from its start to its finish, while the police usually report the crowd size at its peak. While several counting methods are available to estimate the size of a crowd at a given time, counting the total number of participants at a protest is not straightforward. In this paper, we propose a new estimator of the total number of participants, which we call the size of a dynamic crowd. We assume that the arrival and departure times of the crowd are randomly observed and that the number of attendees in the crowd at a specific time is estimable. We estimate the total number of attendees during the entire gathering based on the capture-recapture model. We also propose a bootstrap procedure to construct a confidence interval for the crowd size. We demonstrate the performance of the proposed method with simulation studies and data from Korea's March for Science, part of a global event held on Earth Day, April 22, 2017.
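The capture-recapture principle underlying the estimator can be shown with the classical Lincoln-Petersen formula. This is a static two-sample sketch; the paper extends the idea to a dynamic crowd with random arrivals and departures:

```python
def lincoln_petersen(n1, n2, m):
    """Estimate population size from two samples:
    n1 individuals marked in sample 1, n2 drawn in sample 2, m of them recaptured."""
    if m == 0:
        raise ValueError("no recaptures: estimate is undefined")
    return n1 * n2 / m

# Mark 200 attendees, later sample 150 and find 30 marked among them.
N_hat = lincoln_petersen(200, 150, 30)   # -> 1000.0
```

The logic is that the marked fraction in the second sample (m/n2) estimates the marked fraction in the whole crowd (n1/N), so N is estimated by n1*n2/m.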

12.
In this article, a semiparametric approach is proposed for the regression analysis of panel count data. Panel count data commonly arise in clinical trials and demographic studies where the response variable is the number of multiple recurrences of the event of interest and observation times are not fixed, varying from subject to subject. It is assumed that two processes exist in these data: one for the recurrent events and one for the observation times. Many studies have estimated the mean function and regression parameters under independence between the recurrent event process and the observation time process. In this article, the same statistical inference is studied, but the situation where these two processes may be related is also considered. A mixed Poisson process is applied for the recurrent event process, and a frailty intensity function is used for the observation times. Simulation studies are conducted to study the performance of the suggested methods. The bladder tumor data are analyzed to compare with the results of previous studies.
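The mixed Poisson assumption means each subject's counts are Poisson given a subject-specific frailty; marginally the counts are then overdispersed. A small simulation illustrates this, with illustrative parameter choices only:

```python
import random

def simulate_mixed_poisson(n_subjects, base_rate, t, shape=2.0, seed=1):
    """Counts over [0, t]: Z_i ~ Gamma(shape, scale=1/shape), so E[Z_i] = 1,
    then N_i ~ Poisson(Z_i * base_rate * t)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_subjects):
        z = rng.gammavariate(shape, 1.0 / shape)
        lam = z * base_rate * t
        # Poisson draw via Knuth's multiplication method (fine for moderate lam)
        L, k, p = pow(2.718281828459045, -lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                break
            k += 1
        counts.append(k)
    return counts

counts = simulate_mixed_poisson(2000, base_rate=1.0, t=5.0)
```

For a plain Poisson process the marginal variance equals the mean; the gamma frailty inflates the variance, and that overdispersion is what the sample statistics should reveal.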

13.
For analyzing recurrent event data, either the total time scale or the gap time scale is adopted according to research interest. In particular, the gap time scale is known to be more appropriate for modeling a renewal process. In this paper, we adopt the gap time scale to analyze recurrent event data with repeated observation gaps, which cannot be observed completely because the termination times of the observation gaps are unknown. In order to estimate the termination times, an interval-censoring mechanism is applied. Simulation studies are conducted to compare the suggested methods with the unadjusted method that ignores incomplete observation gaps. As a real example, a conviction data set with suspensions is analyzed with the suggested methods.
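The distinction between the two time scales is simple to state in code: the total time scale uses calendar times from study entry, while the gap time scale uses increments between successive events. This is a minimal illustration only; handling observation gaps with unknown termination times is the subject of the paper and is not reproduced:

```python
def gap_times(event_times):
    """Convert calendar event times (total time scale) into gap times,
    measuring each event from the previous one (time origin 0)."""
    prev, gaps = 0.0, []
    for t in event_times:
        gaps.append(t - prev)
        prev = t
    return gaps

gaps = gap_times([2.0, 5.0, 9.0])   # -> [2.0, 3.0, 4.0]
```

Under a renewal process the gap times are i.i.d., which is why the gap time scale is the natural one for renewal-type models.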

14.
This paper considers comparison of discrete failure time distributions when the survival time of interest measures elapsed time between two related events and observations on the occurrences of both events could be interval-censored. This kind of data is often referred to as doubly interval-censored failure time data. If the occurrence of the first event defining the survival time can be exactly observed, the data are usually referred to as interval-censored data. For the comparison problem based on interval-censored failure time data, Sun (1996) proposed a nonparametric test procedure. In this paper we generalize the procedure given in Sun (1996) to the doubly interval-censored data case and the generalized test is evaluated by simulations.

15.
Current status data arise in studies where the target measurement is the time of occurrence of some event, but observations are limited to indicators of whether or not the event has occurred at the time the sample is collected - only the current status of each individual with respect to event occurrence is observed. Examples of such data arise in several fields, including demography, epidemiology, econometrics and bioassay. Although estimation of the marginal distribution of times of event occurrence is well understood, techniques for incorporating covariate information are not well developed. This paper proposes a semiparametric approach to estimation for regression models of current status data, using techniques from generalized additive modeling and isotonic regression. This procedure provides simultaneous estimates of the baseline distribution of event times and covariate effects. No parametric assumptions about the form of the baseline distribution are required. The results are illustrated using data from a demographic survey of breastfeeding practices in developing countries, and from an epidemiological study of heterosexual Human Immunodeficiency Virus (HIV) transmission. This revised version was published online in July 2006 with corrections to the Cover Date.
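For current status data without covariates, the NPMLE of the event time distribution F at the monitoring times is the isotonic regression of the status indicators on the monitoring times, computable with the pool-adjacent-violators algorithm (PAVA). This sketches the classical marginal NPMLE; the paper's contribution, incorporating covariates, is not shown:

```python
def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    merged = []                            # stack of [block mean, block weight]
    for v in y:
        merged.append([v, 1])
        # merge backwards while monotonicity is violated
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, m1 = merged.pop(), merged.pop()
            w = m1[1] + m2[1]
            merged.append([(m1[0] * m1[1] + m2[0] * m2[1]) / w, w])
    out = []
    for mean, w in merged:
        out.extend([mean] * w)
    return out

# Status indicators delta_i = 1{event occurred by C_i}, sorted by monitoring time C_i.
deltas_sorted = [0, 1, 0, 1, 1]
F_hat = pava(deltas_sorted)    # nondecreasing estimate of F at the sorted C_i
```

The violating pair (1, 0) in the middle is pooled to 0.5, so the fitted distribution function is flat there instead of decreasing.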

16.
The phenomenon of crossing hazard rates is common in clinical trials with time-to-event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing hazards alternative. However, there have been relatively few approaches available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazards modeling with a Box-Cox transformation of the time to event, a nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte Carlo simulations and applied to two clinical trial data sets.
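The notion of a first crossing time point can be illustrated with two simple parametric hazards and a bisection search. The toy hazards below are chosen so the crossing is known analytically; the paper works with kernel-smoothed hazard estimates instead:

```python
def first_crossing(h1, h2, lo, hi, tol=1e-8):
    """Bisection for a root of h1 - h2 on [lo, hi]; assumes a sign change there."""
    f = lambda t: h1(t) - h2(t)
    a, b = lo, hi
    if f(a) * f(b) > 0:
        raise ValueError("no sign change on [lo, hi]")
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

h_exp = lambda t: 1.0        # exponential hazard, rate 1 (constant)
h_weib = lambda t: t / 2.0   # Weibull(shape=2, scale=2) hazard: (k/s)*(t/s)**(k-1)
t_cross = first_crossing(h_exp, h_weib, 0.1, 10.0)   # hazards cross at t = 2
```

With a crossing estimated this way, a bootstrap over resampled data gives the confidence interval for the crossing time that the paper studies.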

17.
In this paper, the generalized log-gamma regression model is modified to allow for the possibility that long-term survivors may be present in the data. This modification leads to a generalized log-gamma regression model with a cure rate, encompassing, as special cases, the log-exponential, log-Weibull and log-normal regression models with a cure rate typically used to model such data. The models attempt to simultaneously estimate the effects of explanatory variables on the acceleration or deceleration of the timing of a given event and on the surviving fraction, that is, the proportion of the population for which the event never occurs. The normal curvatures of local influence are derived under some usual perturbation schemes, and two martingale-type residuals are proposed to assess departures from the generalized log-gamma error assumption as well as to detect outlying observations. Finally, a medical data set is analyzed.
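The cure rate structure can be written down directly: in a mixture cure formulation, the population survival is the cure fraction plus the uncured fraction times a latency survival. This is a generic mixture-cure sketch with a Weibull latency as a stand-in; the paper uses the generalized log-gamma family:

```python
import math

def mixture_cure_survival(t, cure_frac, shape, scale):
    """S_pop(t) = p + (1 - p) * S_u(t), with S_u the Weibull(shape, scale) survival."""
    s_u = math.exp(-((t / scale) ** shape))
    return cure_frac + (1.0 - cure_frac) * s_u

s0 = mixture_cure_survival(0.0, 0.3, 1.5, 2.0)      # survival starts at 1
s_inf = mixture_cure_survival(1e6, 0.3, 1.5, 2.0)   # plateaus at the cure fraction
```

The defining feature of a cure model is that S_pop(t) plateaus at the cure fraction p rather than decaying to zero, so long-term survivors receive positive mass.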

18.
In parallel group trials, long-term efficacy endpoints may be affected if some patients switch or cross over to the alternative treatment arm prior to the event. In oncology trials, switch to the experimental treatment can occur in the control arm following disease progression and potentially impact overall survival. It may be a clinically relevant question to estimate the efficacy that would have been observed if no patients had switched, for example, to estimate 'real-life' clinical effectiveness for a health technology assessment. Several commonly used statistical methods are available that try to adjust time-to-event data to account for treatment switching, ranging from naive exclusion and censoring approaches to more complex inverse probability of censoring weighting and rank-preserving structural failure time models. These are described, along with their key assumptions, strengths, and limitations. Best practice guidance is provided for both trial design and analysis when switching is anticipated. Available statistical software is summarized, and examples are provided of the application of these methods in health technology assessments of oncology trials. Key considerations include having a clearly articulated rationale and research question and a well-designed trial with sufficient good quality data collection to enable robust statistical analysis. No analysis method is universally suitable in all situations, and each makes strong untestable assumptions. There is a need for further research into new or improved techniques. This information should aid statisticians and their colleagues to improve the design and analysis of clinical trials where treatment switch is anticipated. Copyright © 2013 John Wiley & Sons, Ltd.

19.
In many medical studies, patients are followed longitudinally and interest is on assessing the relationship between longitudinal measurements and time to an event. Recently, various authors have proposed joint modeling approaches for longitudinal and time-to-event data for a single longitudinal variable. These joint modeling approaches become intractable with even a few longitudinal variables. In this paper we propose a regression calibration approach for jointly modeling multiple longitudinal measurements and discrete time-to-event data. Ideally, a two-stage modeling approach could be applied in which the multiple longitudinal measurements are modeled in the first stage and the longitudinal model is related to the time-to-event data in the second stage. Biased parameter estimation due to informative dropout makes this direct two-stage modeling approach problematic. We propose a regression calibration approach which appropriately accounts for informative dropout. We approximate the conditional distribution of the multiple longitudinal measurements given the event time by modeling all pairwise combinations of the longitudinal measurements using a bivariate linear mixed model which conditions on the event time. Complete data are then simulated based on estimates from these pairwise conditional models, and regression calibration is used to estimate the relationship between longitudinal data and time-to-event data using the complete data. We show that this approach performs well in estimating the relationship between multivariate longitudinal measurements and the time-to-event data and in estimating the parameters of the multiple longitudinal process subject to informative dropout. We illustrate this methodology with simulations and with an analysis of primary biliary cirrhosis (PBC) data.

20.
We propose a new cure rate survival model by assuming that the initial number of competing causes of the event of interest follows a Poisson distribution and the time to event has the odd log-logistic generalized half-normal distribution. This survival model provides a realistic interpretation of the biological mechanism of the event of interest. We estimate the model parameters using maximum likelihood. For different sample sizes, various simulation scenarios are examined. We propose diagnostics and a residual analysis to verify the model assumptions. The potential of the new cure rate model is illustrated by means of a real data set.
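The Poisson-competing-causes assumption yields the classical promotion time cure model: if the number of causes is Poisson(theta) and each cause's activation time has cdf F, then S_pop(t) = exp(-theta * F(t)) and the cure fraction is exp(-theta). The sketch below uses a plain half-normal F as a simple stand-in for the paper's odd log-logistic generalized half-normal distribution:

```python
import math

def promotion_time_survival(t, theta, sigma):
    """S_pop(t) = exp(-theta * F(t)), with F the half-normal cdf with scale sigma.
    theta is the Poisson mean number of competing causes."""
    F = math.erf(t / (sigma * math.sqrt(2.0)))   # half-normal cdf, t >= 0
    return math.exp(-theta * F)

s0 = promotion_time_survival(0.0, 1.2, 1.0)             # survival starts at 1
cure_fraction = promotion_time_survival(1e9, 1.2, 1.0)  # limit is exp(-theta)
```

The derivation is one line: with N ~ Poisson(theta) causes, the event has not occurred by t iff every cause activates after t, and averaging (1 - F(t))^N over N gives exp(-theta * F(t)).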


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号