Similar Literature
 Found 20 similar documents (search time: 375 ms)
1.
Thanks to progress in the methodology for survival analysis of capture-mark-recapture data, biologists can now test the individual or environmental factors that are likely to affect survival and, relatedly, can estimate survival with a model that describes the population and the experiment satisfactorily and efficiently. Assessment of fit, adjustment and model selection are the main tasks in this process. Several computer programs with complementary abilities exist for these tasks, and one must most often use several of them in succession within a single analysis. As there is no standardized presentation of the data, the transition from one program to another is not particularly easy. CR is a software package that aims to alleviate those difficulties by bringing together some of the most popular programs and providing passageways between them. We explain how a typical analysis is carried out with CR and emphasize the flexibility that can be achieved with SURGE, a program for designing and fitting survival models which is an integral part of CR. A real example is treated as illustration.

2.
The study of television audience viewing behavior is very important: the results can provide broadcasters and advertisers with useful information to increase the effectiveness of television programming and advertising. Based on hazard rate analysis for survival models, this research develops a new statistical model to fit the diffusion pattern of TV programs, which is a measure of a program's overall popularity and is used as a criterion for selling television time. The model helps decision makers at the networks better understand the acceptance of a show and the underlying behavioral patterns of its viewers. It fits empirical data from Hong Kong very well and outperforms existing models. The basic model is then extended to a proportional hazards model to study covariate effects on the likelihood of an individual watching the program at an earlier stage. Advertisers can benefit from these results in targeting their desired customers.

3.
Many seemingly different problems in machine learning, artificial intelligence, and symbolic processing can be viewed as requiring the discovery of a computer program that produces some desired output for particular inputs. When viewed in this way, the process of solving these problems becomes equivalent to searching a space of possible computer programs for a highly fit individual computer program. The recently developed genetic programming paradigm described herein provides a way to search the space of possible computer programs for a highly fit individual computer program to solve (or approximately solve) a surprising variety of different problems from different fields. In genetic programming, populations of computer programs are genetically bred using the Darwinian principle of survival of the fittest and using a genetic crossover (sexual recombination) operator appropriate for genetically mating computer programs. Genetic programming is illustrated via an example of machine learning of the Boolean 11-multiplexer function and symbolic regression of the econometric exchange equation from noisy empirical data.

Hierarchical automatic function definition enables genetic programming to define potentially useful functions automatically and dynamically during a run, much as a human programmer writing a complex computer program creates subroutines (procedures, functions) to perform groups of steps which must be performed with different instantiations of the dummy variables (formal parameters) in more than one place in the main program. Hierarchical automatic function definition is illustrated via the machine learning of the Boolean 11-parity function.
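For readers unfamiliar with the mechanics, the following is a minimal genetic-programming sketch in Python; it is a toy illustration, not Koza's actual system, and the target function, operator set, and parameters are all assumptions:

```python
# Toy genetic programming: symbolic regression of f(x) = x**2 + x.
# Programs are expression trees (nested tuples) bred by subtree crossover.
import random

FUNCS = [('add', 2), ('sub', 2), ('mul', 2)]
TERMS = ['x', 1.0]

def rand_tree(depth=3):
    """Grow a random expression tree as nested tuples."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    name, arity = random.choice(FUNCS)
    return (name,) + tuple(rand_tree(depth - 1) for _ in range(arity))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, a, b = tree
    l, r = evaluate(a, x), evaluate(b, x)
    return l + r if op == 'add' else l - r if op == 'sub' else l * r

def fitness(tree):
    """Sum of absolute errors over sample points (lower is better)."""
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

def nodes(tree, path=()):
    """All (path, subtree) pairs, used to pick crossover points."""
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from nodes(child, path + (i,))

def replace(tree, path, sub):
    if not path:
        return sub
    i = path[0]
    return tree[:i] + (replace(tree[i], path[1:], sub),) + tree[i + 1:]

def crossover(a, b):
    """Graft a random subtree of `b` onto a random point of `a`."""
    pa, _ = random.choice(list(nodes(a)))
    _, sb = random.choice(list(nodes(b)))
    return replace(a, pa, sb)

pop = [rand_tree() for _ in range(200)]
for gen in range(30):
    pop.sort(key=fitness)
    if fitness(pop[0]) == 0:
        break
    # survival of the fittest: the top half breeds the next generation
    pop = pop[:100] + [crossover(random.choice(pop[:100]),
                                 random.choice(pop[:100])) for _ in range(100)]
print(fitness(pop[0]), pop[0])
```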

4.
Four widely used statistical program packages—BMDP, SPSS, DATATEXT, and OSIRIS—were compared for computational accuracy on sample means, standard deviations, and correlations. Only one, BMDP, was not seriously inaccurate in calculations on a data set of three observations. Further, SPSS computed inaccurate statistics in a discriminant analysis on a real data set of 848 observations. It is recommended that the desk calculator algorithm, found in most of these programs, not be used in packages which may run on short word length machines.
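A short sketch of the numerical issue behind the recommendation (illustrative code, not the packages' implementations): the one-pass "desk calculator" formula for the sum of squares cancels catastrophically in 32-bit arithmetic, whereas a two-pass computation does not.

```python
# The one-pass textbook formula sum(x^2) - n*mean^2 loses almost all
# precision in float32 when the values are large relative to their spread.
import numpy as np

x = np.array([9000.0, 9000.1, 9000.2], dtype=np.float32)  # 3 observations

n = np.float32(len(x))
mean = x.sum(dtype=np.float32) / n

# one-pass "desk calculator" sum of squares about the mean
ss_onepass = x.dot(x) - n * mean * mean

# two-pass: subtract the mean first, then square
d = x - mean
ss_twopass = d.dot(d)

print("one-pass :", ss_onepass / (n - 1))  # badly wrong, may even be negative
print("two-pass :", ss_twopass / (n - 1))  # close to the true variance 0.01
print("float64  :", np.var(x.astype(np.float64), ddof=1))
```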

5.
This paper first describes a program, AGREE, that calculates many variants of coefficients for interobserver agreement. A pilot program, MOCK, was written to help uninitiated users of AGREE select the most appropriate coefficient given the data type and the research goal. It is a mock-up of the data entry and analysis sections of AGREE, to which are added some menus and a knowledge-based CONSULTANT system that questions the user. Results of a small experiment with four variants of the CONSULTANT are presented. This leads to a discussion of desirable features for this kind of help program and for preprocessors for specialized statistical software.

6.
Prostate cancer is the most common cancer diagnosed in American men and the second leading cause of death from malignancies. Large geographical variations and racial disparities exist in prostate cancer survival rates. Much work on spatial survival models is based on the proportional hazards model, but little has focused on the accelerated failure time model. In this paper, we investigate the prostate cancer data of Louisiana from the SEER program; the violation of the proportional hazards assumption suggests that a spatial survival model based on the accelerated failure time model is more appropriate for this data set. To account for possible extra-variation, we consider spatially-referenced independent or dependent spatial structures. The deviance information criterion (DIC) is used to select the best-fitting model within the Bayesian framework. The results from our study indicate that age, race, stage and geographical distribution are significant in evaluating prostate cancer survival.

7.
Prostate cancer (PrCA) is the most common cancer diagnosed in American men and the second leading cause of death from malignancies. Large geographical variations and racial disparities exist in PrCA survival rates. Much work on spatial survival models is based on the proportional hazards (PH) model, but little has focused on the accelerated failure time (AFT) model. In this paper, we investigate the PrCA data of Louisiana from the Surveillance, Epidemiology, and End Results program; the violation of the PH assumption suggests that a spatial survival model based on the AFT model is more appropriate for this data set. To account for possible extra-variation, we consider spatially referenced independent or dependent spatial structures. The deviance information criterion is used to select the best-fitting model within the Bayesian framework. The results from our study indicate that age, race, stage, and geographical distribution are significant in evaluating PrCA survival.

8.
Non-mixture cure models (NMCMs) are derived from a simplified representation of the biological process that takes place after treatment for cancer. These models are intended to represent the time from the end of treatment to the time of first recurrence of cancer in studies when a proportion of those treated are completely cured. However, for many studies overall survival is also of interest. A two-stage NMCM that estimates the overall survival from a combination of two cure models, one from end of treatment to first recurrence and one from first recurrence to death, is proposed. The model is applied to two studies of Ewing's tumor in young patients. Caution needs to be exercised when extrapolating from cure models fitted to short follow-up times, but these data and associated simulations show how, when follow-up is limited, a two-stage model can give more stable estimates of the cure fraction than a one-stage model applied directly to overall survival.
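A minimal sketch of the standard non-mixture cure model form assumed here (the paper's two-stage construction is not reproduced; the cure fraction and latent recurrence-time distribution below are hypothetical):

```python
# Standard NMCM survival: S(t) = pi ** F(t), where pi is the cure fraction
# and F is a proper CDF for the uncured recurrence times, so that S(0) = 1
# and S(t) -> pi as t -> infinity.
import numpy as np
from scipy.stats import weibull_min

pi = 0.4                                 # hypothetical cure fraction
F = weibull_min(c=1.5, scale=2.0).cdf    # assumed latent recurrence-time CDF

t = np.linspace(0, 10, 6)
S = pi ** F(t)                           # NMCM survival function
print(np.round(S, 3))                    # decreases from 1 toward the plateau at 0.4
```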

9.
In most reliability studies involving censoring, one assumes that censoring probabilities are unknown. We derive a nonparametric estimator for the survival function when information regarding censoring frequency is available. The estimator is constructed by adjusting the Nelson–Aalen estimator to incorporate censoring information. Our results indicate significant improvements can be achieved if available information regarding censoring is used. We compare this model to the Koziol–Green model, which is also based on a form of proportional hazards for the lifetime and censoring distributions. Two examples of survival data help to illustrate the differences in the estimation techniques.
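For reference, a minimal implementation of the unadjusted Nelson–Aalen estimator that the authors modify (the censoring-information adjustment itself is not reproduced; the data below are illustrative):

```python
# Nelson–Aalen cumulative hazard: H(t) = sum over event times t_i <= t
# of d_i / n_i, with d_i failures and n_i subjects at risk at t_i.
import numpy as np

def nelson_aalen(times, events):
    """times: observed times; events: 1 = failure, 0 = censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_at_risk = len(times)
    H, t_out, h = 0.0, [], []
    for t in np.unique(times):
        at_t = times == t
        d = events[at_t].sum()      # failures at time t
        if d > 0:
            H += d / n_at_risk      # increment d_i / n_i
            t_out.append(t)
            h.append(H)
        n_at_risk -= at_t.sum()     # drop failures and censorings at t
    return np.array(t_out), np.array(h)

t, H = nelson_aalen([2, 3, 3, 5, 7, 8], [1, 1, 0, 1, 0, 1])
print(dict(zip(t, np.round(H, 3))))
```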

10.
It is one of the important issues in survival analysis to compare two hazard rate functions to evaluate a treatment effect. It is quite common that the two hazard rate functions cross each other at one or more unknown time points, representing temporal changes of the treatment effect. In certain applications, besides survival data, we also have related longitudinal data available on some time-dependent covariates. In such cases, a joint model that accommodates both types of data allows us to infer the association between the survival and longitudinal data and to assess the treatment effect better. In this paper, we propose a modelling approach for comparing two crossing hazard rate functions by jointly modelling survival and longitudinal data. The parameters of the proposed joint model are estimated by maximum likelihood using the EM algorithm. Asymptotic properties of the maximum likelihood estimators are studied. To illustrate the virtues of the proposed method, we compare its performance with several existing methods in a simulation study. Our proposed method is also demonstrated using a real dataset obtained from an HIV clinical trial.

11.
Hierarchical models provide a useful framework for the complexities encountered in policy-relevant research in which the impact of social programs is being assessed. Such complexities include multi-site data, censored data and over-dispersion. In this paper, Bayesian inference through Markov chain Monte Carlo methods is used for the analysis of a complex hierarchical log-normal model that shows the impact of a managed care strategy aimed at limiting the length of hospital stays. Parameters in this model allow for variability in baseline length of stay as well as in the program effect across hospitals. The authors demonstrate elicitation and sensitivity analysis with respect to prior distributions. All calculations for the posterior and predictive distributions were obtained using the software BUGS.
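A sketch of such a hierarchical log-normal model, with PyMC swapped in for the BUGS software the authors used, and the model deliberately reduced (no censoring, a single program indicator, synthetic data):

```python
# Hierarchical log-normal length-of-stay model: hospital-specific baselines
# drawn from a common prior, plus a managed-care program effect.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_hosp, n_per = 8, 40
hosp = np.repeat(np.arange(n_hosp), n_per)
managed = rng.integers(0, 2, size=n_hosp * n_per)        # managed-care flag
true_mu = rng.normal(1.8, 0.3, size=n_hosp)
los = rng.lognormal(true_mu[hosp] - 0.2 * managed, 0.5)  # synthetic stays

with pm.Model():
    mu0 = pm.Normal("mu0", 2.0, 1.0)         # overall baseline (log scale)
    tau = pm.HalfNormal("tau", 1.0)          # between-hospital spread
    mu_h = pm.Normal("mu_h", mu0, tau, shape=n_hosp)
    beta = pm.Normal("beta", 0.0, 1.0)       # managed-care program effect
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.LogNormal("los", mu=mu_h[hosp] + beta * managed,
                 sigma=sigma, observed=los)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(idata.posterior["beta"].mean().item())  # posterior mean program effect
```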

12.
13.
The Weibull, log-logistic and log-normal distributions are extensively used to model time-to-event data. The Weibull family accommodates only monotone hazard rates, whereas the log-logistic and log-normal are widely used to model unimodal hazard functions. The increasing availability of lifetime data with a wide range of characteristics motivates us to develop more flexible models that accommodate both monotone and nonmonotone hazard functions. One such model is the exponentiated Weibull distribution, which not only accommodates monotone hazard functions but also allows for unimodal and bathtub-shaped hazard rates. This distribution has demonstrated considerable potential in univariate analysis of time-to-event data. However, the primary focus of many studies is rather on understanding the relationship between the time to the occurrence of an event and one or more covariates. This leads to a consideration of regression models that can be formulated in different ways in survival analysis. One such strategy involves formulating models for the accelerated failure time family of distributions. The most commonly used distributions serving this purpose are the Weibull, log-logistic and log-normal distributions. In this study, we show that the exponentiated Weibull distribution is closed under the accelerated failure time family. We then formulate a regression model based on the exponentiated Weibull distribution, and develop large-sample theory for statistical inference. We also describe a Bayesian approach for inference. Two comparative studies based on real and simulated data sets reveal that the exponentiated Weibull regression can be valuable in adequately describing different types of time-to-event data.
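A quick numerical check of the claimed flexibility (parameter choices are illustrative; scipy's exponweib parameterization is used, with a the exponentiation parameter and c the Weibull shape):

```python
# Exponentiated Weibull hazard shapes. In scipy's parameterization,
# pdf(x; a, c) = a*c*(1-exp(-x**c))**(a-1) * exp(-x**c) * x**(c-1),
# and the hazard is pdf/sf.
import numpy as np
from scipy.stats import exponweib

t = np.linspace(0.05, 4, 80)

def hazard(a, c):
    d = exponweib(a, c)
    return d.pdf(t) / d.sf(t)

shapes = {
    "increasing (a=2, c=2)":    hazard(2.0, 2.0),   # c > 1, a*c > 1
    "decreasing (a=0.5, c=0.5)": hazard(0.5, 0.5),  # c < 1, a*c < 1
    "unimodal   (a=4, c=0.5)":  hazard(4.0, 0.5),   # c < 1, a*c > 1
    "bathtub    (a=0.2, c=3)":  hazard(0.2, 3.0),   # c > 1, a*c < 1
}
for name, h in shapes.items():
    trend = np.sign(np.diff(h))
    # 0 sign changes = monotone; 1 = unimodal or bathtub
    print(name, "| direction changes:", int((np.diff(trend) != 0).sum()))
```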

14.
In many medical studies, there are covariates that change their values over time, and their analysis is most often modeled using the Cox regression model. However, many of these time-dependent covariates can be expressed as an intermediate event, which can be modeled using a multi-state model. Using the relationship between time-dependent (discrete) covariates and multi-state models, we compare (via simulation studies) the Cox model with time-dependent covariates with the most frequently used multi-state regression models. This article also details the procedures for generating survival data arising from all approaches, including the Cox model with time-dependent covariates.
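A sketch of one standard generation procedure for such data (piecewise-exponential hazards with assumed rates; not necessarily the article's exact algorithm):

```python
# Simulate survival with a binary time-dependent covariate (an intermediate
# event). The death hazard is lam0 before the intermediate event and
# lam0*exp(beta) after it; exponential memorylessness justifies restarting
# the death clock at the switch time.
import numpy as np

rng = np.random.default_rng(0)
lam0, beta, lam_int = 0.1, 0.7, 0.15  # baseline, effect, intermediate rate

def simulate_one():
    t_int = rng.exponential(1 / lam_int)   # time of intermediate event
    t1 = rng.exponential(1 / lam0)         # death draw at baseline rate
    if t1 < t_int:
        return t1, 0                       # died before the covariate switch
    # survived to t_int: residual lifetime at the elevated rate
    t2 = rng.exponential(1 / (lam0 * np.exp(beta)))
    return t_int + t2, 1

for t, z in (simulate_one() for _ in range(5)):
    print(f"time={t:6.2f}  intermediate_event={z}")
```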

15.
The Cox proportional hazards model has become the standard model for survival analysis. It is often seen as the null model in that "... explicit excuses are now needed to use different models" (Keiding, Proceedings of the XIXth International Biometric Conference, Cape Town, 1998). However, converging hazards also occur frequently in survival analysis. The Burr model, which may be derived as the marginal from a gamma frailty model, is one commonly used tool to model converging hazards. We outline this approach and introduce a mixed model which extends the Burr model and allows for both proportional and converging hazards. Although a semi-parametric model in its own right, we demonstrate how the mixed model can be derived via a gamma frailty interpretation, suggesting an E-M fitting procedure. We illustrate the modelling techniques using data on survival of hospice patients.
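The gamma frailty interpretation can be checked numerically: with a mean-one gamma frailty, the marginal survival is exactly the Burr form. A sketch under an assumed baseline cumulative hazard:

```python
# With frailty Z ~ Gamma(shape=1/theta, scale=theta) (mean 1) and conditional
# survival exp(-Z * H0(t)), the marginal survival is the Burr form
# (1 + theta * H0(t)) ** (-1/theta). Values below are illustrative.
import numpy as np

rng = np.random.default_rng(2)
theta = 0.8

def H0(t):
    return (t / 2.0) ** 1.3              # assumed baseline cumulative hazard

t = np.array([0.5, 1.0, 2.0, 4.0])
Z = rng.gamma(1 / theta, theta, size=200_000)

S_mc = np.exp(-np.outer(Z, H0(t))).mean(axis=0)  # Monte Carlo marginal
S_burr = (1 + theta * H0(t)) ** (-1 / theta)     # closed-form Burr survival
print("Monte Carlo:", np.round(S_mc, 4))
print("Burr form  :", np.round(S_burr, 4))
```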

16.
We decompose survival into the components corresponding to the disability-free survival probability and the disability survival probability. Our analysis is conducted for men and women separately, for age groups over 64 years. We discuss the estimation of a multiple-state model under several scenarios when only a single survey of cross-sectional data is available. The conclusions are used to discuss the disability level of the Spanish elderly population and are helpful for developing welfare programs and insurance products.

17.
Changes in survival rates during 1940–1992 for patients with Hodgkin's disease are studied by using population-based data. The aim of the analysis is to identify when the breakthrough in clinical trials of chemotherapy treatments started to increase population survival rates, and to find how long it took for the increase to level off, indicating that the full population effect of the breakthrough had been realized. A Weibull relative survival model is used because the model parameters are easily interpretable when assessing the effect of advances in clinical trials. However, the methods apply to any relative survival model that falls within the generalized linear models framework. The model is fitted by using modifications of existing software (SAS, GLIM) and profile likelihood methods. The results are similar to those from a cause-specific analysis of the data by Feuer and co-workers. Survival started to improve around the time that a major chemotherapy breakthrough (nitrogen mustard, Oncovin, prednisone and procarbazine) was publicized in the mid 1960s but did not level off for 11 years. For the analysis of data where the cause of death is obtained from death certificates, the relative survival approach has the advantage of providing the necessary adjustment for expected mortality from causes other than the disease without requiring information on the causes of death.

18.
The Cox proportional hazards model, which is widely used for the analysis of treatment and prognostic effects with censored survival data, makes the assumption that the hazard ratio is constant over time. Nonparametric estimators have been developed for an extended model in which the hazard ratio is allowed to change over time. Estimators based on residuals are appealing as they are easy to use and relate in a simple way to the more restricted Cox model estimator. After fitting a Cox model and calculating the residuals, one can obtain a crude estimate of the time-varying coefficients by adding a smooth of the residuals to the initial (constant) estimate. Treating the crude estimate as the fit, one can re-estimate the residuals. Iteration leads to consistent estimation of the nonparametric time-varying coefficients. This approach leads to clear guidelines for residual analysis in applications. The results are illustrated by an analysis of the Medical Research Council's myeloma trials, and by simulation.

19.
We analyse a flexible parametric estimation technique for a competing risks (CR) model with unobserved heterogeneity, extending a local mixed proportional hazards single-risk model for continuous duration time to a local mixture CR (LMCR) model for discrete duration time. The state-specific local hazard function for the LMCR model is by definition a valid density function if we have either one or two destination states. We conduct Monte Carlo experiments to compare the estimated parameters of the LMCR model, and of a CR model based on a Heckman–Singer-type (HS-type) technique, with the data-generating-process parameters. The Monte Carlo results show that the LMCR model performs better than or at least as well as the HS-type model with respect to the estimated structural parameters in most cases, but relatively poorly with respect to the estimated duration-dependence parameters.

20.
Survival data may include two different sources of variation, namely variation over time and variation over units. If both of these variations are present, neglecting one of them can cause serious bias in the estimations. Here we present an approach for discrete duration data that includes both time-varying and unit-specific effects to model these two variations simultaneously. The approach is a combination of a dynamic survival model with dynamic time-varying baseline and covariate effects and a frailty model measuring unobserved heterogeneity with random effects varying independently over units. Estimation is based on posterior modes, i.e., we maximize the joint posterior distribution of the unknown parameters to avoid the numerical integration and simulation techniques that are necessary in a full Bayesian analysis. Estimation of unknown hyperparameters is achieved by an EM-type algorithm. Finally, the proposed method is applied to data from the Veteran's Administration Lung Cancer Trial.
