Similar Articles
Retrieved 20 similar articles (search time: 326 ms)
1.
This paper proposes the use of the Bernstein–Dirichlet process prior for a new nonparametric approach to estimating the link function in the single-index model (SIM). The Bernstein–Dirichlet process prior has so far mainly been used for nonparametric density estimation. Here we modify this approach to allow for an approximation of the unknown link function. Instead of the usual Gaussian distribution, the error term is assumed to follow an asymmetric Laplace distribution, which increases the flexibility and robustness of the SIM. To automatically identify truly active predictors, spike-and-slab priors are used for Bayesian variable selection. Posterior computations are performed via a Metropolis-Hastings-within-Gibbs sampler using a truncation-based algorithm for stick-breaking priors. We compare the efficiency of the proposed approach with well-established techniques in an extensive simulation study and illustrate its practical performance with an application to nonparametric modelling of the power consumption of a sewage treatment plant.
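The truncation-based handling of stick-breaking priors mentioned in this abstract can be illustrated with a generic truncated Dirichlet-process draw. This is only a sketch of the standard construction, not the authors' sampler; the concentration `alpha = 2.0` and truncation level `N = 50` are illustrative choices.

```python
import numpy as np

def stick_breaking_weights(alpha, N, rng):
    """Draw N weights from a truncated stick-breaking representation of
    a Dirichlet process with concentration alpha:
        v_k ~ Beta(1, alpha),  w_k = v_k * prod_{j<k} (1 - v_j),
    with the last stick set to 1 so the truncated weights sum to one."""
    v = rng.beta(1.0, alpha, size=N)
    v[-1] = 1.0  # absorb the leftover mass into the final weight
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

rng = np.random.default_rng(0)
w = stick_breaking_weights(alpha=2.0, N=50, rng=rng)
```

Setting the final stick to one is the usual device that turns the infinite stick-breaking sequence into a proper finite mixture for posterior computation.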

2.
According to the Atlas of Human Development in Brazil, the income dimension of the Municipal Human Development Index (MHDI-I) is an indicator of a municipal population's ability to secure a minimum standard of living that meets basic needs such as water, food and shelter. In public policy, one research objective is to identify social and economic variables associated with this index. Owing to income inequality, evaluating these associations at quantiles, rather than at the mean, can be of greater interest. Thus, in this paper, we develop Bayesian variable selection for quantile regression models with hierarchical random effects. In particular, we assume a likelihood function based on the Generalized Asymmetric Laplace distribution, and a spike-and-slab prior is used to perform variable selection. The Generalized Asymmetric Laplace distribution is a more general alternative to the Asymmetric Laplace distribution, which is the common choice in quantile regression under the Bayesian paradigm. The performance of the proposed method is evaluated via a comprehensive simulation study, and the method is applied to the MHDI-I of municipalities in the state of Rio de Janeiro.
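The connection between the (Generalized) Asymmetric Laplace likelihood and quantile estimation rests on the check (pinball) loss: maximizing an Asymmetric Laplace likelihood in the location parameter is equivalent to minimizing the check loss at quantile level tau. A minimal location-only sketch (intentionally far simpler than the paper's hierarchical model; all names are illustrative):

```python
import numpy as np

def check_loss(u, tau):
    """Pinball (check) loss; minimizing it is equivalent to maximizing
    an Asymmetric Laplace likelihood in the location parameter."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def fit_quantile(y, tau):
    """Location-only quantile fit: choose the candidate intercept that
    minimizes the average check loss. An optimum always lies at a data
    point, so searching the observed values suffices."""
    candidates = np.sort(y)
    losses = [check_loss(y - c, tau).mean() for c in candidates]
    return candidates[int(np.argmin(losses))]

rng = np.random.default_rng(1)
y = rng.normal(size=501)
q50 = fit_quantile(y, 0.5)  # at tau = 0.5 this recovers the sample median
```

With tau = 0.5 the check loss reduces to half the absolute loss, so the fitted value coincides with the sample median; other tau values target other quantiles.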

3.
In this paper, we develop Bayesian methodology and computational algorithms for variable subset selection in Cox proportional hazards models with missing covariate data. A new joint semi-conjugate prior for the piecewise exponential model is proposed in the presence of missing covariates and its properties are examined. The covariates are assumed to be missing at random (MAR). Under this new prior, a version of the Deviance Information Criterion (DIC) is proposed for Bayesian variable subset selection in the presence of missing covariates. Monte Carlo methods are developed for computing the DICs for all possible subset models in the model space. A Bone Marrow Transplant (BMT) dataset is used to illustrate the proposed methodology.

4.
Truncation is a known feature of bone marrow transplant (BMT) registry data, for which the survival time of a leukemia patient is left truncated by the waiting time to transplant. It was recently noted that a longer waiting time was linked to poorer survival. A straightforward solution is a Cox model on the survival time with the waiting time as both truncation variable and covariate. The Cox model should also include other recognized risk factors as covariates. In this article, we focus on estimating the distribution function of waiting time and the probability of selection under the aforementioned Cox model.

5.
In this paper, we propose a hybrid method for estimating the baseline hazard in the Cox proportional hazards model. In the proposed method, the nonparametric Kaplan–Meier estimate of the survival function and the parametric estimate of the logistic function in the Cox proportional hazards model, obtained by the partial likelihood method, are combined to estimate a parametric baseline hazard function. We compare the baseline hazard estimated by the proposed method with that estimated by the Cox model, measuring the performance of each method by the estimated parameters of the baseline distribution as well as the goodness of fit of the model. Both real data and Monte Carlo simulation studies are used for this comparison. The results show that the proposed hybrid method provides a better estimate of the baseline hazard than the Cox model.

6.
Missing covariate values are a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM algorithm. It exploits the fact that the observed hazard function is multiplicative in the baseline hazard function, the idea being to profile this function out before carrying out the estimation of the parameter of interest. In this step one uses a Breslow-type estimator to estimate the cumulative baseline hazard function. We focus on the situation where the observed covariates are categorical, which allows us to calculate estimators without assuming anything about the distribution of the covariates. We show that the proposed estimator is consistent and asymptotically normal, and derive a consistent estimator of the variance–covariance matrix that does not involve any choice of a perturbation parameter. Moderate-sample-size performance of the estimators is investigated via simulation and by application to a real data example.
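The Breslow-type step in the profiling idea above can be sketched with the plain Breslow estimator of the cumulative baseline hazard at a fixed coefficient vector. This is the textbook estimator, not the paper's full procedure for missing covariates; it assumes no tied event times, and the variable names are illustrative.

```python
import numpy as np

def breslow_cumhaz(time, event, X, beta):
    """Breslow estimator of the cumulative baseline hazard.

    At each observed event time t_i the increment is
        1 / sum_{j in risk set at t_i} exp(beta' x_j).
    Assumes no tied event times. Returns the event times and the
    cumulative hazard evaluated at them."""
    order = np.argsort(time)
    time, event, X = time[order], event[order], X[order]
    risk = np.exp(X @ beta)
    # reverse cumulative sum gives the risk-set total at each time
    risk_total = np.cumsum(risk[::-1])[::-1]
    inc = np.where(event == 1, 1.0 / risk_total, 0.0)
    return time[event == 1], np.cumsum(inc)[event == 1]

# toy data: with all-zero covariates, Breslow reduces to Nelson-Aalen
time = np.array([1.0, 2.0, 3.0, 4.0])
event = np.array([1, 1, 0, 1])
X = np.zeros((4, 1))
t_ev, H = breslow_cumhaz(time, event, X, np.array([0.0]))
```

With zero covariates every subject has unit risk, so the increments are 1/4, 1/3 and 1/1 at the three event times, matching the Nelson–Aalen estimator.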

7.
In the causal analysis of survival data, a time-based response is related to a set of explanatory variables. Defining the relation between the time and the covariates may become a difficult task, particularly in the preliminary stage, when information is limited. Through a nonparametric approach, we propose to estimate the survival function in a way that allows us to evaluate the relative importance of each potential explanatory variable in a simple and explanatory fashion. To achieve this aim, each of the explanatory variables is used to partition the observed survival times, and the observations are assumed to be partially exchangeable according to that partition. We then consider, conditionally on each partition, a hierarchical nonparametric Bayesian model on the hazard functions. We define and compare different prior distributions for the hazard functions.

8.
In general, survival data are time-to-event data, such as time to death, time to appearance of a tumor, or time to recurrence of a disease. Models for survival data have frequently been based on the proportional hazards model proposed by Cox, which is applied intensively in the social, medical, behavioral and public health sciences. In this paper we propose a more efficient sampling method for recruiting subjects for survival analysis: a Moving Extreme Ranked Set Sampling (MERSS) scheme with ranking based on an easy-to-evaluate baseline auxiliary variable known to be associated with survival time. We demonstrate that this approach provides a more powerful testing procedure, as well as a more efficient estimate of the hazard ratio, than that based on simple random sampling (SRS). Theoretical derivations and simulation studies are provided. The Iowa 65+ Rural study data are used to illustrate the methods developed in this paper.
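One common variant of a MERSS cycle (the "maxima" scheme) can be sketched as follows. This is a generic illustration, not the authors' design: sets of increasing size are drawn, each set is ranked by a cheap auxiliary variable, and only the top-ranked unit in each set is measured on the costly response. The correlation `rho` and all names are illustrative assumptions.

```python
import numpy as np

def merss_cycle(m, rng, rho=0.9):
    """One MERSS cycle (maxima variant): for i = 1..m draw a set of i
    units, rank them by an easy-to-measure auxiliary variable that is
    correlated with the response, and measure the response only on the
    unit ranked largest."""
    sample = []
    for i in range(1, m + 1):
        aux = rng.normal(size=i)  # cheap ranking variable
        # costly response, correlated with the auxiliary variable
        y = rho * aux + np.sqrt(1.0 - rho**2) * rng.normal(size=i)
        sample.append(y[np.argmax(aux)])  # measure only the top-ranked unit
    return np.array(sample)

rng = np.random.default_rng(2)
s = merss_cycle(m=5, rng=rng)
```

The efficiency gain over SRS comes from measuring units that are informative extremes of their sets while paying the measurement cost only m times per cycle.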

9.
Due to computational challenges and the non-availability of conjugate prior distributions, Bayesian variable selection in quantile regression models is often a difficult task. In this paper, we address these two issues for quantile regression models. In particular, we develop an informative stochastic search variable selection (ISSVS) method for quantile regression models that introduces an informative prior distribution. We adopt prior structures which incorporate historical data into the current data by quantifying them with a suitable prior distribution on the model parameters. This allows ISSVS to search the model space more efficiently and choose the more likely models. In addition, a Gibbs sampler is derived to facilitate the computation of the posterior probabilities. A major advantage of ISSVS is that it avoids instability in the posterior estimates from the Gibbs sampler, as well as convergence problems that may arise from choosing vague priors. Finally, the proposed methods are illustrated with both simulated and real data.

10.
The random censorship model (RCM) is commonly used in biomedical science for modeling life distributions. The popular non-parametric Kaplan–Meier estimator and some semiparametric models, such as Cox proportional hazards models, are extensively discussed in the literature. In this paper, we propose to fit the RCM under the assumption that the actual life distribution and the censoring distribution have a proportional odds relationship. The parametric model is defined using Marshall–Olkin's extended Weibull distribution. We use the maximum-likelihood procedure to estimate the model parameters, the survival distribution, the mean residual life function, and the hazard rate. The proportional odds assumption is also justified by a newly proposed bootstrap Kolmogorov–Smirnov-type goodness-of-fit test. A simulation study of the MLE of the model parameters and the median survival time is carried out to assess the finite-sample performance of the model. Finally, we apply the proposed model to two real-life data sets.
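The Marshall–Olkin extension used here tilts a baseline survival function by an extra parameter. A minimal sketch of the extended Weibull survival function follows; the parameter values are illustrative, and this is only the survival function, not the paper's full MLE and goodness-of-fit machinery.

```python
import numpy as np

def mo_weibull_sf(t, alpha, shape, scale):
    """Marshall-Olkin extended Weibull survival function:
        S(t) = alpha * S_W(t) / (1 - (1 - alpha) * S_W(t)),
    where S_W(t) = exp(-(t/scale)**shape) is a Weibull survival
    function and alpha > 0 is the Marshall-Olkin tilt parameter.
    alpha = 1 recovers the plain Weibull."""
    sw = np.exp(-(t / scale) ** shape)
    return alpha * sw / (1.0 - (1.0 - alpha) * sw)

t = np.linspace(0.0, 5.0, 101)
s = mo_weibull_sf(t, alpha=2.0, shape=1.5, scale=1.0)
```

Since S(0) = alpha / (1 - (1 - alpha)) = 1 for any alpha > 0, the tilt changes the shape of the survival curve without breaking its endpoint behaviour, which is what makes the family convenient for the proportional odds setup.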

11.
The authors consider the problem of Bayesian variable selection for proportional hazards regression models with right censored data. They propose a semi-parametric approach in which a nonparametric prior is specified for the baseline hazard rate and a fully parametric prior is specified for the regression coefficients. For the baseline hazard, they use a discrete gamma process prior, and for the regression coefficients and the model space, they propose a semi-automatic parametric informative prior specification that focuses on the observables rather than the parameters. To implement the methodology, they propose a Markov chain Monte Carlo method to compute the posterior model probabilities. Examples using simulated and real data are given to demonstrate the methodology.

12.
In this paper, we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises in a latent complementary risk scenario. The properties of the proposed distribution are discussed, including a formal derivation of its density function and explicit algebraic formulae for its quantiles and its survival and hazard functions. We also discuss inference for the proposed model via Bayesian methods using Markov chain Monte Carlo simulation. A simulation study investigates the frequentist properties of the proposed estimators obtained under non-informative priors. Further, some discussion of model selection criteria is given. The developed methodology is illustrated on a real data set.

13.
In some clinical, environmental, or economic studies, researchers are interested in a semi-continuous outcome variable that takes the value zero with a discrete probability and has a continuous distribution for the non-zero values. Due to the measuring mechanism, it is not always possible to fully observe some outcomes, and only an upper bound is recorded. We call these data left-censored and observe only the maximum of the outcome and an independent censoring variable, together with an indicator. In this article, we introduce a mixture semi-parametric regression model. We consider a parametric model to investigate the influence of covariates on the discrete probability of the value zero, while for the non-zero part of the outcome a semi-parametric Cox regression model is used to study the conditional hazard function. The different parameters in this mixture model are estimated using a likelihood method, whereby the infinite-dimensional baseline hazard function is estimated by a step function. We show the identifiability and the consistency of the estimators of the different parameters in the model, study the finite-sample behaviour of the estimators through a simulation study, and illustrate the model on a practical data example.

14.
We introduce a general class of semiparametric hazard regression models, called extended hazard (EH) models, that are designed to accommodate various survival schemes with time-dependent covariates. The EH model contains both the Cox model and the accelerated failure time (AFT) model as subclasses, so this nested structure can be used to perform model selection between the Cox model and the AFT model. A class of estimating equations using counting process and martingale techniques is developed to estimate the regression parameters of the proposed model. The performance of the estimating procedure and the impact of model misspecification are assessed through simulation studies. Two data examples, the Stanford heart transplant data and the Mediterranean fruit fly egg-laying data, are used to demonstrate the usefulness of the EH model.

15.

Time-to-event data often violate the proportional hazards assumption inherent in the popular Cox regression model. Such violations are especially common in biological and medical data, where latent heterogeneity due to unmeasured covariates or time-varying effects is common. A variety of parametric survival models have been proposed in the literature which make more appropriate assumptions on the hazard function, at least for certain applications. One such model is derived from the First Hitting Time (FHT) paradigm, which assumes that a subject's event time is determined by a latent stochastic process reaching a threshold value. Several random effects specifications of the FHT model have also been proposed which allow for better modeling of data with unmeasured covariates. While often appropriate, these methods can display limited flexibility due to their inability to model a wide range of heterogeneities. To address this issue, we propose a Bayesian model which loosens the assumptions on the mixing distribution inherent in the random effects FHT models currently in use. We demonstrate via a simulation study that the proposed model greatly improves both survival and parameter estimation in the presence of latent heterogeneity. We also apply the proposed methodology to data from a toxicology/carcinogenicity study which exhibits nonproportional hazards, and contrast the results with both the Cox model and two popular FHT models.


16.
Research on methods for studying time-to-event data (survival analysis) has been extensive in recent years. The basic model in use today represents the hazard function for an individual through a proportional hazards model (Cox, 1972). Typically, it is assumed that a covariate's effect on the hazard function is constant throughout the course of the study. In this paper we propose a method to allow for possible deviations from the standard Cox model, by allowing the effect of a covariate to vary over time. This method is based on a dynamic linear model. We present our method in terms of a Bayesian hierarchical model. We fit the model to the data using Markov chain Monte Carlo methods. Finally, we illustrate the approach with several examples. This revised version was published online in July 2006 with corrections to the Cover Date.

17.
The aim of this study is to apply the Bayesian method of identifying optimal experimental designs to a toxicokinetic-toxicodynamic model that describes the response of aquatic organisms to time-dependent concentrations of toxicants. As experimental designs, we restrict ourselves to pulses and constant concentrations. A design of an experiment is called optimal within this set of designs if it maximizes the expected gain of knowledge about the parameters. The focus is on parameters that are associated with the auxiliary damage variable of the model, which can only be inferred indirectly from survival time series data. Gain of knowledge through an experiment is quantified both with the ratio of posterior to prior variances of individual parameters and with the entropy of the posterior distribution relative to the prior on the whole parameter space. The numerical methods developed to calculate the expected gain of knowledge are expected to be useful beyond this case study, in particular for multinomially distributed data such as survival time series data.

18.
Bayesian variable selection is often difficult to carry out because of three challenges: specifying prior distributions for the regression parameters of all possible models, specifying a prior distribution on the model space, and computation. We address these three issues for the logistic regression model. For the first, we propose an informative prior distribution for variable selection; several theoretical and computational properties of the prior are derived and illustrated with several examples. For the second, we propose a method for specifying an informative prior on the model space, and for the third we propose novel methods for computing the marginal distribution of the data. The new computational algorithms require only Gibbs samples from the full model to facilitate the computation of the prior and posterior model probabilities for all possible models. Several properties of the algorithms are also derived. The prior specification for the first challenge focuses on the observables, in that the elicitation is based on a prior prediction y0 for the response vector and a quantity a0 quantifying the uncertainty in y0; y0 and a0 are then used to specify a prior for the regression coefficients semi-automatically. Examples using real data are given to demonstrate the methodology.

19.
In this article, we develop a model to study treatment, period, carryover, and other applicable effects in a crossover design with a time-to-event response variable. Because time-to-event outcomes on different treatment regimens within the crossover design are correlated for an individual, we adopt a proportional hazards frailty model. If the frailty is assumed to have a gamma distribution, and the hazard rates are piecewise constant, then the likelihood function can be determined via closed-form expressions. We illustrate the methodology via an application to a data set from an asthma clinical trial and run simulations that investigate sensitivity of the model to data generated from different distributions.
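The closed form that makes the gamma-frailty likelihood tractable can be sketched through the marginal survival function: with a gamma frailty of mean 1 and variance theta, S(t) = (1 + theta * H(t))^(-1/theta), where H is the cumulative piecewise-constant hazard. This is the standard gamma-frailty result, not the paper's full crossover likelihood; the cut points and rates below are illustrative.

```python
import numpy as np

def marginal_sf(t, theta, cuts, rates):
    """Marginal survival under a gamma frailty (mean 1, variance theta)
    with a piecewise-constant baseline hazard:
        S(t) = (1 + theta * H(t)) ** (-1 / theta),
    where H(t) accumulates rates[k] over the interval between
    consecutive edges 0 = e_0 < e_1 < ... (last interval open-ended)."""
    edges = [0.0] + list(cuts) + [np.inf]
    H = sum(r * max(0.0, min(t, hi) - lo)
            for r, lo, hi in zip(rates, edges[:-1], edges[1:]))
    return (1.0 + theta * H) ** (-1.0 / theta)

# illustrative: rate 0.5 on [0, 1), rate 1.0 afterwards, frailty variance 1
s2 = marginal_sf(2.0, theta=1.0, cuts=[1.0], rates=[0.5, 1.0])
```

As theta tends to zero the expression converges to exp(-H(t)), i.e. the no-frailty model, which is a convenient sanity check.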

20.
The aim of this paper is to develop a Bayesian local influence method (Zhu et al. 2009, submitted) for assessing minor perturbations to the prior, the sampling distribution, and individual observations in survival analysis. We introduce a perturbation model to characterize simultaneous (or individual) perturbations to the data, the prior distribution, and the sampling distribution. We construct a Bayesian perturbation manifold for the perturbation model and calculate its associated geometric quantities, including the metric tensor, to characterize the intrinsic structure of the perturbation model (or perturbation scheme). We develop local influence measures based on several objective functions to quantify the degree of various perturbations to statistical models. We carry out several simulation studies and analyze two real data sets to illustrate our Bayesian local influence method in detecting influential observations and in characterizing sensitivity to the prior distribution and the hazard function.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)