Similar Documents
20 similar documents found (search time: 31 ms)
1.
Shi, Yushu; Laud, Purushottam; Neuner, Joan. Lifetime Data Analysis, 2021, 27(1): 156-176.

In this paper, we first propose a dependent Dirichlet process (DDP) model using a mixture of Weibull models with each mixture component resembling a Cox model for survival data. We then build a Dirichlet process mixture model for competing risks data without regression covariates. Next we extend this model to a DDP model for competing risks regression data by using a multiplicative covariate effect on subdistribution hazards in the mixture components. Though built on proportional hazards (or subdistribution hazards) models, the proposed nonparametric Bayesian regression models do not require the assumption of constant hazard (or subdistribution hazard) ratio. An external time-dependent covariate is also considered in the survival model. After describing the model, we discuss how both cause-specific and subdistribution hazard ratios can be estimated from the same nonparametric Bayesian model for competing risks regression. For use with the regression models proposed, we introduce an omnibus prior that is suitable when little external information is available about covariate effects. Finally we compare the models’ performance with existing methods through simulations. We also illustrate the proposed competing risks regression model with data from a breast cancer study. An R package “DPWeibull” implementing all of the proposed methods is available at CRAN.
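As a rough illustration (generic notation, not the paper's), a single Weibull mixture component with a multiplicative covariate effect on the hazard has the Cox-like form

\[ h(t \mid x) = \frac{\alpha}{\lambda}\left(\frac{t}{\lambda}\right)^{\alpha-1} \exp(x^{\top}\beta), \]

where \alpha and \lambda are the component's Weibull shape and scale parameters. Within a component the hazard ratio between two covariate values is constant, but averaging over mixture components lets the overall hazard (or subdistribution hazard) ratio change with time, which is why the mixture does not require a constant-ratio assumption.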


2.
GARCH models capture most of the stylized facts of financial time series and have been widely used to analyse discrete-time financial series. In recent years, continuous-time models built on discrete GARCH models have also been proposed to deal with non-equally spaced observations, such as the COGARCH model based on Lévy processes. In this paper, we propose using the data cloning methodology to obtain estimators of GARCH and COGARCH model parameters. Data cloning uses a Bayesian approach to obtain approximate maximum likelihood estimators, avoiding direct numerical maximization of the pseudo-likelihood function. After a simulation study for both GARCH and COGARCH models using data cloning, we apply the technique to model the behaviour of several NASDAQ time series.
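For reference, the discrete-time GARCH(1,1) recursion that the COGARCH construction extends to continuous time is, in standard notation,

\[ y_t = \sigma_t \varepsilon_t, \qquad \sigma_t^2 = \omega + \alpha\, y_{t-1}^2 + \beta\, \sigma_{t-1}^2, \qquad \varepsilon_t \overset{\text{i.i.d.}}{\sim} (0,1). \]

Data cloning runs a standard Bayesian sampler on K copies of the data set; as K grows, the posterior mean converges to the maximum likelihood estimate and K times the posterior covariance approximates its asymptotic covariance, so no direct numerical maximization is needed.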

3.
Private and common values (CVs) are the two main competing valuation models in auction theory and empirical work. In the framework of second-price auctions, we compare the empirical performance of the independent private value (IPV) model to the CV model on a number of dimensions, both on real data from eBay coin auctions and on simulated data. Both models fit the eBay data well, with a slight edge for the CV model; however, the differences in fit seem to depend to some extent on the complexity of the models. According to the log predictive score, the IPV model predicts auction prices slightly better in most auctions, while the more robust CV model is much better at predicting prices in more unusual auctions. In terms of posterior odds, the CV model is clearly more supported by the eBay data.
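The two comparison criteria mentioned have the following generic forms (sketched here; the paper's exact predictive scheme is not reproduced):

\[ \frac{p(M_{CV}\mid y)}{p(M_{IPV}\mid y)} = \frac{p(y\mid M_{CV})}{p(y\mid M_{IPV})} \cdot \frac{p(M_{CV})}{p(M_{IPV})}, \qquad \mathrm{LPS}(M) = -\sum_{i} \log p(\tilde{y}_i \mid M, \text{training data}), \]

where the posterior odds weigh overall fit through the marginal likelihoods and the log predictive score evaluates how well each model predicts held-out auction prices \tilde{y}_i.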

4.
In this paper, we study the properties of a special class of frailty models in which the frailty is common to several failure times. These models are closely linked to Archimedean copula models. We establish a useful formula for cumulative baseline hazard functions and develop a new estimator of the cumulative baseline hazard in bivariate frailty regression models. Based on the proposed estimator, we present a graphical model-checking procedure. We fit the model to a leukemia data set and close with a discussion.
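To sketch the copula link in generic notation: if two failure times share a frailty Z with Laplace transform \phi(s) = E[e^{-sZ}] and are conditionally independent given Z, their joint survival function has the Archimedean form

\[ S(t_1, t_2) = \phi\bigl( \phi^{-1}\{S_1(t_1)\} + \phi^{-1}\{S_2(t_2)\} \bigr), \]

where S_1 and S_2 are the marginal survival functions. This is the standard shared-frailty route to Archimedean copulas; the paper's specific formula for cumulative baseline hazards is not reproduced here.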

5.
Gaussian mixture model-based clustering is now a standard tool for uncovering a hypothetical underlying structure in continuous data. However, many of the usual parsimonious models, despite their appealing geometrical interpretation or their ability to handle high-dimensional data, suffer from major drawbacks: they are scale-dependent, or their constraints are not preserved after projection. In this work we present a new family of parsimonious Gaussian models based on a variance-correlation decomposition of the covariance matrices. These new models are stable when projected onto the canonical planes and are therefore faithfully representable in low dimension. They are also stable under changes of the measurement units of the data, and such a change does not affect model selection based on likelihood criteria. We highlight these stability properties with a specific graphical representation of each model. A detailed generalized EM (GEM) algorithm is provided for inference in every model. Finally, on biological and geological data, we compare our stable models to standard ones (geometrical models and factor analyzer models), which underlines the benefit of unit-free models.
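In generic notation, the variance-correlation decomposition of the k-th component covariance matrix is

\[ \Sigma_k = T_k R_k T_k, \qquad T_k = \operatorname{diag}(\sigma_{k1}, \dots, \sigma_{kd}), \]

where R_k is the component's correlation matrix and the \sigma_{kj} are its standard deviations; parsimonious submodels arise by constraining the T_k and/or the R_k across components (a sketch of the general idea, not the paper's exact family).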

6.
The joint modeling of longitudinal and survival data has received extraordinary attention in the statistics literature recently, with models and methods becoming increasingly complex. Most approaches pair a proportional hazards survival model with parametric or nonparametric longitudinal trajectory modeling. In this paper we closely examine a data set previously analyzed using a two-parameter parametric model for Mediterranean fruit fly (medfly) egg-laying trajectories paired with accelerated failure time and proportional hazards survival models. We consider parametric and nonparametric versions of these two models, as well as a proportional odds rate model, paired with a wide variety of longitudinal trajectory assumptions reflecting the types of analyses seen in the literature. In addition to developing novel nonparametric Bayesian methods for joint models, we emphasize the importance of model selection from among joint and non-joint models; the default in the literature is to omit non-joint models from consideration at the outset. For the medfly data, a predictive diagnostic criterion suggests that both the choice of survival model and the longitudinal assumptions can grossly affect model adequacy and prediction. Specifically, the simple joint model used by Tseng et al. (Biometrika 92:587–603, 2005) and models with much more flexibility in their longitudinal components are predictively outperformed by simpler analyses. This case study underscores the need for data analysts to compare different joint models on the basis of predictive performance and to include non-joint models in the pool of candidates under consideration.
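For orientation, a common shared-parameter joint model, not necessarily the specification used for the medfly data, links the hazard to the current value of the longitudinal trajectory m_i(t):

\[ h_i(t) = h_0(t) \exp\{ x_i^{\top}\beta + \gamma\, m_i(t) \}, \]

with m_i(t) modelled parametrically or nonparametrically; a non-joint analysis instead models the trajectory and the survival outcome separately, which is the kind of simpler candidate the predictive comparison above argues should not be excluded.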

7.
When the results of biological experiments are tested for a possible difference between treatment and control groups, the inference is only valid if it is based on a model that fits the experimental results satisfactorily. In dominant-lethal testing, foetal death has previously been assumed to follow a variety of models, including Poisson, binomial, beta-binomial and various mixture models; however, discriminating between models has always been a particularly difficult problem. In this paper, we consider data from six separate dominant-lethal assay experiments and discriminate between the competing models that could be used to describe them. We adopt a Bayesian approach and illustrate how a variety of different models may be considered, using Markov chain Monte Carlo (MCMC) simulation techniques and comparing the results with the corresponding maximum likelihood analyses. We present an auxiliary variable method for determining the probability that any particular data cell is assigned to a given component in a mixture, and we illustrate the value of this approach. Finally, we show how the Bayesian approach provides a natural and unique perspective on the model selection problem via reversible jump MCMC, and we illustrate how the probability associated with each of the different models may be calculated for each data set. In terms of estimation, we show how, by averaging over the different models, we obtain reliable and robust inference for any statistic of interest.
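The auxiliary-variable step is, in generic form, the usual probabilistic allocation of an observation to a mixture component:

\[ \Pr(z_i = k \mid y_i, \theta) = \frac{\pi_k f_k(y_i \mid \theta_k)}{\sum_j \pi_j f_j(y_i \mid \theta_j)}, \]

where the \pi_k are mixture weights and the f_k are component densities (standard notation, not the paper's).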

8.
Two types of state-switching models for U.S. real output have been proposed: models that switch randomly between states and models that switch states deterministically, as in the threshold autoregressive model of Potter. These models have been justified primarily by how well they fit the sample data, yielding statistically significant estimates of the model coefficients. Here we propose a new approach to evaluating an estimated nonlinear time series model that complements existing methods based on in-sample fit or out-of-sample forecasting. In this approach, a battery of distinct nonlinearity tests is applied to the sample data, yielding a set of p-values for rejecting the null hypothesis of a linear generating mechanism. This set of p-values is taken to be a "stylized fact" characterizing the nonlinear serial dependence in the generating mechanism of the time series. The effectiveness of an estimated nonlinear model for the series is then evaluated by the congruence between this stylized fact and the set of nonlinearity test results obtained from data simulated using the estimated model. In particular, we derive a portmanteau statistic based on this set of nonlinearity test p-values that allows us to test whether a given model adequately captures the nonlinear serial dependence in the sample data. We apply the method to several estimated state-switching models of U.S. real output.
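A minimal Python sketch of the comparison idea, assuming the user supplies the battery of nonlinearity tests (each returning a p-value) and a simulator for the estimated model; both hooks are hypothetical, and the paper's portmanteau statistic is not reproduced here.

import numpy as np

def p_value_profile(series, tests):
    """Apply each nonlinearity test to the series; every test maps a series to a p-value."""
    return {name: test(series) for name, test in tests.items()}

def compare_with_model(sample, simulate, tests, n_sim=200):
    """Compare the sample's p-value profile with profiles from model-simulated series.

    simulate(n) is a hypothetical user-supplied hook returning a series of length n
    drawn from the estimated model; tests maps names to p-value functions.
    """
    observed = p_value_profile(sample, tests)
    simulated = {name: [] for name in tests}
    for _ in range(n_sim):
        series = simulate(len(sample))
        for name, p in p_value_profile(series, tests).items():
            simulated[name].append(p)
    # For each test, report how often the simulated series give a p-value at least
    # as small as the one observed on the sample: a crude congruence summary, not
    # the portmanteau statistic derived in the paper.
    return {name: float(np.mean(np.asarray(ps) <= observed[name]))
            for name, ps in simulated.items()}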

9.
Interval-censored survival data arise frequently when the event of interest is not observed exactly but is only known to occur within some time interval. In this paper, we propose a location-scale regression model based on the log-generalized gamma distribution for modelling interval-censored data; we are concerned only with parametric forms. The proposed model represents a parametric family that includes, as special submodels, other regression models widely used in lifetime data analysis. Assuming interval-censored data, we consider a frequentist analysis, a jackknife estimator and a non-parametric bootstrap for the model parameters. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and present techniques for assessing global influence.
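In a generic log-location-scale parametrization (not the paper's exact notation), the log lifetime is modelled as \log T = x^{\top}\beta + \sigma W, with W following the log-generalized gamma error distribution, and an observation known only to fall in the interval (l_i, r_i] contributes

\[ L_i(\theta) = S(l_i \mid x_i) - S(r_i \mid x_i) \]

to the likelihood, where S is the implied survival function and right censoring corresponds to r_i = \infty.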

10.
Zero-inflated Poisson regression is commonly used to analyze data with excessive zeros. Although many models have been developed for zero-inflated data, most depend strongly on special features of the individual data set; for example, new models are needed when the data are both truncated and inflated. In this paper, we propose a model that is flexible enough to handle inflation and truncation simultaneously: a mixture of a multinomial logistic and a truncated Poisson regression, in which the multinomial logistic component models the occurrence of the inflated counts and the truncated Poisson component models the remaining counts, which are assumed to follow a truncated Poisson distribution. The performance of the proposed model is evaluated through simulation studies, in which it attains the smallest mean absolute error and the best model fit. In the empirical example, the data are truncated with inflated values at zero and fourteen, and the results show that our model fits better than the competing models.
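A hedged sketch of the general form of such a model (generic notation): with inflation points c_1, \dots, c_m (here zero and fourteen) and a Poisson distribution truncated to the observed support,

\[ \Pr(Y_i = y) = \sum_{j=1}^{m} \pi_{ij}\, \mathbf{1}\{y = c_j\} + \Bigl(1 - \sum_{j=1}^{m} \pi_{ij}\Bigr) f_{TP}(y \mid \lambda_i), \]

where the \pi_{ij} follow a multinomial logistic model in the covariates, f_{TP} is the truncated Poisson probability mass function and \log \lambda_i is a linear predictor; this is a sketch of the model class, not the paper's exact parametrization.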

11.
SOME MODELS FOR OVERDISPERSED BINOMIAL DATA
Various models are currently used for overdispersed binomial data, and it is not always clear which is appropriate for a given situation. Here we examine the assumptions behind, and discuss the problems and pitfalls of, some of these models. We focus on clustered data with one level of nesting, briefly touching on more complex strata and longitudinal data. The estimation procedures are illustrated and some critical comments are made about the various models. We indicate which models are restrictive, which can be extended to handle more complex situations, and how. Some inadequacies in testing procedures are also noted. Recommendations are made as to which models should be used, and when.
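As a concrete example of the overdispersion these models address, the beta-binomial model inflates the binomial variance through an intra-cluster correlation \rho:

\[ \operatorname{Var}(Y) = n\pi(1-\pi)\bigl[1 + (n-1)\rho\bigr], \]

compared with n\pi(1-\pi) under the ordinary binomial model; \rho = 0 recovers the binomial case.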

12.
In recent years, zero-inflated count data models, such as the zero-inflated Poisson (ZIP) model, have been widely used, since count data with extra zeros are common in many practical problems. To model correlated count data that are clustered or repeated, and to assess the effects of continuous covariates or of time scales in a flexible way, we consider a class of semiparametric mixed-effects models for zero-inflated count data. In this article, we propose a fully Bayesian inference for such models based on a data augmentation scheme that accommodates both the random effects and the zero-inflated mixture structure. A computationally efficient MCMC method combining the Gibbs sampler and the Metropolis-Hastings (M-H) algorithm is implemented to obtain estimates of the model parameters. Finally, a simulation study and a real example illustrate the proposed methodology.
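For reference, the zero-inflated Poisson probability mass function in generic notation is

\[ \Pr(Y_{ij} = 0) = p_{ij} + (1 - p_{ij}) e^{-\lambda_{ij}}, \qquad \Pr(Y_{ij} = y) = (1 - p_{ij}) \frac{e^{-\lambda_{ij}} \lambda_{ij}^{y}}{y!}, \quad y \ge 1, \]

where, in a semiparametric mixed-effects version, \operatorname{logit}(p_{ij}) and \log(\lambda_{ij}) would each combine parametric covariate effects, smooth functions of time and random effects (a sketch of the model class, not the paper's exact specification).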

13.
We use the local influence approach to develop influence measures for identifying observations that have a disproportionate effect on the maximum likelihood estimates of parameters in models for lifetime data. The proposed method for constructing influence measures can be applied to a wide variety of models; we use the exponential model to illustrate the details. In particular, we show that the proposed measure is equivalent to the martingale residual under the exponential model.
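To make the stated equivalence concrete in generic notation: under the exponential model the cumulative hazard is \Lambda(t) = \lambda t, so the martingale residual for subject i with observed time t_i and event indicator \delta_i is

\[ \hat{M}_i = \delta_i - \hat{\Lambda}(t_i) = \delta_i - \hat{\lambda}\, t_i \]

(or \delta_i - \hat{\lambda} e^{x_i^{\top}\hat{\beta}} t_i with covariates).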

14.
Survival models have been used extensively to analyse time-until-event data, and a range of extended models incorporate features such as overdispersion/frailty, mixtures, and flexible response functions through semi-parametric specifications. In this work, we show how a useful goodness-of-fit tool, the half-normal plot of residuals with a simulated envelope, implemented in the hnp package in R, can be used in a location-scale modelling context. We fitted a range of survival models to time-until-event data in which the event was an insect predator attacking a larva in a biological control experiment. We started with the Weibull model and then fitted the exponentiated-Weibull location-scale model with regressors for both the location and scale parameters. We performed variable selection for each model and, by producing half-normal plots with simulated envelopes for the deviance residuals of the fitted models, found that the exponentiated-Weibull model fitted the data better. We then included a random effect in the exponentiated-Weibull model to accommodate correlated observations. Finally, we discuss possible implications of the results of the case study.
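A minimal Python sketch of a half-normal plot with a simulated envelope; this is not the hnp package, and simulate_abs_resid is a hypothetical user-supplied hook that simulates a response from the fitted model, refits, and returns the absolute residuals.

import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

def half_normal_plot(abs_resid, simulate_abs_resid, n_sim=99, level=0.95):
    """Half-normal plot of sorted |residuals| with a simulated envelope."""
    n = len(abs_resid)
    obs = np.sort(np.asarray(abs_resid))
    # a common choice of half-normal plotting positions
    i = np.arange(1, n + 1)
    q = norm.ppf((i + n - 0.125) / (2 * n + 0.5))
    # envelope: repeatedly simulate from the fitted model and collect sorted |residuals|
    sims = np.array([np.sort(simulate_abs_resid()) for _ in range(n_sim)])
    lo = np.quantile(sims, (1 - level) / 2, axis=0)
    hi = np.quantile(sims, 1 - (1 - level) / 2, axis=0)
    plt.plot(q, lo, "--", color="grey")
    plt.plot(q, hi, "--", color="grey")
    plt.plot(q, obs, "o", color="black", label="observed")
    plt.xlabel("half-normal quantiles")
    plt.ylabel("sorted |residuals|")
    plt.legend()
    plt.show()

Observed points wandering outside the envelope indicate lack of fit, which is how competing fits such as the Weibull and exponentiated-Weibull models can be compared visually.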

15.
Because of computational challenges and the lack of conjugate prior distributions, Bayesian variable selection in quantile regression models is often a difficult task. In this paper, we address both issues by developing an informative stochastic search variable selection (ISSVS) procedure for quantile regression models that introduces an informative prior distribution. We adopt prior structures that incorporate historical data into the current analysis by quantifying them through a suitable prior distribution on the model parameters. This allows ISSVS to search the model space more efficiently and to identify the more likely models. A Gibbs sampler is derived to facilitate computation of the posterior probabilities. A major advantage of ISSVS is that it avoids instability in the posterior estimates from the Gibbs sampler, as well as convergence problems that may arise from choosing vague priors. Finally, the proposed methods are illustrated with both simulated and real data.
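For orientation, stochastic search variable selection typically places a two-component spike-and-slab prior on each coefficient; in George-and-McCulloch-style notation,

\[ \beta_j \mid \gamma_j \sim (1-\gamma_j)\, N(0, \tau_j^2) + \gamma_j\, N(0, c_j^2 \tau_j^2), \qquad \gamma_j \sim \operatorname{Bernoulli}(p_j), \]

with the Bayesian quantile regression likelihood commonly built from the asymmetric Laplace density; the informative element of ISSVS lies in how the hyperparameters are set using the historical data (a sketch, not the paper's exact prior).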

16.
Multivariate Poisson regression with covariance structure
In recent years the applications of multivariate Poisson models have increased, mainly because of the steady growth in computing power. The multivariate Poisson model used in practice is based on a common covariance term for all pairs of variables; this is rather restrictive and does not allow the covariance structure of the data to be modelled flexibly. In this paper we propose inference for a multivariate Poisson model with a richer structure, i.e. a different covariance term for each pair of variables. Both maximum likelihood and Bayesian estimation methods are proposed, each based on a data augmentation scheme that reflects the multivariate-reduction derivation of the joint probability function. To broaden the applicability of the model we allow for covariates in the specification of both the mean and the covariance parameters. Extension to models with a complete structure containing many multi-way covariance terms is discussed. The method is demonstrated by analyzing a real-life data set.
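The multivariate-reduction construction can be sketched, for the bivariate case, as

\[ Y_1 = X_1 + X_{12}, \qquad Y_2 = X_2 + X_{12}, \]

with X_1 \sim \text{Poisson}(\theta_1), X_2 \sim \text{Poisson}(\theta_2) and X_{12} \sim \text{Poisson}(\theta_{12}) independent, so that \operatorname{Cov}(Y_1, Y_2) = \theta_{12}. The richer model gives every pair of variables its own common shock \theta_{jk}, the complete structure adds higher-order multi-way terms, and covariates may enter both the mean and the covariance parameters (generic notation).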

17.
Additive varying coefficient models are a natural extension of multiple linear regression models that allow the regression coefficients to be functions of other variables, making them flexible enough to capture more complex dependencies in the data. In this paper we consider the problem of automatically selecting the significant variables among a large set of variables when the interest is in a given response variable. Several grouped regularization methods have been proposed in recent years, and we present them under one unified framework in the varying coefficient model context. For each of the grouped regularization methods discussed, we investigate the optimization problem to be solved, possible algorithms for solving it, and the variable selection and estimation consistency of the method. We investigate the finite-sample performance of these methods in a comparative study and illustrate them on real data examples.
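In generic notation, an additive varying coefficient model and the basis expansion on which the grouped penalties operate are

\[ Y = \beta_0(Z) + \sum_{j=1}^{p} \beta_j(Z)\, X_j + \varepsilon, \qquad \beta_j(z) = \sum_{k=1}^{K} b_{jk} B_k(z), \]

and a group-lasso-type criterion adds a blockwise penalty such as \lambda \sum_j \|b_j\|_2 to the least-squares objective, so that an entire coefficient function \beta_j(\cdot), and hence the variable X_j, can be dropped at once (a sketch of one of the several penalties covered).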

18.
The implementation of the Bayesian paradigm for model comparison can be problematic. In particular, prior distributions on the parameter space of each candidate model require special care. While it is well known that improper priors cannot be used routinely for Bayesian model comparison, we argue that the use of proper conventional priors under each model should also be regarded with suspicion, especially when comparing models of different dimensions. The basic idea is that priors should not be assigned separately under each model; rather, they should be related across models in order to achieve some degree of compatibility and thus allow fairer and more robust comparisons. In this connection, the intrinsic prior and the expected posterior prior (EPP) methodology represent useful tools. In this paper we develop an EPP-based procedure for Bayesian model comparison of discrete undirected decomposable graphical models, although our method could be adapted to directed acyclic graph models as well. We present two possible approaches: one based on imaginary data, and one that makes use of a limited number of actual observations. The methodology is illustrated through the analysis of a 2×3×4 contingency table.
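In generic notation, the expected posterior prior for the parameter \theta_k of model M_k is built from a default (possibly improper) prior \pi_k^N and a common distribution m^* over imaginary training samples y^*:

\[ \pi_k^{EPP}(\theta_k) = \int \pi_k^N(\theta_k \mid y^*)\, m^*(y^*)\, dy^*, \]

so the priors under different models are tied together through the same m^*, which is the source of the cross-model compatibility described above (the paper's specific choices of m^*, whether based on imaginary or actual data, are not reproduced here).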

19.
We consider fitting Emax models to the primary endpoint of a parallel-group dose-response clinical trial. Such models can be difficult to fit by maximum likelihood when the data give little information about the maximum possible response. Consequently, we consider alternative models that arise as limiting cases of the Emax model and can usually be fitted. We also propose two model selection procedures for choosing between the different models and compare them with two procedures that have been used previously. In a simulation study we find that the best-performing model selection procedure depends on the underlying true situation; one of the new procedures may be regarded as the most robust.
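For reference, the usual (hyperbolic) Emax dose-response model is

\[ E[Y \mid d] = E_0 + \frac{E_{\max}\, d}{ED_{50} + d}, \]

so when the observed doses lie well below ED_{50} the data say little about E_{\max}; in that regime the curve behaves approximately like a linear function of dose, which is one example of a limiting-case model of the kind referred to above.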

20.
In this study, Bayesian hierarchical models are evaluated in simulation scenarios that compare single-stage and multi-stage Bayesian estimation. Simulated data sets of lung cancer disease counts for men aged 65 and older across 44 wards in the London Health Authority were analysed using a range of spatially structured random effect components. The goals of this study are to determine which of the single-stage models performs best given a certain simulating model, how the estimation methods (single- vs. multi-stage) compare in yielding posterior estimates of fixed effects in the presence of spatially structured random effects, and which of two spatial prior models, the Leroux or the ICAR model, performs best in a multi-stage context under different assumptions about spatial correlation. Among the fitted single-stage models without covariates, we found that when there is a low amount of variability in the distribution of disease counts, the BYM model is relatively robust to misspecification in terms of DIC, while the Leroux model is the least robust. When these models were fitted to data generated from models with covariates, we found that with a single set of covariates, whether spatially correlated or not, changing the values of the fixed coefficients affected the ability of either the Leroux or the ICAR model to fit the data well in terms of DIC. When the simulating model contained multiple sets of spatially correlated covariates, however, we could not distinguish the goodness of fit between these single-stage models. We found that the multi-stage modelling process via the Leroux and ICAR models generally reduced the variance of the posterior estimated fixed effects for data generated from models with covariates and an uncorrelated heterogeneity (UH) term, compared with analogous single-stage models. Finally, the multi-stage Leroux model compares favourably with the multi-stage ICAR model in terms of DIC. We conclude that the multi-stage Leroux model should be seriously considered in applications of Bayesian disease mapping when an investigator wishes to fit a model with both fixed effects and spatially structured random effects to Poisson count data.
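For reference, the Leroux prior mentioned specifies the spatial random effects b = (b_1, \dots, b_n) as a zero-mean Gaussian Markov random field with precision matrix

\[ Q = \frac{1}{\sigma^2}\bigl[\rho\,(D_w - W) + (1-\rho)\, I\bigr], \]

where W is the binary neighbourhood (adjacency) matrix, D_w its diagonal matrix of row sums, and \rho \in [0,1] interpolates between independent effects (\rho = 0) and the intrinsic CAR (ICAR) model (\rho = 1); the BYM model instead uses the sum of a separate ICAR component and an independent (uncorrelated heterogeneity) component. These are the standard forms, stated here only for orientation.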
