Similar Literature
20 similar documents found.
1.
Multivariate failure time data arise when data consist of clusters in which the failure times may be dependent. A popular approach to such data is the marginal proportional hazards model with estimation under the working independence assumption. In this paper, we consider the Clayton–Oakes model with marginal proportional hazards and use the full model structure to improve efficiency relative to the independence analysis. We derive a likelihood-based estimating equation for the regression parameters as well as for the correlation parameter of the model. We give the large sample properties of the estimators arising from this estimating equation. Finally, we investigate the small sample properties of the estimators through Monte Carlo simulations.
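For orientation, the Clayton–Oakes model with proportional-hazards margins can be sketched as follows; the notation (paired failure times, association parameter θ) is illustrative and not taken from the paper.
\[
S(t_1, t_2 \mid Z_1, Z_2) = \Bigl\{ S_1(t_1 \mid Z_1)^{-\theta} + S_2(t_2 \mid Z_2)^{-\theta} - 1 \Bigr\}^{-1/\theta},
\qquad
S_k(t \mid Z_k) = \exp\bigl\{ -\Lambda_{0}(t)\, e^{\beta^{\top} Z_k} \bigr\},
\]
where θ > 0 measures the within-cluster dependence and θ → 0 recovers the working-independence analysis.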

2.
Multivariate failure time data arise when data consist of clusters in which the failure times may be dependent. A popular approach to such data is the marginal proportional hazards model with estimation under the working independence assumption. In some contexts, however, it may be more reasonable to use the marginal additive hazards model. We derive asymptotic properties of the Lin and Ying estimators for the marginal additive hazards model for multivariate failure time data. Furthermore, we suggest estimating equations for the regression parameters and association parameters in parametric shared frailty models with marginal additive hazards by using the Lin and Ying estimators. We give the large sample properties of the estimators arising from these estimating equations and investigate their small sample properties by Monte Carlo simulation. A real example is provided for illustration.
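For reference, the Lin and Ying marginal additive hazards model specifies, for member k of cluster i (illustrative notation),
\[
\lambda_{ik}(t \mid Z_{ik}) = \lambda_{0}(t) + \beta^{\top} Z_{ik}(t),
\]
so covariates shift an unspecified baseline hazard additively rather than multiplicatively as in the proportional hazards model.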

3.
Gu MG, Sun L, Zuo G. Lifetime Data Analysis, 2005, 11(4): 473–488
An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computational algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures available so far involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, confirming the asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean zero martingale is provided.
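A brief sketch of the class of linear transformation models referred to here, with assumed notation: an unknown monotone transformation H of the failure time is linear in the covariates,
\[
H(T) = -\beta^{\top} Z + \varepsilon,
\]
where ε has a known distribution. An extreme-value ε yields the Cox proportional hazards model and a standard logistic ε yields the proportional odds model; the baseline-free idea is to base inference on a marginal likelihood that does not involve H.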

4.
We consider robust methods of likelihood and frequentist inference for the nonlinear parameter, say α, in conditionally linear nonlinear regression models. We derive closed-form expressions for robust conditional, marginal, profile and modified profile likelihood functions for α under elliptically contoured data distributions. Next, we develop robust exact-F confidence intervals for α and consider robust Fieller intervals for ratios of regression parameters in linear models. Several well-known examples are considered and Monte Carlo simulation results are presented.
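As an illustration only (the model and symbols below are assumed, not taken from the paper), a conditionally linear nonlinear regression has the form
\[
y_i = \beta_0 + \beta_1\, e^{-\alpha x_i} + \varepsilon_i,
\]
which is linear in the regression coefficients once α is fixed, and a Fieller interval for a ratio ρ = β_1/β_2 in a linear model collects the values of ρ satisfying
\[
\frac{(\hat\beta_1 - \rho\,\hat\beta_2)^2}
{\widehat{\operatorname{Var}}(\hat\beta_1) - 2\rho\,\widehat{\operatorname{Cov}}(\hat\beta_1,\hat\beta_2) + \rho^{2}\,\widehat{\operatorname{Var}}(\hat\beta_2)} \le t^{2}_{\nu,\,1-\gamma/2}.
\]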

5.
The proportional reversed hazards model explains the multiplicative effect of covariates on the baseline reversed hazard rate function of lifetimes. In the present study, we introduce a proportional cause-specific reversed hazards model. The proposed regression model facilitates the analysis of failure time data with multiple causes of failure under left censoring. We estimate the regression parameters using a partial likelihood approach. We provide Breslow-type estimators for the cumulative cause-specific reversed hazard rate functions. Asymptotic properties of the estimators are discussed. Simulation studies are conducted to assess their performance. We illustrate the applicability of the proposed model using a real data set.
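For context, the reversed hazard rate of a lifetime T with distribution function F and density f is λ̃(t) = f(t)/F(t), and the proportional reversed hazards assumption (illustrative notation) reads
\[
\tilde\lambda(t \mid Z) = \tilde\lambda_{0}(t)\, e^{\beta^{\top} Z}
\quad\Longleftrightarrow\quad
F(t \mid Z) = F_{0}(t)^{\exp(\beta^{\top} Z)} .
\]
A cause-specific version applies the same multiplicative structure to the reversed hazard of each failure cause.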

6.
We consider a semiparametric and a parametric transformation-to-normality model for bivariate data. After an unstructured or structured monotone transformation of the measurement scales, the measurements are assumed to have a bivariate normal distribution with correlation coefficient ρ, here termed the 'transformation correlation coefficient'. Under the semiparametric model with an unstructured transformation, the principle of invariance leads to basing inference on the marginal ranks. The resulting rank-based likelihood function of ρ is maximized via a Monte Carlo procedure. Under the parametric model, we consider Box–Cox-type transformations and maximize the likelihood of ρ along with the nuisance parameters. Efficiencies of competing methods are reported, both theoretically and by simulations. The methods are illustrated on a real-data example.
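The parametric version uses Box–Cox-type marginal transformations; a minimal sketch, with illustrative notation:
\[
g_{\lambda}(y) = \frac{y^{\lambda} - 1}{\lambda} \ (\lambda \neq 0), \qquad g_{0}(y) = \log y, \qquad
\bigl( g_{\lambda_1}(Y_1),\, g_{\lambda_2}(Y_2) \bigr) \sim N_2\bigl( \mu, \Sigma(\rho) \bigr),
\]
where ρ is the transformation correlation coefficient.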

7.
Mixed-model-based approaches for semiparametric regression have gained much interest in recent years, both in theory and application. They provide a unified and modular framework for penalized likelihood and closely related empirical Bayes inference. In this article, we develop mixed model methodology for a broad class of Cox-type hazard regression models where the usual linear predictor is generalized to a geoadditive predictor incorporating non-parametric terms for the (log-)baseline hazard rate, time-varying coefficients and non-linear effects of continuous covariates, a spatial component, and additional cluster-specific frailties. Non-linear and time-varying effects are modelled through penalized splines, while spatial components are treated as correlated random effects following either a Markov random field or a stationary Gaussian random field prior. Generalizing existing mixed model methodology, inference is derived using penalized likelihood for regression coefficients and (approximate) marginal likelihood for smoothing parameters. In a simulation study we examine the performance of the proposed method, in particular comparing it with its fully Bayesian counterpart using Markov chain Monte Carlo methodology, and complement the results by some asymptotic considerations. As an application, we analyse leukaemia survival data from northwest England.
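The geoadditive hazard described above can be sketched, with illustrative notation, as
\[
\lambda_i(t) = \exp\Bigl\{ g_0(t) + \sum_{j} f_j(x_{ij}) + \sum_{l} g_l(t)\, z_{il} + f_{\mathrm{spat}}(s_i) + u_i^{\top}\gamma + b_{c(i)} \Bigr\},
\]
where g_0(t) is the log-baseline hazard, the f_j are penalized-spline effects of continuous covariates, the g_l(t) are time-varying coefficients, f_spat(s_i) is the spatial effect (Markov or Gaussian random field), u_i^T γ collects parametric effects, and b_{c(i)} is a cluster-specific log-frailty.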

8.
We propose a robust likelihood approach for the Birnbaum–Saunders regression model under model misspecification, which provides full likelihood inferences about regression parameters without knowing the true random mechanisms underlying the data. Monte Carlo simulation experiments and analysis of real data sets are carried out to illustrate the efficacy of the proposed robust methodology.
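For reference, the Birnbaum–Saunders distribution with shape α and scale β has distribution function
\[
F(t; \alpha, \beta) = \Phi\!\left[ \frac{1}{\alpha}\left( \sqrt{t/\beta} - \sqrt{\beta/t} \right) \right], \qquad t > 0,
\]
and a common regression formulation (assumed here, since the abstract does not state it) links the scale parameter to covariates through \(\log \beta_i = x_i^{\top}\theta\).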

9.
We propose a new class of continuous distributions with two extra shape parameters, named the generalized odd log-logistic family of distributions. The proposed family contains as special cases the proportional reversed hazard rate and odd log-logistic classes. Its density function can be expressed as a linear combination of exponentiated densities based on the same baseline distribution. Some of its mathematical properties, including ordinary moments, quantile and generating functions, two entropy measures and order statistics, are obtained. We derive a power series for the quantile function. We discuss the method of maximum likelihood to estimate the model parameters. We study the behaviour of the estimators by means of Monte Carlo simulations. We introduce the log-odd log-logistic Weibull regression model with censored data based on the odd log-logistic Weibull distribution. The importance of the new family is illustrated using three real data sets. These applications indicate that this family can provide better fits than other well-known classes of distributions. The beauty and importance of the proposed family lie in its ability to model different types of real data.
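One common parameterization of the generalized odd log-logistic-G family (stated as an assumption, since the abstract does not display the formula) is
\[
F(x) = \frac{G(x)^{\alpha\theta}}{G(x)^{\alpha\theta} + \bigl[ 1 - G(x)^{\theta} \bigr]^{\alpha}},
\]
where G is the baseline distribution function and α, θ > 0 are the extra shape parameters; θ = 1 reduces to the odd log-logistic class and α = 1 to the proportional reversed hazard rate (exponentiated-G) class.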

10.
Generalized linear models with random effects and/or serial dependence are commonly used to analyze longitudinal data. However, the computation and interpretation of marginal covariate effects can be difficult. This led Heagerty (1999, 2002) to propose models for longitudinal binary data in which a logistic regression is first used to explain the average marginal response. The model is then completed by introducing a conditional regression that allows for the longitudinal, within-subject dependence, either via random effects or by regressing on previous responses. In this paper, the authors extend the work of Heagerty to handle multivariate longitudinal binary response data using a triple of regression models that directly model the marginal mean response while taking into account dependence across time and across responses. Markov chain Monte Carlo methods are used for inference. Data from the Iowa Youth and Families Project are used to illustrate the methods.
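Heagerty's marginalized specification, sketched here with assumed notation, pairs a marginal logistic regression with a compatible conditional model:
\[
\operatorname{logit} \Pr(Y_{it} = 1 \mid x_{it}) = x_{it}^{\top}\beta,
\qquad
\operatorname{logit} \Pr(Y_{it} = 1 \mid x_{it}, b_i) = \Delta_{it} + b_i, \quad b_i \sim N(0, \sigma^2),
\]
where the intercept Δ_{it} is determined implicitly by requiring that the conditional model, averaged over the random effect b_i, reproduce the marginal mean.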

11.
The authors consider the problem of Bayesian variable selection for proportional hazards regression models with right-censored data. They propose a semi-parametric approach in which a nonparametric prior is specified for the baseline hazard rate and a fully parametric prior is specified for the regression coefficients. For the baseline hazard, they use a discrete gamma process prior, and for the regression coefficients and the model space, they propose a semi-automatic parametric informative prior specification that focuses on the observables rather than the parameters. To implement the methodology, they propose a Markov chain Monte Carlo method to compute the posterior model probabilities. Examples using simulated and real data are given to demonstrate the methodology.

12.
In this article, we investigate the monotonicity of the density, failure rate, and mean residual life functions of the log-exponential inverse Gaussian distribution. It turns out that, in this case, the monotonicity of these three functions takes different forms depending on the range of the parameters. Maximum likelihood estimators of the critical points of the density, failure rate, and mean residual life functions of the model are evaluated using Monte Carlo simulations. An example with a published data set is used to illustrate the estimation of the critical points.

13.
Various types of failure-censored and accelerated life tests are commonly employed for life testing in manufacturing industries whose products are highly reliable. In this article, we consider the tampered failure rate model, one such accelerated-life model, which relates the distribution under the use condition to the distribution under the accelerated condition. It is assumed that the lifetimes of products under the use condition follow a generalized Pareto distribution. Several estimation methods for the parameters, namely graphical, moment, probability weighted moment, and maximum likelihood estimation, are discussed based on progressively type-I censored data. The determination of the optimal stress change time is discussed under two different criteria of optimality. Finally, a Monte Carlo simulation study is carried out to examine the performance of the estimation methods and the optimality criteria.
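In a tampered failure rate design with a single stress change time τ (illustrative notation), the hazard of a test unit is
\[
\lambda(t) =
\begin{cases}
\lambda_u(t), & 0 < t \le \tau, \\
\alpha\,\lambda_u(t), & t > \tau,
\end{cases}
\]
where λ_u is the hazard under the use condition, taken here to be that of a generalized Pareto lifetime, and α > 1 is the tampering coefficient.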

14.
In this paper, we propose a class of general partially linear varying-coefficient transformation models for ranking data. In these models, the functional coefficients are viewed as nuisance parameters and approximated by the B-spline smoothing technique. The B-spline coefficients and regression parameters are estimated by a rank-based maximum marginal likelihood method. A three-stage Markov chain Monte Carlo stochastic approximation algorithm based on ranking data is used to compute the estimates and the corresponding variances for all the B-spline coefficients and regression parameters. Through three simulation studies and an application to Hong Kong horse racing data, the proposed procedure is shown to be accurate, stable and practical.
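A generic partially linear varying-coefficient transformation model, written with assumed notation, takes the form
\[
H(T) = -\bigl\{ X^{\top}\beta + Z^{\top} g(W) \bigr\} + \varepsilon,
\qquad
g_j(w) \approx \sum_{k=1}^{K} \gamma_{jk}\, B_k(w),
\]
where H is an unknown monotone transformation, ε has a known distribution, and each functional coefficient g_j is approximated in a B-spline basis {B_k} whose coefficients γ_{jk} are estimated jointly with β.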

15.
This paper deals with the analysis of multivariate survival data from a Bayesian perspective using Markov chain Monte Carlo methods. The Metropolis algorithm, along with the Gibbs sampler, is used to calculate some of the marginal posterior distributions. A multivariate survival model is proposed, since survival times within the same group are correlated as a consequence of a frailty random block effect. The conditional proportional-hazards model of Clayton and Cuzick is used with a martingale-structured prior process (Arjas and Gasbarra) for the discretized baseline hazard. Besides the calculation of the marginal posterior distributions of the parameters of interest, this paper presents some Bayesian EDA diagnostic techniques for assessing model adequacy. The methodology is illustrated with kidney infection data, where the times to infection within the same patient are expected to be correlated.
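The conditional proportional-hazards (shared frailty) structure used here can be sketched, with illustrative notation, as
\[
\lambda_{ij}(t \mid w_i) = w_i\, \lambda_0(t)\, \exp(\beta^{\top} z_{ij}),
\]
where w_i is an unobserved frailty shared by all members of group i (a gamma frailty with unit mean in the Clayton and Cuzick formulation); failure times within a group are independent given w_i but marginally correlated.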

16.
The properties of a method of estimating the ratio of parameters for ordered categorical response regression models are discussed. If the link function relating the response variable to the linear combination of covariates is unknown then it is only possible to estimate the ratio of regression parameters. This ratio of parameters has a substitutability or relative importance interpretation.

The maximum likelihood estimate of the ratio of parameters, assuming a logistic link function (McCullagh, 1980), is found to have very small bias for a wide variety of true link functions. Further, it is shown using Monte Carlo simulations that this maximum likelihood estimate has good coverage properties, even if the link function is incorrectly specified. It is demonstrated that combining adjacent categories to make the response binary can result in an analysis that is appreciably less efficient. The size of the efficiency loss depends on, among other factors, the marginal distribution over the ordered categories.
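The logistic working model referred to is the proportional odds model; with an unknown link, the regression coefficients are identified only up to scale, so ratios of parameters remain estimable. A sketch with illustrative notation:
\[
\Pr(Y \le j \mid x) = G\bigl( \theta_j - \beta^{\top} x \bigr), \qquad j = 1, \dots, J - 1,
\]
where G is the (possibly unknown) link distribution function; taking G to be standard logistic gives McCullagh's model, and a ratio such as β_1/β_2 is unchanged by any monotone rescaling of the latent index.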

17.
Estimation of the population mean under the regression model with random components is considered. Conditions under which the random components regression estimator is design consistent are given. It is shown that consistency holds when incorrect values are used for the variance components. The regression estimator constructed with model parameters that differ considerably from the true parameters performed well in a Monte Carlo study. Variance estimators for the regression predictor are suggested. A variance estimator appropriate for estimators constructed with a biased estimator for the between-group variance component performed well in the Monte Carlo study.

18.
We use a class of parametric counting process regression models that are commonly employed in the analysis of failure time data to formulate the subject-specific capture probabilities for removal and recapture studies conducted in continuous time. We estimate the regression parameters by modifying the conventional likelihood score function for left-truncated and right-censored data to accommodate an unknown population size and missing covariates on uncaptured subjects, and we subsequently estimate the population size by a martingale-based estimating function. The resultant estimators for the regression parameters and population size are consistent and asymptotically normal under appropriate regularity conditions. We assess the small sample properties of the proposed estimators through Monte Carlo simulation and we present an application to a bird banding exercise.

19.
Bayesian estimates of the two unknown parameters and of the reliability function of the exponentiated Weibull model are obtained based on generalized order statistics. Markov chain Monte Carlo (MCMC) methods are used to compute the Bayes estimates of the target parameters. Our computations are based on the balanced loss function, which contains the symmetric and asymmetric loss functions as special cases. The results are specialized to progressively Type-II censored data and upper record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
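For orientation, one standard parameterization of the exponentiated Weibull distribution and the squared-error form of the balanced loss function, both stated as assumptions rather than taken from the abstract, are
\[
F(t) = \bigl[ 1 - e^{-(\lambda t)^{\beta}} \bigr]^{\alpha}, \quad t > 0,
\qquad
L_{\omega, \delta_0}(\theta, \delta) = \omega\,(\delta - \delta_0)^2 + (1 - \omega)\,(\delta - \theta)^2,
\]
where δ_0 is a target estimator (for example the maximum likelihood estimator) and ω ∈ [0, 1); ω = 0 recovers ordinary squared-error loss.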

20.
Model selection for marginal regression analysis of longitudinal data is challenging owing to the presence of correlation and the difficulty of specifying the full likelihood, particularly for correlated categorical data. The paper introduces a novel Bayesian information criterion type model selection procedure based on the quadratic inference function, which does not require the full likelihood or quasi-likelihood. With probability approaching 1, the criterion selects the most parsimonious correct model. Although a working correlation matrix is assumed, there is no need to estimate the nuisance parameters in the working correlation matrix; moreover, the model selection procedure is robust against misspecification of the working correlation matrix. The proposed criterion can also be used to construct a data-driven Neyman smooth test for checking the goodness of fit of a postulated model. This test is especially useful and often yields much higher power in situations where the classical directional test behaves poorly. The finite sample performance of the model selection and model checking procedures is demonstrated through Monte Carlo studies and analysis of a clinical trial data set.
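The quadratic inference function underlying the criterion can be sketched (notation assumed) as
\[
Q_N(\beta) = N\, \bar g_N(\beta)^{\top} C_N(\beta)^{-1} \bar g_N(\beta),
\qquad
\mathrm{BIC}_{Q} = Q_N(\hat\beta) + \dim(\beta)\,\log N,
\]
where \(\bar g_N\) stacks extended estimating functions built from a basis expansion of the inverse working correlation matrix and C_N is their sample covariance; a BIC-type penalty of dim(β) log N is added to the minimized QIF to compare candidate mean models.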
