Similar Articles

20 similar articles found.
1.
ABSTRACT

The generalized case-cohort design is widely used in large cohort studies to reduce cost and improve efficiency. Incorporating prior information about the parameters into the modeling process can further improve inference efficiency. In this paper, we consider fitting a proportional hazards model with constraints for generalized case-cohort studies. We establish a working likelihood function for the estimation of the model parameters. The asymptotic properties of the proposed estimator are derived via the Karush-Kuhn-Tucker conditions, and its finite-sample properties are assessed by simulation studies. A modified minorization-maximization algorithm is developed for the numerical calculation of the constrained estimator. An application to a Wilms tumor study demonstrates the utility of the proposed method in practice.
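
As a rough illustration of constrained proportional hazards estimation (not the paper's case-cohort working likelihood or minorization-maximization algorithm), the sketch below maximizes an ordinary Cox partial likelihood subject to a hypothetical sign constraint on one coefficient; the data and the constraint are made up for the example.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_partial_likelihood(beta, time, event, X):
    """Cox negative log partial likelihood (assumes no tied event times)."""
    eta = X @ beta
    order = np.argsort(time)                  # sort subjects by follow-up time
    eta, event = eta[order], event[order]
    # risk-set sums: for each subject, sum exp(eta) over everyone still at risk
    log_risk = np.log(np.cumsum(np.exp(eta)[::-1])[::-1])
    return -np.sum(event * (eta - log_risk))

rng = np.random.default_rng(0)
n, p = 200, 2
X = rng.normal(size=(n, p))
time = rng.exponential(scale=np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
event = (rng.uniform(size=n) < 0.7).astype(float)   # roughly 30% censoring, for illustration

# hypothetical prior information: the first coefficient is non-negative
constraints = [{"type": "ineq", "fun": lambda b: b[0]}]
fit = minimize(neg_log_partial_likelihood, x0=np.zeros(p),
               args=(time, event, X), method="SLSQP", constraints=constraints)
print("constrained estimate:", fit.x)
```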

2.
In modeling count data collected from manufacturing processes, economic series, disease outbreaks and ecological surveys, there is usually a relatively large or small number of zeros compared to positive counts. Such low or high frequencies of zero counts often require the use of underdispersed or overdispersed probability models for the underlying data-generating mechanism. Commonly used models such as the generalized or zero-inflated Poisson distributions are parametric and can usually account only for overdispersion; such distributions are often inadequate for modeling underdispersion because of the need for awkward parameter or support restrictions. This article introduces a flexible class of semiparametric zero-altered models that accounts for both underdispersion and overdispersion and includes familiar models, such as those mentioned above, as special cases. Consistency and asymptotic normality of the estimator of the dispersion parameter are derived under general conditions. Numerical support for the performance of the proposed method of inference is presented for the case of common discrete distributions.
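
For concreteness, a generic zero-altered (hurdle) construction, which may differ from the authors' semiparametric formulation, treats the zero count and the positive counts separately:

\[
P(Y = 0) = \pi, \qquad
P(Y = k) = (1 - \pi)\,\frac{f(k)}{1 - f(0)}, \qquad k = 1, 2, \ldots,
\]

where f is a baseline count distribution; choosing \(\pi\) larger or smaller than f(0) yields zero inflation or zero deflation, which is how both overdispersion and underdispersion relative to the baseline model can be accommodated.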

3.
ABSTRACT

We investigate the empirical likelihood inference approach under a general class of semiparametric hazards regression models for survival data subject to right-censoring. An empirical likelihood ratio for the full vector of 2p regression parameters involved in the model is obtained and shown to converge weakly to a random variable that can be written as a weighted sum of 2p independent chi-squared variables with one degree of freedom. This result can be used to construct a confidence region for the parameters. We also suggest an adjusted version of the statistic whose limit follows a standard chi-squared distribution with 2p degrees of freedom.
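
Schematically, and with notation assumed here rather than taken from the paper, the two limiting results described are of the form

\[
-2\log R(\theta_0) \;\xrightarrow{\;d\;}\; \sum_{j=1}^{2p} w_j\,\chi^2_{1,j}
\qquad\text{and}\qquad
-2\,r_n\log R(\theta_0) \;\xrightarrow{\;d\;}\; \chi^2_{2p},
\]

where R is the empirical likelihood ratio for the 2p regression parameters, the \(\chi^2_{1,j}\) are independent chi-squared variables with one degree of freedom, the weights \(w_j\) must be estimated, and \(r_n\) is the adjustment factor that restores a standard chi-squared limit.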

4.
On the basis of Kullback-Leibler discrimination information, and of the discrimination measures introduced by Ebrahimi and Kirmani (1996a) and by Di Crescenzo and Longobardi (2004), we propose a measure of discrepancy between doubly truncated distributions. Some properties of this measure are studied and some mistakes in the preceding literature are corrected.
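
One natural form of such a discrepancy, written here under assumed notation (the paper's definition may differ in detail), compares the doubly truncated densities of two lifetimes X and Y on an interval \((t_1, t_2)\):

\[
I_{X,Y}(t_1,t_2)
= \int_{t_1}^{t_2} \frac{f(x)}{F(t_2)-F(t_1)}\,
\log\!\left\{\frac{f(x)\big/[F(t_2)-F(t_1)]}{g(x)\big/[G(t_2)-G(t_1)]}\right\}\,\mathrm{d}x,
\]

where f, F and g, G are the density and distribution functions of X and Y; for non-negative lifetimes it reduces to the usual Kullback-Leibler discrimination information as \(t_1 \to 0\) and \(t_2 \to \infty\).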

5.
This paper studies estimation in the proportional odds model based on randomly truncated data. The proposed estimators for the regression coefficients include a class of minimum distance estimators defined through a weighted empirical odds function. We investigate asymptotic properties of the proposed estimators, such as consistency and the limiting distribution, under mild conditions. Finite-sample properties are investigated through a simulation study comparing several estimators in the class. We conclude with an illustration of the proposed method on a well-known AIDS data set.
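
For reference, the proportional odds model places a multiplicative structure on the odds of failure by time t (a standard formulation, stated here for context):

\[
\frac{1 - S(t \mid Z)}{S(t \mid Z)} \;=\; e^{\beta' Z}\,\frac{1 - S_0(t)}{S_0(t)},
\]

equivalently, \(\operatorname{logit}\{1 - S(t \mid Z)\} = \alpha(t) + \beta' Z\) with \(\alpha(t)\) the baseline log-odds of failure.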

6.
In this paper, we introduce new parametric and semiparametric regression techniques for a recurrent event process subject to random right censoring. We develop models for the cumulative mean function and provide asymptotically normal estimators. Our semiparametric model, which relies on a single-index assumption, can be seen as a dimension-reduction technique that, unlike a fully nonparametric approach, does not suffer from the curse of dimensionality when the number of covariates is high. We discuss data-driven techniques for choosing the parameters involved in the estimation procedures and provide a simulation study to support our theoretical results.
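
A generic single-index formulation of the cumulative mean function, stated here under assumed notation rather than as the authors' exact model, is

\[
\mathbb{E}\{N(t) \mid Z = z\} \;=\; m\bigl(t,\ \theta' z\bigr),
\]

where N(t) counts the recurrent events observed by time t and m is an unknown bivariate function: the covariates act only through the scalar index \(\theta' z\), which is what keeps the estimation problem low-dimensional regardless of the number of covariates.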

7.
ABSTRACT

Competing risks data are common in medical research, in which the lifetime of an individual can be classified in terms of causes of failure. In survival or reliability studies, it is common that patients (objects) are subject to both left censoring and right censoring, which is referred to as double censoring. The analysis of doubly censored competing risks data in the presence of covariates is the objective of this study. We propose a proportional hazards model for the analysis of doubly censored competing risks data, using the hazard rate functions of Gray (1988, Ann. Statist. 16:1141-1154), while focusing on one major cause of failure. We derive estimators for the regression parameter vector and the cumulative baseline cause-specific hazard rate function. Asymptotic properties of the estimators are discussed. A simulation study is conducted to assess the finite-sample behavior of the proposed estimators. We illustrate the method using real-life doubly censored competing risks data.
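
Writing \(\lambda_1(t \mid Z)\) for the hazard associated with the major cause of interest (in the spirit of Gray, 1988), the proportional hazards structure described can be sketched, with notation assumed here, as

\[
\lambda_1(t \mid Z) \;=\; \lambda_{10}(t)\,e^{\beta' Z},
\]

where \(\lambda_{10}\) is the baseline hazard for that cause and \(\Lambda_{10}(t) = \int_0^t \lambda_{10}(u)\,\mathrm{d}u\) is the cumulative baseline cause-specific quantity being estimated.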

8.
The maximum likelihood estimator (MLE) for the survival function S_T under the proportional hazards model of censorship is derived and shown to differ from the Abdushukurov-Cheng-Lin estimator when the class of allowable distributions includes all continuous and discrete distributions. The estimators are compared via an example. The MLE is calculated using a Newton-Raphson iterative procedure and implemented via a FORTRAN algorithm.
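
For context, under the proportional hazards (Koziol-Green) model of censorship the censoring survival function is a power of the lifetime survival function, and the Abdushukurov-Cheng-Lin estimator is usually written as (a standard statement, with notation assumed here)

\[
\hat S_{\mathrm{ACL}}(t) \;=\; \bigl\{1 - H_n(t)\bigr\}^{\hat p_n},
\]

where \(H_n\) is the empirical distribution function of the observed (possibly censored) times and \(\hat p_n\) is the observed proportion of uncensored observations; the MLE derived in the paper differs from this once discrete distributions are allowed into the model class.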

9.
The authors show how the genetic effect of a quantitative trait locus can be estimated by a nonparametric empirical likelihood method when the phenotype distributions are completely unspecified. They use an empirical likelihood ratio statistic for testing the genetic effect and obtaining confidence intervals. In addition to studying the asymptotic properties of these procedures, the authors present simulation results and illustrate their approach with a study on breast cancer resistance genes.

10.
In the analysis of censored survival data, the Cox (1972) proportional hazards model is extremely popular among practitioners. However, in many real-life situations the proportionality of the hazard ratios does not seem to be an appropriate assumption. To overcome this problem, we consider a class of nonproportional hazards models known as the generalized odds-rate class of regression models. The class is general enough to include several commonly used models, such as the proportional hazards model, the proportional odds model, and the accelerated lifetime model. The theoretical and computational properties of these models are re-examined. The propriety of the posterior is established under some mild conditions. A simulation study is conducted, and a detailed analysis of data from a prostate cancer study is presented to further illustrate the proposed methodology.

11.
The generalized odds-rate class of regression models for time-to-event data is indexed by a non-negative constant ρ and assumes that g_ρ(S(t|Z)) = α(t) + β'Z, where g_ρ(s) = log(ρ^{-1}(s^{-ρ} - 1)) for ρ > 0 and g_0(s) = log(-log s); here S(t|Z) is the survival function of the time to event for an individual with q×1 covariate vector Z, β is a q×1 vector of unknown regression parameters, and α(t) is some arbitrary increasing function of t. When ρ = 0, this model is equivalent to the proportional hazards model, and when ρ = 1, it reduces to the proportional odds model. In the presence of right censoring, we construct estimators for β and exp(α(t)) and show that they are consistent and asymptotically normal. In addition, we show that the estimator for β is semiparametric efficient in the sense that it attains the semiparametric variance bound.
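
A small numerical check (purely illustrative; the variable names are made up) confirms the two special cases of the link g_ρ: as ρ → 0 it approaches the complementary log-log link of the proportional hazards model, and at ρ = 1 it equals the logit of 1 − s, giving the proportional odds model.

```python
import numpy as np

def g(s, rho):
    """Generalized odds-rate link: g_rho(s) = log((s**(-rho) - 1) / rho) for rho > 0."""
    s = np.asarray(s, dtype=float)
    if rho == 0.0:                        # limiting case: complementary log-log
        return np.log(-np.log(s))
    return np.log((s ** (-rho) - 1.0) / rho)

s = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
print(g(s, 1e-8) - g(s, 0.0))             # ~0: rho -> 0 recovers log(-log s)   (PH)
print(g(s, 1.0) - np.log((1 - s) / s))    # ~0: rho = 1 gives logit(1 - s)      (PO)
```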

12.
Costs associated with the evaluation of biomarkers can restrict the number of relevant biological samples that can be measured. This common problem has been dealt with extensively in the epidemiologic and biostatistical literature, which proposes various cost-efficient procedures, including pooling and random sampling strategies. The pooling design has been widely advocated as a very efficient sampling method under certain parametric assumptions about the data distribution. When cost is not a main factor in the evaluation of biomarkers but measurements are subject to a limit of detection, a common instrument limitation, the pooling design can partially overcome this limitation. In certain situations the pooling design can provide data that are less informative than a simple random sample, although this is not always the case. Pooled-data-based nonparametric inference has not been well addressed in the literature. In this article, a distribution-free method based on the empirical likelihood technique is proposed as a substitute for the traditional parametric-likelihood approach, providing confidence interval estimation with accurate coverage and powerful tests based on data obtained from these cost-efficient designs. We also consider several nonparametric tests for comparison with the proposed procedure. We examine the proposed methodology via a broad Monte Carlo study and a real data example.

13.
Abstract.  This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically, and its practical implementation has several major advantages over the corresponding methodology for the proportional hazards model. One complication compared with the proportional hazards model, however, is that there is no simple likelihood to work with. We study a least squares criterion with desirable properties and show how this criterion can be interpreted as a prediction error. Given this criterion, we define ridge and Lasso estimators as well as an adaptive Lasso, and study their large-sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the case where p is large compared with the number of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal when the number of covariates is large. In a simulation study, we also compare the Dantzig selector and the adaptive Lasso for a moderate to small number of covariates. The methods are applied to a breast cancer data set with gene expression recordings and to the primary biliary cirrhosis clinical data.
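
The additive hazards model referred to here specifies, in the usual notation,

\[
\lambda(t \mid Z) \;=\; \lambda_0(t) + \beta' Z(t),
\]

and covariate selection then amounts to minimizing the least-squares-type criterion L(β) plus a penalty, e.g. the Lasso penalty \(\gamma \sum_j |\beta_j|\), its adaptively weighted version, or the Dantzig constraint on the gradient of L; this is a schematic description of the penalized criteria discussed above rather than their exact definitions.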

14.
This paper discusses regression analysis of clustered current status data under semiparametric additive hazards models. In particular, we consider the situation when cluster sizes can be informative about correlated failure times from the same cluster. To address the problem, we present estimating equation-based estimation procedures and establish asymptotic properties of the resulting estimates. Finite sample performance of the proposed method is assessed through an extensive simulation study, which indicates the procedure works well. The method is applied to a motivating data set from a lung tumorigenicity study.

15.
Procedures are developed for estimating the parameters of the general class of semiparametric models for recurrent events proposed by Peña and Hollander [(2004). Models for recurrent events in reliability and survival analysis. In: Soyer R., Mazzuchi T., Singpurwalla N. (Eds.), Mathematical Reliability: An Expository Perspective. Kluwer Academic Publishers, Dordrecht, pp. 105-123 (Chapter 6)]. This class of models incorporates an effective age function encoding the effect of changes after each event occurrence, such as the impact of an intervention; models the impact of accumulating event occurrences on the unit; admits a link function through which the effects of possibly time-dependent covariates are incorporated; and allows unobservable frailty components that induce dependence among the inter-event times for each unit. The estimation procedures are semiparametric in that the baseline hazard function is nonparametrically specified. The sampling distributions of the estimators are examined through a simulation study, and the consequences of misspecifying the model are analyzed. The results indicate that the flexibility of this general class of models provides a safeguard when analyzing recurrent event data, even data possibly arising from a frailty-less mechanism. The estimation procedures are applied to real data sets arising in biomedical and public health settings, as well as in reliability and engineering situations. In particular, the procedures are applied to a data set pertaining to times to recurrence of bladder cancer, and the results of the analysis are compared to those obtained using three methods of analyzing recurrent event data.
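
Schematically, and with notation assumed here rather than quoted from the paper, the Peña-Hollander class specifies the conditional intensity of the recurrent event process of unit i as

\[
\lambda_i(s) \;=\; Z_i\,\lambda_0\!\bigl(\mathcal{E}_i(s)\bigr)\,
\rho\!\bigl(N_i(s^-);\alpha\bigr)\,\psi\!\bigl(\beta' X_i(s)\bigr),
\]

where \(Z_i\) is an unobservable frailty, \(\mathcal{E}_i(s)\) is the effective age at calendar time s, \(\rho(\cdot;\alpha)\) encodes the effect of the accumulated number of event occurrences \(N_i(s^-)\), \(\psi\) is the link function applied to the (possibly time-dependent) covariates \(X_i(s)\), and \(\lambda_0\) is the nonparametrically specified baseline hazard.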

16.
This paper addresses the problem of testing whether an individual covariate in the Cox model has a proportional (i.e., time-constant) effect on the hazard. Two existing methods are considered: one is based on a component of the score process, and the other is a Neyman-type smooth test. Simulations show that, when the model contains both proportional and nonproportional covariates, these methods are not reliable tools for discrimination. A simple yet effective solution is proposed, based on smooth modeling of the effects of the covariates not in focus.
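
One common way to formalize such a test, given here as a sketch of the idea rather than the exact statistics compared in the paper, is to let the coefficient of the covariate under scrutiny vary with time,

\[
\lambda(t \mid Z) = \lambda_0(t)\exp\bigl\{\beta_1(t) Z_1 + \beta_2' Z_2\bigr\},
\qquad
\beta_1(t) = \beta_1 + \sum_{k=1}^{K} \theta_k\,\varphi_k(t),
\]

and to test \(H_0: \theta_1 = \cdots = \theta_K = 0\) for smooth basis functions \(\varphi_k\); the remedy proposed above corresponds to also modeling the effects of the remaining covariates \(Z_2\) smoothly in time instead of forcing them to be constant.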

17.
Testing for stratum effects in the Cox model is an important and commonly asked question in medical research as well as in many other fields. In this paper, we discuss the problem where one observes interval-censored failure time data and generalize the procedure given in Sun and Yang (2000) for right-censored data. The asymptotic distribution of the new test statistic is established, and a simulation study conducted to evaluate the finite-sample properties of the method suggests that the generalized procedure works well in practical situations. An application is provided.

18.
Previous studies of the apparent influence of daylight level and clock-hour changes on the incidence of road casualties are reviewed and refined by analysis of official databases for Great Britain (1969–1973 and 1985–1994) and the USA (1991–1995). New statistical methods, based on precisely computed altitudes of the sun for each accident location, are used to model casualty frequencies aggregated by week and hour of day, and to locally evaluate associations between individual casualty incidence and solar altitude. Estimates of the altitude factor are interpreted causally to give counterfactual estimates of the effect of different clock-time schedules on countrywide casualty numbers.
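
As a rough indication of the kind of astronomical input involved (this is a textbook approximation, not the authors' exact computation), solar altitude can be obtained from latitude, day of year and local solar time as follows; the example location and times are made up.

```python
import numpy as np

def solar_altitude_deg(lat_deg, day_of_year, solar_hour):
    """Approximate solar altitude (degrees) from latitude, day of year and local solar time."""
    decl = -23.44 * np.cos(np.radians(360.0 / 365.0 * (day_of_year + 10)))  # solar declination
    hour_angle = 15.0 * (solar_hour - 12.0)                                 # degrees from solar noon
    lat, dec, ha = map(np.radians, (lat_deg, decl, hour_angle))
    sin_alt = np.sin(lat) * np.sin(dec) + np.cos(lat) * np.cos(dec) * np.cos(ha)
    return np.degrees(np.arcsin(sin_alt))

# e.g. latitude 51.5 N at 17:00 local solar time, late January vs late June
print(solar_altitude_deg(51.5, 25, 17.0))    # about -6 degrees: sun below the horizon
print(solar_altitude_deg(51.5, 175, 17.0))   # about +27 degrees: sun well above the horizon
```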

19.
In this paper, we discuss inference for the Box-Cox transformation model when one faces left-truncated and right-censored data, which often arise, for example, in studies involving a cross-sectional sampling scheme. It is well known that the Box-Cox transformation model includes many commonly used models as special cases, such as the proportional hazards model and the additive hazards model. For inference, a Bayesian estimation approach is proposed in which a piecewise function is used to approximate the baseline hazard function. A conditional marginal prior, whose marginal part is free of any constraints, is employed to deal with the computational challenges caused by the constraints on the parameters, and an MCMC sampling procedure is developed. A simulation study conducted to assess the finite-sample performance of the proposed method indicates that it works well in practical situations. We apply the approach to a data set arising from a retirement center.
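
One Box-Cox-type transformation hazard family that contains both special cases mentioned above is, under an assumed parameterization that may differ from the paper's,

\[
\lambda(t \mid Z) \;=\;
\begin{cases}
\bigl\{\lambda_0(t)^{\gamma} + \gamma\,\beta' Z\bigr\}^{1/\gamma}, & \gamma > 0,\\[4pt]
\lambda_0(t)\,e^{\beta' Z}, & \gamma = 0,
\end{cases}
\]

so that \(\gamma \to 0\) recovers the proportional hazards model and \(\gamma = 1\) gives the additive hazards model \(\lambda_0(t) + \beta' Z\); for \(\gamma > 0\) the requirement \(\lambda(t \mid Z) \ge 0\) induces constraints on the parameters of the kind mentioned above.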

20.
Median survival times and their associated confidence intervals are often used to summarize the survival outcome of a group of patients in clinical trials with failure-time endpoints. Although there is an extensive literature on this topic for the case in which the patients come from a homogeneous population, few papers have dealt with the case in which covariates are present, as in the proportional hazards model. In this paper we propose a new approach to this problem and demonstrate its advantages over existing methods, not only for the proportional hazards model but also for the widely studied cases where covariates are absent and where there is no censoring. As an illustration, we apply it to the Stanford Heart Transplant data. Asymptotic theory and simulation studies show that the proposed method yields confidence intervals and bands with accurate coverage.
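
Under the proportional hazards model the median survival time for a subject with covariate value z solves \(S(t \mid z) = 1/2\); with \(S(t \mid z) = \exp\{-\Lambda_0(t)\,e^{\beta' z}\}\) the plug-in estimate is the standard expression (notation assumed here)

\[
\hat m(z) \;=\; \inf\bigl\{\, t : \hat\Lambda_0(t)\,e^{\hat\beta' z} \ge \log 2 \,\bigr\},
\]

and the contribution described above lies in attaching confidence intervals and bands to \(\hat m(z)\) with accurate coverage.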
