Similar Documents
20 similar documents found (search time: 15 ms)
1.
A graph-theoretic approach is employed to describe the support set of the nonparametric maximum likelihood estimator of the cumulative distribution function given interval-censored and left-truncated data. A necessary and sufficient condition for the existence of the nonparametric maximum likelihood estimator is then derived. Two previously analysed data sets are revisited.

2.
In this note, we consider data subject to middle censoring, where the variable of interest becomes unobservable when it falls within an interval of censorship. We demonstrate that the nonparametric maximum likelihood estimator (NPMLE) of the distribution function can be obtained using Turnbull's (1976) EM algorithm or the self-consistent estimating equation (Jammalamadaka and Mangalam, 2003), with an initial estimator that puts mass only on the innermost intervals. The consistency of the NPMLE can be established from the asymptotic properties of self-consistent estimators (SCE) with mixed interval-censored data (Yu et al., 2000, 2001).
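As a rough illustration of the self-consistency iteration behind such estimators, the Python sketch below applies the Turnbull-style update to a compatibility matrix. How the candidate support points (e.g., the innermost intervals) and the matrix are constructed from middle-censored data is assumed here for illustration, not taken from the paper.

```python
import numpy as np

def self_consistent_npmle(alpha, tol=1e-8, max_iter=10_000):
    """Turnbull-style self-consistency iteration.

    alpha : (n, m) 0/1 matrix; alpha[i, j] = 1 if candidate support
            point j is compatible with observation i (the point itself
            for an exact observation, every point inside the censoring
            interval for a middle-censored one). Every row is assumed
            to contain at least one 1.
    Returns the estimated probability masses on the m support points.
    """
    n, m = alpha.shape
    p = np.full(m, 1.0 / m)          # start from the uniform distribution
    for _ in range(max_iter):
        denom = alpha @ p            # total mass compatible with obs. i
        p_new = (alpha * p).T @ (1.0 / denom) / n   # self-consistency update
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p
```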

3.
In earlier work [Kirchner, An estimation procedure for the Hawkes process. Quant Financ. 2017;17(4):571–595], we introduced a nonparametric estimation method for the Hawkes point process. In this paper, we present a simulation study that compares this nonparametric method to maximum-likelihood estimation. We find that the standard deviations of both estimation methods decrease as power laws in the sample size, and that they are proportional to each other. For example, for one specific Hawkes model, the standard deviation of the branching-coefficient estimate is roughly 20% larger than for MLE across all sample sizes considered; this factor shrinks as the true branching coefficient grows. In terms of runtime, our method clearly outperforms MLE. The bias of our method can be well explained and controlled. As an incidental finding, MLE estimates also appear to be significantly biased when the underlying Hawkes model is near criticality, which calls for a more rigorous analysis of the Hawkes likelihood and its optimization.
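For context on the MLE side of such a comparison, below is a minimal sketch of the exponential-kernel Hawkes log-likelihood in Python. The kernel choice, toy event times, and optimizer settings are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

def hawkes_loglik(params, times, horizon):
    """Log-likelihood of a Hawkes process with exponential kernel
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)),
    observed on [0, horizon]. The branching coefficient is alpha / beta."""
    mu, alpha, beta = params
    if mu <= 0 or alpha < 0 or beta <= 0:
        return -np.inf
    loglik, decay, prev = 0.0, 0.0, 0.0   # 'decay' carries the recursive sum
    for t in times:
        decay = np.exp(-beta * (t - prev)) * decay
        loglik += np.log(mu + alpha * decay)
        decay += 1.0                       # the new event starts contributing
        prev = t
    # compensator: integral of lambda over [0, horizon]
    comp = mu * horizon + (alpha / beta) * np.sum(
        1.0 - np.exp(-beta * (horizon - times)))
    return loglik - comp

# maximize by minimizing the negative log-likelihood on toy event times
times = np.array([0.5, 1.2, 1.3, 2.7, 4.1])
res = minimize(lambda p: -hawkes_loglik(p, times, horizon=5.0),
               x0=[0.5, 0.5, 1.0], method="Nelder-Mead")
```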

4.
5.
The most widely used model for multidimensional survival analysis is the Cox model. This model is semi-parametric: its hazard function is the product of an unspecified baseline hazard and a parametric functional form relating the hazard to the covariates. We consider a more flexible, fully nonparametric proportional hazards model in which the functional form of the covariate effect is left unspecified. In this model, estimation is based on the maximum likelihood method. Results from a Monte Carlo experiment and from real data are presented. Finally, the advantages and limitations of the approach are discussed.
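For concreteness, the contrast can be written in hazard form (our notation, not the paper's): the Cox model specifies a parametric covariate effect, while the model considered here leaves it unspecified,

$$
\lambda(t \mid x) = \lambda_0(t)\, e^{x^\top \beta} \quad \text{(Cox)}
\qquad\text{vs.}\qquad
\lambda(t \mid x) = \lambda_0(t)\, e^{g(x)} \quad (g \text{ unspecified}),
$$

with \(\lambda_0\) the unspecified baseline hazard in both cases.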

6.
In this paper, we study an algorithm for computing the nonparametric maximum likelihood estimator of stochastically ordered survival functions from case 2 interval-censored data. The algorithm, denoted SQP (sequential quadratic programming), re-parameterizes the likelihood function so that the order constraints become a set of linear constraints, approximates the log-likelihood by a quadratic function, and updates the estimate by solving a quadratic program. We consider two stochastic orderings in particular, the simple and uniform orderings, although the algorithm can be applied to many other stochastic orderings as well. We illustrate the algorithm using the breast cancer data reported in Finkelstein and Wolfe (1985, Biometrics 41:933–945).
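As a toy illustration of the kind of quadratic subproblem an SQP scheme solves at each step, the Python sketch below projects an unconstrained update onto the monotone cone implied by a simple ordering. The objective, data, and use of SciPy's SLSQP solver are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

# One SQP-style subproblem: project an unconstrained update y onto the
# monotone cone {x : x[0] >= x[1] >= ... >= x[m-1]} by solving a QP.
y = np.array([0.90, 0.95, 0.60, 0.65, 0.30])    # toy unconstrained estimate

res = minimize(
    lambda x: 0.5 * np.sum((x - y) ** 2),        # quadratic objective
    x0=np.sort(y)[::-1],                         # feasible starting point
    method="SLSQP",
    constraints={"type": "ineq",                 # x[j] - x[j+1] >= 0
                 "fun": lambda x: x[:-1] - x[1:]},
)
print(res.x)  # the monotone sequence closest to y
```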

7.
8.
We propose a new class of semiparametric regression models based on a multiplicative frailty assumption with a discrete frailty, which can account for a cured subgroup in the population. The cure-model framework is then recast as a transformation-model problem. The proposed models can capture a broad range of nonproportional hazards structures together with a cured proportion. An efficient and simple algorithm based on the martingale process is developed to locate the nonparametric maximum likelihood estimator. Unlike existing expectation-maximization-based methods, our approach directly maximizes a nonparametric likelihood function, and consistent variance estimates follow immediately. The proposed method is also useful for resolving identifiability issues embedded in semiparametric cure models. Simulation studies demonstrate the finite-sample properties of the proposed method, and a case study of stage III soft-tissue sarcoma is given as an illustration.
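One way to see how a discrete frailty produces a cured proportion (our notation; the paper's transformation-model formulation is more general): with cumulative baseline hazard \(H_0\) and discrete frailty \(Z\),

$$
S_{\mathrm{pop}}(t) = \mathbb{E}\!\left[e^{-Z H_0(t)}\right]
= \sum_{k} \Pr(Z = z_k)\, e^{-z_k H_0(t)},
\qquad
S_{\mathrm{pop}}(\infty) = \Pr(Z = 0),
$$

so a frailty point mass at zero acts as the cured subgroup.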

9.
Likelihood functions are the foundation of many statistical methodologies in classical data analysis, and likelihood functions for symbolic data must be introduced before these classical methods can be extended to the analysis of symbolic data. In this paper, we propose a likelihood function for symbolic data and illustrate its applications by finding the maximum likelihood estimators of the mean and variance for three common types of symbolic-valued random variables: interval-valued, histogram-valued, and triangular-distribution-valued variables.
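As a minimal sketch of moment-type estimates for the interval-valued case, the Python fragment below treats each observed value as uniformly spread over its interval, a common convention for symbolic data; the paper's likelihood-based estimators may differ in detail.

```python
import numpy as np

def interval_mean_var(a, b):
    """Sample mean and variance for interval-valued observations [a_i, b_i],
    assuming each value is uniformly distributed over its interval."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mean = np.mean((a + b) / 2.0)                    # mean of the midpoints
    # E[X^2] for X ~ U(a, b) is (a^2 + a*b + b^2) / 3, averaged over intervals
    second_moment = np.mean((a**2 + a * b + b**2) / 3.0)
    return mean, second_moment - mean**2

m, v = interval_mean_var(a=[1.0, 2.5, 0.5], b=[2.0, 3.5, 2.5])
```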

10.
This research focuses on the estimation of tumor incidence rates from long-term animal studies which incorporate interim sacrifices. A nonparametric stochastic model is described with transition rates between states corresponding to the tumor incidence rate, the overall death rate, and the death rate for tumor-free animals. Exact analytic solutions for the maximum likelihood estimators of the hazard rates are presented, and their application to data from a long-term animal study is illustrated by an example. Unlike many common methods for estimation and comparison of tumor incidence rates among treatment groups, the estimators derived in this paper require no assumptions regarding tumor lethality or treatment lethality. The small sample operating characteristics of these estimators are evaluated using Monte Carlo simulation studies.

11.
The lognormal distribution is commonly used as a lifetime distribution, and data arising from life-testing and reliability studies are often left truncated and right censored. Here, the EM algorithm is used to estimate the parameters of the lognormal model from left-truncated and right-censored data. The maximization step of the algorithm is carried out by two alternative methods: one involving an approximation via Taylor series expansion (leading to an approximate maximum likelihood estimate) and the other based on the EM gradient algorithm (Lange, 1995). The two methods are compared through Monte Carlo simulations. The Fisher scoring method for obtaining the maximum likelihood estimates exhibits convergence problems in this setup, except when the truncation percentage is small. The asymptotic variance-covariance matrix of the MLEs is derived using the missing-information principle (Louis, 1982), and the resulting asymptotic confidence intervals for the scale and shape parameters are compared with the corresponding bootstrap confidence intervals. Finally, numerical examples illustrate all the methods of inference developed here.
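A minimal sketch of the underlying likelihood, maximized here directly with a generic optimizer rather than by the paper's EM algorithm; the data and starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, t, delta, tau):
    """Negative log-likelihood for lognormal lifetimes left truncated at
    tau[i] and right censored (delta[i] = 1 for an observed failure,
    0 for censoring)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                     # keep sigma positive
    z = (np.log(t) - mu) / sigma
    log_f = norm.logpdf(z) - np.log(sigma * t)    # lognormal log-density
    log_S = norm.logsf(z)                         # lognormal log-survival
    log_S_tau = norm.logsf((np.log(tau) - mu) / sigma)   # truncation term
    return -np.sum(delta * log_f + (1 - delta) * log_S - log_S_tau)

t     = np.array([2.1, 3.0, 4.5, 6.2, 7.8])   # toy failure/censoring times
delta = np.array([1,   1,   0,   1,   0  ])
tau   = np.array([0.5, 0.5, 1.0, 1.0, 2.0])   # left-truncation times
res = minimize(neg_loglik, x0=[1.0, 0.0], args=(t, delta, tau),
               method="Nelder-Mead")
```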

12.
Lifetime Data Analysis - This work was motivated by observational studies in pregnancy with spontaneous abortion (SAB) as the outcome. Clearly some women experience the SAB event but the rest do not....

13.
14.
15.
This paper proposes a class of nonparametric estimators of the bivariate survival function under both random truncation and random censoring. In practice, the pair of random variables under consideration may have a certain parametric relationship; the proposed class of nonparametric estimators uses such parametric information via a data-transformation approach and thus provides more accurate estimates than existing methods that ignore it. The large-sample properties of the new class of estimators are established, together with general guidance on how to find a good data transformation. The proposed method is further supported by a simulation study and an application to an economic data set.

16.
In this paper, we propose a defective model induced by a frailty term for modeling the cured proportion. Unlike most cure rate models, defective models have the advantage of modeling the cure rate without adding any extra parameter to the model. Introducing unobserved heterogeneity among individuals brings advantages for the estimated model. The influence of unobserved covariates is incorporated through a proportional hazards model: a frailty term, assumed to follow a gamma distribution, is introduced on the hazard rate to control for the unobservable heterogeneity of the patients. We take the baseline distribution to be either the defective Gompertz or the defective inverse Gaussian distribution, and thus propose and discuss two regression models: the defective gamma-Gompertz and gamma-inverse Gaussian models. Simulation studies are performed to verify the asymptotic properties of the maximum likelihood estimator. Finally, to illustrate the proposed model, we present three applications to real data sets, one of which is analyzed here for the first time, from a study of breast cancer at the A.C.Camargo Cancer Center, São Paulo, Brazil.
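As a quick illustration of the defective mechanism: for a Gompertz baseline with hazard h(t) = a * exp(b * t) and b < 0, the survival function does not vanish at infinity, and its limit is read directly as the cure fraction. This is a standard property of defective Gompertz models; the sketch below uses illustrative parameter values.

```python
import numpy as np

def gompertz_survival(t, a, b):
    """Gompertz survival with hazard h(t) = a * exp(b * t).
    For b < 0 the distribution is defective: S(t) -> exp(a / b) > 0
    as t -> infinity, with no extra cure-fraction parameter."""
    return np.exp(-(a / b) * np.expm1(b * t))

a, b = 0.5, -0.4                       # b < 0 gives a defective model
cure_fraction = np.exp(a / b)          # limit of S(t) as t -> infinity
print(cure_fraction)                   # ~0.287 for these toy values
```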

17.
Time-to-event data, such as time to death, are broadly used in medical research and drug development to understand the efficacy of a therapeutic. For time-to-event data, right censoring (data observed only up to a certain point in time) is common and easy to recognize, and methods that use right-censored data, such as the Kaplan–Meier estimator and the Cox proportional hazards model, are well established. Time-to-event data can also be left truncated, which arises when patients are excluded from the sample because their events occur before a specific milestone, potentially resulting in an immortal time bias. For example, in a study evaluating the association between biomarker status and overall survival, patients who did not live long enough to receive a genomic test are never observed in the study. Left truncation causes selection bias and often leads to an overestimate of survival time. In this tutorial, we use a nationwide electronic health record-derived de-identified database to demonstrate how to analyze left-truncated and right-censored data without bias, using example code in SAS and R.
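The tutorial's own example code is in SAS and R; as a language-neutral sketch of the key idea, the Python fragment below computes a Kaplan–Meier curve in which each subject joins the risk set only after its entry (truncation) time. The toy data are hypothetical.

```python
import numpy as np

def km_left_truncated(entry, exit, event):
    """Kaplan-Meier estimate that handles left truncation by letting a
    subject enter the risk set only after its entry time.
    Requires entry[i] < exit[i]; event[i] = 1 for death, 0 for censoring.
    Returns the distinct event times and the survival curve at them."""
    entry, exit = np.asarray(entry, float), np.asarray(exit, float)
    event = np.asarray(event, bool)
    times = np.unique(exit[event])
    surv, s = [], 1.0
    for t in times:
        at_risk = np.sum((entry < t) & (exit >= t))   # risk set respects entry
        deaths = np.sum((exit == t) & event)
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return times, np.array(surv)

times, surv = km_left_truncated(entry=[0, 1, 2, 2, 3],
                                exit=[4, 3, 6, 5, 7],
                                event=[1, 1, 0, 1, 1])
```

Setting entry to zero for everyone reproduces the naive estimate and makes visible the overestimation of survival that the tutorial warns about.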

18.
19.
We compare the commonly used two-step methods with the joint likelihood method for joint models of longitudinal and survival data via extensive simulations. The longitudinal models include LME, GLMM, and NLME models; the survival models include Cox models and AFT models. We find that the full likelihood method outperforms the two-step methods for various joint models, but it can be computationally challenging when the dimension of the random effects in the longitudinal model is not small. We therefore propose an approximate joint likelihood method that is computationally efficient. The proposed approximation performs well in the joint-model context, and better for more "continuous" longitudinal data. Finally, a real AIDS data example shows that patients with a higher initial viral load or a lower initial CD4 count are more likely to drop out earlier during anti-HIV treatment.

20.
An alternative to current methods for constructing a prediction function for the normal linear regression model is proposed, based on the concept of maximum likelihood. The form of this prediction function is evaluated and normalized to produce a multivariate Student's t-density. Consistency properties are established under regularity conditions, and an empirical comparison, based on the Kullback-Leibler information divergence, is made with some other prediction functions.
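For reference, in the single-future-observation case the familiar Student-t form of a prediction function for the normal linear model is (a standard result; the paper's maximum-likelihood construction and normalization may differ in detail):

$$
\frac{y^{*} - x^{*\top}\hat\beta}{s\sqrt{1 + x^{*\top}(X^\top X)^{-1}x^{*}}} \sim t_{n-p},
\qquad
\hat\beta = (X^\top X)^{-1}X^\top y,
\quad
s^{2} = \frac{\lVert y - X\hat\beta\rVert^{2}}{n-p}.
$$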

