Similar Documents
20 similar documents found.
1.
The shared frailty models allow for unobserved heterogeneity or for statistical dependence between observed survival data. The most commonly used estimation procedure in frailty models is the EM algorithm, but this approach yields a discrete estimator of the distribution and consequently does not allow direct estimation of the hazard function. We show how maximum penalized likelihood estimation can be applied to nonparametric estimation of a continuous hazard function in a shared gamma-frailty model with right-censored and left-truncated data. We examine the problem of obtaining variance estimators for regression coefficients, the frailty parameter and baseline hazard functions. Some simulations for the proposed estimation procedure are presented. A prospective cohort (Paquid) with grouped survival data serves to illustrate the method which was used to analyze the relationship between environmental factors and the risk of dementia.

2.
The Type-II progressive censoring scheme has become very popular for analyzing lifetime data in reliability and survival analysis. However, no published papers address parameter estimation under progressive Type-II censoring for the mixed exponential distribution (MED), which is an important model for reliability and survival analysis. This is the problem that we address in this paper. It is noted that maximum likelihood estimates of the unknown parameters cannot be obtained in closed form due to the complicated log-likelihood function. We solve this problem by using the EM algorithm, which yields closed-form parameter updates at each iteration. The proposed methods are illustrated by both some simulations and a case analysis.
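For illustration, here is a minimal sketch of an EM iteration for a two-component mixed exponential density fitted to complete (uncensored) data; the paper's E-step under progressive Type-II censoring is more involved. The function name `em_mixed_exponential` and the starting values are illustrative choices, not the authors'.

```python
import numpy as np

def em_mixed_exponential(x, n_iter=200, tol=1e-8):
    """EM for a two-component mixed exponential density
    f(x) = p*l1*exp(-l1*x) + (1-p)*l2*exp(-l2*x), complete data only."""
    x = np.asarray(x, dtype=float)
    p, l1, l2 = 0.5, 1.5 / np.mean(x), 0.5 / np.mean(x)
    old = -np.inf
    for _ in range(n_iter):
        # E-step: posterior probability that each observation came from component 1
        d1 = p * l1 * np.exp(-l1 * x)
        d2 = (1 - p) * l2 * np.exp(-l2 * x)
        w = d1 / (d1 + d2)
        # M-step: weighted MLEs, available in closed form
        p = w.mean()
        l1 = w.sum() / (w * x).sum()
        l2 = (1 - w).sum() / ((1 - w) * x).sum()
        # Convergence check on the log-likelihood at the previous parameter values
        ll = np.log(d1 + d2).sum()
        if ll - old < tol:
            break
        old = ll
    return p, l1, l2
```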

3.
We discuss the maximum likelihood estimates (MLEs) of the parameters of the log-gamma distribution based on progressively Type-II censored samples. We use the profile likelihood approach to tackle the problem of the estimation of the shape parameter κ. We derive approximate maximum likelihood estimators of the parameters μ and σ and use them as initial values in the determination of the MLEs through the Newton–Raphson method. Next, we discuss the EM algorithm and propose a modified EM algorithm for the determination of the MLEs. A simulation study is conducted to evaluate the bias and mean square error of these estimators and examine their behavior as the progressive censoring scheme and the shape parameter vary. We also discuss the interval estimation of the parameters μ and σ and show that the intervals based on the asymptotic normality of MLEs have very poor probability coverages for small values of m. Finally, we present two examples to illustrate all the methods of inference discussed in this paper.
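As a generic illustration of the Newton–Raphson step described above, the following sketch performs Newton–Raphson ascent on an arbitrary log-likelihood using central-difference derivatives; it is not the paper's log-gamma-specific implementation, and `loglik` and `theta0` are placeholders for a user-supplied log-likelihood and the approximate MLEs used as starting values.

```python
import numpy as np

def newton_raphson_mle(loglik, theta0, n_iter=50, tol=1e-8, h=1e-5):
    """Generic Newton-Raphson ascent on a log-likelihood, with numerical
    central-difference gradient and Hessian."""
    theta = np.asarray(theta0, dtype=float)
    k = theta.size
    for _ in range(n_iter):
        grad = np.zeros(k)
        hess = np.zeros((k, k))
        for i in range(k):
            ei = np.zeros(k); ei[i] = h
            grad[i] = (loglik(theta + ei) - loglik(theta - ei)) / (2 * h)
            for j in range(k):
                ej = np.zeros(k); ej[j] = h
                hess[i, j] = (loglik(theta + ei + ej) - loglik(theta + ei - ej)
                              - loglik(theta - ei + ej) + loglik(theta - ei - ej)) / (4 * h * h)
        # Newton step for maximization: theta_new = theta - H^{-1} grad
        step = np.linalg.solve(hess, grad)
        theta = theta - step
        if np.max(np.abs(step)) < tol:
            break
    return theta
```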

4.
The joint probability density function, evaluated at the observed data, is commonly used as the likelihood function to compute maximum likelihood estimates. For some models, however, there exist paths in the parameter space along which this density-approximation likelihood goes to infinity and maximum likelihood estimation breaks down. In all applications, however, observed data are really discrete due to the round-off or grouping error of measurements. The “correct likelihood” based on interval censoring can eliminate the problem of an unbounded likelihood. This article categorizes the models leading to unbounded likelihoods into three groups and illustrates the density-approximation breakdown with specific examples. Although it is usually possible to infer how given data were rounded, when this is not possible one must choose a width for the interval censoring; we therefore study the effect of the round-off on estimation. We also give sufficient conditions for the joint density to provide the same maximum likelihood estimate as the correct likelihood, as the round-off error goes to zero.
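A minimal sketch of the two likelihoods being contrasted, written for normally distributed data rounded to a grid of width `delta` (an assumed round-off width); the functions and parameterization are illustrative only, not the article's examples.

```python
import numpy as np
from scipy.stats import norm

def density_loglik(theta, x):
    """Density-approximation log-likelihood (can be unbounded for some models)."""
    mu, sigma = theta
    return norm.logpdf(x, loc=mu, scale=sigma).sum()

def interval_loglik(theta, x, delta):
    """'Correct' log-likelihood: each observation is only known to lie in
    [x - delta/2, x + delta/2] because of round-off."""
    mu, sigma = theta
    p = norm.cdf(x + delta / 2, mu, sigma) - norm.cdf(x - delta / 2, mu, sigma)
    return np.log(p).sum()
```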

5.
We describe quantum tomography as an inverse statistical problem in which the quantum state of a light beam is the unknown parameter and the data are given by results of measurements performed on identical quantum systems. The state can be represented as an infinite dimensional density matrix or equivalently as a density on the plane called the Wigner function. We present consistency results for pattern function projection estimators and for sieve maximum likelihood estimators for both the density matrix of the quantum state and its Wigner function. We illustrate the performance of the estimators on simulated data. An EM algorithm is proposed for practical implementation. There remain many open problems, e.g. rates of convergence, adaptation and studying other estimators; a main purpose of the paper is to bring these to the attention of the statistical community.

6.
The authors propose a reduction technique and versions of the EM algorithm and the vertex exchange method to perform constrained nonparametric maximum likelihood estimation of the cumulative distribution function given interval censored data. The constrained vertex exchange method can be used in practice to produce likelihood intervals for the cumulative distribution function. In particular, the authors show how to produce a confidence interval with known asymptotic coverage for the survival function given current status data.
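The following is a hedged sketch of an EM-type (self-consistency) update for the nonparametric MLE of the distribution function under interval-censored data, in the spirit of the classical Turnbull iteration; it does not implement the authors' reduction technique or the vertex exchange method, and it assumes every observed interval (L_i, R_i] contains at least one candidate support point.

```python
import numpy as np

def npmle_interval_censored(L, R, support, n_iter=500, tol=1e-10):
    """EM (self-consistency) iteration for probability masses p_j placed on
    candidate support points, given interval-censored observations (L_i, R_i]."""
    L, R = np.asarray(L, float), np.asarray(R, float)
    s = np.asarray(support, float)
    # A[i, j] = 1 if support point s_j is compatible with observation i's interval
    A = ((s[None, :] > L[:, None]) & (s[None, :] <= R[:, None])).astype(float)
    m = s.size
    p = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        denom = A @ p                                   # probability of each observed interval
        p_new = (A / denom[:, None] * p).mean(axis=0)   # E-step and M-step combined
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return s, p
```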

7.
Maximum a posteriori estimation (MAPE) and maximum likelihood estimation (MLE) are both important methods of parameter point estimation. Building on an introduction to the MAPE approach for the general hierarchical linear model (HLM), we give the specific steps of the expectation-maximization (EM) algorithm for this method and derive variance estimators for the MAPE estimates using the second derivatives of the log-likelihood function. We also use simulated data to compare MAPE and MLE under the EM algorithm. For the fixed effects, the two methods yield identical estimates. When the number of groups is small, the variance-covariance components computed by the EM-based MAPE are closer to the true values than those of MLE, and MAPE requires notably fewer iterations than MLE.
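As a small illustration of the variance calculation mentioned above, the sketch below approximates the variance-covariance matrix of a MAP (or ML) estimate by inverting the negative Hessian of the log-posterior (or log-likelihood) at the estimate, using central differences; `log_posterior` and `theta_map` are hypothetical placeholders, not the HLM-specific formulas of the paper.

```python
import numpy as np

def map_variance_estimate(log_posterior, theta_map, h=1e-5):
    """Approximate variance-covariance matrix of a MAP estimate as the inverse
    of the negative Hessian of the log-posterior, via central differences."""
    theta_map = np.asarray(theta_map, float)
    k = theta_map.size
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = h
            ej = np.zeros(k); ej[j] = h
            H[i, j] = (log_posterior(theta_map + ei + ej) - log_posterior(theta_map + ei - ej)
                       - log_posterior(theta_map - ei + ej) + log_posterior(theta_map - ei - ej)) / (4 * h * h)
    return np.linalg.inv(-H)
```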

8.
In this paper, a new compounding distribution, named the Weibull–Poisson distribution, is introduced. The shape of the failure rate function of the new compounding distribution is flexible: it can be decreasing, increasing, upside-down bathtub-shaped or unimodal. A comprehensive mathematical treatment of the proposed distribution is provided, including expressions for its density, cumulative distribution function, survival function, failure rate function, kth raw moment and quantiles. A maximum likelihood estimation procedure based on the EM algorithm is developed for parameter estimation. Asymptotic properties of the maximum likelihood estimates are discussed, and intensive simulation studies are conducted to evaluate the performance of the parameter estimation. The use of the proposed distribution is illustrated with examples.
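One common way to construct a Weibull–Poisson model is as the minimum of a zero-truncated Poisson number of independent Weibull variables; the sketch below codes the survival, density and failure rate functions under that construction. The parameterization (shape `alpha`, scale `beta`, compounding rate `lam`) is an assumption and may differ from the paper's.

```python
import numpy as np

def wp_survival(x, alpha, beta, lam):
    """Survival of X = min(W_1, ..., W_N), W_i ~ Weibull(alpha, beta),
    N ~ zero-truncated Poisson(lam)."""
    sw = np.exp(-(x / beta) ** alpha)                       # Weibull survival
    return np.expm1(lam * sw) / np.expm1(lam)

def wp_pdf(x, alpha, beta, lam):
    sw = np.exp(-(x / beta) ** alpha)
    fw = (alpha / beta) * (x / beta) ** (alpha - 1) * sw    # Weibull density
    return lam * fw * np.exp(lam * sw) / np.expm1(lam)

def wp_hazard(x, alpha, beta, lam):
    """Failure rate function = density / survival."""
    return wp_pdf(x, alpha, beta, lam) / wp_survival(x, alpha, beta, lam)
```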

9.
In this article, we consider a competing cause scenario and assume the wider family of Conway–Maxwell–Poisson (COM–Poisson) distributions to model the number of competing causes. Assuming interval-censored data, the main contribution is in developing the steps of the expectation maximization (EM) algorithm to determine the maximum likelihood estimates (MLEs) of the model parameters. A profile likelihood approach within the EM framework is proposed to estimate the COM–Poisson shape parameter. An extensive simulation study is conducted to evaluate the performance of the proposed EM algorithm. Model selection within the wider class of COM–Poisson distributions is carried out using the likelihood ratio test and information-based criteria. A study to demonstrate the effect of model mis-specification is also carried out. Finally, the proposed estimation method is applied to data on smoking cessation and a detailed analysis of the obtained results is presented.

10.
Various solutions to the parameter estimation problem of a recently introduced multivariate Pareto distribution are developed and exemplified numerically. Namely, a density of the aforementioned multivariate Pareto distribution with respect to a dominating measure, rather than the corresponding Lebesgue measure, is specified and then employed to investigate the maximum likelihood estimation (MLE) approach. Also, in an attempt to fully enjoy the common shock origins of the multivariate model of interest, an adapted variant of the expectation-maximization (EM) algorithm is formulated and studied. The method of moments is discussed as a convenient way to obtain starting values for the numerical optimization procedures associated with the MLE and EM methods.

11.
The authors address the problem of estimating an inter‐event distribution on the basis of count data. They derive a nonparametric maximum likelihood estimate of the inter‐event distribution utilizing the EM algorithm both in the case of an ordinary renewal process and in the case of an equilibrium renewal process. In the latter case, the iterative estimation procedure follows the basic scheme proposed by Vardi for estimating an inter‐event distribution on the basis of time‐interval data; it combines the outputs of the E‐step corresponding to the inter‐event distribution and to the length‐biased distribution. The authors also investigate a penalized likelihood approach to provide the proposed estimation procedure with regularization capabilities. They evaluate the practical estimation procedure using simulated count data and apply it to real count data representing the elongation of coffee‐tree leafy axes.

12.
In this paper, we consider the statistical inference for the success probability in the case of start-up demonstration tests in which rejection of units is possible when a pre-fixed number of failures is observed before the required number of consecutive successes is achieved for acceptance of the unit. Since the expected value of the stopping time is not a monotone function of the unknown parameter, the method of moments is not useful in this situation. Therefore, we discuss two estimation methods for the success probability: (1) maximum likelihood estimation (MLE) via the expectation-maximization (EM) algorithm and (2) Bayesian estimation with a beta prior. We examine the small-sample properties of the MLE and the Bayesian estimator. Finally, we present an example to illustrate the methods of inference discussed here.
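For Bernoulli trials the likelihood of the success probability is proportional to p^S (1-p)^F whatever stopping rule produced the counts, so a Beta(a, b) prior leads to a Beta(a+S, b+F) posterior. The sketch below computes the posterior mean; the start-up-test specifics (consecutive-success acceptance, failure-count rejection) enter only through the observed counts, and the example numbers are invented.

```python
def beta_posterior_estimate(successes, failures, a=1.0, b=1.0):
    """Posterior mean of the success probability under a Beta(a, b) prior.
    For Bernoulli trials the likelihood is proportional to p^S (1-p)^F,
    whatever stopping rule produced the counts."""
    a_post = a + successes
    b_post = b + failures
    return a_post / (a_post + b_post), (a_post, b_post)

# Example (invented counts): 14 successes and 3 failures observed before the test stopped.
est, (a_post, b_post) = beta_posterior_estimate(14, 3)
```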

13.
We propose a nonlinear mixed-effects framework to jointly model longitudinal and repeated time-to-event data. A parametric nonlinear mixed-effects model is used for the longitudinal observations and a parametric mixed-effects hazard model for repeated event times. We show the importance for parameter estimation of properly calculating the conditional density of the observations (given the individual parameters) in the presence of interval and/or right censoring. Parameters are estimated by maximizing the exact joint likelihood with the stochastic approximation expectation–maximization algorithm. This workflow for joint models is now implemented in the Monolix software, and illustrated here on five simulated and two real datasets.

14.
It is well known that the normal mixture with unequal variances has an unbounded likelihood and thus the corresponding global maximum likelihood estimator (MLE) is undefined. One of the commonly used solutions is to put a constraint on the parameter space so that the likelihood is bounded; one can then run the EM algorithm on this constrained parameter space to find the constrained global MLE. However, choosing the constraint parameter is a difficult issue, and in many cases different choices may give different constrained global MLEs. In this article, we propose a profile log-likelihood method and a graphical way to find the maximum interior mode. Based on our proposed method, we can also see how the constraint parameter, used in the constrained EM algorithm, affects the constrained global MLE. Using two simulation examples and a real data application, we demonstrate the success of our new method in solving the unboundedness of the mixture likelihood and locating the maximum interior mode.
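A minimal sketch of a constrained EM iteration for a two-component normal mixture in which both variances are bounded below by a constraint parameter `c`; running it for several values of c shows how the constraint affects the constrained global MLE. This is the constrained-EM baseline discussed above, not the authors' profile log-likelihood or graphical method; names and starting values are illustrative.

```python
import numpy as np

def constrained_em_2normal(x, c, n_iter=500, tol=1e-8):
    """EM for a two-component normal mixture with both variances constrained
    to be >= c, which keeps the likelihood bounded."""
    x = np.asarray(x, float)
    p, mu1, mu2 = 0.5, np.quantile(x, 0.25), np.quantile(x, 0.75)
    v1 = v2 = max(np.var(x), c)
    old = -np.inf
    for _ in range(n_iter):
        d1 = p * np.exp(-(x - mu1) ** 2 / (2 * v1)) / np.sqrt(2 * np.pi * v1)
        d2 = (1 - p) * np.exp(-(x - mu2) ** 2 / (2 * v2)) / np.sqrt(2 * np.pi * v2)
        w = d1 / (d1 + d2)                                   # E-step
        p = w.mean()                                         # M-step
        mu1 = (w * x).sum() / w.sum()
        mu2 = ((1 - w) * x).sum() / (1 - w).sum()
        v1 = max((w * (x - mu1) ** 2).sum() / w.sum(), c)    # truncate at the constraint
        v2 = max(((1 - w) * (x - mu2) ** 2).sum() / (1 - w).sum(), c)
        ll = np.log(d1 + d2).sum()
        if ll - old < tol:
            break
        old = ll
    return p, mu1, mu2, v1, v2, ll
```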

15.
The local maximum likelihood estimate θ̂_t of a parameter in a statistical model f(x, θ) is defined by maximizing a weighted version of the likelihood function which gives more weight to observations in the neighbourhood of t. The paper studies the sense in which f(t, θ̂_t) is closer to the true distribution g(t) than the usual estimate f(t, θ̂) is. Asymptotic results are presented for the case in which the model misspecification becomes vanishingly small as the sample size tends to ∞. In this setting, the relative entropy risk of the local method is better than that of maximum likelihood. The form of optimum weights for the local likelihood is obtained and illustrated for the normal distribution.
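A hedged sketch of one simple version of this idea for a normal location working model: the log-likelihood contributions are weighted by a Gaussian kernel centred at t, and for this model the weighted maximizer has a closed form. The kernel, bandwidth and working model are assumptions; the paper's optimal weights may differ.

```python
import numpy as np

def local_mle_normal_mean(x, t, bandwidth):
    """Local MLE of theta in a N(theta, sigma^2) working model: maximize the
    kernel-weighted log-likelihood sum_i K((x_i - t)/h) * log f(x_i; theta).
    For this model the maximizer is the kernel-weighted sample mean."""
    x = np.asarray(x, float)
    w = np.exp(-0.5 * ((x - t) / bandwidth) ** 2)   # Gaussian kernel weights
    return (w * x).sum() / w.sum()
```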

16.
Empirical likelihood based variable selection
Information criteria form an important class of model/variable selection methods in statistical analysis. Parametric likelihood is a crucial part of these methods. In some applications, such as generalized linear models, the models are only specified by a set of estimating functions. To overcome the unavailability of a well-defined likelihood function, information criteria under empirical likelihood are introduced. Under this setup, we solve the existence problem of the profile empirical likelihood caused by over-constraint in variable selection problems. The asymptotic properties of the new method are investigated, and the method is shown to be consistent in selecting the variables under mild conditions. Simulation studies find that the proposed method has performance comparable to the parametric information criteria when a suitable parametric model is available, and is superior when the parametric model assumption is violated. A real data set is also used to illustrate the usefulness of the new method.

17.
We address the issue of performing inference on the parameters that index the modified extended Weibull (MEW) distribution. We show that numerical maximization of the MEW log-likelihood function can be problematic. It is even possible to encounter maximum likelihood estimates that are not finite, that is, it is possible to encounter monotonic likelihood functions. We consider different penalization schemes to improve maximum likelihood point estimation. A penalization scheme based on the Jeffreys’ invariant prior is shown to be particularly useful. Simulation results on point estimation, interval estimation, and hypothesis testing inference are presented. Two empirical applications are presented and discussed.

18.
The parameter estimation problem for a Markov jump process sampled at equidistant time points is considered here. Unlike the diffusion case where a closed form of the likelihood function is usually unavailable, here an explicit expansion of the likelihood function of the sampled chain is provided. Under suitable ergodicity conditions on the jump process, the consistency and the asymptotic normality of the likelihood estimator are established as the observation period tends to infinity. Simulation experiments are conducted to demonstrate the computational facility of the method.
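For a jump process with generator matrix Q observed at equidistant times Δ apart, the sampled chain is Markov with transition matrix exp(ΔQ); a minimal sketch of the resulting log-likelihood is given below. It uses a matrix exponential directly and does not reproduce the paper's explicit expansion.

```python
import numpy as np
from scipy.linalg import expm

def sampled_chain_loglik(Q, states, delta):
    """Log-likelihood of a discretely observed Markov jump process with
    generator matrix Q, observed at equidistant times delta apart.
    `states` is the sequence of observed state indices."""
    P = expm(delta * np.asarray(Q, float))   # transition matrix of the sampled chain
    s = np.asarray(states, int)
    return np.sum(np.log(P[s[:-1], s[1:]]))  # sum of log transition probabilities
```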

19.
The EM algorithm is a popular method for parameter estimation in situations where the data can be viewed as being incomplete. As each E-step visits each data point on a given iteration, the EM algorithm requires considerable computation time in its application to large data sets. Two versions, the incremental EM (IEM) algorithm and a sparse version of the EM algorithm, were proposed recently by Neal R.M. and Hinton G.E. in Jordan M.I. (Ed.), Learning in Graphical Models, Kluwer, Dordrecht, 1998, pp. 355–368, to reduce the computational cost of applying the EM algorithm. With the IEM algorithm, the available n observations are divided into B (B ≤ n) blocks and the E-step is implemented for only a block of observations at a time before the next M-step is performed. With the sparse version of the EM algorithm for the fitting of mixture models, only those posterior probabilities of component membership of the mixture that are above a specified threshold are updated; the remaining component-posterior probabilities are held fixed. In this paper, simulations are performed to assess the relative performances of the IEM algorithm with various numbers of blocks and the standard EM algorithm. In particular, we propose a simple rule for choosing the number of blocks with the IEM algorithm. For the IEM algorithm in the extreme case of one observation per block, we provide efficient updating formulas, which avoid the direct calculation of the inverses and determinants of the component-covariance matrices. Moreover, a sparse version of the IEM algorithm (SPIEM) is formulated by combining the sparse E-step of the EM algorithm and the partial E-step of the IEM algorithm. This SPIEM algorithm can further reduce the computation time of the IEM algorithm.
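A compact sketch of the IEM idea for a two-component normal mixture: block-level sufficient statistics are cached, one block is refreshed per partial E-step, and each M-step reuses the stored statistics of the remaining blocks. The block-count rule, the efficient covariance updates and the sparse (SPIEM) variant described above are not shown; all names and starting values are illustrative.

```python
import numpy as np

def iem_2component_normal(x, B=10, sweeps=20):
    """Incremental EM (IEM) for a two-component normal mixture:
    only one block of data is re-visited in each partial E-step."""
    x = np.asarray(x, float)
    blocks = np.array_split(x, B)
    p, mu1, mu2 = 0.5, np.quantile(x, 0.25), np.quantile(x, 0.75)
    v1 = v2 = np.var(x)

    def estep_stats(xb):
        d1 = p * np.exp(-(xb - mu1) ** 2 / (2 * v1)) / np.sqrt(2 * np.pi * v1)
        d2 = (1 - p) * np.exp(-(xb - mu2) ** 2 / (2 * v2)) / np.sqrt(2 * np.pi * v2)
        w = d1 / (d1 + d2)
        # Block sufficient statistics: counts, sums, sums of squares per component
        return np.array([w.sum(), (w * xb).sum(), (w * xb ** 2).sum(),
                         (1 - w).sum(), ((1 - w) * xb).sum(), ((1 - w) * xb ** 2).sum()])

    stats = [estep_stats(b) for b in blocks]           # initial full E-step
    n = x.size
    for _ in range(sweeps):
        for k in range(B):
            stats[k] = estep_stats(blocks[k])          # partial E-step: block k only
            T = np.sum(stats, axis=0)                  # M-step from cached statistics
            p = T[0] / n
            mu1, mu2 = T[1] / T[0], T[4] / T[3]
            v1 = T[2] / T[0] - mu1 ** 2
            v2 = T[5] / T[3] - mu2 ** 2
    return p, mu1, mu2, v1, v2
```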

20.
The expectation-maximization (EM) method facilitates computation of maximum likelihood (ML) and maximum penalized likelihood (MPL) solutions. The procedure requires specification of unobservable complete data which augment the measured or incomplete data. This specification defines a conditional expectation of the complete-data log-likelihood function which is computed in the E-step. The EM algorithm is most effective when maximizing the function Q(θ) defined in the E-step is easier than maximizing the likelihood function.

The Monte Carlo EM (MCEM) algorithm of Wei & Tanner (1990) was introduced for problems where computation of Q is difficult or intractable. However, Monte Carlo can be computationally expensive, e.g. in signal processing applications involving large numbers of parameters. We provide another approach: a modification of the standard EM algorithm avoiding computation of conditional expectations.
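A minimal MCEM sketch under an assumed toy model (a two-component normal mixture with known weights and variance): the E-step expectation is replaced by an average of the complete-data quantities over latent labels drawn from their conditional distribution. This illustrates the MCEM idea of Wei & Tanner rather than the paper's signal-processing setting or its proposed modification.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def mcem_2component_means(x, sigma=1.0, n_iter=50, n_draws=200):
    """Monte Carlo EM for the means of a two-component N(mu_k, sigma^2) mixture
    with known, equal weights 0.5. The exact E-step expectation is replaced by
    an average over sampled component labels."""
    x = np.asarray(x, float)
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    for _ in range(n_iter):
        # Posterior probability of component 1 under the current parameters
        d1 = norm.pdf(x, mu[0], sigma)
        d2 = norm.pdf(x, mu[1], sigma)
        w1 = d1 / (d1 + d2)
        # Monte Carlo E-step: draw latent labels and average the indicators
        z = rng.random((n_draws, x.size)) < w1   # True -> component 1
        w_hat = z.mean(axis=0)                   # MC estimate of E[z_i | x_i]
        # M-step on the approximated Q: weighted means
        mu[0] = (w_hat * x).sum() / w_hat.sum()
        mu[1] = ((1 - w_hat) * x).sum() / (1 - w_hat).sum()
    return mu
```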
