Similar Articles
20 similar articles found (search time: 515 ms)
1.
Summary. We develop a new class of continuous-time autoregressive fractionally integrated moving average (CARFIMA) models, which are useful for modelling both regularly spaced and irregularly spaced discrete-time long-memory data. We derive the autocovariance function of a stationary CARFIMA model and study maximum likelihood estimation of a regression model with CARFIMA errors, based on discrete-time data and via the innovations algorithm. It is shown that the maximum likelihood estimator is asymptotically normal, and its finite sample properties are studied through simulation. The efficacy of the proposed approach is demonstrated with a data set from an environmental study.
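The autocovariance of a CARFIMA model has no simple elementary form, but its discrete-time analogue does. As a hedged illustration of the long-memory behaviour involved (not the paper's continuous-time model), the sketch below computes the autocovariance function of an ARFIMA(0, d, 0) process from the standard gamma-function formula:

```python
from math import gamma

def arfima_acvf(d, max_lag, sigma2=1.0):
    """Autocovariances gamma(0..max_lag) of an ARFIMA(0, d, 0) process.

    Uses gamma(0) = sigma2 * Gamma(1 - 2d) / Gamma(1 - d)^2 and the recursion
    gamma(k) = gamma(k - 1) * (k - 1 + d) / (k - d), valid for |d| < 1/2.
    """
    g = [sigma2 * gamma(1 - 2 * d) / gamma(1 - d) ** 2]
    for k in range(1, max_lag + 1):
        g.append(g[-1] * (k - 1 + d) / (k - d))
    return g
```

For 0 < d < 1/2 the autocovariances are positive and decay hyperbolically rather than geometrically, which is the defining long-memory property.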

2.
We present a maximum likelihood estimation procedure for the multivariate frailty model. The estimation is based on a Monte Carlo EM algorithm. The expectation step is approximated by averaging over random samples drawn from the posterior distribution of the frailties using rejection sampling. The maximization step reduces to a standard partial likelihood maximization. We also propose a simple rule, based on the relative change in the parameter estimates, for choosing the sample size at each iteration and a stopping time for the algorithm. A key feature is that convergence of the algorithm is secured through this sample size determination together with an efficient sampling technique. The method is illustrated using a rat carcinogenesis dataset and data on vase lifetimes of cut roses, and the estimation results are compared with approximate inference based on penalized partial likelihood on these two examples. Unlike penalized partial likelihood estimation, the proposed full maximum likelihood estimation method accounts for all the uncertainty when estimating standard errors for the parameters.
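The Monte Carlo EM idea — replace the intractable E-step expectation with an average over draws of the latent variables — can be shown on a toy model. The sketch below applies it to a two-component Gaussian mixture with unit variances, not to the multivariate frailty model of the abstract; the fixed Monte Carlo sample size, iteration count and starting values are all illustrative choices.

```python
import numpy as np

def mcem_two_gaussians(y, n_iter=50, m=200, seed=0):
    """Toy Monte Carlo EM for a two-component Gaussian mixture (unit variance).

    The E-step expectation is replaced by an average over m latent-label
    samples drawn from their posterior; the M-step then maximises the
    resulting Monte Carlo approximation of the complete-data likelihood.
    """
    rng = np.random.default_rng(seed)
    mu = np.array([y.min(), y.max()])       # crude starting values
    pi = 0.5                                # weight of component 0
    for _ in range(n_iter):
        # posterior probability that each point comes from component 1
        d0 = pi * np.exp(-0.5 * (y - mu[0]) ** 2)
        d1 = (1 - pi) * np.exp(-0.5 * (y - mu[1]) ** 2)
        p1 = d1 / (d0 + d1)
        # stochastic E-step: m Monte Carlo draws of the latent labels
        z = rng.random((m, y.size)) < p1    # shape (m, n); True -> component 1
        w1 = z.mean(axis=0)                 # MC estimate of E[z | y]
        # M-step on the Monte Carlo complete-data likelihood
        pi = 1 - w1.mean()
        mu = np.array([np.sum((1 - w1) * y) / np.sum(1 - w1),
                       np.sum(w1 * y) / np.sum(w1)])
    return pi, mu
```

The paper's rule of increasing the Monte Carlo sample size with the iteration number (rather than the fixed m used here) is what turns this stochastic scheme into one with absolute convergence.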

3.
The generalized exponential distribution is one of the most commonly used distributions for analyzing lifetime data. It has several desirable properties and can be used quite effectively to analyze skewed lifetime data. The main aim of this paper is to introduce an absolutely continuous bivariate generalized exponential distribution using the method of Block and Basu (1974); in effect, the Block and Basu exponential distribution is extended to the generalized exponential distribution. We call the proposed model the Block and Basu bivariate generalized exponential distribution and discuss its different properties. In this case the joint probability density function and the joint cumulative distribution function can be expressed in compact forms. The model has four unknown parameters, and the maximum likelihood estimators cannot be obtained in explicit form; computing them directly requires solving a four-dimensional optimization problem. An EM algorithm is proposed to compute the maximum likelihood estimates of the unknown parameters. One data analysis is provided for illustrative purposes. Finally, we propose some generalizations of the model and compare these with each other.

4.
This paper examines the formation of maximum likelihood estimates of cell means in analysis-of-variance problems with missing observations in some cells. Methods of estimating the means for missing cells have a long history, including iterative maximum likelihood techniques, approximation techniques and ad hoc techniques. The use of the EM algorithm to form maximum likelihood estimates has resolved most of the issues associated with this problem. Implementation of the EM algorithm entails specification of a reduced model. As demonstrated in this paper, when there are several missing cells, it is possible to specify a reduced model that results in an unidentifiable likelihood. The EM algorithm in this case does not converge, although the slow divergence may often be mistaken by the unwary for convergence. This paper presents a simple matrix method of determining whether or not the reduced model results in an identifiable likelihood, and consequently in an EM algorithm that converges. We also show the EM algorithm in this case to be equivalent to a method which yields a closed-form solution.
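A matrix method of this kind amounts to a rank condition: the likelihood of a reduced additive model is identifiable only if the design matrix restricted to the observed cells has full column rank. A minimal sketch for a two-way layout — the treatment coding and the additive reduced model are assumptions of this illustration, not necessarily the paper's exact criterion:

```python
import numpy as np

def additive_model_identifiable(observed_cells, a, b):
    """Check whether the additive model mu + alpha_i + beta_j is identifiable
    from the observed cells of an a x b layout.

    Builds the design matrix (treatment coding: alpha_0 = beta_0 = 0) over the
    observed cells only and tests it for full column rank.
    """
    rows = []
    for (i, j) in observed_cells:
        row = np.zeros(1 + (a - 1) + (b - 1))
        row[0] = 1.0                     # intercept mu
        if i > 0:
            row[i] = 1.0                 # alpha_i, i = 1..a-1
        if j > 0:
            row[a + j - 1] = 1.0         # beta_j, j = 1..b-1
        rows.append(row)
    X = np.array(rows)
    return np.linalg.matrix_rank(X) == X.shape[1]
```

A full 2 x 2 layout, or one with a single missing cell, passes the check; observing only the two diagonal cells leaves the likelihood unidentifiable, which is the situation in which EM silently diverges.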

5.
This paper deals with the prediction of time series with missing data using an alternative formulation of Holt's model with additive errors. This formulation simplifies both the computation of maximum likelihood estimates of all the unknowns in the model and the computation of point forecasts. In the presence of missing data, the EM algorithm is used to obtain maximum likelihood estimates and point forecasts. Based on this application we propose a leave-one-out algorithm for the data-transformation selection problem, which allows us to analyse Holt's model with multiplicative errors. Numerical results show the performance of these procedures in obtaining robust forecasts.
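For reference, Holt's linear trend recursions and point forecasts look as follows in the complete-data case; the missing-data handling and maximum likelihood estimation of the smoothing constants are exactly what the paper's alternative formulation adds, and the initial values below are an arbitrary illustrative choice.

```python
def holt_forecast(y, alpha=0.5, beta=0.5, h=1):
    """Point forecasts from Holt's linear trend method (additive errors).

    Standard level/trend recursions with simple initial values; returns the
    h-step-ahead forecasts level + k * trend for k = 1..h.
    """
    level, trend = y[0], y[1] - y[0]      # simple initialisation
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + k * trend for k in range(1, h + 1)]
```

On an exactly linear series the recursions reproduce the trend, so the forecasts continue the line.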

6.
Parametric incomplete data models defined by ordinary differential equations (ODEs) are widely used in biostatistics to describe biological processes accurately. Their parameters are estimated on approximate models, whose regression functions are evaluated by a numerical integration method. Accurate and efficient estimation of these parameters is a critical issue. This paper proposes parameter estimation methods involving either a stochastic approximation EM algorithm (SAEM) for maximum likelihood estimation, or a Gibbs sampler for the Bayesian approach. Both algorithms involve simulating the non-observed data from conditional distributions using Hastings–Metropolis (H–M) algorithms. A modified H–M algorithm, including an original local linearization scheme to solve the ODEs, is proposed to reduce the computational time significantly. The convergence of all these algorithms on the approximate model is proved, and the errors induced by the numerical solving method on the conditional distribution, the likelihood and the posterior distribution are bounded. The Bayesian and maximum likelihood estimation methods are illustrated on a simulated pharmacokinetic nonlinear mixed-effects model defined by an ODE. Simulation results illustrate the ability of these algorithms to provide accurate estimates.

7.
The Block and Basu bivariate exponential distribution is one of the most popular absolutely continuous bivariate distributions, and extensive work has been done on it over the past several decades. Interestingly, the Block and Basu bivariate exponential model can be extended to the Weibull case as well. We call this new model the Block and Basu bivariate Weibull model and consider its different properties. The model has four unknown parameters and the maximum likelihood estimators cannot be obtained in closed form; computing them directly requires solving a four-dimensional optimization problem. We propose to use the EM algorithm for computing the maximum likelihood estimates of the unknown parameters; the proposed algorithm can be carried out by solving one non-linear equation at each EM step. Our method can also be used to compute the maximum likelihood estimates for the Block and Basu bivariate exponential model. One data analysis has been performed for illustrative purposes.

8.
We consider the use of Monte Carlo methods to obtain maximum likelihood estimates for random effects models and distinguish between the pointwise and functional approaches. We explore the relationship between the two approaches and compare them with the EM algorithm. The functional approach is more ambitious, but the approximation is local in nature, which we demonstrate graphically using two simple examples. A remedy is to obtain successively better approximations of the relative likelihood function near the true maximum likelihood estimate. To save computing time, we use only one Newton iteration to approximate the maximiser of each Monte Carlo likelihood and show that this is equivalent to the pointwise approach. The procedure is applied to fit a latent process model to a set of polio incidence data. The paper ends with a comparison between the marginal likelihood and the recently proposed hierarchical likelihood, which avoids integration altogether.

9.
In this article, we examine the performance of the maximum likelihood estimates of the Burr XII parameters for constant-stress partially accelerated life tests under multiply censored data. Two maximum likelihood estimation methods are considered. One method is based on the observed-data likelihood function, with the maximum likelihood estimates obtained by the quasi-Newton algorithm. The other method is based on the complete-data likelihood function, with the maximum likelihood estimates derived by the expectation-maximization (EM) algorithm. The variance–covariance matrices are derived to construct confidence intervals for the parameters. The performance of the two algorithms is compared in a simulation study, whose results show that maximum likelihood estimation via the EM algorithm outperforms the quasi-Newton algorithm in terms of absolute relative bias, bias, root mean square error and coverage rate. Finally, a numerical example is given to illustrate the performance of the proposed methods.
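The observed-data-likelihood route can be sketched for the uncensored, unaccelerated Burr XII special case: write down the log-likelihood and hand it to a quasi-Newton optimiser. The two-parameter density and the log-scale reparametrisation below are illustrative simplifications, not the paper's accelerated, multiply censored setup.

```python
import numpy as np
from scipy.optimize import minimize

def burr12_mle(x):
    """MLE of the Burr XII shape parameters (c, k) by a quasi-Newton method.

    Density: f(x) = c * k * x^(c-1) * (1 + x^c)^(-(k+1)), x > 0.
    Optimises the average negative log-likelihood on the log scale so the
    positivity constraints on c and k are handled automatically.
    """
    def negloglik(theta):
        c, k = np.exp(theta)
        return -(np.log(c) + np.log(k)
                 + (c - 1) * np.mean(np.log(x))
                 - (k + 1) * np.mean(np.log1p(x ** c)))
    res = minimize(negloglik, x0=np.log([1.0, 1.0]), method="BFGS")
    return np.exp(res.x)
```

Data from the model can be simulated by inverting F(x) = 1 - (1 + x^c)^(-k), which also gives a quick sanity check on the estimator.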

10.
Probabilistic Principal Component Analysis
Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based on a probability model. We demonstrate how the principal axes of a set of observed data vectors may be determined through maximum likelihood estimation of parameters in a latent variable model that is closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss, with illustrative examples, the advantages conveyed by this probabilistic approach to PCA.
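The probabilistic PCA model also has a closed-form maximum likelihood solution via the eigendecomposition of the sample covariance, which the iterative EM algorithm of the paper converges to. A sketch of that closed form (the rotation ambiguity in W is resolved here by taking the rotation to be the identity):

```python
import numpy as np

def ppca_ml(Y, q):
    """Closed-form maximum likelihood fit of probabilistic PCA.

    Y is n x d. Returns (W, sigma2): sigma2 is the average of the d - q
    smallest sample-covariance eigenvalues, and the columns of W are the
    first q eigenvectors scaled by sqrt(lambda_j - sigma2).
    """
    Yc = Y - Y.mean(axis=0)
    S = Yc.T @ Yc / Y.shape[0]            # ML sample covariance
    lam, U = np.linalg.eigh(S)            # ascending eigenvalues
    lam, U = lam[::-1], U[:, ::-1]        # reorder to descending
    sigma2 = lam[q:].mean()
    W = U[:, :q] * np.sqrt(np.maximum(lam[:q] - sigma2, 0.0))
    return W, sigma2
```

At the maximum likelihood solution the model covariance W W^T + sigma2 * I reproduces the leading q sample eigenvalues exactly and replaces the trailing ones by their average.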

11.
A maximum likelihood estimation procedure is presented for the frailty model. The procedure is based on a stochastic Expectation Maximization (EM) algorithm which converges quickly to the maximum likelihood estimate. The usual expectation step is replaced by a stochastic approximation of the complete log-likelihood using simulated values of the unobserved frailties, whereas the maximization step follows the same lines as that of the EM algorithm. The procedure simultaneously yields estimates of the marginal likelihood and of the observed Fisher information matrix, and requires less computation time. A wide variety of multivariate frailty models, without any assumption on the covariance structure, can be studied. To illustrate the procedure, a Gaussian frailty model with two frailty terms is introduced. The numerical results, based on simulated data and on real bladder cancer data, are more accurate than those obtained by the Expectation Maximization Laplace algorithm and the Monte Carlo Expectation Maximization algorithm. Finally, since frailty models are used in many fields such as ecology, biology and economics, the proposed algorithm has a wide spectrum of applications.

12.
Latent variable models are widely used for jointly modeling mixed data, including nominal, ordinal, count and continuous variables. In this paper, we consider a latent variable model for jointly modeling relationships between mixed binary, count and continuous variables with some observed covariates. We assume that, given a latent variable, the mixed variables of interest are independent, and that the count and continuous variables have Poisson and normal distributions, respectively. As such data may be extracted from different subpopulations, unobserved heterogeneity has to be taken into account; a mixture distribution for the latent variable is considered to account for this heterogeneity. A generalized EM algorithm, which uses the Newton–Raphson algorithm inside the EM algorithm, is used to compute the maximum likelihood estimates of the parameters, and their standard errors are computed using the supplemented EM algorithm. An analysis of the primary biliary cirrhosis data is presented as an application of the proposed model.

13.
Count data with excess zeros are common in many biomedical and public health applications. The zero-inflated Poisson (ZIP) regression model has been widely used in practice to analyze such data. In this paper, we extend the classical ZIP regression framework to model count time series with excess zeros. A Markov regression model is developed, and the partial likelihood is employed for statistical inference. Partial likelihood inference has been successfully applied in modeling time series where the conditional distribution of the response lies within the exponential family. Extending this approach to ZIP time series poses methodological and theoretical challenges, since the ZIP distribution is a mixture and therefore lies outside the exponential family. Within the partial likelihood framework, we develop an EM algorithm to compute the maximum partial likelihood estimator (MPLE). We establish the asymptotic theory of the MPLE under mild regularity conditions and investigate its finite sample behavior in a simulation study. The performance of different partial-likelihood-based model selection criteria is compared in the presence of model misspecification. Finally, we present an epidemiological application to illustrate the proposed methodology.
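The E- and M-steps for a ZIP model are easiest to see in the intercept-only, independent-data special case — no covariates and no Markov dependence, unlike the time-series model of the abstract. A minimal sketch:

```python
import numpy as np

def zip_em(y, n_iter=200):
    """EM for a zero-inflated Poisson model without covariates.

    The latent indicator z_i = 1 marks a structural zero. Returns the
    estimated mixing probability pi and Poisson mean lam.
    """
    y = np.asarray(y, dtype=float)
    pi, lam = 0.5, max(y.mean(), 0.1)     # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability of a structural zero (zeros only;
        # a positive count cannot be a structural zero)
        tau0 = pi / (pi + (1 - pi) * np.exp(-lam))
        tau = np.where(y == 0, tau0, 0.0)
        # M-step: weighted updates of the mixing weight and Poisson mean
        pi = tau.mean()
        lam = np.sum((1 - tau) * y) / np.sum(1 - tau)
    return pi, lam
```

Because every zero gets the same posterior weight, each iteration is two scalar updates, and the algorithm converges in a handful of steps.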

14.
Some traditional life tests result in no or very few failures by the end of test. In such cases, one approach is to do life testing at higher-than-usual stress conditions in order to obtain failures quickly. This paper discusses a k-level step-stress accelerated life test under type I progressive group-censoring with random removals. An exponential failure time distribution with mean life that is a log-linear function of stress and a cumulative exposure model are considered. We derive the maximum likelihood estimators of the model parameters and establish the asymptotic properties of the estimators. We investigate four selection criteria which enable us to obtain the optimum test plans. One is to minimize the asymptotic variance of the maximum likelihood estimator of the logarithm of the mean lifetime at use-condition, and the other three criteria are to maximize the determinant, trace and the smallest eigenvalue of Fisher's information matrix. Some numerical studies are discussed to illustrate the proposed criteria.
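Under an exponential lifetime the building-block MLE is simple: the estimated mean is the total time on test divided by the number of observed failures. The sketch below shows only the single-stress, type I censored case, as an assumption-laden simplification; the cumulative exposure model of the paper is what stitches such pieces together across stress levels.

```python
import numpy as np

def exp_mle_type1(times, censored):
    """MLE of the exponential mean under type I (time) censoring.

    times    -- observed time for each unit (failure time, or the censoring
                time if the unit survived)
    censored -- boolean flags, True where the unit was censored

    theta_hat = (total time on test) / (number of observed failures).
    """
    times = np.asarray(times, dtype=float)
    censored = np.asarray(censored, dtype=bool)
    r = np.count_nonzero(~censored)
    return times.sum() / r
```

For example, failure times 1 and 2 plus one unit censored at 3 give a total time on test of 6 over 2 failures, hence an estimated mean of 3.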

15.
We propose a hidden Markov model for longitudinal count data in which sources of unobserved heterogeneity make the data overdispersed. The observed process, conditionally on the hidden states, is assumed to follow an inhomogeneous Poisson kernel, where the unobserved heterogeneity is modeled in a generalized linear model (GLM) framework by adding individual-specific random effects to the link function. Because of the complexity of the likelihood within the GLM framework, model parameters could be estimated by numerical maximization of the log-likelihood function or by simulation methods; we propose a more flexible approach based on the Expectation Maximization (EM) algorithm. Parameter estimation is carried out using a non-parametric maximum likelihood (NPML) approach in a finite mixture context. Simulation results and two empirical examples are provided.

16.
We propose a spline-based semiparametric maximum likelihood approach to analysing the Cox model with interval-censored data. With this approach, the baseline cumulative hazard function is approximated by a monotone B-spline function. We extend the generalized Rosen algorithm to compute the maximum likelihood estimate. We show that the estimator of the regression parameter is asymptotically normal and semiparametrically efficient, although the estimator of the baseline cumulative hazard function converges at a rate slower than root-n. We also develop an easy-to-implement method for consistently estimating the standard error of the estimated regression parameter, which facilitates the proposed inference procedure for the Cox model with interval-censored data. The proposed method is evaluated by simulation studies regarding its finite sample performance and is illustrated using data from a breast cosmesis study.

17.
In this article, we consider the efficient estimation of the semiparametric transformation model with doubly truncated data. We propose a two-step approach for obtaining the pseudo maximum likelihood estimators (PMLE) of regression parameters. In the first step, the truncation time distribution is estimated by the nonparametric maximum likelihood estimator (Shen, 2010a) when the distribution function K of the truncation time is unspecified, or by the conditional maximum likelihood estimator (Bilker and Wang, 1996) when K is parameterized. In the second step, using the pseudo complete-data likelihood function with the estimated distribution of truncation time, we propose expectation–maximization algorithms for obtaining the PMLE. We establish the consistency of the PMLE. The simulation study indicates that the PMLE performs well in finite samples. The proposed method is illustrated using an AIDS data set.

18.
We derive an identity for nonparametric maximum likelihood estimators (NPMLE) and regularized MLEs in censored data models which expresses the standardized maximum likelihood estimator in terms of the standardized empirical process. This identity provides an effective starting point in proving both consistency and efficiency of NPMLE and regularized MLE. The identity and corresponding method for proving efficiency is illustrated for the NPMLE in the univariate right-censored data model, the regularized MLE in the current status data model and for an implicit NPMLE based on a mixture of right-censored and current status data. Furthermore, a general algorithm for estimation of the limiting variance of the NPMLE is provided. This revised version was published online in July 2006 with corrections to the Cover Date.
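For the univariate right-censored model mentioned above, the NPMLE of the survival function is the Kaplan–Meier estimator. A minimal product-limit implementation:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimator: the NPMLE of the survival function for
    univariate right-censored data.

    times  -- observed times (failure or censoring)
    events -- 1 if the time is a failure, 0 if censored
    Returns (distinct event times, survival estimates just after each).
    """
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n = len(times)
    s, out_t, out_s = 1.0, [], []
    i = 0
    while i < n:
        t = times[i]
        at_risk = n - i                  # everyone with time >= t
        d = 0
        while i < n and times[i] == t:   # count failures among ties at t
            d += events[i]
            i += 1
        if d > 0:                        # the estimate only drops at failures
            s *= 1 - d / at_risk
            out_t.append(t)
            out_s.append(s)
    return out_t, out_s
```

On the toy sample with failures at 1, 2 and 4 and censorings at 3 and 5, the product-limit steps are 4/5, then 3/4, then 1/2, giving survival estimates 0.8, 0.6 and 0.3.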

19.
Random effects regression mixture models are a way to classify longitudinal data (or trajectories) having possibly varying lengths. The mixture structure of the traditional random effects regression mixture model arises through the distribution of the random regression coefficients, which is assumed to be a mixture of multivariate normals. An extension of this standard model is presented that accounts for various levels of heterogeneity among the trajectories, depending on their assumed error structure. A standard likelihood ratio test is presented for testing this error structure assumption. Full details of an expectation-conditional maximization algorithm for maximum likelihood estimation are also presented. This model is used to analyze data from an infant habituation experiment, where it is desirable to assess whether infants comprise different populations in terms of their habituation time.

20.
A maximum likelihood methodology for the parameters of models with an intractable likelihood is introduced. We produce a likelihood-free version of the stochastic approximation expectation-maximization (SAEM) algorithm to maximize the likelihood function of model parameters. While SAEM is best suited to models having a tractable "complete likelihood" function, applying it to even moderately complex models without one is a difficult or even impossible task. We show how to construct a likelihood-free version of SAEM by using the "synthetic likelihood" paradigm. Our method is completely plug-and-play, requires almost no tuning and can be applied to both static and dynamic models.
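The synthetic likelihood ingredient works by fitting a Gaussian to summary statistics of simulated data sets and scoring the observed summaries under it. The sketch below evaluates it stand-alone on a parameter grid for a toy Gaussian model; the model, summary statistic and Monte Carlo sizes are illustrative assumptions, and the SAEM embedding described in the abstract is not shown.

```python
import numpy as np

def synthetic_loglik(theta, y_obs, simulate, summary, m=100, seed=0):
    """Synthetic log-likelihood of theta (up to an additive constant).

    Simulates m data sets from the model at theta, fits a Gaussian to their
    summary statistics, and evaluates the observed summary under it. Using a
    fixed seed gives common random numbers across theta values, which keeps a
    grid search smooth.
    """
    rng = np.random.default_rng(seed)
    sims = np.array([summary(simulate(theta, rng)) for _ in range(m)])
    mu = sims.mean(axis=0)
    cov = np.atleast_2d(np.cov(sims.T))
    d = np.atleast_1d(summary(y_obs) - mu)
    return -0.5 * (d @ np.linalg.solve(cov, d)
                   + np.log(np.linalg.det(cov)))

# toy model: y ~ N(theta, 1) with n = 50, summarised by the sample mean
def simulate(theta, rng):
    return rng.normal(theta, 1.0, size=50)

def summary(y):
    return np.array([y.mean()])
```

Maximising the synthetic log-likelihood over a grid of theta values then recovers a parameter close to the one that generated the observed data.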


Copyright©北京勤云科技发展有限公司  京ICP备09084417号