Similar Documents
20 similar documents found.
1.
In this paper, we consider two well-known parametric long-term survival models, namely, the Bernoulli cure rate model and the promotion time (or Poisson) cure rate model. Assuming the long-term survival probability to depend on a set of risk factors, the main contribution is the development of the stochastic expectation maximization (SEM) algorithm to determine the maximum likelihood estimates of the model parameters. We carry out a detailed simulation study to demonstrate the performance of the proposed SEM algorithm. For this purpose, we assume the lifetimes due to each competing cause to follow a two-parameter generalized exponential distribution. We also compare the results obtained from the SEM algorithm with those obtained from the well-known expectation maximization (EM) algorithm. Furthermore, we investigate a simplified estimation procedure for both the SEM and EM algorithms that allows the objective function being maximized to be split into simpler, lower-dimensional functions of the model parameters. Moreover, we present examples where the EM algorithm fails to converge but the SEM algorithm still works. For illustrative purposes, we analyze a breast cancer survival data set. Finally, we use a graphical method to assess the goodness-of-fit of the model with generalized exponential lifetimes.
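To make the stochastic E-step concrete, the following is a minimal sketch of an SEM iteration for a Bernoulli (mixture) cure rate model, written with exponential lifetimes and no covariates rather than the generalized exponential, covariate-dependent setup of the paper; the function names, starting values, and toy data are illustrative assumptions only.

    import numpy as np

    rng = np.random.default_rng(0)

    def sem_mixture_cure(t, delta, n_iter=200, burn=100):
        """SEM for S_pop(t) = pi + (1 - pi) * exp(-lam * t).
        t: observed times, delta: 1 = event, 0 = censored."""
        pi, lam = 0.3, 1.0 / np.mean(t)          # crude starting values
        draws = []
        for _ in range(n_iter):
            # Stochastic E-step: sample latent cure indicators for censored subjects.
            # P(cured | T > t) = pi / (pi + (1 - pi) * S(t)), with S(t) = exp(-lam * t)
            p_cured = pi / (pi + (1.0 - pi) * np.exp(-lam * t))
            cured = (delta == 0) & (rng.random(t.size) < p_cured)   # events are never cured
            # M-step given the imputed cure status (closed form for exponential lifetimes)
            pi = cured.mean()
            at_risk = ~cured                      # susceptible subjects
            lam = delta[at_risk].sum() / t[at_risk].sum()
            draws.append((pi, lam))
        return np.mean(draws[burn:], axis=0)      # average the post-burn-in draws

    # toy data: 40% cured, exponential(rate 2) lifetimes, uniform censoring
    n = 2000
    cure = rng.random(n) < 0.4
    life = np.where(cure, np.inf, rng.exponential(1 / 2.0, n))
    cens = rng.uniform(0, 3, n)
    t, delta = np.minimum(life, cens), (life <= cens).astype(int)
    print(sem_mixture_cure(t, delta))             # roughly (0.4, 2.0)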

2.
Bayesian methods are often used to reduce the sample sizes and/or increase the power of clinical trials. The right choice of the prior distribution is a critical step in Bayesian modeling. If the prior is not completely specified, historical data may be used to estimate it. In an empirical Bayesian analysis, the resulting prior can be used to produce the posterior distribution. In this paper, we describe a Bayesian Poisson model with a conjugate gamma prior. The parameters of the gamma distribution are estimated in the empirical Bayesian framework under two estimation schemes. The straightforward numerical search for the maximum likelihood (ML) solution using the marginal negative binomial distribution is occasionally infeasible. We propose a simplification of the maximization procedure. The Markov chain Monte Carlo method is used to create a set of Poisson parameters from the historical count data. These Poisson parameters are used to uniquely define the gamma likelihood function. Easily computable approximation formulae may then be used to find the ML estimates of the parameters of the gamma distribution. For the sample size calculations, the ML solution is replaced by its upper confidence limit to reflect an incomplete exchangeability of historical trials as opposed to current studies. The exchangeability is measured by the confidence interval for the historical rate of the events. With this prior, the formula for the sample size calculation is completely defined. Published in 2009 by John Wiley & Sons, Ltd.
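As a point of reference, the "straightforward numerical search" can be sketched as direct maximization of the marginal negative binomial likelihood over the gamma hyperparameters; the sketch below (using scipy) is only an illustration of that baseline, not the simplification proposed in the paper, and the toy counts are assumptions.

    import numpy as np
    from scipy import stats, optimize

    def fit_gamma_prior(counts):
        """Empirical-Bayes fit of a Gamma(shape=a, rate=b) prior for Poisson rates,
        by maximizing the marginal negative binomial likelihood of historical counts."""
        counts = np.asarray(counts)

        def neg_marginal_loglik(log_params):
            a, b = np.exp(log_params)             # optimize on the log scale so a, b > 0
            # Poisson(lambda) mixed over lambda ~ Gamma(a, rate=b) gives NB(n=a, p=b/(b+1))
            return -stats.nbinom.logpmf(counts, a, b / (b + 1.0)).sum()

        res = optimize.minimize(neg_marginal_loglik, x0=np.log([1.0, 1.0]),
                                method="Nelder-Mead")
        return np.exp(res.x)                      # (a_hat, b_hat)

    # historical event counts from earlier trials (toy numbers)
    a_hat, b_hat = fit_gamma_prior([3, 8, 2, 12, 5, 9, 1])
    print(a_hat, b_hat, "prior mean rate:", a_hat / b_hat)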

3.
We propose here a robust multivariate extension of the bivariate Birnbaum–Saunders (BS) distribution derived by Kundu et al. [Bivariate Birnbaum–Saunders distribution and associated inference. J Multivariate Anal. 2010;101:113–125], based on scale mixtures of normal (SMN) distributions, which are used for modelling symmetric data. The resulting multivariate BS-type distribution is absolutely continuous, and its marginal and conditional distributions are of the BS type studied by Balakrishnan et al. [Estimation in the Birnbaum–Saunders distribution based on scale mixture of normals and the EM algorithm. Stat Oper Res Trans. 2009;33:171–192]. Due to the complexity of the likelihood function, parameter estimation by direct maximization is very difficult to achieve. For this reason, we exploit the convenient hierarchical representation of the proposed distribution to develop a fast and accurate EM algorithm for computing the maximum likelihood (ML) estimates of the model parameters. We then evaluate the finite-sample performance of the developed EM algorithm and the asymptotic properties of the ML estimates through empirical experiments. Finally, we illustrate the obtained results with a real data set and display the robustness of the estimation procedure developed here.
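For background, the univariate BS distribution admits a simple normal-based stochastic representation, T = β(αZ/2 + √((αZ/2)² + 1))² with Z standard normal; a quick numerical check of this fact follows (illustrative only, not the multivariate SMN construction of the paper).

    import numpy as np

    def rbs(n, alpha, beta, rng=np.random.default_rng(0)):
        """Draw from the univariate Birnbaum-Saunders BS(alpha, beta) distribution
        using its normal representation: T = beta * (a*Z/2 + sqrt((a*Z/2)^2 + 1))^2."""
        z = rng.standard_normal(n)
        w = alpha * z / 2.0
        return beta * (w + np.sqrt(w ** 2 + 1.0)) ** 2

    t = rbs(100_000, alpha=0.5, beta=2.0)
    # sanity check: (sqrt(T/beta) - sqrt(beta/T)) / alpha should be standard normal
    u = (np.sqrt(t / 2.0) - np.sqrt(2.0 / t)) / 0.5
    print(u.mean(), u.std())   # approximately 0 and 1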

4.
Mini-batch algorithms have become increasingly popular due to the requirement for solving optimization problems, based on large-scale data sets. Using an existing online expectation–maximization (EM) algorithm framework, we demonstrate how mini-batch (MB) algorithms may be constructed, and propose a scheme for the stochastic stabilization of the constructed mini-batch algorithms. Theoretical results regarding the convergence of the mini-batch EM algorithms are presented. We then demonstrate how the mini-batch framework may be applied to conduct maximum likelihood (ML) estimation of mixtures of exponential family distributions, with emphasis on ML estimation for mixtures of normal distributions. Via a simulation study, we demonstrate that the mini-batch algorithm for mixtures of normal distributions can outperform the standard EM algorithm. Further evidence of the performance of the mini-batch framework is provided via an application to the famous MNIST data set.  相似文献   
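To illustrate the general idea, here is a minimal mini-batch (online) EM sketch for a univariate two-component normal mixture, in which running sufficient statistics are blended with a decaying step size; the step-size schedule, batch size, and names are assumptions and not the authors' exact stabilization scheme.

    import numpy as np
    from scipy.stats import norm

    def minibatch_em_gmm(x, K=2, batch=256, epochs=5, rng=np.random.default_rng(1)):
        """Mini-batch EM for a univariate K-component Gaussian mixture.
        Running sufficient statistics are blended with step sizes rho_t ~ t^(-0.6)."""
        n = x.size
        mu = rng.choice(x, K)
        sig = np.full(K, x.std())
        w = np.full(K, 1.0 / K)
        # running sufficient statistics: E[z_k], E[z_k x], E[z_k x^2]
        s0, s1, s2 = w.copy(), w * mu, w * (mu ** 2 + sig ** 2)
        t = 0
        for _ in range(epochs):
            for idx in np.array_split(rng.permutation(n), n // batch):
                t += 1
                rho = (t + 2) ** -0.6                     # decaying step size
                xb = x[idx][:, None]                      # shape (batch, 1)
                # E-step on the mini-batch: responsibilities
                logp = np.log(w) + norm.logpdf(xb, mu, sig)
                r = np.exp(logp - logp.max(axis=1, keepdims=True))
                r /= r.sum(axis=1, keepdims=True)
                # blend batch-averaged statistics into the running statistics
                s0 = (1 - rho) * s0 + rho * r.mean(axis=0)
                s1 = (1 - rho) * s1 + rho * (r * xb).mean(axis=0)
                s2 = (1 - rho) * s2 + rho * (r * xb ** 2).mean(axis=0)
                # M-step from the running statistics
                w, mu = s0, s1 / s0
                sig = np.sqrt(np.maximum(s2 / s0 - mu ** 2, 1e-8))
        return w, mu, sig

    x = np.concatenate([np.random.default_rng(2).normal(-2, 1, 5000),
                        np.random.default_rng(3).normal(3, 0.5, 5000)])
    print(minibatch_em_gmm(x))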

5.
We present a maximum likelihood estimation procedure for the multivariate frailty model. The estimation is based on a Monte Carlo EM algorithm. The expectation step is approximated by averaging over random samples drawn from the posterior distribution of the frailties using rejection sampling. The maximization step reduces to a standard partial likelihood maximization. We also propose a simple rule based on the relative change in the parameter estimates to decide on sample size in each iteration and a stopping time for the algorithm. An important new concept is acquiring absolute convergence of the algorithm through sample size determination and an efficient sampling technique. The method is illustrated using a rat carcinogenesis dataset and data on vase lifetimes of cut roses. The estimation results are compared with approximate inference based on penalized partial likelihood using these two examples. Unlike the penalized partial likelihood estimation, the proposed full maximum likelihood estimation method accounts for all the uncertainty while estimating standard errors for the parameters.

6.
Missing data are common in many experiments, including surveys, clinical trials, epidemiological studies, and environmental studies. Unconstrained likelihood inference for generalized linear models (GLMs) with nonignorable missing covariates has been studied extensively in the literature. However, parameter orderings or constraints may occur naturally in practice, and thus the efficiency of a statistical method may be improved by incorporating parameter constraints into the likelihood function. In this paper, we consider constrained inference for analysing GLMs with nonignorable missing covariates under linear inequality constraints on the model parameters. Specifically, constrained maximum likelihood (ML) estimation is based on the gradient projection expectation maximization approach. Further, we investigate the asymptotic null distribution of the constrained likelihood ratio test (LRT). A simulation study examines the empirical properties of the constrained ML estimators and LRTs, demonstrating the improved precision of these constrained techniques. An application to contaminant levels in an environmental study is also presented.

7.
In this paper, we consider the estimation of reliability in the multicomponent stress-strength (MSS) model when both the stress and the strengths are drawn from the Topp-Leone (TL) distribution. The maximum likelihood (ML) and Bayesian methods are used in the estimation procedure. Bayesian estimates are obtained by using Lindley's approximation and Gibbs sampling methods, since they cannot be obtained in explicit form under the TL distribution. The asymptotic confidence intervals are constructed based on the ML estimators. The Bayesian credible intervals are also constructed using Gibbs sampling. The reliability estimates are compared via an extensive Monte Carlo simulation study. Finally, a real data set is analysed for illustrative purposes.
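Under the usual s-out-of-k formulation, the quantity being estimated is R_{s,k} = P(at least s of the k TL(α) strengths exceed a common TL(β) stress); a minimal Monte Carlo check of this quantity is sketched below, assuming the Topp-Leone CDF F(x; ν) = (x(2 − x))^ν on (0, 1) and purely illustrative parameter values.

    import numpy as np

    def rtl(nu, size, rng):
        """Topp-Leone(nu) samples via inverse CDF: F(x) = (x(2-x))^nu on (0, 1)."""
        u = rng.random(size)
        return 1.0 - np.sqrt(1.0 - u ** (1.0 / nu))

    def mc_reliability(alpha, beta, s, k, n=200_000, seed=0):
        """Monte Carlo estimate of R_{s,k}: at least s of k TL(alpha) strengths
        exceed a common TL(beta) stress."""
        rng = np.random.default_rng(seed)
        strengths = rtl(alpha, (n, k), rng)
        stress = rtl(beta, (n, 1), rng)
        return np.mean((strengths > stress).sum(axis=1) >= s)

    print(mc_reliability(alpha=2.0, beta=1.5, s=2, k=4))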

8.
In this paper, we propose new estimation techniques in connection with the system of S-distributions. Besides “exact” maximum likelihood (ML), we propose simulated ML and a characteristic function-based procedure. The “exact” and simulated likelihoods can be used to provide numerical, MCMC-based Bayesian inferences.

9.
In this paper we propose an alternative procedure for estimating the parameters of the beta regression model. This alternative estimation procedure is based on the EM algorithm. For this, we take advantage of the stochastic representation of the beta random variable as a ratio of independent gamma random variables. We present a complete approach based on the EM algorithm; more specifically, this approach includes point and interval estimation and diagnostic tools for detecting outlying observations. As illustrated in this paper, the EM-algorithm approach provides a better estimate of the precision parameter than the direct maximum likelihood (ML) approach. We present the results of Monte Carlo simulations comparing the EM algorithm and direct ML. Finally, two empirical examples illustrate the full EM-algorithm approach for the beta regression model. This paper contains Supplementary Material.
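The stochastic representation referred to is the classical fact that if X1 ~ Gamma(a, 1) and X2 ~ Gamma(b, 1) are independent, then X1/(X1 + X2) ~ Beta(a, b); a quick numerical check follows (illustrative only, not the authors' EM algorithm).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a, b = 2.5, 4.0
    x1 = rng.gamma(a, 1.0, 200_000)
    x2 = rng.gamma(b, 1.0, 200_000)
    y = x1 / (x1 + x2)                     # should follow Beta(a, b)

    # compare simulated moments with the Beta(a, b) ones
    print(y.mean(), a / (a + b))           # both near 0.3846
    print(y.var(), stats.beta(a, b).var())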

10.
Censoring frequently occurs in survival analysis, but naturally observed lifetime samples are typically not large. Thus, inferences based on the popular maximum likelihood (ML) estimation, which often yields biased estimates in small samples, should be bias-corrected. Here, we investigate the biases of ML estimates under the progressive type-II censoring scheme (pIIcs). We use the method proposed by Efron and Johnstone [Fisher's information in terms of the hazard rate. Technical Report No. 264, January 1987, Stanford University, Stanford, California] to derive general expressions for bias-corrected ML estimates under the pIIcs. This requires derivation of the Fisher information matrix under the pIIcs. As an application, exact expressions are given for the bias-corrected ML estimates of the Weibull distribution under the pIIcs. The performance of the bias-corrected ML estimates and the ML estimates is compared by simulations and a real data application.

11.
Pseudo maximum likelihood (PML) estimation for the Dirichlet-multinomial distribution is proposed and examined in this paper. The procedure is compared with the method of moments (MM) in terms of asymptotic relative efficiency (ARE) relative to the maximum likelihood (ML) estimate. It is found that PML, requiring much less computational effort than ML and possessing considerably higher ARE than MM, constitutes a good compromise between ML and MM. PML is also found to have very high ARE when an estimate of the scale parameter in the Dirichlet-multinomial distribution is all that is needed.

12.
A new flexible cure rate survival model is developed in which the initial number of competing causes of the event of interest (say, lesions or altered cells) follows a compound negative binomial (NB) distribution. This model provides a realistic interpretation of the biological mechanism of the event of interest, as it models a destructive process acting on the initial competing risk factors and records only the damaged portion of the original number of risk factors. It also accounts for the underlying mechanisms that lead to cure through various latent activation schemes. Our method of estimation exploits maximum likelihood (ML) tools. The methodology is illustrated on a real data set on malignant melanoma, and the finite-sample behaviour of the parameter estimates is explored through simulation studies.

13.
A new approach is proposed for maximum likelihood (ML) estimation in continuous univariate distributions. The procedure is used primarily to complement the ML method, which can fail in situations such as the gamma and Weibull distributions when the shape parameter is at most unity. The new approach provides consistent and efficient estimates for all possible values of the shape parameter. Its performance is examined via simulations. Two other improved general ML-type methods are included for comparative purposes. The methods are used to fit the gamma and Weibull distributions to air pollution data from Melbourne. The new ML method is accurate when the shape parameter is less than unity and is also superior to the maximum product of spacings estimation method for the Weibull distribution.
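For reference, the maximum product of spacings (MPS) comparator maximizes the sum of the logarithms of the spacings F(x(i)) − F(x(i−1)) of the fitted CDF at the ordered observations, with F(x(0)) = 0 and F(x(n+1)) = 1; a minimal sketch for the Weibull case follows (an illustration of MPS only, not of the paper's new ML approach).

    import numpy as np
    from scipy import optimize

    def mps_weibull(x):
        """Maximum product of spacings estimate of the Weibull (shape k, scale lam)."""
        x = np.sort(np.asarray(x))

        def neg_log_spacings(log_params):
            k, lam = np.exp(log_params)
            cdf = 1.0 - np.exp(-(x / lam) ** k)
            spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
            return -np.sum(np.log(np.maximum(spacings, 1e-300)))

        res = optimize.minimize(neg_log_spacings, np.log([1.0, np.mean(x)]),
                                method="Nelder-Mead")
        return np.exp(res.x)

    x = np.random.default_rng(1).weibull(0.7, 500) * 2.0    # shape 0.7 < 1, scale 2
    print(mps_weibull(x))                                   # roughly (0.7, 2.0)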

14.
In this paper, we study a first-order nonlinear autoregressive (AR) model with skew-normal innovations. A semiparametric method is proposed to estimate the nonlinear part of the model, using the conditional least squares method for parametric estimation and a nonparametric kernel approach for estimating the AR adjustment. Computational techniques for parameter estimation are then developed via the maximum likelihood (ML) approach using an Expectation-Maximization (EM)-type optimization, and an explicit iterative form for the ML estimators is obtained. Furthermore, the accuracy of the proposed methods is verified in a simulation study and a real application.
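As an illustration of the nonparametric ingredient only, a Nadaraya-Watson kernel estimate of the AR(1) regression function m(u) = E[X_t | X_{t−1} = u] can be computed as sketched below; the toy model uses normal rather than skew-normal innovations, and the bandwidth and names are assumptions.

    import numpy as np

    def nw_ar1(x, grid, h=0.3):
        """Nadaraya-Watson estimate of m(u) = E[X_t | X_{t-1} = u]
        with a Gaussian kernel of bandwidth h."""
        lagged, resp = x[:-1], x[1:]
        k = np.exp(-0.5 * ((grid[:, None] - lagged[None, :]) / h) ** 2)
        return (k * resp).sum(axis=1) / k.sum(axis=1)

    # toy nonlinear AR(1): X_t = sin(X_{t-1}) + 0.3 * eps_t
    rng = np.random.default_rng(0)
    x = np.zeros(3000)
    for t in range(1, x.size):
        x[t] = np.sin(x[t - 1]) + 0.3 * rng.standard_normal()
    grid = np.linspace(-1.5, 1.5, 7)
    print(np.round(nw_ar1(x, grid), 2))     # should track sin(grid)
    print(np.round(np.sin(grid), 2))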

15.
The zero-inflated Poisson regression model is a special case of finite mixture models that is useful for count data containing many zeros. Typically, maximum likelihood (ML) estimation is used for fitting such models. However, it is well known that the ML estimator is highly sensitive to the presence of outliers and can become unstable when mixture components are poorly separated. In this paper, we propose an alternative robust estimation approach, robust expectation-solution (RES) estimation. We compare the RES approach with an existing robust approach, minimum Hellinger distance (MHD) estimation. Simulation results indicate that both methods improve on ML when outliers are present and/or when the mixture components are poorly separated. However, the RES approach is more efficient in all the scenarios we considered. In addition, the RES method is shown to yield consistent and asymptotically normal estimators and, in contrast to MHD, can be applied quite generally.
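For orientation, the non-robust ML fit that both RES and MHD are measured against can be obtained with a short EM iteration that treats the structural-zero indicator as missing data; a minimal sketch for the intercept-only zero-inflated Poisson (no covariates, illustrative names) follows.

    import numpy as np

    def em_zip(y, n_iter=200):
        """EM for the intercept-only zero-inflated Poisson:
        P(Y=0) = pi + (1-pi) e^{-lam},  P(Y=k) = (1-pi) Poisson(k; lam) for k >= 1."""
        y = np.asarray(y, dtype=float)
        pi, lam = 0.5, max(y.mean(), 0.1)
        for _ in range(n_iter):
            # E-step: posterior probability that an observed zero is a structural zero
            tau = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
            # M-step: weighted Bernoulli and Poisson updates
            pi = tau.mean()
            lam = np.sum((1 - tau) * y) / np.sum(1 - tau)
        return pi, lam

    rng = np.random.default_rng(0)
    n = 5000
    structural = rng.random(n) < 0.3
    y = np.where(structural, 0, rng.poisson(2.5, n))
    print(em_zip(y))          # roughly (0.3, 2.5)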

16.
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and two real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended.

17.
Maximum likelihood (ML) estimation with spatial econometric models is a long-standing problem that finds application in several areas of economic importance. The problem is particularly challenging in the presence of missing data, since there is an implied dependence between all units, irrespective of whether they are observed or not. Out of the several approaches adopted for ML estimation in this context, that of LeSage and Pace [Models for spatially dependent missing data. J Real Estate Financ Econ. 2004;29(2):233–254] stands out as one of the most commonly used with spatial econometric models due to its ability to scale with the number of units. Here, we review their algorithm, and consider several similar alternatives that are also suitable for large datasets. We compare the methods through an extensive empirical study and conclude that, while the approximate approaches are suitable for large sampling ratios, for small sampling ratios the only reliable algorithms are those that yield exact ML or restricted ML estimates.

18.
Log-normal linear models are widely used in applications, and it is often of interest to predict the response variable or to estimate the mean of the response variable on the original scale for a new set of covariate values. In this paper we consider the problem of efficient estimation of the conditional mean of the response variable on the original scale for log-normal linear models. Several existing estimators are reviewed first, including the maximum likelihood (ML) estimator, the restricted ML (REML) estimator, the uniformly minimum variance unbiased (UMVU) estimator, and a bias-corrected REML estimator. We then propose two estimators that minimize the asymptotic mean squared error and the asymptotic bias, respectively. A parametric bootstrap procedure is also described to obtain confidence intervals for the proposed estimators. Both the new estimators and the bootstrap procedure are very easy to implement. Comparisons of the estimators using simulation studies suggest that our estimators perform better than the existing ones, and the bootstrap procedure yields confidence intervals with good coverage properties. A real application of estimating the mean sediment discharge is used to illustrate the methodology.
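The back-transformation issue at stake follows from E[Y | x] = exp(x'β + σ²/2) under the log-normal linear model, so the naive estimate exp(x'β̂) underestimates the conditional mean; a minimal sketch contrasting it with the ML plug-in is given below (the proposed minimum-MSE and minimum-bias estimators are not reproduced here, and all names and toy values are assumptions).

    import numpy as np

    rng = np.random.default_rng(0)
    n, beta, sigma = 500, np.array([1.0, 0.5]), 0.8
    X = np.column_stack([np.ones(n), rng.uniform(0, 3, n)])
    y = np.exp(X @ beta + rng.normal(0, sigma, n))          # log-normal response

    # fit the linear model on the log scale
    z = np.log(y)
    beta_hat, *_ = np.linalg.lstsq(X, z, rcond=None)
    sigma2_hat = np.sum((z - X @ beta_hat) ** 2) / n        # ML variance estimate (divide by n)

    x_new = np.array([1.0, 2.0])
    naive = np.exp(x_new @ beta_hat)                        # underestimates the mean
    ml_plugin = np.exp(x_new @ beta_hat + sigma2_hat / 2)   # ML estimate of E[Y | x_new]
    truth = np.exp(x_new @ beta + sigma ** 2 / 2)
    print(naive, ml_plugin, truth)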

19.
A maximum likelihood estimation procedure is presented for the frailty model. The procedure is based on a stochastic Expectation Maximization (EM) algorithm which converges quickly to the maximum likelihood estimate. The usual expectation step is replaced by a stochastic approximation of the complete log-likelihood using simulated values of the unobserved frailties, whereas the maximization step follows the same lines as that of the EM algorithm. The procedure yields, at the same time, estimates of the marginal likelihood and of the observed Fisher information matrix. Moreover, this stochastic EM algorithm requires less computation time. A wide variety of multivariate frailty models can be studied without any assumption on the covariance structure. To illustrate the procedure, a Gaussian frailty model with two frailty terms is introduced. The numerical results, based on simulated data and on real bladder cancer data, are more accurate than those obtained using the Expectation Maximization Laplace algorithm and the Monte Carlo Expectation Maximization algorithm. Finally, since frailty models are used in many fields such as ecology, biology, and economics, the proposed algorithm has a wide spectrum of applications.

20.
In this paper, we propose a penalized likelihood method to simultaneously select covariates and mixing components and to obtain parameter estimates in localized mixture-of-experts models. We develop an expectation maximization algorithm to solve the proposed penalized likelihood procedure and introduce a data-driven procedure to select the tuning parameters. Extensive numerical studies are carried out to compare the finite-sample performance of our proposed method with that of other existing methods. Finally, we apply the proposed methodology to analyse the Boston housing price data set and the baseball salaries data set.
