Similar Articles
20 similar articles found (search time: 375 ms)
1.
We discuss two examples of estimation by numerical maximization of a penalized likelihood. We show that, in these examples, it is simpler not to use the EM algorithm to compute the estimates or their standard errors. We also discuss confidence and credibility intervals based on the penalized likelihood and an approximate chi-squared distribution, and compare such intervals with intervals of the Wald type.

[Received July 2014. Revised September 2015.]

2.
In this work, we modify finite mixtures of factor analysers to provide a method for simultaneous clustering of subjects and multivariate discrete outcomes. The joint clustering is performed through a suitable reparameterization of the outcome (column)-specific parameters. We develop an expectation–maximization-type algorithm for maximum likelihood parameter estimation where the maximization step is divided into orthogonal sub-blocks that refer to row- and column-specific parameters, respectively. Model performance is evaluated via a simulation study with varying sample size, number of outcomes, and row/column-specific clustering (partitions). We compare the performance of our model with that of standard model-based biclustering approaches. The proposed method is also demonstrated on a benchmark data set where a multivariate binary response is considered.

3.
In this paper, we propose a penalized likelihood method for simultaneous covariate and mixing-component selection and parameter estimation in localized mixture-of-experts models. We develop an expectation–maximization algorithm to solve the proposed penalized likelihood procedure, and introduce a data-driven procedure to select the tuning parameters. Extensive numerical studies are carried out to compare the finite-sample performance of our proposed method with that of other existing methods. Finally, we apply the proposed methodology to the Boston housing price data set and the baseball salaries data set.

4.
We propose a new unsupervised learning algorithm to fit regression mixture models with an unknown number of components. The developed approach consists of penalized maximum likelihood estimation carried out by a robust expectation–maximization (EM)-like algorithm. We derive it for polynomial, spline, and B-spline regression mixtures. The proposed learning approach is unsupervised: (i) it simultaneously infers the model parameters and the optimal number of regression mixture components from the data as the learning proceeds, rather than in a two-stage scheme, as in standard model-based clustering where model selection criteria are applied afterwards, and (ii) it does not require accurate initialization, unlike the standard EM algorithm for regression mixtures. The developed approach is applied to curve clustering problems. Numerical experiments on simulated and real data show that the proposed algorithm performs well and provides accurate clustering results, and confirm its benefit for practical applications.

5.
In this paper we propose a novel procedure for the estimation of semiparametric survival functions. The proposed technique adapts penalized likelihood survival models to the context of lifetime value modeling. The method extends the classical Cox model by introducing a smoothing parameter that can be estimated by penalized maximum likelihood procedures. Markov chain Monte Carlo methods are employed to estimate this smoothing parameter, using an algorithm which combines Metropolis–Hastings and Gibbs sampling. Our proposal is contextualized and compared with conventional models, with reference to a marketing application involving the prediction of customer lifetime value.

6.
We present a novel methodology for estimating the parameters of a finite mixture model (FMM) based on partially rank-ordered set (PROS) sampling and use it in a fishery application. A PROS sampling design first selects a simple random sample of fish and creates partially rank-ordered judgement subsets by dividing units into subsets of prespecified sizes. The final measurements are then obtained from these partially ordered judgement subsets. The traditional expectation–maximization algorithm is not directly applicable for these observations. We propose a suitable expectation–maximization algorithm to estimate the parameters of the FMMs based on PROS samples. We also study the problem of classification of the PROS sample into the components of the FMM. We show that the maximum likelihood estimators based on PROS samples perform substantially better than their simple random sample counterparts even with small samples. The results are used to classify a fish population using the length-frequency data.

7.
This work provides a class of non-Gaussian spatial Matérn fields which are useful for analysing geostatistical data. The models are constructed as solutions to stochastic partial differential equations driven by generalized hyperbolic noise and are incorporated in a standard geostatistical setting with irregularly spaced observations, measurement errors and covariates. A maximum likelihood estimation technique based on the Monte Carlo expectation–maximization algorithm is presented, and a Monte Carlo method for spatial prediction is derived. Finally, an application to precipitation data is presented, and the performance of the non-Gaussian models is compared with standard Gaussian and transformed Gaussian models through cross-validation.

8.
Hailin Sang, Statistics, 2015, 49(1):187–208
We propose a sparse coefficient estimation and automated model selection procedure for autoregressive processes with heavy-tailed innovations, based on penalized conditional maximum likelihood. Under mild moment conditions on the innovation process, the penalized conditional maximum likelihood estimator is strongly consistent, O_P(N^(-1/2))-consistent, and satisfies the oracle properties, where N is the sample size. Only weak conditions are imposed on the penalty function, leaving considerable freedom in its choice. Two penalty functions, the least absolute shrinkage and selection operator (LASSO) and the smoothly clipped absolute deviation (SCAD), are compared. The proposed method provides distribution-based penalized inference for AR models, which is especially useful when other estimation methods fail or underperform for AR processes with heavy-tailed innovations [Feigin, Resnick. Pitfalls of fitting autoregressive models for heavy-tailed time series. Extremes. 1999;1:391–422]. A simulation study confirms our theoretical results. Finally, we apply our method to historical price data of the US Industrial Production Index for consumer goods, and obtain very promising results.
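For concreteness, the two penalties compared in this abstract can be written down explicitly. The following is an illustrative sketch, not the author's code; `a = 3.7` is the conventional SCAD tuning constant from Fan and Li:

```python
import numpy as np

def lasso_penalty(beta, lam):
    """LASSO penalty: lam * |beta| (convex, grows linearly forever)."""
    return lam * np.abs(beta)

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty: agrees with the LASSO near zero, then levels off
    to a constant for large |beta|, which reduces bias on large
    coefficients."""
    b = np.abs(beta)
    return np.where(
        b <= lam,
        lam * b,                                          # LASSO region
        np.where(
            b <= a * lam,
            (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),  # quadratic blend
            lam**2 * (a + 1) / 2,                         # flat region
        ),
    )

# Near zero the two penalties coincide; far from zero SCAD is constant.
for beta in (0.1, 1.0, 5.0):
    print(beta, float(lasso_penalty(beta, 0.5)), float(scad_penalty(beta, 0.5)))
```

The flat region is what gives SCAD its oracle behaviour: large coefficients are not shrunk, while small ones are thresholded exactly as under the LASSO.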

9.
We consider seven exact unconditional testing procedures for comparing adjusted incidence rates between two groups from a Poisson process. Exact tests are always preferable due to the guarantee of test size in small to medium sample settings. Han [Comparing two independent incidence rates using conditional and unconditional exact tests. Pharm Stat. 2008;7(3):195–201] compared the performance of partial maximization p-values based on the Wald test statistic, the likelihood ratio test statistic, and the score test statistic, and of the conditional p-value. These four testing procedures do not perform consistently, as the results depend on the choice of test statistic for general alternatives. We consider the approach based on estimation and partial maximization, and compare it to the procedures studied by Han (2008) for testing superiority. The procedures are compared with regard to actual type I error rate and power under various conditions. An example from a biomedical research study is provided to illustrate the testing procedures. The approach based on partial maximization using the score test is recommended due to its comparable performance and computational advantage in large sample settings. Additionally, the approach based on estimation and partial maximization performs consistently for all three test statistics, and is also recommended for use in practice.

10.
Extremely large-scale genetic data currently present significant challenges for cluster analysis. Most existing clustering methods are built on the Euclidean distance and geared toward analyzing continuous responses. They work well for clustering, e.g., microarray gene expression data, but often perform poorly for, e.g., large-scale single nucleotide polymorphism (SNP) data. In this paper, we study the penalized latent class model for clustering extremely large-scale discrete data. The penalized latent class model takes into account the discrete nature of the response using appropriate generalized linear models and adopts the lasso penalized likelihood approach for simultaneous model estimation and selection of important covariates. We develop very efficient numerical algorithms for model estimation based on the iterative coordinate descent approach, and further develop an expectation–maximization algorithm to incorporate and model missing values. We use simulation studies and applications to the international HapMap SNP data to illustrate the competitive performance of the penalized latent class model.

11.
A maximum likelihood estimation procedure is presented for the frailty model. The procedure is based on a stochastic expectation–maximization algorithm which converges quickly to the maximum likelihood estimate. The usual expectation step is replaced by a stochastic approximation of the complete log-likelihood using simulated values of the unobserved frailties, whereas the maximization step follows the same lines as that of the expectation–maximization algorithm. The procedure allows one to obtain, at the same time, estimates of the marginal likelihood and of the observed Fisher information matrix. Moreover, this stochastic expectation–maximization algorithm requires less computation time than existing alternatives. A wide variety of multivariate frailty models can be studied without any assumption on the covariance structure. To illustrate the procedure, a Gaussian frailty model with two frailty terms is introduced. The numerical results, based on simulated data and on real bladder cancer data, are more accurate than those obtained using the expectation–maximization Laplace algorithm and the Monte Carlo expectation–maximization algorithm. Finally, since frailty models are used in many fields, such as ecology, biology and economics, the proposed algorithm has a wide spectrum of applications.

12.
Penalized Maximum Likelihood Estimator for Normal Mixtures
The estimation of the parameters of a mixture of Gaussian densities is considered, within the framework of maximum likelihood. Due to the unboundedness of the likelihood function, the maximum likelihood estimator fails to exist. We adopt a solution to this degeneracy which consists in penalizing the likelihood function. The resulting penalized likelihood function is then bounded over the parameter space, and the existence of the penalized maximum likelihood estimator is guaranteed. As an original contribution, we provide asymptotic properties, and in particular a consistency proof, for the penalized maximum likelihood estimator. Numerical examples in the finite-data case show the performance of the penalized estimator compared with the standard one.
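The degeneracy described here, and the effect of penalization, are easy to demonstrate numerically. A small sketch follows; the inverse-gamma-style variance penalty below is an assumed illustrative form, not necessarily the one used in the paper:

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 9)   # toy data

def phi(x, mu, s):
    """Normal density with mean mu and standard deviation s."""
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def log_lik(x, w, mu1, s1, mu2, s2):
    """Log-likelihood of a two-component Gaussian mixture."""
    return np.sum(np.log(w * phi(x, mu1, s1) + (1.0 - w) * phi(x, mu2, s2)))

def penalty(s1, s2, c=1.0):
    """Illustrative variance penalty (inverse-gamma flavour); it tends
    to -infinity as either scale collapses to zero."""
    return -c * (1.0 / s1**2 + np.log(s1**2) + 1.0 / s2**2 + np.log(s2**2))

# Center the second component on a data point and let its scale collapse:
# the likelihood diverges, while the penalized likelihood is bounded above
# and eventually decreases, so a penalized maximizer exists.
for s2 in (1.0, 1e-2, 1e-4):
    ll = log_lik(x, 0.5, 0.0, 1.0, x[0], s2)
    print(f"s2={s2:g}  loglik={ll:9.2f}  penalized={ll + penalty(1.0, s2):14.2f}")
```

The likelihood spike grows only like log(1/s2), whereas the 1/s2² term in the penalty diverges much faster, which is the mechanism that restores boundedness.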

13.
This article introduces a novel nonparametric penalized likelihood approach to hazard estimation when the censoring time is dependent on the failure time for each subject under observation. More specifically, we model this dependence using a copula, and the method of maximum penalized likelihood (MPL) is adopted to estimate the hazard function. We do not consider covariates in this article. The non-negatively constrained MPL hazard estimate is obtained using a multiplicative iterative algorithm. Consistency results and asymptotic properties of the proposed hazard estimator are derived. Simulation studies show that our MPL estimator under dependent censoring with an assumed copula model provides better accuracy than the MPL estimator under independent censoring if the sign of the dependence is correctly specified in the copula function. The proposed method is applied to a real dataset, with a sensitivity analysis performed over various values of the correlation between failure and censoring times.

14.
Frailty models with a non-parametric baseline hazard are widely used for the analysis of survival data. However, their maximum likelihood estimators can be substantially biased in finite samples, because the number of nuisance parameters associated with the baseline hazard increases with the sample size. The penalized partial likelihood based on a first-order Laplace approximation still has non-negligible bias. However, the second-order Laplace approximation to a modified marginal likelihood for a bias reduction is infeasible because of the presence of too many complicated terms. In this article, we find adequate modifications of these likelihood-based methods by using the hierarchical likelihood.

15.
We consider estimation of unknown parameters of a Burr XII distribution based on progressively Type I hybrid censored data. The maximum likelihood estimates are obtained using an expectation maximization algorithm. Asymptotic interval estimates are constructed from the Fisher information matrix. We obtain Bayes estimates under the squared error loss function using the Lindley method and Metropolis–Hastings algorithm. The predictive estimates of censored observations are obtained and the corresponding prediction intervals are also constructed. We compare the performance of the different methods using simulations. Two real datasets have been analyzed for illustrative purposes.

16.

Motivated by penalized likelihood maximization in complex models, we study optimization problems where neither the function to optimize nor its gradient has an explicit expression, but the gradient can be approximated by a Monte Carlo technique. We propose a new algorithm based on a stochastic approximation of the proximal-gradient (PG) algorithm. This new algorithm, named stochastic approximation PG (SAPG), combines a stochastic gradient descent step which, roughly speaking, computes a smoothed approximation of the gradient along the iterations, with a proximal step. The choice of the step size and of the Monte Carlo batch size for the stochastic gradient descent step in SAPG is discussed. Our convergence results cover both biased and unbiased Monte Carlo approximations. While the convergence analysis of some classical Monte Carlo approximations of the gradient is already addressed in the literature (see Atchadé et al. in J Mach Learn Res 18(10):1–33, 2017), the convergence analysis of SAPG is new. Practical implementation is discussed, and guidelines to tune the algorithm are given. The two algorithms are compared on a linear mixed effect model as a toy example. A more challenging application is proposed on nonlinear mixed effect models in high dimension with a pharmacokinetic data set including genomic covariates. To the best of our knowledge, our work provides the first convergence result for a numerical method designed to solve penalized maximum likelihood in a nonlinear mixed effect model.
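The flavour of such a stochastic proximal-gradient scheme can be conveyed on a toy lasso problem. This is a simplified illustration, not the authors' SAPG: the model, the constants, and the growing-batch noise model are all assumptions made here, and soft-thresholding is the proximal operator of the L1 penalty:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy penalized problem: minimize 0.5*||A@theta - b||^2 + lam*||theta||_1,
# pretending the gradient of the smooth part is only available through a
# noisy, Monte Carlo-style approximation whose error shrinks with batch size.
n, p, lam = 200, 5, 10.0
A = rng.normal(size=(n, p))
theta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.0])
b = A @ theta_true + 0.1 * rng.normal(size=n)

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def noisy_grad(theta, batch):
    """Exact gradient plus noise of size O(1/sqrt(batch))."""
    return A.T @ (A @ theta - b) + rng.normal(scale=1.0 / np.sqrt(batch), size=p)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step = 1 / Lipschitz constant of grad
theta = np.zeros(p)
for k in range(1, 2001):
    g = noisy_grad(theta, batch=k)        # growing batch: vanishing gradient noise
    theta = soft_threshold(theta - gamma * g, gamma * lam)

print(np.round(theta, 2))   # sparse estimate close to theta_true
```

The proximal step recovers exact zeros despite the noisy gradients, which is the point of combining stochastic approximation with a prox rather than plain stochastic gradient descent.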


17.
We propose a penalized minimum φ-divergence estimator for parameter estimation and variable selection in logistic regression. Using an appropriate penalty function, we show that the penalized φ-divergence estimator has the oracle property. With probability tending to 1, the penalized φ-divergence estimator identifies the true model and estimates the nonzero coefficients as efficiently as if the sparsity of the true model were known in advance. The advantage of the penalized φ-divergence estimator is that it estimates nonzero parameters more efficiently than the penalized maximum likelihood estimator when the sample size is small, and is equivalent to it for large samples. Numerical simulations confirm our findings.

18.
Unobservable individual effects in duration models cause estimation bias in the structural parameters as well as in the duration dependence. The maximum penalized likelihood estimator is examined as an estimator for the survivor model with heterogeneity. Proofs of the existence and uniqueness of the maximum penalized likelihood estimator in duration models with general forms of unobserved heterogeneity are provided. Some small-sample evidence on the behavior of the maximum penalized likelihood estimator is given. The maximum penalized likelihood estimator is shown to be computationally feasible and to provide reasonable estimates in most cases.

19.
In this paper, we propose a new iterative sparse algorithm (ISA) to compute the maximum likelihood estimator (MLE) or penalized MLE of the mixed effects model. The sparse approximation based on the arrow-head (A-H) matrix is one solution popularly used in practice. The A-H method provides easy computation of the inverse of the Hessian matrix and is computationally efficient. However, it often incurs non-negligible error in approximating the inverse of the Hessian matrix and in the resulting estimates. Unlike the A-H method, the ISA applies the sparse approximation iteratively to reduce the approximation error at each Newton–Raphson step. The advantages of the ISA over the exact and A-H methods are illustrated using several synthetic and real examples.

20.
In finite mixtures of location–scale distributions, if there is no constraint or penalty on the parameters, then the maximum likelihood estimator does not exist because the likelihood is unbounded. To avoid this problem, we consider a penalized likelihood, where the penalty is a function of the minimum of the ratios of the scale parameters and of the sample size. It is shown that the penalized maximum likelihood estimator is strongly consistent. We also analyse the consistency of a penalized maximum likelihood estimator where the penalty is imposed on the scale parameters themselves.
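One way such a penalty on the minimum scale ratio can work is sketched below. The functional form -c/(n*m) is an assumption chosen for illustration, not necessarily the penalty analysed in the paper:

```python
import numpy as np

def min_scale_ratio_penalty(sigmas, n, c=1.0):
    """Illustrative penalty -c/(n*m), where m is the minimum ratio
    sigma_j/sigma_k over ordered pairs (assumed form).  It diverges to
    -infinity as any scale collapses relative to another, faster than
    the log(1/m) rate at which the mixture likelihood can blow up,
    and its influence fades as the sample size n grows."""
    s = np.asarray(sigmas, dtype=float)
    m = (s[:, None] / s[None, :]).min()   # minimum scale ratio, <= 1
    return -c / (n * m)

# A collapsing scale drives the penalty to -infinity much faster than the
# likelihood can grow, so the penalized likelihood stays bounded above.
for s2 in (1.0, 1e-2, 1e-4):
    print(f"s2={s2:g}  penalty={min_scale_ratio_penalty([1.0, s2], n=100):.4g}")
```

Because the penalty scales like 1/n, it vanishes asymptotically at any fixed parameter value, which is what allows the penalized estimator to remain consistent.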

