Similar Literature
20 similar documents retrieved.
1.
Penalized Maximum Likelihood Estimator for Normal Mixtures
The estimation of the parameters of a mixture of Gaussian densities is considered within the framework of maximum likelihood. Because the likelihood function is unbounded, the maximum likelihood estimator fails to exist. We adopt a solution to this degeneracy that consists in penalizing the likelihood function. The resulting penalized likelihood function is bounded over the parameter space, so the existence of the penalized maximum likelihood estimator is guaranteed. As an original contribution we provide asymptotic properties, and in particular a consistency proof, for the penalized maximum likelihood estimator. Numerical examples in the finite-sample case show the performance of the penalized estimator compared with the standard one.
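
The abstract does not specify the penalty, but one common choice that bounds the mixture likelihood is an inverse-gamma-type penalty on each component variance. The sketch below (a minimal illustration, not the authors' estimator; the constants a and b and the function name are made up for the example) shows how such a penalty enters the EM variance update and keeps components from collapsing onto single observations:

import numpy as np

def penalized_em_gmm(x, K=2, a=2.0, b=0.5, n_iter=200, seed=0):
    # EM for a one-dimensional Gaussian mixture with an inverse-gamma-type penalty
    # on each component variance.  The penalty keeps the penalized likelihood
    # bounded and prevents a variance from collapsing onto a single observation.
    rng = np.random.default_rng(seed)
    n = len(x)
    w = np.full(K, 1.0 / K)
    mu = rng.choice(x, K, replace=False)
    sig2 = np.full(K, np.var(x))
    for _ in range(n_iter):
        # E-step: posterior component probabilities (responsibilities)
        logd = (-0.5 * np.log(2 * np.pi * sig2)
                - 0.5 * (x[:, None] - mu) ** 2 / sig2 + np.log(w))
        logd -= logd.max(axis=1, keepdims=True)
        r = np.exp(logd)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weights and means as usual; variances shrunk toward b/(a+1)
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        ss = (r * (x[:, None] - mu) ** 2).sum(axis=0)
        sig2 = (ss + 2.0 * b) / (nk + 2.0 * (a + 1.0))
    return w, mu, np.sqrt(sig2)

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(4.0, 0.5, 50)])
print(penalized_em_gmm(x, K=2))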

2.
We consider the problem of detecting a ‘bump’ in the intensity of a Poisson process or in a density. We analyze two types of likelihood ratio‐based statistics, which allow for exact finite sample inference and asymptotically optimal detection: the maximum of the penalized square root of log likelihood ratios (‘penalized scan’) evaluated over a certain sparse set of intervals, and a certain average of log likelihood ratios (‘condensed average likelihood ratio’). We show that penalizing the square root of the log likelihood ratio, rather than the log likelihood ratio itself, leads to a simple penalty term that yields optimal power. The penalty derived in this way may prove useful for other problems that involve a Brownian bridge in the limit. The second key tool is an approximating set of intervals that is rich enough to allow for optimal detection, but sparse enough that the validity of the penalization scheme can be justified simply via the union bound. This results in a considerable simplification of the theoretical treatment compared with the usual approach for this type of penalization technique, which requires establishing an exponential inequality for the variation of the test statistic. Another advantage of using the sparse approximating set is that it allows fast computation in nearly linear time. We present a simulation study that illustrates the superior performance of the penalized scan and of the condensed average likelihood ratio compared with the standard scan statistic.
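
As a rough illustration of the construction only (the paper's exact optimal penalty and its approximating set of intervals are not reproduced here), the sketch below scans a dyadic collection of intervals for a bump in a density on [0, 1], computes the log likelihood ratio of "elevated on I" versus "uniform", and penalizes the square root of that ratio by a term that grows as the interval shrinks:

import numpy as np

def penalized_scan(x, n_levels=6):
    # Scan a sparse dyadic collection of intervals on [0, 1] for a bump in a density.
    # For an interval I containing k of the n observations, the log likelihood ratio of
    # "elevated on I" versus "uniform" is k*log(k/(n|I|)) + (n-k)*log((n-k)/(n(1-|I|))).
    # sqrt(2*logLR) is penalized by sqrt(2*log(1/|I|)); this penalty is only a
    # stand-in for the optimal penalty of the paper.
    x = np.sort(x)
    n = len(x)
    best, best_interval = -np.inf, None
    for level in range(1, n_levels + 1):
        edges = np.linspace(0.0, 1.0, 2 ** level + 1)
        m = len(edges) - 1
        for i in range(m):
            for j in range(i + 1, m + 1):
                length = edges[j] - edges[i]
                if length >= 1.0:
                    continue
                k = np.searchsorted(x, edges[j]) - np.searchsorted(x, edges[i])
                if k == 0 or k / n <= length:
                    continue  # no evidence of an elevated density on this interval
                if k < n:
                    loglr = (k * np.log(k / (n * length))
                             + (n - k) * np.log((n - k) / (n * (1.0 - length))))
                else:
                    loglr = k * np.log(k / (n * length))
                stat = np.sqrt(2.0 * loglr) - np.sqrt(2.0 * np.log(1.0 / length))
                if stat > best:
                    best, best_interval = stat, (edges[i], edges[j])
    return best, best_interval

rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(0, 1, 400), rng.uniform(0.4, 0.5, 60)])
print(penalized_scan(x))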

3.
We construct bootstrap confidence intervals for smoothing spline estimates based on Gaussian data, and for penalized likelihood smoothing spline estimates based on data from exponential families. Several variations of bootstrap confidence intervals are considered and compared. We find that the commonly used bootstrap percentile intervals are inferior to the T intervals and to intervals based on bootstrap estimation of mean squared errors. The best variations of the bootstrap confidence intervals behave similarly to the well-known Bayesian confidence intervals. These bootstrap confidence intervals have an average coverage probability across the function being estimated, as opposed to a pointwise property.
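
A minimal sketch of one of the variants compared in such studies, the bootstrap percentile interval, built from a residual bootstrap around a smoothing spline fit; the smoother and its smoothing parameter here are generic stand-ins rather than the authors' penalized likelihood spline, and the last line illustrates the "average coverage across the function" idea:

import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0, 1, n))
f_true = np.sin(2 * np.pi * x)
y = f_true + rng.normal(0, 0.3, n)

fit = UnivariateSpline(x, y, s=n * 0.3 ** 2)     # smoothing spline fit
resid = y - fit(x)

B = 500
grid = np.linspace(0.05, 0.95, 50)
boot_curves = np.empty((B, len(grid)))
for b in range(B):
    y_star = fit(x) + rng.choice(resid, n, replace=True)   # residual bootstrap sample
    boot_curves[b] = UnivariateSpline(x, y_star, s=n * 0.3 ** 2)(grid)

# Pointwise 95% percentile intervals for the regression function on the grid
lower = np.percentile(boot_curves, 2.5, axis=0)
upper = np.percentile(boot_curves, 97.5, axis=0)
f_grid = np.sin(2 * np.pi * grid)
coverage = np.mean((f_grid >= lower) & (f_grid <= upper))
print("average pointwise coverage across the grid:", coverage)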

4.
We present a maximum likelihood estimation procedure for the multivariate frailty model. The estimation is based on a Monte Carlo EM algorithm. The expectation step is approximated by averaging over random samples drawn from the posterior distribution of the frailties using rejection sampling. The maximization step reduces to a standard partial likelihood maximization. We also propose a simple rule, based on the relative change in the parameter estimates, to decide on the sample size at each iteration and on a stopping time for the algorithm. An important new feature is that absolute convergence of the algorithm is obtained through sample size determination and an efficient sampling technique. The method is illustrated using a rat carcinogenesis dataset and data on vase lifetimes of cut roses. The estimation results are compared with approximate inference based on penalized partial likelihood using these two examples. Unlike penalized partial likelihood estimation, the proposed full maximum likelihood estimation method accounts for all the uncertainty when estimating standard errors for the parameters.

5.
We propose a penalized empirical likelihood method via a bridge estimator in Cox's proportional hazards model for parameter estimation and variable selection. Under reasonable conditions, we show that penalized empirical likelihood in Cox's proportional hazards model has the oracle property. A penalized empirical likelihood ratio for the vector of regression coefficients is defined, and its limiting distribution is shown to be a chi-square distribution. The advantage of penalized empirical likelihood as a nonparametric likelihood approach is illustrated by testing hypotheses and constructing confidence sets. The method is illustrated by extensive simulation studies and a real example.

6.
We propose a general family of nonparametric mixed effects models. Smoothing splines are used to model the fixed effects and are estimated by maximizing the penalized likelihood function. The random effects are generic and are modelled parametrically by assuming that the covariance function depends on a parsimonious set of parameters. These parameters and the smoothing parameter are estimated simultaneously by the generalized maximum likelihood method. We derive a connection between a nonparametric mixed effects model and a linear mixed effects model. This connection suggests a way of fitting a nonparametric mixed effects model by using existing programs. The classical two-way mixed models and growth curve models are used as examples to demonstrate how to use smoothing spline analysis-of-variance decompositions to build nonparametric mixed effects models. Similarly to the classical analysis of variance, components of these nonparametric mixed effects models can be interpreted as main effects and interactions. The penalized likelihood estimates of the fixed effects in a two-way mixed model are extensions of James–Stein shrinkage estimates to correlated observations. In an example, three nested nonparametric mixed effects models are fitted to a longitudinal data set.

7.
In finite mixtures of location–scale distributions, if there is no constraint or penalty on the parameters, then the maximum likelihood estimator does not exist because the likelihood is unbounded. To avoid this problem, we consider a penalized likelihood, where the penalty is a function of the minimum of the ratios of the scale parameters and the sample size. It is shown that the penalized maximum likelihood estimator is strongly consistent. We also analyse the consistency of a penalized maximum likelihood estimator where the penalty is imposed on the scale parameters themselves.

8.
Multi-state Models in Epidemiology
I first discuss the main assumptions that can be made for multi-state models: the time-homogeneity and semi-Markov assumptions, the problem of the choice of time scale, the assumption of homogeneity of the population, and assumptions about the way the observations are incomplete, leading to truncation and censoring. The influence of covariates, different durations and time-dependent variables is synthesized using explanatory processes, and a general additive model for transition intensities is presented. Different inference approaches, including penalized likelihood, are considered. Finally, three examples of application in epidemiology are presented and some references to other works are given.

9.
We discuss the maximum likelihood estimates (MLEs) of the parameters of the log-gamma distribution based on progressively Type-II censored samples. We use the profile likelihood approach to tackle the estimation of the shape parameter κ. We derive approximate maximum likelihood estimators of the parameters μ and σ and use them as initial values in the determination of the MLEs through the Newton–Raphson method. Next, we discuss the EM algorithm and propose a modified EM algorithm for the determination of the MLEs. A simulation study is conducted to evaluate the bias and mean square error of these estimators and to examine their behavior as the progressive censoring scheme and the shape parameter vary. We also discuss the interval estimation of the parameters μ and σ and show that the intervals based on the asymptotic normality of MLEs have very poor coverage probabilities for small values of m. Finally, we present two examples to illustrate all the methods of inference discussed in this paper.
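
The log-gamma formulas are involved, but the likelihood under progressive Type-II censoring always has the generic form prod f(x_i)[1 - F(x_i)]^{R_i}, where R_i units are withdrawn after the i-th observed failure. The sketch below (using a normal location-scale model purely as a stand-in for the log-gamma with known shape, and a general-purpose optimizer instead of the Newton-Raphson or EM routes of the article) simulates such a sample and maximizes this likelihood numerically:

import numpy as np
from scipy import stats, optimize

def progressive_type2_sample(lifetimes, removals, seed=0):
    # Simulate progressive Type-II censoring: after the i-th observed failure,
    # removals[i] surviving units are withdrawn at random from the test.
    rng = np.random.default_rng(seed)
    alive = list(np.sort(lifetimes))
    observed = []
    for r in removals:
        observed.append(alive.pop(0))            # smallest remaining lifetime fails next
        for _ in range(r):
            alive.pop(rng.integers(len(alive)))  # withdraw a surviving unit at random
    return np.array(observed)

def neg_loglik(theta, x, R):
    # Generic progressively censored log-likelihood: sum log f(x_i) + R_i * log S(x_i)
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return -(stats.norm.logpdf(x, mu, sigma).sum()
             + (R * stats.norm.logsf(x, mu, sigma)).sum())

rng = np.random.default_rng(1)
n, removals = 30, np.array([2] * 10)             # m = 10 observed, n - m = 20 removed
x = progressive_type2_sample(rng.normal(5.0, 2.0, n), removals)
res = optimize.minimize(neg_loglik, x0=[x.mean(), np.log(x.std())], args=(x, removals))
print("MLE of (mu, sigma):", res.x[0], np.exp(res.x[1]))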

10.
Recurrent event data arise in many biomedical and engineering studies when failure events can occur repeatedly over time for each study subject. In this article, we are interested in nonparametric estimation of the hazard function for gap time. A penalized likelihood model is proposed to estimate the hazard as a function of both gap time and covariate. A method for smoothing parameter selection is developed from subject-wise cross-validation. Confidence intervals for the hazard function are derived using the Bayes model of the penalized likelihood. An eigenvalue analysis establishes the asymptotic convergence rates of the relevant estimates. Empirical studies are performed to evaluate various aspects of the method. The proposed technique is demonstrated through an application to the well-known bladder tumor data.

11.
The problem of predicting times to failure of units from the exponential distribution that are censored under a simple step-stress model is considered in this article. We discuss two types of censoring, regular and progressive Type-I, and two kinds of predictors, the maximum likelihood predictors (MLP) and the conditional median predictors (CMP), for each type of censoring. Numerical examples are used to illustrate the prediction methods. Using simulation studies, the mean squared prediction error (MSPE) and prediction intervals are generated for these examples. The MLP and the CMP are then compared with respect to the MSPE and the prediction intervals.
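
For the exponential distribution the conditional median predictor has a simple closed form because of the lack-of-memory property: given survival past the censoring time c, the median residual life is ln(2)/λ. A minimal sketch (ordinary Type-I censoring at a single stress level with a plug-in MLE of λ; the step-stress and MLP details of the article are omitted):

import numpy as np

rng = np.random.default_rng(0)
lam_true, n, c = 0.5, 40, 2.0                    # rate, sample size, censoring time
t = rng.exponential(1.0 / lam_true, n)
observed = np.minimum(t, c)
failed = t <= c

# MLE of the exponential rate from Type-I censored data:
# number of observed failures divided by the total time on test.
lam_hat = failed.sum() / observed.sum()

# Conditional median predictor of the failure time for each censored unit:
# by memorylessness, the median residual life ln(2)/lambda is added to c.
cmp_pred = c + np.log(2.0) / lam_hat

censored_truth = t[~failed]
mspe = np.mean((censored_truth - cmp_pred) ** 2)
print(f"lambda_hat = {lam_hat:.3f}, CMP = {cmp_pred:.3f}, MSPE = {mspe:.3f}")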

12.
Unobservable individual effects in duration models cause bias in the estimates of both the structural parameters and the duration dependence. The maximum penalized likelihood estimator is examined as an estimator for the survivor model with heterogeneity. Proofs of the existence and uniqueness of the maximum penalized likelihood estimator in duration models with general forms of unobserved heterogeneity are provided. Some small-sample evidence on the behavior of the maximum penalized likelihood estimator is given. The maximum penalized likelihood estimator is shown to be computationally feasible and to provide reasonable estimates in most cases.

13.
In this article we propose a penalized likelihood approach for the semiparametric density model with parametric and nonparametric components. An efficient iterative procedure is proposed for estimation. An approximate generalized maximum likelihood criterion, derived from a Bayesian point of view, is used to select the smoothing parameter. The finite sample performance of the proposed estimation approach is evaluated through simulation. Two real data examples, suicide study data and Old Faithful geyser data, are analyzed to demonstrate the use of the proposed method.

14.
We discuss a new way of constructing pointwise confidence intervals for the distribution function in the current status model. The confidence intervals are based on the smoothed maximum likelihood estimator, using local smooth functional theory and normal limit distributions. Bootstrap methods for constructing these intervals are considered. Other methods to construct confidence intervals, using the non‐standard limit distribution of the (restricted) maximum likelihood estimator, are compared with our approach via simulations and real data applications.
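
For context on the estimator behind these intervals: with current status data, the nonparametric MLE of the distribution function at the observation times is the isotonic regression of the current-status indicators ordered by observation time; the smoothed MLE of the paper then smooths this further. A small sketch using the pool-adjacent-violators algorithm (the smoothing and bootstrap steps are not shown):

import numpy as np

def pava(y, w):
    # Pool-adjacent-violators: weighted isotonic (non-decreasing) regression of y.
    y, w = list(map(float, y)), list(map(float, w))
    vals, wts, sizes = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); sizes.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            pooled = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            wts[-2] += wts[-1]; sizes[-2] += sizes[-1]; vals[-2] = pooled
            vals.pop(); wts.pop(); sizes.pop()
    return np.repeat(vals, sizes)

# Current status data: each subject is inspected once at time t; we only learn
# whether the event has already occurred (delta = 1) or not yet (delta = 0).
rng = np.random.default_rng(0)
n = 300
event = rng.exponential(1.0, n)       # unobserved event times
t = rng.uniform(0, 3, n)              # inspection times
delta = (event <= t).astype(float)

order = np.argsort(t)
F_hat = pava(delta[order], np.ones(n))   # NPMLE of F at the ordered inspection times
print(np.column_stack([t[order][:5], F_hat[:5]]))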

15.
We propose a penalized minimum φ-divergence estimator for parameter estimation and variable selection in logistic regression. Using an appropriate penalty function, we show that the penalized φ-divergence estimator has the oracle property: with probability tending to 1, it identifies the true model and estimates the nonzero coefficients as efficiently as if the sparsity of the true model were known in advance. The advantage of the penalized φ-divergence estimator is that it estimates the nonzero parameters more efficiently than the penalized maximum likelihood estimator when the sample size is small, and is equivalent to it for large samples. Numerical simulations confirm our findings.
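
The φ-divergence machinery is not reproduced here; as a deliberately simplified stand-in, the sketch below takes the likelihood (Kullback-Leibler) case with an L1 penalty, fitted by proximal gradient descent, only to illustrate penalized estimation with variable selection in logistic regression. The oracle-type penalty of the paper would replace the plain soft-thresholding step:

import numpy as np

def l1_logistic(X, y, lam=0.05, lr=0.1, n_iter=3000):
    # Proximal gradient (ISTA) for L1-penalized logistic regression.
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (p_hat - y) / n              # gradient of the average log-loss
        beta = beta - lr * grad
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)  # soft-threshold
    return beta

rng = np.random.default_rng(0)
n, p = 400, 10
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -2.0, 0.0, 0.0, 1.0] + [0.0] * 5)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))
print(np.round(l1_logistic(X, y), 2))   # exact zeros indicate de-selected covariates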

16.
In this paper, we propose a penalized likelihood method to simultaneously select covariates and mixing components and to obtain parameter estimates in the localized mixture-of-experts model. We develop an expectation-maximization algorithm to solve the proposed penalized likelihood procedure, and introduce a data-driven procedure to select the tuning parameters. Extensive numerical studies are carried out to compare the finite sample performance of our proposed method with that of other existing methods. Finally, we apply the proposed methodology to analyze the Boston housing price data set and the baseball salaries data set.

17.
Likelihood inference for discretely observed Markov jump processes with finite state space is investigated. The existence and uniqueness of the maximum likelihood estimator of the intensity matrix are studied; this topic is closely related to the imbedding problem for Markov chains. It is demonstrated that the maximum likelihood estimator can be found either by the EM algorithm or by a Markov chain Monte Carlo procedure. When the maximum likelihood estimator does not exist, an estimator can be obtained by using a penalized likelihood function or by the Markov chain Monte Carlo procedure with a suitable prior. The methodology and its implementation are illustrated by examples and simulation studies.
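
For a small state space the likelihood of the discretely observed chain can also be maximized directly, since the transition matrix over a sampling interval Δ is exp(QΔ). The sketch below does this numerically for a two-state chain; it is only an illustration and sidesteps the EM and MCMC machinery and the existence issues studied in the paper:

import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def neg_loglik(log_rates, states, delta):
    a, b = np.exp(log_rates)                      # transition rates 0->1 and 1->0
    Q = np.array([[-a, a], [b, -b]])
    P = expm(Q * delta)                           # transition matrix over one interval
    logP = np.log(np.clip(P, 1e-300, None))
    return -logP[states[:-1], states[1:]].sum()

# Simulate a two-state Markov jump process observed every delta time units.
rng = np.random.default_rng(0)
a_true, b_true, delta, n_obs = 1.0, 0.5, 0.3, 500
P_true = expm(np.array([[-a_true, a_true], [b_true, -b_true]]) * delta)
states = np.zeros(n_obs, dtype=int)
for k in range(1, n_obs):
    row = P_true[states[k - 1]]
    states[k] = rng.choice(2, p=row / row.sum())

res = minimize(neg_loglik, x0=np.log([0.5, 0.5]), args=(states, delta))
print("estimated transition rates:", np.exp(res.x))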

18.
We consider robust methods of likelihood and frequentist inference for the nonlinear parameter, say α, in conditionally linear nonlinear regression models. We derive closed-form expressions for robust conditional, marginal, profile and modified profile likelihood functions for α under elliptically contoured data distributions. Next, we develop robust exact-F confidence intervals for α and consider robust Fieller intervals for ratios of regression parameters in linear models. Several well-known examples are considered and Monte Carlo simulation results are presented.

19.
Empirical Bayes is a versatile approach to “learn from a lot” in two ways: first, from a large number of variables and, second, from a potentially large amount of prior information, for example, stored in public repositories. We review applications of a variety of empirical Bayes methods to several well‐known model‐based prediction methods, including penalized regression, linear discriminant analysis, and Bayesian models with sparse or dense priors. We discuss “formal” empirical Bayes methods that maximize the marginal likelihood but also more informal approaches based on other data summaries. We contrast empirical Bayes to cross‐validation and full Bayes and discuss hybrid approaches. To study the relation between the quality of an empirical Bayes estimator and p, the number of variables, we consider a simple empirical Bayes estimator in a linear model setting. We argue that empirical Bayes is particularly useful when the prior contains multiple parameters, which model a priori information on variables termed “co‐data”. In particular, we present two novel examples that allow for co‐data: first, a Bayesian spike‐and‐slab setting that facilitates inclusion of multiple co‐data sources and types and, second, a hybrid empirical Bayes–full Bayes ridge regression approach for estimation of the posterior predictive interval.
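
A minimal example of the "simple empirical Bayes estimator in a linear model setting" idea, in its textbook normal-means form (not the spike-and-slab or co-data models of the paper): the prior variance is estimated from the marginal spread of all p estimates and plugged into the posterior-mean shrinkage formula:

import numpy as np

rng = np.random.default_rng(0)
p, sigma2 = 500, 1.0
theta = rng.normal(0.0, 2.0, p)                  # true effects
x = theta + rng.normal(0.0, np.sqrt(sigma2), p)  # one noisy estimate per effect

# Empirical Bayes: under theta_i ~ N(0, tau2), marginally x_i ~ N(0, tau2 + sigma2),
# so tau2 is estimated from the observed spread of all p estimates ("learning from a lot").
tau2_hat = max(np.mean(x ** 2) - sigma2, 0.0)

# Plug-in posterior means shrink each estimate toward the prior mean 0.
shrink = tau2_hat / (tau2_hat + sigma2)
theta_eb = shrink * x

print("MSE of raw estimates:", np.mean((x - theta) ** 2))
print("MSE of EB estimates :", np.mean((theta_eb - theta) ** 2))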

20.
A penalized likelihood approach to the estimation of calibration factors in positron emission tomography (PET) is considered, in particular the problem of estimating the efficiency of PET detectors. Varying efficiencies among the detectors create non-uniform performance, and failure to account for the non-uniformities would lead to streaks in the image, so efficient estimation of the non-uniformities is desirable to reduce the propagation of noise to the final image. The relevant data set is provided by a blank scan, from which a model may be derived that depends only on the sources affecting non-uniformities: inherent variation among the detector crystals and geometric effects. Physical considerations suggest a novel mixed inverse model with random crystal effects and smooth geometric effects. Using appropriate penalty terms, the penalized maximum likelihood estimates are derived and an efficient computational algorithm utilizing the fast Fourier transform is developed. Data-driven shrinkage and smoothing parameters are chosen to minimize an estimate of the predictive loss function. Various examples indicate that the proposed approach works well computationally and compares well with the standard method.

