Similar Documents

20 similar documents found.
1.
A generalized self-consistency approach to maximum likelihood estimation (MLE) and model building was developed in Tsodikov [2003. Semiparametric models: a generalized self-consistency approach. J. Roy. Statist. Soc. Ser. B Statist. Methodology 65(3), 759–774] and applied to a survival analysis problem. We extend the framework to obtain second-order results such as the information matrix and properties of the variance. The multinomial model motivates the paper and is used throughout as an example. Computational challenges with the multinomial likelihood motivated Baker [1994. The Multinomial–Poisson transformation. The Statistician 43, 495–504] to develop the Multinomial–Poisson (MP) transformation for a large variety of regression models with a multinomial likelihood kernel. Multinomial regression is transformed into a Poisson regression at the cost of augmenting model parameters and restricting the problem to discrete covariates. Imposing normalization restrictions by means of Lagrange multipliers [Lang, J., 1996. On the comparison of multinomial and Poisson log-linear models. J. Roy. Statist. Soc. Ser. B Statist. Methodology 58, 253–266] justifies the approach. Using the self-consistency framework we develop an alternative solution to multinomial model fitting that does not require augmenting parameters while allowing for a Poisson likelihood and arbitrary covariate structures. Normalization restrictions are imposed by averaging over artificial “missing data” (a fake mixture). The lack of a probabilistic interpretation at the “complete-data” level makes the use of the generalized self-consistency machinery essential.
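The equivalence that the MP transformation exploits can be checked numerically: substituting μ_k = N·p_k makes the Poisson log-likelihood differ from the multinomial kernel only by a constant not involving p, so both are maximized by the same probability vector. A minimal sketch (not Baker's construction or the paper's self-consistency machinery; function names and the toy counts are ours):

```python
import math

def multinomial_kernel(counts, p):
    # multinomial log-likelihood kernel: sum_k n_k * log(p_k)
    return sum(n * math.log(pk) for n, pk in zip(counts, p))

def poisson_loglik(counts, mu):
    # Poisson log-likelihood, dropping the log(n_k!) terms
    return sum(n * math.log(m) - m for n, m in zip(counts, mu))

counts = [5, 9, 6]
N = sum(counts)
# for any probability vector p, set mu_k = N * p_k; the difference of the two
# log-likelihoods is N*log(N) - N, independent of p
diffs = [poisson_loglik(counts, [N * pk for pk in p]) - multinomial_kernel(counts, p)
         for p in ([0.2, 0.5, 0.3], [0.4, 0.4, 0.2], [0.1, 0.8, 0.1])]
```

Because the difference is constant in p, maximizing the Poisson likelihood over the unnormalized rates and renormalizing recovers the multinomial MLE.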

2.
In a multinomial model, the sample space is partitioned into a disjoint union of cells. The partition is usually immutable during sampling of the cell counts. In this paper, we extend the multinomial model to the incomplete multinomial model by relaxing the constant partition assumption to allow the cells to be variable and the counts collected from non-disjoint cells to be modeled in an integrated manner for inference on the common underlying probability. The incomplete multinomial likelihood is parameterized by the complete-cell probabilities from the most refined partition. Its sufficient statistics include the variable-cell formation observed as an indicator matrix and all cell counts. With externally imposed structures on the cell formation process, it reduces to special models including the Bradley–Terry model, the Plackett–Luce model, etc. Since the conventional method, which solves for the zeros of the score functions, is unfruitful, we develop a new approach to establishing a simpler set of estimating equations to obtain the maximum likelihood estimate (MLE), which seeks the simultaneous maximization of all multiplicative components of the likelihood by fitting each component into an inequality. As a consequence, our estimation amounts to solving a system of the equality attainment conditions to the inequalities. The resultant MLE equations are simple and immediately invite a fixed-point iteration algorithm for solution, which is referred to as the weaver algorithm. The weaver algorithm is short and amenable to parallel implementation. We also derive the asymptotic covariance of the MLE, verify main results with simulations, and compare the weaver algorithm with an MM/EM algorithm based on fitting a Plackett–Luce model to a benchmark data set.
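The weaver algorithm itself is specific to the paper, but the fixed-point flavor of such MLE iterations can be illustrated on the Bradley–Terry special case with the classical MM update (Hunter-style); the function name and toy data below are ours:

```python
def bradley_terry_mle(wins, n_iter=200):
    """Fixed-point (MM) iteration for Bradley-Terry strengths.
    wins[i][j] = number of times item i beat item j."""
    m = len(wins)
    total_wins = [sum(row) for row in wins]
    w = [1.0] * m
    for _ in range(n_iter):
        new_w = []
        for i in range(m):
            # MM denominator: sum over opponents of n_ij / (w_i + w_j)
            denom = sum((wins[i][j] + wins[j][i]) / (w[i] + w[j])
                        for j in range(m) if j != i)
            new_w.append(total_wins[i] / denom)
        s = sum(new_w)
        w = [x / s for x in new_w]   # normalize strengths to sum to 1
    return w

# A beat B 3 times, B beat A once; the MLE gives P(A beats B) = 3/4
w = bradley_terry_mle([[0, 3], [1, 0]])
```

Each sweep increases the likelihood, and the normalized strengths converge to the MLE.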

3.
A multivariate binary distribution that incorporates the correlation between individual variables is considered. The availability of auxiliary information taking the form of simple ordering constraints on their expected values is assumed. The problem of constructing constraint-preserving estimates of the expectations is formulated as conditional maximization of a convex likelihood function for the corresponding multinomial distribution with suitably chosen restrictions. Starting values for convex optimization algorithms are proposed. The proposed estimator is consistent under mild assumptions.

4.
In this work, the multinomial mixture model is studied, through a maximum likelihood approach. The convergence of the maximum likelihood estimator to a set with characteristics of interest is shown. A method to select the number of mixture components is developed based on the form of the maximum likelihood estimator. A simulation study is then carried out to verify its behavior. Finally, two applications on real data of multinomial mixtures are presented.
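A textbook EM fit of a multinomial mixture makes the model concrete (this is the standard algorithm, not the paper's estimator or its component-selection method; the function name, initialization, and toy data are ours):

```python
import math

def em_multinomial_mixture(X, k, n_iter=100):
    """Textbook EM for a k-component multinomial mixture.
    X: list of count vectors, all of the same length K."""
    K = len(X[0])
    # deterministic, slightly asymmetric initialization to break symmetry
    p = [[1.0 + 0.5 * ((i + c) % 2) for i in range(K)] for c in range(k)]
    p = [[v / sum(row) for v in row] for row in p]
    pi = [1.0 / k] * k
    for _ in range(n_iter):
        # E-step: responsibilities r[n][c] proportional to pi_c * prod_j p_cj^x_nj
        R = []
        for x in X:
            logw = [math.log(pi[c]) +
                    sum(xj * math.log(p[c][j]) for j, xj in enumerate(x))
                    for c in range(k)]
            mx = max(logw)
            w = [math.exp(l - mx) for l in logw]
            s = sum(w)
            R.append([v / s for v in w])
        # M-step: responsibility-weighted frequencies
        for c in range(k):
            pi[c] = sum(r[c] for r in R) / len(X)
            tot = [sum(R[n][c] * X[n][j] for n in range(len(X))) for j in range(K)]
            s = sum(tot)
            p[c] = [t / s for t in tot]
    return pi, p

# two clearly separated count profiles over two categories
pi, p = em_multinomial_mixture([[9, 1], [8, 2], [1, 9], [2, 8]], k=2)
```

On this toy data the two fitted component profiles separate cleanly, one concentrating on each category.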

5.
The expectation–maximization (EM) algorithm is a widely used approach for estimating the parameters of multivariate multinomial mixtures in a latent class model. However, this approach has unsatisfactory computing efficiency. This study proposes a fuzzy clustering algorithm (FCA) based on both the maximum penalized likelihood (MPL) for the latent class model and the modified penalty fuzzy c-means (PFCM) for normal mixtures. Numerical examples confirm that the FCA-MPL algorithm is more efficient (that is, requires fewer iterations) and more computationally effective (measured by the approximate relative ratio of accurate classification) than the EM algorithm.

6.
This article proposes the use of optimization techniques and tools to maximize the likelihood if maximization cannot be easily accomplished with standard statistical software. In such situations, the use of the programming language AMPL with the freely available optimization solvers under the NEOS Server is an attractive alternative to algorithms developed for specific optimization problems in statistics. This article is meant to be a short tutorial introducing statisticians to these methods and tools. We provide an example to illustrate these methods. The necessary files for maximization are included in the Appendix so that the reader can carry out the optimization procedure described.

7.
This paper compares the application of different versions of the simulated counterparts of the Wald test, the score test, and the likelihood ratio test in one- and multiperiod multinomial probit models. Monte Carlo experiments show that the use of the simple form of the simulated likelihood ratio test delivers relatively robust results regarding the testing of several multinomial probit model specifications. In contrast, the inclusion of the Hessian matrix of the simulated loglikelihood function into the simulated score test and (in the multiperiod multinomial probit model) particularly the inclusion of the quasi-maximum likelihood theory into the simulated likelihood ratio test leads to substantial computational problems. The combined application of the quasi-maximum likelihood theory with the simulated Wald test or the simulated score test is not systematically superior to the application of the other versions of these two simulated classical tests either. Neither an increase in the number of observations nor an increase in the number of random draws in the incorporated Geweke–Hajivassiliou–Keane simulator systematically leads to closer conformity between the frequencies of type I errors and the nominal significance levels. An increase in the number of observations only decreases the frequencies of type II errors, particularly regarding the simulated classical testing of multiperiod multinomial probit model specifications.

8.
This paper deals with the problem of simultaneously estimating the parameters (cell probabilities) of m ≥ 2 independent multinomial distributions under a quadratic loss function. An empirical Bayes estimator is proposed which is shown to have smaller risk than the maximum likelihood estimator for sufficiently large values of mq, where q is a measure of the average diversity of the given multinomial populations. Some numerical results are given on the performance of the proposed estimator.
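The paper's empirical Bayes estimator is not reproduced here; as a rough illustration of how shrinkage can improve on the raw MLE under quadratic loss, a Dirichlet-smoothed estimator pulls the observed cell frequencies toward the uniform distribution (the constant `a` and the function name are ours, not the paper's):

```python
def shrunk_cell_probs(counts, a=1.0):
    """Dirichlet(a, ..., a) smoothing: shrinks the raw MLE n_k/N
    toward the uniform 1/K and never assigns probability zero."""
    N, K = sum(counts), len(counts)
    return [(c + a) / (N + K * a) for c in counts]

p = shrunk_cell_probs([0, 2, 8])
```

The empty cell receives a small positive probability and the dominant cell is pulled below its raw frequency of 0.8.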

9.
Definitions are given for orthogonal parameters in the context of Bayesian inference and likelihood inference. The exact orthogonalizing transformations are derived for both cases, and the connection between the two settings is made precise. These parametrizations simplify the interpretation of likelihood functions and posterior distributions. Further, they make numerical maximization and integration procedures easier to apply. Several applications are studied.

10.
We propose a generic on-line (also sometimes called adaptive or recursive) version of the expectation–maximization (EM) algorithm applicable to latent variable models of independent observations. Compared with the algorithm of Titterington, this approach is more directly connected to the usual EM algorithm and does not rely on integration with respect to the complete-data distribution. The resulting algorithm is usually simpler and is shown to achieve convergence to the stationary points of the Kullback–Leibler divergence between the marginal distribution of the observation and the model distribution at the optimal rate, i.e. that of the maximum likelihood estimator. In addition, the approach proposed is also suitable for conditional (or regression) models, as illustrated in the case of the mixture of linear regressions model.
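The cited algorithm is general; a concrete instance for a two-component Gaussian mixture with known unit variances shows the key idea: each new observation triggers a stochastic-approximation update of running sufficient statistics, followed by an exact M-step. This is a sketch under our own simplifying assumptions (known variances, a polynomially decaying step size), not the paper's exact formulation:

```python
import math
import random

def online_em_gaussian_mixture(stream, mu, pi, var=1.0):
    """Online EM for a two-component Gaussian mixture with known variance:
    running sufficient statistics are updated per observation, then the
    M-step maps them back to (pi, mu)."""
    s0 = list(pi)                          # running E[responsibility]
    s1 = [p * m for p, m in zip(pi, mu)]   # running E[responsibility * y]
    for t, y in enumerate(stream, start=1):
        # E-step for the single new observation
        w = [p * math.exp(-(y - m) ** 2 / (2 * var)) for p, m in zip(pi, mu)]
        z = sum(w)
        r = [x / z for x in w]
        # stochastic-approximation step with gamma_t -> 0
        g = 1.0 / (t + 1) ** 0.6
        s0 = [s + g * (rc - s) for s, rc in zip(s0, r)]
        s1 = [s + g * (rc * y - s) for s, rc in zip(s1, r)]
        # M-step: map statistics back to parameters
        pi = list(s0)
        mu = [b / a for a, b in zip(s0, s1)]
    return pi, mu

random.seed(1)
stream = [random.gauss(-3, 1) if random.random() < 0.5 else random.gauss(3, 1)
          for _ in range(5000)]
pi, mu = online_em_gaussian_mixture(stream, mu=[-1.0, 1.0], pi=[0.5, 0.5])
```

With well-separated components the running estimates settle near the true means of ±3 after a few thousand observations.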

11.
Semiparametric maximum likelihood estimators have recently been proposed for a class of two‐phase, outcome‐dependent sampling models. All of them were “restricted” maximum likelihood estimators, in the sense that the maximization is carried out only over distributions concentrated on the observed values of the covariate vectors. In this paper, the authors give conditions for consistency of these restricted maximum likelihood estimators. They also consider the corresponding unrestricted maximization problems, in which the “absolute” maximum likelihood estimators may then have support on additional points in the covariate space. Their main consistency result also covers these unrestricted maximum likelihood estimators, when they exist for all sample sizes.

12.
The inverse Gaussian-Poisson (two-parameter Sichel) distribution is useful in fitting overdispersed count data. We consider linear models on the mean of a response variable, where the response is in the form of counts exhibiting extra-Poisson variation, and assume an IGP error distribution. We show how maximum likelihood estimation may be carried out using iterative Newton-Raphson IRLS fitting, where GLIM is used for the IRLS part of the maximization. Approximate likelihood ratio tests are given.

13.
The asymmetric Laplace likelihood naturally arises in the estimation of conditional quantiles of a response variable given covariates. The estimation of its parameters entails unconstrained maximization of a concave and non-differentiable function over the real space. In this note, we describe a maximization algorithm based on the gradient of the log-likelihood that generates a finite sequence of parameter values along which the likelihood increases. The algorithm can be applied to the estimation of mixed-effects quantile regression, Laplace regression with censored data, and other models based on Laplace likelihood. In a simulation study and in a number of real-data applications, the proposed algorithm has shown notable computational speed.
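The paper's gradient algorithm is its own; the underlying connection can be shown in a few lines: maximizing the asymmetric Laplace likelihood in its location parameter is the same as minimizing the quantile check loss, and in one dimension a minimizer is always attained at a data point, so brute force over the sample suffices (function names are ours):

```python
def check_loss(u, tau):
    # rho_tau(u) = u * (tau - 1{u < 0}): the negative AL log-likelihood kernel
    return u * (tau - (1 if u < 0 else 0))

def quantile_via_check_loss(y, tau):
    """Minimize sum_i rho_tau(y_i - q) over q; a minimizer is a data point."""
    return min(y, key=lambda q: sum(check_loss(yi - q, tau) for yi in y))

# for tau = 0.25 on 1..10, the minimizer is the empirical 0.25-quantile
q = quantile_via_check_loss([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], tau=0.25)
```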

14.
This article investigates the Farlie–Gumbel–Morgenstern class of models for exchangeable continuous data. We show how the model specification can account for both individual and cluster level covariates, we derive insights from comparisons with the multivariate normal distribution, and we discuss maximum likelihood inference when a sample of independent clusters of varying sizes is available. We propose a method for maximum likelihood estimation which is an alternative to direct numerical maximization of the likelihood that sometimes exhibits non-convergence problems. We describe an algorithm for generating samples from the exchangeable multivariate Farlie–Gumbel–Morgenstern distribution with any marginals, using the structural properties of the distribution. Finally, we present the results of a simulation study designed to assess the properties of the maximum likelihood estimators, and we illustrate the use of the FGM distributions with the analysis of a small data set from a developmental toxicity study.

15.
Many problems in Statistics involve maximizing a multinomial likelihood over a restricted region. In this paper, we consider instead maximizing a weighted multinomial likelihood. We show that a dual problem always exists which is frequently more tractable and that a solution to the dual problem leads directly to a solution of the primal problem. Moreover, the form of the dual problem suggests an iterative algorithm for solving the MLE problem when the constraint region can be written as a finite intersection of cones. We show that this iterative algorithm is guaranteed to converge to the true solution and show that when the cones are isotonic, this algorithm is a version of Dykstra's algorithm (Dykstra, J. Amer. Statist. Assoc. 78 (1983) 837–842) for the special case of least squares projection onto the intersection of isotonic cones. We give several meaningful examples to illustrate our results. In particular, we obtain the nonparametric maximum likelihood estimator of a monotone density function in the presence of selection bias.
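The paper's dual construction and the Dykstra cyclic-projection connection are more general, but the single-cone building block, least-squares projection onto the isotonic (nondecreasing) cone, is the classical pool-adjacent-violators algorithm. An equal-weight sketch (the function name is ours):

```python
def pava(y):
    """Pool-adjacent-violators: least-squares projection of y onto the
    nondecreasing cone, with equal weights."""
    out = []  # stack of [block mean, block size]
    for v in y:
        out.append([float(v), 1])
        # merge adjacent blocks while the monotone order is violated
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, s2 = out.pop()
            m1, s1 = out.pop()
            out.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    return [m for m, s in out for _ in range(s)]

fit = pava([3, 1, 2])
```

Violating values are pooled into blocks whose common value is the block mean, which is exactly the projection onto the isotonic cone.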

16.
Due to the irregularity of finite mixture models, the commonly used likelihood-ratio statistics often have complicated limiting distributions. We propose to add a particular type of penalty function to the log-likelihood function. The resulting penalized likelihood-ratio statistics have simple limiting distributions when applied to finite mixture models with multinomial observations. The method is especially effective in addressing the problems discussed by Chernoff and Lander (1995). The theory developed and simulations conducted show that the penalized likelihood method can give very good results, better than the well-known C(α) procedure, for example. The paper does not, however, fully explore the choice of penalty function and weight. The full potential of the new procedure is to be explored in the future.

17.
Simulation-based inference for partially observed stochastic dynamic models is currently receiving much attention due to the fact that direct computation of the likelihood is not possible in many practical situations. Iterated filtering methodologies enable maximization of the likelihood function using simulation-based sequential Monte Carlo filters. Doucet et al. (2013) developed an approximation for the first and second derivatives of the log likelihood via simulation-based sequential Monte Carlo smoothing and proved that the approximation has some attractive theoretical properties. We investigated an iterated smoothing algorithm carrying out likelihood maximization using these derivative approximations. Further, we developed a new iterated smoothing algorithm, using a modification of these derivative estimates, for which we establish both theoretical results and effective practical performance. On benchmark computational challenges, this method beat the first-order iterated filtering algorithm. The method’s performance was comparable to a recently developed iterated filtering algorithm based on an iterated Bayes map. Our iterated smoothing algorithm and its theoretical justification provide new directions for future developments in simulation-based inference for latent variable models such as partially observed Markov process models.

18.
In binomial or multinomial problems when the parameter space is restricted or truncated to a subset of the natural parameter space, the maximum likelihood estimator (MLE) may be inadmissible under squared error loss. A quite general condition for the inadmissibility of MLEs in such cases can be established using the stepwise Bayes technique and the complete class theorem of Brown.

19.
The conventional Cox proportional hazards regression model contains a loglinear relative risk function, linking the covariate information to the hazard ratio with a finite number of parameters. A generalization, termed the partly linear Cox model, allows for both finite dimensional parameters and an infinite dimensional parameter in the relative risk function, providing a more robust specification of the relative risk function. In this work, a likelihood based inference procedure is developed for the finite dimensional parameters of the partly linear Cox model. To alleviate the problems associated with a likelihood approach in the presence of an infinite dimensional parameter, the relative risk is reparameterized such that the finite dimensional parameters of interest are orthogonal to the infinite dimensional parameter. Inference on the finite dimensional parameters is accomplished through maximization of the profile partial likelihood, profiling out the infinite dimensional nuisance parameter using a kernel function. The asymptotic distribution theory for the maximum profile partial likelihood estimate is established. It is determined that this estimate is asymptotically efficient; the orthogonal reparameterization enables employment of profile likelihood inference procedures without adjustment for estimation of the nuisance parameter. An example from a retrospective analysis in cancer demonstrates the methodology.

20.
A new functional form of the response probability for a qualitative response model is proposed. The new model is flexible enough to avoid the constraint of independence from irrelevant alternatives, which is perceived as a weakness of the multinomial logit model in some applications. It is computationally simpler than the multinomial probit model and is promising for analyzing problems with a moderate number of alternatives.
