Similar Literature
20 similar articles found.
1.
There are a variety of methods in the literature which seek to make iterative estimation algorithms more manageable by breaking the iterations into a greater number of simpler or faster steps. Those algorithms which deal at each step with a proper subset of the parameters are called in this paper partitioned algorithms. Partitioned algorithms in effect replace the original estimation problem with a series of problems of lower dimension. The purpose of the paper is to characterize some of the circumstances under which this process of dimension reduction leads to significant benefits. Four types of partitioned algorithms are distinguished: reduced objective function methods, nested (partial Gauss-Seidel) iterations, zigzag (full Gauss-Seidel) iterations, and leapfrog (non-simultaneous) iterations. Emphasis is given to Newton-type methods using analytic derivatives, but a nested EM algorithm is also given. Nested Newton methods are shown to be equivalent to applying the same Newton method to the reduced objective function, and are applied to separable regression and generalized linear models. Nesting is shown generally to improve the convergence of Newton-type methods, both by improving the quadratic approximation to the log-likelihood and by improving the accuracy with which the observed information matrix can be approximated. Nesting is recommended whenever a subset of parameters is relatively easily estimated. The zigzag method is shown to produce a stable but generally slow iteration; it is fast and recommended when the parameter subsets have approximately uncorrelated estimates. The leapfrog iteration has fewer guaranteed properties in general, but is similar to nesting and zigzagging when the parameter subsets are orthogonal.
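The zigzag (full Gauss-Seidel) scheme described in this abstract can be sketched on a toy two-parameter least-squares objective: each sweep maximizes over one parameter exactly while holding the other fixed. The data and variable names below are illustrative inventions, not from the paper.

```python
# Zigzag (full Gauss-Seidel) iteration: alternate exact updates over the two
# parameter blocks of a least-squares objective sum_i (y_i - a*x1_i - b*x2_i)^2.
def zigzag(x1, x2, y, iters=100):
    a, b = 0.0, 0.0
    for _ in range(iters):
        # update a with b held fixed: one-dimensional least squares on residuals
        r = [yi - b * x2i for yi, x2i in zip(y, x2)]
        a = sum(ri * x1i for ri, x1i in zip(r, x1)) / sum(v * v for v in x1)
        # update b with a held fixed
        r = [yi - a * x1i for yi, x1i in zip(y, x1)]
        b = sum(ri * x2i for ri, x2i in zip(r, x2)) / sum(v * v for v in x2)
    return a, b

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.0, 0.0, 1.0, 0.0]
y = [2.0 * v1 - 1.0 * v2 for v1, v2 in zip(x1, x2)]  # true a = 2, b = -1
a, b = zigzag(x1, x2, y)
```

Consistent with the abstract's remarks, the per-sweep contraction rate of this iteration is governed by the correlation between the two regressors: near-orthogonal blocks converge fast, highly correlated ones slowly.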

2.
This article is concerned with the likelihood-based inference of vector autoregressive models with multivariate scaled t-distributed innovations by applying the EM-based (ECM and ECME) algorithms. The ECM and ECME algorithms, which are analytically quite simple to use, are applied to find the maximum likelihood estimates of the model parameters and then compared based on the computational running time and the accuracy of estimation via a simulation study. The results demonstrate that the ECME is efficient and usable in practice. We also show how the method can be applied to a multivariate dataset.
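The core of such EM-type updates for t-distributed innovations is weighting each observation by its expected precision given the current scale. A minimal one-dimensional analogue (zero-mean Student-t with known degrees of freedom; the data and `nu` below are illustrative, not from the article) is:

```python
# EM iteration for the scale of a zero-mean Student-t model with known df nu.
# E-step: expected precision weights; M-step: weighted second moment.
def em_t_scale(x, nu, iters=200):
    s2 = sum(v * v for v in x) / len(x)  # start from the Gaussian MLE
    for _ in range(iters):
        w = [(nu + 1.0) / (nu + v * v / s2) for v in x]          # E-step
        s2 = sum(wi * v * v for wi, v in zip(w, x)) / len(x)     # M-step
    return s2

x = [0.3, -1.2, 2.5, 0.1, -0.4, 5.0, -0.2, 0.8]
s2 = em_t_scale(x, nu=4.0)
```

At convergence the estimate is a fixed point of the weighted-moment map, with outlying observations automatically downweighted.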

3.
For the problem of testing the homogeneity of the variances in a covariance matrix with a block compound symmetric structure, the likelihood ratio test is derived in this paper. A modification of the test that allows its distribution to be better approximated by the chi-square distribution is also considered. Formulae for calculating approximate sample size and power are derived. Small-sample performances of these tests in the case of two dependent bivariate or trivariate normals are compared to each other and to competing tests by simulating levels of significance and powers, and recommendations are made of the ones that have good performance. The recommended tests are then demonstrated in an illustrative example.

4.
A new algorithm is stated for the evaluation of the maximum likelihood estimators of the two-parameter gamma density. This, along with other approximations, is used to evaluate by quadrature, moments of the estimators of the shape and scale parameters.
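The abstract does not reproduce the algorithm itself; as a reference point, the standard Newton iteration for the gamma likelihood equations (solve ln k - ψ(k) = ln x̄ - mean(ln x) for the shape k, then scale θ = x̄/k) can be sketched in pure Python. The digamma/trigamma approximations and sample data below are standard textbook material, not the paper's method.

```python
import math

def digamma(x):
    # recurrence psi(x) = psi(x+1) - 1/x, then asymptotic series for large x
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def trigamma(x):
    r = 0.0
    while x < 6.0:
        r += 1.0 / (x * x)
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    return r + inv * (1 + 0.5 * inv + inv2 * (1/6 - inv2 * (1/30 - inv2 / 42)))

def gamma_mle(x, iters=50):
    n = len(x)
    mx = sum(x) / n
    s = math.log(mx) - sum(math.log(v) for v in x) / n
    k = (3 - s + math.sqrt((s - 3) ** 2 + 24 * s)) / (12 * s)  # moment-based start
    for _ in range(iters):
        # Newton step on g(k) = ln k - psi(k) - s
        k -= (math.log(k) - digamma(k) - s) / (1.0 / k - trigamma(k))
    return k, mx / k  # shape, scale

x = [0.5, 1.2, 3.4, 0.8, 2.0, 1.1]
k, theta = gamma_mle(x)
```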

5.
Two approximation methods are used to obtain the Bayes estimate for the renewal function of an inverse Gaussian renewal process. Both approximations use a gamma-type conditional prior for the location parameter, a non-informative marginal prior for the shape parameter, and a squared error loss function. Simulations compare the accuracy of the estimators and indicate that the Tierney and Kadane (T–K)-based estimator outperforms the maximum likelihood (ML)- and Lindley (L)-based estimators. Computations for the T–K-based Bayes estimate employ the generalized Newton's method as well as a recent modified Newton's method with cubic convergence to maximize modified likelihood functions. The program is available from the author.
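The abstract mentions a modified Newton's method with cubic convergence but does not specify it; Halley's method is the classic cubic-order scheme and serves here purely as an illustration (not necessarily the method the paper uses). The objective below is a toy stand-in for a modified likelihood.

```python
# Halley's method (cubic convergence): x <- x - 2 f f' / (2 f'^2 - f f'').
def halley(f, df, d2f, x0, iters=20):
    x = x0
    for _ in range(iters):
        fx, dfx = f(x), df(x)
        x = x - 2 * fx * dfx / (2 * dfx * dfx - fx * d2f(x))
    return x

# maximize g(x) = log(x) - x by solving g'(x) = 1/x - 1 = 0 (maximizer x = 1)
root = halley(lambda x: 1 / x - 1,
              lambda x: -1 / x ** 2,
              lambda x: 2 / x ** 3,
              x0=0.5)
```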

6.
A general maximum likelihood approach for estimating the effects of treatments applied to samples subject to regression to the mean is outlined. Models may be specified in terms of three factors: whether the treatment effect is multiplicative or additive, whether the treatment group is above or below some truncation point and the type of sample involved. The way in which solutions may be obtained for all 16 models so defined is described.

7.
An important problem in statistics is the study of longitudinal data taking into account the effect of other explanatory variables such as treatments and time. In this paper, a new Bayesian approach for analysing longitudinal data is proposed. This innovative approach takes into account the possibility of having nonlinear regression structures on the mean and linear regression structures on the variance–covariance matrix of normal observations, and it is based on the modelling strategy suggested by Pourahmadi [M. Pourahmadi, Joint mean-covariance models with applications to longitudinal data: Unconstrained parameterisation, Biometrika 86 (1999), pp. 677–690]. We initially extend the classical methodology to accommodate the fitting of nonlinear mean models, then we propose our Bayesian approach based on a generalization of the Metropolis–Hastings algorithm of Cepeda [E.C. Cepeda, Variability modeling in generalized linear models, Unpublished Ph.D. Thesis, Mathematics Institute, Universidade Federal do Rio de Janeiro, 2001]. Finally, we illustrate the proposed methodology by analysing one example, the cattle data set, which is used to study cattle growth.
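The sampler underlying such Bayesian fitting is a Metropolis–Hastings scheme; a generic random-walk version on a one-dimensional stand-in posterior (a normal log-density with hypothetical mean 2, not Cepeda's generalization) can be sketched as follows.

```python
import math, random

# Random-walk Metropolis-Hastings: propose x + N(0, step), accept with
# probability min(1, exp(logpost(y) - logpost(x))).
def rw_metropolis(logpost, x0, n, step=1.0, seed=42):
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    draws = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        lpy = logpost(y)
        if math.log(rng.random()) < lpy - lp:
            x, lp = y, lpy
        draws.append(x)
    return draws

# stand-in posterior: N(2, 1), i.e. log density -0.5 (t - 2)^2 up to a constant
draws = rw_metropolis(lambda t: -0.5 * (t - 2.0) ** 2, x0=0.0, n=20000)
mean = sum(draws[5000:]) / len(draws[5000:])
```

Discarding an initial burn-in (here the first 5000 draws) before averaging is the usual practice, since the chain starts away from the posterior mode.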

8.
The cumulative exposure model (CEM) is a statistical model commonly used to analyze data from step-stress accelerated life testing, a special class of accelerated life testing (ALT). In practice, researchers conduct ALT to: (1) determine the effects of extreme levels of stress factors (e.g., temperature) on the life distribution, and (2) gain information on the parameters of the life distribution more rapidly than under normal operating (or environmental) conditions. In the literature, researchers assume that the CEM is from well-known distributions, such as the Weibull family. This study, on the other hand, considers a p-step-stress model with q stress factors from the two-parameter Birnbaum-Saunders distribution when there is a time constraint on the duration of the experiment. In this comparison paper, we consider different frameworks to numerically compute point estimates for the unknown parameters of the CEM using maximum likelihood theory. Each framework implements at least one optimization method; therefore, numerical examples and extensive Monte Carlo simulations are considered to compare and numerically examine the performance of the considered estimation frameworks.

9.
In this paper, we propose a hidden Markov model for the analysis of the time series of bivariate circular observations, by assuming that the data are sampled from bivariate circular densities, whose parameters are driven by the evolution of a latent Markov chain. The model segments the data by accounting for redundancies due to correlations along time and across variables. A computationally feasible expectation maximization (EM) algorithm is provided for the maximum likelihood estimation of the model from incomplete data, by treating the missing values and the states of the latent chain as two different sources of incomplete information. Importance-sampling methods facilitate the computation of bootstrap standard errors of the estimates. The methodology is illustrated on a bivariate time series of wind and wave directions and compared with popular segmentation models for bivariate circular data, which ignore correlations across variables and/or along time.
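The E-step of any such HMM fit rests on the forward recursion for the likelihood. A minimal scaled forward algorithm, with emission densities already evaluated as numbers (the toy initial distribution, transition matrix, and emission values below are invented, not the paper's bivariate circular densities):

```python
import math

def hmm_loglik(pi, A, em):
    """Scaled forward algorithm for a discrete hidden Markov chain.
    pi[k]: initial state probabilities; A[j][k]: transition probabilities;
    em[t][k]: density of observation t evaluated under state k."""
    K = len(pi)
    alpha = [pi[k] * em[0][k] for k in range(K)]
    ll = 0.0
    for t in range(1, len(em)):
        c = sum(alpha)            # scale to avoid underflow on long series
        ll += math.log(c)
        alpha = [a / c for a in alpha]
        alpha = [sum(alpha[j] * A[j][k] for j in range(K)) * em[t][k]
                 for k in range(K)]
    return ll + math.log(sum(alpha))

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.2, 0.8]]
em = [[0.9, 0.2], [0.1, 0.8], [0.5, 0.5]]
ll = hmm_loglik(pi, A, em)
```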

10.
The authors propose a reduction technique and versions of the EM algorithm and the vertex exchange method to perform constrained nonparametric maximum likelihood estimation of the cumulative distribution function given interval censored data. The constrained vertex exchange method can be used in practice to produce likelihood intervals for the cumulative distribution function. In particular, the authors show how to produce a confidence interval with known asymptotic coverage for the survival function given current status data.

11.
The maximum likelihood equations for a multivariate normal model with structured mean and structured covariance matrix may not have an explicit solution. In some cases the model's error term may be decomposed as the sum of two independent error terms, each having a patterned covariance matrix, such that if one of the unobservable error terms is artificially treated as "missing data", the EM algorithm can be used to compute the maximum likelihood estimates for the original problem. Some decompositions produce likelihood equations which do not have an explicit solution at each iteration of the EM algorithm, but within-iteration explicit solutions are shown for two general classes of models including covariance component models used for analysis of longitudinal data.

12.
The seemingly unrelated regression model is viewed in the context of repeated measures analysis. Regression parameters and the variance-covariance matrix of the seemingly unrelated regression model can be estimated by using two-stage Aitken estimation. The first stage is to obtain a consistent estimator of the variance-covariance matrix. The second stage uses this matrix to obtain the generalized least squares estimators of the regression parameters. The maximum likelihood (ML) estimators of the regression parameters can be obtained by performing the two-stage estimation iteratively. The iterative two-stage estimation procedure is shown to be equivalent to the EM algorithm (Dempster, Laird, and Rubin, 1977) proposed by Jennrich and Schluchter (1986) and Laird, Lange, and Stram (1987) for repeated measures data. The equivalence of the iterative two-stage estimator and the ML estimator has been previously demonstrated empirically in a Monte Carlo study by Kmenta and Gilbert (1968). It does not appear to be widely known that the two estimators are equivalent theoretically. This paper demonstrates this equivalence.
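The iterative two-stage Aitken procedure can be sketched for a two-equation system: stage one estimates the cross-equation residual covariance, stage two runs GLS on the stacked system, and the two stages are repeated. The data below are hypothetical and the code is only a sketch of the scheme the abstract describes.

```python
import numpy as np

def sur_fgls(X1, y1, X2, y2, iters=20):
    """Iterative two-stage Aitken estimation for a two-equation SUR system."""
    n = len(y1)
    b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]  # stage 0: equationwise OLS
    b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
    X = np.block([[X1, np.zeros((n, X2.shape[1]))],
                  [np.zeros((n, X1.shape[1])), X2]])
    y = np.concatenate([y1, y2])
    for _ in range(iters):
        E = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
        S = E.T @ E / n                           # stage 1: residual covariance
        W = np.kron(np.linalg.inv(S), np.eye(n))  # Omega^{-1} = S^{-1} (x) I_n
        b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # stage 2: GLS
        b1, b2 = b[:X1.shape[1]], b[X1.shape[1]:]
    return b1, b2, S

X = np.column_stack([np.ones(6), np.arange(1.0, 7.0)])
y1 = np.array([1.1, 3.0, 4.9, 7.2, 8.8, 11.1])
y2 = np.array([0.5, 0.9, 1.6, 2.1, 2.4, 3.2])
b1, b2, S = sur_fgls(X, y1, X, y2)
```

A useful sanity check: when both equations share the same regressor matrix, the GLS step reduces to equationwise OLS regardless of the residual covariance, so the iteration is stationary at the OLS solution.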

13.
Incomplete growth curve data often result from missing or mistimed observations in a repeated measures design. Virtually all methods of analysis rely on the dispersion matrix estimates. A Monte Carlo simulation was used to compare three methods of estimation of dispersion matrices for incomplete growth curve data. The three methods were: 1) maximum likelihood estimation with a smoothing algorithm, which finds the closest positive semidefinite estimate of the pairwise estimated dispersion matrix; 2) a mixed effects model using the EM (expectation maximization) algorithm; and 3) a mixed effects model with the scoring algorithm. The simulation included 5 dispersion structures, 20 or 40 subjects with 4 or 8 observations per subject and 10 or 30% missing data. In all the simulations, the smoothing algorithm was the poorest estimator of the dispersion matrix. In most cases, there were no significant differences between the scoring and EM algorithms. The EM algorithm tended to be better than the scoring algorithm when the variances of the random effects were close to zero, especially for the simulations with 4 observations per subject and two random effects.
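The "smoothing" step (finding the closest positive semidefinite matrix to a pairwise-estimated dispersion matrix) is commonly done by eigenvalue clipping in the Frobenius norm; the sketch below assumes that standard projection, and the indefinite input matrix is an invented example of what pairwise estimation can produce.

```python
import numpy as np

def nearest_psd(A):
    """Frobenius-norm projection of a symmetric matrix onto the PSD cone:
    symmetrize, then clip negative eigenvalues at zero."""
    S = (A + A.T) / 2.0
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T

# a pairwise-estimated "covariance" need not be PSD; this one is indefinite
A = np.array([[1.0, 0.9, -0.6],
              [0.9, 1.0, 0.9],
              [-0.6, 0.9, 1.0]])
P = nearest_psd(A)
```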

14.
In this work we present a simple estimation procedure for a general frailty model for analysis of prospective correlated failure times. Earlier work showed this method to perform well in a simulation study. Here we provide rigorous large-sample theory for the proposed estimators of both the regression coefficient vector and the dependence parameter, including consistent variance estimators.

15.
16.
17.
In this article, by using the constant and random selection matrices, several properties of the maximum likelihood (ML) estimates and the ML estimator of a normal distribution with missing data are derived. The constant selection matrix allows us to obtain an explicit form of the ML estimates and the exact relationship between the EM algorithm and the score function. The random selection matrix allows us to clarify how the missing-data mechanism works in the proof of the consistency of the ML estimator, to derive the asymptotic properties of the sequence by the EM algorithm, and to derive the information matrix.

18.
We obtain a generalization of Chebyshev's inequality for random elements taking values in a separable Hilbert space with estimated mean and covariance.
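The Hilbert-space form of Chebyshev's inequality bounds P(||X - mu|| >= t) by E||X - mu||^2 / t^2. A quick empirical check in the finite-dimensional case (X uniform on the cube [-1, 1]^3, so mu = 0 and E||X||^2 = 3 * Var(U) = 1; purely illustrative, not the paper's estimated-moment version):

```python
import math, random

# Monte Carlo check of P(||X|| >= t) <= E||X||^2 / t^2 for X ~ Unif([-1,1]^3).
rng = random.Random(0)
n, t = 20000, 1.2
exceed = 0
for _ in range(n):
    x = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    if math.sqrt(sum(v * v for v in x)) >= t:
        exceed += 1
freq = exceed / n
bound = 1.0 / (t * t)  # E||X||^2 = 1 for this distribution
```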

19.
We consider the use of Monte Carlo methods to obtain maximum likelihood estimates for random effects models and distinguish between the pointwise and functional approaches. We explore the relationship between the two approaches and compare them with the EM algorithm. The functional approach is more ambitious, but the approximation is local in nature, which we demonstrate graphically using two simple examples. A remedy is to obtain successively better approximations of the relative likelihood function near the true maximum likelihood estimate. To save computing time, we use only one Newton iteration to approximate the maximiser of each Monte Carlo likelihood and show that this is equivalent to the pointwise approach. The procedure is applied to fit a latent process model to a set of polio incidence data. The paper ends with a comparison between the marginal likelihood and the recently proposed hierarchical likelihood, which avoids integration altogether.

20.
Xing-De Duan, Statistics, 2016, 50(3): 525-539
This paper develops a Bayesian approach to obtain the joint estimates of unknown parameters, nonparametric functions and random effects in generalized partially linear mixed models (GPLMMs), and presents three case deletion influence measures to identify influential observations based on the φ-divergence, Cook's posterior mean distance and Cook's posterior mode distance of parameters. Fisher's iterative scoring algorithm is developed to evaluate the posterior modes of parameters in GPLMMs. The first-order approximation to Cook's posterior mode distance is presented. The computationally feasible formulae for the φ-divergence diagnostic and Cook's posterior mean distance are given. Several simulation studies and an example are presented to illustrate our proposed methodologies.
