Similar Documents
 Found 20 similar documents (search time: 125 ms).
1.
The estimation of the parameters of the log-normal distribution based on complete and censored samples has been considered in the literature. In this article, the problem of estimating the parameters of a log-normal mixture model is considered. The Expectation-Maximization (EM) algorithm is used to obtain maximum likelihood estimates of the parameters, since the likelihood equations do not yield closed-form expressions. The standard errors of the estimates are obtained, and confidence intervals based on large-sample theory are constructed. The methodology developed here is then illustrated through simulation studies.
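Because the logarithm of a log-normal variable is normal, the EM recursion for such a mixture can be run on the log scale. The sketch below is a minimal illustration of the E- and M-steps, assuming hypothetical two-component parameters and simulated data; it does not reproduce the paper's standard-error or confidence-interval calculations.

```python
# Minimal EM sketch for a two-component log-normal mixture (hypothetical values).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
z = rng.random(n) < 0.4                             # latent component labels
x = np.where(z, rng.lognormal(0.0, 0.5, n), rng.lognormal(2.0, 0.8, n))
y = np.log(x)                                       # work on the log scale

w  = np.array([0.5, 0.5])                           # mixing weights
mu = np.array([y.min(), y.max()])                   # crude starting values
sd = np.array([1.0, 1.0])
for _ in range(500):
    dens = w * norm.pdf(y[:, None], mu, sd)         # E-step: component densities
    resp = dens / dens.sum(axis=1, keepdims=True)   # posterior membership probabilities
    nk = resp.sum(axis=0)                           # M-step: weighted updates
    mu = (resp * y[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (y[:, None] - mu) ** 2).sum(axis=0) / nk)
    w = nk / n

print("weights:", w.round(3), "mu:", mu.round(3), "sigma:", sd.round(3))
```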

2.
Interval-censored data arise in a wide variety of application and research areas such as, for example, AIDS studies (Kim et al., 1993) and cancer research (Finkelstein, 1986; Becker & Melbye, 1991). Peto (1973) proposed a Newton–Raphson algorithm for obtaining a generalized maximum likelihood estimate (GMLE) of the survival function with interval-censored observations. Turnbull (1976) proposed a self-consistent algorithm for interval-censored data and obtained the same GMLE. Groeneboom & Wellner (1992) used the convex minorant algorithm for constructing an estimator of the survival function with "case 2" interval-censored data. However, as is known, the GMLE is not uniquely defined on the interval [0, ∞]. In addition, Turnbull's algorithm leads to a self-consistent equation which is not in the form of an integral equation. Large-sample properties of the GMLE have not been previously examined because of, we believe, among other things, the lack of such an integral equation. In this paper, we present an EM algorithm for constructing a GMLE on [0, ∞]. The GMLE is expressed as a solution of an integral equation. More recently, with the help of this integral equation, Yu et al. (1997a, b) have shown that the GMLE is consistent and asymptotically normally distributed. An application of the proposed GMLE is presented.
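For orientation, the self-consistency iteration of Turnbull (1976), which is an EM algorithm, can be sketched as below on hypothetical censoring intervals. The crude choice of candidate mass points glosses over the construction of the Turnbull intervals, and this is not the authors' estimator on [0, ∞].

```python
# Self-consistency (EM) iteration for interval-censored data, in the spirit of
# Turnbull (1976); hypothetical censoring intervals (L, R], with np.inf marking
# right censoring. Candidate mass points are taken crudely as the finite endpoints.
import numpy as np

L = np.array([0.0, 1.0, 2.0, 0.0, 3.0, 1.5])
R = np.array([2.0, 3.0, np.inf, 1.0, 5.0, 4.0])

support = np.unique(np.concatenate([L, R[np.isfinite(R)]]))   # candidate mass points
A = (support[None, :] > L[:, None]) & (support[None, :] <= R[:, None])

p = np.full(support.size, 1.0 / support.size)    # initial probability masses
for _ in range(1000):
    num = A * p                                  # E-step: distribute each observation
    frac = num / num.sum(axis=1, keepdims=True)  #   over the points its interval contains
    p_new = frac.mean(axis=0)                    # M-step: average the fractions
    if np.max(np.abs(p_new - p)) < 1e-10:
        break
    p = p_new

surv = 1.0 - np.cumsum(p)                        # estimated survival at the mass points
print(dict(zip(support.tolist(), surv.round(3).tolist())))
```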

3.
The cumulative exposure model (CEM) is a statistical model commonly used to analyze data from step-stress accelerated life testing, a special class of accelerated life testing (ALT). In practice, researchers conduct ALT to (1) determine the effects of extreme levels of stress factors (e.g., temperature) on the life distribution, and (2) gain information on the parameters of the life distribution more rapidly than under normal operating (or environmental) conditions. In the literature, the CEM is usually assumed to come from a well-known family of distributions, such as the Weibull family. This study, on the other hand, considers a p-step-stress model with q stress factors from the two-parameter Birnbaum-Saunders distribution when there is a time constraint on the duration of the experiment. In this comparison paper, we consider different frameworks for numerically computing point estimates of the unknown parameters of the CEM using maximum likelihood theory. Each framework implements at least one optimization method; therefore, numerical examples and extensive Monte Carlo simulations are used to compare and numerically examine the performance of the considered estimation frameworks.
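As a minimal illustration of what one such "framework" does, the sketch below numerically maximizes a two-parameter Birnbaum-Saunders log-likelihood (scipy's fatiguelife distribution) with a general-purpose optimizer on hypothetical, uncensored data; the step-stress structure, censoring, and time constraint of the actual model are not reproduced.

```python
# Numerical ML for a two-parameter Birnbaum-Saunders sample using a
# general-purpose optimiser (hypothetical, uncensored data).
import numpy as np
from scipy.stats import fatiguelife          # Birnbaum-Saunders distribution
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = fatiguelife.rvs(0.5, scale=2.0, size=200, random_state=rng)   # alpha=0.5, beta=2

def negloglik(par):
    alpha, beta = np.exp(par)                # optimise on the log scale to keep both > 0
    return -np.sum(fatiguelife.logpdf(x, alpha, scale=beta))

res = minimize(negloglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("alpha_hat, beta_hat:", np.exp(res.x).round(3))
```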

4.
5.
It is well known that the nonparametric maximum likelihood estimator (NPMLE) may severely under-estimate the survival function with left-truncated data. Based on the Nelson estimator (for right-censored data) and self-consistency, we suggest a nonparametric estimator of the survival function, the iterative Nelson estimator (INE), for arbitrarily truncated and censored data, where only a few nonparametric estimators are available. By simulation we show that the INE does well in overcoming the under-estimation of the survival function by the NPMLE for left-truncated and interval-censored data. An interesting application of the INE is as a diagnostic tool for other estimators, such as the monotone MLE or parametric MLEs. The methodology is illustrated by application to two real-world problems: the Channing House and the Massachusetts Health Care Panel Study data sets.
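The building block the INE iterates on is the Nelson(-Aalen) estimator for right-censored data, sketched below on hypothetical observations; the truncation and interval-censoring machinery of the INE itself is not reproduced here.

```python
# Nelson(-Aalen) cumulative-hazard estimator for right-censored data and the
# implied survival curve exp(-H(t)) (hypothetical observations, no ties).
import numpy as np

time  = np.array([2.0, 3.0, 4.0, 5.0, 7.0, 8.0, 9.0])   # observed times
event = np.array([1,   1,   0,   1,   0,   1,   1])      # 1 = event, 0 = censored

order = np.argsort(time)
time, event = time[order], event[order]

H = 0.0
for i, (t, d) in enumerate(zip(time, event)):
    at_risk = len(time) - i           # subjects still under observation just before t
    H += d / at_risk                  # Nelson-Aalen increment
    print(f"t = {t:.0f}  H(t) = {H:.3f}  S(t) = exp(-H) = {np.exp(-H):.3f}")
```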

6.
Studies on maturation and body composition mention age at peak height velocity (PHV) as an important measure that could predict adulthood outcomes. The age at PHV is often derived from growth models, such as the triple logistic, fitted to stature (height) data. Theoretically, for a well-behaved growth function, the age at PHV can be obtained by setting the second derivative of the growth function to zero and solving for age. Such a solution obviously depends on the parameters of the growth function. Therefore, the uncertainty in the estimate of the age at PHV resulting from the uncertainty in the estimated growth model needs to be accounted for in models in which it is used as a predictor. Explicit expressions for the age at PHV, and consequently for the variance of its estimate, do not exist for some commonly used nonlinear growth functions, such as the triple logistic function. Once an estimate of this variance is obtained, it can be incorporated in subsequent modeling either through measurement error models or by using the inverse variances as weights. A numerical method for estimating the variance is implemented. The accuracy of this method is demonstrated through comparisons in models where an explicit solution for the variance exists. The method is illustrated by applying it to growth data from the Fels study, and the estimated variances are subsequently used as weights in modeling two adulthood outcomes from the same study.
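A minimal sketch of the two numerical steps described above, assuming a single-logistic growth curve h(t) = a / (1 + exp(-b(t - c))) rather than the triple logistic: the age at PHV is found as the root of the second derivative (analytically t = c here, which provides the accuracy check), and its variance is approximated by the delta method with a hypothetical parameter covariance matrix Sigma.

```python
# Age at PHV as the root of h''(t) = 0 for a single-logistic curve (exact answer
# t = c), plus a delta-method variance using a hypothetical covariance matrix.
import numpy as np
from scipy.optimize import brentq

def age_at_phv(theta, lo=5.0, hi=20.0):
    a, b, c = theta
    h = lambda t: a / (1.0 + np.exp(-b * (t - c)))
    d2h = lambda t, e=1e-4: (h(t + e) - 2.0 * h(t) + h(t - e)) / e**2
    return brentq(d2h, lo, hi)                       # numerical root of h''

theta_hat = np.array([170.0, 0.9, 13.2])             # assumed parameter estimates
Sigma = np.diag([4.0, 0.01, 0.09])                   # assumed covariance of theta_hat

phv = age_at_phv(theta_hat)
grad = np.zeros(3)
for j in range(3):                                    # finite-difference gradient
    step = np.zeros(3)
    step[j] = 1e-4 * max(abs(theta_hat[j]), 1.0)
    grad[j] = (age_at_phv(theta_hat + step) - age_at_phv(theta_hat - step)) / (2 * step[j])

var_phv = grad @ Sigma @ grad                         # delta-method variance
print(f"age at PHV = {phv:.2f} (exact c = 13.2), approx variance = {var_phv:.3f}")
```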

7.
The systematic sampling (SYS) design (Madow and Madow, 1944) is widely used by statistical offices due to its simplicity and efficiency (e.g., Iachan, 1982). But it suffers from a serious defect: it is impossible to unbiasedly estimate the sampling variance (Iachan, 1982), and the usual variance estimators (Yates and Grundy, 1953) are inadequate and can overestimate the variance significantly (Särndal et al., 1992, Ch. 3). We propose a novel variance estimator which is less biased and can be implemented with any given population order. We justify this estimator theoretically and with a Monte Carlo simulation study.
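The overestimation by the usual estimator can be seen in a small Monte Carlo sketch on a hypothetical ordered population: for 1-in-k systematic sampling, the exact design variance of the sample mean is obtained by enumerating the k possible samples and compared with the average of the naive SRS-type estimator. The proposed estimator of the paper is not implemented here.

```python
# 1-in-k systematic sampling from a hypothetical ordered population: exact design
# variance of the sample mean (enumerating the k possible samples) vs. the average
# of the usual SRS-type estimator s^2/n * (1 - n/N).
import numpy as np

N, k = 1000, 10
n = N // k
rng = np.random.default_rng(1)
y = np.linspace(0, 100, N) + rng.normal(0, 5, N)   # population ordered by a trend

means, naive = [], []
for start in range(k):                  # the k equally likely systematic samples
    s = y[start::k]
    means.append(s.mean())
    naive.append(s.var(ddof=1) / n * (1 - n / N))

print(f"exact design variance of the mean: {np.var(means):.3f}")
print(f"average naive (SRS-type) estimate: {np.mean(naive):.3f}")
```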

8.
This article is concerned with likelihood-based inference for vector autoregressive models with multivariate scaled t-distributed innovations, using the EM-based ECM and ECME algorithms. The ECM and ECME algorithms, which are analytically quite simple to use, are applied to find the maximum likelihood estimates of the model parameters and are then compared, in terms of computational running time and estimation accuracy, via a simulation study. The results demonstrate that ECME is efficient and usable in practice. We also show how the method can be applied to a multivariate dataset.
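The E-step device that ECM/ECME exploit for scaled-t errors can be illustrated in isolation: with the degrees of freedom nu held fixed, each observation receives the weight (nu + p)/(nu + delta_i), where delta_i is its Mahalanobis distance, and the M-step is a weighted mean/covariance update. The sketch below fits a multivariate t location and scatter on hypothetical data, not the full VAR model of the paper.

```python
# EM for multivariate-t location and scatter with fixed degrees of freedom nu
# (hypothetical data); illustrates the scaled-t E-step weights only.
import numpy as np

rng = np.random.default_rng(11)
nu, p, n = 5.0, 2, 1000
Sigma_true = np.array([[1.0, 0.5], [0.5, 2.0]])
g = rng.chisquare(nu, size=n) / nu                     # mixing variables
Z = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
x = np.array([1.0, -1.0]) + Z / np.sqrt(g)[:, None]    # multivariate t sample

mu = x.mean(axis=0)                                    # starting values
Sigma = np.cov(x, rowvar=False)
for _ in range(200):
    diff = x - mu
    delta = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(Sigma), diff)  # Mahalanobis
    w = (nu + p) / (nu + delta)                        # E-step: downweight outliers
    mu = (w[:, None] * x).sum(axis=0) / w.sum()        # M-step: weighted mean
    diff = x - mu
    Sigma = (w[:, None] * diff).T @ diff / n           # M-step: weighted scatter

print("mu_hat:", mu.round(3))
print("Sigma_hat:\n", Sigma.round(3))
```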

9.
It is shown that the classical Wicksell problem is related to a deconvolution problem where the convolution kernel is unbounded, convex and decreasing on (0, ∞). For this type of deconvolution problem, the usual non-parametric maximum likelihood estimator of the distribution function is shown not to exist. A sieved maximum likelihood estimator is defined, and some algorithms are described that can be used to compute this estimator. Moreover, this estimator is proved to be strongly consistent.

10.
Hidden semi-Markov models (HSMMs) were introduced to overcome the constraint of a geometric sojourn-time distribution for the different hidden states in classical hidden Markov models. Several variations of HSMMs have been proposed that model the sojourn times by a parametric or a nonparametric family of distributions. In this article, we concentrate on the nonparametric case where the duration distributions are attached to transitions, and not to states as in most of the published papers on HSMMs. It is therefore worth noting that we treat the underlying hidden semi-Markov chain in its general probabilistic structure. For that case, Barbu and Limnios (2008) proposed an Expectation–Maximization (EM) algorithm to estimate the semi-Markov kernel and the emission probabilities that characterize the dynamics of the model. In this article, we consider an improved version of Barbu and Limnios' EM algorithm which is faster than the original one. Moreover, we propose a stochastic version of the EM algorithm that achieves estimates comparable with the EM algorithm in less execution time. Some numerical examples are provided which illustrate the efficient performance of the proposed algorithms.

11.
Maximum likelihood estimation of a mean and a covariance matrix whose structure is constrained only to general positive semi-definiteness is treated in this paper. Necessary and sufficient conditions for the local optimality of mean and covariance matrix estimates are given. Observations are assumed to be independent. When the observations are also assumed to be identically distributed, the optimality conditions are used to obtain the mean and covariance matrix solutions in closed form. For the nonidentically distributed observation case, a general numerical technique which integrates scoring and Newton's iterations to solve the optimality condition equations is presented, and convergence performance is examined.
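The i.i.d. closed-form case mentioned above is simply the sample mean and the 1/n-normalised sample covariance matrix, which is positive semi-definite by construction; the sketch below verifies this on hypothetical simulated data (the scoring/Newton machinery for the non-identically distributed case is not shown).

```python
# Closed-form i.i.d. MLE: sample mean and the (1/n)-normalised sample covariance,
# which is positive semi-definite by construction (hypothetical simulated data).
import numpy as np

rng = np.random.default_rng(42)
X = rng.multivariate_normal(mean=[1.0, -2.0], cov=[[2.0, 0.6], [0.6, 1.0]], size=500)

mu_hat = X.mean(axis=0)                          # MLE of the mean
centred = X - mu_hat
Sigma_hat = centred.T @ centred / X.shape[0]     # MLE of the covariance (divide by n)

print("mu_hat   :", mu_hat.round(3))
print("Sigma_hat:\n", Sigma_hat.round(3))
print("all eigenvalues >= 0:", bool(np.all(np.linalg.eigvalsh(Sigma_hat) >= -1e-12)))
```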

12.
A simple computational method for estimating parameters via a type of EM algorithm is proposed for restricted latent class analysis, where equality and constant constraints are considered. These constraints create difficulties in estimation. In order to estimate parameters in restricted latent class analysis simply and stably, a simple computational method using only first-order differentials is proposed, in which the step-halving method is adopted. A simulation study shows that in almost all cases the new method gives parameter sequences that monotonically increase the Q-function of the EM algorithm. An analysis of real data is provided.
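Step-halving is a generic safeguard: a proposed first-order update is shrunk by factors of one half until it no longer decreases the objective, which keeps the sequence of objective values (the Q-function in this setting) non-decreasing. The sketch below illustrates the device on a hypothetical concave objective, not on the restricted latent class model itself.

```python
# Step-halving on a generic first-order update for a hypothetical concave
# objective; the update is shrunk until the objective does not decrease.
import numpy as np

def objective(theta):                        # stand-in for a log-likelihood/Q-function
    return -np.sum((theta - np.array([0.3, 0.7])) ** 2)

def first_order_direction(theta, eps=1e-6):  # numerical first derivatives only
    g = np.zeros_like(theta)
    for j in range(theta.size):
        e = np.zeros_like(theta)
        e[j] = eps
        g[j] = (objective(theta + e) - objective(theta - e)) / (2 * eps)
    return g

theta = np.array([5.0, -5.0])
for _ in range(50):
    direction = first_order_direction(theta)
    step = 1.0
    while objective(theta + step * direction) <= objective(theta) and step > 1e-12:
        step *= 0.5                          # halve until the objective improves
    theta = theta + step * direction

print("theta:", theta.round(4), "objective:", round(objective(theta), 8))
```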

13.
Quasi-likelihood nonlinear models (QLNMs) are a further extension of generalized linear models in which only the expectation and variance functions of the response variable are specified. In this article, some mild regularity conditions are proposed. These regularity conditions ensure, respectively, the existence, strong consistency, and asymptotic normality of the maximum quasi-likelihood estimator (MQLE) in QLNMs.

14.
Under a generalized linear model for a binary variable, an approximate bias of the maximum likelihood estimator of the coefficient, which is a special case of the linear parameter in Cordeiro and McCullagh (1991), is derived without calculating the third-order derivative of the log-likelihood function. Using the obtained approximate bias, a bias-corrected maximum likelihood estimator is defined. Through a simulation study, we show that the bias-corrected maximum likelihood estimator and its variance estimator perform better than the maximum likelihood estimator and its variance estimator.

15.
In the context of the univariate Gaussian mixture with grouped data, it is shown that the global maximum of the likelihood may correspond to a situation where a Dirac lies in any non-empty interval. The existence of a domain of attraction near such a maximizer is discussed, and we establish that the expectation-maximization (EM) iterates move extremely slowly inside this domain. These theoretical results are illustrated both by Monte Carlo experiments and by a real data set. To help practitioners identify and discard these potentially dangerous degenerate maximizers, a specific stopping rule for EM is proposed.

16.
The Lomax (Pareto II) distribution has found wide application in a variety of fields. We analyze the second-order bias of the maximum likelihood estimators of its parameters for finite sample sizes, and show that this bias is positive. We derive an analytic bias correction which reduces the percentage bias of these estimators by one or two orders of magnitude, while simultaneously reducing relative mean squared error. Our simulations show that this performance is very similar to that of a parametric bootstrap correction based on a linear bias function. Three examples with actual data illustrate the application of our bias correction.
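The parametric bootstrap correction that the analytic correction is compared against can be sketched generically as below, using scipy's lomax (Pareto II) distribution with hypothetical true parameters; the paper's analytic second-order correction is not reproduced.

```python
# Parametric-bootstrap bias correction for the Lomax (Pareto II) MLE
# (hypothetical true parameters; loc fixed at 0).
import numpy as np
from scipy.stats import lomax

rng = np.random.default_rng(7)
c_true, scale_true, n, B = 3.0, 2.0, 50, 200

x = lomax.rvs(c_true, scale=scale_true, size=n, random_state=rng)
c_hat, _, scale_hat = lomax.fit(x, floc=0)            # MLE with location fixed at 0

boot = np.empty((B, 2))
for b in range(B):                                    # refit on parametric resamples
    xb = lomax.rvs(c_hat, scale=scale_hat, size=n, random_state=rng)
    cb, _, sb = lomax.fit(xb, floc=0)
    boot[b] = cb, sb

bias = boot.mean(axis=0) - np.array([c_hat, scale_hat])
corrected = np.array([c_hat, scale_hat]) - bias       # bias-corrected estimates
print("MLE      :", np.round([c_hat, scale_hat], 3))
print("corrected:", np.round(corrected, 3))
```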

17.
Lehmann (1983) discussed several examples of absurd uniform minimum variance unbiased (UMVU) estimators. He argued that these estimators arose because the amount of information available was inadequate for the estimation problem at hand. Here I argue that such absurd UMVU estimators result more from the property of unbiasedness than from inadequate information.

18.
An EM algorithm (Dempster et al., 1977) is derived for the estimation of parameters of the truncated bivariate Poisson distribution with zeros missing from both margins. The observed information matrix is obtained and a numerical example is given where the convergence of the EM algorithm is accelerated by the methods of Louis (1982) and conjugate gradients (Jamshidian and Jennrich, 1993).

19.
This article is concerned with parameter estimation in the linear regression model when it is suspected that the regression coefficients lie in the subspace defined by a set of equality restrictions. The objective of this article is to introduce preliminary test almost unbiased Liu estimators (PTAULE) based on the Wald (W), likelihood ratio (LR), and Lagrangian multiplier (LM) tests, and to compare the proposed estimators in terms of quadratic bias and the mean squared error (MSE) criterion.

20.
Recently, various studies have used the Poisson Pseudo-Maximum Likelihood (PML) estimator to estimate gravity specifications of trade flows and, more generally, non-count data models. Some papers also report results based on the Negative Binomial Quasi-Generalised Pseudo-Maximum Likelihood (NB QGPML) estimator, which encompasses the Poisson assumption as a special case. This note shows that the NB QGPML estimators that have been used so far are unappealing when applied to a continuous dependent variable whose unit of measurement is arbitrary, because the estimates artificially depend on that choice of unit. A new NB QGPML estimator is introduced to overcome this shortcoming.
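For reference, the Poisson PML estimator itself only requires the conditional mean exp(x'β) to be correctly specified, so it can be applied to a continuous, positive dependent variable. The sketch below fits it by iteratively reweighted least squares (Newton steps on the pseudo-score) to hypothetical "trade flow" data; the NB QGPML estimators discussed in the note are not implemented.

```python
# Poisson pseudo-maximum likelihood (PML) fitted by Newton/IRLS iterations to
# hypothetical continuous, positive "trade flow" data.
import numpy as np

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5, -0.8])
y = np.exp(X @ beta_true) * rng.lognormal(0.0, 0.5, n)   # continuous dependent variable

beta = np.linalg.lstsq(X, np.log(y), rcond=None)[0]      # log-linear OLS start
for _ in range(100):
    mu = np.exp(X @ beta)
    score = X.T @ (y - mu)                               # Poisson pseudo-score
    hess = X.T @ (mu[:, None] * X)                       # expected information
    step = np.linalg.solve(hess, score)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

# slopes are consistent for beta_true; the intercept absorbs the mean of the
# multiplicative error term
print("beta_hat:", beta.round(3))
```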
