Similar Documents
20 similar documents found (search time: 31 ms)
1.
When the possible values of a response variable are limited, distributional assumptions about random effects may not be checkable. This may favor a distribution-robust estimator, such as the conditional maximum likelihood estimator; that estimator, however, does not use all the information in the data. We show how, with binary matched pairs, the hierarchical likelihood can be used to recover information from concordant pairs, giving an improvement over the conditional maximum likelihood estimator without losing distribution-robustness.
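The conditional approach mentioned above has a closed form for binary matched pairs: conditioning on discordance, the conditional MLE of the log odds ratio depends only on the two discordant counts, which is exactly why the concordant pairs carry no information under it. A minimal sketch (function name and interface are illustrative):

```python
import math

def conditional_mle_log_odds_ratio(n10, n01):
    """Conditional MLE of the log odds ratio for binary matched pairs.

    n10: pairs with a success under condition 1 only
    n01: pairs with a success under condition 2 only
    Concordant pairs drop out of the conditional likelihood; that lost
    information is what the hierarchical likelihood tries to recover.
    """
    if n10 == 0 or n01 == 0:
        raise ValueError("conditional MLE undefined when a discordant count is zero")
    return math.log(n10 / n01)

# 20 vs 10 discordant pairs -> estimated log odds ratio log(2)
est = conditional_mle_log_odds_ratio(20, 10)
```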

2.
Stochastic ordering is a useful concept in order-restricted inference. In this paper, we propose a new technique for estimating the parameters of two multinomial populations under stochastic orderings when missing data are present. In comparison with the traditional maximum likelihood estimation method, our new method guarantees the uniqueness of the maximum of the likelihood function. Furthermore, unlike the EM algorithm, it does not depend on the choice of initial values for the parameters. Finally, we give the asymptotic distributions of the likelihood ratio statistics based on the new estimation method.

3.
This paper examines the formation of maximum likelihood estimates of cell means in analysis-of-variance problems for cells with missing observations. Methods of estimating the means for missing cells have a long history, including iterative maximum likelihood techniques, approximation techniques, and ad hoc techniques. The use of the EM algorithm to form maximum likelihood estimates has resolved most of the issues associated with this problem. Implementation of the EM algorithm entails specification of a reduced model. As demonstrated in this paper, when there are several missing cells, it is possible to specify a reduced model that results in an unidentifiable likelihood. The EM algorithm in this case does not converge, although the slow divergence may often be mistaken by the unwary for convergence. This paper presents a simple matrix method of determining whether or not the reduced model results in an identifiable likelihood, and consequently in an EM algorithm that converges. We also show that the EM algorithm in this case is equivalent to a method which yields a closed-form solution.
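The matrix check described can be sketched as a rank condition: the reduced-model likelihood is identifiable only if the design-matrix rows for the observed cells have full column rank. A sketch under that reading (the cell-means-style design matrix and function names are assumptions, not the paper's notation):

```python
def column_rank(rows, tol=1e-10):
    """Column rank of a matrix via Gaussian elimination (pure Python)."""
    m = [list(map(float, r)) for r in rows]
    n_cols = len(m[0]) if m else 0
    rank = 0
    pivot_row = 0
    for col in range(n_cols):
        # find a pivot for this column among the remaining rows
        pivot = next((r for r in range(pivot_row, len(m)) if abs(m[r][col]) > tol), None)
        if pivot is None:
            continue
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]
        for r in range(pivot_row + 1, len(m)):
            f = m[r][col] / m[pivot_row][col]
            for c in range(col, n_cols):
                m[r][c] -= f * m[pivot_row][c]
        pivot_row += 1
        rank += 1
    return rank

def reduced_model_identifiable(X_observed):
    """True iff the likelihood is identifiable: the design-matrix rows for
    the observed cells must have full column rank; otherwise the likelihood
    is flat along some direction and the EM iteration cannot converge."""
    return column_rank(X_observed) == len(X_observed[0])

ok = reduced_model_identifiable([[1.0, 0.0], [0.0, 1.0]])   # independent rows
bad = reduced_model_identifiable([[1.0, 2.0], [2.0, 4.0]])  # rank-deficient
```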

4.
An estimator is proposed for the parameter λ of the log-zero-Poisson distribution. While it is not a consistent estimator of λ in the usual statistical sense, it is shown to be quite close to the maximum likelihood estimates for many of the 35 data sets on which it is tried. Since obtaining maximum likelihood estimates is extremely difficult for this and other contagious distributions, this estimate can serve at least as an initial value when solving the likelihood equations iteratively. A lesson learned from this experience is that in the area of contagious distributions, variability is so large that attention should be focused directly on the mean squared error rather than on consistency or unbiasedness, whether for small samples or for the asymptotic case. Sample sizes for some of the data considered in the paper are in the hundreds. The fact that this inconsistent estimator is closer to the maximum likelihood estimator than the consistent moment estimator shows that the variability is large enough to prevent consistency from materializing even at the large sample sizes usually available in practice.

5.
The joint probability density function, evaluated at the observed data, is commonly used as the likelihood function to compute maximum likelihood estimates. For some models, however, there exist paths in the parameter space along which this density-approximation likelihood goes to infinity and maximum likelihood estimation breaks down. In all applications, however, observed data are really discrete due to the round-off or grouping error of measurements. The “correct likelihood” based on interval censoring can eliminate the problem of an unbounded likelihood. This article categorizes the models leading to unbounded likelihoods into three groups and illustrates the density-approximation breakdown with specific examples. Although it is usually possible to infer how given data were rounded, when this is not possible, one must choose the width for interval censoring, so we study the effect of the round-off on estimation. We also give sufficient conditions for the joint density to provide the same maximum likelihood estimate as the correct likelihood, as the round-off error goes to zero.
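The breakdown and its fix are visible in the classic two-component normal mixture example: the density at an observation grows without bound as one component's scale shrinks to zero, while the interval-censored likelihood contribution stays bounded by 1. A sketch under that example (the rounding width h is an assumed choice, not taken from the article):

```python
import math

def mixture_density(x, mu, sigma):
    """Density of 0.5*N(0,1) + 0.5*N(mu, sigma^2) at x."""
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return 0.5 * phi(x) + 0.5 * phi((x - mu) / sigma) / sigma

def mixture_interval_prob(x, mu, sigma, h=0.5):
    """Probability that the same mixture lands in the rounding interval
    [x - h/2, x + h/2]; bounded by 1 no matter how small sigma gets."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    comp = lambda m, s: Phi((x + h / 2 - m) / s) - Phi((x - h / 2 - m) / s)
    return 0.5 * comp(0.0, 1.0) + 0.5 * comp(mu, sigma)

# Center the second component at an observed value and shrink its scale:
dens_small = mixture_density(1.0, 1.0, 1e-6)        # diverges as sigma -> 0
prob_small = mixture_interval_prob(1.0, 1.0, 1e-6)  # stays <= 1
```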

6.
Till Massing, Statistics, 2019, 53(4): 721–752
There is considerable interest in parameter estimation in Lévy models. The maximum likelihood estimator is widely used because, under certain conditions, it enjoys asymptotic efficiency properties. For Lévy processes, the key tool is local asymptotic normality, which guarantees these conditions. Although the likelihood function is not known explicitly, we prove local asymptotic normality for the location and scale parameters of the Student-Lévy process assuming high-frequency data. In addition, we propose a numerical method, based on the Monte Carlo expectation-maximization algorithm, that makes maximum likelihood estimation feasible. A simulation study verifies the theoretical results.

7.
It is known that the maximum likelihood method does not provide explicit estimators for the mean and standard deviation of the normal distribution based on Type II censored samples. In this paper we present a simple method of deriving explicit estimators by approximating the likelihood equations appropriately. We obtain the variances and covariance of these estimators. We also show that these estimators are almost as efficient as the maximum likelihood (ML) estimators and just as efficient as the best linear unbiased (BLU) and modified maximum likelihood (MML) estimators. Finally, we illustrate this method of estimation by applying it to Gupta's and Darwin's data.

8.
The aim of this paper is to compare the parameter estimates of the Marshall–Olkin extended Lindley distribution obtained by six estimation methods: maximum likelihood, ordinary least squares, weighted least squares, maximum product of spacings, Cramér–von Mises, and Anderson–Darling. The bias, the root mean squared error, and the average and maximum absolute differences between the true and estimated distribution functions are used as comparison criteria. Although the maximum product of spacings method is not widely used, the simulation study concludes that it is highly competitive with the maximum likelihood method.
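For readers unfamiliar with it, maximum product of spacings chooses the parameter value that maximizes the product of CDF increments between consecutive order statistics. A sketch for the one-parameter exponential case rather than the Marshall–Olkin extended Lindley (same pattern, simpler CDF; the grid search stands in for a proper optimizer):

```python
import math

def mps_exponential_rate(data, grid):
    """Maximum-product-of-spacings estimate of an exponential rate.

    Maximizes sum(log(F(x_(i)) - F(x_(i-1)))) over the candidate rates
    in `grid`, with F(x_(0)) = 0 and F(x_(n+1)) = 1 by convention.
    """
    xs = sorted(data)

    def log_spacings(lam):
        cdf = [0.0] + [1.0 - math.exp(-lam * x) for x in xs] + [1.0]
        total = 0.0
        for lo, hi in zip(cdf, cdf[1:]):
            if hi - lo <= 0.0:
                return -math.inf  # tied observations kill the product
            total += math.log(hi - lo)
        return total

    return max(grid, key=log_spacings)

rate = mps_exponential_rate([0.5, 1.0, 1.5, 2.0], [k / 10.0 for k in range(1, 51)])
```

For these four points the continuous MPS optimum is about 0.67, so the 0.1-spaced grid lands on 0.7.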

9.
A methodology is presented for gaining insight into properties — such as outlier influence, bias, and width of confidence intervals — of maximum likelihood estimates from nonidentically distributed Gaussian data. The methodology is based on an application of the implicit function theorem to derive an approximation to the maximum likelihood estimator. This approximation, unlike the maximum likelihood estimator, is expressed in closed form and thus it can be used in lieu of costly Monte Carlo simulation to study the properties of the maximum likelihood estimator.

10.
Logistic regression is used by practitioners and researchers in many fields, but is undoubtedly used most frequently in medical and biostatistical applications. Maximum likelihood is generally the estimation method of choice, but we show that maximum likelihood can produce very poor results under certain conditions. Specifically, the poor performance of maximum likelihood in the case of rare events is known and we review research on this topic. We primarily examine the performance of maximum likelihood in the presence of near separation, which has apparently not been studied. Exact logistic regression is the logical alternative to maximum likelihood. We offer a comparison of the two methods of estimation.
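Separation is easy to demonstrate: when a covariate perfectly splits the outcomes, the log-likelihood keeps rising as the slope grows, so no finite maximum likelihood estimate exists. A minimal sketch with an intercept-free model (the data are illustrative):

```python
import math

def logistic_loglik(beta, data):
    """Log-likelihood of the no-intercept model P(y=1 | x) = 1/(1+exp(-beta*x))."""
    ll = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-beta * x))
        ll += math.log(p) if y == 1 else math.log(1.0 - p)
    return ll

# Completely separated data: y = 1 exactly when x > 0
separated = [(-2, 0), (-1, 0), (1, 1), (2, 1)]

ll_1 = logistic_loglik(1.0, separated)
ll_5 = logistic_loglik(5.0, separated)
ll_10 = logistic_loglik(10.0, separated)
# The likelihood is monotone in beta: the "MLE" runs off to infinity.
```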

11.
This paper considers the three-parameter exponentiated Weibull family under type II censoring. It first graphically illustrates the shape property of the hazard function. Then, it proposes a simple algorithm for computing the maximum likelihood estimator and derives the Fisher information matrix. The latter is represented through a single integral in terms of the hazard function; hence it solves the problem of computational difficulty in constructing inferences for the maximum likelihood estimator. Real data analysis is conducted to illustrate the effect of the censoring rate on the maximum likelihood estimation.
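The shape flexibility of the exponentiated Weibull hazard can be checked numerically. A sketch using the parameterization F(t) = [1 − exp(−(λt)^β)]^θ (a rate λ stands in here for the paper's scale parameter; this is an assumption about notation, not the paper's):

```python
import math

def ew_hazard(t, theta, beta, lam):
    """Hazard of the exponentiated Weibull, F(t) = [1 - exp(-(lam*t)**beta)]**theta.

    pdf(t) = theta*beta*lam*(lam*t)**(beta-1) * u * (1-u)**(theta-1), u = exp(-(lam*t)**beta)
    hazard(t) = pdf(t) / (1 - F(t))
    """
    u = math.exp(-((lam * t) ** beta))
    F = (1.0 - u) ** theta
    pdf = theta * beta * lam * (lam * t) ** (beta - 1) * u * (1.0 - u) ** (theta - 1)
    return pdf / (1.0 - F)

# theta = beta = 1 reduces to the exponential: constant hazard lam
h_const = ew_hazard(2.0, 1.0, 1.0, 1.0)
# theta = 1, beta = 2 reduces to a Weibull with increasing hazard 2*lam**2*t
h_weibull = ew_hazard(1.0, 1.0, 2.0, 1.0)
```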

12.
It is common practice to compare the fit of non‐nested models using the Akaike (AIC) or Bayesian (BIC) information criteria. The basis of these criteria is the log‐likelihood evaluated at the maximum likelihood estimates of the unknown parameters. For the general linear model (and the linear mixed model, which is a special case), estimation is usually carried out using residual or restricted maximum likelihood (REML). However, for models with different fixed effects, the residual likelihoods are not comparable and hence information criteria based on the residual likelihood cannot be used. For model selection, it is often suggested that the models are refitted using maximum likelihood to enable the criteria to be used. The first aim of this paper is to highlight that both the AIC and BIC can be used for the general linear model by using the full log‐likelihood evaluated at the REML estimates. The second aim is to provide a derivation of the criteria under REML estimation. This aim is achieved by noting that the full likelihood can be decomposed into a marginal (residual) and conditional likelihood and this decomposition then incorporates aspects of both the fixed effects and variance parameters. Using this decomposition, the appropriate information criteria for model selection of models which differ in their fixed effects specification can be derived. An example is presented to illustrate the results and code is available for analyses using the ASReml‐R package.
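Once the full log-likelihood is in hand (evaluated at the REML estimates, as the paper advocates), the criteria themselves are the standard formulas. A trivial sketch (the numeric inputs are illustrative):

```python
import math

def aic_bic(full_loglik, n_params, n_obs):
    """AIC and BIC from a full log-likelihood value, the total number of
    parameters (fixed effects plus variance parameters), and sample size."""
    aic = -2.0 * full_loglik + 2.0 * n_params
    bic = -2.0 * full_loglik + n_params * math.log(n_obs)
    return aic, bic

aic, bic = aic_bic(full_loglik=-100.0, n_params=3, n_obs=50)
```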

13.
The truncated Cauchy distribution with four unknown parameters is considered, and the derivation and existence of the maximum likelihood estimates are investigated. We provide a sufficient condition for the maximum likelihood estimate of the scale parameter to be finite, and also show that the condition is necessary for sufficiently large samples. Note that all moments of the truncated Cauchy distribution exist, which makes it much more attractive as a model than the regular Cauchy. We also study, using simulations, the small-sample properties of the maximum likelihood estimates.
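A sketch of the likelihood being maximized, profiling only the location parameter with the scale and the two truncation endpoints held fixed (the grid search and parameter values are illustrative, not the paper's procedure):

```python
import math

def truncated_cauchy_loglik(mu, sigma, data, a, b):
    """Log-likelihood of a Cauchy(mu, sigma) truncated to [a, b].

    The truncated density is the Cauchy pdf divided by the probability
    mass the untruncated Cauchy assigns to [a, b].
    """
    mass = (math.atan((b - mu) / sigma) - math.atan((a - mu) / sigma)) / math.pi
    ll = 0.0
    for x in data:
        pdf = 1.0 / (math.pi * sigma * (1.0 + ((x - mu) / sigma) ** 2))
        ll += math.log(pdf / mass)
    return ll

data = [-1.0, 0.0, 1.0]
grid = [k / 10.0 for k in range(-20, 21)]
mu_hat = max(grid, key=lambda m: truncated_cauchy_loglik(m, 1.0, data, -5.0, 5.0))
# Symmetric data on a symmetric truncation window: the profile MLE sits at 0.
```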

14.
In survival data analysis, the interval censoring problem has generally been treated via likelihood methods. Because this likelihood is complex, it is often assumed that the censoring mechanisms do not affect the mortality process. The authors specify conditions that ensure the validity of such a simplified likelihood. They prove the equivalence between different characterizations of noninformative censoring and define a constant‐sum condition analogous to the one derived in the context of right censoring. They also prove that when the noninformative or constant‐sum condition holds, the simplified likelihood can be used to obtain the nonparametric maximum likelihood estimator of the death time distribution function.

15.
One approach to handling incomplete data occasionally encountered in the literature is to treat the missing data as parameters and to maximize the complete-data likelihood over the missing data and parameters. This article points out that although this approach can be useful in particular problems, it is not a generally reliable approach to the analysis of incomplete data. In particular, it does not share the optimal properties of maximum likelihood estimation, except under the trivial asymptotics in which the proportion of missing data goes to zero as the sample size increases.
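The failure is already visible in the normal model: maximizing the complete-data likelihood over the missing values fills each of them in at the mean, so the variance estimate is shrunk by the factor n/(n + m) and stays biased whenever the missing fraction does not vanish. A sketch (a toy illustration, not the article's example):

```python
def fill_in_vs_observed_mle(observed, n_missing):
    """Variance estimates for N(mu, sigma^2) data with n_missing values missing.

    Maximizing the joint likelihood over the missing values sets each of
    them equal to the mean, so the 'fill-in' variance divides the same sum
    of squares by n + m instead of n: biased downward by n/(n + m).
    """
    n = len(observed)
    mean = sum(observed) / n
    ss = sum((y - mean) ** 2 for y in observed)
    sigma2_obs = ss / n              # observed-data MLE
    sigma2_fill = ss / (n + n_missing)  # missing-data-as-parameters estimate
    return sigma2_obs, sigma2_fill

sigma2_obs, sigma2_fill = fill_in_vs_observed_mle([1.0, 2.0, 3.0, 4.0], 4)
```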

16.
Evidence presented by Fomby and Guilkey (1983) suggests that Hatanaka's estimator of the coefficients in the lagged dependent variable-serial correlation regression model performs poorly, not because of poor selection of the estimate of the autocorrelation coefficient, but because of the lack of a first observation correction. This study conducts a Monte Carlo investigation of the small-sample efficiency gains obtainable from a first observation correction suggested by Harvey (1981). Results presented here indicate that substantial gains result from the first observation correction. However, in comparing Hatanaka's procedure with first observation correction to maximum likelihood search, it appears that ignoring the determinantal term of the full likelihood function causes some loss of small-sample efficiency. Thus, when computer costs and programming constraints are not binding, maximum likelihood search is to be recommended. In contrast, users who have access to only rudimentary least squares programs would be well served by Hatanaka's two-step procedure with the first observation correction.

17.
The paper considers a rectangular array asymptotic embedding for multistratum data sets, in which both the number of strata and the number of within-stratum replications increase, and at the same rate. It is shown that under this embedding the maximum likelihood estimator is consistent but not efficient owing to a non-zero mean in its asymptotic normal distribution. By using a projection operator on the score function, an adjusted maximum likelihood estimator can be obtained that is asymptotically unbiased and has a variance that attains the Cramér–Rao lower bound. The adjusted maximum likelihood estimator can be viewed as an approximation to the conditional maximum likelihood estimator.

18.
We study adaptive maximum-likelihood-type estimation for an ergodic diffusion process whose observations are contaminated by noise. This methodology yields asymptotically independent estimators of the variance of the observation noise and of the diffusion and drift parameters of the latent diffusion process. Moreover, it lessens the computational burden compared with simultaneous maximum-likelihood-type estimation. In addition to adaptive estimation, we propose a test for the presence of observation noise and, as an example, analyze real data in which the observation noise is statistically significant.

19.
In finite mixtures of location–scale distributions, if there is no constraint or penalty on the parameters, then the maximum likelihood estimator does not exist because the likelihood is unbounded. To avoid this problem, we consider a penalized likelihood, where the penalty is a function of the minimum of the ratios of the scale parameters and the sample size. It is shown that the penalized maximum likelihood estimator is strongly consistent. We also analyse the consistency of a penalized maximum likelihood estimator where the penalty is imposed on the scale parameters themselves.

20.
We address the issue of performing inference on the parameters that index the modified extended Weibull (MEW) distribution. We show that numerical maximization of the MEW log-likelihood function can be problematic. It is even possible to encounter maximum likelihood estimates that are not finite, that is, it is possible to encounter monotonic likelihood functions. We consider different penalization schemes to improve maximum likelihood point estimation. A penalization scheme based on the Jeffreys’ invariant prior is shown to be particularly useful. Simulation results on point estimation, interval estimation, and hypothesis testing inference are presented. Two empirical applications are presented and discussed.
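The effect of a Jeffreys-prior penalty is easiest to see in a one-parameter toy model rather than the MEW itself: for an exponential rate λ, the Fisher information is n/λ², so the penalty 0.5·log det I(λ) adds −log λ (up to a constant) to the log-likelihood and shifts the maximizer from n/Σx to (n − 1)/Σx. A sketch under that toy model (the MEW case replaces these closed forms with numerical maximization):

```python
def exponential_mle_and_jeffreys(data):
    """Plain MLE and Jeffreys-penalized MLE of an exponential rate lam.

    log-lik:  n*log(lam) - lam*sum(x)            -> lam_hat = n / sum(x)
    adding 0.5*log I(lam) = const - log(lam)     -> lam_hat = (n - 1) / sum(x)
    The penalty pulls the estimate toward zero, tempering the overshoot
    of the plain MLE in small samples.
    """
    n, s = len(data), sum(data)
    return n / s, (n - 1) / s

mle, penalized = exponential_mle_and_jeffreys([1.0, 1.0, 1.0, 1.0])
```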


Copyright©北京勤云科技发展有限公司  京ICP备09084417号