Similar Documents
20 similar documents found (search time: 31 ms)
1.
《Econometric Reviews》2013,32(2):203-215
Abstract

Recent results in information theory, see Soofi (1996; 2001) for a review, include derivations of optimal information processing rules, including Bayes' theorem, for learning from data based on minimizing a criterion functional, namely output information minus input information, as shown in Zellner (1988; 1991; 1997; 2002). Herein, solution post-data densities for parameters are obtained and studied for cases in which the input information is that in (1) a likelihood function and a prior density; (2) only a likelihood function; and (3) neither a prior nor a likelihood function but only input information in the form of post-data moments of parameters, as in the Bayesian method of moments approach. It is then shown how optimal output densities can be employed to obtain predictive densities and optimal, finite-sample structural coefficient estimates using three alternative loss functions. Such optimal estimates are compared with usual estimates, e.g., maximum likelihood, two-stage least squares, ordinary least squares, etc. Some Monte Carlo experimental results in the literature are discussed and implications for the future are provided.
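As a minimal sketch of the information-processing view described above, Bayes' theorem combines the two inputs (prior and likelihood) into an output post-data density; the grid, prior, and toy data below are invented for illustration:

```python
import numpy as np

# Bayes' theorem as an information-processing rule: the output (post-data)
# density is the normalised product of the two inputs, prior and likelihood.
# A minimal grid sketch; the prior, data, and noise level are invented here.
theta = np.linspace(-5, 5, 2001)                  # parameter grid
dtheta = theta[1] - theta[0]
prior = np.exp(-0.5 * theta**2)                   # N(0, 1) prior, unnormalised
data = np.array([1.2, 0.7, 1.9])                  # toy observations, sigma = 1
loglik = -0.5 * ((data[:, None] - theta[None, :])**2).sum(axis=0)
post = prior * np.exp(loglik - loglik.max())      # combine input information
post /= post.sum() * dtheta                       # normalise: output density
print("post-data mean:", (theta * post).sum() * dtheta)
```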

2.
A Bayesian least squares approach is taken here to estimate certain parameters in generalized linear models for dichotomous response data. The method requires that only the first and second moments of the probability distribution representing prior information be specified. Examples are presented to illustrate situations having direct estimates as well as those which require approximate or iterative solutions.

3.
Birdal Şenoğlu 《Journal of applied statistics》2005,32(10):1051-1066
It is well known that the least squares method is optimal only if the error distributions are normal. In practice, however, non-normal distributions are more prevalent, and if the error terms have a non-normal distribution, the efficiency of least squares estimates and tests is very low. In this paper, we consider the 2^k factorial design when the error terms have a Weibull W(p,σ) distribution. Using the methodology of modified likelihood, we develop robust and efficient estimators for the parameters of the 2^k factorial design. F statistics based on modified maximum likelihood estimators (MMLE) for testing the main effects and interactions are defined. They are shown to have high power and better robustness properties than the normal-theory solutions. A real data set is analysed.

4.
This paper presents a methodology based on transforming estimation methods into optimization problems in order to incorporate, in a natural way, constraints that contain extra information not considered by standard estimation methods, with the aim of improving the quality of the parameter estimates. We include here three types of such information: bounds for the cumulative distribution function, bounds for the quantiles, and any restrictions on the parameters such as those imposed by the support of the random variable under consideration. The method is quite general and can be applied to many estimation methods, such as maximum likelihood (ML), the method of moments (MOM), least squares, least absolute values, and minimax. The performance of the resulting estimates is investigated for several families of distributions, for ML and MOM, using simulations. The simulation results show that for small sample sizes important gains can be achieved relative to the case where the above information is ignored. In addition, we discuss sensitivity analysis methods for assessing the influence of observations on the proposed estimators. The method applies to both univariate and multivariate data.
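A hedged sketch of the idea of folding extra information into estimation as optimization constraints: here a normal model is fitted by ML subject to an assumed upper bound on its 0.95 quantile. The model, bound, and data are illustrative, not from the paper:

```python
import numpy as np
from scipy import stats, optimize

# Sketch: maximum likelihood with extra quantile information. We fit a normal
# model but require the 0.95 quantile to stay below an assumed known bound;
# all names and numbers below are illustrative.
rng = np.random.default_rng(0)
x = rng.normal(10.0, 2.0, size=40)
Q95_BOUND = 14.0                       # assumed prior knowledge: q_0.95 <= 14

def negloglik(p):
    mu, sigma = p
    return -stats.norm.logpdf(x, mu, sigma).sum()

cons = [{"type": "ineq",               # q_0.95 = mu + z_0.95 * sigma <= bound
         "fun": lambda p: Q95_BOUND - (p[0] + stats.norm.ppf(0.95) * p[1])}]
res = optimize.minimize(negloglik, x0=[x.mean(), x.std()], method="SLSQP",
                        bounds=[(None, None), (1e-6, None)], constraints=cons)
print("constrained MLE (mu, sigma):", res.x)
```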

5.
In this paper, we study some mathematical properties of the beta Weibull (BW) distribution, which is a quite flexible model for analysing positive data. It contains the Weibull, exponentiated exponential, exponentiated Weibull and beta exponential distributions as special sub-models. We demonstrate that the BW density can be expressed as a mixture of Weibull densities. We provide its moments and two closed-form expressions for its moment-generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are derived for the mean deviations, Bonferroni and Lorenz curves, reliability and two entropies. The density of the BW order statistics is a mixture of Weibull densities, and two closed-form expressions are derived for their moments. The estimation of the parameters is approached by two methods: moments and maximum likelihood. We compare the performance of the estimates obtained from both methods by simulation. The expected information matrix is derived. For the first time, we introduce a log-BW regression model to analyse censored data. The usefulness of the BW distribution is illustrated in the analysis of three real data sets.
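The BW density itself is the Weibull pdf and cdf plugged into the beta generator; a short sketch (parameter values are illustrative only):

```python
import numpy as np
from scipy import stats
from scipy.special import beta as beta_fn

# Sketch of the beta Weibull (BW) density: baseline Weibull pdf g and cdf G in
# the beta generator, f(x) = g(x) * G(x)**(a-1) * (1-G(x))**(b-1) / B(a, b).
# Parameter values below are illustrative.
def bw_pdf(x, a, b, shape, scale):
    g = stats.weibull_min.pdf(x, shape, scale=scale)
    G = stats.weibull_min.cdf(x, shape, scale=scale)
    return g * G**(a - 1) * (1 - G)**(b - 1) / beta_fn(a, b)

x = np.linspace(0.01, 5, 500)
f = bw_pdf(x, a=2.0, b=1.5, shape=1.7, scale=1.0)
print("integrates to ~1:", np.sum(f) * (x[1] - x[0]))
```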

6.
We conducted confirmatory factor analysis (CFA) of responses (N = 803) to a self-reported measure of optimism, using full-information estimation via adaptive quadrature (AQ), an alternative estimation method for ordinal data. We evaluated AQ results in terms of the number of iterations required to achieve convergence, model fit, parameter estimates, standard errors (SE), and statistical significance, across four link functions (logit, probit, log-log, complementary log-log) using 3–10 and 20 quadrature points. We compared AQ results with those obtained using maximum likelihood, robust maximum likelihood, and robust diagonally weighted least-squares estimation. Compared to the other two link functions, logit and probit not only produced fit statistics, parameter estimates, SEs, and levels of significance that varied less across numbers of quadrature points, but also fitted the data better and provided larger completely standardised loadings than did maximum likelihood and diagonally weighted least-squares. Our findings demonstrate the viability of using full-information AQ to estimate CFA models with real-world ordinal data.

7.
Abstract

Statistical distributions are very useful in describing and predicting real-world phenomena. In many applied areas there is a clear need for extended forms of the well-known distributions. Generally, the new distributions are more flexible for modeling real data that present a high degree of skewness and kurtosis. The choice of the best-suited statistical distribution for modeling data is very important.

In this article, we propose an extended generalized Gompertz (EGGo) family of distributions. Certain statistical properties of the EGGo family, including distribution shapes, hazard function, skewness, limit behavior, moments and order statistics, are discussed. The flexibility of this family is assessed by its application to real data sets and comparison with other competing distributions. The maximum likelihood equations for estimating the parameters from real data are given. The performances of estimators such as the maximum likelihood, least squares, weighted least squares, Cramér–von Mises, Anderson–Darling and right-tailed Anderson–Darling estimators are discussed. The likelihood ratio test is derived to assess whether the EGGo distribution fits a data set better than other nested models. We use the R software for simulation, in order to perform applications and test the validity of this model.
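As a sketch of one of the estimators compared above, the ordinary least squares estimator minimises the squared distance between the model cdf at the order statistics and the plotting positions i/(n+1); a Weibull cdf stands in for the EGGo cdf, which is not reproduced in this abstract:

```python
import numpy as np
from scipy import stats, optimize

# Sketch of the (ordinary) least-squares estimator: minimise the squared
# distance between the model cdf at the order statistics and the plotting
# positions i/(n+1). A Weibull cdf is a stand-in for the EGGo cdf here.
rng = np.random.default_rng(1)
x = np.sort(rng.weibull(1.5, size=100) * 2.0)
pp = np.arange(1, len(x) + 1) / (len(x) + 1)

def lse_objective(p):
    shape, scale = p
    return np.sum((stats.weibull_min.cdf(x, shape, scale=scale) - pp)**2)

res = optimize.minimize(lse_objective, x0=[1.0, 1.0],
                        bounds=[(1e-3, None), (1e-3, None)])
print("LSE (shape, scale):", res.x)
```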

8.
We define a parametric proportional odds frailty model to describe lifetime data incorporating heterogeneity between individuals. An unobserved individual random effect, called frailty, acts multiplicatively on the odds of failure by time t. We investigate fitting by maximum likelihood and by least squares. For the latter, the parametric survivor function is fitted to the nonparametric Kaplan–Meier estimate at the observed failure times. Bootstrap standard errors and confidence intervals are obtained for the least squares estimates. The models are applied successfully to simulated data and to two real data sets. Least squares estimates appear to have smaller bias than maximum likelihood.
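A sketch of the least-squares step: compute the Kaplan–Meier estimate and fit a parametric survivor function to it at the observed failure times. The log-logistic survivor used here is one standard proportional-odds form; data and starting values are invented:

```python
import numpy as np
from scipy import optimize

# Sketch: Kaplan-Meier estimate (distinct event times), then least-squares fit
# of a log-logistic survivor S(t) = 1 / (1 + (t/a)**b), a standard
# proportional-odds form, at the observed failure times. Data are simulated.
def kaplan_meier(time, event):
    order = np.argsort(time)
    t, d = time[order], event[order]
    at_risk = len(t) - np.arange(len(t))          # n, n-1, ..., 1
    surv = np.cumprod(1.0 - d / at_risk)
    return t[d == 1], surv[d == 1]                # KM at failure times only

rng = np.random.default_rng(2)
t = rng.weibull(1.3, 200) * 5.0
c = rng.uniform(0, 10, 200)                        # random right-censoring
time, event = np.minimum(t, c), (t <= c).astype(float)

tf, km = kaplan_meier(time, event)
loglogistic = lambda p, t: 1.0 / (1.0 + (t / p[0])**p[1])
res = optimize.least_squares(lambda p: loglogistic(p, tf) - km, x0=[3.0, 1.0],
                             bounds=([1e-3, 1e-3], [np.inf, np.inf]))
print("least-squares (scale, shape):", res.x)
```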

9.
Abstract

In this article we consider the problem of fitting a five-parameter generalization of the lambda distribution to data given in the form of a grouped frequency table. The estimation of the parameters is done by six different procedures: percentiles, moments, probability-weighted moments, minimum Cramér–von Mises, maximum likelihood, and pseudo least squares. These methods are evaluated and compared using a Monte Carlo study in which the parent populations were generalized lambda distribution (GLD) approximations of normal, beta, and gamma random variables, for nine combinations of sample sizes and numbers of classes. Of the estimators analyzed it is concluded that, although the method of pseudo least squares suffers from a number of limitations, it appears to be the candidate procedure for estimating the parameters of a GLD from grouped data.

10.
In this article, we propose a new three-parameter probability distribution, called the Topp–Leone normal, for modelling increasing failure rate data. The distribution is obtained by using the Topp–Leone-X family of distributions with the normal as a baseline model. Basic properties including moments, the quantile function, stochastic ordering and order statistics are derived. The estimation of the unknown parameters is approached by the methods of maximum likelihood, least squares, weighted least squares and maximum product spacings. An extensive simulation study is carried out to compare the long-run performance of the estimators. Applicability of the distribution is illustrated by means of three real data analyses, in comparison with existing distributions.
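The maximum product spacings method mentioned above maximises the mean log spacing of consecutive model-cdf values at the order statistics; a sketch with a plain normal model standing in for the Topp–Leone normal, whose cdf is not reproduced here:

```python
import numpy as np
from scipy import stats, optimize

# Sketch of maximum product spacings (MPS): maximise the mean log spacing of
# consecutive model-cdf values at the order statistics, with 0 and 1 appended.
# A plain normal model is a stand-in; data and start values are invented.
rng = np.random.default_rng(3)
x = np.sort(rng.normal(5.0, 1.5, size=80))

def neg_mps(p):
    mu, sigma = p
    u = stats.norm.cdf(x, mu, sigma)
    spacings = np.diff(np.concatenate(([0.0], u, [1.0])))
    return -np.mean(np.log(np.clip(spacings, 1e-12, None)))

res = optimize.minimize(neg_mps, x0=[x.mean(), x.std()],
                        bounds=[(None, None), (1e-3, None)])
print("MPS (mu, sigma):", res.x)
```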

11.
Using the ‘grouping vector’ notion and employing a Dirichlet prior on the unknown mixing parameters, viz. the unknown mixing proportions, the Bayes estimates of the mixing proportions in finite mixtures of known distributions are obtained. These estimates are based on the optimal grouping of the sample data. An algorithm is proposed to obtain the optimal grouping of the sample observations when the component densities belong to the family of densities possessing the monotone likelihood ratio property. A numerical study is carried out for the case of mixtures of two normal densities.
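Once the sample is reduced to counts per group, the Dirichlet prior is conjugate to the multinomial counts; a minimal sketch of that step (the optimal-grouping algorithm itself is not reproduced, and the counts are invented):

```python
import numpy as np

# Sketch of the conjugate step: a Dirichlet(alpha) prior on the mixing
# proportions combines with multinomial group counts to give a Dirichlet
# posterior whose mean is the Bayes estimate. Counts here are illustrative.
alpha = np.array([1.0, 1.0])       # Dirichlet prior, two-component mixture
counts = np.array([37, 63])        # observations assigned to each group
posterior = alpha + counts
bayes_estimate = posterior / posterior.sum()   # posterior mean of proportions
print("Bayes estimate of mixing proportions:", bayes_estimate)
```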

12.
In this paper, we consider the problem of estimation of semi-linear regression models. Using invariance arguments, Bhowmik and King [2007. Maximal invariant likelihood based testing of semi-linear models. Statist. Papers 48, 357–383] derived the probability density function of the maximal invariant statistic for the non-linear component of these models. Using this density function as a likelihood function allows us to estimate these models in a two-step process. First the non-linear component parameters are estimated by maximising the maximal invariant likelihood function. Then the non-linear component, with the parameter values replaced by estimates, is treated as a regressor and ordinary least squares is used to estimate the remaining parameters. We report the results of a simulation study conducted to compare the accuracy of this approach with full maximum likelihood and maximum profile-marginal likelihood estimation. We find maximising the maximal invariant likelihood function typically results in less biased and lower variance estimates than those from full maximum likelihood.

13.
Generalized method of moments (GMM) estimation has become an important unifying framework for inference in econometrics in the last 20 years. It can be thought of as encompassing almost all of the common estimation methods, such as maximum likelihood, ordinary least squares, instrumental variables, and two-stage least squares, and nowadays is an important part of all advanced econometrics textbooks. The GMM approach links nicely to economic theory where orthogonality conditions that can serve as such moment functions often arise from optimizing behavior of agents. Much work has been done on these methods since the seminal article by Hansen, and much remains in progress. This article discusses some of the developments since Hansen's original work. In particular, it focuses on some of the recent work on empirical likelihood–type estimators, which circumvent the need for a first step in which the optimal weight matrix is estimated and have attractive information theoretic interpretations.
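A sketch of linear GMM with instruments, showing the one-step (2SLS-weighted) and two-step (efficiently weighted) estimators on simulated data:

```python
import numpy as np

# Sketch of linear GMM with instruments Z and moments E[Z'(y - X b)] = 0.
# Step 1 uses W = (Z'Z)^{-1}, which reproduces 2SLS; step 2 re-weights with
# the inverse of the estimated moment covariance. Data are simulated.
rng = np.random.default_rng(4)
n = 500
z = rng.normal(size=(n, 2))                                   # instruments
u = rng.normal(size=n)
x = z @ np.array([1.0, -0.5]) + 0.8 * u + rng.normal(size=n)  # endogenous x
y = 2.0 * x + u
X, Z = x[:, None], z

def gmm(W):
    A = X.T @ Z @ W @ Z.T @ X
    return np.linalg.solve(A, X.T @ Z @ W @ Z.T @ y)

b1 = gmm(np.linalg.inv(Z.T @ Z))             # step 1: 2SLS weight matrix
g = Z * (y - X @ b1)[:, None]                # estimated moment contributions
b2 = gmm(np.linalg.inv(g.T @ g / n))         # step 2: efficient weighting
print("2SLS:", b1, "two-step GMM:", b2)
```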

14.
A simple least squares method for estimating a change in mean of a sequence of independent random variables is studied. The method first tests for a change in mean based on the regression principle of constrained and unconstrained sums of squares. Conditionally on a decision by this test that a change has occurred, least squares estimates are used to estimate the change point, the initial mean level (prior to the change point) and the change itself. The estimates of the initial level and change are functions of the change point estimate. All estimates are shown to be consistent, and those for the initial level and change are shown to be asymptotically jointly normal. The method performs well for moderately large shifts (one standard deviation or more), but the estimates of the initial level and change are biased in a predictable way for small shifts. The large sample theory is helpful in understanding this problem. The asymptotic distribution of the change point estimator is obtained for local shifts in mean, but the case of non-local shifts appears analytically intractable.
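A sketch of the least-squares change-point estimate: for each candidate split, sum the within-segment sums of squares and take the minimiser; the shift size and location below are invented:

```python
import numpy as np

# Sketch of the least-squares change-point estimate: for each candidate split
# k, compute the residual sum of squares with separate means before and after
# the split, and take the minimising k. Simulated one-sd shift at t = 60.
rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.0, 1.0, 40)])
n = len(x)

def rss(segment):
    return np.sum((segment - segment.mean())**2)

k_hat = min(range(2, n - 1), key=lambda k: rss(x[:k]) + rss(x[k:]))
mu0_hat = x[:k_hat].mean()                 # initial level (before the change)
delta_hat = x[k_hat:].mean() - mu0_hat     # estimated change
print("change point:", k_hat, "initial level:", mu0_hat, "shift:", delta_hat)
```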

15.
The generalised least squares, maximum likelihood, Bain–Antle 1 and 2, and two mixed methods of estimating the parameters of the two-parameter Weibull distribution are compared. The comparison is made using (a) the observed relative efficiency of parameter estimates and (b) the mean squared relative error in estimated quantiles, to summarize the results of 1000 simulated samples of sizes 10 and 25. The results are that: (i) generalised least squares is the best method of estimating the shape parameter β; (ii) the best method of estimating the scale parameter α depends on the size of β; (iii) for quantile estimation, maximum likelihood is best; and (iv) Bain–Antle 2 is uniformly the worst of the methods.

16.
For the first time, we propose a new distribution, the so-called beta generalized Rayleigh distribution, that contains as special sub-models some well-known distributions. Expansions for the cumulative distribution and density functions are derived. We obtain explicit expressions for the moments, moment generating function, mean deviations, Bonferroni and Lorenz curves, and the densities of the order statistics and their moments. We estimate the parameters by maximum likelihood and provide the observed information matrix. The usefulness of the new distribution is illustrated through two real data sets, which show that it is quite flexible in analyzing positive data, as an alternative to the generalized Rayleigh and Rayleigh distributions.

17.
The problem of comparing, contrasting and combining information from different sets of data is an enduring one in many practical applications of statistics. A specific problem of combining information from different sources arose in integrating information from three different sets of data generated by three different sampling campaigns at the input stage as well as at the output stage of a grey-water treatment process. For each stage, a common process trend function needs to be estimated to describe the input and output material process behaviours. Once the common input and output process models are established, it is required to estimate the efficiency of the grey-water treatment method. A synthesized tool for modelling different sets of process data is created by assembling and organizing a number of existing techniques: (i) a mixed model of fixed and random effects, extended to allow for a nonlinear fixed effect, (ii) variogram modelling, a geostatistical technique, (iii) a weighted least squares regression embedded in an iterative maximum-likelihood technique to handle linear/nonlinear fixed and random effects and (iv) a formulation of a transfer-function model for the input and output processes together with a corresponding nonlinear maximum-likelihood method for estimation of a transfer function. The synthesized tool is demonstrated, in a new case study, to contrast and combine information from connected process models and to determine the change in one quality characteristic, namely pH, of the input and output materials of a grey-water filtering process.

18.
Abstract.  In forestry the problem of estimating areas is central. This paper addresses area estimation through fitting of a polygon to observed coordinate data. Coordinates of corners and points along the sides of a simple closed polygon are measured with independent random errors. This paper focuses on procedures to adjust the coordinates for estimation of the polygon and its area. Different new techniques that consider different amounts of prior information are described and compared. The different techniques use restricted least squares, maximum likelihood and the expectation maximization algorithm. In a simulation study it is shown that the root mean square errors of the estimates are decreased when coordinates are adjusted before estimation. Minor further improvement is achieved by using prior information about the order and the distribution of the points along the sides of the polygon. This paper has its origin in forestry but there are also other applications.
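Once coordinates are adjusted, the polygon area follows from the shoelace formula; in this sketch, averaging repeated noisy surveys is a crude stand-in for the restricted least-squares adjustment in the paper:

```python
import numpy as np

# Sketch: polygon area via the shoelace formula from (adjusted) corner
# coordinates. Averaging repeated noisy measurements per corner is a crude
# stand-in for the restricted least-squares adjustment; data are invented.
def shoelace_area(xy):
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

corners = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 6.0], [0.0, 6.0]])
rng = np.random.default_rng(6)
noisy = corners[None] + rng.normal(0, 0.2, size=(5, 4, 2))  # 5 noisy surveys
adjusted = noisy.mean(axis=0)                               # simple adjustment
print("true area:", shoelace_area(corners),
      "estimated:", shoelace_area(adjusted))
```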

19.
As an applicable and flexible lifetime model, the two-parameter generalized half-normal (GHN) distribution has received wide attention in the fields of reliability analysis and lifetime study. In this paper, maximum likelihood estimates of the model parameters are discussed, and we also propose corresponding bias-corrected estimates. Unweighted and weighted least squares estimates for the parameters of the GHN distribution are also presented for comparison purposes. Moreover, the likelihood ratio test is provided as a complement. A simulation study and illustrative examples are provided to compare the performance of the proposed methods.
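The paper's analytic bias corrections for the GHN model are not reproduced here; as a generic stand-in, a bootstrap bias correction of an MLE looks like this (a half-normal scale parameter is used for simplicity):

```python
import numpy as np

# Sketch of a bootstrap bias correction for an MLE, a generic stand-in for the
# analytic bias-corrected estimates discussed above (GHN-specific corrections
# are not reproduced). Example: MLE of the half-normal scale, sigma^2 = E[x^2].
rng = np.random.default_rng(7)
x = np.abs(rng.normal(0.0, 2.0, size=30))
theta_hat = np.sqrt(np.mean(x**2))          # MLE of sigma for half-normal

boot = np.array([np.sqrt(np.mean(rng.choice(x, size=x.size)**2))
                 for _ in range(2000)])
theta_bc = 2.0 * theta_hat - boot.mean()    # bias-corrected: 2*est - E*[est*]
print("MLE:", theta_hat, "bias-corrected:", theta_bc)
```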

20.
Kinetic models are used extensively in science, engineering, and medicine. Mathematically, they are a set of coupled differential equations including a source function, otherwise known as an input function. We investigate whether parametric modeling of a noisy input function offers any benefit over the non-parametric input function in estimating kinetic parameters. Our analysis includes four formulations of Bayesian posteriors of model parameters in which noise is taken into account in the likelihood functions. Posteriors are determined numerically with a Markov chain Monte Carlo simulation. We compare point estimates derived from the posteriors to a weighted non-linear least squares estimate. Results imply that parametric modeling of the input function does not improve the accuracy of the model parameters, even with perfect knowledge of the functional form. Posteriors are validated using an unconventional utilization of the χ²-test. We demonstrate that if the noise in the input function is not taken into account, the resulting posteriors are incorrect.
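A sketch of the weighted non-linear least-squares benchmark, using a one-compartment model with an assumed exponential input function; the model form, parameters, and noise level are illustrative, not those of the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of weighted non-linear least squares for a one-compartment kinetic
# model. The tissue curve is the analytic convolution of an assumed input
# function Cp(t) = exp(-t) with the kernel K1 * exp(-k2 * t); all values are
# illustrative. (k2 = 1 is a removable singularity, avoided here.)
def model(t, K1, k2):
    return K1 * (np.exp(-t) - np.exp(-k2 * t)) / (k2 - 1.0)

t = np.linspace(0.1, 10, 40)
rng = np.random.default_rng(8)
sigma = 0.02 * np.ones_like(t)                 # assumed measurement sd
y = model(t, 0.5, 0.3) + rng.normal(0, sigma)

popt, pcov = curve_fit(model, t, y, p0=[0.3, 0.5],
                       sigma=sigma, absolute_sigma=True)
print("WNLS estimates (K1, k2):", popt, "+/-", np.sqrt(np.diag(pcov)))
```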
