相似文献 (Similar Articles)
 20 similar articles found.
1.
2.
It is well known that under fairly general conditions linear regression is a powerful statistical tool. In practice, however, some of these conditions are usually not satisfied and regression models become ill-posed, implying that the application of traditional estimation methods may lead to non-unique or highly unstable solutions. Addressing this issue, this paper introduces a new class of maximum entropy estimators suitable for dealing with ill-posed models, namely for the estimation of regression models with small sample sizes affected by collinearity and outliers. The performance of the new estimators is illustrated through several simulation studies.

3.
A compendium to information theory in economics and econometrics

4.
We address the problem of optimally forecasting a binary variable for a heterogeneous group of decision makers facing various (binary) decision problems that are tied together only by the unknown outcome. A typical example is a weather forecaster who needs to estimate the probability of rain tomorrow and then report it to the public. Given a conditional probability model for the outcome of interest (e.g., logit or probit), we introduce the idea of maximum welfare estimation and derive conditions under which traditional estimators, such as maximum likelihood or (nonlinear) least squares, are asymptotically socially optimal even when the underlying model is misspecified.

5.
The maximum likelihood estimates (MLEs) of parameters of a bivariate normal distribution are derived based on progressively Type-II censored data. The asymptotic variances and covariances of the MLEs are derived from the Fisher information matrix. Using the asymptotic normality of MLEs and the asymptotic variances and covariances derived from the Fisher information matrix, interval estimation of the parameters is discussed and the probability coverages of the 90% and 95% confidence intervals for all the parameters are then evaluated by means of Monte Carlo simulations. To improve the probability coverages of the confidence intervals, especially for the correlation coefficient, sample-based Monte Carlo percentage points are determined and the probability coverages of the 90% and 95% confidence intervals obtained using these percentage points are evaluated and shown to be quite satisfactory. Finally, an illustrative example is presented.
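The Monte Carlo evaluation of confidence-interval coverage described in this abstract can be illustrated with a generic sketch (Python, using synthetic normal data and a t-interval for the mean rather than the paper's bivariate-normal MLEs; all names and settings are illustrative):

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(2)
n, mu, sigma, reps = 30, 5.0, 2.0, 2000

covered = 0
for _ in range(reps):
    x = rng.normal(mu, sigma, n)
    # 95% t-interval for the mean: xbar +/- t_{0.975, n-1} * s / sqrt(n)
    half = t.ppf(0.975, n - 1) * x.std(ddof=1) / np.sqrt(n)
    covered += (x.mean() - half <= mu <= x.mean() + half)

# Empirical coverage probability; should be close to the nominal 0.95
coverage = covered / reps
```

The same loop structure applies to any interval procedure: simulate, construct the interval, and count how often it contains the true parameter.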

6.
The choice of smoothing determines the properties of nonparametric estimates of probability densities. In the discrimination problem, the choice is often tied to loss functions. A framework for the cross-validatory choice of smoothing parameters based on general loss functions is given. Several loss functions are considered as special cases. In particular, a family of loss functions, which is connected to discrimination problems, is directly related to measures of performance used in discrimination. Consistency results are given for a general class of loss functions which comprise this family of discriminant loss functions.

7.
Differential equations have been used in statistics to define functions such as probability densities. But the idea of using differential equation formulations of stochastic models has a much wider scope. The author gives several examples, including simultaneous estimation of a regression model and residual density, monotone smoothing, specification of a link function, differential equation models of data, and smoothing over complicated multidimensional domains. This paper aims to stimulate interest in this approach to functional estimation problems, rather than provide carefully worked out methods.

8.
We consider point and interval estimation of the unknown parameters of a generalized inverted exponential distribution in the presence of hybrid censoring. The maximum likelihood estimates are obtained using the EM algorithm. We then compute the Fisher information matrix using the missing value principle. Bayes estimates are derived under squared error and general entropy loss functions. Furthermore, approximate Bayes estimates are obtained using the Tierney–Kadane method as well as an importance sampling approach. Asymptotic and highest posterior density intervals are also constructed. The proposed estimates are compared numerically using Monte Carlo simulations, and a real data set is analyzed for illustrative purposes.

9.
The Fisher distribution is frequently used as a model for the probability distribution of directional data, which may be specified either in terms of unit vectors or angular co-ordinates (co-latitude and azimuth). If, in practical situations, only the co-latitudes can be observed, the available data must be regarded as a sample from the corresponding marginal distribution. This paper discusses the estimation by Maximum Likelihood (ML) and the Method of Moments of the two parameters of this marginal Fisher distribution. The moment estimators are generally simpler to compute than the ML estimators, and have high asymptotic efficiency.

10.
In this paper, we consider the Marshall–Olkin extended exponential (MOEE) distribution, which is capable of modelling various shapes of failure rates and aging criteria. The purpose of this paper is threefold. First, we derive the maximum likelihood estimators of the unknown parameters and the observed Fisher information matrix from progressively type-II censored data. Second, the Bayes estimates are evaluated by applying Lindley’s approximation method and the Markov chain Monte Carlo method under the squared error loss function; we have performed a simulation study in order to compare the proposed Bayes estimators with the maximum likelihood estimators, and we also compute the 95% asymptotic confidence interval and symmetric credible interval along with the coverage probability. Third, we consider one-sample and two-sample prediction problems based on the observed sample and provide appropriate predictive intervals under classical as well as Bayesian frameworks. Finally, we analyse a real data set to illustrate the results derived.

11.
Estimation of a nonlinear integral functional of a probability density and its derivatives is studied. A truncated plug-in estimator is used for the estimation. The integrand function may be unbounded, but must not exceed polynomial growth. Consistency of the estimator is proved and the convergence order is established. A version of the central limit theorem is proved. As examples, an extended Fisher information integral and a generalized Shannon entropy functional are considered.

12.
These Fortran-77 subroutines provide building blocks for Generalized Cross-Validation (GCV) (Craven and Wahba, 1979) calculations in data analysis and data smoothing, including ridge regression (Golub, Heath, and Wahba, 1979), thin plate smoothing splines (Wahba and Wendelberger, 1980), deconvolution (Wahba, 1982d), smoothing of generalized linear models (O'Sullivan, Yandell and Raynor 1986, Green 1984 and Green and Yandell 1985), and ill-posed problems (Nychka et al., 1984, O'Sullivan and Wahba, 1985). We present some of the types of problems for which GCV is a useful method of choosing a smoothing or regularization parameter, and we describe the structure of the subroutines. A familiar example of a smoothing parameter is the ridge parameter λ in the ridge regression problem.
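As a rough illustration of choosing the ridge parameter by GCV (a Python sketch on synthetic data, not the Fortran-77 subroutines described above; `gcv_ridge` and the data are hypothetical):

```python
import numpy as np

def gcv_ridge(X, y, lam):
    """GCV score for ridge regression at penalty lam:
    V(lam) = (1/n)||(I - A)y||^2 / [(1/n) tr(I - A)]^2,
    where A(lam) = X (X'X + n*lam*I)^{-1} X' is the influence matrix."""
    n, p = X.shape
    A = X @ np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T)
    resid = y - A @ y
    denom = (1.0 - np.trace(A) / n) ** 2
    return (resid @ resid / n) / denom

# Synthetic regression problem
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
beta = np.array([1.0, 0.5, 0.0, 0.0, -1.0])
y = X @ beta + rng.normal(scale=0.5, size=50)

# Pick the ridge parameter minimizing the GCV score over a grid
lams = 10.0 ** np.linspace(-6, 1, 30)
best = min(lams, key=lambda lam: gcv_ridge(X, y, lam))
```

In practice one minimizes V(λ) over a log-spaced grid or with a one-dimensional optimizer, exactly as the grid search above does.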

13.
In this paper, the estimation of parameters for a generalized inverted exponential distribution based on the progressively first-failure type-II right-censored sample is studied. An expectation–maximization (EM) algorithm is developed to obtain maximum likelihood estimates of unknown parameters as well as reliability and hazard functions. Using the missing value principle, the Fisher information matrix has been obtained for constructing asymptotic confidence intervals. An exact interval and an exact confidence region for the parameters are also constructed. Bayesian procedures based on Markov Chain Monte Carlo methods have been developed to approximate the posterior distribution of the parameters of interest and in addition to deduce the corresponding credible intervals. The performances of the maximum likelihood and Bayes estimators are compared in terms of their mean-squared errors through the simulation study. Furthermore, Bayes two-sample point and interval predictors are obtained when the future sample is ordinary order statistics. The squared error, linear-exponential and general entropy loss functions have been considered for obtaining the Bayes estimators and predictors. To illustrate the discussed procedures, a set of real data is analyzed.

14.
In this article, we propose an extension of the Maxwell distribution, called the extended Maxwell (EMa) distribution. The extension is obtained by using the Maxwell-X family of distributions together with the Weibull distribution. We study its fundamental properties such as the hazard rate, moments, generating functions, skewness, kurtosis, stochastic ordering, conditional moments, mean and variance of the (reversed) residual life, reliability curves, and entropy. From an estimation viewpoint, maximum likelihood estimation of the unknown parameters of the distribution and asymptotic confidence intervals are discussed. We also obtain the expected Fisher information matrix and discuss the existence and uniqueness of the maximum likelihood estimators. The EMa distribution and other competing distributions are fitted to two real datasets, and it is shown that the distribution is a good competitor to the compared distributions.

15.
In the Bayesian approach, the Behrens–Fisher problem has been posed as one of estimation for the difference of two means. No Bayesian solution to the Behrens–Fisher testing problem has yet been given due, perhaps, to the fact that the conventional priors used are improper. While default Bayesian analysis can be carried out for estimation purposes, it poses difficulties for testing problems. This paper generates sensible intrinsic and fractional prior distributions for the Behrens–Fisher testing problem from the improper priors commonly used for estimation. It allows us to compute the Bayes factor to compare the null and the alternative hypotheses. This default procedure of model selection is compared with a frequentist test and the Bayesian information criterion. We find a discrepancy in the sense that the frequentist test and the Bayesian information criterion reject the null hypothesis for data for which the Bayes factor under intrinsic or fractional priors does not.

16.
Empirical estimates of source economic data such as trade flows, greenhouse gas emissions, or employment figures are always subject to uncertainty (stemming from measurement errors or confidentiality), but information concerning that uncertainty is often missing. This article uses concepts from Bayesian inference and the maximum entropy principle to estimate the prior probability distribution, uncertainty, and correlations of source data when such information is not explicitly provided. In the absence of additional information, an isolated datum is described by a truncated Gaussian distribution, and if an uncertainty estimate is missing, its prior equals the best guess. When the sum of a set of disaggregate data is constrained to match an aggregate datum, it is possible to determine the prior correlations among disaggregate data. If aggregate uncertainty is missing, all prior correlations are positive. If aggregate uncertainty is available, prior correlations can be either all positive, all negative, or a mix of both. An empirical example is presented, which reports relative uncertainties and correlation priors for the County Business Patterns database. In this example, relative uncertainties range from 1% to 80%, and 20% of data pairs exhibit correlations below −0.9 or above 0.9. Supplementary materials for this article are available online.
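The truncated-Gaussian prior for an isolated datum can be sketched as follows (Python, with a hypothetical datum; the truncation at zero reflects non-negative quantities such as trade flows, and the 20% relative uncertainty is an assumed value):

```python
import numpy as np
from scipy.stats import truncnorm

# Hypothetical datum: best guess 100.0 with 20% relative uncertainty,
# truncated at zero because the quantity cannot be negative
mu, sigma = 100.0, 20.0
a, b = (0.0 - mu) / sigma, np.inf  # truncation bounds in standardized units
prior = truncnorm(a, b, loc=mu, scale=sigma)

# Draw prior samples; the truncation barely shifts the mean here
samples = prior.rvs(size=10000, random_state=0)
```

With the truncation point five standard deviations below the best guess, the prior mean stays essentially at the reported value; for data close to zero the truncation matters much more.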

17.
This article introduces a five-parameter Beta-Dagum distribution from which moments, hazard and entropy, and reliability measures are then derived. These properties show the high flexibility of the said distribution. The maximum likelihood estimators of the Beta-Dagum parameters are examined and the expected Fisher information matrix provided. Next, a simulation study is carried out which shows the good performance of maximum likelihood estimators for finite samples. Finally, the usefulness of the new distribution is illustrated through real data sets.

18.
Undoubtedly, the normal distribution is the most popular distribution in statistics. In this paper, we introduce a natural generalization of the normal distribution and provide a comprehensive treatment of its mathematical properties. We derive expressions for the nth moment, the nth central moment, variance, skewness, kurtosis, mean deviation about the mean, mean deviation about the median, Rényi entropy, Shannon entropy, and the asymptotic distribution of the extreme order statistics. We also discuss estimation by the methods of moments and maximum likelihood and provide an expression for the Fisher information matrix.

19.
A relevant problem in statistics is to draw conclusions about the shape of the distribution of an experiment from which a sample is drawn. We consider this problem when the available information from the experimental performance cannot be exactly perceived, but rather may be assimilated with fuzzy information (as defined by L.A. Zadeh, and by H. Tanaka, T. Okuda and K. Asai). If the hypothetical distribution is completely specified, the extension of the chi-square goodness-of-fit test on the basis of some concepts in fuzzy sets theory does not entail difficulties. Nevertheless, if the hypothetical distribution involves unknown parameters, the extension of the chi-square goodness-of-fit test requires the estimation of those parameters from the fuzzy data. The aim of the present paper is to prove that, under certain natural assumptions, the minimum inaccuracy principle of estimation from fuzzy observations (which we have suggested in a previous paper as an operative extension of the maximum likelihood principle) supplies a suitable method for the above requirement.

20.
An exploration of Bayes discriminant analysis based on the Fisher transformation
Discriminant analysis is one of the three major multivariate statistical methods and is widely applied in many fields. Distance discriminant, Fisher discriminant, and Bayes discriminant are usually regarded as three distinct discriminant methods. This study shows that distance discriminant and Bayes discriminant are the two substantive methods: the former is in effect based on percentiles or confidence intervals, while the latter is based on probabilities. The well-known Fisher discriminant merely applies a linear transformation to the discriminant variables, following the idea of analysis of variance, and then uses distance discriminant, so it cannot really be regarded as a substantive discriminant method in itself. This paper combines the Fisher transformation with Bayes discrimination: first apply the Fisher transformation, then perform Bayes discrimination under the maximum-probability principle. This yields a new discriminant approach that can further improve discriminant efficiency. Theoretical and empirical analyses show that Bayes discrimination based on the Fisher transformation is widely applicable and achieves the highest discriminant efficiency.
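The two-stage approach described in this abstract (a Fisher linear transformation followed by maximum-posterior Bayes discrimination) can be sketched for two classes (Python, synthetic data; all names and the normal model for the projected scores are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
# Two synthetic classes in two dimensions
X1 = rng.normal(loc=[0, 0], scale=1.0, size=(100, 2))
X2 = rng.normal(loc=[2, 2], scale=1.0, size=(100, 2))

# Fisher transformation: direction w = Sw^{-1} (m1 - m2)
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)
w = np.linalg.solve(Sw, m1 - m2)

# Project the training data onto the Fisher direction
z1, z2 = X1 @ w, X2 @ w

# Bayes discrimination on the projected scores: assign to the class
# with the larger posterior (prior times normal density of the score)
p1 = p2 = 0.5
def classify(x):
    z = x @ w
    post1 = p1 * norm.pdf(z, z1.mean(), z1.std())
    post2 = p2 * norm.pdf(z, z2.mean(), z2.std())
    return 1 if post1 >= post2 else 2

acc = np.mean([classify(x) == 1 for x in X1] + [classify(x) == 2 for x in X2])
```

The Fisher step reduces the problem to one dimension; the Bayes step then replaces a fixed distance cutoff with a maximum-probability rule on the projected scores.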
