Similar Articles (20 results)
1.
Small area estimation techniques are increasingly used in survey applications to provide estimates for local areas of interest. The objective of this article is to develop and apply Information Theoretic (IT)-based formulations to estimate small area business and trade statistics. More specifically, we propose a Generalized Maximum Entropy (GME) approach to the problem of small area estimation that exploits auxiliary information relating to other known variables on the population and adjusts for consistency and additivity. The GME formulations, combining information from the sample with out-of-sample aggregates of the population of interest, can be particularly useful in the context of small area estimation, for both direct and model-based estimators, since they do not require strong distributional assumptions on the disturbances. The performance of the proposed IT formulations is illustrated through real and simulated datasets.
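The maximum-entropy idea underlying GME can be illustrated with a stripped-down example (a sketch of the principle only, not the paper's GME formulation, which also handles sampling weights, consistency, and additivity): recover a distribution on a known support from a single moment constraint by maximizing entropy, which yields an exponentially tilted solution.

```python
import math

def maxent_probs(support, target_mean, lo=-10.0, hi=10.0, tol=1e-10):
    # Maximize entropy subject to a mean constraint: the solution is the
    # exponentially tilted distribution p_i proportional to exp(lam * x_i);
    # find lam by bisection (the tilted mean is increasing in lam).
    def tilted_mean(lam):
        w = [math.exp(lam * x) for x in support]
        s = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / s
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if tilted_mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * x) for x in support]
    s = sum(w)
    return [wi / s for wi in w]

# classic die example: support 1..6, constrained mean 4.5
p = maxent_probs([1, 2, 3, 4, 5, 6], 4.5)
```

The GME estimators in the paper extend this idea to many constraints formed from sample data and population aggregates.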

2.
Riccardo Gatto, Statistics, 2013, 47(4): 409–421
The broad class of generalized von Mises (GvM) circular distributions has certain optimal properties with respect to information theoretic quantities. It is shown that, under constraints on the trigonometric moments, and using the Kullback–Leibler information as the measure, the closest circular distribution to any other is of the GvM form. The lower bounds for the Kullback–Leibler information in this situation are also provided. The same problem is also considered using a modified version of the Kullback–Leibler information. Finally, series expansions are given for the entropy and the normalizing constants of the GvM distribution.

3.
Mutual information is a measure for investigating the dependence between two random variables. Copula-based estimation of mutual information reduces the complexity of the problem because mutual information depends only on the copula density. We propose two estimators and discuss their asymptotic properties. A simulation study is carried out to compare the performance of the estimators, and the methods are illustrated using real data sets.
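As a sketch of the copula idea (not the authors' estimators): under a Gaussian copula the mutual information has the closed form −½·log(1−ρ²), where ρ is the correlation of the normal scores of the ranks, so a rank-based plug-in estimator needs only the copula, not the marginals.

```python
import math
import random
from statistics import NormalDist

def ranks(v):
    # 1-based ranks of the entries of v
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def gaussian_copula_mi(x, y):
    # MI under a Gaussian copula: -0.5*log(1 - rho^2), rho estimated from
    # the normal scores of the ranks (the marginals drop out entirely).
    n = len(x)
    nd = NormalDist()
    zx = [nd.inv_cdf(r / (n + 1)) for r in ranks(x)]
    zy = [nd.inv_cdf(r / (n + 1)) for r in ranks(y)]
    mx, my = sum(zx) / n, sum(zy) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in zx))
    sy = math.sqrt(sum((b - my) ** 2 for b in zy))
    rho = sum((a - mx) * (b - my) for a, b in zip(zx, zy)) / (sx * sy)
    return -0.5 * math.log(1.0 - rho ** 2)

random.seed(1)
x = [random.gauss(0, 1) for _ in range(2000)]
y = [0.6 * a + 0.8 * random.gauss(0, 1) for a in x]  # true rho = 0.6
mi_hat = gaussian_copula_mi(x, y)
```

For the data above the true value is −½·log(1−0.36) ≈ 0.223; the estimate only uses the ranks.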

4.
Statistics and Computing - Post-randomization methods are among the most popular disclosure limitation techniques for both categorical and continuous data. In the categorical case, given a...

5.
Brown and Cohen (1974) considered the problem of interval estimation of the common mean of two normal populations based on independent random samples. They showed that if we take the usual confidence interval using the first sample only and centre it around an appropriate combined estimate of the common mean the resulting interval would contain the true value with higher probability. They also gave a sufficient condition which such a point estimate should satisfy. Bhattacharya and Shah (1978) showed that the estimates satisfying this condition are nearly identical to the mean of the first sample. In this paper we obtain a stronger sufficient condition which is satisfied by many point estimates when the size of the second sample exceeds ten.

6.
This paper discusses the classic but still current problem of interval estimation of a binomial proportion. Bootstrap methods are presented for constructing such confidence intervals in a routine, automatic way. Three confidence intervals for a binomial proportion are compared and studied by means of a simulation study, namely: the Wald confidence interval, the Agresti–Coull interval and the bootstrap-t interval. A new confidence interval, the Agresti–Coull interval with bootstrap critical values, is also introduced, and its good behaviour in terms of average coverage probability is established by means of simulations.
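The two closed-form intervals compared in the paper are easy to state; a minimal sketch follows (the bootstrap-t and bootstrap-critical-value variants are omitted). The Agresti–Coull interval adds z²/2 pseudo-successes and z²/2 pseudo-failures before applying the Wald formula, which avoids the Wald interval's degenerate behaviour at the boundaries.

```python
import math

def wald_ci(x, n, z=1.96):
    # standard Wald interval, clipped to [0, 1]
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - half), min(1.0, p + half))

def agresti_coull_ci(x, n, z=1.96):
    # add z^2/2 pseudo-successes and z^2/2 pseudo-failures, then apply Wald
    n_adj = n + z ** 2
    p_adj = (x + z ** 2 / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return (max(0.0, p_adj - half), min(1.0, p_adj + half))

w = wald_ci(0, 20)             # degenerates to a zero-width interval at 0
ac = agresti_coull_ci(0, 20)   # remains a sensible interval
```

With x = 0 successes the Wald interval collapses to {0}, while the Agresti–Coull interval still covers a plausible range, which is one reason for its better average coverage.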

7.
Inference concerning the negative binomial dispersion parameter, denoted by c, is important in many biological and biomedical investigations. Properties of the maximum-likelihood estimator of c and its bias-corrected version have been studied extensively, mainly in terms of bias and efficiency [W.W. Piegorsch, Maximum likelihood estimation for the negative binomial dispersion parameter, Biometrics 46 (1990), pp. 863–867; S.J. Clark and J.N. Perry, Estimation of the negative binomial parameter κ by maximum quasi-likelihood, Biometrics 45 (1989), pp. 309–316; K.K. Saha and S.R. Paul, Bias corrected maximum likelihood estimator of the negative binomial dispersion parameter, Biometrics 61 (2005), pp. 179–185]. However, not much work has been done on the construction of confidence intervals (C.I.s) for c. The purpose of this paper is to study the behaviour of some C.I. procedures for c. We study, by simulations, three Wald-type C.I. procedures based on the asymptotic distribution of the method of moments estimate (mme), the maximum-likelihood estimate (mle) and the bias-corrected mle (bcmle) [K.K. Saha and S.R. Paul, Bias corrected maximum likelihood estimator of the negative binomial dispersion parameter, Biometrics 61 (2005), pp. 179–185] of c. All three methods show serious under-coverage. We further study parametric bootstrap procedures based on these estimates of c, which significantly improve the coverage probabilities. The bootstrap C.I.s based on the mle (Boot-MLE method) and the bcmle (Boot-BCM method) have coverages that are significantly better (empirical coverage close to the nominal coverage) than the corresponding bootstrap C.I. based on the mme, especially for small sample sizes and highly over-dispersed data. However, simulation results on the lengths of the C.I.s show that all three bootstrap procedures have larger average lengths. Therefore, for practical data analysis, the bootstrap C.I. Boot-MLE or Boot-BCM should be used, although the Boot-MLE method seems preferable to the Boot-BCM method in terms of both coverage and length. Furthermore, Boot-MLE needs less computation than Boot-BCM.
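A sketch of the parametric bootstrap idea, using the simpler moment estimator of c rather than the mle/bcmle that the paper recommends (parameterization: mean μ, variance μ + cμ²; the NB draws use a Poisson-gamma mixture so only the standard library is needed):

```python
import math
import random

def rpois(lam):
    # Knuth's method (adequate for moderate lam)
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def rnegbin(mu, c):
    # NB with mean mu and variance mu + c*mu^2, via a Poisson-gamma mixture
    return rpois(random.gammavariate(1.0 / c, c * mu))

def mme_c(data):
    # method-of-moments estimate of the dispersion c: (s^2 - xbar) / xbar^2
    n = len(data)
    m = sum(data) / n
    v = sum((x - m) ** 2 for x in data) / (n - 1)
    return max((v - m) / (m * m), 1e-6)

def boot_ci_c(data, B=400, alpha=0.05):
    # parametric bootstrap percentile CI for c (mme used here for simplicity;
    # the paper's Boot-MLE/Boot-BCM versions plug in the mle or bcmle instead)
    m = sum(data) / len(data)
    c_hat = mme_c(data)
    reps = sorted(mme_c([rnegbin(m, c_hat) for _ in data]) for _ in range(B))
    return reps[int(B * alpha / 2)], reps[int(B * (1 - alpha / 2)) - 1]

random.seed(3)
sample = [rnegbin(5.0, 0.5) for _ in range(200)]
lo, hi = boot_ci_c(sample)
```

Replacing `mme_c` with a maximum-likelihood routine turns this into the Boot-MLE scheme the paper favours.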

8.
The paper looks at the problem of comparing two treatments, for a particular population of patients, where one is the current standard treatment and the other a possible alternative under investigation. With limited (finite) financial resources, the decision whether to replace one by the other will not be based on health benefits alone. This motivates an economic evaluation of the two competing treatments in which the cost of any gain in health benefit is scrutinized; whether this cost is acceptable to the relevant authorities decides whether the new treatment can become the standard. We adopt a Bayesian decision-theoretic framework in which a utility function describes the consequences of making a particular decision when the true state of nature is expressed via an unknown parameter θ (this parameter denotes cost, effectiveness, etc.). The decision rule is to choose the treatment providing the maximum posterior expected utility, expectations being taken over the posterior distribution of the parameter θ.
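A minimal sketch of such a decision rule: with θ = (incremental effectiveness, incremental cost) and a net-monetary-benefit utility λ·Δe − Δc, the treatment maximizing posterior expected utility is chosen. The willingness-to-pay λ and all numbers below are hypothetical, and the posterior draws are simulated rather than derived from data.

```python
import random

def utility(theta, lam=20000.0):
    # net monetary benefit of the new treatment: lam * delta_effect - delta_cost
    d_effect, d_cost = theta
    return lam * d_effect - d_cost

random.seed(0)
# hypothetical posterior draws of theta = (delta effect, delta cost)
posterior = [(random.gauss(0.05, 0.02), random.gauss(500.0, 200.0))
             for _ in range(5000)]

eu_new = sum(utility(t) for t in posterior) / len(posterior)
eu_standard = 0.0  # status quo: no incremental benefit, no incremental cost
decision = "new" if eu_new > eu_standard else "standard"
```

Here the posterior mean net benefit is about λ·0.05 − 500 = 500 > 0, so the rule favours the new treatment; a lower willingness-to-pay reverses the decision.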

9.
The problem of loss of information due to the discretization of data, and its estimation, is studied for various measures of information. The results of Ghurye and Johnson (1981) are generalized and supplemented for the Csiszár and Rényi measures of information as well as for Fisher's information matrix.

10.
The incorporation of prior information about θ, where θ is the success probability in a binomial sampling model, is an essential feature of Bayesian statistics. Methodology based on information-theoretic concepts is introduced which (a) quantifies the amount of information provided by the sample data relative to that provided by the prior distribution and (b) allows for a ranking of prior distributions with respect to conservativeness, where conservatism refers to restraint of extraneous information about θ which is embedded in any prior distribution. In effect, the most conservative prior distribution from a specified class (each member of which carries the available prior information about θ) is that prior distribution within the class over which the likelihood function has the greatest average domination. The most conservative prior distributions from five different families of prior distributions over the interval (0,1), including the beta distribution, are determined and compared for three situations: (1) no prior estimate of θ is available, (2) a prior point estimate of θ is available, and (3) a prior interval estimate of θ is available. The results of the comparisons not only advocate the use of the beta prior distribution in binomial sampling but also indicate which particular one to use in the three aforementioned situations.
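One way to quantify the information provided by the data relative to the prior, a sketch in the spirit of (a) rather than the authors' exact measure, is the Kullback-Leibler divergence from posterior to prior in the conjugate beta-binomial model: the more the data move the prior, the larger the divergence.

```python
import math

def digamma(z, h=1e-6):
    # numerical digamma via a central difference of lgamma
    return (math.lgamma(z + h) - math.lgamma(z - h)) / (2 * h)

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def kl_beta(a1, b1, a2, b2):
    # KL( Beta(a1,b1) || Beta(a2,b2) ), standard closed form
    return (log_beta(a2, b2) - log_beta(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

x, n = 7, 10  # observed successes out of n trials
kl_uniform = kl_beta(1 + x, 1 + n - x, 1, 1)    # posterior vs uniform prior
kl_informed = kl_beta(7 + x, 3 + n - x, 7, 3)   # prior already centred near x/n
```

The divergence is larger for the uniform prior than for the prior already concentrated near x/n, reflecting that the same sample conveys more information relative to a vaguer prior.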

11.
Difference-type estimators use auxiliary information based on an auxiliary parameter (specifically, the parameter of interest associated with the auxiliary variable). In practice, however, several parameters for auxiliary variables are available. This paper discusses how such estimators can be modified to improve on the usual methods when information related to other parameters associated with one or more auxiliary variables is available. Some applications estimating several such parameters are described, and a proper set of simulation-based comparisons is made. Research partially supported by MCYT (Spain), contract no. BFM2001-3190.

12.
Here we define an information improvement generating function whose derivative at the point 1 gives Theil's measure of information improvement, which has wide applications in economics. It contains Guiasu and Reischer's relative information generating function and Golomb's information generating function as particular cases. Simple expressions for important discrete distributions are obtained. It is also shown that the information improvement generating function suggests a new information indicator in the form of the standard deviation of the variation of information.

13.
This paper is concerned with interval estimation for the breakpoint parameter in segmented regression. We present score-type confidence intervals derived from the score statistic itself and from the recently proposed gradient statistic. Because the score lacks the usual regularity conditions, being non-smooth and non-monotone, naive application of the score-based statistics is infeasible, and we propose to exploit the smoothed score obtained via induced smoothing. We compare our proposals with the traditional methods based on the Wald and the likelihood ratio statistics via simulations and an analysis of a real dataset: results show that the smoothed score-like statistics perform in practice somewhat better than competitors, even when the model is not correctly specified.

14.
The choice of the summary statistics in approximate maximum likelihood is often a crucial issue. We develop a criterion for choosing the most effective summary statistic and then focus on the empirical characteristic function. In the iid setting, the approximating posterior distribution converges to the approximate distribution of the parameters conditional upon the empirical characteristic function. Simulation experiments suggest that the method is often preferable to numerical maximum likelihood. In a time-series framework, no optimality result can be proved, but the simulations indicate that the method is effective in small samples.
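A toy minimum-distance sketch of using the empirical characteristic function as the summary statistic (a one-parameter grid search stands in for the paper's approximate-likelihood machinery; the grid, the t-points, and the fixed scale are illustrative choices):

```python
import cmath
import math
import random

def ecf(data, t):
    # empirical characteristic function at frequency t
    return sum(cmath.exp(1j * t * x) for x in data) / len(data)

def cf_normal(t, mu, sigma):
    # characteristic function of N(mu, sigma^2)
    return cmath.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)

def ecf_distance(data, mu, sigma, ts):
    # squared distance between empirical and model CFs over a few t-points
    return sum(abs(ecf(data, t) - cf_normal(t, mu, sigma)) ** 2 for t in ts)

random.seed(2)
data = [random.gauss(1.0, 1.0) for _ in range(1000)]
ts = [0.5, 1.0, 1.5]
# grid search for the mu that matches the ECF best (sigma held fixed)
best_mu = min((m / 10 for m in range(-30, 31)),
              key=lambda mu: ecf_distance(data, mu, 1.0, ts))
```

The estimate lands near the true mean of 1, illustrating how matching the ECF at a few frequencies can substitute for likelihood evaluation.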

15.
This paper proposes a GMM estimation framework for the spatial autoregressive (SAR) model in a system of simultaneous equations with heteroskedastic disturbances. Besides linear moment conditions, the proposed GMM estimator also utilizes quadratic moment conditions based on the covariance structure of model disturbances within and across equations. Compared with the QML approach, the GMM estimator is easier to implement and robust under heteroskedasticity of unknown form. We derive the heteroskedasticity-robust standard error for the GMM estimator. Monte Carlo experiments show that the proposed GMM estimator performs well in finite samples.

16.
In this study, we evaluate several forms of both Akaike-type and Information Complexity (ICOMP)-type information criteria, in the context of selecting an optimal subset least squares ratio (LSR) regression model. Our simulation studies are designed to mimic many characteristics present in real data – heavy tails, multicollinearity, redundant variables, and completely unnecessary variables. Our findings are that LSR in conjunction with one of the ICOMP criteria is very good at selecting the true model. Finally, we apply these methods to the familiar body fat data set.

17.
When an appropriate parametric model and a prior distribution of its parameters are given to describe clinical time courses of a dynamic biological process, Bayesian approaches allow us to estimate the entire profiles from a few or even a single observation per subject. The goodness of the estimation depends on the measurement points at which the observations were made. The number of measurement points per subject is generally limited to one or two, so the points have to be selected carefully. This paper proposes an approach to the selection of the optimum measurement point for Bayesian estimation of clinical time courses. The selection is made among given candidates, based on the goodness of estimation evaluated by the Kullback-Leibler information, which measures the discrepancy of an estimated time course from the true one specified by a given appropriate model. The proposed approach is applied to a pharmacokinetic analysis, a typical clinical example where such a selection is required. The results of the present study strongly suggest that the proposed approach is applicable to pharmacokinetic data and has a wide range of clinical applications.

18.
We use an information theoretic criterion proposed by Zhao, Krishnaiah and Bai (1986) to detect the number of outliers in a data set. We consider univariate mean-slippage and dispersion-slippage outlier structures of the observations. Multivariate generalizations and the consistency of the estimates are also considered. Numerical examples are presented in tables.
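A stylized sketch of information-criterion outlier counting in the univariate mean-slippage case (the penalty below is an illustrative BIC-style choice; the criterion of Zhao, Krishnaiah and Bai uses a penalty sequence C_n with specific growth conditions):

```python
import math
import random

def n_outliers_ic(data, kmax=5, penalty=None):
    # Choose the number of outliers k minimizing n*log(RSS_k/n) + penalty*k,
    # where RSS_k is the residual sum of squares after deleting the k points
    # farthest from the median.
    n = len(data)
    if penalty is None:
        penalty = 4.0 * math.log(n)  # illustrative BIC-style penalty
    med = sorted(data)[n // 2]
    ordered = sorted(data, key=lambda x: abs(x - med), reverse=True)
    best_k, best_score = 0, float("inf")
    for k in range(kmax + 1):
        kept = ordered[k:]
        m = sum(kept) / len(kept)
        rss = sum((x - m) ** 2 for x in kept)
        score = n * math.log(rss / n) + penalty * k
        if score < best_score:
            best_k, best_score = k, score
    return best_k

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(100)] + [10.0, 12.0]
k_hat = n_outliers_ic(sample)
```

Deleting the two planted outliers drops the residual sum of squares enough to pay the penalty twice, while deleting a third, genuine point does not.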

19.
A procedure is illustrated to incorporate prior information in the ridge regression model. Unbiased ridge estimators with prior information are defined and a robust estimate of the ridge parameter k is proposed.

20.
There is much literature on statistical inference for distributions under missing data, but surprisingly little attention has been paid to missing data in the context of estimating a distribution with auxiliary information. In this article, we consider the use of auxiliary information when data are missing. We use the method of Zhou, Wan and Wang (2008) to mitigate the effects of missing data through a reformulation of the estimating equations, imputed through a semi-parametric procedure. We can then estimate the distribution and the τth quantile of the distribution by taking auxiliary information into account. Asymptotic properties of the distribution estimator and the corresponding sample quantile are derived and analyzed. The distribution estimators based on our method are found to significantly outperform the corresponding estimators without auxiliary information. Some simulation studies are conducted to illustrate the finite-sample performance of the proposed estimators.

