Similar Articles
20 similar articles found
1.
《Econometric Reviews》2013,32(2):203-215
Abstract

Recent results in information theory, see Soofi (1996; 2001) for a review, include derivations of optimal information processing rules, including Bayes' theorem, for learning from data based on minimizing a criterion functional, namely output information minus input information as shown in Zellner (1988; 1991; 1997; 2002). Herein, solution post data densities for parameters are obtained and studied for cases in which the input information is that in (1) a likelihood function and a prior density; (2) only a likelihood function; and (3) neither a prior nor a likelihood function but only input information in the form of post data moments of parameters, as in the Bayesian method of moments approach. Then it is shown how optimal output densities can be employed to obtain predictive densities and optimal, finite sample structural coefficient estimates using three alternative loss functions. Such optimal estimates are compared with usual estimates, e.g., maximum likelihood, two-stage least squares, ordinary least squares, etc. Some Monte Carlo experimental results in the literature are discussed and implications for the future are provided.

2.
The bootstrap, like the jackknife, is a technique for estimating standard errors. The idea is to use Monte Carlo simulation, based on a nonparametric estimate of the underlying error distribution. The bootstrap will be applied to an econometric model describing the demand for capital, labor, energy, and materials. The model is fitted by three-stage least squares. In sharp contrast with previous results, the coefficient estimates and the estimated standard errors perform very well. However, the model's forecasts show serious bias and large random errors, significantly understated by the conventional standard error of forecast.
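A minimal sketch of this resampling idea, applied to ordinary least squares rather than the paper's three-stage least squares system (the simulated data and variable names are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = X @ beta + e, with e from an "unknown" distribution.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.standard_t(df=5, size=n)

# Fit by OLS; keep residuals as a nonparametric estimate of the error law.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# Bootstrap: resample residuals with replacement, rebuild y, refit.
B = 2000
draws = np.empty((B, 2))
for b in range(B):
    e_star = rng.choice(resid, size=n, replace=True)
    y_star = X @ beta_hat + e_star
    draws[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)

# Bootstrap standard errors: standard deviation of the coefficient draws.
print("bootstrap SEs:", draws.std(axis=0, ddof=1))
```

The same recipe extends to systems estimators such as three-stage least squares, with the refitting step replaced accordingly.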

3.
Summary  This special issue of the Journal of the German Statistical Society presents 14 papers with surveys of developments and new topics in econometrics. The articles demonstrate how German econometricians view the discipline from their own perspective; they briefly describe the main strands of work and highlight some recent methods.

4.
A commonly used procedure in a wide class of empirical applications is to impute unobserved regressors, such as expectations, from an auxiliary econometric model. This two-step (T-S) procedure fails to account for the fact that imputed regressors are measured with sampling error, so hypothesis tests based on the estimated covariance matrix of the second-step estimator are biased, even in large samples. We present a simple yet general method of calculating asymptotically correct standard errors in T-S models. The procedure may be applied even when joint estimation methods, such as full information maximum likelihood, are inappropriate or computationally infeasible. We present two examples from recent empirical literature in which these corrections have a major impact on hypothesis testing.
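For reference, one widely used correction of this kind is the Murphy–Topel covariance estimator; the form below is the common textbook statement and is offered as a sketch rather than as this article's exact formula. With V1 the covariance of the first-step estimator and V2 the naive second-step covariance,

```latex
\hat{V}_2^{*} \;=\; V_2 + V_2\left( C\,V_1\,C' - R\,V_1\,C' - C\,V_1\,R' \right) V_2 ,
```

where C = E[(∂ln L2/∂θ2)(∂ln L2/∂θ1)'] and R = E[(∂ln L2/∂θ2)(∂ln L1/∂θ1)'] capture the cross-step score interactions that the naive V2 ignores.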

5.
Group testing has its origin in the identification of syphilis in the U.S. army during World War II. Much of the theoretical framework of group testing was developed starting in the late 1950s, with continued work into the 1990s. Recently, with the advent of new laboratory and genetic technologies, there has been an increasing interest in group testing designs for cost saving purposes. In this article, we compare different nested designs, including Dorfman, Sterrett and an optimal nested procedure obtained through dynamic programming. To elucidate these comparisons, we develop closed-form expressions for the optimal Sterrett procedure and provide a concise review of the prior literature for other commonly used procedures. We consider designs where the prevalence of disease is known, and we investigate the robustness of these procedures when the prevalence is incorrectly specified. The article offers a technical presentation that will be of interest to researchers and is also valuable from a pedagogical perspective. Supplementary material for this article is available online.
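Dorfman's original two-stage rule admits a simple closed-form cost, and a few lines of code reproduce the cost-saving logic (a standard textbook calculation, consistent with but much simpler than the article's optimal-design comparisons):

```python
import numpy as np

def dorfman_tests_per_specimen(p, k):
    """Expected tests per specimen under Dorfman pooling:
    one pooled test per k specimens, plus k individual retests
    when the pool is positive (probability 1 - (1-p)**k)."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

p = 0.02                      # assumed known prevalence
ks = np.arange(2, 31)
costs = dorfman_tests_per_specimen(p, ks)
best = ks[np.argmin(costs)]
print(f"optimal group size: {best}, "
      f"expected tests/specimen: {costs.min():.3f}")
```

At p = 0.02 the optimal group size is around 8, with roughly 0.27 tests per specimen versus 1 under individual testing, which is the cost saving that motivates these designs.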

6.
Introductory statistical inference texts and courses treat the point estimation, hypothesis testing, and interval estimation problems separately, with primary emphasis on large-sample approximations. Here, I present an alternative approach to teaching this course, built around p-values, emphasizing provably valid inference for all sample sizes. Details about computation and marginalization are also provided, with several illustrative examples, along with a course outline. Supplementary materials for this article are available online.

7.
We develop fast mean field variational methodology for Bayesian heteroscedastic semiparametric regression, in which both the mean and variance are smooth, but otherwise arbitrary, functions of the predictors. Our resulting algorithms are purely algebraic, devoid of numerical integration and Monte Carlo sampling. The locality property of mean field variational Bayes implies that the methodology also applies to larger models possessing variance function components. Simulation studies indicate good to excellent accuracy, and considerable time savings compared with Markov chain Monte Carlo. We also provide some illustrations from applications.
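The flavor of such purely algebraic updates can be seen in the classic conjugate Normal–Gamma toy example below (a standard textbook illustration, not the paper's heteroscedastic regression model; the priors and hyperparameters are assumed for illustration). With x_i ~ N(mu, 1/tau), mu | tau ~ N(mu0, 1/(lam0*tau)) and tau ~ Gamma(a0, b0), coordinate ascent cycles through closed-form moment updates:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.5, size=100)
n, xbar = x.size, x.mean()

# Weakly informative priors (assumed for illustration).
mu0, lam0, a0, b0 = 0.0, 1e-2, 1e-2, 1e-2

# Factorized approximation q(mu, tau) = q(mu) q(tau):
# q(mu) = N(mu_n, 1/lam_n), q(tau) = Gamma(a_n, b_n).
a_n = a0 + (n + 1) / 2.0          # fixed across iterations
b_n = b0
for _ in range(50):               # coordinate ascent: algebra only
    e_tau = a_n / b_n             # E_q[tau]
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
    lam_n = (lam0 + n) * e_tau
    e_mu, var_mu = mu_n, 1.0 / lam_n
    b_n = b0 + 0.5 * (lam0 * ((e_mu - mu0) ** 2 + var_mu)
                      + np.sum((x - e_mu) ** 2) + n * var_mu)

print(f"E_q[mu] = {mu_n:.3f}, E_q[tau] = {a_n / b_n:.3f}")
```

No integration or sampling appears anywhere in the loop, which is why such schemes can be orders of magnitude faster than MCMC.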

8.
ABSTRACT

Such is the grip of formal methods of statistical inference—that is, frequentist methods for generalizing from sample to population in enumerative studies—in the drawing of scientific inferences that the two are routinely deemed equivalent in the social, management, and biomedical sciences. This, despite the fact that legitimate employment of said methods is difficult to implement on practical grounds alone. But supposing the adoption of these procedures were simple does not get us far; crucially, methods of formal statistical inference are ill-suited to the analysis of much scientific data. Even findings from the claimed gold standard for examination by the latter, randomized controlled trials, can be problematic.

Scientific inference is a far broader concept than statistical inference. Its authority derives from the accumulation, over an extensive period of time, of both theoretical and empirical knowledge that has won the (provisional) acceptance of the scholarly community. A major focus of scientific inference can be viewed as the pursuit of significant sameness, meaning replicable and empirically generalizable results among phenomena. Regrettably, the obsession of users of statistical inference with reporting significant differences in data sets actively thwarts cumulative knowledge development.

The manifold problems surrounding the implementation and usefulness of formal methods of statistical inference in advancing science do not speak well of much teaching in methods/statistics classes. Serious reflection on statistics' role in producing viable knowledge is needed. Commendably, the American Statistical Association is committed to addressing this challenge, as further witnessed in this special online, open access issue of The American Statistician.

9.
Recent developments in sample survey theory include the following topics: foundational aspects of inference, resampling methods for variance and confidence interval estimation, imputation for nonresponse and analysis of complex survey data. An overview and appraisal of some of these developments are presented.

10.
Full likelihood-based inference for modern population genetics data presents methodological and computational challenges. The problem is of considerable practical importance and has attracted recent attention, with the development of algorithms based on importance sampling (IS) and Markov chain Monte Carlo (MCMC) sampling. Here we introduce a new IS algorithm. The optimal proposal distribution for these problems can be characterized, and we exploit a detailed analysis of genealogical processes to develop a practicable approximation to it. We compare the new method with existing algorithms on a variety of genetic examples. Our approach substantially outperforms existing IS algorithms, with efficiency typically improved by several orders of magnitude. The new method also compares favourably with existing MCMC methods in some problems, and less favourably in others, suggesting that both IS and MCMC methods have a continuing role to play in this area. We offer insights into the relative advantages of each approach, and we discuss diagnostics in the IS framework.
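Why proposal quality matters so much, and why a good approximation to the optimal proposal can yield orders-of-magnitude gains, can be seen in a deliberately simple self-normalized IS sketch (a toy target and proposals, not the genealogical processes studied here); the effective sample size serves as a basic diagnostic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 10_000

# Target: standard normal. Quantity of interest: E[theta^2] = 1.
for scale in (1.1, 3.0, 0.5):                 # proposal q = N(0, scale^2)
    theta = rng.normal(0.0, scale, size=n)    # draws from the proposal
    log_w = stats.norm.logpdf(theta) - stats.norm.logpdf(theta, 0.0, scale)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                              # self-normalized weights
    est = np.sum(w * theta ** 2)
    ess = 1.0 / np.sum(w ** 2)                # effective sample size diagnostic
    print(f"proposal sd={scale}: estimate={est:.3f}, ESS={ess:,.0f}")
```

A proposal close to the target (sd 1.1) keeps the ESS near n; a proposal with lighter tails than the target (sd 0.5) collapses the ESS and destabilizes the estimate, which is the failure mode a carefully constructed proposal is designed to avoid.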

11.
In this article statistical inference is viewed as information processing involving input information and output information. After introducing information measures for the input and output information, an information criterion functional is formulated and optimized to obtain an optimal information processing rule (IPR). For the particular information measures and criterion functional adopted, it is shown that Bayes's theorem is the optimal IPR. This optimal IPR is shown to be 100% efficient in the sense that its use leads to the output information being exactly equal to the given input information. Also, the analysis links Bayes's theorem to maximum-entropy considerations.
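The optimization can be sketched as follows (the notation is ours, chosen to match the description above; Zellner's papers give the exact setup). With prior density π(θ), likelihood f(y|θ), marginal m(y) = ∫π(θ)f(y|θ)dθ, and candidate output density q(θ):

```latex
\Delta[q]
  = \underbrace{\int q(\theta)\ln q(\theta)\,d\theta + \ln m(y)}_{\text{output information}}
  - \underbrace{\left[\int q(\theta)\ln \pi(\theta)\,d\theta
      + \int q(\theta)\ln f(y\mid\theta)\,d\theta\right]}_{\text{input information}}
  = \int q(\theta)\ln\frac{q(\theta)\,m(y)}{\pi(\theta)\,f(y\mid\theta)}\,d\theta .
```

The last expression is the Kullback–Leibler divergence from q to π(θ)f(y|θ)/m(y), so Δ[q] ≥ 0, with the minimum Δ = 0 attained exactly when q is the Bayes posterior; that is the 100% efficiency property described above.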

12.
The design and analysis of experiments to estimate heritability when data are available on both parents and progeny and the offspring have a hierarchical structure is considered. The method of analysis is related to a multivariate analysis of variance and to weighted least squares. It is shown that genetical theory gives a simple interpretation of both maximum likelihood (ML) and Rao's minimum norm quadratic unbiased (MINQUE) methods of estimation of variance components in unbalanced designs.

13.
When the error terms are not independently and identically distributed, the asymptotic test based on Moran's I statistic cannot reliably determine whether spatial dependence is present among the 2SLS residuals of a spatial econometric lag model. This paper applies two residual-based bootstrap methods to diagnose spatial correlation in the residuals of the spatial lag model. Extensive Monte Carlo simulations show that, in terms of power, the bootstrap Moran test has better finite-sample properties than the asymptotic test whether or not the errors are i.i.d., and thus detects spatial correlation more effectively. In particular, when the sample size is small and the degree of spatial connectedness is high, the power of the bootstrap Moran test is markedly greater than that of the asymptotic test.
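A sketch of the residual-based bootstrap idea (illustrative only: it uses OLS on a non-spatial toy model and an arbitrary simulated weight matrix, whereas the paper works with the 2SLS residuals of a spatial lag model):

```python
import numpy as np

rng = np.random.default_rng(3)

def morans_i(e, W):
    """Moran's I for a residual vector e and spatial weight matrix W."""
    n, s0 = e.size, W.sum()
    return (n / s0) * (e @ W @ e) / (e @ e)

# Toy data and a symmetric, row-standardized weight matrix (assumed).
n = 50
A = rng.random((n, n)) < 0.1
A = np.triu(A, 1)
W = (A | A.T).astype(float)
W /= np.maximum(W.sum(axis=1, keepdims=True), 1.0)

X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

# Observed statistic from the fitted residuals.
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta
i_obs = morans_i(e, W)

# Residual bootstrap null distribution: resample residuals i.i.d.,
# rebuild y with no spatial dependence, refit, recompute Moran's I.
B = 999
i_boot = np.empty(B)
for b in range(B):
    y_star = X @ beta + rng.choice(e, size=n, replace=True)
    e_star = y_star - X @ np.linalg.lstsq(X, y_star, rcond=None)[0]
    i_boot[b] = morans_i(e_star, W)

p_value = (1 + np.sum(np.abs(i_boot) >= abs(i_obs))) / (B + 1)
print(f"Moran's I = {i_obs:.3f}, bootstrap p-value = {p_value:.3f}")
```

Because the null distribution is built from the empirical residuals rather than an asymptotic normal approximation, the test does not lean on the i.i.d. error assumption that undermines the asymptotic Moran test.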

14.
We develop a novel computational methodology for Bayesian optimal sequential design for nonparametric regression. This computational methodology, that we call inhomogeneous evolutionary Markov chain Monte Carlo, combines ideas of simulated annealing, genetic or evolutionary algorithms, and Markov chain Monte Carlo. Our framework allows optimality criteria with general utility functions and general classes of priors for the underlying regression function. We illustrate the usefulness of our novel methodology with applications to experimental design for nonparametric function estimation using Gaussian process priors and free-knot cubic splines priors.

15.
We observe s independent samples from unknown continuous distributions. The problem is to test the hypothesis that all the distributions are identical. The distribution of the numbers of observations from s-1 of the samples that fall in cells whose boundaries are selected order statistics of the remaining sample, the number of cells increasing gradually with the sample sizes, is investigated. It is shown that under the null hypothesis and nearby alternatives, as the sample sizes increase these numbers of observations can be treated as slightly rounded-off normal random variables, the amount rounded off decreasing as the sample sizes increase. Using these results, various tests of the hypothesis can be constructed and analyzed.

16.
Abstract

We propose a simple procedure based on an existing "debiased" l1-regularized method for inference of the average partial effects (APEs) in approximately sparse probit and fractional probit models with panel data, where the number of time periods is fixed and small relative to the number of cross-sectional observations. Our method is computationally simple and does not suffer from the incidental parameters problem that comes from attempting to estimate the unobserved heterogeneity for each cross-sectional unit as a parameter. Furthermore, it is robust to arbitrary serial dependence in the underlying idiosyncratic errors. Our theoretical results illustrate that inference concerning APEs is more challenging than inference about fixed and low-dimensional parameters, as the former concerns deriving the asymptotic normality for sample averages of linear functions of a potentially large set of components in our estimator when a series approximation for the conditional mean of the unobserved heterogeneity is considered. Insights on the applicability and implications of other existing Lasso-based inference procedures for our problem are provided. We apply the debiasing method to estimate the effects of spending on test pass rates. Our results show that spending has a positive and statistically significant average partial effect; moreover, the effect is comparable to that found using standard parametric methods.

17.
This paper proposes the use of the integrated likelihood for inference on the mean effect in small-sample meta-analysis for continuous outcomes. The method eliminates the nuisance parameters given by variance components through integration with respect to a suitable weight function, with no need to estimate them. The integrated likelihood approach properly accounts for the estimation uncertainty of the within-study variances, thus providing confidence intervals with empirical coverage closer to nominal levels than standard likelihood methods. The improvement is remarkable when either (i) the number of studies is small to moderate or (ii) the small sample size of the studies does not allow one to treat the within-study variances as known, as is common in applications. Moreover, the use of the integrated likelihood avoids numerical pitfalls related to the estimation of variance components which can affect alternative likelihood approaches. The proposed methodology is illustrated via simulation and applied to a meta-analysis study in nutritional science.
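In generic notation (a sketch of the construction; the paper's weight function and model details may differ), with interest parameter ψ, the mean effect, and nuisance variance components λ, the integrated likelihood is

```latex
\bar{L}(\psi) \;=\; \int L(\psi, \lambda)\, w(\lambda)\, d\lambda ,
```

so that inference on ψ proceeds from L̄ alone, with no plug-in estimates of λ. In the usual random-effects model the i-th study contributes y_i ~ N(ψ, σ_i² + τ²), and λ collects the heterogeneity variance τ² and the within-study variances σ_i².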

18.
In designing a study to compare two lifetime distributions, decisions are required about the study size, the proportion of observations in each group and the length of follow-up period. These aspects of study design are examined using a Bayesian approach in which the expected consequences of a particular choice of design are evaluated by the expected gain in information.
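One standard way to make "expected gain in information" operational is Lindley's measure (given here as an illustrative sketch; the paper's utility may differ in detail): a design d is scored by the expected Kullback–Leibler divergence of the posterior from the prior,

```latex
G(d) \;=\; \mathbb{E}_{y \mid d}\!\left[ \int p(\theta \mid y, d)\,
        \ln \frac{p(\theta \mid y, d)}{p(\theta)} \, d\theta \right],
```

and the study size, group allocation, and follow-up length are chosen to maximize G(d), possibly net of sampling costs.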

19.
Some Frontier Issues in the Application and Theory of Sampling Surveys
This article surveys a number of frontier topics in the application and theory of sampling surveys over the past three to four decades, focusing on three directions: sampling design and inference methods, nonsampling error analysis, and small area estimation. It also points out the main problems facing the practice and theoretical study of sampling surveys in China.

20.
The likelihood function is often used for parameter estimation. Its use, however, may cause difficulties in specific situations. In order to circumvent these difficulties, we propose a parameter estimation method based on replacing the likelihood in the formula of the Bayesian posterior distribution by a function which depends on a contrast measuring the discrepancy between observed data and a parametric model. The properties of the contrast-based (CB) posterior distribution are studied to understand the consequences of incorporating a contrast in the Bayes formula. We show that the CB-posterior distribution can be used to make frequentist inference and to assess the asymptotic variance matrix of the estimator with limited analytical calculations compared to the classical contrast approach. Even though the primary focus of this paper is on frequentist estimation, it is shown that for specific contrasts the CB-posterior distribution can be used to make inference in the Bayesian way. The method was used to estimate the parameters of a variogram (simulated data), a Markovian model (simulated data) and a cylinder-based autosimilar model describing soil roughness (real data). Even though the method is presented from a spatial statistics perspective, it can be applied to non-spatial data.
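Schematically, and as an assumption on our part since the abstract does not give the formula, the construction replaces the likelihood by an exponentiated negative contrast:

```latex
\pi_{\mathrm{CB}}(\theta \mid y) \;\propto\; \exp\{-\,C_n(\theta;\, y)\}\;\pi(\theta),
```

where C_n is the contrast (for the variogram example, e.g., a weighted least-squares discrepancy between the empirical and model variograms, possibly scaled with the sample size) that the classical contrast approach would minimize directly.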
