Similar Documents
20 similar documents found.
1.
Structured additive regression comprises many semiparametric regression models such as generalized additive (mixed) models, geoadditive models, and hazard regression models within a unified framework. In a Bayesian formulation, non-parametric functions, spatial effects and further model components are specified in terms of multivariate Gaussian priors for high-dimensional vectors of regression coefficients. For several model terms, such as penalized splines or Markov random fields, these Gaussian prior distributions involve rank-deficient precision matrices, yielding partially improper priors. Moreover, hyperpriors for the variances (corresponding to inverse smoothing parameters) may also be specified as improper, e.g. corresponding to Jeffreys prior or a flat prior for the standard deviation. Hence, propriety of the joint posterior is a crucial issue for full Bayesian inference, in particular when it is based on Markov chain Monte Carlo simulations. We establish theoretical results providing sufficient (and sometimes necessary) conditions for propriety and provide empirical evidence through several accompanying simulation studies.
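
To make the propriety issue concrete, here is a minimal sketch of the prior structure this literature typically works with (the notation is ours, not copied from the paper): a Gaussian prior whose precision matrix K is rank-deficient, combined with a possibly improper variance hyperprior.

```latex
% Partially improper Gaussian prior for regression coefficients \beta,
% with penalty/precision matrix K of deficient rank (P-splines, MRFs, ...):
p(\beta \mid \tau^2) \;\propto\; (\tau^2)^{-\operatorname{rk}(K)/2}
  \exp\!\Big( -\tfrac{1}{2\tau^2}\, \beta^\top K \beta \Big),
\qquad \operatorname{rk}(K) < \dim(\beta),
% so the prior is flat (improper) on the null space of K. A common improper
% hyperprior for the variance (inverse smoothing parameter) is Jeffreys' prior:
p(\tau^2) \;\propto\; 1/\tau^2 .
```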

2.
Model selection is an important component of spatial econometric modelling and a key step in the empirical analysis of spatial econometric models. This paper gives a detailed theoretical treatment of the main selection tools: the Moran's I test, LM tests, the likelihood function, the three major information criteria, Bayesian posterior probabilities, and the Markov chain Monte Carlo (MCMC) method. Building on this, simulation experiments programmed in Matlab show that, when selecting within an extended family of spatial econometric models, both the Moran's I test based on OLS residuals and the LM tests suffer from serious limitations; the maximum log-likelihood criterion lacks discriminating power; the LM tests are effective only for distinguishing the SEM from the SAR model; and the information criteria work for most models but can still pick the wrong one. By contrast, given a suitable M-H algorithm, the MCMC method, which exploits both the likelihood function and prior information, achieves higher selection validity: in larger samples it identified the correct model without error, and it also proved very effective for choosing among spatial econometric models with spatial adjacency matrices of different orders.
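
As a rough illustration of the first diagnostic mentioned in this abstract, here is a minimal numpy sketch of Moran's I computed on OLS residuals; the toy data, the chain-neighbour weight matrix and all variable names are our own assumptions, not taken from the paper.

```python
import numpy as np

def morans_i(e, W):
    """Moran's I for a residual vector e and spatial weight matrix W:
    I = (n / S0) * (e' W e) / (e' e), where S0 is the sum of all weights.
    Values well away from its (slightly negative) null expectation
    suggest spatial dependence in the residuals."""
    n, s0 = len(e), W.sum()
    return (n / s0) * (e @ W @ e) / (e @ e)

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # OLS residuals

W = np.zeros((n, n))                                # chain-neighbour weights
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
W /= W.sum(axis=1, keepdims=True)                   # row-standardise

print(morans_i(e, W))
```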

3.
Summary.  The structural theoretical framework for the analysis of duration of unemployment has been the optimal job search model. Recent advances in computational techniques in Bayesian inference now facilitate the analysis of incomplete data sets and the recovery of structural model parameters. The paper uses these methods on a UK data set of the long-term unemployed to illustrate how the optimal job search model can be adapted to model the effects of an active labour market policy. Without such an adaptation our conclusion is that the simple optimal job search model may not fit empirical unemployment data and could thus lead to a misspecified econometric model and incorrect parameter estimates.
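
For readers unfamiliar with the structural framework, the stationary optimal job search model is usually summarised by a reservation-wage condition; a textbook form (our notation, not necessarily the exact parameterisation used in the paper) is:

```latex
% Reservation wage w^* with unemployment income b, discount rate r,
% offer arrival rate \lambda and wage-offer distribution F:
w^* = b + \frac{\lambda}{r} \int_{w^*}^{\infty} (w - w^*)\, dF(w),
% which implies an exit rate from unemployment (re-employment hazard) of
h = \lambda\, [\, 1 - F(w^*) \,].
```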

4.
Many of the recently developed alternative econometric approaches to the construction and estimation of life-cycle consistent models using individual data can be viewed as alternative choices for conditioning variables that summarise past decisions and future anticipations. By ingenious choice of this conditioning variable and by exploitation of the duality relationships between the alternative specifications, many currently available micro-data sets can be used for the estimation of life-cycle consistent models. In reviewing the alternative approaches their stochastic properties and implicit preference restrictions are highlighted. Indeed, empirical specifications that are parameterised in a form of direct theoretical interest can often be shown to be unnecessarily restrictive while dual representations may provide more flexible econometric models. These results indicate the particular advantages of different types of data in retrieving life-cycle consistent preference parameters and the appropriate, most flexible, econometric approach for each type of data. A methodology for relaxing the intertemporal separability assumption is developed and the advantages and disadvantages of alternative approaches in this framework are considered.

5.
The unit root problem plays a central role in empirical applications in the time series econometric literature. However, significance tests developed in the frequentist tradition present various conceptual problems that jeopardize their power, especially in small samples. Bayesian alternatives, although precisely defined and admitting interesting interpretations, run into difficulty because the hypothesis of interest in this case is sharp or precise. The Bayesian significance test used in this article for the unit root hypothesis is based solely on the posterior density function, without the need to impose positive probability on sets of zero Lebesgue measure. Furthermore, it is conducted under strict observance of the likelihood principle. It was designed mainly for testing sharp null hypotheses and is called the FBST, for Full Bayesian Significance Test.
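
For reference, the FBST evidence value is computed from the posterior density alone; in the standard formulation (due to Pereira and Stern), with posterior p(θ | x) and sharp null set Θ₀:

```latex
% Tangential set: points with posterior density above the supremum attained
% on the sharp null hypothesis \Theta_0:
T = \big\{ \theta \in \Theta : p(\theta \mid x) >
      \sup_{\theta_0 \in \Theta_0} p(\theta_0 \mid x) \big\},
% Evidence in favour of the null: the posterior mass not in T,
\operatorname{ev}(\Theta_0) = 1 - \Pr(\theta \in T \mid x).
```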

6.
Reply     
Many of the recently developed alternative econometric approaches to the construction and estimation of life-cycle consistent models using individual data can be viewed as alternative choices for conditioning variables that summarise past decisions and future anticipations. By ingenious choice of this conditioning variable and by exploitation of the duality relationships between the alternative specifications, many currently available micro-data sets can be used for the estimation of life-cycle consistent models. In reviewing the alternative approaches their stochastic properties and implicit preference restrictions are highlighted. Indeed, empirical specifications that are parameterised in a form of direct theoretical interest can often be shown to be unnecessarily restrictive while dual representations may provide more flexible econometric models. These results indicate the particular advantages of different types of data in retrieving life-cycle consistent preference parameters and the appropriate, most flexible, econometric approach for each type of data. A methodology for relaxing the intertemporal separability assumption is developed and the advantages and disadvantages of alternative approaches in this framework are considered.

7.
Many of the recently developed alternative econometric approaches to the construction and estimation of life-cycle consistent models using individual data can be viewed as alternative choices for conditioning variables that summarise past decisions and future anticipations. By ingenious choice of this conditioning variable and by exploitation of the duality relationships between the alternative specifications, many currently available micro-data sets can be used for the estimation of life-cycle consistent models. In reviewing the alternative approaches their stochastic properties and implicit preference restrictions are highlighted. Indeed, empirical specifications that are parameterised in a form of direct theoretical interest can often be shown to be unnecessarily restrictive while dual representations may provide more flexible econometric models. These results indicate the particular advantages of different types of data in retrieving life-cycle consistent preference parameters and the appropriate, most flexible, econometric approach for each type of data. A methodology for relaxing the intertemporal separability assumption is developed and the advantages and disadvantages of alternative approaches in this framework are considered.

8.
We present a new statistical framework for landmark curve-based image registration and surface reconstruction. The proposed method first elastically aligns geometric features (continuous, parameterized curves) to compute local deformations, and then uses a Gaussian random field model to estimate the full deformation vector field as a spatial stochastic process on the entire surface or image domain. The statistical estimation is performed using two different methods: maximum likelihood and Bayesian inference via Markov chain Monte Carlo sampling. The resulting deformations accurately match corresponding curve regions while also being sufficiently smooth over the entire domain. We present several qualitative and quantitative evaluations of the proposed method on both synthetic and real data. We apply our approach to two different tasks on real data: (1) multimodal medical image registration, and (2) anatomical and pottery surface reconstruction.
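
A drastically simplified sketch of the second stage described above — interpolating sparse landmark-curve displacements into a dense deformation field with a Gaussian-process posterior mean. The squared-exponential kernel, the jitter term and all names are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def se_kernel(A, B, length=0.2, sig=1.0):
    """Squared-exponential covariance between point sets A (m, 2) and B (n, 2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sig**2 * np.exp(-0.5 * d2 / length**2)

rng = np.random.default_rng(1)
P = rng.uniform(0, 1, size=(15, 2))          # landmark curve points
U = 0.05 * rng.normal(size=(15, 2))          # matched displacements (dx, dy)

# Dense grid on which to predict the deformation field.
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
G = np.column_stack([gx.ravel(), gy.ravel()])

# GP posterior mean, applied independently to each displacement component;
# the result matches U at the landmarks and decays smoothly away from them.
K = se_kernel(P, P) + 1e-6 * np.eye(len(P))  # jitter for numerical stability
field = se_kernel(G, P) @ np.linalg.solve(K, U)
print(field.shape)                           # (2500, 2) dense deformation field
```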

9.
Abstract.  Mixed model based approaches for semiparametric regression have gained much interest in recent years, both in theory and application. They provide a unified and modular framework for penalized likelihood and closely related empirical Bayes inference. In this article, we develop mixed model methodology for a broad class of Cox-type hazard regression models where the usual linear predictor is generalized to a geoadditive predictor incorporating non-parametric terms for the (log-)baseline hazard rate, time-varying coefficients and non-linear effects of continuous covariates, a spatial component, and additional cluster-specific frailties. Non-linear and time-varying effects are modelled through penalized splines, while spatial components are treated as correlated random effects following either a Markov random field or a stationary Gaussian random field prior. Generalizing existing mixed model methodology, inference is derived using penalized likelihood for regression coefficients and (approximate) marginal likelihood for smoothing parameters. In a simulation we study the performance of the proposed method, in particular comparing it with its fully Bayesian counterpart using Markov chain Monte Carlo methodology, and complement the results by some asymptotic considerations. As an application, we analyse leukaemia survival data from northwest England.
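
Schematically, the geoadditive hazard model described here replaces the Cox linear predictor with an extended additive predictor of roughly the following form (generic notation of ours):

```latex
% Hazard for subject i with covariates x_{i1},...,x_{ip}, a covariate z_i with
% time-varying effect, spatial location s_i and cluster frailty b_{c(i)}:
\lambda_i(t) = \exp\Big\{ g_0(t) + z_i\, g_1(t)
  + \sum_{j=1}^{p} f_j(x_{ij}) + f_{\mathrm{spat}}(s_i)
  + v_i^\top \gamma + b_{c(i)} \Big\},
% with g_0(t) the log-baseline hazard, penalized-spline priors on g_0, g_1
% and the f_j, and a Markov/Gaussian random field prior on f_spat.
```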

10.
A method for combining forecasts may or may not account for dependence and differing precision among forecasts. In this article we test a variety of such methods in the context of combining forecasts of GNP from four major econometric models. The methods include one in which forecasting errors are jointly normally distributed and several variants of this model as well as some simpler procedures and a Bayesian approach with a prior distribution based on exchangeability of forecasters. The results indicate that a simple average, the normal model with an independence assumption, and the Bayesian model perform better than the other approaches that are studied here.
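
For intuition, a normal model that accounts for dependence and differing precision leads to covariance-based combination weights of the familiar Bates-Granger form; the numpy sketch below, with toy data and our own variable names, contrasts them with the simple average.

```python
import numpy as np

def combination_weights(errors):
    """Minimum-variance weights w = S^{-1} 1 / (1' S^{-1} 1), where S is the
    sample covariance of past forecast errors (rows = periods)."""
    S = np.cov(errors, rowvar=False)
    w = np.linalg.solve(S, np.ones(S.shape[0]))
    return w / w.sum()

rng = np.random.default_rng(2)
T, k = 40, 4                                   # 40 periods, 4 forecasters
common = rng.normal(size=(T, 1))               # shared (dependent) error part
errors = 0.8 * common + rng.normal(size=(T, k)) * np.array([0.5, 1.0, 1.5, 2.0])

print("covariance-based weights:", np.round(combination_weights(errors), 3))
print("simple average          :", np.full(k, 1 / k))
```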

11.
Necessary and sufficient conditions for weak and strong convergence are derived for the weighted version of a general process under random censoring. To be more explicit, this means that for this process complete analogues are obtained of the Chibisov-O'Reilly theorem, the Lai-Wellner Glivenko-Cantelli theorem, and the James law of the iterated logarithm for the empirical process. The process contains as special cases the so-called basic martingale, the empirical cumulative hazard process, and the product-limit process. As a tool we derive a Kiefer-process-type approximation of our process, which may be of independent interest.

12.
A random field displays long (resp. short) memory when its covariance function is absolutely non-summable (resp. summable), or alternatively when its spectral density (spectrum) is unbounded (resp. bounded) at some frequencies. Drawing on the spectrum approach, this paper characterizes both short and long memory features in the spatial autoregressive model. The data generating process is presented as a sequence of spatial autoregressive micro-relationships. The study elaborates the exact conditions under which short and long memory emerge for the micro-relationships and for the aggregated field as well. To study the spectrum of the aggregated field, we develop a new general concept referred to as the ‘root order of a function’. This concept might be usefully applied in studying the convergence of some special integrals. We illustrate our findings with simulation experiments and an empirical application based on Gross Domestic Product data for 100 countries spanning 1960–2004.

13.
Bayesian methods are often used to reduce sample sizes and/or increase the power of clinical trials. The right choice of prior distribution is a critical step in Bayesian modeling. If the prior is not completely specified, historical data may be used to estimate it. In an empirical Bayesian analysis, the resulting prior can be used to produce the posterior distribution. In this paper, we describe a Bayesian Poisson model with a conjugate Gamma prior. The parameters of the Gamma distribution are estimated in the empirical Bayesian framework under two estimation schemes. Because a straightforward numerical search for the maximum likelihood (ML) solution using the marginal negative binomial distribution is occasionally infeasible, we propose a simplification of the maximization procedure. The Markov chain Monte Carlo method is used to create a set of Poisson parameters from the historical count data. These Poisson parameters are used to uniquely define the Gamma likelihood function. Easily computable approximation formulae may then be used to find the ML estimates of the parameters of the Gamma distribution. For the sample size calculations, the ML solution is replaced by its upper confidence limit to reflect the incomplete exchangeability of historical trials with the current study. Exchangeability is measured by the confidence interval for the historical event rate. With this prior, the formula for the sample size calculation is completely defined.
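
The conjugate structure makes the final updating step trivial. Below is a hedged sketch of the empirical Bayes pipeline with a simple moment-based estimate of the Gamma hyperparameters standing in for the paper's approximation formulae; the data and names are invented for illustration.

```python
import numpy as np

def fit_gamma_moments(rates):
    """Moment-matching estimate of a Gamma(shape a, rate b) prior from a
    sample of plausible Poisson rates (e.g. one per historical trial)."""
    m, v = rates.mean(), rates.var(ddof=1)
    b = m / v                                  # rate parameter
    return m * b, b                            # (shape, rate)

# Historical data: event counts y over exposures t in five past trials.
y = np.array([12, 8, 15, 10, 9], dtype=float)
t = np.array([100, 90, 120, 95, 100], dtype=float)

a0, b0 = fit_gamma_moments(y / t)              # empirical Bayes Gamma prior
# Conjugacy: a Poisson(lambda * t) likelihood with a Gamma(a, b) prior on
# lambda yields a Gamma(a + y, b + t) posterior.
y_new, t_new = 11.0, 105.0
a1, b1 = a0 + y_new, b0 + t_new
print("prior mean rate    :", a0 / b0)
print("posterior mean rate:", a1 / b1)
```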

14.
Shen Lisheng. Statistical Research (《统计研究》), 2008, 25(1): 21-24
Abstract: Year-on-year and period-on-period (chain) price indices differ in important ways: a year-on-year index measures price change between years, whereas a chain index measures price change between successive months (or quarters). Empirical tests show that year-on-year and chain price index series have different orders of integration, so the two cannot substitute for each other. When building monthly (or quarterly) econometric models from several macroeconomic variables, attention must be paid not only to the order of integration of each series but also to the mutual consistency of the series.
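
The two indices are linked by a simple identity (stated here in gross-rate form; this derivation is ours, added for clarity) that helps explain why the two series behave so differently:

```latex
% With monthly data, the year-on-year index is the product of twelve
% successive chain (month-on-month) indices:
I^{\text{yoy}}_t = \frac{P_t}{P_{t-12}}
  = \prod_{j=0}^{11} \frac{P_{t-j}}{P_{t-j-1}}
  = \prod_{j=0}^{11} I^{\text{mom}}_{t-j}
\;\Longrightarrow\;
\log I^{\text{yoy}}_t = \sum_{j=0}^{11} \log I^{\text{mom}}_{t-j},
% i.e. the log year-on-year series is a 12-term moving sum of the log chain
% series; the extra persistence this induces is consistent with the two
% series exhibiting different orders of integration in the tests above.
```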

15.
The authors consider the problem of searching for activation in brain images obtained from functional magnetic resonance imaging and the corresponding functional signal detection problem. They develop a Bayesian procedure to detect signals existing within noisy images when the image is modeled as a scale space random field. Their procedure is based on the Radon-Nikodym derivative, which is used as the Bayes factor for assessing the point null hypothesis of no signal. They apply their method to data from the Montreal Neurological Institute.

16.
Rethinking the Selection of Spatial Regression Models
Spatial econometrics has two basic models, the spatial lag model and the spatial error model, and this paper reconsiders how to choose between these two spatial regression models. Its conclusions are as follows. Moran's I can be used to judge whether the residuals of a fitted regression model exhibit spatial dependence. In empirical work, the most common practice is to use Lagrange multiplier (LM) tests to decide between the two models; however, these tests rest on statistical inference alone and ignore the underlying theory, so they may select the wrong model. In applied work the spatial error model is often overlooked, even though it is more widely applicable than the spatial lag model. Most empirical studies also neglect the specification of the spatial regression model; Anselin proposed three test statistics and noted that, if the model is correctly specified, they should obey the ordering Wald statistic > log-likelihood statistic > LM statistic.
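
For reference, the two competing specifications discussed in this abstract are, in standard notation:

```latex
% Spatial lag (SAR) model: dependence enters through the outcome itself,
y = \rho\, W y + X\beta + \varepsilon,
% Spatial error (SEM) model: dependence enters through the disturbance,
y = X\beta + u, \qquad u = \lambda\, W u + \varepsilon,
% where W is the spatial weight matrix and \varepsilon is i.i.d. noise.
```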

17.
Econometric Reviews, 2007, 26(2): 173-185
Sungbae An and Frank Schorfheide have provided an excellent review of the main elements of Bayesian inference in Dynamic Stochastic General Equilibrium (DSGE) models. Bayesian methods have, for reasons clearly outlined in the paper, a very natural role to play in DSGE analysis, and the appeal of the Bayesian paradigm is indeed strongly evidenced by the flood of empirical applications in the area over the last couple of years. We expect their paper to be the natural starting point for applied economists interested in learning about Bayesian techniques for analyzing DSGE models, and as such the paper is likely to have a strong influence on what will be considered best practice for estimating DSGE models.

The authors have, for good reasons, chosen a stylized six-equation model to present the methodology. We shall use here the large-scale model in Adolfson et al. (2005), henceforth ALLV, to illustrate a few econometric problems which we have found to be especially important as the size of the model increases. The model in ALLV is an open economy extension of the closed economy model in Christiano et al. (2005). It consists of 25 log-linearized equations, which can be written as a state space representation with 60 state variables, many of them unobserved. Fifteen observed unfiltered time series are used to estimate 51 structural parameters. An additional complication compared to the model in An and Schorfheide's paper is that some of the coefficients in the measurement equation are non-linear functions of the structural parameters. The model is currently the main vehicle for policy analysis at Sveriges Riksbank (Central Bank of Sweden) and similar models are being developed in many other policy institutions, which testifies to the model's practical relevance. The version considered here is estimated on Euro area data over the period 1980Q1-2002Q4. We refer to ALLV for details.
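
The estimation problem sketched in this discussion has the generic linear state-space form used throughout the Bayesian DSGE literature; the schematic below uses our notation, and in ALLV the measurement coefficients are additionally non-linear functions of the structural parameters θ.

```latex
% Transition of the state vector s_t (60 states in ALLV), driven by shocks:
s_t = T(\theta)\, s_{t-1} + R(\theta)\, \varepsilon_t,
\qquad \varepsilon_t \sim N\big(0, Q(\theta)\big),
% Measurement equation linking the observed series y_t (15 in ALLV) to states:
y_t = d(\theta) + Z(\theta)\, s_t + v_t,
% The likelihood is evaluated with the Kalman filter and combined with a
% prior p(\theta) for posterior simulation, e.g. by Metropolis-Hastings.
```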

18.
Markov Random Fields with Higher-order Interactions
Discrete-state Markov random fields on regular arrays have played a significant role in spatial statistics and image analysis. For example, they are used to represent objects against background in computer vision and in pixel-based classification of a region into different crop types in remote sensing. Convenience has generally favoured formulations that involve only pairwise interactions. Such models are in themselves unrealistic and, although they often perform surprisingly well in tasks such as the restoration of degraded images, they are unsatisfactory for many other purposes. In this paper, we consider particular forms of Markov random fields that involve higher-order interactions and therefore are better able to represent the large-scale properties of typical spatial scenes. Interpretations of the parameters are given and realizations from a variety of models are produced via Markov chain Monte Carlo. Potential applications are illustrated in two examples. The first concerns Bayesian image analysis and confirms that pairwise-interaction priors may perform very poorly for image functionals such as number of objects, even when restoration apparently works well. The second example describes a model for a geological dataset and obtains maximum-likelihood parameter estimates using Markov chain Monte Carlo. Despite the complexity of the formulation, realizations of the estimated model suggest that the representation is quite realistic.
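
In Gibbs form, the difference between pairwise and higher-order models is simply the clique size admitted in the energy; schematically (our summary, not a formula from the paper):

```latex
% Hammersley-Clifford: a Markov random field x on a lattice has Gibbs form
p(x) \;\propto\; \exp\Big\{ -\sum_{c \in \mathcal{C}} V_c(x_c) \Big\},
% Pairwise-interaction models restrict the cliques c to singletons and pairs
% (|c| <= 2); higher-order models also allow potentials V_c on larger cliques
% (e.g. 2x2 blocks), which is what captures large-scale scene structure.
```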

19.
Non-parametric Bayesian Estimation of a Spatial Poisson Intensity
A method introduced by Arjas & Gasbarra (1994) and later modified by Arjas & Heikkinen (1997) for the non-parametric Bayesian estimation of an intensity on the real line is generalized to cover spatial processes. The method is based on a model approximation where the approximating intensities have the structure of a piecewise constant function. Random step functions on the plane are generated using Voronoi tessellations of random point patterns. Smoothing between nearby intensity values is applied by means of a Markov random field prior in the spirit of Bayesian image analysis. The performance of the method is illustrated in examples with both real and simulated data.
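
The core representational idea — an intensity that is piecewise constant on the Voronoi cells of a random point pattern — reduces to a nearest-generator lookup, since a Voronoi cell is exactly the region closer to its generator than to any other point. The sketch below shows only this representation (with invented toy values), not the posterior sampler:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

gens = rng.uniform(0, 1, size=(8, 2))            # random generating points
levels = rng.gamma(shape=2.0, scale=50.0, size=len(gens))  # one level per cell

tree = cKDTree(gens)                             # nearest-neighbour index

def intensity(points):
    """Evaluate the step-function intensity at an (n, 2) array of locations:
    each location inherits the level of the Voronoi cell containing it."""
    _, cell = tree.query(points)
    return levels[cell]

# Evaluate on a regular grid over the unit-square observation window.
gx, gy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
grid = np.column_stack([gx.ravel(), gy.ravel()])
print(intensity(grid).reshape(100, 100).mean())  # average intensity
```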

20.
The paper describes Bayesian analysis for agricultural field experiments, a topic that has received very little previous attention, despite a vast frequentist literature. Adoption of the Bayesian paradigm simplifies the interpretation of the results, especially in ranking and selection. Also, complex formulations can be analysed with comparative ease, by using Markov chain Monte Carlo methods. A key ingredient in the approach is the need for spatial representations of the unobserved fertility patterns. This is discussed in detail. Problems caused by outliers and by jumps in fertility are tackled via hierarchical t formulations that may find use in other contexts. The paper includes three analyses of variety trials for yield and one example involving binary data; none is entirely straightforward. Some comparisons with frequentist analyses are made.
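
A schematic version of the kind of hierarchical model the paper analyses (our generic notation; the paper's hierarchical t formulations replace Gaussian components like those below to guard against outliers and fertility jumps):

```latex
% Yield of plot i receiving variety v(i), with unobserved fertility \phi_i:
y_i = \mu + \tau_{v(i)} + \phi_i + e_i,
% fertility: a spatial smoothness (e.g. first-difference / random-walk) prior,
\phi_i - \phi_j \sim N(0, \sigma_\phi^2) \quad \text{for neighbouring plots } i \sim j,
% errors: a heavy-tailed hierarchical-t alternative to Gaussian noise,
e_i \sim t_\nu(0, \sigma^2).
```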
