Similar Documents
20 similar documents retrieved.
1.
Econometric Reviews, 2007, 26(2): 173–185
Sungbae An and Frank Schorfheide have provided an excellent review of the main elements of Bayesian inference in Dynamic Stochastic General Equilibrium (DSGE) models. Bayesian methods have, for reasons clearly outlined in the paper, a very natural role to play in DSGE analysis, and the appeal of the Bayesian paradigm is indeed strongly evidenced by the flood of empirical applications in the area over the last couple of years. We expect their paper to be the natural starting point for applied economists interested in learning about Bayesian techniques for analyzing DSGE models, and as such the paper is likely to have a strong influence on what will be considered best practice for estimating DSGE models.

The authors have, for good reasons, chosen a stylized six-equation model to present the methodology. We shall use here the large-scale model in Adolfson et al. (2005), henceforth ALLV, to illustrate a few econometric problems which we have found to be especially important as the size of the model increases. The model in ALLV is an open economy extension of the closed economy model in Christiano et al. (2005). It consists of 25 log-linearized equations, which can be written as a state space representation with 60 state variables, many of them unobserved. Fifteen observed unfiltered time series are used to estimate 51 structural parameters. An additional complication compared to the model in An and Schorfheide's paper is that some of the coefficients in the measurement equation are non-linear functions of the structural parameters. The model is currently the main vehicle for policy analysis at Sveriges Riksbank (Central Bank of Sweden) and similar models are being developed in many other policy institutions, which testifies to the model's practical relevance. The version considered here is estimated on Euro area data over the period 1980Q1–2002Q4. We refer to ALLV for details.

2.
Response surface methodology aims at finding the combination of factor levels which optimizes a response variable. A second order polynomial model is typically employed to make inference on the stationary point of the true response function. A suitable reparametrization of the polynomial model, where the coordinates of the stationary point appear as the parameter of interest, is used to derive unconstrained confidence regions for the stationary point. These regions are based on the asymptotic normal approximation to the sampling distribution of the maximum likelihood estimator of the stationary point. A simulation study is performed to evaluate the coverage probabilities of the proposed confidence regions. Some comparisons with the standard confidence regions due to Box and Hunter are also shown.
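To fix ideas, here is a minimal sketch of the second-order machinery this abstract builds on: fit a full quadratic model y = b0 + b'x + x'Bx and solve for the stationary point ξ = −(1/2)B⁻¹b. The data-generating surface and all numerical values are hypothetical, and this plain least-squares version is only the starting point for the reparametrized confidence regions the paper derives.

```python
# Minimal sketch: fit a second-order response surface in two factors and
# locate its stationary point xi = -B^{-1} b / 2 (illustrative data only).
import numpy as np

rng = np.random.default_rng(1)
n = 50
X = rng.uniform(-2, 2, size=(n, 2))
x1, x2 = X[:, 0], X[:, 1]
# Hypothetical true surface with stationary point at (1, -0.5), plus noise.
y = (5 - (x1 - 1) ** 2 - 2 * (x2 + 0.5) ** 2
     + 0.5 * (x1 - 1) * (x2 + 0.5) + rng.normal(scale=0.1, size=n))

# Design matrix of the full quadratic model.
D = np.column_stack([np.ones(n), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
b = beta[1:3]                            # first-order coefficients
B = np.array([[beta[3], beta[5] / 2],    # symmetric second-order matrix
              [beta[5] / 2, beta[4]]])
xi = -0.5 * np.linalg.solve(B, b)        # estimated stationary point
print("estimated stationary point:", xi)
```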

3.
Recently, Bolfarine et al. [Bimodal symmetric-asymmetric power-normal families. Commun Statist Theory Methods. Forthcoming. doi:10.1080/03610926.2013.765475] introduced a bimodal asymmetric model having the normal and skew normal as special cases. Here, we prove a stochastic representation for their bimodal asymmetric model and use it to generate random numbers from that model. It is shown how the resulting algorithm can be seen as an improvement over the rejection method. We also discuss practical and numerical aspects regarding the estimation of the model parameters by maximum likelihood under simple random sampling. We show that a unique stationary point of the likelihood equations exists except when all observations have the same sign. However, the location-scale extension of the model usually presents two or more roots, and this fact is illustrated here. The standard maximization routines available in the R system (Broyden–Fletcher–Goldfarb–Shanno (BFGS), Trust, Nelder–Mead) were considered in our implementations and exhibited similar performance. We show the usefulness of inspecting profile log-likelihoods as a method to obtain starting values for maximization and illustrate data analysis with the location-scale model in the presence of multiple roots. A simple Bayesian model is discussed in the context of a data set which presents a flat likelihood in the direction of the skewness parameter.

4.
The authors derive empirical likelihood confidence regions for the comparison distribution of two populations whose distributions are to be tested for equality using random samples. Another application they consider is to ROC curves, which are used to compare measurements of a diagnostic test from two populations. The authors investigate the smoothed empirical likelihood method for estimation in this context, and empirical likelihood based confidence intervals are obtained by means of the Wilks theorem. A bootstrap approach allows for the construction of confidence bands. The method is illustrated with data analysis and a simulation study.

5.
This paper gives an exposition of the use of the posterior likelihood ratio for testing point null hypotheses in a fully Bayesian framework. Connections between the frequentist P-value and the posterior distribution of the likelihood ratio are used to interpret and calibrate P-values in a Bayesian context, and examples are given to show the use of simple posterior simulation methods to provide Bayesian tests of common hypotheses.
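As a concrete illustration of posterior simulation for a point null, here is a minimal sketch, assuming a normal mean with known variance and a conjugate normal prior (all values hypothetical, not the paper's examples): it simulates the posterior distribution of the likelihood ratio L(θ0)/L(θ) and reports the posterior probability that the null value is better supported than a posterior draw.

```python
# Minimal sketch: posterior distribution of the likelihood ratio for
# H0: theta = theta0 in a normal-mean model (hypothetical data and prior).
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(loc=0.4, scale=1.0, size=30)   # data; true mean 0.4
n, sigma2 = len(y), 1.0
theta0 = 0.0                                  # point null value
m0, v0 = 0.0, 10.0                            # prior mean and variance

# Conjugate normal posterior for the mean.
v_post = 1.0 / (1.0 / v0 + n / sigma2)
m_post = v_post * (m0 / v0 + y.sum() / sigma2)
theta = rng.normal(m_post, np.sqrt(v_post), size=100_000)

def loglik(t):
    # Normalizing constants cancel in the ratio; keep the quadratic term only.
    return -0.5 * np.sum((y[:, None] - t) ** 2, axis=0) / sigma2

log_lr = loglik(np.array([theta0])) - loglik(theta)  # log L(theta0)/L(theta)
print("posterior P(LR > 1) =", np.mean(log_lr > 0))
```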

6.
An empirical likelihood method was proposed by Owen and has been extended to many semiparametric and nonparametric models with a continuous response variable. However, less attention has been focused on the generalized regression model. This article systematically studies two adjusted empirical-likelihood-based methods in generalized varying-coefficient partially linear models. Based on the popular profile likelihood estimation procedure, a new adjusted empirical likelihood method for the parameter is established, and the resulting statistics are shown to be asymptotically standard chi-square distributed. Further, adjusted empirical-likelihood-based confidence regions are established, and efficient adjusted profile empirical-likelihood-based confidence intervals/regions for any components of the parameter that are of primary interest are also constructed. Their asymptotic properties are also derived. Some numerical studies are carried out to illustrate the performance of the proposed inference procedures.

7.
This paper investigates the estimation of regression parameters and the response mean in nonlinear regression models with missing responses, where the missingness probabilities depend on covariates. We propose four empirical likelihood (EL)-based estimators for the regression parameters and the response mean. The resulting estimators are shown to be consistent and asymptotically normal under some general assumptions. To construct confidence regions for the regression parameters as well as the response mean, we develop four EL ratio statistics, which are proven to be asymptotically χ2 distributed. Simulation studies and an artificial data set are used to illustrate the proposed methodologies. Empirical results show that the EL method behaves better than the normal approximation method and that the coverage probabilities and average lengths depend on the selection probability function.
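For readers new to the chi-square calibration underlying such EL ratio statistics, here is a generic one-sample sketch, deliberately simpler than the paper's missing-data setting and using hypothetical data: Owen's empirical likelihood ratio for a mean, inverted into a confidence interval via the χ2(1) quantile.

```python
# Minimal sketch: Owen's empirical likelihood for a mean, with the Wilks-type
# chi-square calibration (hypothetical data; not the paper's estimators).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=80)

def neg2_log_elr(mu):
    z = x - mu
    # The Lagrange multiplier solves sum z_i / (1 + lam * z_i) = 0, with lam
    # bracketed so that all EL weights stay positive.
    lo = -1.0 / z.max() + 1e-8
    hi = -1.0 / z.min() - 1e-8
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

mu_grid = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 201)
stat = np.array([neg2_log_elr(m) for m in mu_grid])
ci = mu_grid[stat <= chi2.ppf(0.95, df=1)]    # invert the EL ratio test
print(f"95% EL confidence interval ~ ({ci.min():.3f}, {ci.max():.3f})")
```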

8.
The authors show how saddlepoint techniques lead to highly accurate approximations for Bayesian predictive densities and cumulative distribution functions in stochastic model settings where the prior is tractable, but not necessarily the likelihood or the predictand distribution. They consider more specifically models involving predictions associated with waiting times for semi-Markov processes whose distributions are indexed by an unknown parameter θ. Bayesian prediction for such processes when they are not stationary is also addressed, and the inverse-Gaussian based saddlepoint approximation of Wood, Booth & Butler (1993) is shown to deal accurately with the nonstationarity, whereas the normal-based Lugannani & Rice (1980) approximation cannot. Their methods are illustrated by predicting various waiting times associated with M/M/q and M/G/1 queues. They also discuss modifications to the matrix renewal theory needed for computing the moment generating functions that are used in the saddlepoint methods.
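For orientation, the normal-based Lugannani & Rice (1980) formula approximates a tail area from the cumulant generating function K as F(x) ≈ Φ(w) + φ(w)(1/w − 1/u), where ŝ solves K′(ŝ) = x, w = sign(ŝ)√(2(ŝx − K(ŝ))) and u = ŝ√(K″(ŝ)). Here is a minimal sketch on an exponential waiting time, where the exact CDF is available for comparison (an illustration only, not the paper's semi-Markov or predictive setting).

```python
# Minimal sketch: Lugannani & Rice (1980) saddlepoint CDF approximation for an
# Exp(lam) waiting time, checked against the exact CDF (illustrative only).
import numpy as np
from scipy.stats import norm

lam = 1.5                                 # hypothetical rate

def K(s):                                 # cumulant generating function, s < lam
    return -np.log1p(-s / lam)

def lr_cdf(x):                            # valid away from the mean 1/lam
    s_hat = lam - 1.0 / x                 # solves K'(s) = 1/(lam - s) = x
    w = np.sign(s_hat) * np.sqrt(2.0 * (s_hat * x - K(s_hat)))
    u = s_hat * np.sqrt(1.0 / (lam - s_hat) ** 2)   # s_hat * sqrt(K''(s_hat))
    return norm.cdf(w) + norm.pdf(w) * (1.0 / w - 1.0 / u)

for x in (0.25, 1.0, 3.0):
    print(f"x={x:4.2f}  saddlepoint={lr_cdf(x):.5f}  exact={1 - np.exp(-lam * x):.5f}")
```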

9.
We present a scalable Bayesian modelling approach for identifying brain regions that respond to a certain stimulus and use them to classify subjects. More specifically, we deal with multi-subject electroencephalography (EEG) data with a binary response distinguishing between alcoholic and control groups. The covariates are matrix-variate, with measurements taken from each subject at different locations across multiple time points. EEG data have a complex structure with both spatial and temporal attributes. We use a divide-and-conquer strategy and build separate local models, that is, one model at each time point. We employ Bayesian variable selection approaches using a structured continuous spike-and-slab prior to identify the locations that respond to a certain stimulus. We incorporate the spatio-temporal structure through a Kronecker product of the spatial and temporal correlation matrices. We develop a highly scalable estimation algorithm, using likelihood approximation, to deal with the large number of parameters in the model. Variable selection is done via clustering of the locations based on their duration of activation. We use scoring rules to evaluate the prediction performance. Simulation studies demonstrate the efficiency of our scalable algorithm in terms of estimation and fast computation. We present results using our scalable approach on a case study of multi-subject EEG data.
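The Kronecker device mentioned above is easy to make concrete. A minimal sketch, assuming (purely for illustration) AR(1)-type spatial and temporal correlation matrices, which are not necessarily the forms used in the paper:

```python
# Minimal sketch: spatio-temporal covariance as a Kronecker product
# Sigma = Sigma_t (x) Sigma_s for matrix-variate noise (illustrative forms).
import numpy as np

def ar1_corr(n, rho):
    """AR(1)-type correlation matrix; a hypothetical, convenient choice."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

n_loc, n_time = 4, 3
Sigma_s = ar1_corr(n_loc, 0.6)       # spatial correlation (assumed form)
Sigma_t = ar1_corr(n_time, 0.8)      # temporal correlation (assumed form)
Sigma = np.kron(Sigma_t, Sigma_s)    # (n_loc*n_time) x (n_loc*n_time)

# One subject's matrix-variate noise with this covariance: the vector stacks
# the locations within each time point, so reshape to time x location.
rng = np.random.default_rng(4)
e = rng.multivariate_normal(np.zeros(n_loc * n_time), Sigma)
E = e.reshape(n_time, n_loc).T       # locations x time points
print(E.shape)                        # (4, 3)
```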

10.
Two methods for testing the equality of variances in straight-line regression with a change point are considered. One is a likelihood ratio test and the other is a Bayesian confidence interval based on the highest posterior density for the ratio of variances, using non-informative priors. The methods are applied to the renal transplant data analyzed by Smith and Cook (1980) and Stephens (1994).
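To see what is being tested, here is a minimal frequentist sketch with simulated data and the change point taken as known (the cited analyses concern the actual renal transplant data): fit a straight line on each segment and compare the residual variances with an F ratio, a simple stand-in for the likelihood ratio statistic.

```python
# Minimal sketch: compare residual variances of two regression segments around
# a *known* change point via an F ratio (simulated, illustrative data).
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(5)
t = np.arange(40, dtype=float)
tau = 20                                       # change point, assumed known
y = np.where(t < tau, 1 + 0.5 * t, 11 - 0.2 * (t - tau))
y = y + np.where(t < tau, rng.normal(0, 1, 40), rng.normal(0, 2, 40))

def seg_rss(tt, yy):
    X = np.column_stack([np.ones_like(tt), tt])
    beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
    r = yy - X @ beta
    return r @ r, len(yy) - 2                  # RSS and residual df

rss1, df1 = seg_rss(t[:tau], y[:tau])
rss2, df2 = seg_rss(t[tau:], y[tau:])
F = (rss1 / df1) / (rss2 / df2)                # ratio of variance estimates
p = 2 * min(f_dist.cdf(F, df1, df2), f_dist.sf(F, df1, df2))
print(f"F = {F:.3f}, two-sided p = {p:.4f}")
```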

11.
Treatment of complex diseases such as cancer, leukaemia, acquired immune deficiency syndrome and depression usually follows complex treatment regimes consisting of time-varying multiple courses of the same or different treatments. The goal is to achieve the largest overall benefit defined by a common end point such as survival. Adaptive treatment strategy refers to a sequence of treatments that are applied at different stages of therapy based on the individual's history of covariates and intermediate responses to the earlier treatments. However, in many cases treatment assignment depends only on intermediate response and prior treatments. Clinical trials are often designed to compare two or more adaptive treatment strategies. A common approach that is used in these trials is sequential randomization. Patients are randomized on entry into available first-stage treatments and then, on the basis of the response to the initial treatments, are randomized to second-stage treatments, and so on. The analysis often ignores this feature of randomization and frequently conducts separate analyses for each stage. Recent literature has suggested several semiparametric and Bayesian methods for inference related to adaptive treatment strategies from sequentially randomized trials. We develop a parametric approach using mixture distributions to model the survival times under different adaptive treatment strategies. We show that the estimators proposed are asymptotically unbiased and can be easily implemented by using existing routines in statistical software packages.

12.
Quasi-life tables, in which the data arise from many concurrent, independent, discrete-time renewal processes, were defined by Baxter (1994, Biometrika 81:567–577), who outlined some methods for estimation. The processes are not observed individually; only the total numbers of renewals at each time point are observed. Crowder and Stephens (2003, Lifetime Data Anal 9:345–355) implemented a formal estimating-equation approach that invokes large-sample theory. However, these asymptotic methods fail to yield sensible estimates for smaller samples. In this paper, we implement a Bayesian analysis based on MCMC computation that works equally well for large and small sample sizes. We give three simulated examples, studying the Bayesian results, the impact of changing prior specification, and empirical properties of the Bayesian estimators of the lifetime distribution parameters. We also study the Baxter (1994, Biometrika 81:567–577) data, and uncover structure that has not been commented upon previously.

13.
The theoretical foundation for a number of model selection criteria is established in the context of inhomogeneous point processes and under various asymptotic settings: infill, increasing domain and combinations of these. For inhomogeneous Poisson processes we consider Akaike's information criterion and the Bayesian information criterion, and in particular we identify the point process analogue of 'sample size' needed for the Bayesian information criterion. Considering general inhomogeneous point processes we derive new composite likelihood and composite Bayesian information criteria for selecting a regression model for the intensity function. The proposed model selection criteria are evaluated using simulations of Poisson processes and cluster point processes.

14.
Standard response surface methodology employs a second order polynomial model to locate the stationary point ξ of the true response function. To make Bayesian analysis more direct and simpler, we refer to an alternative and equivalent parametrization, which contains ξ as the parameter of interest. The marginal reference prior of ξ is derived in its general form, and particular cases are also given in detail, showing the Bayesian role of rotatability.

15.
The subject of this paper is Bayesian inference about the fixed and random effects of a mixed-effects linear statistical model with two variance components. It is assumed that a priori the fixed effects have a noninformative distribution and that the reciprocals of the variance components are distributed independently (of each other and of the fixed effects) as gamma random variables. It is shown that techniques similar to those employed in a ridge analysis of a response surface can be used to construct a one-dimensional curve that contains all of the stationary points of the posterior density of the random effects. The "ridge analysis" (of the posterior density) can be useful (from a computational standpoint) in finding the number and the locations of the stationary points and can be very informative about various features of the posterior density. Depending on what is revealed by the ridge analysis, a multivariate normal or multivariate-t distribution that is centered at a posterior mode may provide a satisfactory approximation to the posterior distribution of the random effects (which is of the poly-t form).

16.
Empirical likelihood for generalized linear models with missing responses
The paper uses the empirical likelihood method to study the construction of confidence intervals and regions for regression coefficients and the response mean in generalized linear models with missing responses. By using the inverse selection probability weighted imputation technique, the proposed empirical likelihood ratios are asymptotically chi-squared. Our approach is to directly calibrate the empirical likelihood ratio, which is called a bias-correction method. Also, a class of estimators for the parameters of interest is constructed, and the asymptotic distributions of the proposed estimators are obtained. A simulation study indicates that the proposed methods are comparable in terms of coverage probabilities and average lengths/areas of confidence intervals/regions. A real data set is used to illustrate our methods.
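The inverse selection probability weighting idea is worth a concrete picture. A minimal sketch with a *known* logistic selection model and simulated data (the paper estimates the selection probabilities and builds the empirical likelihood calibration on top; everything here is hypothetical):

```python
# Minimal sketch: inverse-probability weighting for a response mean under
# covariate-dependent missingness (known selection model, simulated data).
import numpy as np

rng = np.random.default_rng(6)
n = 2000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)     # full responses, E[y] = 1
pi = 1.0 / (1.0 + np.exp(-(0.5 + x)))      # selection probabilities
delta = rng.binomial(1, pi)                # 1 = response observed

naive = y[delta == 1].mean()               # complete-case mean, biased here
ipw = np.mean(delta * y / pi)              # Horvitz-Thompson style estimate
print(f"true mean = 1.0, complete-case = {naive:.3f}, IPW = {ipw:.3f}")
```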

17.
In this article, we investigate various properties and methods of estimation of the Weighted Exponential distribution. Although our main focus is on estimation (from both the frequentist and Bayesian points of view), the stochastic ordering, the Bonferroni and Lorenz curves, various entropies and order statistics are derived for the first time for this distribution. Different types of loss functions are considered for Bayesian estimation. Furthermore, the Bayes estimators and their respective posterior risks are computed and compared using Gibbs sampling. The different reliability characteristics, including the hazard function, stress-strength analysis, and mean residual life function, are also derived. Monte Carlo simulations are performed to compare the performances of the proposed methods of estimation, and two real data sets have been analysed for illustrative purposes.
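As a sketch of the frequentist side, and assuming the weighted exponential takes the Gupta–Kundu form f(x) = ((α+1)/α) λ e^(−λx)(1 − e^(−αλx)) for x > 0 (an assumption on our part; the abstract does not reproduce the density), maximum likelihood can be run numerically. The simulation below uses the fact that this density is exactly that of a sum of independent Exp(λ) and Exp(λ(α+1)) variables.

```python
# Minimal sketch, ASSUMING the Gupta-Kundu weighted exponential density
# f(x) = ((a+1)/a) * lam * exp(-lam*x) * (1 - exp(-a*lam*x)), x > 0.
import numpy as np
from scipy.optimize import minimize

def logpdf(x, a, lam):
    return (np.log((a + 1.0) / a) + np.log(lam) - lam * x
            + np.log1p(-np.exp(-a * lam * x)))

# Simulate via X = U + V with U ~ Exp(lam) and V ~ Exp(lam*(a+1)), which has
# exactly the density above.
rng = np.random.default_rng(7)
a_true, lam_true, n = 2.0, 1.5, 500
x = (rng.exponential(1 / lam_true, n)
     + rng.exponential(1 / (lam_true * (a_true + 1)), n))

def nll(theta):
    a, lam = np.exp(theta)                 # log scale keeps both positive
    return -np.sum(logpdf(x, a, lam))

res = minimize(nll, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("MLE (alpha, lambda):", np.exp(res.x))
```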

18.
This paper investigates several techniques to discriminate between two multivariate stationary signals. The methods considered include Gaussian likelihood ratio tests for variance equality, a chi-squared time-domain test, and a spectral-based test. The latter two tests assess equality of the multivariate autocovariance functions of the two signals over many different lags. The Gaussian likelihood ratio test is perhaps best viewed as principal component analysis (PCA) without the dimension reduction aspects; it can be modified to consider covariance features other than variances via dimension augmentation tactics. The various discrimination methods are discussed first; a simulation study then illuminates their properties and shows how one can reach inappropriate conclusions with PCA tests, even when dimension augmentation techniques are used to incorporate non-zero-lag autocovariances into the analysis. In this pursuit, calculations are needed to identify several multivariate time series models with specific autocovariance properties. To demonstrate the applicability of the methods, nine US and Canadian weather stations from three distinct regions are clustered. Here, the spectral clustering perfectly identified the distinct regions, the chi-squared test performed marginally, and the PCA/likelihood ratio method did not perform well.

19.
Bayesian methods are often used to reduce the sample sizes and/or increase the power of clinical trials. The right choice of the prior distribution is a critical step in Bayesian modeling. If the prior is not completely specified, historical data may be used to estimate it. In the empirical Bayesian analysis, the resulting prior can be used to produce the posterior distribution. In this paper, we describe a Bayesian Poisson model with a conjugate gamma prior. The parameters of the gamma distribution are estimated in the empirical Bayesian framework under two estimation schemes. The straightforward numerical search for the maximum likelihood (ML) solution using the marginal negative binomial distribution is occasionally infeasible, so we propose a simplification of the maximization procedure. The Markov chain Monte Carlo method is used to create a set of Poisson parameters from the historical count data. These Poisson parameters are used to uniquely define the gamma likelihood function. Easily computable approximation formulae may be used to find the ML estimates of the parameters of the gamma distribution. For the sample size calculations, the ML solution is replaced by its upper confidence limit to reflect an incomplete exchangeability of historical trials as opposed to current studies. The exchangeability is measured by the confidence interval for the historical rate of the events. With this prior, the formula for the sample size calculation is completely defined.
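The conjugate machinery behind this is compact. A minimal sketch with hypothetical historical data, using moment matching of the gamma prior as a simple stand-in for the ML/MCMC estimation schemes described above:

```python
# Minimal sketch: gamma prior estimated from historical Poisson rates by
# moment matching, then the closed-form conjugate update (hypothetical data).
import numpy as np

counts = np.array([12, 7, 15, 9, 11], dtype=float)       # historical events
exposure = np.array([100, 80, 120, 90, 110], dtype=float)
rates = counts / exposure

# Match moments of Gamma(a, b): mean a/b, variance a/b^2.
m, v = rates.mean(), rates.var(ddof=1)
b_hat = m / v
a_hat = m * b_hat
print(f"estimated prior: Gamma(a = {a_hat:.2f}, b = {b_hat:.1f})")

# Conjugate update with current-trial data, x events in n units of exposure:
# the posterior is Gamma(a + x, b + n).
x_new, n_new = 10.0, 95.0
a_post, b_post = a_hat + x_new, b_hat + n_new
print(f"posterior mean event rate = {a_post / b_post:.4f}")
```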

20.
This article deals with the issue of using a suitable pseudo-likelihood, instead of an integrated likelihood, when performing Bayesian inference about a scalar parameter of interest in the presence of nuisance parameters. The proposed approach has the advantages of avoiding elicitation of priors on the nuisance parameters and the computation of multidimensional integrals.

We focus on Bayesian inference about a scalar regression coefficient in various regression models. First, in the context of non-normal regression-scale models, we give a theoretical result showing that there is no loss of information about the parameter of interest when using a posterior distribution derived from a pseudo-likelihood instead of the correct posterior distribution. Second, we present nontrivial applications with high-dimensional, or even infinite-dimensional, nuisance parameters in the context of nonlinear normal heteroscedastic regression models, and of models for binary outcomes and count data, accounting also for possible overdispersion. In all these situations, we show that non-Bayesian methods for eliminating nuisance parameters can be usefully incorporated into a one-parameter Bayesian analysis.
