Similar Documents
20 similar documents found.
1.
The empirical best linear unbiased prediction approach is a popular method for the estimation of small area parameters. However, estimating a reliable mean squared prediction error (MSPE) of the empirical best linear unbiased predictor (EBLUP) is a complicated process. In this paper we study the use of resampling methods for MSPE estimation of the EBLUP. A cross-sectional and time-series stationary small area model is used to provide estimates in small areas. Under this model, a parametric bootstrap procedure and a weighted jackknife method are introduced. A Monte Carlo simulation study is conducted in order to compare the performance of different resampling-based measures of uncertainty of the EBLUP with the analytical approximation. Our empirical results show that the proposed resampling-based approaches performed better than the analytical approximation in several situations, although in some cases they tend to underestimate the true MSPE of the EBLUP in a larger number of small areas.
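The parametric bootstrap idea is simple to sketch for the plainer static area-level (Fay-Herriot-type) model; the Python sketch below assumes known sampling variances `psi` and uses a crude truncated moment estimator for the random-effect variance. It is only illustrative: the paper's cross-sectional and time-series model and its weighted jackknife are considerably more involved.

```python
import numpy as np

def fit_fay_herriot(y, X, psi, n_iter=50):
    """Crude iterative moment fit of y_i = x_i'beta + u_i + e_i,
    u_i ~ N(0, s2u), e_i ~ N(0, psi_i), with psi_i known."""
    sigma2_u = np.var(y)                               # rough starting value
    for _ in range(n_iter):
        w = 1.0 / (sigma2_u + psi)                     # inverse marginal variances
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        resid = y - X @ beta
        sigma2_u = max(np.mean(resid**2 - psi), 0.0)   # moment update, truncated at 0
    return beta, sigma2_u

def bootstrap_mspe(y, X, psi, B=500, seed=0):
    """Parametric-bootstrap MSPE estimate of the EBLUP in each small area."""
    rng = np.random.default_rng(seed)
    beta, s2u = fit_fay_herriot(y, X, psi)
    sq_err = np.zeros(len(y))
    for _ in range(B):
        theta = X @ beta + rng.normal(0.0, np.sqrt(s2u), len(y))  # bootstrap "truth"
        y_b = theta + rng.normal(0.0, np.sqrt(psi))               # bootstrap sample
        beta_b, s2u_b = fit_fay_herriot(y_b, X, psi)
        gamma = s2u_b / (s2u_b + psi)                             # shrinkage weights
        eblup = gamma * y_b + (1.0 - gamma) * (X @ beta_b)
        sq_err += (eblup - theta) ** 2
    return sq_err / B                  # per-area Monte Carlo MSPE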

2.
In this article, we provide analytical, simulation, and empirical evidence on a test of equal economic value from competing predictive models of asset returns. We define economic value using the concept of a performance fee: the amount an investor would be willing to pay to have access to an alternative predictive model used to make investment decisions. We establish that this fee can be asymptotically normal under modest assumptions. Monte Carlo evidence shows that our test can be accurately sized in reasonably large samples. We apply the proposed test to predictions of the U.S. equity premium.
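As a rough illustration of the fee concept only: a mean-variance certainty-equivalent version can be computed in a few lines. The quadratic utility form, the risk-aversion value, and the annualization below are assumptions for the sketch, not the paper's exact definition, and the sketch does not attempt the paper's asymptotic test.

```python
import numpy as np

def performance_fee(r_alt, r_bench, gamma=3.0, periods_per_year=12):
    """Per-year fee an investor with mean-variance utility and risk aversion
    gamma would pay to swap the benchmark model's returns for the alternative's."""
    ce_alt = np.mean(r_alt) - 0.5 * gamma * np.var(r_alt)        # certainty equivalents
    ce_bench = np.mean(r_bench) - 0.5 * gamma * np.var(r_bench)
    return periods_per_year * (ce_alt - ce_bench)
```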

3.
This paper analyzes the forecasting performance of an open economy dynamic stochastic general equilibrium (DSGE) model, estimated with Bayesian methods, for the Euro area during 1994Q1–2002Q4. We compare the DSGE model and a few variants of this model to various reduced-form forecasting models such as vector autoregressions (VARs) and vector error correction models (VECM), estimated both by maximum likelihood and two different Bayesian approaches, and traditional benchmark models, e.g., the random walk. The accuracy of point forecasts, interval forecasts and the predictive distribution as a whole are assessed in an out-of-sample rolling event evaluation using several univariate and multivariate measures. The results show that the open economy DSGE model compares well with more empirical models and thus that the tension between rigor and fit in older generations of DSGE models is no longer present. We also critically examine the role of Bayesian model probabilities and other frequently used low-dimensional summaries, e.g., the log determinant statistic, as measures of overall forecasting performance.
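A minimal version of such a rolling point-forecast comparison, pitting a VAR against the random-walk benchmark by RMSE, might look as follows. The window length, lag order, and horizon are placeholder choices, and the DSGE model itself is of course not reproduced here; statsmodels' `VAR` stands in for the reduced-form competitor.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def rolling_rmse(data, p=2, window=80, horizon=1):
    """Rolling out-of-sample RMSEs of a VAR(p) and a random-walk benchmark.
    data: (T, k) array of stationary series; returns per-series RMSEs."""
    errs_var, errs_rw = [], []
    for t in range(window, len(data) - horizon + 1):
        fit = VAR(data[t - window:t]).fit(p)
        fc = fit.forecast(data[t - p:t], steps=horizon)[-1]  # horizon-step-ahead forecast
        actual = data[t + horizon - 1]
        errs_var.append(actual - fc)
        errs_rw.append(actual - data[t - 1])                 # random walk: no change
    return (np.sqrt(np.mean(np.square(errs_var), axis=0)),
            np.sqrt(np.mean(np.square(errs_rw), axis=0)))
```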

4.
Forecasting Performance of an Open Economy DSGE Model
《Econometric Reviews》2007,26(2):289-328

5.
In recent years, there has been considerable interest in regression models based on zero-inflated distributions. These models are commonly encountered in many disciplines, such as medicine, public health, and environmental sciences, among others. The zero-inflated Poisson (ZIP) model has typically been considered for these types of problems. However, the ZIP model can fail if the non-zero counts are overdispersed in relation to the Poisson distribution, hence the zero-inflated negative binomial (ZINB) model may be more appropriate. In this paper, we present a Bayesian approach for fitting the ZINB regression model. This model considers that an observed zero may come from a point mass distribution at zero or from the negative binomial model. The likelihood function is utilized not only to compute some Bayesian model selection measures, but also to develop Bayesian case-deletion influence diagnostics based on q-divergence measures. The approach can be easily implemented using standard Bayesian software, such as WinBUGS. The performance of the proposed method is evaluated with a simulation study. Further, a real data set is analyzed, where we show that the ZINB regression model seems to fit the data better than its Poisson counterpart.
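The ZINB likelihood itself is easy to write down: a point mass at zero mixed with a negative binomial. The sketch below is a frequentist maximum-likelihood stand-in rather than the paper's WinBUGS-based Bayesian fit, with a log link for the NB mean and, for simplicity, a single shared zero-inflation probability instead of a full regression for it.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import nbinom

def zinb_negloglik(params, y, X):
    """Negative log-likelihood of a simplified ZINB regression."""
    p = X.shape[1]
    beta, log_r, logit_pi = params[:p], params[p], params[p + 1]
    mu = np.exp(X @ beta)                      # NB mean, log link
    r = np.exp(log_r)                          # NB dispersion (size) parameter
    pi = expit(logit_pi)                       # probability of a structural zero
    nb_pmf = nbinom.pmf(y, r, r / (r + mu))    # scipy's (n, p) parameterization
    lik = np.where(y == 0, pi + (1 - pi) * nb_pmf, (1 - pi) * nb_pmf)
    return -np.sum(np.log(lik + 1e-300))

# fit by quasi-Newton optimization, given a design matrix X and counts y:
# res = minimize(zinb_negloglik, np.zeros(X.shape[1] + 2), args=(y, X), method="BFGS")
```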

6.
In this paper we obtain several influence measures for the multivariate general linear model through the approach proposed by Muñoz-Pichardo et al. (1995), which is based on the concept of conditional bias. An interesting characteristic of this approach is that it does not require any distributional hypothesis. Applying the obtained results to the multivariate regression model, we recover some measures proposed by other authors. Regarding the results obtained in this paper, we emphasize two aspects. First, they provide a theoretical foundation for measures proposed by other authors for the multivariate regression model. Second, they can be applied to any linear model that can be formulated as a particular case of the multivariate general linear model. In particular, we carry out an application to the multivariate analysis of covariance.

7.
As a result of their good performance in practice and their desirable analytical properties, Gaussian process regression models are of increasing interest in statistics, engineering and other fields. However, two major problems arise when the model is applied to a large data-set with repeated measurements. One stems from the systematic heterogeneity among the different replications, and the other is the requirement to invert a covariance matrix which is involved in the implementation of the model. The dimension of this matrix equals the sample size of the training data-set. In this paper, a Gaussian process mixture model for regression is proposed for dealing with these two problems, and a hybrid Markov chain Monte Carlo (MCMC) algorithm is used for its implementation. An application to a real data-set is reported.
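For context, the costly step the mixture model is designed to sidestep is the covariance solve in standard GP regression. A bare-bones sketch (RBF kernel, fixed hyperparameters, posterior mean only):

```python
import numpy as np

def rbf_kernel(A, B, length=1.0, amp=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return amp * np.exp(-0.5 * d2 / length**2)

def gp_posterior_mean(X, y, X_new, noise=0.1, length=1.0, amp=1.0):
    """Posterior mean of GP regression. The Cholesky factorization below is the
    O(n^3) operation in the training-set size n that the paper's mixture model
    avoids by splitting the data across components."""
    K = rbf_kernel(X, X, length, amp) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return rbf_kernel(X_new, X, length, amp) @ alpha
```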

8.
In this paper we consider a Markovian perfect debugging model in which software failure is caused by two types of faults, one that is easily detected and one that is difficult to detect. When a failure occurs, perfect debugging is immediately performed and, consequently, one fault is removed from the fault content. We also treat the debugging time as a variable in developing the new debugging model. Based on the perfect debugging model, we propose an optimal software release policy that satisfies requirements on both software reliability and the expected number of faults to be removed before the software is released. Several measures, including the distribution of the first passage time to the specified number of removed faults, are also obtained using the proposed debugging model.

9.
Value-at-Risk (VaR) is one of the best known and most heavily used measures of financial risk. In this paper, we introduce a non-iterative semiparametric model for VaR estimation called the single index quantile regression time series (SIQRTS) model. To test its performance, we give an application to four major US market indices: the S&P 500 Index, the Russell 2000 Index, the Dow Jones Industrial Average, and the NASDAQ Composite Index. Our results suggest that this method has good finite-sample performance and often outperforms a number of commonly used methods.
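The single-index model itself is beyond a few lines, but the flavor of quantile-regression-based VaR can be sketched with a plain linear quantile regression on lagged absolute returns, using statsmodels' `QuantReg`. The lag structure here is an arbitrary illustrative choice, not the SIQRTS specification.

```python
import numpy as np
import statsmodels.api as sm

def var_forecast(returns, q=0.05, n_lags=2):
    """One-step-ahead q-quantile of returns from a linear quantile regression
    on lagged absolute returns; VaR is the negative of this quantile."""
    T = len(returns)
    y = returns[n_lags:]
    lags = np.column_stack([np.abs(returns[n_lags - 1 - k: T - 1 - k])
                            for k in range(n_lags)])         # column k = lag k+1
    fit = sm.QuantReg(y, sm.add_constant(lags)).fit(q=q)
    x_next = np.r_[1.0, np.abs(returns[-1: -n_lags - 1: -1])]  # most recent lags first
    return float(fit.params @ x_next)
```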

10.
The purpose of this article is to study Kataoka's safety-first (KSF) model, which is representative of the safety-first models that are among the most popular models in modern portfolio selection. We obtain conditions that guarantee that the KSF model has a finite optimal solution without the normality assumption. When short selling is allowed, we provide an explicit analytical solution of the KSF model in two cases. When short selling is not allowed, we propose an iterative algorithm for finding the optimal portfolios of the KSF model. We also investigate a KSF model with a mean-return constraint and obtain an explicit analytical expression for the optimal portfolio.
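Under normality, the very assumption the paper works to relax, the KSF problem reduces to maximizing the alpha-quantile of portfolio return, which a generic nonlinear optimizer handles directly. A sketch with assumed inputs `mu` (mean returns) and `Sigma` (covariance):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ksf_portfolio(mu, Sigma, alpha=0.05, allow_short=True):
    """Maximize the alpha-quantile mu'w + z_alpha * sigma(w) of portfolio
    return, subject to the budget constraint sum(w) = 1."""
    n = len(mu)
    z = norm.ppf(alpha)                        # negative for small alpha
    def neg_quantile(w):
        return -(mu @ w + z * np.sqrt(w @ Sigma @ w))
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = None if allow_short else [(0.0, None)] * n
    res = minimize(neg_quantile, np.ones(n) / n, bounds=bounds, constraints=cons)
    return res.x
```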

11.
Conditional bias and the asymptotic mean sensitivity curve (AMSC) are useful measures for assessing the possible effect of an observation on an estimator when sampling from a parametric model. In this paper we obtain expressions for these measures in truncated distributions and study their theoretical properties. Specific results are given for the UMVUE of a parametric function. We note that the AMSC of the UMVUE in truncated distributions satisfies some of the most relevant properties obtained in a previous paper for the AMSC of the UMVUE in the NEF-QVF case; the main differences are also established. As for the conditional bias, since it is a finite sample measure, we include some practical examples to illustrate its behaviour as the sample size increases.

12.
This paper develops a bias correction scheme for a multivariate heteroskedastic errors-in-variables model. The applicability of this model is justified in areas such as astrophysics, epidemiology and analytical chemistry, where the variables are subject to measurement errors and the variances vary with the observations. We conduct Monte Carlo simulations to investigate the performance of the corrected estimators. The numerical results show that the bias correction scheme yields nearly unbiased estimates. We also give an application to a real data set.

13.
In this paper, we extend the focused information criterion (FIC) to copula models. Copulas are often used for applications where the joint tail behavior of the variables is of particular interest, and selecting a copula that captures this well is then essential. Traditional model selection methods such as the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) aim at finding the overall best-fitting model, which is not necessarily the one best suited for the application at hand. The FIC, on the other hand, evaluates and ranks candidate models based on the precision of their point estimates of a context-given focus parameter. This could be any quantity of particular interest, for example, the mean, a correlation, conditional probabilities, or measures of tail dependence. We derive FIC formulae for the maximum likelihood estimator, the two-stage maximum likelihood estimator, and the so-called pseudo-maximum-likelihood (PML) estimator combined with parametric margins. Furthermore, we confirm the validity of the AIC formula for the PML estimator combined with parametric margins. To study the numerical behavior of FIC, we have carried out a simulation study, and we have also analyzed a multivariate data set pertaining to abalones. The results from the study show that the FIC successfully ranks candidate models in terms of their performance, defined as how well they estimate the focus parameter. In terms of estimation precision, FIC clearly outperforms AIC, especially when the focus parameter relates to only a specific part of the model, such as the conditional upper-tail probability.

14.
The use of bivariate distributions plays a fundamental role in survival and reliability studies. In this paper, we consider a location-scale model for bivariate survival times in which a copula is proposed to model the dependence of the bivariate survival data. For the proposed model, we consider inferential procedures based on maximum likelihood. Gains in efficiency from bivariate models are also examined in the censored data setting. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed and compared to the performance of the bivariate regression model for matched paired survival data. Sensitivity analysis methods such as local and total influence are presented and derived under three perturbation schemes. The martingale marginal and deviance marginal residual measures are used to check the adequacy of the model. Furthermore, we propose a new measure, which we call the modified deviance component residual. The methodology in the paper is illustrated on a lifetime data set for kidney patients.

15.
The area under the receiver operating characteristic curve (AUC) is the most commonly reported measure of discrimination for prediction models with binary outcomes. However, recently it has been criticized for its inability to increase when important risk factors are added to a baseline model with good discrimination. This has led to the claim that the reliance on the AUC as a measure of discrimination may miss important improvements in clinical performance of risk prediction rules derived from a baseline model. In this paper we investigate this claim by relating the AUC to measures of clinical performance based on sensitivity and specificity under the assumption of multivariate normality. The behavior of the AUC is contrasted with that of the discrimination slope. We show that unless rules with very good specificity are desired, the change in the AUC does an adequate job as a predictor of the change in measures of clinical performance. However, stronger or more numerous predictors are needed to achieve the same increment in the AUC for baseline models with good versus poor discrimination. When excellent specificity is desired, our results suggest that the discrimination slope might be a better measure of model improvement than the AUC. The theoretical results are illustrated using a Framingham Heart Study example of a model for predicting the 10-year incidence of atrial fibrillation.
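To make the two measures concrete, here is a small simulated comparison with a single normal predictor, so that the theoretical binormal AUC is Phi(delta / sqrt(2)). The simulated design and effect size are illustrative assumptions; the Framingham analysis in the paper is naturally much richer.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n, delta = 20000, 0.8
y = rng.integers(0, 2, n)                  # binary outcome
x = rng.normal(delta * y, 1.0)             # unit-variance predictor, mean shift delta
p = LogisticRegression().fit(x[:, None], y).predict_proba(x[:, None])[:, 1]

print("empirical AUC:       ", roc_auc_score(y, p))
print("theoretical AUC:     ", norm.cdf(delta / np.sqrt(2)))
# discrimination slope: mean predicted risk in events minus non-events
print("discrimination slope:", p[y == 1].mean() - p[y == 0].mean())
```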

16.
In this paper, we suggest a technique to quantify model risk, particularly model misspecification, for binary response regression problems found in financial risk management, such as credit risk modelling. We choose the probability of default model as one instance of the many credit risk models that may be misspecified in a financial institution. By way of illustrating model misspecification for the probability of default, we quantify the misspecification of two specific statistical predictive response techniques, namely binary logistic regression and the complementary log–log model. The maximum likelihood estimation technique is employed for parameter estimation. Statistical inference, specifically goodness of fit and model performance measures, is assessed. Using a simulated dataset and the Taiwan credit card default dataset, our findings reveal that with the same sample size and a very small number of simulation iterations, the two techniques produce similar goodness-of-fit results but completely different performance measures. However, when the number of iterations increases, the binary logistic regression technique for the balanced dataset yields clearly better goodness of fit and performance measures than the complementary log–log technique for both the simulated and real datasets.
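Both link functions plug into the same Bernoulli likelihood, so a compact way to compare them is direct maximum likelihood. A sketch, with a design matrix `X` (intercept column included) and a 0/1 default indicator `y` as assumed inputs:

```python
import numpy as np
from scipy.optimize import minimize

def bernoulli_negloglik(beta, y, X, link="logit"):
    """Negative Bernoulli log-likelihood under a logit or a complementary
    log-log link, for direct maximum likelihood estimation."""
    eta = X @ beta
    if link == "logit":
        prob = 1.0 / (1.0 + np.exp(-eta))
    else:                                          # complementary log-log
        prob = 1.0 - np.exp(-np.exp(eta))
    prob = np.clip(prob, 1e-12, 1.0 - 1e-12)       # numerical safety
    return -np.sum(y * np.log(prob) + (1 - y) * np.log(1.0 - prob))

# fit both links on the same data and compare fitted probabilities:
# fit_logit   = minimize(bernoulli_negloglik, np.zeros(X.shape[1]), args=(y, X, "logit"))
# fit_cloglog = minimize(bernoulli_negloglik, np.zeros(X.shape[1]), args=(y, X, "cloglog"))
```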

17.
The detection of outliers and influential observations has received a great deal of attention in the statistical literature in the context of least-squares (LS) regression. However, the explanatory variables can be correlated with each other, and alternatives to LS have been proposed to address outliers/influential observations and multicollinearity simultaneously. This paper proposes new influence measures based on the affine combination type regression for the detection of influential observations in the linear regression model when multicollinearity exists. Approximate influence measures are also proposed for the affine combination type regression. Since the affine combination type regression includes the ridge, the Liu and the shrunken regressions as special cases, influence measures under the ridge, the Liu and the shrunken regressions are also examined to see the possible effect that multicollinearity can have on the influence of an observation. The Longley data set is used to illustrate the influence measures in affine combination type regression, as well as in ridge, Liu and shrunken regressions, so that the performance of the different biased regressions in detecting and assessing influential observations can be examined.

18.
The Confusion Matrix is an important measure for evaluating the accuracy of credit scoring models. However, the literature on the Confusion Matrix is limited, its analytical properties have been ignored, and the concept itself is often applied inconsistently. In this article, we systematically study the Confusion Matrix and its analytical properties. We enumerate 16 possible variants of the Confusion Matrix and show that only 8 are reasonable. We study the relationship between the Confusion Matrix and two other performance measures: the receiver operating characteristic curve (ROC) and the Kolmogorov-Smirnov statistic (KS). We show that an optimal cutoff score can be obtained from the KS statistic.
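The KS-based cutoff mentioned at the end is simple to compute: it is the score at which the gap between the true-positive and false-positive rates is widest. A sketch, assuming higher scores indicate higher default risk:

```python
import numpy as np

def ks_optimal_cutoff(scores, y):
    """Return (cutoff, KS), where KS = max over thresholds of TPR - FPR,
    classifying 'positive' when score > cutoff. Ties at the cutoff are
    treated as negatives, which is fine for a sketch."""
    order = np.argsort(scores)
    s, lab = scores[order], y[order]
    n_pos = lab.sum()
    n_neg = len(lab) - n_pos
    tpr = 1.0 - np.cumsum(lab) / n_pos          # positives above each candidate cutoff
    fpr = 1.0 - np.cumsum(1 - lab) / n_neg      # negatives above each candidate cutoff
    i = int(np.argmax(tpr - fpr))
    return s[i], (tpr - fpr)[i]
```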

19.
In this paper we propose a new identification method, based on the residual white noise autoregressive criterion (Pukkila et al., 1990), to select the order of VARMA structures. Results from extensive simulation experiments based on different model structures, with varying numbers of observations and component series, are used to demonstrate the performance of this new procedure. We also use economic and business data to compare the model structures selected by this order selection method with those identified in other published studies.

20.
Recently, a body of literature has proposed new models relaxing a widely used but controversial assumption of independence between claim frequency and severity in non-life insurance ratemaking. This paper critically reviews a generalized linear model approach in which dependence between claim frequency and severity is introduced by treating frequency as a covariate in a regression model for severity. As an extension of this approach, we propose a dispersion model for severity. For this model, the information loss caused by using average severity rather than individual severities is examined in detail, and the parameter estimators suffering from low efficiency are identified. We also provide analytical solutions for the aggregate sum to aid ratemaking. We show that the simple functional form used in current research may not properly reflect the real underlying dependence structure. A real data analysis is given to illustrate our analytical findings.
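The covariate mechanism the paper reviews can be illustrated with a standard gamma GLM for average severity in which the claim count enters both as a regressor and as a variance weight (averaging N claims scales the severity variance by 1/N). A statsmodels sketch, with `X`, `n_claims`, and `avg_sev` as assumed policy-level inputs; the gamma family and log link are illustrative choices, not the paper's dispersion model.

```python
import numpy as np
import statsmodels.api as sm

def fit_dependent_severity(X, n_claims, avg_sev):
    """Gamma GLM for average claim severity with the observed frequency as a
    covariate -- the dependence device reviewed in the paper."""
    mask = n_claims > 0                          # severity observed only when claims occur
    design = sm.add_constant(np.column_stack([X[mask], n_claims[mask]]))
    model = sm.GLM(avg_sev[mask], design,
                   family=sm.families.Gamma(link=sm.families.links.Log()),
                   var_weights=n_claims[mask])   # averaging N claims shrinks variance by 1/N
    return model.fit()
```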
