Similar Articles
20 similar articles found (search time: 15 ms)
1.
In this paper we use bootstrap methodology to achieve accurate estimated prediction intervals for recovery rates. In the framework of the LossCalc model, which is the Moody's KMV model to predict loss given default, a single beta distribution is usually assumed to model the behaviour of recovery rates and, hence, to construct prediction intervals. We evaluate the coverage properties of beta estimated prediction intervals for multimodal recovery rates. We carry out a simulation study, and our results show that bootstrap versions of beta mixture prediction intervals exhibit the best coverage properties.
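The abstract above describes fitting a beta distribution to recovery rates and forming bootstrap prediction intervals. A minimal sketch of the idea follows, assuming a single beta model with a simple method-of-moments fit (the fitting choice and all function names are illustrative, not the paper's LossCalc specification):

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_mom(x):
    """Method-of-moments beta fit (illustrative stand-in for MLE)."""
    m, v = x.mean(), x.var()
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

def beta_bootstrap_pi(x, level=0.9, n_boot=2000):
    """Percentile bootstrap prediction interval under a fitted beta:
    resample the data, refit the beta, draw one future recovery rate."""
    lo, hi = (1 - level) / 2, 1 - (1 - level) / 2
    n = len(x)
    draws = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=n, replace=True)   # resample recovery rates
        a, bb = beta_mom(xb)                       # refit on each resample
        draws[b] = rng.beta(a, bb)                 # simulate a future recovery
    return np.quantile(draws, [lo, hi])

x = rng.beta(2.0, 5.0, size=200)                   # synthetic recovery rates
pi = beta_bootstrap_pi(x)
print(pi)
```

Resampling before refitting is what lets the interval reflect parameter-estimation uncertainty, not just the fitted beta's spread.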

2.
A bootstrap algorithm is proposed for testing Gaussianity and linearity in stationary time series, and consistency of the relevant bootstrap approximations is proven rigorously for the first time. Subba Rao and Gabr (1980) and Hinich (1982) have formulated some well-known nonparametric tests for Gaussianity and linearity based on the asymptotic distribution of the normalized bispectrum. The proposed bootstrap procedure gives an alternative way to approximate the finite-sample null distribution of such test statistics. We revisit a modified form of Hinich's test utilizing kernel smoothing, and compare its performance to the bootstrap test on several simulated data sets and two real data sets—the S&P 500 returns and the quarterly US real GNP growth rate. Interestingly, Hinich's test and the proposed bootstrapped version yield substantially different results when testing Gaussianity and linearity of the GNP data.

3.
Fast and robust bootstrap
In this paper we review recent developments on a bootstrap method for robust estimators which is computationally faster and more resistant to outliers than the classical bootstrap. This fast and robust bootstrap method is, under reasonable regularity conditions, asymptotically consistent. We describe the method in general and then consider its application to perform inference based on robust estimators for the linear regression and multivariate location-scatter models. In particular, we study confidence and prediction intervals and tests of hypotheses for linear regression models, inference for location-scatter parameters and principal components, and classification error estimation for discriminant analysis.

4.
In this paper we introduce a procedure to compute prediction intervals for FARIMA(p, d, q) processes, taking into account the variability due to model identification and parameter estimation. To this aim, a particular bootstrap technique is developed. The performance of the prediction intervals is then assessed and compared to that of standard bootstrap percentile intervals. The methods are applied to the time series of Nile River annual minima.

5.
We investigate the construction of a BCa-type bootstrap procedure for setting approximate prediction intervals for an efficient estimator θm of a scalar parameter θ, based on a future sample of size m. The results are also extended to nonparametric situations, which can be used to form bootstrap prediction intervals for a large class of statistics. These intervals are transformation-respecting and range-preserving. The asymptotic performance of our procedure is assessed by allowing both the past and future sample sizes to tend to infinity. The resulting intervals are then shown to be second-order correct and second-order accurate. These second-order properties are established in terms of min(m, n), and not the past sample size n alone.

6.
A non-stationary integer-valued autoregressive model
Time series of counts that are small in value yet show a trend with relatively large fluctuation are frequently encountered. To handle such a non-stationary integer-valued time series with large dispersion, we introduce a new process called the integer-valued autoregressive process of order p with signed binomial thinning (INARS(p)). The INARS(p) exists uniquely and is stationary under the same stationarity condition as the AR(p) process. We provide the properties of the INARS(p) as well as the asymptotic normality of the estimates of the model parameters. This new process includes previous integer-valued autoregressive processes as special cases. To preserve the integer-valued nature of the INARS(p) and to avoid difficulty in deriving the distributional properties of the forecasts, we propose a bootstrap approach for deriving forecasts and confidence intervals. We apply the INARS(p) to the frequency of new patients diagnosed with acquired immunodeficiency syndrome (AIDS) in Baltimore, Maryland, U.S. during the period of 108 months from January 1993 to December 2001.
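The binomial-thinning operator and the bootstrap forecast idea in the abstract above can be illustrated with a plain INAR(1) process (a simplification of the paper's signed-thinning INARS(p); the Poisson innovations, parameter values, and function names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def thin(alpha, x):
    """Binomial thinning: alpha ∘ x ~ Binomial(x, alpha)."""
    return rng.binomial(x, alpha)

def simulate_inar1(alpha, lam, n):
    """INAR(1): X_t = alpha ∘ X_{t-1} + eps_t, eps_t ~ Poisson(lam)."""
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))          # start near stationary mean
    for t in range(1, n):
        x[t] = thin(alpha, x[t - 1]) + rng.poisson(lam)
    return x

def bootstrap_forecast(x_last, alpha, lam, h=1, n_boot=5000, level=0.95):
    """Simulate h-step-ahead paths; the forecast interval stays integer-valued."""
    paths = np.full(n_boot, x_last)
    for _ in range(h):
        paths = rng.binomial(paths, alpha) + rng.poisson(lam, size=n_boot)
    q = np.quantile(paths, [(1 - level) / 2, 1 - (1 - level) / 2])
    return int(q[0]), int(q[1])

series = simulate_inar1(0.5, 2.0, 200)
lo, hi = bootstrap_forecast(series[-1], 0.5, 2.0)
print(lo, hi)
```

Because every step combines binomial thinning with integer innovations, the simulated forecasts are integers by construction, which is the motivation the abstract gives for bootstrapping rather than using a Gaussian approximation.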

7.
The sieve bootstrap (SB) prediction intervals for invertible autoregressive moving average (ARMA) processes are constructed using resamples of residuals obtained by fitting a finite-degree autoregressive approximation to the time series. The advantage of this approach is that it does not require knowledge of the orders, p and q, of the ARMA(p, q) model. Until recently, the application of this method was limited to ARMA processes whose autoregressive polynomials do not have fractional unit roots. The authors, in a 2012 publication, introduced a version of the SB suitable for fractionally integrated autoregressive moving average (FARIMA(p, d, q)) processes with 0 < d < 0.5 and established its asymptotic validity. Herein, we study the finite-sample properties of this new method and compare its performance against an older method introduced by Bisaglia and Grigoletto in 2001. The SB method is a numerically simpler alternative to the older method, which requires the estimation of p, d, and q at every bootstrap step. Monte-Carlo simulation studies, carried out under the assumption of normal, mixture-of-normals, and exponential distributions for the innovations, show near-nominal coverages for short-term and long-term SB prediction intervals under most situations. In addition, the sieve bootstrap method yields better coverage and narrower intervals than the Bisaglia–Grigoletto method in some situations, especially when the error distribution is a mixture of normals.
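The core sieve-bootstrap mechanics referenced above (fit a finite AR approximation, resample its centred residuals, regenerate the series) can be sketched as follows; the OLS fitting, AR order, and synthetic AR(1) data are my own illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns coefficients and centred residuals."""
    y = x[p:]
    X = np.column_stack([x[p - 1 - k : len(x) - 1 - k] for k in range(p)])
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ phi
    return phi, resid - resid.mean()

def sieve_bootstrap(x, p, n_boot=200):
    """Generate bootstrap replicates by resampling AR(p) residuals."""
    phi, resid = fit_ar(x, p)
    n = len(x)
    reps = np.empty((n_boot, n))
    for b in range(n_boot):
        e = rng.choice(resid, size=n, replace=True)
        out = list(x[:p])                          # warm-start with observed values
        for t in range(p, n):
            out.append(sum(phi[k] * out[t - 1 - k] for k in range(p)) + e[t])
        reps[b] = out
    return reps

# synthetic AR(1) data standing in for a real series
n = 300
x = np.empty(n); x[0] = 0.0
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()

reps = sieve_bootstrap(x, p=5, n_boot=200)
print(reps.shape)  # (200, 300)
```

Each replicate mimics the dependence structure through the fitted AR filter, which is exactly why the method needs no knowledge of p and q for the underlying ARMA model.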

8.
We construct bootstrap confidence intervals for smoothing spline estimates based on Gaussian data, and penalized likelihood smoothing spline estimates based on data from exponential families. Several variations of bootstrap confidence intervals are considered and compared. We find that the commonly used bootstrap percentile intervals are inferior to the T intervals and to intervals based on bootstrap estimation of mean squared errors. The best variations of the bootstrap confidence intervals behave similarly to the well-known Bayesian confidence intervals. These bootstrap confidence intervals have an average coverage probability across the function being estimated, as opposed to a pointwise property.

9.
The Bootstrap and Kriging Prediction Intervals
Kriging is a method for spatial prediction that, given observations of a spatial process, gives the optimal linear predictor of the process at a new specified point. The kriging predictor may be used to define a prediction interval for the value of interest. The coverage of the prediction interval will, however, equal the nominal desired coverage only if it is constructed using the correct underlying covariance structure of the process. If this is unknown, it must be estimated from the data. We study the effect on the coverage accuracy of the prediction interval of substituting the true covariance parameters by estimators, and the effect of bootstrap calibration of coverage properties of the resulting 'plugin' interval. We demonstrate that plugin and bootstrap calibrated intervals are asymptotically accurate in some generality and that bootstrap calibration appears to have a significant effect in improving the rate of convergence of coverage error.

10.
We consider a sieve bootstrap procedure to quantify the estimation uncertainty of long-memory parameters in stationary functional time series. We use a semiparametric local Whittle estimator to estimate the long-memory parameter. In the local Whittle estimator, discrete Fourier transform and periodogram are constructed from the first set of principal component scores via a functional principal component analysis. The sieve bootstrap procedure uses a general vector autoregressive representation of the estimated principal component scores. It generates bootstrap replicates that adequately mimic the dependence structure of the underlying stationary process. We first compute the estimated first set of principal component scores for each bootstrap replicate and then apply the semiparametric local Whittle estimator to estimate the memory parameter. By taking quantiles of the estimated memory parameters from these bootstrap replicates, we can nonparametrically construct confidence intervals of the long-memory parameter. As measured by coverage probability differences between the empirical and nominal coverage probabilities at three levels of significance, we demonstrate the advantage of using the sieve bootstrap compared to the asymptotic confidence intervals based on normality.

11.
Conventional approaches for inference about efficiency in parametric stochastic frontier (PSF) models are based on percentiles of the estimated distribution of the one-sided error term, conditional on the composite error. When used as prediction intervals, coverage is poor when the signal-to-noise ratio is low, but improves slowly as sample size increases. We show that prediction intervals estimated by bagging yield much better coverages than the conventional approach, even with low signal-to-noise ratios. We also present a bootstrap method that gives confidence interval estimates for (conditional) expectations of efficiency, and which have good coverage properties that improve with sample size. In addition, researchers who estimate PSF models typically reject models, samples, or both when residuals have skewness in the "wrong" direction, i.e., in a direction that would seem to indicate absence of inefficiency. We show that correctly specified models can generate samples with "wrongly" skewed residuals, even when the variance of the inefficiency process is nonzero. Both our bagging and bootstrap methods provide useful information about inefficiency and model parameters irrespective of whether residuals have skewness in the desired direction.

12.
In this article, bootstrap confidence intervals for the process capability index suggested by Chen and Pearn [An application of non-normal process capability indices. Qual Reliab Eng Int. 1997;13:355–360] are studied through simulation when the underlying distributions are the inverse Rayleigh and log-logistic distributions. The well-known maximum likelihood estimator is used to estimate the parameter. Several types of bootstrap confidence intervals are considered. A Monte Carlo simulation is used to investigate the estimated coverage probabilities and average widths of the bootstrap confidence intervals. Application examples on the two distributions for process capability indices are provided for practical use.

13.
Regression analysis is one of the important tools in statistics to investigate the relationships among variables. When the sample size is small, however, the assumptions for regression analysis can be violated. This research focuses on using the exact bootstrap to construct confidence intervals for regression parameters in small samples. The exact bootstrap method was compared with the basic bootstrap method in a simulation study. It was found that for a very small sample (n ≈ 5) under a Laplace distribution with the independent variable treated as random, the exact bootstrap was more effective than the basic bootstrap confidence interval.

14.
We propose a new approach to the selection of partially linear models based on the conditional expected prediction square loss function, which is estimated using the bootstrap. Because of the different speeds of convergence of the linear and the nonlinear parts, a key idea is to select each part separately. In the first step, we select the nonlinear components using an 'm-out-of-n' residual bootstrap that ensures good properties for the nonparametric bootstrap estimator. The second step selects the linear components from the remaining explanatory variables, and the non-zero parameters are selected based on a two-level residual bootstrap. We show that the model selection procedure is consistent under some conditions, and our simulations suggest that it selects the true model more often than the other selection procedures considered.

15.
We study various bootstrap and permutation methods for matched pairs, whose distributions can have different shapes even under the null hypothesis of no treatment effect. Although the data may not be exchangeable under the null, we investigate different permutation approaches as valid procedures for finite sample sizes. It will be shown that permutation or bootstrap schemes, which neglect the dependency structure in the data, are asymptotically valid. Simulation studies show that these new tests improve the power of the t-test under non-normality.
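For context on the matched-pairs setting discussed above, here is a generic sign-flip permutation test on paired differences; this is a standard textbook scheme, not the paper's specific studentized procedures, and the effect sizes and function names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def signflip_permutation_test(x, y, n_perm=5000):
    """Matched-pairs permutation test: under the null of no treatment
    effect, randomly flip the signs of the paired differences."""
    d = x - y
    t_obs = abs(d.mean())
    flips = rng.choice([-1.0, 1.0], size=(n_perm, len(d)))
    t_perm = np.abs((flips * d).mean(axis=1))
    return (1 + np.sum(t_perm >= t_obs)) / (1 + n_perm)   # p-value

x = rng.normal(0.5, 1.0, size=40)   # treatment measurements
y = rng.normal(0.0, 1.0, size=40)   # paired control measurements
p = signflip_permutation_test(x, y)
print(p)
```

Note that sign-flipping assumes the differences are symmetric under the null; the abstract's point is precisely that such exchangeability can fail for pairs with differently shaped margins, motivating the corrected resampling schemes it studies.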

16.
For the general linear regression model Y = Xη + e, we construct small-sample exponentially tilted empirical confidence intervals for a linear parameter θ = aTη and for nonlinear functions of η. The coverage error for the intervals is Op(1/n), as shown in Tingley and Field (1990). The technique, though sample-based, does not require bootstrap resampling. The first step is the calculation of an estimate for η; we have used a Mallows estimate. The algorithm applies whenever η is estimated as the solution of a system of equations having expected value 0. We include calculations of the relative efficiency of the estimator (compared with the classical least-squares estimate). The intervals are compared with asymptotic intervals as found, for example, in Hampel et al. (1986). We demonstrate that the procedure gives sensible intervals for small samples.

17.
Multi-asset modelling is of fundamental importance to financial applications such as risk management and portfolio selection. In this article, we propose a multivariate stochastic volatility modelling framework with a parsimonious and interpretable correlation structure. Building on well-established evidence of common volatility factors among individual assets, we consider a multivariate diffusion process with a common-factor structure in the volatility innovations. Upon substituting an observable market proxy for the common volatility factor, we markedly improve the estimation of several model parameters and latent volatilities. The model is applied to a portfolio of several important constituents of the S&P500 in the financial sector, with the VIX index as the common-factor proxy. We find that the prediction intervals for asset forecasts are comparable to those of more complex dependence models, but that option-pricing uncertainty can be greatly reduced by adopting a common-volatility structure. The Canadian Journal of Statistics 48: 36–61; 2020 © 2020 Statistical Society of Canada

18.
The results of analyzing experimental data using a parametric model may depend heavily on the chosen regression and variance functions, and also on any preliminary transformation of the variables. In this paper we propose and discuss a procedure that simultaneously selects parametric regression and variance models from a relatively rich model class, together with Box-Cox variable transformations, by minimizing a cross-validation criterion. For this it is essential to introduce modifications of the standard cross-validation criterion adapted to each of the following objectives: 1. estimation of the unknown regression function, 2. prediction of future values of the response variable, 3. calibration, or 4. estimation of some parameter with a certain meaning in the corresponding field of application. Our idea of a criterion-oriented combination of procedures (which, if applied at all, are usually applied independently or sequentially) is expected to lead to more accurate results. We show how the accuracy of the parameter estimators can be assessed by a "moment oriented bootstrap procedure", an essential modification of the "wild bootstrap" of Härdle and Mammen that uses more accurate variance estimates. This new procedure and its refinement by a bootstrap-based pivot ("double bootstrap") are also used for the construction of confidence, prediction and calibration intervals. Programs written in Splus which realize our strategy for nonlinear regression modelling and parameter estimation are described as well. The performance of the selected model is discussed, and the behaviour of the procedures is illustrated, e.g., by an application in radioimmunological assay.

19.
In epidemiological surveillance it is important that any unusual increase of reported cases be detected as rapidly as possible. Reliable forecasting based on a suitable time series model for an epidemiological indicator is necessary for estimating the expected non-epidemic indicator and to elaborate an alert threshold. Time series analyses of acute diseases often use Gaussian autoregressive integrated moving average models. However, these approaches can be adversely affected by departures from the true underlying distribution. The objective of this paper is to introduce a bootstrap procedure for obtaining prediction intervals in linear models in order to avoid the normality assumption. We present a Monte Carlo study comparing the finite sample properties of bootstrap prediction intervals with those of alternative methods. Finally, we illustrate the performance of the proposed method with a meningococcal disease incidence series.
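The normality-free prediction intervals described above can be sketched with a residual bootstrap for a linear model: refit on each resampled dataset and add a resampled future error. The Laplace errors and function names below are illustrative assumptions, not the paper's data or notation:

```python
import numpy as np

rng = np.random.default_rng(3)

def residual_bootstrap_pi(X, y, x_new, level=0.95, n_boot=2000):
    """Residual-bootstrap prediction interval at covariate x_new:
    no normality assumption on the errors is required."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    resid = resid - resid.mean()                   # centre the residuals
    preds = np.empty(n_boot)
    for b in range(n_boot):
        yb = X @ beta + rng.choice(resid, size=len(y), replace=True)
        bb, *_ = np.linalg.lstsq(X, yb, rcond=None)  # refit on bootstrap sample
        preds[b] = x_new @ bb + rng.choice(resid)    # add a future error draw
    a = (1 - level) / 2
    return np.quantile(preds, [a, 1 - a])

n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.laplace(size=n)   # non-Gaussian errors
lo, hi = residual_bootstrap_pi(X, y, np.array([1.0, 0.5]))
print(lo, hi)
```

Because both the refitting noise and the future error come from the empirical residual distribution, the interval adapts to skewed or heavy-tailed errors where a Gaussian interval would miscover.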

20.
Some studies of the bootstrap have assessed the effect of smoothing the estimated distribution that is resampled, a process usually known as the smoothed bootstrap. Generally, the smoothed distribution for resampling is a kernel estimate and is often rescaled to retain certain characteristics of the empirical distribution. Typically the effect of such smoothing has been measured in terms of the mean-squared error of bootstrap point estimates. The reports of these previous investigations have not been encouraging about the efficacy of smoothing. In this paper the effect of resampling a kernel-smoothed distribution is evaluated through expansions for the coverage of bootstrap percentile confidence intervals. It is shown that, under the smooth function model, proper bandwidth selection can accomplish a first-order correction for the one-sided percentile method. With the objective of reducing the coverage error, the appropriate bandwidth for one-sided intervals converges at a rate of n^(-1/4), rather than the familiar n^(-1/5) for kernel density estimation. Applications of this same approach to bootstrap t and two-sided intervals yield optimal bandwidths of order n^(-1/2). These bandwidths depend on moments of the smooth function model and not on derivatives of the underlying density of the data. The relationship of this smoothing method to both the accelerated bias correction and the bootstrap t methods provides some insight into the connections between three quite distinct approximate confidence intervals.
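Resampling from a kernel-smoothed empirical distribution, as discussed above, amounts to drawing an ordinary bootstrap resample and adding kernel-scaled noise. A minimal sketch with a Gaussian kernel follows; the specific bandwidth value and the use of the interval for a sample mean are illustrative, not the paper's smooth function model:

```python
import numpy as np

rng = np.random.default_rng(4)

def smoothed_bootstrap(x, h, size):
    """Draw from the kernel-smoothed empirical distribution:
    a resampled data point plus Gaussian noise with bandwidth h."""
    return rng.choice(x, size=size, replace=True) + h * rng.normal(size=size)

x = rng.normal(size=200)
n = len(x)
h = n ** (-0.25)   # illustrative n^(-1/4)-rate bandwidth for one-sided intervals

# percentile interval for the mean from smoothed resamples
means = np.array([smoothed_bootstrap(x, h, n).mean() for _ in range(2000)])
ci = np.quantile(means, [0.05, 0.95])
print(ci)
```

The point of the paper's expansions is that choosing h at the right rate (here n^(-1/4) rather than the density-estimation rate n^(-1/5)) is what delivers the first-order coverage correction for one-sided percentile intervals.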
