Similar Articles
20 similar articles found.
1.
We propose some Stein-rule combination forecasting methods that are designed to ameliorate the estimation risk inherent in making operational the variance–covariance method for constructing combination weights. By Monte Carlo simulation, it is shown that this amelioration can be substantial in many cases. Moreover, generalized Stein-rule combinations are proposed that offer the user the opportunity to enhance combination forecasting performance by shrinking the feasible variance–covariance weights toward a fortuitous shrinkage point. In an empirical exercise, the proposed Stein-rule combinations performed well relative to competing combination methods.
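As a rough illustration of the idea (not the paper's estimator), the sketch below computes variance–covariance combination weights from a matrix of forecast errors and shrinks them toward the equal-weight point. The function name and the fixed `shrink` intensity are hypothetical stand-ins for the data-driven Stein-rule factor derived in the paper.

```python
import numpy as np

def combination_weights(errors, shrink=0.5):
    """Variance-covariance combination weights, shrunk toward equal weights.

    `errors` is a T x k matrix of forecast errors; `shrink` in [0, 1] is a
    fixed, illustrative shrinkage intensity (the Stein-rule methods choose
    it from the data).
    """
    k = errors.shape[1]
    sigma = np.cov(errors, rowvar=False)          # estimated error covariance
    inv_one = np.linalg.solve(sigma, np.ones(k))  # Sigma^{-1} 1
    w_vc = inv_one / inv_one.sum()                # variance-covariance weights
    w_eq = np.full(k, 1.0 / k)                    # equal-weight shrinkage point
    return (1 - shrink) * w_vc + shrink * w_eq

rng = np.random.default_rng(0)
e = rng.normal(size=(200, 3))  # simulated forecast errors for 3 models
w = combination_weights(e)     # weights sum to one by construction
```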

2.
Variance estimation under systematic sampling with probability proportional to size is known to be a difficult problem. We attempt to tackle this problem by the bootstrap resampling method. It is shown that the usual way to bootstrap fails to give satisfactory variance estimates. As a remedy, we propose a double bootstrap method which is based on certain working models and involves two levels of resampling. Unlike existing methods which deal exclusively with the Horvitz–Thompson estimator, the double bootstrap method can be used to estimate the variance of any statistic. We illustrate this within the context of both mean and median estimation. Empirical results based on five natural populations are encouraging.
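For context, here is a minimal sketch of the "usual" single-level bootstrap variance estimator that the abstract reports to be unsatisfactory under PPS systematic sampling; the proposed double bootstrap adds a second, model-based resampling level that is not reproduced here. The helper `ht_mean` is an illustrative weighted-mean statistic, not the paper's estimator.

```python
import numpy as np

def bootstrap_variance(y, weights, stat, B=500, seed=0):
    """Naive with-replacement bootstrap variance of `stat`.

    A sketch only: resample units i.i.d. and recompute the statistic.
    The paper shows this is inadequate under PPS systematic sampling
    and remedies it with a second, working-model-based resampling level.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    reps = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)       # resample unit indices
        reps[b] = stat(y[idx], weights[idx])   # recompute the statistic
    return reps.var(ddof=1)

# Illustrative Hajek-type weighted mean (any statistic could be plugged in).
ht_mean = lambda y, w: np.sum(w * y) / np.sum(w)
```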

3.
Resampling methods are a common measure to estimate the variance of a statistic of interest when data consist of nonresponse and imputation is used as compensation. Applying resampling methods usually means that subsamples are drawn from the original sample and that variance estimates are computed based on point estimators of several subsamples. However, newer resampling methods such as the rescaling bootstrap of Chipperfield and Preston [Efficient bootstrap for business surveys. Surv Methodol. 2007;33:167–172] include all elements of the original sample in the computation of its point estimator. Thus, procedures to consider imputation in resampling methods cannot be applied in the ordinary way. For such methods, modifications are necessary. This paper presents an approach applying newer resampling methods for imputed data. The Monte Carlo simulation study conducted in the paper shows that the proposed approach leads to reliable variance estimates in contrast to other modifications.

4.
This paper presents a new method to estimate the quantiles of generic statistics by combining the concept of random weighting with importance resampling. This method converts the problem of quantile estimation to a dual problem of tail probabilities estimation. Random weighting theories are established to calculate the optimal resampling weights for estimation of tail probabilities via sequential variance minimization. Subsequently, the quantile estimation is constructed by using the obtained optimal resampling weights. Experimental results on real and simulated data sets demonstrate that the proposed random weighting method can effectively estimate the quantiles of generic statistics.
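A bare-bones sketch of the random-weighting idea: draw Dirichlet weights to approximate the distribution of the sample mean and read off a quantile. The paper's importance-resampling step, with optimal weights obtained by sequential variance minimization, is omitted, and the function name is an assumption for illustration.

```python
import numpy as np

def random_weighting_quantile(x, prob, B=2000, seed=0):
    """Random-weighting (Dirichlet-weight) estimate of a quantile of the
    sample mean's distribution.

    Plain version without the importance-resampling / variance-minimization
    refinement described in the paper.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    w = rng.dirichlet(np.ones(n), size=B)  # B sets of random weights summing to 1
    means = w @ x                          # one weighted mean per replicate
    return np.quantile(means, prob)
```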

5.
For m-dependent, identically distributed random observations, the bootstrap method provides inconsistent estimators of the distribution and variance of the sample mean. This paper proposes an alternative resampling procedure. For estimating the distribution and variance of a function of the sample mean, the proposed resampling estimators are shown to be strongly consistent.

6.
Imputation is often used in surveys to treat item nonresponse. It is well known that treating the imputed values as observed values may lead to substantial underestimation of the variance of the point estimators. To overcome the problem, a number of variance estimation methods have been proposed in the literature, including resampling methods such as the jackknife and the bootstrap. In this paper, we consider the problem of doubly robust inference in the presence of imputed survey data. In the doubly robust literature, point estimation has been the main focus. In this paper, using the reverse framework for variance estimation, we derive doubly robust linearization variance estimators in the case of deterministic and random regression imputation within imputation classes. Also, we study the properties of several jackknife variance estimators under both negligible and nonnegligible sampling fractions. A limited simulation study investigates the performance of various variance estimators in terms of relative bias and relative stability. Finally, the asymptotic normality of imputed estimators is established for stratified multistage designs under both deterministic and random regression imputation. The Canadian Journal of Statistics 40: 259–281; 2012 © 2012 Statistical Society of Canada
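As background for the jackknife variance estimators studied here, a minimal delete-one jackknife for a generic statistic on complete (already imputed) data. The paper's adjustments for the imputation mechanism and the survey design are not reflected in this sketch.

```python
import numpy as np

def jackknife_variance(y, stat=np.mean):
    """Delete-one jackknife variance estimate of `stat` applied to `y`.

    For the sample mean this reproduces the classical s^2/n formula; for
    imputed survey data, additional corrections (as in the paper) are needed.
    """
    n = len(y)
    reps = np.array([stat(np.delete(y, i)) for i in range(n)])  # leave-one-out replicates
    return (n - 1) / n * np.sum((reps - reps.mean()) ** 2)
```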

7.
Comparison of different estimation techniques for portfolio selection
The main problem in applying mean-variance portfolio selection is that the first two moments of the asset returns are unknown, so in practice the optimal portfolio weights have to be estimated. This is usually done by replacing the moments with the classical unbiased sample estimators. We provide a comparison of the exact and the asymptotic distributions of the estimated portfolio weights, as well as a sensitivity analysis with respect to shifts in the moments of the asset returns. Furthermore, we consider several types of shrinkage estimators for the moments. The corresponding estimators of the portfolio weights are compared with each other and with the portfolio weights based on the sample estimators of the moments. We show how the uncertainty about the portfolio weights can be incorporated into the performance measurement of trading strategies. The methodology explains the poor out-of-sample performance of the classical Markowitz procedures.
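A small sketch of the plug-in step being analyzed: global minimum-variance weights computed from the sample covariance matrix, with an optional convex shrinkage toward a diagonal target as a simple stand-in for the shrinkage estimators compared in the paper. The `shrink` intensity and target choice are hypothetical.

```python
import numpy as np

def gmv_weights(returns, shrink=0.0):
    """Global minimum-variance portfolio weights from the sample covariance.

    `returns` is a T x k matrix of asset returns. With `shrink` > 0 the
    sample covariance is pulled toward its diagonal -- an illustrative
    shrinkage target, not the paper's specific estimator.
    """
    s = np.cov(returns, rowvar=False)
    target = np.diag(np.diag(s))                 # diagonal shrinkage target
    sigma = (1 - shrink) * s + shrink * target
    k = sigma.shape[0]
    w = np.linalg.solve(sigma, np.ones(k))       # Sigma^{-1} 1
    return w / w.sum()                           # normalize to sum to one
```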

8.
Based on recent developments in the field of operations research, we propose two adaptive resampling algorithms for estimating bootstrap distributions. One algorithm applies the principle of the recently proposed cross-entropy (CE) method for rare event simulation, and does not require calculation of the resampling probability weights via numerical optimization methods (e.g., Newton's method), whereas the other algorithm can be viewed as a multi-stage extension of the classical two-step variance minimization approach. The two algorithms can be easily used as part of a general algorithm for Monte Carlo calculation of bootstrap confidence intervals and tests, and are especially useful in estimating rare event probabilities. We analyze theoretical properties of both algorithms in an idealized setting and carry out simulation studies to demonstrate their performance. Empirical results on both one-sample and two-sample problems as well as a real survival data set show that the proposed algorithms are not only superior to traditional approaches, but may also provide more than an order of magnitude of computational efficiency gains.

9.
Alternative methods of estimating properties of unknown distributions include the bootstrap and the smoothed bootstrap. In the standard bootstrap setting, Johns (1988) introduced an importance resampling procedure that results in a more accurate approximation to the bootstrap estimate of a distribution function or a quantile. With a suitable "exponential tilting" similar to that used by Johns, we derive a smoothed version of importance resampling in the framework of the smoothed bootstrap. Smoothed importance resampling procedures are developed for the estimation of distribution functions of the Studentized mean, the Studentized variance, and the correlation coefficient. Implementation of these procedures is presented via simulation results, which concentrate on the estimation of distribution functions of the Studentized mean and Studentized variance for different sample sizes and various pre-specified smoothing bandwidths for normal data; additional simulations were conducted for the estimation of quantiles of the distribution of the Studentized mean under an optimal smoothing bandwidth when the original data were simulated from three different parent populations: lognormal, t(3) and t(10). These results suggest that, in cases where it is advantageous to use the smoothed bootstrap rather than the standard bootstrap, the amount of resampling necessary can be substantially reduced by the use of importance resampling methods, with efficiency gains that depend on the bandwidth used in the kernel density estimation.

10.
To improve the out-of-sample performance of the portfolio, Lasso regularization is incorporated into the Mean Absolute Deviation (MAD)-based portfolio selection method. It is shown that such a portfolio selection problem can be reformulated as a constrained Least Absolute Deviation problem with linear equality constraints. Moreover, we propose a new descent algorithm based on the ideas of 'nonsmooth optimality conditions' and 'basis descent direction set'. The resulting MAD-Lasso method enjoys at least two advantages. First, it does not involve the estimation of a covariance matrix, which is difficult particularly in high-dimensional settings. Second, sparsity is encouraged: assets with weights close to zero in Markowitz's portfolio are driven to zero automatically, which reduces the management cost of the portfolio. Extensive simulation and real data examples indicate that when Lasso regularization is incorporated, the MAD portfolio selection method is consistently improved in terms of out-of-sample performance, measured by the Sharpe ratio and sparsity. Moreover, simulation results suggest that the proposed descent algorithm is more time-efficient than the interior point method and the ADMM algorithm.

11.
How to draw statistical inferences from web access panel surveys is a serious challenge facing web surveys in the big-data era. To address this problem, we propose combining the web access panel sample with a probability sample, using inverse propensity-score weighting and weighting-class adjustment to construct pseudo-weights for estimating the target population. Variances are then estimated with the Vwr method based on with-replacement probability sampling, the Vgreg method based on generalized regression estimation, and the Jackknife method, and the performance of the different estimators is compared. The study shows that the proposed estimators of the population mean perform well whether the probability sample is large or small, and that among the variance estimators the Jackknife method performs best.

12.
We investigate if portfolios can be improved if the classical Markowitz mean–variance portfolio theory is combined with recently proposed change point tests for dependence measures. Taking into account that the dependence structure of financial assets typically cannot be assumed to be constant over longer periods of time, we estimate the covariance matrix of the assets, which is used to construct global minimum-variance portfolios, by respecting potential change points. It is seen that a recently proposed test for changes in the whole covariance matrix is indeed partially useful whereas pairwise tests for variances and correlations are not suitable for these applications without further adjustments.

13.
We investigate resampling methodologies for testing the null hypothesis that two samples of labelled landmark data in three dimensions come from populations with a common mean reflection shape or mean reflection size-and-shape. The investigation includes comparisons between (i) two different test statistics that are functions of the projection of the data onto tangent space, namely the James statistic and an empirical likelihood statistic; (ii) bootstrap and permutation procedures; and (iii) three methods for resampling under the null hypothesis, namely translating in tangent space, resampling using weights determined by empirical likelihood, and using a novel method to transform the original sample entirely within reflection shape space. We present results of extensive numerical simulations, on the basis of which we recommend a bootstrap test procedure that we expect will work well in practice. We demonstrate the procedure using a data set of human faces, to test whether humans in different age groups have a common mean face shape.
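A generic two-sample permutation test in the spirit of the procedures compared here; the actual tests operate on tangent-space projections with the James or empirical likelihood statistic, whereas this sketch accepts any user-supplied two-sample statistic `stat`.

```python
import numpy as np

def permutation_pvalue(a, b, stat, B=2000, seed=0):
    """Permutation p-value for the two-sample statistic `stat(a, b)`.

    Pools the samples, randomly re-splits them B times, and reports the
    (add-one-smoothed) fraction of permuted statistics at least as large
    as the observed one.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    n = len(a)
    obs = stat(a, b)
    count = 0
    for _ in range(B):
        perm = rng.permutation(pooled)          # random relabelling
        if stat(perm[:n], perm[n:]) >= obs:
            count += 1
    return (count + 1) / (B + 1)
```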

14.
A nested case–control (NCC) study is an efficient cohort-sampling design in which a subset of controls is sampled from the risk set at each event time. Since covariate measurements are taken only for the sampled subjects, the time and effort of conducting a full-scale cohort study can be saved. In this paper, we consider fitting a semiparametric accelerated failure time model to failure time data from an NCC study. We propose an efficient induced smoothing procedure for the rank-based estimating method for regression parameter estimation. For variance estimation, we propose an efficient resampling method that utilizes the robust sandwich form. We extend the proposed methods to a generalized NCC study that allows sampling of cases. Finite-sample properties of the proposed estimators are investigated via an extensive simulation study. An application to a tumor study illustrates the utility of the proposed method in routine data analysis.
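The NCC sampling step can be sketched as follows: for each observed event, draw `m` controls at random from the subjects still at risk at that event time. This is a simplified illustration of the design only (ties and the generalized design's sampling of cases are ignored, and the function name is an assumption).

```python
import numpy as np

def ncc_sample(event_times, status, m, seed=0):
    """Nested case-control sampling: m controls per event from its risk set.

    `event_times` are observed times, `status` is 1 for an event and 0 for
    censoring. Returns (case index, sorted control indices) pairs.
    """
    rng = np.random.default_rng(seed)
    n = len(event_times)
    pairs = []
    for i in np.flatnonzero(status == 1):
        # Risk set: subjects still under observation at the case's event time.
        at_risk = [j for j in range(n)
                   if event_times[j] >= event_times[i] and j != i]
        if not at_risk:
            continue
        ctrls = rng.choice(at_risk, size=min(m, len(at_risk)), replace=False)
        pairs.append((int(i), sorted(int(c) for c in ctrls)))
    return pairs
```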

15.
Resampling methods are proposed to estimate the distributions of sums of m-dependent, possibly differently distributed, real-valued random variables. The random variables are allowed to have varying mean values. A nonparametric resampling method based on the moving blocks bootstrap is proposed for the case in which the mean values are smoothly varying or 'asymptotically equal'. The idea is to resample blocks in pairs. It is also confirmed that a 'circular' block resampling scheme can be used in the case where the mean values are 'asymptotically equal'. A central limit resampling theorem for each of the two cases is proved. The resampling methods have a potential application in time series analysis, to distinguish between two different forecasting models. This is illustrated with an example using Swedish export prices of coated paper products.
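A plain moving-blocks bootstrap of the sample mean for dependent data; the paper's refinement of resampling blocks in pairs to accommodate smoothly varying means is not included in this sketch.

```python
import numpy as np

def moving_blocks_bootstrap(x, block_len, B=1000, seed=0):
    """Moving-blocks bootstrap replicates of the sample mean.

    Each replicate concatenates randomly chosen overlapping blocks of
    length `block_len` until (roughly) the original sample size is reached.
    Plain version; the paired-block scheme of the paper is omitted.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    starts_max = n - block_len + 1       # number of admissible block starts
    n_blocks = n // block_len
    means = np.empty(B)
    for b in range(B):
        starts = rng.integers(0, starts_max, size=n_blocks)
        sample = np.concatenate([x[s:s + block_len] for s in starts])
        means[b] = sample.mean()
    return means
```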

16.
This paper shows that a minimax Bayes rule and shrinkage estimators can be effectively applied to portfolio selection under the Bayesian approach. Specifically, it is shown that the portfolio selection problem can be reduced to a statistical decision problem in some situations. We then present a method for solving a problem that arises in portfolio selection under the Bayesian approach.

17.
To obtain estimators of mean-variance optimal portfolio weights, Stein-type estimators of the mean vector that shrink the sample mean towards the grand mean have been applied. However, the dominance of these estimators has not been shown under the loss function used in the estimation of mean-variance optimal portfolio weights, which differs from the quadratic loss when the covariance matrix is unknown. We analytically give conditions under which Stein-type estimators that shrink towards the grand mean, or more generally towards a linear subspace, improve upon the classical estimators obtained by simply plugging in sample estimates. We also show the dominance when there are linear constraints on the portfolio weights.

18.
We discuss the impact of tuning-parameter selection uncertainty in the context of shrinkage estimation and propose a methodology to account for problems arising from this issue. Transferring established concepts from model averaging to shrinkage estimation yields shrinkage averaging estimation (SAE), which reflects the idea of using weighted combinations of shrinkage estimators with different tuning parameters to improve the overall stability, predictive performance and standard errors of shrinkage estimators. Two distinct approaches for an appropriate weight choice, both inspired by concepts from the recent model averaging literature, are presented. The first approach relates to an optimal weight choice with regard to the predictive performance of the final weighted estimator, and its implementation can be realized via quadratic programming. The second approach has a fairly different motivation and constructs the weights via a resampling experiment. Focusing on Ridge, Lasso and Random Lasso estimators, the properties of the shrinkage averaging estimators resulting from these strategies are explored by means of Monte Carlo studies and are compared to traditional approaches where the tuning parameter is simply selected via cross-validation criteria. The results show that the proposed SAE methodology can improve an estimator's overall performance and can reveal and incorporate tuning-parameter uncertainty. As an illustration, selected methods are applied to recent data from a study on leadership behavior in life science companies.
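The core SAE construction can be sketched with closed-form ridge fits: estimate the coefficient vector at several tuning parameters and return a weighted combination. Here the weights are supplied by the user, whereas the paper chooses them by quadratic programming or a resampling experiment; the function names are illustrative.

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimator (X'X + lam I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def shrinkage_average(X, y, lams, weights):
    """Weighted combination of ridge fits at different tuning parameters.

    A minimal SAE sketch: the weight vector is user-supplied rather than
    chosen by the optimization or resampling schemes of the paper.
    """
    betas = np.column_stack([ridge(X, y, l) for l in lams])  # one column per fit
    return betas @ np.asarray(weights)
```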

19.
Stress testing a correlation matrix is a challenging exercise in portfolio risk management. Most existing methods directly modify the estimated correlation matrix to satisfy stress conditions while maintaining positive semidefiniteness. The focus lies on technical optimization issues, but the resulting stressed correlation matrices usually lack statistical interpretation. In this article, we suggest a novel approach using the empirical likelihood method to modify the probability weights of sample observations to construct a stressed correlation matrix. The resulting correlations correspond to a stress scenario that is nearest to the observed scenario in the Kullback–Leibler divergence sense. Besides providing a clearer statistical interpretation, the proposed method is distribution-free, simple to compute and free from subjective tuning. We illustrate the method through an application to a portfolio of international assets.
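Once stressed observation weights are available, the stressed correlation is simply a weighted correlation. The sketch below computes that final step for given weights `w`; it does not perform the empirical likelihood optimization that produces the weights.

```python
import numpy as np

def weighted_corr(x, y, w):
    """Correlation of x and y under observation weights w (normalized to sum to 1).

    With uniform weights this reduces to the ordinary sample correlation;
    stressed correlations arise by choosing w under moment constraints
    (e.g. via empirical likelihood), which is outside this sketch.
    """
    w = np.asarray(w) / np.sum(w)
    mx, my = w @ x, w @ y                        # weighted means
    cov = w @ ((x - mx) * (y - my))              # weighted covariance
    return cov / np.sqrt((w @ (x - mx) ** 2) * (w @ (y - my) ** 2))
```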

20.
Binary logistic regression is a widely used statistical method when the dependent variable has two categories. In many applications of logistic regression, the independent variables are collinear, which is known as the multicollinearity problem. Multicollinearity is known to inflate the variance of the maximum likelihood estimator (MLE). This article therefore introduces new shrinkage parameters for the Liu-type estimators of Liu (2003) in the logistic regression model considered by Huang (2012), in order to decrease the variance and overcome the multicollinearity problem. A Monte Carlo study is designed to show the superiority of the proposed estimators over the MLE in the sense of mean squared error (MSE) and mean absolute error (MAE). Moreover, a real data example demonstrates the advantages of the new shrinkage parameters.
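A minimal sketch of the estimator family discussed: a Newton–Raphson logistic MLE followed by a Liu-type transform (X'WX + kI)^{-1}(X'WX − dI)β̂ with illustrative values of (k, d). The article's contribution is data-driven choices of these shrinkage parameters, which are not reproduced here, and the exact Liu-type form shown is an assumption for illustration.

```python
import numpy as np

def logistic_mle(X, y, iters=25):
    """Newton-Raphson maximum likelihood for logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))      # fitted probabilities
        W = p * (1.0 - p)                         # IRLS weights
        H = X.T @ (W[:, None] * X)                # observed information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

def liu_type(X, beta_mle, k=1.0, d=0.5):
    """Liu-type shrinkage of the logistic MLE:
    beta = (X'WX + kI)^{-1} (X'WX - dI) beta_MLE,
    with illustrative (k, d) rather than the article's data-driven choices."""
    p = 1.0 / (1.0 + np.exp(-X @ beta_mle))
    W = p * (1.0 - p)
    G = X.T @ (W[:, None] * X)
    I = np.eye(X.shape[1])
    return np.linalg.solve(G + k * I, (G - d * I) @ beta_mle)
```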


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号