Similar articles
Found 20 similar articles; search time: 31 ms
1.
To address the problem of determining index weights in grey clustering, the classification discrimination degree of the whitenization weight functions is defined to measure each index's contribution to classifying the cluster objects, and the index weights are determined accordingly. On this basis, a variable-weight grey clustering method is proposed. The results show that this method can fuse the sample information of the cluster objects with expert experience, effectively determine the index weights for different cluster objects, and is applicable when the clustering indices have different dimensions and widely differing orders of magnitude. Finally, a worked example illustrates the practicality and effectiveness of variable-weight grey clustering.

2.
Multiple Hypotheses Testing with Weights   (cited by: 2; self-citations: 0; citations by others: 2)
In this paper we offer a multiplicity of approaches and procedures for multiple testing problems with weights. Some rationales for incorporating weights in multiple hypotheses testing are discussed. Various type-I error rates and different possible formulations are considered, for both the intersection hypothesis testing and the multiple hypotheses testing problems. An optimal per-family weighted error-rate controlling procedure à la Spjotvoll (1972) is obtained. This model serves as a vehicle for demonstrating the different implications of the approaches to weighting. Alternative approaches to that of Holm (1979) for family-wise error-rate control with weights are discussed, one involving an alternative procedure for family-wise error-rate control, and the other involving the control of a weighted family-wise error rate. Extensions and modifications of the procedures based on Simes (1986) are given. These include a test of the overall intersection hypothesis with general weights, and weighted sequentially rejective procedures for testing the individual hypotheses. The false discovery rate controlling approach and procedure of Benjamini & Hochberg (1995) are extended to allow for different weights.
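As a hedged illustration of how weights can enter an FDR-controlling procedure, the sketch below implements the weighted p-value variant (apply the Benjamini–Hochberg step-up rule to p_i/w_i with mean-one weights). This is one common formulation, not necessarily the exact extension proposed in the paper:

```python
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Weighted BH: apply the step-up rule to weighted p-values p_i / w_i,
    where the weights are rescaled to have mean one."""
    p = np.asarray(pvals, float)
    w = np.asarray(weights, float)
    w = w * len(w) / w.sum()               # mean-one weights
    q = p / w                              # weighted p-values
    order = np.argsort(q)
    m = len(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = q[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True             # reject the k smallest q_i
    return rejected
```

With equal weights this reduces to the ordinary Benjamini–Hochberg procedure; larger weights make the corresponding hypotheses easier to reject.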

3.
The authors consider the construction of weights for Generalised M‐estimation. Such weights, when combined with appropriate score functions, afford protection from biases arising through incorrectly specified response functions, as well as from natural variation. The authors obtain minimax fixed weights of the Mallows type under the assumption that the density of the independent variables is correctly specified, and they obtain adaptive weights when this assumption is relaxed. A simulation study indicates that one can expect appreciable gains in precision when the latter weights are used and the various sources of model uncertainty are present.

4.
We address the task of choosing prior weights for models that are to be used for weighted model averaging. Models that are very similar should usually be given smaller weights than models that are quite distinct. Otherwise, the importance of a model in the weighted average could be increased by augmenting the set of models with duplicates of the model or virtual duplicates of it. Similarly, the importance of a particular model feature (a certain covariate, say) could be exaggerated by including many models with that feature. Ways of forming a correlation matrix that reflects the similarity between models are suggested. Then, weighting schemes are proposed that assign prior weights to models on the basis of this matrix. The weighting schemes give smaller weights to models that are more highly correlated. Other desirable properties of a weighting scheme are identified, and we examine the extent to which these properties are held by the proposed methods. The weighting schemes are applied to real data, and prior weights, posterior weights and Bayesian model averages are determined. For these data, empirical Bayes methods were used to form the correlation matrices that yield the prior weights. Predictive variances are examined, as empirical Bayes methods can result in unrealistically small variances.

5.
ABSTRACT

We consider Pitman-closeness to evaluate the performance of univariate and multivariate forecasting methods. Optimal weights for the combination of forecasts are calculated with respect to this criterion. These weights depend on the assumed distribution of the individual forecast errors. In the normal case they are identical to the optimal weights under the MSE criterion (univariate case) and under the MMSE criterion (multivariate case). Further, we present a simple example to show how the different combination techniques perform, and how much the optimal multivariate combination can outperform various other combinations. Multivariate forecasts arise in practice, e.g. in econometrics, where forecasting institutes often estimate several economic variables.
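For context, the classical MSE-optimal combination of unbiased forecasts (Bates–Granger) uses weights w = Σ⁻¹1 / (1′Σ⁻¹1), with Σ the forecast-error covariance matrix. A minimal sketch, assuming unbiased forecasts and Σ estimated from a history of errors (this illustrates the MSE criterion the abstract refers to, not the Pitman-closeness derivation itself):

```python
import numpy as np

def optimal_combination_weights(errors):
    """MSE-optimal combination weights for unbiased forecasts:
    w = Sigma^{-1} 1 / (1' Sigma^{-1} 1), with Sigma the
    forecast-error covariance matrix estimated from `errors`
    (rows = time periods, columns = forecasters)."""
    Sigma = np.cov(errors, rowvar=False)
    ones = np.ones(Sigma.shape[0])
    s = np.linalg.solve(Sigma, ones)
    return s / s.sum()
```

The weights sum to one, and a forecaster with smaller error variance receives a larger weight.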

6.
All existing location-scale rank tests use equal weights for the components. We advocate the use of weighted combinations of statistics. This approach can partly be substantiated by the theory of locally most powerful tests. We specifically investigate a Wilcoxon-Mood combination. We give exact critical values for a range of weights. The asymptotic normality of the test statistic is proved under a general hypothesis and Chernoff-Savage conditions. The asymptotic relative efficiency of this test with respect to unweighted combinations shows that a careful choice of weights results in a gain in efficiency.

7.
In this paper, we consider the estimated weights of the tangency portfolio. We derive analytical expressions for the higher-order non-central and central moments of these weights when the returns are assumed to be independently and multivariate normally distributed. Moreover, closed-form expressions for the mean, variance, skewness and kurtosis of the estimated weights are obtained. Later, we complement our results with a simulation study in which data from the multivariate normal and t-distributions are simulated and the first four moments of the estimated weights are computed by Monte Carlo experiment. It is noteworthy that the distributional assumption on returns is important, especially for the first two moments. Finally, through an empirical illustration using returns of four financial indices listed on the NASDAQ stock exchange, we observe the presence of time dynamics in higher moments.
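The plug-in tangency portfolio weights have the well-known closed form w = Σ⁻¹(μ − r_f 1) / (1′Σ⁻¹(μ − r_f 1)); the paper studies the sampling moments of this estimator. A minimal sketch of the plug-in computation (the inputs here are illustrative sample estimates, not the paper's data):

```python
import numpy as np

def tangency_weights(mu, Sigma, rf=0.0):
    """Plug-in tangency portfolio weights:
    w = Sigma^{-1} (mu - rf) / (1' Sigma^{-1} (mu - rf))."""
    excess = np.asarray(mu, float) - rf     # excess expected returns
    s = np.linalg.solve(np.asarray(Sigma, float), excess)
    return s / s.sum()
```

Because μ and Σ are replaced by sample estimates, the resulting weights are random; quantifying their moments is exactly the problem the abstract addresses.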

8.
In this paper, we study moderate deviations for random weighted sums of extended negatively dependent (END) random variables, which are consistently-varying tailed and not necessarily identically distributed. When these END random variables are independent of their weights, and the weights are positive random variables with two-sided bounds, the results show that the END structure and the dependence among the weights have no effect on the asymptotic behavior of moderate deviations of partial sums and random sums.

9.
The major problem of mean–variance portfolio optimization is parameter uncertainty. Many methods have been proposed to tackle this problem, including shrinkage methods, resampling techniques, and imposing constraints on the portfolio weights. This paper suggests a new estimation method for mean–variance portfolio weights based on the concept of the generalized pivotal quantity (GPQ), for the case in which asset returns are multivariate normally distributed and serially independent. Both point and interval estimation of the portfolio weights are considered. Compared with Markowitz's mean–variance model and with resampling and shrinkage methods, the proposed GPQ method typically yields the smallest mean squared error for the point estimate of the portfolio weights and attains a satisfactory coverage rate for their simultaneous confidence intervals. Finally, we apply the proposed methodology to a portfolio rebalancing problem.

10.
11.
ABSTRACT

The important problem of discriminating between separate families of distributions is the theme of this work. The Bayesian significance test, FBST, is compared with the celebrated Cox test. The three families most used in survival analysis, lognormal, gamma and Weibull, are considered for the discrimination. A convex combination, with unknown weights, of the three densities is used for this discrimination. After these weights have been estimated, the one with the highest value indicates the best statistical model among the three. Another important feature considered is the parameterization used. All three densities are written as functions of the common population mean and variance. Including the weights, the number of parameters is reduced from eight (two for each density and two for the convex combination) to four (the common mean and variance plus two weights). Some numerical results from simulations are given. In these simulations, the results of the FBST are compared with those obtained with the Cox test. Two real examples illustrate the procedures.

12.
Multilevel modeling is an important tool for analyzing large-scale assessment data. However, standard multilevel modeling will typically give biased results for such complex survey data. This bias can be eliminated by introducing design weights, which must be used carefully as they can affect the results. The aim of this paper is to examine different approaches and to give recommendations concerning the handling of design weights in multilevel models when analyzing large-scale assessments such as TIMSS (the Trends in International Mathematics and Science Study). To this end, we examined real data from two countries and included a simulation study. The empirical analyses showed that using no weights, or only level-1 weights, could sometimes lead to misleading conclusions. The simulation study showed only small differences between the weighted and unweighted model estimates when informative design weights were used. However, the use of unscaled or improperly rescaled weights caused significant differences in some parameter estimates.
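One widely used rescaling of level-1 design weights scales them, within each cluster, to sum to the cluster sample size. The sketch below shows this convention as an illustration of what "rescaling" means here; it is one common scheme, not necessarily the exact one recommended by the paper:

```python
import numpy as np

def rescale_level1_weights(w, cluster_ids):
    """Rescale level-1 design weights so that, within each cluster,
    they sum to the cluster sample size (a common scaling used when
    fitting weighted multilevel models)."""
    w = np.asarray(w, float)
    cluster_ids = np.asarray(cluster_ids)
    out = np.empty_like(w)
    for c in np.unique(cluster_ids):
        m = cluster_ids == c
        out[m] = w[m] * m.sum() / w[m].sum()   # within-cluster rescaling
    return out
```

After rescaling, the weights within a cluster of size n_j sum to n_j, which stabilizes variance-component estimates relative to using raw weights.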

13.
Calibration on available auxiliary variables is widely used to increase the precision of parameter estimates. Singh and Sedory [Two-step calibration of design weights in survey sampling. Commun Stat Theory Methods. 2016;45(12):3510–3523] considered two-step calibration of design weights with a single auxiliary variable. In the first step, the design weights and calibrated weights are set proportional to each other for a given sample; in the second step, the proportionality constant is determined according to the investigator's objective, for example minimum mean squared error or bias reduction. In this paper, we suggest using two auxiliary variables for two-step calibration of the design weights and compare the results with the single-auxiliary-variable case for different sample sizes, based on simulated and real-life data sets. The results show that the two-step calibration estimator based on two auxiliary variables outperforms the single-auxiliary-variable estimator in terms of minimum mean squared error.
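The basic idea of calibration can be illustrated with the simplest single-auxiliary ratio case: scale the design weights so the calibrated weights reproduce a known population total of the auxiliary variable. This is a minimal sketch of that idea, not the two-step procedure of the paper:

```python
import numpy as np

def calibrate_ratio(d, x, x_total):
    """Ratio-type calibration with one auxiliary variable: scale the
    design weights d so the calibrated weights reproduce the known
    population total x_total of the auxiliary variable x."""
    d = np.asarray(d, float)
    x = np.asarray(x, float)
    return d * x_total / np.sum(d * x)
```

The calibrated weights satisfy the benchmark constraint sum(w_i * x_i) = x_total exactly, which is the defining property of calibration estimators.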

14.
We derive forecasts for Markov switching models that are optimal in the mean square forecast error (MSFE) sense by means of weighting observations. We provide analytic expressions of the weights conditional on the Markov states and conditional on state probabilities. This allows us to study the effect of uncertainty around states on forecasts. It emerges that, even in large samples, forecasting performance increases substantially when the construction of optimal weights takes uncertainty around states into account. Performance of the optimal weights is shown through simulations and an application to U.S. GNP, where using optimal weights leads to significant reductions in MSFE. Supplementary materials for this article are available online.

15.
The K-means clustering method is a widely adopted clustering algorithm in data mining and pattern recognition, where the partitions are made by minimizing the total within-group sum of squares based on a given set of variables. Weighted K-means clustering is an extension of the K-means method that assigns nonnegative weights to the set of variables. In this paper, we aim to obtain more meaningful and interpretable clusters by deriving the optimal variable weights for weighted K-means clustering. Specifically, we improve the weighted K-means clustering method by introducing a new algorithm to obtain the globally optimal variable weights based on the Karush-Kuhn-Tucker conditions. We present the mathematical formulation of the clustering problem, derive structural properties of the optimal weights, and implement a recursive algorithm to calculate them. Numerical examples on simulated and real data indicate that our method is superior in both clustering accuracy and computational efficiency.
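A minimal sketch of the basic weighted K-means idea (fixed, given variable weights; scaling variable j by sqrt(w_j) makes squared Euclidean distance equal the weighted distance sum_j w_j (x_j − c_j)²). The optimal-weight derivation via KKT conditions described in the abstract is not reproduced here:

```python
import numpy as np

def weighted_kmeans(X, k, w, n_iter=100, seed=0):
    """K-means on variable-weighted features: variable j is scaled by
    sqrt(w_j), so ordinary squared distances become weighted distances."""
    rng = np.random.default_rng(seed)
    Xw = X * np.sqrt(np.asarray(w, float))
    centers = Xw[rng.choice(len(Xw), k, replace=False)].copy()
    labels = np.zeros(len(Xw), dtype=int)
    for _ in range(n_iter):
        d = ((Xw[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        old = centers.copy()
        for j in range(k):
            pts = Xw[labels == j]
            if len(pts):                 # keep old center if a cluster empties
                centers[j] = pts.mean(axis=0)
        if np.allclose(centers, old):
            break
    return labels
```

Setting all weights to one recovers ordinary K-means; driving a weight toward zero effectively removes that variable from the clustering.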

16.
Ou Bianling et al., Statistical Research (《统计研究》), 2015, 32(10): 98-105
The spatial weight matrix is an important tool for describing spatial relationships among individuals; it is usually constructed from geographic distances between individuals and does not change over time. However, when the spatial relationships stem from economic/social/trade distances or from characteristics such as population mobility or climate, the spatial weight matrix may in essence vary over time. This study therefore proposes robust LM tests for panel data models with time-varying spatial weight matrices. Extensive Monte Carlo simulations show that, in terms of size and power, robust LM tests based on a misspecified time-invariant spatial weight matrix suffer large distortions, whereas robust LM tests based on the time-varying spatial weight matrix can effectively identify the type of spatial relationship in the panel data. In particular, the power of the time-varying tests is higher when the time dimension is long and the number of individuals is large.

17.
Regression tends to give very unstable and unreliable regression weights when predictors are highly collinear. Several methods have been proposed to counter this problem, a subset of which do so by finding components that summarize the information in the predictors and the criterion variables. The present paper compares six such methods (two of which are almost completely new) to ordinary regression: partial least squares (PLS), principal component regression (PCR), principal covariates regression, reduced-rank regression, and two variants of what is called power regression. The comparison is mainly done by means of a series of simulation studies in which data are constructed in various ways, with different degrees of collinearity and noise, and the methods are compared in terms of their ability to recover the population regression weights, as well as their prediction quality for the complete population. It turns out that recovery of regression weights under collinearity is often very poor for all methods, unless the regression weights lie in the subspace spanned by the first few principal components of the predictor variables; in those cases, PLS and PCR typically give the best recovery of regression weights. The picture is inconclusive, however, because, especially in the study with more realistic simulated data, PLS and PCR gave the poorest recovery of regression weights in conditions with relatively low noise and collinearity. It seems that PLS and PCR are particularly indicated in cases with much collinearity, whereas in other cases ordinary regression is preferable. As far as prediction is concerned, prediction suffers far less from collinearity than recovery of the regression weights does.

18.
To optimize the initial condition of the GM(1,1) power model, a linear-combination optimization method based on the new and old information in the original sequence is proposed. Under the objective of minimizing the sum of squared simulation errors, an optimization model for the combination weights of the initial condition is constructed, and a closed-form expression for the optimal combination weights is given. Finally, data on China's senior high school enrollment rate are used to verify the validity and superiority of the optimization model. The results show that the initial-condition optimization method effectively balances the weights of new and old information and improves the simulation and forecasting accuracy of the GM(1,1) power model.

19.
This paper describes a procedure for constructing a vector of regression weights. Under a regression superpopulation model, the ridge regression estimator that has minimum model mean squared error is derived. Through a simulation study, we compare the ridge regression weights, regression weights, quadratic programming weights, and raking ratio weights. The ridge regression procedure with weights bounded below by zero performed very well.
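For reference, the standard ridge estimator underlying such weights is β̂ = (X′X + λI)⁻¹X′y, which shrinks the coefficients toward zero as λ grows. A minimal sketch of this generic estimator (the survey-specific minimum-model-MSE choice of λ and the zero lower bound from the paper are not implemented here):

```python
import numpy as np

def ridge_weights(X, y, lam):
    """Ridge estimator beta = (X'X + lam*I)^{-1} X'y.
    lam = 0 recovers ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

Increasing λ trades a little bias for a reduction in variance, which is why ridge-type weights can dominate ordinary regression weights in mean squared error.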

20.
The problem of estimating a linear combination of weights, c′w, in a singular spring balance weighing design when the error structure takes the form E(ee′) = σ²G is studied. A lower bound for the variance of the estimated linear combination of weights is obtained, and a necessary and sufficient condition for this lower bound to be attained is given. The general results are applied to the case of the total of the weights. For a specified form of G, some optimum spring balance weighing designs for the estimated total weight are found.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号