Similar literature (20 results)
1.
L. Ferré & A. F. Yao, Statistics, 2013, 47(6): 475–488
Most of the usual multivariate methods have been extended to the context of functional data analysis. Our contribution concerns the study of sliced inverse regression (SIR) when the response variable is real but the regressor is a function. In the first part, we show that the relevant properties of SIR remain essentially the same in the functional context under suitable conditions. Unfortunately, the estimation procedure used in the multivariate case cannot be carried over directly to the functional one. We therefore propose a solution that overcomes this difficulty and show the consistency of the estimates of the model parameters.
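The abstract leaves the estimation details to the paper; purely as background, a minimal numpy sketch of the standard finite-dimensional SIR estimator (whiten X, slice on the ordered response, eigen-decompose the between-slice covariance of the slice means) is given below. The single-index toy model, slicing scheme and variable names are illustrative assumptions, not taken from the paper, whose functional-regressor estimator differs.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_directions=1):
    """Minimal sliced inverse regression (Li, 1991): scalar response, vector predictor."""
    n, p = X.shape
    # Whiten the predictors so that the transformed sample covariance is the identity.
    Xc = X - X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(X, rowvar=False))
    W = np.linalg.inv(L).T            # whitening matrix: Cov(Xc @ W) = I
    Z = Xc @ W

    # Slice the sample on the ordered response and average Z within each slice.
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)   # weighted between-slice covariance

    # Leading eigenvectors of M span the e.d.r. space in the whitened scale.
    eigval, eigvec = np.linalg.eigh(M)
    eta = eigvec[:, ::-1][:, :n_directions]
    return W @ eta                     # back-transform to the original X scale

# Illustrative use on a single-index model y = (x'b)^3 + noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
b = np.array([1.0, 2.0, 0.0, 0.0, 0.0])
y = (X @ b) ** 3 + rng.normal(size=500)
print(sir_directions(X, y).ravel())   # roughly proportional to b, up to scale and sign
```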

2.
A multivariate linear calibration problem, in which the response variable is multivariate and the explanatory variable is univariate, is considered. In this paper a class of generalized inverse regression estimators is proposed for multi-univariate linear calibration. It includes the classical estimator and the inverse regression (Krutchkoff) estimator as special cases. For the proposed estimator we derive expressions for the bias and the mean square error (MSE), and the behavior of these characteristics is investigated analytically. In addition, a numerical study confirms the existence of a generalized inverse regression estimator that improves on both the classical and the inverse regression estimators under the MSE criterion.
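The generalized estimator proposed in the abstract is not specified there; for orientation, here is a univariate sketch of the two benchmarks it interpolates between, the classical calibration estimator and the inverse regression (Krutchkoff) estimator. The linear toy data and function names are illustrative assumptions.

```python
import numpy as np

def calibration_estimates(x, y, y_new):
    """Classical and inverse (Krutchkoff) calibration estimates of x for a new response y_new.

    Univariate illustration only; the paper's generalized estimator handles a
    multivariate response and is not reproduced here.
    """
    xbar, ybar = x.mean(), y.mean()
    # Classical: regress y on x, then invert the fitted line at y_new.
    beta = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
    x_classical = xbar + (y_new - ybar) / beta
    # Inverse (Krutchkoff): regress x on y and predict directly.
    gamma = np.sum((x - xbar) * (y - ybar)) / np.sum((y - ybar) ** 2)
    x_inverse = xbar + gamma * (y_new - ybar)
    return x_classical, x_inverse

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(scale=0.2, size=30)
print(calibration_estimates(x, y, y_new=4.5))
```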

3.
In this paper we consider a semiparametric regression model involving a d-dimensional quantitative explanatory variable X and including a dimension reduction of X via an index β′X. In this model, the main goal is to estimate the Euclidean parameter β and to predict the real response variable Y conditionally on X. Our approach is based on the sliced inverse regression (SIR) method and optimal quantization in L^p-norm. We obtain the convergence of the proposed estimators of β and of the conditional distribution. Simulation studies show the good numerical behavior of the proposed estimators for finite sample sizes.

4.
5.
It is shown that the sliced inverse regression procedure proposed by Li corresponds to the maximum likelihood estimate obtained when the observations in each slice are samples from multivariate normal distributions with means in an affine manifold.

6.
The presence of multicollinearity among the explanatory variables has undesirable effects on the maximum likelihood estimator (MLE). The ridge estimator (RE) is widely used to overcome this issue, and it enjoys the advantage that its mean squared error (MSE) is less than that of the MLE. The inverse Gaussian regression (IGR) model is a well-known model in applications where the response variable is positively skewed. The purpose of this paper is to derive the RE of the IGR model under multicollinearity. In addition, the performance of this estimator is investigated under numerous methods for estimating the ridge parameter. Monte Carlo simulation results indicate that the suggested estimator performs better than the MLE in terms of MSE. Furthermore, a real chemometrics dataset is analysed, and the results demonstrate the excellent performance of the suggested estimator when multicollinearity is present in the IGR model.
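The abstract does not write out the estimator. A form commonly used in this literature is the ridge-type adjustment of the maximum likelihood fit, β̂_k = (X'ŴX + kI)^{-1} X'ŴX β̂_MLE, with Ŵ the information weights at the ML solution; assuming (this is an assumption, not stated in the abstract) that the paper's IGR ridge estimator has this shape, a minimal sketch is shown below with mocked ML inputs, since fitting the inverse Gaussian model itself is out of scope here.

```python
import numpy as np

def glm_ridge_adjust(X, W_diag, beta_mle, k):
    """Ridge-type adjustment of a GLM maximum likelihood fit:
    beta_k = (X'WX + kI)^{-1} X'WX beta_mle.

    W_diag are the IRLS/information weights at the ML fit. For the inverse
    Gaussian regression model these come from the fitted means; they are mocked
    below because fitting the IGR model is outside the scope of this sketch.
    """
    p = X.shape[1]
    XtWX = X.T @ (W_diag[:, None] * X)
    return np.linalg.solve(XtWX + k * np.eye(p), XtWX @ beta_mle)

# Mock inputs standing in for an IGR maximum likelihood fit (illustrative only).
rng = np.random.default_rng(11)
X = rng.normal(size=(50, 3))
beta_mle = np.array([0.8, -0.4, 1.2])
W_diag = rng.uniform(0.5, 2.0, size=50)
print(glm_ridge_adjust(X, W_diag, beta_mle, k=0.5))
```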

7.
We consider a regression analysis of a multivariate response on a vector of predictors. In this article, we develop a sliced inverse regression-based method for reducing the dimension of the predictors without requiring a prespecified parametric model. Our proposed method preserves as much regression information as possible. We derive the asymptotic weighted chi-squared test for the dimension. Simulation results are reported and comparisons are made with three methods: most predictable variates, k-means inverse regression and the canonical correlation approach.

8.
A new method for estimating the dimension of a regression at the outset of an analysis is proposed. A linear subspace spanned by projections of the regressor vector X, which contains part or all of the modelling information for the regression of a vector Y on X, and its dimension are estimated by means of parametric inverse regression. Smooth parametric curves are fitted to the p inverse regressions via a multivariate linear model. No restrictions are placed on the distribution of the regressors. The estimate of the dimension of the regression is based on optimal estimation procedures. A simulation study shows the method to be more powerful than sliced inverse regression in some situations.

9.
Many geophysical regression problems require the analysis of large (more than 10^4 values) data sets, and, because the data may represent mixtures of concurrent natural processes with widely varying statistical properties, contamination of both response and predictor variables is common. Existing bounded influence or high breakdown point estimators frequently lack the ability to eliminate extremely influential data and/or the computational efficiency to handle large data sets. A new bounded influence estimator is proposed that combines high asymptotic efficiency for normal data, high breakdown point behaviour with contaminated data and computational simplicity for large data sets. The algorithm combines a standard M-estimator, which downweights data corresponding to extreme regression residuals, with removal of overly influential predictor values (leverage points) on the basis of the statistics of the hat matrix diagonal elements. For this, the exact distribution of the hat matrix diagonal elements p_ii for complex multivariate Gaussian predictor data is shown to be β(p_ii; m, N − m), where N is the number of data and m is the number of parameters. Real geophysical data from an auroral zone magnetotelluric study which exhibit severe outlier and leverage point contamination are used to illustrate the estimator's performance. The examples also demonstrate the utility of looking at both the residual and the hat matrix distributions through quantile–quantile plots to diagnose robust regression problems.
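A minimal sketch of the leverage-point side of the algorithm described above, flagging rows whose hat matrix diagonal h_ii exceeds an upper quantile of a Beta(m, N − m) reference distribution, is given below. The exact β(p_ii; m, N − m) result quoted in the abstract is for complex multivariate Gaussian predictors; using it on real-valued data here is purely illustrative, and the cutoff level alpha is an assumption.

```python
import numpy as np
from scipy import stats

def flag_leverage_points(X, alpha=0.01):
    """Flag leverage points from the hat matrix diagonal h_ii = x_i'(X'X)^{-1}x_i."""
    N, m = X.shape
    h_diag = np.einsum('ij,ji->i', X, np.linalg.solve(X.T @ X, X.T))
    cutoff = stats.beta.ppf(1 - alpha, m, N - m)   # illustrative reference distribution
    return h_diag, h_diag > cutoff

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
X[0] *= 8.0                      # plant one extreme predictor row
h, flagged = flag_leverage_points(X)
print(np.where(flagged)[0])      # row 0 (the planted point) should be among the flagged indices
```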

10.
The family of inverse regression estimators recently proposed by Cook and Ni has proven effective in dimension reduction by transforming the high-dimensional predictor vector to its low-dimensional projections. We propose a general shrinkage estimation strategy for the entire inverse regression estimation family that is capable of simultaneous dimension reduction and variable selection. We demonstrate that the new estimators achieve consistency in variable selection without requiring any traditional model, while retaining the root-n estimation consistency of the dimension reduction basis. We also show the effectiveness of the new estimators through both simulation and real data analysis.

11.
Variable selection in regression analysis is important because it can simplify the model and enhance predictability. After variable selection, however, the resulting working model may be biased when it does not contain all of the significant variables. As a result, the commonly used parameter estimation is either inconsistent or requires estimating a high-dimensional nuisance parameter under very strong assumptions for consistency, and the corresponding confidence region is invalid when the bias is relatively large. In this paper we introduce a simulation-based procedure to reformulate a new model so as to reduce the bias of the working model, with no need to estimate a high-dimensional nuisance parameter. The resulting estimators of the parameters in the working model are asymptotically normally distributed whether the bias is small or large. Furthermore, together with the empirical likelihood, we build simulation-based confidence regions for the parameters in the working model. The newly proposed estimators and confidence regions outperform existing ones in the sense of consistency.

12.
In multiple linear regression analysis, the ridge regression estimator and the Liu estimator are often used to address multicollinearity. Besides multicollinearity, outliers are also a problem in multiple linear regression analysis. We propose new biased estimators based on the least trimmed squares (LTS) ridge estimator and the LTS Liu estimator for the case in which both outliers and multicollinearity are present. For this purpose, a simulation study is conducted in order to compare the robust ridge estimator and the robust Liu estimator in terms of their effectiveness, measured by the mean square error. In our simulations, the behavior of the new biased estimators is examined for three types of outliers: X-space outliers, Y-space outliers, and X- and Y-space outliers. Results for a number of different illustrative cases are presented. This paper also provides results for the robust ridge regression and robust Liu estimators based on a real-life data set combining the problems of multicollinearity and outliers.
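For reference, the two non-robust building blocks the paper combines with least trimmed squares are the ridge and Liu estimators; a minimal sketch of both (least squares versions only, the LTS variants are not reproduced) is given below on an illustrative collinear design.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares estimate."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, k):
    """Ridge estimator: (X'X + kI)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

def liu(X, y, d):
    """Liu estimator: (X'X + I)^{-1} (X'y + d * b_OLS)."""
    return np.linalg.solve(X.T @ X + np.eye(X.shape[1]), X.T @ y + d * ols(X, y))

# Collinear toy design: both biased estimators stabilize the wild OLS coefficients.
rng = np.random.default_rng(12)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + rng.normal(scale=0.01, size=100), rng.normal(size=100)])
y = X @ np.array([1.0, 1.0, 0.5]) + rng.normal(size=100)
print(ols(X, y), ridge(X, y, k=1.0), liu(X, y, d=0.5))
```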

13.
It is well known that the ratio and product estimators have the limitation that their efficiency cannot exceed that of the linear regression estimator. This paper develops a new approach to ratio estimation that produces a more precise and efficient ratio estimator, superior to the regression estimator in both efficiency and bias. An empirical study is given.
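The abstract does not specify the new estimator; for context, a short sketch of the two classical benchmarks it is measured against, the ratio estimator ȳ(X̄/x̄) and the linear regression estimator ȳ + b(X̄ − x̄) of a population mean with known auxiliary mean X̄, follows. The toy population is an illustrative assumption.

```python
import numpy as np

def ratio_estimate(y_s, x_s, X_pop_mean):
    """Classical ratio estimator of the population mean of y: ybar * (Xbar / xbar)."""
    return y_s.mean() * X_pop_mean / x_s.mean()

def regression_estimate(y_s, x_s, X_pop_mean):
    """Linear regression estimator: ybar + b * (Xbar - xbar)."""
    b = np.cov(x_s, y_s)[0, 1] / np.var(x_s, ddof=1)
    return y_s.mean() + b * (X_pop_mean - x_s.mean())

# Toy population with y roughly proportional to x, the case where the ratio
# estimator is competitive with the regression estimator.
rng = np.random.default_rng(4)
x = rng.uniform(10, 50, size=1000)
y = 3.0 * x + rng.normal(scale=5.0, size=1000)
sample = rng.choice(1000, size=50, replace=False)
print(y.mean(),
      ratio_estimate(y[sample], x[sample], x.mean()),
      regression_estimate(y[sample], x[sample], x.mean()))
```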

14.
In this article, we consider the problem of variable selection in linear regression when multicollinearity is present in the data. It is well known that in the presence of multicollinearity, the performance of the least squares (LS) estimator of the regression parameters is not satisfactory. Consequently, subset selection methods such as Mallows' Cp, which are based on LS estimates, lead to the selection of inadequate subsets. To overcome the problem of multicollinearity in subset selection, a new subset selection algorithm based on the ridge estimator is proposed. It is shown that the new algorithm is a better alternative to Mallows' Cp when the data exhibit multicollinearity.
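The ridge-based algorithm itself is not given in the abstract; as background, a minimal sketch of the least squares Mallows' Cp subset search it is intended to replace is shown below, on an illustrative collinear design.

```python
import numpy as np
from itertools import combinations

def mallows_cp(X_full, y, subset):
    """Mallows' Cp = SSE_subset / sigma2_full - n + 2 * p_subset (intercept included)."""
    n = len(y)

    def fit_sse(Z):
        Z1 = np.column_stack([np.ones(n), Z])
        beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
        resid = y - Z1 @ beta
        return resid @ resid, Z1.shape[1]

    sse_full, p_full = fit_sse(X_full)
    sigma2 = sse_full / (n - p_full)              # error variance from the full model
    sse_sub, p_sub = fit_sse(X_full[:, list(subset)])
    return sse_sub / sigma2 - n + 2 * p_sub

# Illustrative collinear design; exhaustive search over all non-empty subsets.
rng = np.random.default_rng(5)
x1 = rng.normal(size=80)
X = np.column_stack([x1, x1 + rng.normal(scale=0.05, size=80),
                     rng.normal(size=80), rng.normal(size=80)])
y = X @ np.array([1.0, 0.0, 2.0, 0.0]) + rng.normal(size=80)
subsets = [s for k in range(1, 5) for s in combinations(range(4), k)]
print(min(subsets, key=lambda s: mallows_cp(X, y, s)))   # Cp-selected subset of columns
```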

15.
In this article, a two-parameter estimator is proposed to combat multicollinearity in the negative binomial regression model. The proposed two-parameter estimator is a general estimator that includes the maximum likelihood (ML) estimator, the ridge estimator (RE) and the Liu estimator as special cases. Some properties of the asymptotic mean squared error (MSE) are derived; necessary and sufficient conditions for the superiority of the two-parameter estimator over the ML estimator, and sufficient conditions for its superiority over the RE and the Liu estimator, are obtained in the asymptotic MSE matrix sense. Furthermore, several methods and three rules for choosing appropriate shrinkage parameters are proposed. Finally, a Monte Carlo simulation study is given to illustrate some of the theoretical results.
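The abstract does not state the estimator's form. One standard linear-model two-parameter estimator with exactly these special cases is β̂(k, d) = (X'X + kI)^{-1}(X'y + kd·β̂_OLS), which reduces to OLS for k = 0, to ridge for d = 0 and to the Liu estimator for k = 1; whether the paper uses this parameterization (applied to the negative binomial likelihood rather than least squares) is an assumption, and the sketch below shows only the least squares version.

```python
import numpy as np

def two_parameter_estimator(X, y, k, d):
    """Linear-model two-parameter estimator (X'X + kI)^{-1}(X'y + k*d*b_OLS).

    Reduces to OLS for k = 0, to ridge for d = 0 and to the Liu estimator for k = 1.
    The paper's negative binomial likelihood analogue is assumed, not shown.
    """
    p = X.shape[1]
    b_ols = np.linalg.solve(X.T @ X, X.T @ y)
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y + k * d * b_ols)

rng = np.random.default_rng(13)
X = rng.normal(size=(60, 3))
y = X @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=60)
print(two_parameter_estimator(X, y, k=0.5, d=0.3))
```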

16.
It is well known in the literature on multicollinearity that one of its major consequences for the ordinary least squares estimator is that the estimator produces large sampling variances, which in turn might inappropriately lead to the exclusion of otherwise significant coefficients from the model. To circumvent this problem, two accepted estimation procedures that are often suggested are the restricted least squares method and the ridge regression method. While the former leads to a reduction in the sampling variance of the estimator, the latter ensures a smaller mean square error value for the estimator. In this paper we propose a new estimator based on a criterion that combines the ideas underlying these two estimators. The standard properties of this new estimator are studied. It is also shown that this estimator is superior to both the restricted least squares and the ordinary ridge regression estimators by the criterion of mean square error of the estimator of the regression coefficients when the restrictions are indeed correct. The conditions for the superiority of this estimator over the other two are also derived for the situation when the restrictions are not correct.
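Ridge was sketched in an earlier entry; the other ingredient referred to here, the restricted least squares estimator under exact linear restrictions Rβ = r, is sketched below. The combined estimator proposed in the paper is not reproduced, and the restriction used in the example is an illustrative assumption.

```python
import numpy as np

def restricted_ls(X, y, R, r):
    """Restricted least squares under linear restrictions R beta = r:
    b_RLS = b_OLS + (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (r - R b_OLS)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y                                   # OLS
    A = XtX_inv @ R.T @ np.linalg.inv(R @ XtX_inv @ R.T)
    return b + A @ (r - R @ b)

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 3))
y = X @ np.array([1.0, 1.0, 0.0]) + rng.normal(size=60)
R = np.array([[1.0, -1.0, 0.0]])   # illustrative restriction: beta1 = beta2
r = np.array([0.0])
print(restricted_ls(X, y, R, r))
```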

17.
This article presents a non-stochastic version of the Generalized Ridge Regression estimator that arises from a discussion of the properties of a Generalized Ridge Regression estimator whose shrinkage parameters are found to be close to their upper bounds. The resulting estimator takes the form of a shrinkage estimator that is superior to both the Ordinary Least Squares estimator and the James-Stein estimator under certain conditions. A numerical study is provided to investigate the range of the signal-to-noise ratio under which the new estimator dominates the James-Stein estimator with respect to the prediction mean square error.
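As a point of reference only, the textbook James-Stein estimator of a normal mean vector, the benchmark the proposed shrinkage estimator is compared against, is sketched below; the regression form used in the paper is not reproduced, and σ² is assumed known in this illustration.

```python
import numpy as np

def james_stein(z, sigma2=1.0):
    """Textbook James-Stein estimator of theta for z ~ N_p(theta, sigma2*I), p >= 3."""
    p = len(z)
    return (1.0 - (p - 2) * sigma2 / np.dot(z, z)) * z

# Average squared error over repeated draws: JS beats the unbiased estimator z.
rng = np.random.default_rng(7)
theta = np.linspace(-0.5, 0.5, 10)
mse_raw, mse_js = 0.0, 0.0
for _ in range(2000):
    z = theta + rng.normal(size=10)
    mse_raw += np.sum((z - theta) ** 2)
    mse_js += np.sum((james_stein(z) - theta) ** 2)
print(mse_raw / 2000, mse_js / 2000)   # the James-Stein risk is smaller
```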

18.
In this paper a new robust estimator, the modified median estimator, is introduced and studied for the logistic regression model. This estimator is based on the median estimator considered in Hobza et al. [Robust median estimator in logistic regression. J Stat Plan Inference. 2008;138:3822–3840]. Its asymptotic distribution is obtained. Using the modified median estimator, we also consider a Wald-type test statistic for testing linear hypotheses in the logistic regression model, and we obtain its asymptotic distribution under the assumption of random regressors. An extensive simulation study is presented in order to analyse the efficiency as well as the robustness of the modified median estimator and of the Wald-type test based on it.

19.
In regression analysis, the restricted principal components regression estimator has been proposed to deal with the problem of multicollinearity. In this paper, we compare the restricted principal components regression estimator, the principal components regression estimator and the ordinary least squares estimator with one another under Pitman's closeness criterion. We show that the restricted principal components regression estimator is always superior to the principal components regression estimator; that, under certain conditions, the restricted principal components regression estimator is superior to the ordinary least squares estimator; and that, under certain conditions, the principal components regression estimator is superior to the ordinary least squares estimator, all under Pitman's closeness criterion.
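The restricted version and the Pitman closeness comparisons are not reproduced here; as background, a minimal numpy sketch of the (unrestricted) principal components regression estimator β̂ = T_r (T_r'X'X T_r)^{-1} T_r'X'y, with T_r the r leading eigenvectors of X'X, is given below on an illustrative collinear design.

```python
import numpy as np

def pcr_estimator(X, y, r):
    """Principal components regression: keep the r leading eigenvectors of X'X."""
    eigval, eigvec = np.linalg.eigh(X.T @ X)
    T = eigvec[:, ::-1][:, :r]                     # r leading eigenvectors
    inner = np.linalg.solve(T.T @ X.T @ X @ T, T.T @ X.T @ y)
    return T @ inner

rng = np.random.default_rng(8)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + rng.normal(scale=0.01, size=100), rng.normal(size=100)])
y = X @ np.array([1.0, 1.0, 0.5]) + rng.normal(size=100)
print(pcr_estimator(X, y, r=2))   # drops the near-null direction caused by collinearity
```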

20.
New bounds are obtained for the variance of the minimum variance unbiased estimator of p in inverse sampling. A generalized procedure for further improving the bounds is also discussed.
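The abstract does not restate the estimator whose variance is being bounded. Assuming the classical setting (Bernoulli trials continued until k successes are observed, with n the total number of trials), the minimum variance unbiased estimator of p is p̂ = (k − 1)/(n − 1); a tiny simulation illustrating its unbiasedness follows. The parameter values are illustrative.

```python
import numpy as np

def inverse_sample(rng, p, k):
    """Number of Bernoulli(p) trials needed to observe k successes."""
    n, successes = 0, 0
    while successes < k:
        n += 1
        successes += rng.random() < p
    return n

rng = np.random.default_rng(9)
p_true, k = 0.3, 5
estimates = [(k - 1) / (inverse_sample(rng, p_true, k) - 1) for _ in range(20000)]
print(np.mean(estimates))   # close to p_true = 0.3, illustrating unbiasedness
```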
