Similar Literature
20 similar documents found
1.
Using mean absolute deviation, we compare the efficacy of two new parametric conditional error rate estimators with six others, four of which are well known. The performance of both new estimators is found to be superior to that of the six competing estimators examined in this paper, especially when the ratio of the training sample size to the feature dimensionality is small.

2.
We suggest a generalized spatial system GMM (SGMM) estimator for short dynamic panel data models with spatial errors and fixed effects when n is large and T is fixed (and usually small). Monte Carlo studies are conducted to compare its finite-sample properties with those of the quasi-maximum likelihood estimator (QMLE). The results show that QMLE, with a proper approximation for the initial observations, performs better than SGMM in general cases; however, it performs poorly when spatial dependence is large. QMLE and SGMM each perform better for different parameters when there is unknown heteroscedasticity in the disturbances and the data are highly persistent. Neither estimator is sensitive to the treatment of initial values. Estimation of the spatial autoregressive parameter is generally biased when either the data are highly persistent or spatial dependence is large. The choice of spatial weights matrix and the sign of the spatial dependence do affect the performance of the estimators, especially in the case of heteroscedastic disturbances. We also provide empirical guidelines for applying the model.
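The paper's Monte Carlo design is not reproduced here, but a minimal numpy sketch of a data-generating process of this general type (a dynamic panel with individual fixed effects and SAR-type spatial errors) is given below. The ring-shaped weight matrix, the parameter values, and the burn-in scheme are illustrative assumptions, not the specification used in the paper.

```python
import numpy as np

def ring_weights(n, k=1):
    """Row-normalized spatial weight matrix: each unit's neighbors are the
    k units ahead and behind on a ring (an illustrative choice)."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(1, k + 1):
            W[i, (i + j) % n] = 1.0
            W[i, (i - j) % n] = 1.0
    return W / W.sum(axis=1, keepdims=True)

def simulate_dynamic_spatial_panel(n=100, T=5, gamma=0.5, lam=0.4,
                                   sigma_mu=1.0, burn=50, rng=None):
    """y_it = gamma*y_{i,t-1} + mu_i + u_it,  with  u_t = lam*W u_t + eps_t."""
    rng = np.random.default_rng(rng)
    W = ring_weights(n)
    A = np.linalg.inv(np.eye(n) - lam * W)   # maps eps_t to the spatial error u_t
    mu = sigma_mu * rng.standard_normal(n)   # individual fixed effects
    y = np.zeros(n)
    Y = np.empty((T, n))
    for t in range(burn + T):                # burn-in reduces start-up effects
        u = A @ rng.standard_normal(n)
        y = gamma * y + mu + u
        if t >= burn:
            Y[t - burn] = y
    return Y, W

Y, W = simulate_dynamic_spatial_panel()
print(Y.shape)   # (T, n) panel for one Monte Carlo replication
```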

3.
We use cumulants to derive Bayesian credible intervals for wavelet regression estimates. The first four cumulants of the posterior distribution of the estimates are expressed in terms of the observed data and integer powers of the mother wavelet functions. These powers are closely approximated by linear combinations of wavelet scaling functions at an appropriate finer scale. Hence, a suitable modification of the discrete wavelet transform allows the posterior cumulants to be found efficiently for any given data set. Johnson transformations then yield the credible intervals themselves. Simulations show that these intervals have good coverage rates, even when the underlying function is inhomogeneous, where standard methods fail. In the case where the curve is smooth, the performance of our intervals remains competitive with established nonparametric regression methods.
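For reference, the first four cumulants used in such a construction are related to the raw posterior moments \(\mu'_r = \mathrm{E}(\theta^r \mid \text{data})\) by the standard identities below; this is a general fact about cumulants, not a result specific to the paper.

```latex
\begin{aligned}
\kappa_1 &= \mu'_1, \\
\kappa_2 &= \mu'_2 - {\mu'_1}^2, \\
\kappa_3 &= \mu'_3 - 3\mu'_1\mu'_2 + 2{\mu'_1}^3, \\
\kappa_4 &= \mu'_4 - 4\mu'_1\mu'_3 - 3{\mu'_2}^2 + 12{\mu'_1}^2\mu'_2 - 6{\mu'_1}^4 .
\end{aligned}
```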

4.
In this paper, we propose and evaluate the performance of different parametric and nonparametric estimators of the population coefficient of variation under ranked set sampling (RSS) from a normal distribution. The performance of the proposed estimators is assessed in terms of the bias and relative efficiency obtained from a Monte Carlo simulation study. An application to anthropometric measurement data from a human population is also presented. The results show that the proposed RSS estimators have a markedly lower mean squared error than the usual estimator obtained via simple random sampling. The maximum likelihood estimator is also found to be superior, provided the assumptions of normality and perfect ranking are met.
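To illustrate the sampling mechanics only, here is a minimal numpy sketch of balanced ranked set sampling under perfect ranking together with the naive coefficient-of-variation estimator; it does not implement the maximum likelihood estimator studied in the paper, and the set size, number of cycles, and normal population are arbitrary choices.

```python
import numpy as np

def ranked_set_sample(population_draw, set_size, cycles, rng=None):
    """Balanced RSS under perfect ranking: in each cycle, for rank r we draw
    `set_size` units, sort them, and keep only the r-th order statistic."""
    rng = np.random.default_rng(rng)
    sample = []
    for _ in range(cycles):
        for r in range(set_size):
            units = population_draw(set_size, rng)
            sample.append(np.sort(units)[r])
    return np.asarray(sample)

def cv_estimator(x):
    """Naive coefficient-of-variation estimator: sample SD over sample mean."""
    return x.std(ddof=1) / x.mean()

draw = lambda m, rng: rng.normal(loc=10.0, scale=2.0, size=m)  # N(10, 2^2), true CV = 0.2
rss = ranked_set_sample(draw, set_size=5, cycles=20, rng=1)    # total sample size 100
srs = draw(100, np.random.default_rng(1))                      # SRS of the same total size
print(cv_estimator(rss), cv_estimator(srs))
```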

5.
In simulation studies for discriminant analysis, misclassification errors are often computed by the Monte Carlo method, testing a classifier on large samples generated from known populations. Although large samples are expected to represent the underlying distributions closely, they may fail to do so in a small interval or region and thus may lead to unexpected results. We demonstrate with an example that the LDA misclassification error computed via the Monte Carlo method may often be smaller than the Bayes error. We give a rigorous explanation and recommend a method for properly computing misclassification errors.
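The phenomenon is easy to reproduce in a toy setting. The sketch below uses two equal-prior univariate normal classes (an illustrative setup, not the paper's example): it trains a nearest-estimated-mean rule, estimates its error on a finite Monte Carlo test sample, and counts how often that estimate falls below the exact Bayes error.

```python
import numpy as np
from scipy.stats import norm

def one_replication(rng, delta=1.0, n_train=20, n_test=1000):
    """Train a nearest-estimated-mean (univariate LDA) rule and estimate its
    error by Monte Carlo on a finite test sample."""
    x0 = rng.normal(0.0, 1.0, n_train)          # training sample, class 0
    x1 = rng.normal(delta, 1.0, n_train)        # training sample, class 1
    cutoff = (x0.mean() + x1.mean()) / 2        # classify as class 1 if x > cutoff
    t0 = rng.normal(0.0, 1.0, n_test)
    t1 = rng.normal(delta, 1.0, n_test)
    return 0.5 * np.mean(t0 > cutoff) + 0.5 * np.mean(t1 <= cutoff)

rng = np.random.default_rng(0)
delta = 1.0
bayes = norm.cdf(-delta / 2)                     # exact Bayes error for equal priors
errors = np.array([one_replication(rng, delta) for _ in range(500)])

# The true error of any trained rule is >= the Bayes error, but the Monte Carlo
# *estimate* of that error can fall below the Bayes error by sampling variation.
print(f"Bayes error: {bayes:.4f}")
print(f"Share of replications with MC estimate below the Bayes error: "
      f"{np.mean(errors < bayes):.2f}")
```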

6.
This paper considers estimation of an unknown distribution parameter in situations where we believe that the parameter belongs to a finite interval. For such situations we propose an interval shrinkage approach that coherently combines an unbiased conventional estimator with non-sample information about the range of plausible parameter values. The approach is based on an infeasible interval shrinkage estimator that uniformly dominates the underlying conventional estimator with respect to the mean squared error criterion. This infeasible estimator allows us to obtain useful feasible counterparts. The properties of these feasible interval shrinkage estimators are illustrated both in a simulation study and in empirical examples.
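The estimator sketched below is not the one proposed in the paper; it is only a generic illustration of the interval shrinkage idea, pulling an unbiased estimate toward the midpoint of the plausible interval with a fixed weight and checking the mean squared error in a toy Monte Carlo experiment.

```python
import numpy as np

def interval_shrinkage(theta_hat, a, b, w):
    """Illustrative interval shrinkage: pull the unbiased estimate toward the
    midpoint of the plausible interval [a, b] with weight w in [0, 1]."""
    return (1.0 - w) * theta_hat + w * 0.5 * (a + b)

# Toy Monte Carlo: estimate a normal mean believed to lie in [0, 1].
rng = np.random.default_rng(42)
theta, sigma, n, reps = 0.3, 1.0, 10, 100_000
a, b, w = 0.0, 1.0, 0.3

theta_hat = theta + sigma / np.sqrt(n) * rng.standard_normal(reps)  # sample means
theta_shr = interval_shrinkage(theta_hat, a, b, w)

mse = lambda est: np.mean((est - theta) ** 2)
print(f"MSE unbiased : {mse(theta_hat):.4f}")   # about sigma^2 / n = 0.100
print(f"MSE shrinkage: {mse(theta_shr):.4f}")   # smaller here, since theta lies inside [a, b]
```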

7.
Nonparametric density estimation in the presence of measurement error is considered. The usual kernel deconvolution estimator seeks to account for the contamination in the data by employing a modified kernel. In this paper a new approach based on a weighted kernel density estimator is proposed. Theoretical motivation is provided by the existence of a weight vector that perfectly counteracts the bias in density estimation without generating an excessive increase in variance. In practice a data-driven method of weight selection is required. Our strategy is to minimize the discrepancy between a standard kernel estimate computed from the contaminated data on the one hand, and the convolution of the weighted deconvolution estimate with the measurement error density on the other. We consider a direct implementation of this approach, in which the weights are optimized subject to sum and non-negativity constraints, and a regularized version in which the objective function includes a ridge-type penalty. Numerical tests suggest that weighted kernel estimation can lead to tangible improvements in performance over the usual kernel deconvolution estimator. Furthermore, weighted kernel estimates are free from the problem of negative estimates in the tails that can occur with modified kernels. The weighted kernel approach generalizes in a straightforward manner to multivariate deconvolution density estimation.
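A minimal sketch of this weight-selection strategy is given below, under the simplifying assumptions of a Gaussian kernel and Gaussian measurement error (so that convolving the weighted estimate with the error density has a closed form). The bandwidth, error standard deviation, ridge weight, and evaluation grid are all illustrative choices, not those of the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n, sigma_u, h = 100, 0.4, 0.3                  # error SD and kernel bandwidth (illustrative)
x = rng.normal(0.0, 1.0, n)                    # latent X (never observed)
y = x + rng.normal(0.0, sigma_u, n)            # contaminated observations Y = X + U

grid = np.linspace(-4, 4, 121)

def kde(points, weights, bandwidth):
    """Weighted Gaussian kernel density estimate evaluated on `grid`."""
    z = (grid[:, None] - points[None, :]) / bandwidth
    return (norm.pdf(z) / bandwidth) @ weights

g_hat = kde(y, np.full(n, 1.0 / n), h)         # standard KDE of the contaminated density

def objective(w, ridge=1e-3):
    # Convolving a Gaussian-kernel estimate (bandwidth h) placed at the Y_i with
    # N(0, sigma_u^2) error gives a Gaussian kernel estimate with bandwidth
    # sqrt(h^2 + sigma_u^2); match it to g_hat, plus a ridge-type penalty.
    conv = kde(y, w, np.sqrt(h**2 + sigma_u**2))
    return np.sum((conv - g_hat) ** 2) + ridge * np.sum((w - 1.0 / n) ** 2)

res = minimize(objective, x0=np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, None)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])

f_hat = kde(y, res.x, h)                       # weighted deconvolution-style estimate of f
print(res.success, f_hat.max())
```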

8.
Andrews et al. (1972) carried out an extensive Monte Carlo study of robust estimators of location. Their conclusion was that the Hampel and the skipped estimators, as classes, seemed preferable to some of the other currently fashionable estimators. The present study extends this work to include estimators not previously examined. The estimators are compared over short-tailed as well as long-tailed alternatives and also over some dependent data generated by first-order autoregressive schemes. The conclusions of the present study are threefold. First, within our limited study, none of the so-called robust estimators is very efficient in short-tailed situations; more work seems to be necessary here. Second, none of the estimators performs very well in dependent-data situations, particularly when the correlation is large and positive, which appears to be a rather pressing problem. Finally, for long-tailed alternatives, the Hampel estimators and Hogg-type adaptive versions of the Hampels are the strongest classes. The adaptive Hampels neither uniformly outperform the Hampels nor are uniformly outperformed by them; however, the superiority in terms of maximum relative efficiency goes to the adaptive Hampels, which hold up better under their worst performance.
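For concreteness, a minimal sketch of a Hampel-type M-estimator of location is shown below, solved by iteratively reweighted averaging with the scale fixed at the normalized MAD; the tuning constants are common illustrative values, and none of the adaptive or skipped variants from the study are implemented.

```python
import numpy as np

def hampel_psi(u, a=1.7, b=3.4, c=8.5):
    """Hampel three-part redescending psi function (illustrative tuning constants)."""
    au = np.abs(u)
    return np.where(au <= a, u,
           np.where(au <= b, a * np.sign(u),
           np.where(au <= c, a * np.sign(u) * (c - au) / (c - b), 0.0)))

def hampel_location(x, n_iter=50):
    """M-estimate of location with Hampel psi, computed by iteratively
    reweighted averaging; scale is fixed at the normalized MAD."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    s = 1.4826 * np.median(np.abs(x - np.median(x)))   # normalized MAD scale
    for _ in range(n_iter):
        u = (x - mu) / s
        w = np.where(u == 0.0, 1.0, hampel_psi(u) / np.where(u == 0.0, 1.0, u))
        mu = np.sum(w * x) / np.sum(w)
    return mu

rng = np.random.default_rng(7)
clean = rng.normal(0.0, 1.0, 95)
outliers = rng.normal(0.0, 10.0, 5)               # long-tailed contamination
sample = np.concatenate([clean, outliers])
print(np.mean(sample), hampel_location(sample))   # the Hampel estimate is far less affected
```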

9.
Numerous estimation techniques for regression models have been proposed. These procedures differ in how sample information is used in the estimation procedure. The efficiency of least squares (OLS) estimators implicitly assumes normally distributed residuals and is very sensitive to departures from normality, particularly to "outliers" and thick-tailed distributions. Least absolute deviation (LAD) estimators are less sensitive to outliers and are optimal for Laplace random disturbances, but not for normal errors. This paper reports Monte Carlo comparisons of OLS, LAD, two robust estimators discussed by Huber, three partially adaptive estimators, Newey's generalized method of moments estimator, and an adaptive maximum likelihood estimator based on a normal kernel studied by Manski. This paper is the first to compare the relative performance of some adaptive robust estimators (partially adaptive and adaptive procedures) with some common nonadaptive robust estimators. The partially adaptive estimators are based on three flexible parametric distributions for the errors. These include the power exponential (Box-Tiao) and generalized t distributions, as well as a distribution for the errors which is not necessarily symmetric. The adaptive procedures are "fully iterative" rather than one-step estimators. The adaptive estimators have desirable large-sample properties, but these properties do not necessarily carry over to the small-sample case.

The Monte Carlo comparisons of the alternative estimators are based on four different specifications for the error distribution: a normal, a mixture of normals (or variance-contaminated normal), a bimodal mixture of normals, and a lognormal. Five hundred samples of size 50 are used. The adaptive and partially adaptive estimators perform very well relative to the other estimation procedures considered, and preliminary results suggest that in some important cases they can perform much better than OLS, with 50 to 80% reductions in standard errors.
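A stripped-down comparison in this spirit might look like the numpy sketch below: OLS versus LAD under variance-contaminated normal errors, with LAD computed by a simple iteratively reweighted least squares approximation. None of the partially adaptive or adaptive estimators from the paper are implemented, and the design is illustrative.

```python
import numpy as np

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def lad_irls(X, y, n_iter=100, eps=1e-6):
    """Least absolute deviation fit via iteratively reweighted least squares
    (a simple approximation to the exact LAD / median-regression solution)."""
    beta = ols(X, y)
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta

def contaminated_normal(rng, n, p=0.1, scale=5.0):
    """Variance-contaminated normal errors: N(0,1) w.p. 1-p, N(0,scale^2) w.p. p."""
    s = np.where(rng.random(n) < p, scale, 1.0)
    return s * rng.standard_normal(n)

rng = np.random.default_rng(3)
n, reps, beta_true = 50, 500, np.array([1.0, 2.0])
est_ols, est_lad = [], []
for _ in range(reps):
    X = np.column_stack([np.ones(n), rng.standard_normal(n)])
    y = X @ beta_true + contaminated_normal(rng, n)
    est_ols.append(ols(X, y))
    est_lad.append(lad_irls(X, y))

mse = lambda est: np.mean((np.array(est) - beta_true) ** 2, axis=0)
print("MSE of OLS [intercept, slope]:", mse(est_ols))
print("MSE of LAD [intercept, slope]:", mse(est_lad))
```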


10.
Numerous estimation techniques for regression models have been proposed. These procedures differ in how sample information is used in the estimation procedure. The efficiency of least squares (OLS) estimators implicitly assumes normally distributed residuals and is very sensitive to departures from normality, particularly to "outliers" and thick-tailed distributions. Least absolute deviation (LAD) estimators are less sensitive to outliers and are optimal for Laplace random disturbances, but not for normal errors. This paper reports Monte Carlo comparisons of OLS, LAD, two robust estimators discussed by Huber, three partially adaptive estimators, Newey's generalized method of moments estimator, and an adaptive maximum likelihood estimator based on a normal kernel studied by Manski. This paper is the first to compare the relative performance of some adaptive robust estimators (partially adaptive and adaptive procedures) with some common nonadaptive robust estimators. The partially adaptive estimators are based on three flexible parametric distributions for the errors. These include the power exponential (Box-Tiao) and generalized t distributions, as well as a distribution for the errors which is not necessarily symmetric. The adaptive procedures are "fully iterative" rather than one-step estimators. The adaptive estimators have desirable large-sample properties, but these properties do not necessarily carry over to the small-sample case.

The Monte Carlo comparisons of the alternative estimators are based on four different specifications for the error distribution: a normal, a mixture of normals (or variance-contaminated normal), a bimodal mixture of normals, and a lognormal. Five hundred samples of size 50 are used. The adaptive and partially adaptive estimators perform very well relative to the other estimation procedures considered, and preliminary results suggest that in some important cases they can perform much better than OLS, with 50 to 80% reductions in standard errors.

11.
It is proved that the unbiased estimator of survival probability in a multiply censored sample suggested by Pavlov & Ushakov (1980, 1984) is equivalent to the Kaplan-Meier product-limit estimator.
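For reference, the Kaplan-Meier product-limit estimator referred to here is

```latex
\hat{S}(t) \;=\; \prod_{i:\; t_{(i)} \le t} \Bigl(1 - \frac{d_i}{n_i}\Bigr),
```

where the t_{(i)} are the distinct observed failure times, d_i is the number of failures at t_{(i)}, and n_i is the number of subjects at risk just before t_{(i)}.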

12.
We consider the probability-weighted moment and the maximum likelihood estimators of the two parameters of the log-logistic distribution. Quantile estimators are obtained using both methods. The distributional properties of these estimators are studied in large samples, via asymptotic theory, and in small and moderate samples, via Monte Carlo simulation. The distribution is shown to be appropriate for a wide variety of meteorological data.
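In one common parameterization (scale alpha > 0, shape beta > 0, which may differ from the one used in the paper), the log-logistic distribution function and the quantile function targeted by both estimation methods are

```latex
F(x;\alpha,\beta) \;=\; \frac{1}{1 + (x/\alpha)^{-\beta}}, \quad x > 0,
\qquad\Longrightarrow\qquad
x_p \;=\; \alpha\left(\frac{p}{1-p}\right)^{1/\beta}, \quad 0 < p < 1 .
```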

13.
The use of cross-validation is considered in conjunction with orthogonal series estimators for a probability density function. We attempt to establish a data-based procedure that selects both the optimal choice of series and the best trade-off between squared bias and variance, i.e. the series length. Although the expected value of the estimator looks promising, the rate of convergence is very slow. Simulations illustrate the theoretical results.
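As an illustration of the kind of trade-off involved, the sketch below fits a cosine orthogonal series density estimator on [0, 1] and picks the series length by least-squares cross-validation; the basis, the grid of candidate lengths, and the data are illustrative choices, not the procedures studied in the paper.

```python
import numpy as np

def cosine_basis(x, J):
    """Orthonormal cosine basis on [0,1]: phi_0 = 1, phi_j = sqrt(2) cos(j*pi*x)."""
    cols = [np.ones_like(x)] + [np.sqrt(2) * np.cos(np.pi * j * x) for j in range(1, J + 1)]
    return np.column_stack(cols)

def lscv_score(x, J):
    """Least-squares CV: integral of fhat^2 minus twice the mean leave-one-out fit."""
    n = len(x)
    Phi = cosine_basis(x, J)                           # n x (J+1)
    theta = Phi.mean(axis=0)                           # estimated Fourier coefficients
    int_f2 = np.sum(theta ** 2)                        # orthonormality => integral of fhat^2
    theta_loo = (n * theta[None, :] - Phi) / (n - 1)   # leave-one-out coefficients
    f_loo = np.sum(theta_loo * Phi, axis=1)            # fhat_{-i}(x_i)
    return int_f2 - 2.0 * f_loo.mean()

rng = np.random.default_rng(5)
x = rng.beta(2.0, 5.0, size=200)                       # data supported on [0, 1]
scores = {J: lscv_score(x, J) for J in range(1, 16)}
J_best = min(scores, key=scores.get)
print("selected series length:", J_best)
```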

14.
Ridge regression is re-examined and ridge estimators based on prior information are introduced. A necessary and sufficient condition is given for such ridge estimators to yield estimators of every nonnull linear combination of the regression coefficients with smaller mean square error than that of the Gauss-Markov best linear unbiased estimator.
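As background, one standard way of building prior information beta_0 into a ridge-type estimator (an illustrative form; the paper's class of estimators and its exact dominance condition are not reproduced here) is

```latex
\hat{\beta}(k) \;=\; (X'X + kI)^{-1}\,(X'y + k\beta_0), \qquad k > 0,
```

which shrinks the least squares estimator toward the prior value beta_0 rather than toward the origin and reduces to ordinary ridge regression when beta_0 = 0.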

15.
It is shown that a necessary and sufficient condition derived by Farebrother (1984) for a generalized ridge estimator to dominate the ordinary least squares estimator with respect to the mean square error matrix criterion in the linear regression model admits an interpretation similar to the well-known criterion of Toro-Vizcarrondo and Wallace (1968) for the dominance of a restricted least squares estimator over the ordinary least squares estimator. Two other properties of the generalized ridge estimators, relating to the concept of admissibility, are also pointed out.

16.
In this note, we report a dramatic improvement in the computational efficiency of semiparametric generalized least squares (SGLS) estimation. Computation of SGLS estimates no longer presents serious problems with data sets of moderate size. We also correct a numerical error in the standard errors of the SGLS estimates reported in our recent paper in this journal (Horowitz and Neumann, 1987). The corrected standard errors of SGLS are comparable to those we reported for quantile estimates.

17.
In this paper, homogeneous linear estimators of the parameter vector of the general linear model are compared in terms of their MSE matrices. A necessary and sufficient condition for the difference of two MSE matrices to be positive definite is obtained, and whether it can hold in practice is discussed. The non-negative definiteness of the difference also receives attention, and conditions for this case are discussed. The case in which no condition of the above type holds is considered as well.
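The criterion in question is the matrix mean squared error; for an estimator beta-tilde of beta,

```latex
\operatorname{MSE}(\tilde{\beta})
\;=\; \mathrm{E}\bigl[(\tilde{\beta}-\beta)(\tilde{\beta}-\beta)'\bigr]
\;=\; \operatorname{Var}(\tilde{\beta}) + \operatorname{bias}(\tilde{\beta})\,\operatorname{bias}(\tilde{\beta})',
```

and one estimator dominates another when the difference of their MSE matrices is non-negative (or positive) definite.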

18.
In this note, we report a dramatic improvement in the computational efficiency of semiparametric generalized least squares (SGLS) estimation. Computation of SGLS estimates no longer presents serious problems with data sets of moderate size. We also correct a numerical error in the standard errors of the SGLS estimates reported in our recent paper in this journal (Horowitz and Neumann, 1987). The corrected standard errors of SGLS are comparable to those we reported for quantile estimates.

19.
This paper gives necessary and sufficient conditions for a mixed regression estimator to be superior to another mixed estimator. The comparisons are based on the mean square error matrices of the estimators. Both estimators are allowed to be biased.

20.
A multinomial classification rule is proposed based on a prior-valued smoothing of the state probabilities. Asymptotically, the proposed rule has an error rate that converges uniformly and strongly to that of the Bayes rule. For a fixed sample size, the prior-valued smoothing is effective in obtaining reasonable classifications in situations such as missing data. Empirically, the proposed rule compares favorably with other commonly used multinomial classification rules in Monte Carlo sampling experiments.
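A minimal sketch of a classifier in this spirit is given below: cell (state) probabilities for each class are estimated by shrinking the observed multinomial frequencies toward prior values, and an observation is assigned to the class with the larger smoothed probability. The smoothing form and the constant m are illustrative assumptions, not the rule proposed in the paper.

```python
import numpy as np

def smoothed_cell_probs(counts, prior, m=1.0):
    """Shrink observed multinomial frequencies toward prior cell probabilities:
    p_hat_j = (n_j + m * prior_j) / (n + m).  Cells never observed in the
    training data (e.g. because of missing data) still receive positive mass."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    return (counts + m * np.asarray(prior)) / (n + m)

def classify(cell, class_counts, priors, class_priors, m=1.0):
    """Assign `cell` to the class maximizing class prior times smoothed cell probability."""
    scores = [cp * smoothed_cell_probs(cnt, pr, m)[cell]
              for cnt, pr, cp in zip(class_counts, priors, class_priors)]
    return int(np.argmax(scores))

# Two classes over 4 multinomial cells; cell 3 was never observed for class 0.
counts0 = np.array([12, 6, 2, 0])
counts1 = np.array([3, 5, 7, 5])
uniform = np.full(4, 0.25)                     # prior-valued smoothing toward uniform cells
print([classify(c, [counts0, counts1], [uniform, uniform], [0.5, 0.5]) for c in range(4)])
```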
