1.
We introduce an estimator for the population mean based on maximizing likelihoods formed from a symmetric kernel density estimate. Due to these origins, we have dubbed the estimator the symmetric maximum kernel likelihood estimate (smkle). A speedy computational method to compute the smkle based on binning is implemented in a simulation study which shows that the smkle at an optimal bandwidth is decidedly superior in terms of efficiency to the sample mean and other measures of location for heavy-tailed symmetric distributions. An empirical rule and a computational method to estimate this optimal bandwidth are developed and used to construct bootstrap confidence intervals for the population mean. We show that the intervals have approximately nominal coverage and have significantly smaller average width than the corresponding intervals for other measures of location.

2.
The authors propose to estimate nonlinear small area population parameters by using the empirical Bayes (best) method, based on a nested error model. They focus on poverty indicators as particular nonlinear parameters of interest, but the proposed methodology is applicable to general nonlinear parameters. They use a parametric bootstrap method to estimate the mean squared error of the empirical best estimators. They also study small sample properties of these estimators by model‐based and design‐based simulation studies. Results show large reductions in mean squared error relative to direct area‐specific estimators and other estimators obtained by “simulated” censuses. The authors also apply the proposed method to estimate poverty incidences and poverty gaps in Spanish provinces by gender with mean squared errors estimated by the mentioned parametric bootstrap method. For the Spanish data, results show a significant reduction in coefficient of variation of the proposed empirical best estimators over direct estimators for practically all domains. The Canadian Journal of Statistics 38: 369–385; 2010 © 2010 Statistical Society of Canada

3.
This paper presents and applies a local generalized method of moments (LGMM) estimator for regression functions. The method is an extension of previous results obtained by Gozalo and Linton. The LGMM estimation procedure can be applied to estimate a mean regression function and its derivatives at an interior point x, without making explicit assumptions about its functional form. The method has been applied to estimate dynamic models based on panel data.

4.
This article examines methods to efficiently estimate the mean response in a linear model with an unknown error distribution under the assumption that the responses are missing at random. We show how the asymptotic variance is affected by the estimator of the regression parameter, and by the imputation method. To estimate the regression parameter, ordinary least squares is efficient only if the error distribution happens to be normal. If the errors are not normal, then we propose a one-step improvement estimator or a maximum empirical likelihood estimator to efficiently estimate the parameter. To investigate the imputation's impact on the estimation of the mean response, we compare the listwise deletion method and the propensity score method (which do not use imputation at all), and two imputation methods. We demonstrate that listwise deletion and the propensity score method are inefficient. Partial imputation, where only the missing responses are imputed, is compared to full imputation, where both missing and non-missing responses are imputed. Our results reveal that, in general, full imputation is better than partial imputation. However, when the regression parameter is estimated very poorly, partial imputation will outperform full imputation. The efficient estimator for the mean response is the full imputation estimator that utilizes an efficient estimator of the parameter.

5.
A simple least squares method for estimating a change in mean of a sequence of independent random variables is studied. The method first tests for a change in mean based on the regression principle of constrained and unconstrained sums of squares. Conditionally on a decision by this test that a change has occurred, least squares estimates are used to estimate the change point, the initial mean level (prior to the change point) and the change itself. The estimates of the initial level and change are functions of the change point estimate. All estimates are shown to be consistent, and those for the initial level and change are shown to be asymptotically jointly normal. The method performs well for moderately large shifts (one standard deviation or more), but the estimates of the initial level and change are biased in a predictable way for small shifts. The large sample theory is helpful in understanding this problem. The asymptotic distribution of the change point estimator is obtained for local shifts in mean, but the case of non-local shifts appears analytically intractable.

6.
The balanced half-sample and jackknife variance estimation techniques are used to estimate the variance of the combined ratio estimate. An empirical sampling study is conducted using computer-generated populations to investigate the variance, bias and mean square error of these variance estimators and results are compared to theoretical results derived elsewhere for the linear case. Results indicate that either the balanced half-sample or jackknife method may be used effectively for estimating the variance of the combined ratio estimate.

7.
In populations containing extreme values, the sample mean is not resistant and often fails to represent the "average level", so the sample variance likewise fails to measure dispersion effectively. Under the assumption of simple random sampling, an adjusted mean estimator can be constructed that allows the maximum and minimum values to influence the sample mean differently, and its expectation and variance are derived. The parameters of the estimator are determined by the minimum-variance principle. A subsequent simulation study compares the performance of the various estimators; the results show that the adjusted estimator is robust and efficient.
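A hypothetical version of such an adjusted estimator, with separate weights a and b on the sample maximum and minimum (the weights here are arbitrary placeholders; the paper chooses them to minimize the estimator's variance):

```python
def adjusted_mean(x, a=0.5, b=0.5):
    # Hypothetical adjusted mean: the sample maximum and minimum enter
    # with separate down-weights a and b, so a single extreme value
    # pulls the estimate less than it pulls the ordinary mean.
    s = sorted(x)
    mid = s[1:-1]
    return (sum(mid) + a * s[-1] + b * s[0]) / (len(mid) + a + b)
```

With one large outlier the adjusted mean sits well below the ordinary sample mean, which is the resistance property the abstract is after.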

8.
In this paper we propose a Bezier curve method to estimate the survival function and the median survival time in interval-censored data. We compare the proposed estimator with other existing methods such as the parametric method, the single point imputation method, and the nonparametric maximum likelihood estimator through extensive numerical studies, and it is shown that the proposed estimator performs better than others in the sense of mean squared error and mean integrated squared error. An illustrative example based on a real data set is given.

9.
Purposive sampling is described as a random selection of sampling units within the segment of the population with the most information on the characteristic of interest. Nonparametric bootstrap is proposed in estimating location parameters and the corresponding variances. An estimate of bias and a measure of variance of the point estimate are computed using the Monte Carlo method. The bootstrap estimator of the population mean is efficient and consistent in the homogeneous, heterogeneous, and two-segment populations simulated. The design-unbiased approximation of the standard error estimate differs substantially from the bootstrap estimate in severely heterogeneous and positively skewed populations.

10.
The consistency and asymptotic normality of a linear least squares estimate of the form (X'X)-X'Y when the mean is not Xβ is investigated in this paper. The least squares estimate is a consistent estimate of the best linear approximation of the true mean function for the design chosen. The asymptotic normality of the least squares estimate depends on the design and the asymptotic mean may not be the best linear approximation of the true mean function. Choices of designs which allow large sample inferences to be made about the best linear approximation of the true mean function are discussed.

11.
Inference based on the Central Limit Theorem has only first order accuracy. We give tests and confidence intervals (CIs) of second order accuracy for the shape parameter ρ of a gamma distribution for both the unscaled and scaled cases.

Tests and CIs based on moment and cumulant estimates are considered as well as those based on the maximum likelihood estimate (MLE).

For the unscaled case the MLE is the moment estimate of order zero; the most efficient moment estimate of integral order is the sample mean, having asymptotic relative efficiency (ARE) .61 when ρ= 1.

For the scaled case the most efficient moment estimate is a function of the mean and variance. Its ARE is .39 when ρ = 1.

Our motivation for constructing these tests of ρ = 1 and CIs for ρ is to provide a simple and convenient method for testing whether a distribution is exponential in situations such as rainfall models where such an assumption is commonly made.

12.
In this paper we propose a bootstrap-based method to estimate the standard error of adaptive estimators. We apply it in the standard problem of location estimation discussed in Randles and Hogg (1973) and in Hogg and Lenth (1984). Our adaptive estimator is based on a choice between the mean, the 35% trimmed mean, and the median. Finally, we carry out a simulation study to see how well the proposed method performs in small and moderate sample sizes.
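A sketch of the adaptive idea: pick one of the three location estimators from a tail-weight statistic computed on the sample, then bootstrap the whole selection-plus-estimation rule. The kurtosis-based selector and its cutoffs below are illustrative stand-ins for the Hogg-type selector used in the literature:

```python
import random
import statistics

def trimmed_mean(x, prop):
    s = sorted(x)
    k = int(len(s) * prop)
    core = s[k:len(s) - k]
    return sum(core) / len(core)

def adaptive_location(x):
    # Crude selector: use sample kurtosis as a tail-weight measure to
    # choose among mean / 35% trimmed mean / median (cutoffs are
    # illustrative assumptions, not the paper's).
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / n
    kurt = sum((v - m) ** 4 for v in x) / (n * s2 * s2)
    if kurt < 3.5:
        return m
    if kurt < 6.0:
        return trimmed_mean(x, 0.35)
    return statistics.median(x)

def bootstrap_se(x, estimator, B=500, seed=0):
    # Bootstrap standard error of the full adaptive rule: resample,
    # re-select, re-estimate, then take the spread of the replicates.
    rng = random.Random(seed)
    n = len(x)
    reps = [estimator([x[rng.randrange(n)] for _ in range(n)])
            for _ in range(B)]
    mb = sum(reps) / B
    return (sum((r - mb) ** 2 for r in reps) / (B - 1)) ** 0.5
```

The key design point is that the selection step is repeated inside every bootstrap replicate, so the reported standard error reflects the variability introduced by the adaptive choice itself.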

13.
In some vegetation types, total fuel loading (phytomass) can be predicted by easily-measured variables such as vegetation type and height. A double sampling scheme is proposed in which fuel loading is estimated on a particular site by using quadrat sampling within patches of similar vegetation to develop a general prediction equation, and then line intercept sampling is used to estimate the mean of the easily-measured variables on the site. This method is applied to estimate the total fine fuel loading on a heathland site.

14.
The method of moments has been widely used as a simple alternative to the maximum likelihood method, mainly because of its efficiency and simplicity in obtaining parameter estimators of a mixture of two binomial distributions. In this paper, an alternative estimator is proposed that is as competitive as the method of moments in terms of mean squared error and computational effort.

15.
ABSTRACT

The local linear estimator is a popular method for estimating non-parametric regression functions, and many methods have been derived to estimate the smoothing parameter, or the bandwidth in this case. In this article, we propose an information criterion-based bandwidth selection method, with the degrees of freedom originally derived for non-parametric inferences. Unlike the plug-in method, the new method does not require preliminary parameters to be chosen in advance, and is computationally efficient compared to the cross-validation (CV) method. Numerical study shows that the new method performs better than or comparably to the existing plug-in and CV methods in terms of the estimation of the mean functions, and has lower variability than CV selectors. Real data applications are also provided to illustrate the effectiveness of the new method.
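At a point x0, the local linear estimator fits a weighted least squares line to the pairs (x_i − x0, y_i) with kernel weights and reports the fitted intercept. A minimal sketch with a Gaussian kernel (bandwidth selection, the actual subject of the paper, is not shown):

```python
import math

def local_linear(x0, xs, ys, h):
    # Weighted least squares of y on (x - x0) with Gaussian kernel
    # weights; the fitted intercept estimates the regression mean m(x0).
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in xs]
    sw = sum(w)
    s1 = sum(wi * (xi - x0) for wi, xi in zip(w, xs))
    s2 = sum(wi * (xi - x0) ** 2 for wi, xi in zip(w, xs))
    sy = sum(wi * yi for wi, yi in zip(w, ys))
    sxy = sum(wi * (xi - x0) * yi for wi, xi, yi in zip(w, xs, ys))
    # Solve the 2x2 normal equations for the intercept.
    return (s2 * sy - s1 * sxy) / (sw * s2 - s1 * s1)
```

Every bandwidth selector discussed in the abstract (plug-in, CV, or the proposed information criterion) is ultimately just a rule for choosing h in this fit.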

16.
Chen and Jernigan proposed a non-parametric, conservative method that involved using a penalized mean to estimate the average concentration of contaminants in soils. The method assumes a random sample obtained from a whole site involved in the US Superfund program. However, in some cases, about 10% of known data are collected from the 'hot spots'. In this paper, two procedures are proposed to use the information from hot spots data or an extreme value to estimate the mean concentration of contaminants. These procedures are evaluated using a data set of chromium concentrations from one of the Environmental Protection Agency's toxic waste sites. The simulation results show that these new procedures are cost-effective.

17.
In the analysis of time-to-event data, restricted mean survival time has been well investigated in the literature and is provided by many commercial software packages, while calculating mean survival time remains a challenge due to censoring or insufficient follow-up time. Several researchers have proposed a hybrid estimator of mean survival based on the Kaplan–Meier curve with an extrapolated tail. However, this approach often leads to biased estimates due to poorly estimated parameters in the extrapolated "tail" and the large variability of the Kaplan–Meier curve near its tail, where few patients remain at risk. Two key challenges in this approach are (1) where the extrapolation should start and (2) how to estimate the parameters for the extrapolated tail. The authors propose a novel approach to calculate mean survival time to address these two challenges. In the proposed approach, an algorithm is used to search for any time points where the hazard rates change significantly. The survival function is estimated by the Kaplan–Meier method prior to the last change point and approximated by an exponential function beyond the last change point. The parameter in the exponential function is estimated locally. Mean survival time is derived based on this survival function. The simulation and case studies demonstrated the superiority of the proposed approach.

18.
The dimension reduction in regression is an efficient method of overcoming the curse of dimensionality in non-parametric regression. Motivated by recent developments for dimension reduction in time series, an empirical extension of central mean subspace in time series to a single-input transfer function model is performed in this paper. Here, we use central mean subspace as a tool of dimension reduction for bivariate time series in the case when the dimension and lag are known and estimate the central mean subspace through the Nadaraya–Watson kernel smoother. Furthermore, we develop a data-dependent approach based on a modified Schwarz Bayesian criterion to estimate the unknown dimension and lag. Finally, we show that the approach in bivariate time series works well using an expository demonstration, two simulations, and a real data analysis of El Niño and fish population data.

19.
The Burr XII distribution offers a more flexible alternative to the lognormal, log-logistic and Weibull distributions. Outliers can occur during reliability life testing. Thus, we need an efficient method to estimate the parameters of the Burr XII distribution for censored data with outliers. The objective of this paper is to present a robust regression (RR) method called M-estimator to estimate the parameters of a two-parameter Burr XII distribution based on the probability plotting procedure for both the complete and multiply-censored data with outliers. The simulation results show that the RR method outperforms the unweighted least squares and maximum likelihood methods in most cases in terms of bias and root mean squared error.

20.
There has been growing interest in partial identification of probability distributions and parameters. This paper considers statistical inference on parameters that are partially identified because data are incompletely observed, due to nonresponse or censoring, for instance. A method based on likelihood ratios is proposed for constructing confidence sets for partially identified parameters. The method can be used to estimate a proportion or a mean in the presence of missing data, without assuming missing-at-random or modeling the missing-data mechanism. It can also be used to estimate a survival probability with censored data without assuming independent censoring or modeling the censoring mechanism. A version of the verification bias problem is studied as well.
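The simplest instance of partial identification of a mean with missing data is the worst-case bound: each missing value could lie anywhere in a known range [y_lo, y_hi], so the mean is identified only up to an interval. A sketch of these identification bounds (the paper's likelihood-ratio method then builds confidence sets around such partially identified parameters):

```python
def mean_bounds(observed, n_missing, y_lo, y_hi):
    # Worst-case identification bounds for the population mean when
    # n_missing responses are unobserved but known to lie in
    # [y_lo, y_hi]; no missing-at-random assumption is needed.
    n = len(observed) + n_missing
    s = sum(observed)
    return (s + n_missing * y_lo) / n, (s + n_missing * y_hi) / n
```

The interval shrinks as the missing fraction shrinks, and collapses to the ordinary sample mean when nothing is missing.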
