Similar Literature
20 similar documents found (search time: 31 ms)
1.
Data are often collected in histogram form, especially in the context of computer simulation. While requiring less memory and computation than saving all observations, the grouping of observations into histogram cells complicates statistical estimation of the parameters of interest. In this paper, the mean and variance of the cell-midpoint estimator of the pth quantile are analyzed in terms of distribution, cell width, and sample size. Three idiosyncrasies of using cell midpoints to estimate quantiles are illustrated. The results tend to run counter to previously published results for grouped data.
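The cell-midpoint estimator described above can be sketched in a few lines. This is a minimal illustration assuming equal-width cells and a simple lookup of the cell containing the target rank; the cell count and binning rule are illustrative choices, not the paper's.

```python
import numpy as np

def midpoint_quantile(data, p, n_cells=20):
    """Estimate the p-th quantile as the midpoint of the histogram cell
    containing the target rank.  Equal-width cells; the cell count is an
    illustrative choice, not taken from the paper."""
    counts, edges = np.histogram(data, bins=n_cells)
    cum = np.cumsum(counts)
    rank = p * len(data)  # target rank of the p-th quantile
    # first cell whose cumulative count reaches the rank
    idx = min(int(np.searchsorted(cum, rank)), n_cells - 1)
    return 0.5 * (edges[idx] + edges[idx + 1])  # cell midpoint

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
q50 = midpoint_quantile(x, 0.5)  # coarse estimate of the median
q90 = midpoint_quantile(x, 0.9)
```

The estimate is only as fine as the cell width, which is exactly the grouping error the abstract analyzes.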

2.
In this paper we consider inference on parameters in time series regression models. In the traditional inference approach, heteroskedasticity and autocorrelation consistent (HAC) estimation is often used to consistently estimate the asymptotic covariance matrix of the regression parameter estimator. Since the bandwidth parameter in HAC estimation is difficult to choose in practice, there has been a recent surge of interest in developing bandwidth-free inference methods. However, existing simulation studies show that these new methods suffer from severe size distortion in the presence of strong temporal dependence at medium sample sizes. To remedy the problem, we propose applying prewhitening to the inconsistent long-run variance estimator in these methods to reduce the size distortion. The asymptotic distribution of the prewhitened Wald statistic is obtained, and the general effectiveness of prewhitening is shown through simulations.
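The prewhitening idea can be illustrated with the standard Andrews–Monahan recipe: whiten the series with a fitted AR(1), estimate the long-run variance (LRV) of the near-white residuals, then recolour. This is a hedged sketch of the general technique, not the paper's prewhitened self-normalized statistic; the Bartlett bandwidth and AR(1) order are illustrative.

```python
import numpy as np

def bartlett_lrv(u, bandwidth):
    # Bartlett-kernel long-run variance of a demeaned series
    u = u - u.mean()
    n = len(u)
    lrv = np.dot(u, u) / n
    for k in range(1, bandwidth + 1):
        w = 1.0 - k / (bandwidth + 1)
        lrv += 2.0 * w * np.dot(u[k:], u[:-k]) / n
    return lrv

def prewhitened_lrv(x, bandwidth=5):
    """AR(1)-prewhitened LRV: whiten, estimate residual LRV, recolour."""
    x = x - x.mean()
    rho = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])  # AR(1) fit
    resid = x[1:] - rho * x[:-1]
    return bartlett_lrv(resid, bandwidth) / (1.0 - rho) ** 2  # recolour

# demo on a simulated AR(1) with rho = 0.7; true LRV = 1/(1-0.7)^2 ≈ 11.1
rng = np.random.default_rng(3)
e = rng.normal(size=2000)
x = np.empty(2000)
x[0] = e[0]
for t in range(1, 2000):
    x[t] = 0.7 * x[t - 1] + e[t]
lrv = prewhitened_lrv(x)
```

Because the residuals are close to white noise, a small bandwidth suffices after whitening, which is the source of the size-distortion reduction.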

3.
In this note we define a composite quantile function estimator in order to improve the accuracy of the classical bootstrap procedure in small-sample settings. The composite quantile function estimator employs a parametric model for the tails of the distribution and uses the simple linear-interpolation quantile function estimator for quantiles lying between 1/(n+1) and n/(n+1). The method is easily programmed using standard software packages and has general applicability. It is shown that the composite quantile function estimator improves bootstrap percentile interval coverage for a variety of statistics and is robust to misspecification of the parametric component. Moreover, the composite quantile function approach surprisingly outperforms the parametric bootstrap in a variety of small-sample situations.
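A minimal version of such a composite quantile function might look as follows; the normal tail model and its moment-based fit are illustrative stand-ins for the parametric tail component described above.

```python
import numpy as np
from scipy import stats

def composite_quantile(data, p):
    """Composite quantile function: linear interpolation of order statistics
    for p in [1/(n+1), n/(n+1)], a fitted normal model for the tails.
    The normal tail model is an illustrative choice, not the paper's."""
    x = np.sort(data)
    n = len(x)
    lo, hi = 1.0 / (n + 1), n / (n + 1.0)
    if lo <= p <= hi:
        grid = np.arange(1, n + 1) / (n + 1.0)  # plotting positions
        return np.interp(p, grid, x)            # simple linear interpolation
    mu, sigma = x.mean(), x.std(ddof=1)         # parametric tail fit
    return stats.norm.ppf(p, loc=mu, scale=sigma)

rng = np.random.default_rng(1)
x = rng.normal(size=50)
q_mid = composite_quantile(x, 0.5)     # interpolated region
q_tail = composite_quantile(x, 0.999)  # parametric tail region
```

Resampling from this quantile function instead of the empirical one is what lets the bootstrap reach beyond the observed extremes in small samples.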

4.
A new, fully data-driven bandwidth selector with a double smoothing (DS) bias term and a data-driven variance estimator is developed following the bootstrap idea. The data-driven variance estimation does not involve any additional bandwidth selection. The proposed bandwidth selector converges faster than a plug-in one thanks to the DS bias estimate, while the data-driven variance clearly improves its finite-sample performance and stabilizes it. Asymptotic results for the proposals are obtained. A comparative simulation study shows the overall gains and the gains obtained by improving either the bias term or the variance estimate. It is shown that a good variance estimator is more important when the sample size is relatively small.

5.
In discrete event simulation, the method of control variates is often used to reduce the variance of the estimator of the mean output response. In the present paper, it is shown that when three or more control variates are used, the usual linear regression estimator of the mean response is one of a large class of unbiased estimators, many of which have smaller variance than the usual estimator. In simulation studies using control variates, a confidence interval for the mean response is typically reported as well, and intervals with shorter width have been proposed in the literature. The present paper, however, develops confidence intervals that not only have shorter width but also higher coverage probability than the usual confidence interval.
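The usual single-control linear-regression control-variate estimator, which the class above generalizes, can be sketched as follows; the exponential control with known mean 1 and the correlated response are illustrative.

```python
import numpy as np

def cv_mean(y, x, x_mean):
    """Linear-regression control-variate estimator of E[Y]: subtract
    b*(mean(X) - E[X]) with b chosen by least squares.  A single-control
    sketch; the paper's multi-control class is broader."""
    b = np.cov(y, x, ddof=1)[0, 1] / np.var(x, ddof=1)
    return y.mean() - b * (x.mean() - x_mean)

rng = np.random.default_rng(1)
x = rng.exponential(size=2000)             # control variate, known mean 1
y = x + rng.normal(scale=0.2, size=2000)   # correlated response, E[Y] = 1
est = cv_mean(y, x, x_mean=1.0)            # much tighter than y.mean()
```

The stronger the correlation between response and control, the larger the variance reduction relative to the naive sample mean.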

6.
This paper is concerned with methods of reducing variability and computer time in a simulation study. The Monte Carlo swindle, through mathematical manipulations, has been shown to yield more precise estimates than the “naive” approach. In this study computer time is considered in conjunction with the variance estimates. It is shown that by this measure the naive method is often a viable alternative to the swindle. This study concentrates on the problem of estimating the variance of an estimator of location. The advantage of one technique over another depends upon the location estimator, the sample size, and the underlying distribution. For a fixed number of samples, while the naive method gives a less precise estimate than the swindle, it requires fewer computations. In addition, for certain location estimators and distributions, the naive method is able to take advantage of certain shortcuts in the generation of each sample. The small amount of time required by this “enlightened” naive method often more than compensates for its relative lack of precision.
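The classic normal-theory swindle for the variance of a location estimator illustrates the idea: for normal samples, the median minus the sample mean is independent of the mean, so simulation is needed only for the small residual variance while the mean's variance is supplied exactly. A sketch under normality; the replication count and sample size are illustrative.

```python
import numpy as np

def swindle_var_median(n=20, reps=5000, sigma=1.0, seed=0):
    """Monte Carlo swindle for Var(median) under normal sampling:
    Var(median) = Var(median - Xbar) + sigma^2/n, and only the small
    first term is estimated by simulation."""
    rng = np.random.default_rng(seed)
    resid = np.empty(reps)
    for r in range(reps):
        x = rng.normal(scale=sigma, size=n)
        resid[r] = np.median(x) - x.mean()   # independent of x.mean()
    return resid.var(ddof=1) + sigma**2 / n  # add the exact mean variance

est = swindle_var_median()  # near pi/(2n) ≈ 0.0785 for n = 20
```

The naive method would instead estimate Var(median) directly from the replicated medians, which is noisier per replication but cheaper per sample.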

7.
In this article, we propose a kernel-based estimator for the finite-dimensional parameter of a partially additive linear quantile regression model. For dependent processes that are strictly stationary and absolutely regular, we establish a precise convergence rate and show that the estimator is root-n consistent and asymptotically normal. To facilitate inferential procedures, a consistent estimator for the asymptotic variance is also provided. In addition to a simulation experiment evaluating the finite-sample performance of the estimator, an application to US inflation is presented. We use the real-data example to show how partially additive linear quantile models can offer an alternative modeling option for time-series data.

8.
The problem of constructing confidence intervals for the mean in a two-stage nested model is considered. Several approximate intervals, based on both linear and nonlinear estimators of the mean, are investigated. In particular, the bootstrap is used to correct the bias in the ‘usual’ variance of the nonlinear estimators. It is found that the intervals based on the nonlinear estimators do not achieve the nominal confidence coefficient for designs involving a small number of groups. Further, it turns out that the intervals are generally conservative, especially at small values of the intraclass correlation coefficient, and that the intervals based on the nonlinear estimators are more conservative than those based on the linear estimators. Compared with the others, the intervals based on the unweighted mean of the group means perform well in terms of coverage and length. For small values of the intraclass correlation coefficient, the ANOVA estimators of the variance components are recommended; otherwise, the unweighted means estimator of the between-groups variance component should be used. If one is fortunate enough to have control over the design, one is advised to increase the number of groups rather than the group sizes, while avoiding groups of size one or two.

9.
In this paper, a new estimator for a conditional quantile is proposed by using the empirical likelihood method and local linear fitting when some auxiliary information is available. The asymptotic normality of the estimator at both boundary and interior points is established. It is shown that the asymptotic variance of the proposed estimator is smaller than those of the usual kernel estimators at interior points, and that the proposed estimator has the desired sampling properties at both boundary and interior points. Therefore, no boundary modifications are required in our estimation.

10.
The weighted log-rank estimating function has become a standard estimation method for the censored linear regression model, or accelerated failure time model. Although well established statistically, the estimator defined as a consistent root has rather poor computational properties because the estimating function is neither continuous nor, in general, monotone. We propose a computationally efficient estimator through an asymptotics-guided Newton algorithm, in which censored quantile regression methods are tailored to yield an initial consistent estimate and a consistent derivative estimate of the limiting estimating function. We also develop fast interval estimation with a new proposal for sandwich variance estimation. The proposed estimator is asymptotically equivalent to the consistent root estimator and barely distinguishable from it in samples of practical size. However, computation time is typically reduced by two to three orders of magnitude for point estimation alone. Illustrations with clinical applications are provided.

11.
In many applications (geosciences, insurance, etc.), the peaks-over-thresholds (POT) approach is one of the most widely used methodologies for extreme quantile inference. It consists mainly of approximating the distribution of exceedances above a high threshold by a generalized Pareto distribution (GPD). The number of exceedances used in POT inference is often quite small, which typically leads to high volatility of the estimates. Inspired by perfect sampling techniques used in simulation studies, we define a folding procedure that connects the lower and upper parts of a distribution. A new extreme quantile estimator motivated by this theoretical folding scheme is proposed and studied. Although the asymptotic behaviour of our new estimator is the same as that of the classical (non-folded) one, the folding procedure significantly reduces the mean squared error of extreme quantile estimates for small and moderate samples. This is illustrated in a simulation study. We also apply our method to an insurance dataset.
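The classical (non-folded) POT step that the folding procedure builds on can be sketched with scipy: fit a GPD to the exceedances over a high threshold and invert the fitted tail. The 90% threshold is an illustrative choice, and threshold selection is exactly where the small-exceedance-count volatility discussed above enters.

```python
import numpy as np
from scipy import stats

def pot_quantile(data, p, threshold_q=0.9):
    """Peaks-over-thresholds estimate of the p-th quantile (p near 1):
    fit a GPD to exceedances over a high threshold, then invert.
    Classical POT, not the folded estimator of the paper."""
    x = np.asarray(data)
    u = np.quantile(x, threshold_q)
    exc = x[x > u] - u                      # exceedances over the threshold
    n, k = len(x), len(exc)
    xi, _, sigma = stats.genpareto.fit(exc, floc=0)  # shape, loc=0, scale
    # GPD tail inversion: q_p = u + (sigma/xi) * (((1-p) * n / k)^(-xi) - 1)
    return u + sigma / xi * (((1 - p) * n / k) ** (-xi) - 1.0)

rng = np.random.default_rng(4)
sample = stats.genpareto.rvs(0.3, size=5000, random_state=rng)
q99 = pot_quantile(sample, 0.99)  # true value: genpareto.ppf(0.99, 0.3) ≈ 9.9
```

Only the k exceedances inform the fit, which is why the folding idea of borrowing information from the lower part of the distribution can reduce the mean squared error.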

12.
In this paper, the delete-m_j jackknife estimator is proposed. This estimator is based on samples obtained from the original sample by successively removing mutually exclusive groups of unequal size. In a Monte Carlo simulation study, a hierarchical linear model is used to evaluate the role of nonnormal residuals and sample size on the bias and efficiency of this estimator. It is shown that bias is reduced in exchange for a minor reduction in efficiency. The accompanying jackknife variance estimator improves on both bias and efficiency, and, moreover, is mean-squared-error consistent, whereas the maximum likelihood equivalents are not.
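A grouped (delete-a-group) jackknife with equal-size groups is a simpler stand-in for the unequal-size delete-m_j scheme; the sketch below applies it to the sample mean rather than the paper's hierarchical-model estimators, and the group count is an illustrative choice.

```python
import numpy as np

def grouped_jackknife(data, n_groups=5, stat=np.mean, seed=0):
    """Grouped jackknife variance: delete one of g mutually exclusive,
    equal-size groups at a time and rescale the spread of the
    leave-group-out estimates.  Equal-size simplification of delete-m_j."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    idx = rng.permutation(len(data))
    folds = np.array_split(idx, n_groups)
    leave_out = np.array([stat(np.delete(data, f)) for f in folds])
    g = n_groups
    # for stat = mean this is unbiased for var(data)/n
    return (g - 1) / g * np.sum((leave_out - leave_out.mean()) ** 2)

rng = np.random.default_rng(5)
x = rng.normal(size=100)
v = grouped_jackknife(x)  # should be near s^2/n = s^2/100
```

The delete-m_j version replaces the common factor (g-1)/g with group-size-dependent weights so that unequal groups contribute proportionately.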

13.
It is known that, for nonparametric regression, local linear composite quantile regression (local linear CQR) is more competitive than classical local linear regression, since it can significantly improve estimation efficiency under a class of non-normal and symmetric error distributions. However, the method applies only to symmetric errors because, without the symmetry condition, the estimation bias is non-negligible and the resulting estimator is therefore inconsistent. In this paper, we propose a weighted local linear CQR method for general error conditions, applicable to both symmetric and asymmetric random errors. Because of the use of weights, the estimation bias is eliminated asymptotically and asymptotic normality is established. Furthermore, by minimizing the asymptotic variance, the optimal weights are computed and the optimal (most efficient) estimator is obtained. By comparing relative efficiency theoretically and numerically, we show that the new estimator outperforms the local linear CQR estimator. Simulation studies of finite-sample behavior further illustrate the theoretical findings.

14.
We show that the jackknife technique fails badly when applied to the problem of estimating the variance of a sample quantile. When viewed as a point estimator, the jackknife estimator is known to be inconsistent. We show that the ratio of the jackknife variance estimate to the true variance has an asymptotic Weibull distribution with parameters 1 and 1/2. We also show that if the jackknife variance estimate is used to Studentize the sample quantile, the asymptotic distribution of the resulting Studentized statistic is markedly nonnormal, having infinite mean. This result is in stark contrast with that obtained in simpler problems, such as that of constructing confidence intervals for a mean, where the jackknife-Studentized statistic has an asymptotic standard normal distribution.
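The failure is easy to see numerically: the delete-1 jackknife reproduces the textbook s^2/n exactly for the mean, but for the median the leave-one-out values take only a couple of distinct values, which is the non-smoothness driving the inconsistency described above. A minimal sketch:

```python
import numpy as np

def jackknife_var(data, stat):
    """Standard delete-1 jackknife variance estimate of stat(data)."""
    data = np.asarray(data)
    n = len(data)
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

rng = np.random.default_rng(2)
x = rng.normal(size=101)
v_mean = jackknife_var(x, np.mean)    # equals var(x, ddof=1)/n exactly
v_med = jackknife_var(x, np.median)   # erratic: loo medians are nearly constant
```

For a smooth statistic the leave-one-out values vary continuously with each observation; for the median they depend only on a pair of central order statistics, so v_med does not settle down as n grows.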

15.
For a mixed linear model with two variance components, it is shown that under suitable conditions there exists a nonlinear unbiased estimator that is better than a best linear unbiased estimator defined with respect to a given singular covariance matrix. It is also shown how this result applies to improving on intra-block estimators and on estimators like the unweighted means estimator in a random one-way model.

16.
It is well known that, given observable data for a competing risks problem, there is always an independent model consistent with the data. It has been pointed out, however, that this independent model does not necessarily have proper marginals. One purpose of this paper is to explore the extent to which the non-parametric assumption that the marginals are proper can be used to test whether independence holds. This leads naturally to a closely related estimation problem: how to estimate the marginals given that a certain quantile of one variable is reached at the same time as a given quantile of the other. The problem is considered using the copula-based approach of Zheng and Klein. Two methods are discussed. One is a non-parametric maximum likelihood method. The other is a consistent estimator, called the bilinear adjustment estimator, that can be computed quickly and thus lends itself more readily to simulation methods such as bootstrapping. The ultimate objective of this paper is to provide an additional analytical tool for understanding the effectiveness of preventive maintenance. Preventive maintenance censors component failure data and thus provides an important example of competing risks in the reliability area.

17.
The quantile function plays an important role in statistical inference, and intermediate quantiles are useful in risk management. It is known that the jackknife method fails for estimating the variance of a sample quantile. By assuming that the underlying distribution satisfies certain extreme value conditions, we show that the jackknife variance estimator is inconsistent for an intermediate order statistic. Further, we derive the asymptotic limit of the jackknife-Studentized intermediate order statistic so that a confidence interval for an intermediate quantile can be obtained. A simulation study compares this new confidence interval with existing ones in terms of coverage accuracy.

18.
In this article, a robust multistage parameter estimator is proposed for nonlinear regression with heteroscedastic variance, where the residual variances are modelled as a general parametric function of the predictors. The motivation is based on the chi-square distribution of the sample variance of the data. It is shown that outliers that are influential in the nonlinear regression parameter estimates are not necessarily influential in the sample variance. This motivates us not only to robustify the parameter estimates of both the regression function and the variance model, but also to replace the sample variance of the data with a robust scale estimate.

19.
Post-marketing data offer rich information and cost-effective resources for physicians and policy-makers to address critical scientific questions in clinical practice. However, the complex confounding structures (e.g., nonlinear and nonadditive interactions) embedded in these observational data often pose major analytical challenges to drawing valid conclusions. Furthermore, often made available as electronic health records (EHRs), these data are usually massive, with hundreds of thousands of observational records, which introduces additional computational challenges. In this paper, for comparative effectiveness analysis, we propose a statistically robust yet computationally efficient propensity score (PS) approach to adjust for the complex confounding structures. Specifically, we propose a kernel-based machine learning method for flexible and robust PS modeling, to obtain valid PS estimates from observational data with complex confounding structures. The estimated propensity score is then used in a second-stage analysis to obtain a consistent average treatment effect estimate. An empirical variance estimator based on the bootstrap is adopted. A split-and-merge algorithm is further developed to reduce the computational workload of the proposed method for big data, and yields a valid variance estimator of the average treatment effect estimate as a by-product. As shown by extensive numerical studies and an application to comparative effectiveness analysis of postoperative pain EHR data, the proposed approach consistently outperforms competing methods, demonstrating its practical utility.

20.
The internal pilot study design allows the sample size to be modified during an ongoing study based on a blinded estimate of the variance, thus maintaining trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare the blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure using the randomization block information, with respect to the bias and variance of the variance estimators and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one-sample estimator, and so, in turn, does the sample size resulting from the related re-estimation procedure. This higher variability can lead to lower power, as demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application. Copyright © 2013 John Wiley & Sons, Ltd.
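The recommended one-sample (pooled, blinded) re-estimation step can be sketched for a two-arm z-approximation. This sketch ignores the small inflation of the pooled variance caused by the treatment effect that the blinded estimator incurs, and the effect size delta and design constants are illustrative, not from the paper.

```python
import numpy as np
from scipy import stats

def blinded_reestimate_n(pooled, delta, alpha=0.05, power=0.8):
    """Re-estimated per-arm sample size from the blinded one-sample
    variance of the pooled internal-pilot data (two-sample z formula)."""
    s2 = np.var(pooled, ddof=1)     # one-sample variance, blind to arm
    za = stats.norm.ppf(1 - alpha / 2)
    zb = stats.norm.ppf(power)
    return int(np.ceil(2 * (za + zb) ** 2 * s2 / delta ** 2))

rng = np.random.default_rng(6)
pilot = rng.normal(size=40)
pilot = (pilot - pilot.mean()) / pilot.std(ddof=1)  # force sample variance 1 for the demo
n_per_arm = blinded_reestimate_n(pilot, delta=0.5)  # 63 per arm at unit variance
```

Because s2 is computed without unblinding treatment assignments, the interim look does not touch the comparison of arms, which is the integrity argument made above.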


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号