Similar Documents
1.
Although generalized cross-validation (GCV) has frequently been applied to select the bandwidth when kernel methods are used to estimate non-parametric mixed-effect models, in which non-parametric mean functions model covariate effects and additive random effects account for overdispersion and correlation, the optimality of the GCV has not yet been explored. In this article, we construct a kernel estimator of the non-parametric mean function. An equivalence between the kernel estimator and a weighted least squares type estimator is provided, and the optimality of the GCV-based bandwidth is investigated. The theoretical derivations also show that kernel-based and spline-based GCV give very similar asymptotic results, which provides a solid basis for using kernel estimation in mixed-effect models. Simulation studies are undertaken to investigate the empirical performance of the GCV, and a real data example is analysed for illustration.
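As a rough illustration of the GCV criterion discussed in this abstract (not the authors' mixed-effect estimator), the sketch below selects the bandwidth of a plain Nadaraya-Watson smoother by minimising GCV(h) = mean squared residual / (1 - tr(S(h))/n)^2; the smoother, the synthetic data and the bandwidth grid are all assumptions made for illustration.

```python
# A minimal sketch of GCV-based bandwidth selection for a Nadaraya-Watson
# kernel smoother (independent errors only; not the mixed-effects estimator
# developed in the paper). Data and bandwidth grid are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

def smoother_matrix(x, h):
    """Nadaraya-Watson smoother matrix S(h) with a Gaussian kernel."""
    k = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return k / k.sum(axis=1, keepdims=True)

def gcv(h):
    s = smoother_matrix(x, h)
    resid = y - s @ y
    return np.mean(resid ** 2) / (1 - np.trace(s) / n) ** 2

grid = np.linspace(0.01, 0.2, 40)
h_opt = grid[np.argmin([gcv(h) for h in grid])]
print(f"GCV-selected bandwidth: {h_opt:.3f}")
```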

2.
The purpose of this paper is threefold. First, we obtain the asymptotic properties of the modified model selection criteria proposed by Hurvich et al. (1990, Improved estimators of Kullback-Leibler information for autoregressive model selection in small samples, Biometrika 77, 709-719) for autoregressive models. Second, we provide some insight into the better performance of these modified criteria. Third, we extend the modification introduced by these authors to model selection criteria commonly used in the class of self-exciting threshold autoregressive (SETAR) time series models. We show the improvements of the modified criteria in their finite-sample performance. In particular, for small and medium sample sizes the frequency of selecting the true model improves for the consistent criteria, and the root mean square error (RMSE) of prediction improves for the efficient criteria. These results are illustrated via simulation with SETAR models in which we assume that the threshold and the parameters are unknown.
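A minimal sketch of the idea behind such corrected criteria, assuming ordinary AR models fitted by conditional least squares and the generic small-sample correction AICc = AIC + 2k(k+1)/(n - k - 1); the SETAR extension proposed in the paper is not reproduced.

```python
# A minimal sketch of AR order selection with a bias-corrected AIC (AICc);
# the series, the candidate orders and the correction formula used here are
# illustrative assumptions, not the paper's exact criteria.
import numpy as np

rng = np.random.default_rng(1)
T = 80
y = np.zeros(T)
for t in range(2, T):                      # simulate an AR(2) series
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

def aicc_ar(y, p):
    """AICc of an AR(p) model fitted by conditional least squares."""
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))] +
                        [y[p - j:len(y) - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    n = len(Y)
    sigma2 = np.mean((Y - X @ beta) ** 2)
    k = p + 2                              # AR coefficients + intercept + variance
    aic = n * np.log(sigma2) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

best_p = min(range(1, 7), key=lambda p: aicc_ar(y, p))
print("AICc-selected AR order:", best_p)
```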

3.
This paper uses graphical methods to illustrate and compare the coverage properties of a number of methods for calculating confidence intervals for the difference between two independent binomial proportions. We investigate both small-sample and large-sample properties of both two-sided and one-sided coverage, with an emphasis on asymptotic methods. In terms of aligning the smoothed coverage probability surface with the nominal confidence level, we find that the score-based methods on the whole have the best two-sided coverage, although they have slight deficiencies for confidence levels of 90% or lower. For an easily taught, hand-calculated method, the Brown-Li 'Jeffreys' method appears to perform reasonably well, and in most situations, it has better one-sided coverage than the widely recommended alternatives. In general, we find that the one-sided properties of many of the available methods are surprisingly poor. In fact, almost none of the existing asymptotic methods achieve equal coverage on both sides of the interval, even with large sample sizes, and consequently if used as a non-inferiority test, the type I error rate (which is equal to the one-sided non-coverage probability) can be inflated. The only exception is the Gart-Nam 'skewness-corrected' method, which we express using modified notation in order to include a bias correction for improved small-sample performance, and an optional continuity correction for those seeking more conservative coverage. Using a weighted average of two complementary methods, we also define a new hybrid method that almost matches the performance of the Gart-Nam interval.
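As a hedged illustration of how such coverage properties can be evaluated, the sketch below computes the exact two-sided coverage of an interval for p1 - p2 by enumerating the binomial sample space; the simple Wald interval is used only as a stand-in, and the score, Jeffreys and Gart-Nam intervals studied in the paper would be substituted for it.

```python
# A minimal sketch of exact coverage computation for a confidence interval on
# the difference of two binomial proportions. The Wald interval below is an
# illustrative stand-in, not one of the methods recommended in the paper.
import numpy as np
from scipy.stats import binom, norm

def wald_interval(x1, n1, x2, n2, level=0.95):
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = norm.ppf(1 - (1 - level) / 2)
    return d - z * se, d + z * se

def exact_coverage(p1, p2, n1, n2, interval=wald_interval):
    """Probability that the interval covers p1 - p2, by full enumeration."""
    cover = 0.0
    for x1 in range(n1 + 1):
        for x2 in range(n2 + 1):
            lo, hi = interval(x1, n1, x2, n2)
            if lo <= p1 - p2 <= hi:
                cover += binom.pmf(x1, n1, p1) * binom.pmf(x2, n2, p2)
    return cover

print(f"Wald coverage at p1=0.1, p2=0.2, n=30: {exact_coverage(0.1, 0.2, 30, 30):.3f}")
```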

4.
The generalized cross-validation (GCV) method has been a popular technique for selecting tuning parameters for smoothing and penalty terms, and has become a standard tool for selecting tuning parameters in shrinkage models in recent work. Its computational ease and robustness compared to cross-validation also make it competitive for model selection. It is well known that the GCV method performs well for linear estimators, which are linear functions of the response variable, such as the ridge estimator. However, it may not perform well for nonlinear estimators, since the GCV emphasizes linear characteristics by taking the trace of the projection matrix. This paper explores the GCV for nonlinear estimators and further extends the results to correlated data in longitudinal studies. We expect that the nonlinear GCV and quasi-GCV developed in this paper will provide analogous tools for selecting tuning parameters in linear penalty models and penalized GEE models.
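A minimal sketch of GCV for a linear estimator, using ridge regression where the hat matrix is available in closed form; the data, the penalty grid and the choice of ridge regression are assumptions for illustration, and the nonlinear and quasi-GCV extensions of the paper are not shown.

```python
# A minimal sketch of GCV for ridge regression: with hat matrix
# H(lam) = X (X'X + lam I)^{-1} X', the criterion is
# GCV(lam) = mean(residual^2) / (1 - tr(H)/n)^2. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta = np.concatenate([np.ones(3), np.zeros(p - 3)])
y = X @ beta + rng.standard_normal(n)

def gcv_ridge(lam):
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y
    return np.mean(resid ** 2) / (1 - np.trace(H) / n) ** 2

grid = np.logspace(-3, 3, 50)
lam_opt = grid[np.argmin([gcv_ridge(l) for l in grid])]
print(f"GCV-selected ridge penalty: {lam_opt:.3f}")
```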

5.
We consider model-based prediction of a finite population total when a monotone transformation of the survey variable makes it appropriate to assume additive, homoscedastic errors. As the transformation to achieve this does not necessarily simultaneously produce an easily parameterized mean function, we assume only that the mean is a smooth function of the auxiliary variable and estimate it non-parametrically. The back transformation of predictions obtained on the transformed scale introduces bias which we remove using smearing. We obtain an asymptotic expansion for the prediction error which shows that prediction bias is asymptotically negligible and the prediction mean-squared error (MSE) using a non-parametric model remains in the same order as when a parametric model is adopted. The expansion also shows the effect of smearing on the prediction MSE and can be used to compute the asymptotic prediction MSE. We propose a model-based bootstrap estimate of the prediction MSE. The predictor produces competitive results in terms of bias and prediction MSE in a simulation study, and performs well on a population constructed from an Australian farm survey.
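A minimal sketch of smearing after a log transformation, assuming a simple linear fit on the transformed scale rather than the non-parametric smoother used in the paper; the data are synthetic.

```python
# A minimal sketch of a Duan-type smearing back-transformation: the mean is
# fitted on the log scale and predictions average exp(fitted + residual) over
# the empirical residuals, removing the naive retransformation bias.
import numpy as np

rng = np.random.default_rng(3)
n = 150
x = rng.uniform(1, 10, n)
y = np.exp(0.3 * x + rng.normal(0, 0.5, n))     # positive, skewed survey variable

z = np.log(y)
b1, b0 = np.polyfit(x, z, 1)                     # fit on the transformed scale
resid = z - (b0 + b1 * x)

def smearing_predict(x_new):
    """Bias-corrected back-transformed prediction of E[y | x_new]."""
    return np.mean(np.exp(b0 + b1 * x_new + resid))

naive = np.exp(b0 + b1 * 5.0)                    # ignores retransformation bias
print(f"naive: {naive:.2f}, smearing: {smearing_predict(5.0):.2f}")
```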

6.
A fast and accurate method of confidence interval construction for the smoothing parameter in penalised spline and partially linear models is proposed. The method is akin to a parametric percentile bootstrap in which Monte Carlo simulation is replaced by saddlepoint approximation, and it can therefore be viewed as an approximate bootstrap. It is applicable in a quite general setting, requiring only that the underlying estimator be the root of an estimating equation that is a quadratic form in normal random variables. This is the case under a variety of optimality criteria, such as those commonly denoted by maximum likelihood (ML), restricted ML (REML), generalized cross-validation (GCV) and Akaike's information criterion (AIC). Simulation studies reveal that under the ML and REML criteria the method delivers near-exact performance, with computational speeds that are an order of magnitude faster than existing exact methods and two orders of magnitude faster than a classical bootstrap. Perhaps most importantly, the proposed method also offers a computationally feasible alternative when no known exact or asymptotic methods exist, e.g. under GCV and AIC. The methodology is illustrated with an application to the well-known fossil data, where giving a range of plausible smoothed values can help answer questions about the statistical significance of apparent features in the data.

7.
Nested error linear regression models using survey weights have been studied in small area estimation to obtain efficient model-based and design-consistent estimators of small area means. The covariates in these nested error linear regression models are not subject to measurement errors. In practical applications, however, there are many situations in which the covariates are subject to measurement errors. In this paper, we develop a nested error linear regression model with an area-level covariate subject to functional measurement error. In particular, we propose a pseudo-empirical Bayes (PEB) predictor to estimate small area means. This predictor borrows strength across areas through the model and makes use of the survey weights to preserve the design consistency as the area sample size increases. We also employ a jackknife method to estimate the mean squared prediction error (MSPE) of the PEB predictor. Finally, we report the results of a simulation study on the performance of our PEB predictor and associated jackknife MSPE estimator.

8.
The most common forecasting methods in business are based on exponential smoothing, and the most common time series in business are inherently non-negative. Therefore it is of interest to consider the properties of the potential stochastic models underlying exponential smoothing when applied to non-negative data. We explore exponential smoothing state space models for non-negative data under various assumptions about the innovations, or error, process. We first demonstrate that prediction distributions from some commonly used state space models may have an infinite variance beyond a certain forecasting horizon. For multiplicative error models that do not have this flaw, we show that sample paths will converge almost surely to zero even when the error distribution is non-Gaussian. We propose a new model with similar properties to exponential smoothing, but which does not have these problems, and we develop some distributional properties for our new model. We then explore the implications of our results for inference, and compare the short-term forecasting performance of the various models using data on the weekly sales of over 300 items of costume jewelry. The main findings of the research are that the Gaussian approximation is adequate for estimation and one-step-ahead forecasting. However, as the forecasting horizon increases, the approximate prediction intervals become increasingly problematic. When the model is to be used for simulation purposes, a suitably specified scheme must be employed.

9.
Typically, an optimal smoothing parameter in a penalized spline regression is determined by minimizing an information criterion, such as one of the C_p, CV and GCV criteria. Since an explicit solution to the minimization problem for an information criterion cannot be obtained, it is necessary to carry out an iterative procedure to search for the optimal smoothing parameter. In order to avoid such extra calculation, a non-iterative optimization method for smoothness in penalized spline regression is proposed using the formulation of generalized ridge regression. By conducting numerical simulations, we verify that our method has better performance than other methods which optimize the number of basis functions and the single smoothing parameter by means of the CV or GCV criteria.

10.
王小燕 (Wang Xiaoyan) et al. 《统计研究》 (Statistical Research), 2014, 31(9): 107-112
Variable selection is an important step in statistical modelling; choosing appropriate variables yields robust models with simple structure and accurate prediction. This paper proposes a new bi-level penalized variable selection method for logistic regression, the adaptive Sparse Group Lasso (adSGL), whose distinctive feature is that it screens variables according to their group structure, achieving selection both within and between groups. The advantage of the method is that individual coefficients and group coefficients receive different degrees of penalization, which avoids over-penalizing large coefficients and thus improves the estimation and prediction accuracy of the model. The main computational difficulty is that the penalized likelihood is not strictly convex; the model is therefore solved by group coordinate descent, and a criterion for choosing the tuning parameters is established. Simulation studies show that, compared with the representative existing methods Sparse Group Lasso, Group Lasso and Lasso, adSGL not only improves bi-level selection accuracy but also reduces model error. Finally, adSGL is applied to a credit card credit-scoring study, where it achieves higher classification accuracy and robustness than ordinary logistic regression.
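For context, the sketch below fits only the plain Lasso-penalized logistic regression used as a comparison method in the paper, via scikit-learn; the adaptive Sparse Group Lasso itself (bi-level penalties solved by group coordinate descent) is not implemented here, and the data and penalty level are illustrative assumptions.

```python
# A minimal sketch of the Lasso-penalized logistic regression baseline that
# adSGL is compared against; this is NOT the adSGL estimator. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n, p = 300, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:4] = [1.5, -1.0, 0.8, -0.6]                 # only the first few variables matter
prob = 1 / (1 + np.exp(-(X @ beta)))
y = rng.binomial(1, prob)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)
selected = np.flatnonzero(model.coef_[0] != 0)
print("variables kept by the Lasso baseline:", selected)
```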

11.
It is quite common in epidemiology that we wish to assess the quality of estimators on a particular set of information, whereas the estimators may use a larger set of information. Two examples are studied: the first occurs when we construct a model for an event which happens if a continuous variable is above a certain threshold. We can compare estimators based on the observation of only the event or on the whole continuous variable. The other example is that of predicting the survival based only on survival information or using in addition information on a disease. We develop modified Akaike information criterion (AIC) and likelihood cross-validation (LCV) criteria to compare estimators in this non-standard situation. We show that a normalized difference of AIC has a bias equal to o(n^{-1}) if the estimators are based on well-specified models; a normalized difference of LCV always has a bias equal to o(n^{-1}). A simulation study shows that both criteria work well, although the normalized difference of LCV tends to be better and is more robust. Moreover, in the case of well-specified models the difference of risks boils down to the difference of statistical risks, which can be rather precisely estimated. For 'compatible' models the difference of risks is often the main term, but there can also be a difference of mis-specification risks.

12.
We consider non-parametric estimation of the interarrival-time density of a renewal process. For continuous-time observation, a projection estimator in the orthonormal Laguerre basis is built. Non-standard decompositions lead to bounds on the mean integrated squared error (MISE), from which rates of convergence on Sobolev-Laguerre spaces are deduced as the length of the observation interval grows large. The more realistic setting of discrete-time observation is more difficult to handle. A first strategy consists in neglecting the discretization error; a more precise strategy takes into account the convolution structure of the data. Under a simplifying 'dead-zone' condition, the corresponding MISE is given for any sampling step. In all three cases, an automatic model selection procedure is described and achieves the best MISE, up to a logarithmic term. The results are illustrated through a simulation study.
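A minimal sketch of a projection density estimator in the orthonormal Laguerre basis l_k(x) = sqrt(2) L_k(2x) exp(-x), with coefficients estimated by empirical means; the sample, the fixed dimension K, and the omission of the model selection step and of the discrete-observation corrections are all simplifications for illustration.

```python
# A minimal sketch of a Laguerre-basis projection estimator of a density on
# (0, inf): f_hat = sum_k a_k l_k with a_k estimated by sample means of l_k(X).
import numpy as np
from scipy.special import eval_laguerre

def laguerre_fn(k, x):
    """Orthonormal Laguerre function l_k(x) = sqrt(2) L_k(2x) exp(-x)."""
    return np.sqrt(2.0) * eval_laguerre(k, 2.0 * x) * np.exp(-x)

rng = np.random.default_rng(5)
sample = rng.gamma(shape=2.0, scale=1.0, size=500)   # i.i.d. interarrival times

K = 8                                                 # fixed dimension for the sketch
coefs = np.array([np.mean(laguerre_fn(k, sample)) for k in range(K)])

def density_estimate(x):
    return sum(a * laguerre_fn(k, x) for k, a in enumerate(coefs))

x = np.linspace(0.01, 8, 5)
print(np.round(density_estimate(x), 3))
```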

13.
Non-parametric estimation and bootstrap techniques play an important role in many areas of statistics. In the point process context, kernel intensity estimation has been limited to exploratory analysis because of its inconsistency, and some consistent alternatives have been proposed. Furthermore, most authors have considered kernel intensity estimators with scalar bandwidths, which can be very restrictive. This work focuses on a consistent kernel intensity estimator with an unconstrained bandwidth matrix, and we propose a smooth bootstrap for inhomogeneous spatial point processes. The consistency of the bootstrap mean integrated squared error (MISE) as an estimator of the MISE of the consistent kernel intensity estimator establishes the validity of the resampling procedure. Finally, we propose a plug-in bandwidth selection procedure based on the bootstrap MISE and compare its performance with several methods currently in use, through both a simulation study and an application to the spatial pattern of wildfires registered in Galicia (Spain) during 2006.
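As a rough illustration, the sketch below evaluates an (uncorrected) kernel intensity estimate with a full bandwidth matrix for a synthetic point pattern; the edge correction, the consistency modification and the bootstrap bandwidth selector proposed in the paper are omitted, and the bandwidth matrix is an arbitrary choice.

```python
# A minimal sketch of a kernel intensity estimate for a spatial point pattern
# with an unconstrained bandwidth matrix H and a bivariate Gaussian kernel:
# lambda_hat(s) = sum_i K_H(s - s_i), with no edge or consistency correction.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)
points = rng.uniform(0, 1, size=(200, 2))            # observed event locations

H = np.array([[0.010, 0.004],
              [0.004, 0.020]])                        # bandwidth (covariance) matrix

def intensity(x, y):
    """Sum of kernels centred at the observed points, evaluated at (x, y)."""
    return sum(multivariate_normal.pdf([x, y], mean=pt, cov=H) for pt in points)

print(f"estimated intensity at the centre: {intensity(0.5, 0.5):.1f}")
```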

14.
There has been significant new work published recently on the subject of model selection. Notably, Rissanen (1986, 1987, 1988) has introduced new criteria based on the notion of stochastic complexity, and Hurvich and Tsai (1989) have introduced a bias-corrected version of Akaike's information criterion. In this paper, a Monte Carlo study is conducted to evaluate the relative performance of these new model selection criteria against the commonly used alternatives. In addition, we compare the performance of all the criteria in a number of situations not considered in earlier studies: robustness to distributional assumptions, collinearity among regressors, and non-stationarity in a time series. The evaluation is based on the number of times the correct model is chosen and the out-of-sample prediction error. The results of this study suggest that Rissanen's criteria are sensitive to the assumptions and choices that need to be made in their application, and so are sometimes unreliable. While many of the criteria often perform satisfactorily, across experiments the Schwarz Bayesian Information Criterion (and the related Bayesian Estimation Criterion of Geweke-Meese) seems to consistently outperform the other alternatives considered.
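A minimal sketch of the kind of Monte Carlo comparison described here, counting how often AIC, AICc and BIC select the true set of regressors among all subsets in a small linear model; the data-generating design is an assumption, and the stochastic-complexity criteria are not implemented.

```python
# A minimal sketch of a Monte Carlo comparison of model selection criteria:
# frequency with which each criterion picks the true regressor subset.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)

def criteria(y, X):
    """AIC, bias-corrected AIC and BIC of an OLS fit (Gaussian likelihood)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ beta) ** 2)
    aic = n * np.log(sigma2) + 2 * k
    return {"AIC": aic,
            "AICc": aic + 2 * k * (k + 1) / (n - k - 1),
            "BIC": n * np.log(sigma2) + k * np.log(n)}

n, p, true = 40, 5, (0, 1)                       # regressors 0 and 1 are active
models = [c for r in range(1, p + 1) for c in combinations(range(p), r)]
hits = {"AIC": 0, "AICc": 0, "BIC": 0}
for _ in range(500):
    X = rng.standard_normal((n, p))
    y = X[:, 0] + 0.8 * X[:, 1] + rng.standard_normal(n)
    scores = {m: criteria(y, X[:, list(m)]) for m in models}
    for name in hits:                            # count correct-model selections
        hits[name] += min(models, key=lambda m: scores[m][name]) == true

print({name: count / 500 for name, count in hits.items()})
```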

16.
The stratified Cox model is commonly used for stratified clinical trials with time-to-event endpoints. The estimated log hazard ratio is approximately a weighted average of the corresponding stratum-specific Cox model estimates using inverse-variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50-200 subjects per treatment), we propose an alternative approach in which stratum-specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al. to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails development of a remarkably accurate plug-in formula for the variance of RGLR-based estimated log hazard ratios. We demonstrate using simulations that our proposed two-step RGLR analysis delivers notably better results, through smaller estimation bias and mean squared error and larger power, than the stratified Cox model analysis when there is a treatment-by-stratum interaction, with similar performance when there is no interaction. Additionally, our method controls the type I error rate in small samples, while the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
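A hedged sketch of the combination step only, assuming stratum-specific log hazard ratio estimates and their variances are already available (e.g. from RGLR or stratum-wise Cox fits); the numbers are made up, minimum risk weights are not shown, and only sample-size and inverse-variance weighting are compared.

```python
# A minimal sketch of pooling stratum-specific log hazard ratio estimates with
# sample-size versus inverse-variance weights. All inputs are illustrative.
import numpy as np

log_hr = np.array([-0.45, -0.10, -0.60])     # estimates from three strata
var = np.array([0.09, 0.05, 0.20])           # their estimated variances
n_strat = np.array([60, 110, 40])            # subjects per stratum

w_ss = n_strat / n_strat.sum()               # sample-size weights
w_iv = (1 / var) / (1 / var).sum()           # inverse-variance weights

for name, w in [("sample-size", w_ss), ("inverse-variance", w_iv)]:
    est = np.sum(w * log_hr)                 # combined log hazard ratio
    se = np.sqrt(np.sum(w ** 2 * var))       # standard error of the combination
    print(f"{name:>16}: log HR = {est:.3f}, SE = {se:.3f}")
```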

17.
In an online prediction context, the authors introduce a new class of mongrel criteria that allow for the weighting of candidate models and the combination of their predictions based on both model-based and empirical measures of their performance. They present simulation results which show that model averaging using the mongrel-derived weights leads, in small samples, to predictions that are more accurate than those obtained by Bayesian weight updating, provided that none of the candidate models is too distant from the data generator.

18.
In this paper, we propose a robust estimation procedure for a class of non-linear regression models when the covariates are contaminated with Laplace measurement error. The aim is to construct an estimator of the regression parameters that is less affected by possible outliers and a heavy-tailed underlying distribution, while also reducing the bias introduced by the measurement error. Starting with the modal regression procedure developed for the measurement-error-free case, a non-trivial modification is made so that the modified version can effectively correct the potential bias caused by measurement error. Large-sample properties of the proposed estimator, such as the convergence rate and the asymptotic normality, are thoroughly investigated. A simulation study and a real data application are conducted to illustrate the satisfactory finite-sample performance of the proposed estimation procedure.

19.
In statistical analysis, one of the most important tasks is to select the relevant explanatory variables that best explain the dependent variable. Variable selection is usually performed within regression analysis and is implemented so as to minimize an information criterion (IC) for the regression model. Information criteria directly affect the predictive power and estimation quality of the selected models. There are numerous information criteria in the literature, such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), and these criteria have been modified to improve the performance of the selected models. BIC has been extended with alternative modifications based on the use of a prior and the information matrix; the information matrix-based BIC (IBIC) and the scaled unit information prior BIC (SPBIC) are efficient criteria of this kind. In this article, we propose performing variable selection via a differential evolution (DE) algorithm that minimizes IBIC and SPBIC in linear regression analysis. We conclude that these alternative criteria are very useful for variable selection, and we illustrate the efficiency of this combination with various simulation and application studies.
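A minimal sketch of the general idea, assuming the ordinary BIC of an OLS fit as the objective (the IBIC and SPBIC variants studied in the article are not implemented): each continuous coordinate optimised by differential evolution is thresholded at 0.5 to give an inclusion indicator.

```python
# A minimal sketch of variable selection by differential evolution, minimising
# the ordinary BIC of an OLS fit over inclusion indicators encoded as [0, 1]
# coordinates thresholded at 0.5. Data and threshold are illustrative choices.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(8)
n, p = 120, 8
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_normal(n)

def bic_of_subset(z):
    mask = z > 0.5                                   # decode inclusion indicators
    Xs = np.column_stack([np.ones(n), X[:, mask]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    sigma2 = np.mean((y - Xs @ beta) ** 2)
    return n * np.log(sigma2) + Xs.shape[1] * np.log(n)

result = differential_evolution(bic_of_subset, bounds=[(0, 1)] * p,
                                seed=0, maxiter=50)
print("selected variables:", np.flatnonzero(result.x > 0.5))
```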

20.
A random effects model can account for lack of fit of a regression model and increase the precision of estimated area-level means. However, when the synthetic mean already provides accurate estimates, the prior distribution may inflate the estimation error. It is therefore desirable to consider an uncertain prior distribution, expressed as a mixture of a one-point distribution and a proper prior distribution. In this paper, we develop an empirical Bayes approach for estimating area-level means using the uncertain prior distribution in the context of a natural exponential family, which we call the empirical uncertain Bayes (EUB) method. The regression models considered in this paper include the Poisson-gamma, the binomial-beta and the normal-normal (Fay-Herriot) models, which are typically used in small area estimation. We obtain estimators of the hyperparameters based on the marginal likelihood by using a well-known expectation-maximization algorithm and propose the EUB estimators of area means. For risk evaluation of the EUB estimator, we derive a second-order unbiased estimator of a conditional mean squared error using techniques of numerical calculation. Through simulation studies and real data applications, we evaluate the performance of the EUB estimator and compare it with the usual empirical Bayes estimator.
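For context, the sketch below computes the usual empirical Bayes estimator in the Fay-Herriot model, the benchmark against which the EUB method is compared, with the model variance estimated by marginal maximum likelihood; the uncertain-prior mixture, the EM algorithm for the exponential-family cases and the MSE estimation of the paper are not reproduced, and the data are synthetic.

```python
# A minimal sketch of the standard empirical Bayes area-mean estimator in the
# Fay-Herriot model: y_i = theta_i + e_i with known sampling variances D_i,
# theta_i = x_i'beta + v_i, v_i ~ N(0, A); A is estimated by marginal ML and
# area means are shrunk towards the regression synthetic estimate.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(9)
m = 30                                        # number of small areas
x = np.column_stack([np.ones(m), rng.uniform(0, 1, m)])
D = rng.uniform(0.3, 0.7, m)                  # known sampling variances
theta = x @ np.array([1.0, 2.0]) + rng.normal(0, 0.4, m)
y = theta + rng.normal(0, np.sqrt(D))         # direct area-level estimates

def gls_beta(A):
    """GLS estimate of beta for a given model variance A."""
    w = 1.0 / (A + D)
    return np.linalg.solve(x.T @ (w[:, None] * x), x.T @ (w * y))

def neg_loglik(A):
    beta = gls_beta(A)
    r = y - x @ beta
    return 0.5 * np.sum(np.log(A + D) + r ** 2 / (A + D))

A_hat = minimize_scalar(neg_loglik, bounds=(1e-6, 10), method="bounded").x
beta_hat = gls_beta(A_hat)
shrink = A_hat / (A_hat + D)
eb = shrink * y + (1 - shrink) * (x @ beta_hat)   # empirical Bayes area means
print("estimated model variance:", round(A_hat, 3))
```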
