Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
We propose a simple method for evaluating the model that has been chosen by an adaptive regression procedure, our main focus being the lasso. This procedure deletes each chosen predictor and refits the lasso to get a set of models that are “close” to the chosen “base model,” and compares the error rate of the base model with those of the nearby models. If the deletion of a predictor leads to significant deterioration in the model's predictive power, the predictor is called indispensable; otherwise, the nearby model is called acceptable and can serve as a good alternative to the base model. This provides both an assessment of the predictive contribution of each variable and a set of alternative models that may be used in place of the chosen model. We call this procedure “Next-Door analysis” since it examines models “next” to the base model. It can be applied to supervised learning problems with ℓ1 penalization and to stepwise procedures. We have implemented it in the R language as a library to accompany the well-known glmnet library. The Canadian Journal of Statistics 48: 447–470; 2020 © 2020 Statistical Society of Canada
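The leave-one-predictor-out loop described above can be sketched as follows. This is a minimal numpy illustration only: a hand-rolled coordinate-descent lasso, simulated data, and an arbitrary 10% error tolerance stand in for the authors' glmnet-based implementation.

```python
import numpy as np

def lasso_cd(X, y, alpha, iters=300):
    """Plain coordinate descent for (1/(2n))||y - Xb||^2 + alpha * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]          # partial residual
            rho = X[:, j] @ r_j / n
            b[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_ss[j]
    return b

rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.5, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta + rng.normal(size=n)

# Fit the "base model" and record its in-sample error.
b_base = lasso_cd(X, y, alpha=0.1)
chosen = np.flatnonzero(np.abs(b_base) > 1e-8)
err_base = np.mean((y - X @ b_base) ** 2)

# "Next-Door" step: delete each chosen predictor, refit, compare errors.
err_near = {}
for j in chosen:
    keep = np.delete(np.arange(p), j)
    b_j = lasso_cd(X[:, keep], y, alpha=0.1)
    err_near[j] = np.mean((y - X[:, keep] @ b_j) ** 2)

# Flag a predictor "indispensable" if deleting it inflates the error
# noticeably (the 10% tolerance here is an illustrative threshold).
indispensable = [j for j in chosen if err_near[j] > 1.1 * err_base]
```

In this simulation the three truly active predictors survive deletion testing, while the nearby models obtained by dropping a null predictor remain acceptable alternatives.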

2.
This paper considers a hierarchical Bayesian analysis of regression models using a class of Gaussian scale mixtures. This class provides a robust alternative to the common use of the Gaussian distribution as a prior, in particular for estimating the regression function subject to uncertainty about the constraint. For this purpose, we use a family of rectangular screened multivariate scale mixtures of Gaussian distributions as a prior for the regression function, which is flexible enough to reflect the degrees of uncertainty about the functional constraint. Specifically, we propose a hierarchical Bayesian regression model for the constrained regression function with uncertainty, on the basis of a three-stage prior hierarchy with Gaussian scale mixtures, referred to as hierarchical screened scale mixture of Gaussian regression models (HSMGRM). We describe distributional properties of HSMGRM and an efficient Markov chain Monte Carlo algorithm for posterior inference, and apply the proposed model to real applications with constrained regression models subject to uncertainty.

3.
A cluster methodology, motivated by a robust similarity matrix, is proposed for identifying likely multivariate outlier structure and for estimating weighted least-squares (WLS) regression parameters in linear models. The proposed method is an agglomeration of procedures that runs from clustering the n observations, through a test of the ‘no-outlier hypothesis’ (TONH), to a weighted least-squares regression estimation. The cluster phase partitions the n observations into a main cluster of size h and a minor cluster of size n − h. A robust distance emerges from the main cluster, upon which the test of the no-outlier hypothesis is conducted. An initial WLS regression estimate is computed from the robust distance obtained from the main cluster. Until convergence, a re-weighted least-squares (RLS) regression estimate is updated with weights based on the normalized residuals. The proposed procedure blends an agglomerative hierarchical cluster analysis with complete linkage, through the TONH, into the re-weighted regression estimation phase; hence we propose to call it cluster-based re-weighted regression (CBRR). The CBRR is compared with three existing procedures using two data sets known to exhibit masking and swamping. The performance of CBRR is further examined through a simulation experiment. The results obtained from the data-set illustrations and the Monte Carlo study show that the CBRR is effective in detecting multivariate outliers where other methods are susceptible to masking and swamping. The CBRR does not require enormous computation and is substantially resistant to masking and swamping.
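The re-weighting phase of such procedures can be sketched with iteratively re-weighted least squares. The clustering/TONH phase is omitted here, and the Huber-type weight function and MAD residual scale below are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)
y[:3] += 10.0                                  # plant a few gross outliers

beta = np.linalg.lstsq(X, y, rcond=None)[0]    # initial (unweighted) fit
for _ in range(100):
    r = y - X @ beta
    s = np.median(np.abs(r - np.median(r))) / 0.6745   # MAD residual scale
    u = np.abs(r) / s                                  # normalized residuals
    w = np.where(u <= 1.345, 1.0, 1.345 / u)           # Huber-type weights
    Xw = X * w[:, None]
    beta_new = np.linalg.solve(Xw.T @ X, Xw.T @ y)     # weighted LS update
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new
```

The planted outliers end up with weights near zero, so the final slope estimate is close to the true value despite the contamination.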

4.
Zhuqing Yu, Statistics, 2017, 51(2): 277–293
It has been found, under a smooth function model setting, that the n out of n bootstrap is inconsistent at stationary points of the smooth function, but that the m out of n bootstrap is consistent, provided that the correct convergence rate of the plug-in smooth function estimator is specified. By considering a more general moving-parameter framework, we show that neither of the above bootstrap methods is consistent uniformly over neighbourhoods of stationary points, so that anomalies often arise in the coverage of bootstrap sets over certain subsets of parameter values. We propose a recentred bootstrap procedure for constructing confidence sets with uniformly correct coverage over compact sets containing stationary points. A weighted bootstrap procedure is also proposed as an alternative under more general circumstances. Unlike the m out of n bootstrap, neither procedure requires knowledge of the convergence rate of the smooth function estimator. Empirical performance of our procedures is illustrated with numerical examples.
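The m out of n bootstrap itself is simple to state: resample m < n observations with replacement and recompute the plug-in statistic. A generic numpy sketch follows, with a placeholder statistic g(mean) = mean² whose derivative vanishes at the stationary point 0; the recentred and weighted procedures of the paper are not reproduced here.

```python
import numpy as np

def m_out_of_n_bootstrap(data, stat, m, B=1000, seed=0):
    """Resample m < n points with replacement B times; return the
    bootstrap replicates of the plug-in statistic."""
    rng = np.random.default_rng(seed)
    n = len(data)
    return np.array([stat(data[rng.integers(0, n, size=m)]) for _ in range(B)])

rng = np.random.default_rng(2)
x = rng.normal(loc=0.3, size=500)
# Plug-in statistic g(mean) = mean^2: g'(0) = 0, so 0 is a stationary
# point where the n-out-of-n bootstrap is known to be inconsistent.
reps = m_out_of_n_bootstrap(x, lambda s: np.mean(s) ** 2, m=int(500 ** 0.5))
lo, hi = np.percentile(reps, [2.5, 97.5])
```

A common illustrative choice of subsample size is m = n^{1/2}, as used above; in practice m must grow with n while m/n → 0.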

5.
We propose a new stochastic approximation (SA) algorithm for maximum-likelihood estimation (MLE) in the incomplete-data setting. This algorithm is most useful for problems where the EM algorithm is not possible due to an intractable E-step or M-step. Compared to other algorithms that have been proposed for intractable EM problems, such as the MCEM algorithm of Wei and Tanner (1990), our proposed algorithm appears more generally applicable and efficient. The approach we adopt is inspired by the Robbins-Monro (1951) stochastic approximation procedure, and we show that the proposed algorithm can be used to solve some of the long-standing problems in computing an MLE with incomplete data. We prove that in general O(n) simulation steps are required to compute the MLE with the SA algorithm, whereas O(n log n) simulation steps are required with the MCEM and/or MCNR algorithms, where n is the sample size of the observations. Examples include computing the MLE in the nonlinear errors-in-variables model and the nonlinear regression model with random effects.
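The Robbins-Monro recursion that inspires such SA algorithms updates θ_{k+1} = θ_k + a_k g_k, where g_k is a noisy evaluation of the mean field and the gains satisfy Σ a_k = ∞ and Σ a_k² < ∞. A toy numpy sketch on an illustrative root-finding problem (not the incomplete-data MLE setting of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 0.0
# Solve E[g(theta, xi)] = 0 with g(theta, xi) = (2 - theta) + xi,
# i.e. the root theta* = 2 is observed only through the noise xi.
for k in range(1, 20001):
    xi = rng.normal()
    g = (2.0 - theta) + xi      # noisy evaluation of the mean field
    theta += g / k              # Robbins-Monro gains a_k = 1/k
```

Despite never seeing a noise-free evaluation, the iterate settles at the root because the decreasing gains average the noise away.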

6.
For a multivariate linear model, Wilks' likelihood ratio test (LRT) constitutes one of the cornerstone tools. However, the computation of its quantiles under the null or the alternative hypothesis requires complex analytic approximations, and more importantly, these distributional approximations are feasible only for moderate dimension of the dependent variable, say p ≤ 20. On the other hand, assuming that the data dimension p as well as the number q of regression variables are fixed while the sample size n grows, several asymptotic approximations have been proposed in the literature for Wilks' Λ, including the widely used chi-square approximation. In this paper, we consider necessary modifications to Wilks' test in a high-dimensional context, specifically assuming a high data dimension p and a large sample size n. Based on recent random matrix theory, the correction we propose to Wilks' test is asymptotically Gaussian under the null hypothesis, and simulations demonstrate that the corrected LRT has very satisfactory size and power, not only in the large-p, large-n context but also for moderately large data dimensions such as p = 30 or p = 50. As a byproduct, we give an explanation of why the standard chi-square approximation fails for high-dimensional data. We also introduce a new procedure for the classical multiple-sample significance test in multivariate analysis of variance which is valid for high-dimensional data.
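For reference, Wilks' Λ compares within-group to total scatter, Λ = det(E)/det(E + H). A small numpy sketch computing Λ for a two-group MANOVA with simulated data (the high-dimensional correction of the paper is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(4)
p, per = 3, 40
X1 = rng.normal(size=(per, p))           # group 1
X2 = rng.normal(size=(per, p)) + 0.5     # group 2, mean shifted by 0.5

grand = np.vstack([X1, X2]).mean(axis=0)
# E: pooled within-group (error) scatter; H: between-group scatter.
E = sum(((G - G.mean(0)).T @ (G - G.mean(0)) for G in (X1, X2)),
        np.zeros((p, p)))
H = sum((per * np.outer(G.mean(0) - grand, G.mean(0) - grand)
         for G in (X1, X2)), np.zeros((p, p)))
wilks_lambda = np.linalg.det(E) / np.linalg.det(E + H)
```

Values of Λ near 1 indicate no group separation; the mean shift above pushes Λ below 1, and small values lead to rejection of equal group means.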

7.
In this article we present a simple procedure to test the null hypothesis of equality of two regression curves against one-sided alternatives in a general nonparametric and heteroscedastic setup. The test is based on the comparison of the sample averages of the estimated residuals in each regression model under the null hypothesis. The test statistic has an asymptotic normal distribution and can detect any local alternative converging at rate n^{-1/2}. Some simulations and an application to a data set are included.
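The residual-comparison idea can be sketched as follows: fit a pooled curve under the null of equal regression functions, then compare the average residual in each sample. A Nadaraya-Watson smoother with a fixed bandwidth stands in here for whatever nonparametric estimator the authors actually use:

```python
import numpy as np

def nw(x0, x, y, h):
    """Nadaraya-Watson estimate with a Gaussian kernel at points x0."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(5)
n = 200
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
y1 = np.sin(2 * x1) + rng.normal(scale=0.2, size=n)
y2 = np.sin(2 * x2) + 0.3 + rng.normal(scale=0.2, size=n)  # curve shifted up

# Pooled fit under H0 (equal curves), then average residuals per sample.
x, y = np.concatenate([x1, x2]), np.concatenate([y1, y2])
r1 = y1 - nw(x1, x, y, h=0.1)
r2 = y2 - nw(x2, x, y, h=0.1)
diff = r2.mean() - r1.mean()   # large positive values favour the alternative
```

Under the null the two residual averages agree up to noise; under the one-sided alternative simulated above, the second sample's average residual is systematically larger.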

8.
In this article, we propose a kernel-based estimator for the finite-dimensional parameter of a partially additive linear quantile regression model. For dependent processes that are strictly stationary and absolutely regular, we establish a precise convergence rate and show that the estimator is root-n consistent and asymptotically normal. To help facilitate inferential procedures, a consistent estimator for the asymptotic variance is also provided. In addition to conducting a simulation experiment to evaluate the finite-sample performance of the estimator, an application to US inflation is presented. We use the real-data example to motivate how partially additive linear quantile models can offer an alternative modeling option for time-series data.

9.
Partially linear regression models are semiparametric models that contain both linear and nonlinear components. They are extensively used in many scientific fields for their flexibility and convenient interpretability. In such analyses, testing the significance of the regression coefficients in the linear component is typically a key focus. Under the high-dimensional setting, i.e., “large p, small n,” the conventional F-test strategy does not apply because the coefficients need to be estimated through regularization techniques. In this article, we develop a new test using a U-statistic of order two, relying on a pseudo-estimate of the nonlinear component from the classical kernel method. Using the martingale central limit theorem, we prove the asymptotic normality of the proposed test statistic under some regularity conditions. We further demonstrate our proposed test's finite-sample performance by simulation studies and by analyzing some breast cancer gene expression data.

10.
Nonlinear mixed-effects (NLME) models are flexible enough to handle repeated-measures data from various disciplines. In this article, we propose both maximum-likelihood and restricted maximum-likelihood estimation of NLME models using first-order conditional expansion (FOCE) and the expectation–maximization (EM) algorithm. The FOCE-EM algorithm implemented in the ForStat procedure SNLME is compared with the Lindstrom and Bates (LB) algorithm implemented in both the SAS macro NLINMIX and the S-Plus/R function nlme in terms of computational efficiency and statistical properties. Two real-world data sets (an orange tree data set and a Chinese fir (Cunninghamia lanceolata) data set) and a simulated data set were used for evaluation. FOCE-EM converged for all mixed models derived from the base model in the two real-world cases, while LB did not, especially for the models in which random effects are simultaneously considered in several parameters to account for between-subject variation. However, both algorithms gave identical parameter estimates and fit statistics for the converged models. We therefore recommend using FOCE-EM in NLME models, particularly when convergence is a concern in model selection.

11.
In this paper we introduce a continuous tree mixture model: a mixture of undirected graphical models with tree-structured graphs, viewed as a nonparametric approach to multivariate analysis. We estimate its parameters, the component edge sets and the mixture proportions, through a regularized maximum likelihood procedure. Our new algorithm, which combines the expectation-maximization algorithm with a modified version of Kruskal's algorithm, simultaneously estimates and prunes the mixture component trees. Simulation studies indicate that this method performs better than the alternative Gaussian graphical mixture model. The proposed method is also applied to a water-level data set and compared with the results of a Gaussian mixture model.
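The tree-structure step rests on Kruskal's algorithm: for tree-structured graphical models one typically finds a maximum-weight spanning tree over pairwise (e.g. mutual-information) weights. A self-contained union-find sketch, with illustrative weights rather than quantities estimated from data:

```python
import numpy as np

def kruskal_mst(weights):
    """Kruskal's algorithm on a dense symmetric weight matrix; returns the
    edge list of a minimum spanning tree (union-find based)."""
    p = weights.shape[0]
    parent = list(range(p))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    edges = sorted((weights[i, j], i, j)
                   for i in range(p) for j in range(i + 1, p))
    tree = []
    for _, i, j in edges:                   # cheapest edges first
        ri, rj = find(i), find(j)
        if ri != rj:                        # keep only cycle-free edges
            parent[ri] = rj
            tree.append((i, j))
    return tree

# To get a *maximum*-weight tree (as in Chow-Liu-style tree learning),
# negate the weights and reuse the minimum spanning tree routine.
mi = np.array([[0, 3, 1], [3, 0, 2], [1, 2, 0]], dtype=float)
tree = kruskal_mst(-mi)
```

On these toy weights the two strongest pairwise associations, (0, 1) and (1, 2), form the tree, which is exactly the behaviour a tree-learning step relies on.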

12.
This article considers a simple test for the correct specification of linear spatial autoregressive models, assuming that the choice of the weight matrix W_n is correct. We derive the limiting distributions of the test under the null hypothesis of correct specification and under a sequence of local alternatives. We show that the test is asymptotically free of nuisance parameters under the null and prove its consistency. To improve the finite-sample performance of our test, we also propose a residual-based wild bootstrap and justify its asymptotic validity. We conduct a small set of Monte Carlo simulations to investigate the finite-sample properties of our tests. Finally, we apply the test to two empirical data sets: the vote cast and the economic growth rate. We reject the linear spatial autoregressive model in the vote-cast example but fail to reject it in the economic-growth-rate example. Supplementary materials for this article are available online.

13.
Typical panel data models make use of the assumption that the regression parameters are the same for each individual cross-sectional unit. We propose tests for slope heterogeneity in panel data models. Our tests are based on the conditional Gaussian likelihood function in order to avoid the incidental-parameters problem induced by the inclusion of individual fixed effects for each cross-sectional unit. We derive the Conditional Lagrange Multiplier test, which is valid in cases where N → ∞ and T is fixed. The test applies to both balanced and unbalanced panels. We extend the test to account for general heteroskedasticity, where each cross-sectional unit has its own form of heteroskedasticity. The modification is possible if T is large enough to estimate the regression coefficients for each cross-sectional unit, using the MINQUE unbiased estimator for regression variances under heteroskedasticity. All versions of the test have a standard normal distribution under general assumptions on the error distribution as N → ∞. A Monte Carlo experiment shows that the test has very good size properties under all specifications considered, including heteroskedastic errors. In addition, the power of our test is very good relative to existing tests, particularly when T is not large.

14.
In this paper, we investigate the problem of testing semiparametric hypotheses in locally stationary processes. The proposed method is based on an empirical version of the L2‐distance between the true time varying spectral density and its best approximation under the null hypothesis. As this approach only requires estimation of integrals of the time varying spectral density and its square, we do not have to choose a smoothing bandwidth for the local estimation of the spectral density – in contrast to most other procedures discussed in the literature. Asymptotic normality of the test statistic is derived both under the null hypothesis and the alternative. We also propose a bootstrap procedure to obtain critical values in the case of small sample sizes. Additionally, we investigate the finite sample properties of the new method and compare it with the currently available procedures by means of a simulation study. Finally, we illustrate the performance of the new test in two data examples, one regarding log returns of the S&P 500 and the other a well‐known series of weekly egg prices.

15.
Nonlinear mixed-effects models are very useful for analyzing repeated-measures data and are used in a variety of applications. Normal distributions for random effects and residual errors are usually assumed, but such assumptions make inferences vulnerable to the presence of outliers. In this work, we introduce an extension of a normal nonlinear mixed-effects model that considers a subclass of elliptically contoured distributions for both random effects and residual errors. This elliptical subclass, the scale mixtures of normal (SMN) distributions, includes heavy-tailed multivariate distributions, such as the Student-t, contaminated normal and slash distributions, among others, and represents an interesting alternative for outlier accommodation while maintaining the elegance and simplicity of maximum-likelihood theory. We propose an exact estimation procedure to obtain the maximum-likelihood estimates of the fixed effects and variance components, using a stochastic approximation of the EM algorithm. We compare the performance of the normal and SMN models on two real data sets.
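The SMN representation writes Y = μ + U^{-1/2} Z with Z Gaussian and U a positive mixing variable; taking U ~ Gamma(ν/2, ν/2) recovers the Student-t distribution. A quick simulation check of this fact (a toy illustration, not the paper's stochastic-approximation EM fitting procedure):

```python
import numpy as np

rng = np.random.default_rng(6)
nu, n = 4.0, 200_000
# Mixing variable U ~ Gamma(nu/2, rate nu/2), i.e. shape nu/2, scale 2/nu.
u = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)
z = rng.normal(size=n)
t_smn = z / np.sqrt(u)                   # scale mixture of normals
t_ref = rng.standard_t(df=nu, size=n)    # direct Student-t draws to compare
```

Summary statistics of the two samples agree up to Monte Carlo error, confirming the mixture representation; other SMN members (slash, contaminated normal) arise from different mixing laws.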

16.
Mixture of linear mixed-effects models has received considerable attention in longitudinal studies, including medical research, social science and economics. The inferential question of interest is often the identification of critical factors that affect the responses. We consider a Bayesian approach to select the important fixed and random effects in the finite mixture of linear mixed-effects models. To accomplish our goal, latent variables are introduced to facilitate the identification of influential fixed and random components and to classify the membership of observations in the longitudinal data. A spike-and-slab prior for the regression coefficients is adopted to sidestep the potential complications of highly collinear covariates and to handle large p and small n issues in the variable selection problems. Here we employ Markov chain Monte Carlo (MCMC) sampling techniques for posterior inferences and explore the performance of the proposed method in simulation studies, followed by an actual psychiatric data analysis concerning depressive disorder.

17.
We propose the Laplace Error Penalty (LEP) function for variable selection in high‐dimensional regression. Unlike penalty functions using piecewise splines construction, the LEP is constructed as an exponential function with two tuning parameters and is infinitely differentiable everywhere except at the origin. With this construction, the LEP‐based procedure acquires extra flexibility in variable selection, admits a unified derivative formula in optimization and is able to approximate the L0 penalty as closely as possible. We show that the LEP procedure can identify relevant predictors in exponentially high‐dimensional regression with normal errors. We also establish the oracle property for the LEP estimator. Although not being convex, the LEP yields a convex penalized least squares function under mild conditions if p is no greater than n. A coordinate descent majorization‐minimization algorithm is introduced to implement the LEP procedure. In simulations and a real data analysis, the LEP methodology performs favorably among competitive procedures.
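One exponential-type penalty with the stated properties (two tuning parameters, infinitely differentiable away from the origin, approaching the L0 penalty as the shape parameter shrinks) is p(t) = λ(1 − exp(−|t|/κ)); whether this matches the authors' exact LEP formula is an assumption here:

```python
import numpy as np

def lep(t, lam=1.0, kappa=0.1):
    """Exponential-type penalty: zero at the origin, smooth elsewhere, and
    tending to the L0 penalty lam * 1{t != 0} as kappa -> 0.
    (Illustrative form; the authors' exact LEP definition may differ.)"""
    return lam * (1.0 - np.exp(-np.abs(t) / kappa))

t = np.linspace(-2.0, 2.0, 401)
near_l0 = lep(t, kappa=1e-3)   # essentially lam off the origin, 0 at it
```

For small κ the penalty is nearly flat at λ away from zero (mimicking L0 and giving near-unbiased estimates of large coefficients), while larger κ makes it behave more like a rescaled L1 penalty near the origin.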

18.
In this paper, we suggest a simple test and an easily applicable modeling procedure for threshold moving average (TMA) models. First, based on the residuals fitted by maximum-likelihood estimation (MLE) for MA models, we construct a simple statistic, obtained from an arranged linear regression and approximately F-distributed, to test for threshold nonlinearity and to specify the threshold variables. We then use scatterplots to identify the number and locations of the potential thresholds. Finally, combining this statistic with the Akaike information criterion, we propose a procedure for building TMA models. Both the power of the test statistic and the convenience of the modeling procedure are demonstrated through simulation experiments and an application to a real example.

19.
In this paper, we propose a method for testing absolutely regular and possibly nonstationary nonlinear time series, with application to general AR-ARCH models. Our test statistic is based on a marked empirical process of residuals, which is shown to converge to a Gaussian process with respect to the Skorokhod topology. This testing procedure was first introduced by Stute [Nonparametric model checks for regression, Ann. Statist. 25 (1997), pp. 613–641] and then widely developed by Ngatchou-Wandji [Weak convergence of some marked empirical processes: Application to testing heteroscedasticity, J. Nonparametr. Stat. 14 (2002), pp. 325–339; Checking nonlinear heteroscedastic time series models, J. Statist. Plann. Inference 133 (2005), pp. 33–68; Local power of a Cramér-von Mises type test for parametric autoregressive models of order one, Comput. Math. Appl. 56(4) (2008), pp. 918–929] under more general conditions. Applications to general AR-ARCH models are given.

20.
Coefficient estimation in linear regression models with missing data is routinely carried out in the mean-regression framework. However, mean-regression theory breaks down if the error variance is infinite. In addition, correct specification of the likelihood function for existing imputation approaches is often challenging in practice, especially for skewed data. In this paper, we develop a novel composite quantile regression and a weighted quantile average estimation procedure for parameter estimation in linear regression models when some responses are missing at random. Instead of imputing a missing response by randomly drawing from its conditional distribution, we propose to impute both missing and observed responses by their estimated conditional quantiles given the observed data, and to use the parametrically estimated propensity scores to weight the check functions that define a regression parameter. Both estimation procedures are resistant to heavy-tailed errors or outliers in the response and achieve good robustness and efficiency. Moreover, we propose adaptive penalization methods to simultaneously select significant variables and estimate unknown parameters. Asymptotic properties of the proposed estimators are carefully investigated. An efficient algorithm is developed for fast implementation of the proposed methodologies. We also discuss a model selection criterion, based on an ICQ-type statistic, to select the penalty parameters. The performance of the proposed methods is illustrated via simulated and real data sets.
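The check functions referred to above are ρ_τ(u) = u(τ − 1{u < 0}); composite quantile regression averages this loss over several levels τ with a shared slope. A minimal numpy sketch evaluating that loss (the propensity-score weighting and imputation steps of the paper are omitted, and the intercepts below assume standard normal errors):

```python
import numpy as np

def check(u, tau):
    """Quantile check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def cqr_loss(beta, X, y, taus, intercepts):
    """Composite quantile regression loss: one common slope vector with
    one intercept per quantile level (propensity weighting omitted)."""
    return sum(check(y - b0 - X @ beta, t).mean()
               for t, b0 in zip(taus, intercepts))

rng = np.random.default_rng(7)
n = 500
X = rng.normal(size=(n, 1))
y = X @ np.array([1.5]) + rng.normal(size=n)

taus = [0.25, 0.5, 0.75]
b0s = [-0.6745, 0.0, 0.6745]   # N(0,1) quantiles at the three levels
loss_true = cqr_loss(np.array([1.5]), X, y, taus, b0s)   # true slope
loss_zero = cqr_loss(np.array([0.0]), X, y, taus, b0s)   # misspecified slope
```

Minimizing this composite loss over the shared slope combines information across quantile levels, which is what makes the procedure efficient even when the error variance is infinite.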


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号