Similar Documents
A total of 20 similar documents were retrieved.
1.
The varying coefficient model (VCM) is an important generalization of the linear regression model, and many existing estimation procedures for the VCM are built on the L2 loss, which is popular for its mathematical beauty but is not robust to non-normal errors and outliers. In this paper, we address both the robustness and the efficiency of estimation and variable selection for the VCM by using a convex combination of the L1 and L2 losses instead of the quadratic loss alone. Using the local linear modeling method, the asymptotic normality of the estimators is derived and a practical method is proposed for selecting the weight of the composite L1-L2 loss. The variable selection procedure is then obtained by combining local kernel smoothing with the adaptive group LASSO. With appropriate selection of the tuning parameters by the Bayesian information criterion (BIC), the theoretical properties of the new procedure, including consistency in variable selection and the oracle property in estimation, are established. The finite-sample performance of the new method is investigated through simulation studies and an analysis of body fat data. Numerical studies show that the new method performs better than, or at least as well as, the least-squares-based method in terms of both robustness and efficiency of variable selection.
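A minimal sketch of the composite loss idea from this entry, assuming the convex combination is written as omega*|r| + (1 - omega)*r^2 with a weight omega in [0, 1]; the symbol omega and the toy residuals are illustrative and do not reproduce the paper's notation or its local linear estimator:

```python
import numpy as np

def composite_loss(residuals, omega):
    """Convex combination of L1 and L2 losses, omega in [0, 1].

    omega = 0 gives ordinary least squares, omega = 1 gives least
    absolute deviations; intermediate values trade efficiency for
    robustness to heavy-tailed errors and outliers.
    """
    r = np.asarray(residuals, dtype=float)
    return omega * np.abs(r) + (1.0 - omega) * r ** 2

# Toy comparison: a single large outlier inflates the pure squared loss
# far more than the combined loss with a substantial L1 component.
r = np.array([0.1, -0.3, 0.2, 5.0])
print(composite_loss(r, omega=0.0).sum())   # pure L2
print(composite_loss(r, omega=0.5).sum())   # composite L1-L2
```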

2.
In this paper we consider minimisation of U-statistics with a weighted Lasso penalty and investigate the asymptotic properties of the resulting estimators in model selection and estimation. We prove that the use of appropriate weights in the penalty leads to a procedure that behaves like the oracle that knows the true model in advance, i.e. it is model selection consistent and estimates the nonzero parameters at the standard rate. For the unweighted Lasso penalty, we obtain necessary and sufficient conditions for model selection consistency of the estimators. The results rely strongly on the convexity of the loss function, which is the main assumption of the paper. Our theorems can be applied to the ranking problem as well as to generalised regression models. Thus, using U-statistics we can study more complex models (which better describe real problems) than the usually investigated linear or generalised linear models.

3.
The main focus of our paper is to compare the performance of different model selection criteria used for multivariate reduced rank time series. We consider one of the most commonly used reduced rank models, namely the reduced rank vector autoregression (RRVAR(p, r)) introduced by Velu et al. [Reduced rank models for multiple time series. Biometrika. 1986;73(1):105–118]. Our study includes the most popular model selection criteria, divided into two groups: simultaneous selection and two-step selection criteria. Methods from the former group select both the autoregressive order p and the rank r simultaneously, whereas two-step criteria first choose an optimal order p (using model selection criteria intended for the unrestricted VAR model) and then select an optimal rank r of the coefficient matrices (e.g. by means of sequential testing). The model selection criteria considered include well-known information criteria (such as the Akaike information criterion, the Schwarz criterion, and the Hannan–Quinn criterion) as well as widely used sequential tests (e.g. the Bartlett test) and the bootstrap method. An extensive simulation study is carried out to investigate the efficiency of all model selection criteria included in our study. The analysis takes into account 34 methods, comprising 6 simultaneous methods and 28 two-step approaches. To carefully analyse how different factors affect the performance of the model selection criteria, we consider over 150 simulation settings. In particular, we investigate the influence of the following factors: time series dimension, covariance structure, level of correlation among components, and level of noise (variance). Moreover, we analyse the prediction accuracy obtained with the RRVAR model and compare it with results for the unrestricted vector autoregression. We also present a real data application of model selection criteria for the RRVAR model using Polish macroeconomic time series data observed over the period 1997–2007.

4.
In this study, we consider the problem of selecting explanatory variables for the fixed effects in linear mixed models under covariate shift, that is, when the values of the covariates in the model used for prediction differ from those in the model for the observed data. We construct a variable selection criterion based on the conditional Akaike information introduced by Vaida & Blanchard (2005). We focus especially on covariate shift in small area estimation and demonstrate the usefulness of the proposed criterion. In addition, numerical performance is investigated through simulations, one of which is a design-based simulation using a real dataset of land prices. The Canadian Journal of Statistics 46: 316–335; 2018 © 2018 Statistical Society of Canada

5.
Copula models for multivariate lifetimes have become widely used in areas such as biomedicine, finance, and insurance. This paper fills some gaps in the existing methodology for inference on copula parameters and for model assessment. We consider procedures based on likelihood and pseudolikelihood ratio statistics and introduce semiparametric maximum likelihood estimation, leading to semiparametric versions of these statistics. For cases where standard asymptotic approximations do not hold, we propose an efficient simulation technique for obtaining p-values. We apply these methods to tests for a copula model based on embedding it in a larger copula family. It is shown that the likelihood and pseudolikelihood ratio tests are consistent even when the expanded copula model is misspecified. Power comparisons with two other tests of fit indicate that model expansion provides a convenient, powerful, and robust approach. The methods are illustrated with an application concerning the time to loss of vision in the two eyes of an individual.

6.
In this article, we consider the problem of selecting functional variables using L1 regularization in a functional linear regression model with a scalar response and functional predictors, in the presence of outliers. Since the LASSO is a special case of penalized least-squares regression with an L1 penalty, it is sensitive to heavy-tailed errors and/or outliers in the data. Recently, the least absolute deviation (LAD) and LASSO methods have been combined (the LAD-LASSO regression method) to carry out robust parameter estimation and variable selection simultaneously for the multiple linear regression model. However, variable selection of functional predictors based on the LASSO fails because multiple parameters correspond to a single functional predictor. Therefore, the group LASSO is used for selecting functional predictors, since it selects grouped variables rather than individual variables. In this study, we propose a robust functional predictor selection method, the LAD-group LASSO, for a functional linear regression model with a scalar response and functional predictors. We illustrate the performance of the LAD-group LASSO on both simulated and real data.
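A minimal sketch of the LAD-group-LASSO objective, assuming each functional predictor has already been projected onto a basis so that its coefficients occupy one block of columns of the design matrix; the simulated data, group sizes, tuning parameter, and use of cvxpy are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p = 100, 15
groups = [range(0, 5), range(5, 10), range(10, 15)]  # 3 functional predictors, 5 basis coefficients each
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[0:5] = 1.0                                  # only the first functional predictor is active
y = X @ beta_true + rng.standard_t(df=2, size=n)      # heavy-tailed errors

beta = cp.Variable(p)
lam = 2.0                                             # tuning parameter; chosen by CV/BIC in practice
lad = cp.norm(y - X @ beta, 1)                        # least absolute deviation loss
group_pen = sum(cp.norm(beta[list(g)], 2) for g in groups)
cp.Problem(cp.Minimize(lad + lam * group_pen)).solve()

print(np.round(beta.value, 2))                        # inactive blocks shrink towards zero
```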

7.
Hailin Sang. Statistics. 2015;49(1):187–208
We propose a sparse coefficient estimation and automated model selection procedure for autoregressive processes with heavy-tailed innovations, based on penalized conditional maximum likelihood. Under mild moment conditions on the innovation processes, the penalized conditional maximum likelihood estimator satisfies strong consistency, O_P(N^{-1/2}) consistency, and the oracle properties, where N is the sample size. We have freedom in the choice of penalty function, subject only to weak conditions. Two penalty functions, the least absolute shrinkage and selection operator (LASSO) and the smoothly clipped absolute deviation (SCAD), are compared. The proposed method provides distribution-based penalized inference for AR models, which is especially useful when other estimation methods fail or underperform for AR processes with heavy-tailed innovations [Feigin, Resnick. Pitfalls of fitting autoregressive models for heavy-tailed time series. Extremes. 1999;1:391–422]. A simulation study confirms our theoretical results. Finally, we apply our method to historical price data of the US Industrial Production Index for consumer goods and obtain very promising results.
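A minimal sketch of one of the two penalties just mentioned, the SCAD penalty; the piecewise form and the conventional choice a = 3.7 follow Fan and Li (2001), and the example values are illustrative only:

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty of Fan & Li (2001), evaluated elementwise.

    Behaves like the LASSO (lam * |theta|) near zero but levels off at
    (a + 1) * lam**2 / 2 for large |theta|, reducing the bias that the
    LASSO incurs on large coefficients.
    """
    t = np.abs(np.asarray(theta, dtype=float))
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    out = np.empty_like(t)
    out[small] = lam * t[small]
    out[mid] = (2 * a * lam * t[mid] - t[mid] ** 2 - lam ** 2) / (2 * (a - 1))
    out[~small & ~mid] = (a + 1) * lam ** 2 / 2
    return out

print(scad_penalty([0.0, 0.5, 2.0, 10.0], lam=1.0))
```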

8.
We consider the problem of estimating the probability P = P(X > Y) for the standard Topp–Leone distribution. After discussing the maximum likelihood and uniformly minimum variance unbiased estimation procedures for the problem on both complete and left-censored samples, we perform a Monte Carlo simulation to compare the estimators by the mean square error criterion. We also consider the interval estimation of P.
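A minimal sketch of the complete-sample case, assuming the standard Topp–Leone parameterization with CDF F(x; a) = (x(2 - x))^a on (0, 1), under which P(X > Y) reduces to a/(a + b) for independent shape parameters a and b; the plug-in maximum likelihood estimate below is illustrative and does not reproduce the paper's UMVU or interval procedures:

```python
import numpy as np

rng = np.random.default_rng(1)

def rtopp_leone(alpha, size, rng):
    """Sample the standard Topp-Leone(alpha) via the inverse CDF of
    F(x) = (x * (2 - x))**alpha on (0, 1)."""
    u = rng.uniform(size=size)
    return 1.0 - np.sqrt(1.0 - u ** (1.0 / alpha))

def mle_shape(x):
    """ML estimate of the shape: alpha_hat = -n / sum(log(x * (2 - x)))."""
    x = np.asarray(x, dtype=float)
    return -len(x) / np.sum(np.log(x * (2.0 - x)))

alpha, beta = 2.0, 1.0
x = rtopp_leone(alpha, 500, rng)         # "strength" sample
y = rtopp_leone(beta, 500, rng)          # "stress" sample

a_hat, b_hat = mle_shape(x), mle_shape(y)
p_hat = a_hat / (a_hat + b_hat)          # plug-in ML estimate of P(X > Y)
print(p_hat, alpha / (alpha + beta))     # compare with the true value 2/3
```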

9.
ABSTRACT

In this paper, we investigate the consistency of Expectation Maximization (EM) algorithm-based information criteria for model selection with missing data. The criteria correspond to a penalization of the conditional expectation of the complete-data log-likelihood given the observed data, taken with respect to the conditional density of the missing data. We present asymptotic properties related to maximum likelihood estimation in the presence of incomplete data and provide sufficient conditions for the consistency of model selection by minimizing the information criteria. Their finite-sample performance is illustrated through simulation and real data studies.

10.
The objective of this paper is to investigate through simulation the possible presence of the incidental parameters problem when performing frequentist model discrimination with stratified data. In this context, model discrimination amounts to considering a structural parameter taking values in a finite space with k points, k ≥ 2. This setting seems not to have been considered yet in the literature on the Neyman–Scott phenomenon. Here we provide Monte Carlo evidence of the severity of the incidental parameters problem in the model discrimination setting as well, and propose a remedy for a special class of models. In particular, we focus on models that are scale families in each stratum. We consider traditional model selection procedures, such as the Akaike and Takeuchi information criteria, together with the best frequentist selection procedure based on maximization of the marginal likelihood induced by the maximal invariant, or of its Laplace approximation. The results of two Monte Carlo experiments indicate that when the sample size in each stratum is fixed and the number of strata increases, correct selection probabilities for traditional model selection criteria may approach zero, unlike what happens for model discrimination based on exact or approximate marginal likelihoods. Finally, two examples with real data sets are given.

11.
In this paper, we consider shrinkage and penalty estimation procedures in the linear regression model with autoregressive errors of order p when it is conjectured that some of the regression parameters are inactive. We develop the statistical properties of the shrinkage estimation method, including asymptotic distributional biases and risks. We show that the shrinkage estimators have significantly higher relative efficiency than the classical estimator. Furthermore, we consider two penalty estimators, the least absolute shrinkage and selection operator (LASSO) and the adaptive LASSO, and numerically compare their relative performance with that of the shrinkage estimators. A Monte Carlo simulation experiment is conducted for different combinations of inactive predictors, and the performance of each estimator is evaluated in terms of the simulated mean-squared error. This study shows that the shrinkage estimators are comparable to the penalty estimators when the number of inactive predictors in the model is relatively large. The shrinkage and penalty methods are applied to a real data set to illustrate the usefulness of the procedures in practice.

12.
In the analysis of correlated ordered data, mixed-effects models are frequently used to control for subject heterogeneity. A common assumption in fitting these models is the normality of the random effects. In many cases this is unrealistic, making the estimation results unreliable. This paper considers several flexible models for the random effects and investigates their properties in model fitting. We adopt a proportional odds logistic regression model and incorporate skewed versions of the normal, Student's t, and slash distributions for the effects. Stochastic representations of these flexible distributions, based on a mixing strategy, are then proposed; this reduces the computational burden of the MCMC technique. Furthermore, this paper addresses the identifiability restrictions and suggests a procedure to handle this issue. We analyze a real data set taken from an ophthalmic clinical trial. Model selection is performed using suitable Bayesian model selection criteria.

13.
Abstract

In this article, we study variable selection and estimation for linear regression models with missing covariates. The proposed estimation method is almost as efficient as the popular least-squares-based estimation method for normal random errors, and is empirically shown to be much more efficient and robust in the presence of heavy-tailed errors or outliers in the responses and covariates. To achieve sparsity, a variable selection procedure based on SCAD is proposed to conduct estimation and variable selection simultaneously. The procedure is shown to possess the oracle property. To deal with missing covariates, we consider inverse probability weighted estimators for the linear model when the selection probability is known or unknown. It is shown that the estimator using the estimated selection probability has a smaller asymptotic variance than the one using the true selection probability, and is thus more efficient. Therefore, this important Horvitz–Thompson property is verified for the penalized rank estimator in the linear model with missing covariates. Some numerical examples are provided to demonstrate the performance of the estimators.
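A minimal sketch of the inverse probability weighting idea used here, with a covariate missing at random; for simplicity it plugs the weights into ordinary weighted least squares rather than the penalized rank estimation studied in the article, and the data-generating and missingness models are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
z = rng.normal(size=n)                        # always-observed covariate
x = 0.5 * z + rng.normal(size=n)              # covariate subject to missingness
y = 1.0 + 2.0 * x - 1.0 * z + rng.normal(size=n)

# Missingness of x depends only on the always-observed (y, z): missing at random.
pi_true = 1.0 / (1.0 + np.exp(-(0.5 + 0.3 * z - 0.2 * y)))
observed = rng.uniform(size=n) < pi_true

# In practice pi would be estimated (e.g. by a logistic regression of the
# missingness indicator on z and y); the true probabilities are used here
# only to keep the sketch self-contained.
w = 1.0 / pi_true[observed]                   # IPW weights for the complete cases

X = np.column_stack([np.ones(n), x, z])
sw = np.sqrt(w)
beta_ipw, *_ = np.linalg.lstsq(X[observed] * sw[:, None], y[observed] * sw, rcond=None)
print(np.round(beta_ipw, 2))                  # close to (1, 2, -1)
```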

14.
Model-based clustering typically involves the development of a family of mixture models and the fitting of these models to data. The best member of the family is then chosen using some criterion, and the associated parameter estimates lead to predicted group memberships, or clusterings. This paper describes the extension of the mixtures of multivariate t-factor analyzers model to include constraints on the degrees of freedom, the factor loadings, and the error variance matrices. The result is a family of six mixture models, including parsimonious models. Parameter estimates for this family of models are derived using an alternating expectation-conditional maximization algorithm, and convergence is determined based on Aitken's acceleration. Model selection is carried out using the Bayesian information criterion (BIC) and the integrated completed likelihood (ICL). This novel family of mixture models is then applied to simulated and real data, where clustering performance meets or exceeds that of established model-based clustering methods. The simulation studies include a comparison of the BIC and the ICL as model selection techniques for this novel family of models. Application to simulated data with larger dimensionality is also explored.
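A minimal sketch of the two selection criteria just mentioned, stated in the "larger is better" convention; the ICL is shown in the common approximation of BIC minus twice the estimated classification entropy, and all inputs below are illustrative:

```python
import numpy as np

def bic(loglik, n_params, n_obs):
    """BIC in the 'larger is better' convention: 2*loglik - k*log(n)."""
    return 2.0 * loglik - n_params * np.log(n_obs)

def icl(loglik, n_params, n_obs, z_hat):
    """ICL approximated as BIC minus twice the classification entropy,
    where z_hat is the (n_obs x G) matrix of posterior group probabilities."""
    z = np.clip(z_hat, 1e-12, 1.0)
    entropy = -np.sum(z * np.log(z))
    return bic(loglik, n_params, n_obs) - 2.0 * entropy

# Toy use: a model with confident classifications is penalized less by the ICL.
z_confident = np.array([[0.99, 0.01], [0.02, 0.98], [0.97, 0.03]])
z_uncertain = np.array([[0.60, 0.40], [0.50, 0.50], [0.55, 0.45]])
print(icl(-120.0, 10, 3, z_confident), icl(-120.0, 10, 3, z_uncertain))
```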

15.
In this article, we consider order estimation for autoregressive models with incomplete data using expectation–maximization (EM) algorithm-based information criteria. The criteria take the form of a penalization of the conditional expectation of the log-likelihood. The evaluation of the penalization term generally involves numerical differentiation and matrix inversion. We introduce a simplification of the penalization term for autoregressive model selection and propose a penalty factor based on a resampling procedure in the criterion formula. The simulation results show the improvements yielded by the proposed method when compared with the classical information criteria for model selection with incomplete data.

16.
ABSTRACT

In this paper, we propose modified spline estimators for nonparametric regression models with right-censored data, especially when the censored response observations are converted to synthetic data. Efficient implementation of these estimators depends on the set of knot points and an appropriate smoothing parameter. We use three algorithms, the default selection method (DSM), the myopic algorithm (MA), and the full search algorithm (FSA), to select the optimal set of knots in a penalized spline method; the smoothing parameter itself is chosen by different criteria, including the improved version of the Akaike information criterion (AICc), generalized cross-validation (GCV), restricted maximum likelihood (REML), and the Bayesian information criterion (BIC). We also consider the smoothing spline (SS), which uses all the data points as knots. The main goal of this study is to compare the performance of the algorithm and criterion combinations for the suggested penalized spline fits under censored data. A Monte Carlo simulation study is performed and a real data example is presented to illustrate the ideas in the paper. The results confirm that the FSA slightly outperforms the other methods, especially for high censoring levels.
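A minimal sketch of one of the smoothing-parameter criteria mentioned above, GCV, for a penalized spline with a truncated-linear basis and equally spaced knots; the basis, knot placement, search grid, and simulated (uncensored) data are illustrative simplifications, not the estimators proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = np.sort(rng.uniform(size=n))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

knots = np.linspace(0.05, 0.95, 20)
# Truncated-linear penalized spline basis: [1, x, (x - kappa_j)_+].
B = np.column_stack([np.ones(n), x] + [np.maximum(x - k, 0.0) for k in knots])
D = np.diag([0.0, 0.0] + [1.0] * len(knots))   # penalize only the knot coefficients

def gcv(lam):
    """GCV(lam) = n * RSS / (n - tr(S_lam))**2 for the ridge-type spline smoother."""
    A = np.linalg.solve(B.T @ B + lam * D, B.T)
    S = B @ A                                   # smoother ("hat") matrix
    rss = np.sum((y - S @ y) ** 2)
    return n * rss / (n - np.trace(S)) ** 2

grid = 10.0 ** np.arange(-6, 3, 0.5)            # candidate smoothing parameters
best = min(grid, key=gcv)
print(best, gcv(best))
```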

17.
A novel family of mixture models is introduced based on modified t-factor analyzers. Modified factor analyzers were recently introduced within the Gaussian context and our work presents a more flexible and robust alternative. We introduce a family of mixtures of modified t-factor analyzers that uses this generalized version of the factor analysis covariance structure. We apply this family within three paradigms: model-based clustering; model-based classification; and model-based discriminant analysis. In addition, we apply the recently published Gaussian analogue to this family under the model-based classification and discriminant analysis paradigms for the first time. Parameter estimation is carried out within the alternating expectation-conditional maximization framework and the Bayesian information criterion is used for model selection. Two real data sets are used to compare our approach to other popular model-based approaches; in these comparisons, the chosen mixtures of modified t-factor analyzers model performs favourably. We conclude with a summary and suggestions for future work.

18.
ABSTRACT

Inflated data are prevalent in many situations, and a variety of inflated models and extensions have been derived to fit data with excessive counts of particular responses. The family of information criteria (IC) has been used to compare the fit of models for selection purposes. Yet despite their common use in statistical applications, relatively few studies have evaluated the performance of IC for inflated models. In this study, we examined the performance of IC for dual-inflated data. The new zero- and K-inflated Poisson (ZKIP) regression model and conventional inflated models, including Poisson regression and zero-inflated Poisson (ZIP) regression, were fitted to dual-inflated data and the performance of the IC was compared. The effects of sample size and the proportion of inflated observations on selection performance were also examined. The results suggest that the Bayesian information criterion (BIC) and consistent Akaike information criterion (CAIC) are more accurate than the Akaike information criterion (AIC) in terms of model selection when the true model is simple (i.e. Poisson regression (POI)). For more complex models, such as ZIP and ZKIP, the AIC was consistently better than the BIC and CAIC, although it did not reach high levels of accuracy when the sample size and the proportion of zero observations were small. The AIC tended to over-fit the data for the POI, whereas the BIC and CAIC tended to under-parameterize the data for ZIP and ZKIP. Therefore, it is desirable to study other model selection criteria for dual-inflated data with small sample sizes.
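A minimal sketch of the three criteria compared in this study, in the "smaller is better" convention; the log-likelihood values, parameter counts, and sample size below are illustrative, standing in for whichever candidate model (POI, ZIP, or ZKIP) is being scored:

```python
import numpy as np

def information_criteria(loglik, k, n):
    """AIC, BIC and CAIC in the 'smaller is better' convention."""
    return {
        "AIC": -2.0 * loglik + 2.0 * k,
        "BIC": -2.0 * loglik + k * np.log(n),
        "CAIC": -2.0 * loglik + k * (np.log(n) + 1.0),
    }

# Toy comparison: the more complex model fits slightly better, so the AIC
# prefers it, while the stronger penalties of BIC/CAIC favour the simpler one.
print(information_criteria(loglik=-512.3, k=3, n=400))   # e.g. Poisson regression
print(information_criteria(loglik=-508.9, k=6, n=400))   # e.g. ZKIP regression
```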

19.
Focusing on model selection problems in the family of Poisson mixture models (including the Poisson mixture regression model with random effects and the zero-inflated Poisson regression model with random effects), the current paper derives two conditional Akaike information criteria. The criteria are unbiased estimators of the conditional Akaike information based on the conditional log-likelihood and on the joint log-likelihood, respectively. The derivation is free from specific parametric assumptions about the conditional mean of the true data-generating model and applies to different types of estimation methods. Additionally, the derivation does not rely on asymptotic arguments. Simulations show that the proposed criteria have promising estimation accuracy. In addition, it is found that the criterion based on the conditional log-likelihood demonstrates good model selection performance under different scenarios. Two sets of real data are used to illustrate the proposed method.

20.
Predictive criteria, including the adjusted squared multiple correlation coefficient, the adjusted concordance correlation coefficient, and the predictive error sum of squares, are available for model selection in the linear mixed model. These criteria all involve some sort of comparison of observed values and predicted values, adjusted for the complexity of the model. The predicted values can be conditional on the random effects or marginal, i.e., based on averages over the random effects. These criteria have not been investigated for model selection success.

We used simulations to investigate selection success rates for several versions of these predictive criteria as well as several versions of Akaike's information criterion and the Bayesian information criterion, and the pseudo F-test. The simulations involved the simple scenario of selection of a fixed parameter when the covariance structure is known.

Several variance–covariance structures were used. For compound symmetry structures, higher success rates for the predictive criteria were obtained when marginal rather than conditional predicted values were used. Information criteria had higher success rates when a certain term (normally left out in SAS MIXED computations) was included in the criteria. Various penalty functions were used in the information criteria, but these had little effect on success rates. The pseudo F-test performed as expected. For the autoregressive with random effects structure, the results were the same except that success rates were higher for the conditional version of the predictive error sum of squares.

Characteristics of the data, such as the covariance structure, parameter values, and sample size, greatly impacted the performance of the various model selection criteria. No one criterion was consistently better than the others.
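A minimal sketch of one of the predictive criteria described in this entry, the concordance correlation coefficient, computed for marginal versus conditional mixed-model predictions; the toy random-intercept data use the true random effects in place of predicted ones, and the complexity adjustment used in the paper's adjusted version is omitted:

```python
import numpy as np

def concordance_cc(y, y_pred):
    """Lin's concordance correlation coefficient between observed and
    predicted values: 2*cov / (var_y + var_pred + (mean_y - mean_pred)**2)."""
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    cov = np.mean((y - y.mean()) * (y_pred - y_pred.mean()))
    return 2.0 * cov / (y.var() + y_pred.var() + (y.mean() - y_pred.mean()) ** 2)

# Toy random-intercept data: conditional predictions (fixed effects plus the
# subject effects) track the data more closely than marginal predictions.
rng = np.random.default_rng(4)
subjects = np.repeat(np.arange(20), 5)
b = rng.normal(scale=1.0, size=20)[subjects]          # subject random intercepts
x = rng.normal(size=subjects.size)
y = 1.0 + 0.5 * x + b + rng.normal(scale=0.5, size=subjects.size)

y_marginal = 1.0 + 0.5 * x                            # averages over the random effects
y_conditional = y_marginal + b                        # conditional on the (here, true) random effects
print(concordance_cc(y, y_marginal), concordance_cc(y, y_conditional))
```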
