Similar Articles
20 similar articles found (search time: 125 ms)
1.
A regression model with skew-normal errors provides a useful extension of ordinary normal regression models when the data set under consideration involves asymmetric outcomes. Variable selection is an important issue in all regression analyses, and in this paper we investigate simultaneous variable selection in joint location and scale models of the skew-normal distribution. We propose a unified penalized likelihood method that simultaneously selects significant variables and estimates parameters in both the location and scale models. With an appropriate choice of the tuning parameters, we establish the consistency and the oracle property of the regularized estimators. Simulation studies and a real example illustrate the proposed methodology.
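A minimal, hypothetical sketch of the penalized-likelihood machinery this abstract describes: an L1-penalized normal location model solved by proximal gradient (ISTA), not the authors' skew-normal joint location-scale procedure. The soft-threshold operator is what produces exact zeros, i.e. variable selection.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 penalty: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_penalized_ls(X, y, lam, n_iter=500):
    """ISTA for the penalized objective 0.5/n * ||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    step = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()  # 1 / Lipschitz constant
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - step * grad, step * lam)
    return b

rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 5))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ beta_true + rng.standard_normal(n)
b = lasso_penalized_ls(X, y, lam=0.3)
# Large coefficients survive (with some shrinkage); null ones are set exactly to zero.
```

The paper's method applies the same idea jointly to location and scale regressions under a skew-normal likelihood; this sketch only shows the penalization mechanism.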

2.
We consider exact and approximate Bayesian computation in the presence of latent variables or missing data. Specifically, we explore the application of a posterior predictive distribution formula derived in Sweeting and Kharroubi (2003), which is a particular form of Laplace approximation, both as an importance function and as a proposal distribution. We show that this formula provides a stable importance function for use within poor man's data augmentation schemes and that it can also be used as a proposal distribution within a Metropolis-Hastings algorithm for models that are not analytically tractable. We illustrate both uses in the case of a censored regression model and a normal hierarchical model, with both normal and Student-t distributed random effects. Although the predictive distribution formula is motivated by regular asymptotic theory, it is not necessary that the likelihood have a closed form or possess a local maximum.
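A toy sketch of using an analytic approximation as an importance function, in the spirit of this abstract (the actual Sweeting-Kharroubi formula is not reproduced here). The target is a posterior known only up to a constant; a heavier-tailed Student-t centred at an assumed mode stands in for the approximate predictive distribution.

```python
import numpy as np
from scipy import stats

# Toy "posterior" known only up to a normalising constant: a N(1, 1) shape.
def log_target(theta):
    return -0.5 * (theta - 1.0) ** 2

# Heavier-tailed Student-t proposal centred at the (assumed known) mode,
# standing in for a Laplace-type approximate predictive distribution.
proposal = stats.t(df=4, loc=1.0, scale=1.0)

rng = np.random.default_rng(42)
theta = proposal.rvs(size=50_000, random_state=rng)
log_w = log_target(theta) - proposal.logpdf(theta)
w = np.exp(log_w - log_w.max())
w /= w.sum()                        # self-normalised importance weights
post_mean = float(np.sum(w * theta))
ess = float(1.0 / np.sum(w**2))     # effective sample size: a stability diagnostic
```

A stable importance function, as the abstract requires, is exactly one whose effective sample size stays close to the raw sample size.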

3.
Extending previous work on hedge fund return predictability, this paper introduces the idea of modelling the conditional distribution of hedge fund returns using Student's t full-factor multivariate GARCH models. This class of models takes into account the stylized facts of hedge fund return series, that is, heteroskedasticity, fat tails and deviations from normality. For the proposed class of multivariate predictive regression models, we derive analytic expressions for the score and the Hessian matrix, which can be used within classical and Bayesian inferential procedures to estimate the model parameters, as well as to compare different predictive regression models. We propose a Bayesian approach to model comparison which provides posterior probabilities for various predictive models that can be used for model averaging. Our empirical application indicates that accounting for fat tails and time-varying covariances/correlations provides a more appropriate modelling approach of the underlying dynamics of financial series and improves our ability to predict hedge fund returns.

4.
Abstract

Variable selection in finite mixture of regression (FMR) models is frequently used in statistical modeling. The majority of applications of variable selection in FMR models assume a normal distribution for the regression error, which is unsuitable for data containing groups of observations with heavy tails and outliers. In this paper, we introduce a robust variable selection procedure for FMR models using the t distribution. With appropriate selection of the tuning parameters, the consistency and the oracle property of the regularized estimators are established. To estimate the parameters of the model, we develop an EM algorithm for numerical computation and a method for selecting tuning parameters adaptively. The parameter estimation performance of the proposed model is evaluated through simulation studies, and its application is illustrated by analyzing a real data set.
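A simplified, hypothetical sketch of the EM machinery behind FMR models: a two-component mixture of linear regressions with normal (rather than t) errors, fitted by alternating responsibilities (E-step) and weighted least squares (M-step). The data-generating parameters below are invented for illustration.

```python
import numpy as np

# Simulate a two-component mixture of regressions, well separated in intercept.
rng = np.random.default_rng(1)
n = 400
x = rng.uniform(-1.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
z = rng.random(n) < 0.5
y = np.where(z, 2.0, -2.0) + x + 0.3 * rng.standard_normal(n)

# Deterministic initialisation of responsibilities: median split on y.
r = np.empty((2, n))
r[0] = (y > np.median(y)).astype(float)
r[1] = 1.0 - r[0]

beta = np.zeros((2, 2))
sigma = np.ones(2)
for _ in range(100):
    # M-step: weighted least squares and variance update for each component.
    for k in range(2):
        W = r[k]
        XtW = X.T * W
        beta[k] = np.linalg.solve(XtW @ X, XtW @ y)
        sigma[k] = np.sqrt((W * (y - X @ beta[k]) ** 2).sum() / W.sum())
    pi = r.mean(axis=1)
    # E-step: posterior membership probabilities under normal densities.
    resid = y[None, :] - beta @ X.T
    logd = (np.log(pi)[:, None] - np.log(sigma)[:, None]
            - 0.5 * (resid / sigma[:, None]) ** 2)
    logd -= logd.max(axis=0, keepdims=True)
    r = np.exp(logd)
    r /= r.sum(axis=0, keepdims=True)
```

The paper replaces the normal densities with t densities (adding a degrees-of-freedom update) and penalizes the M-step to select variables; neither refinement is shown here.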

5.
This paper considers a hierarchical Bayesian analysis of regression models using a class of Gaussian scale mixtures. This class provides a robust alternative to the common use of the Gaussian distribution as a prior distribution, in particular for estimating the regression function subject to uncertainty about the constraint. For this purpose, we use a family of rectangular screened multivariate scale mixtures of Gaussian distributions as a prior for the regression function, which is flexible enough to reflect the degrees of uncertainty about the functional constraint. Specifically, we propose a hierarchical Bayesian regression model for the constrained regression function with uncertainty on the basis of three stages of a prior hierarchy with Gaussian scale mixtures, referred to as the hierarchical screened scale mixture of Gaussian regression model (HSMGRM). We describe distributional properties of HSMGRM and an efficient Markov chain Monte Carlo algorithm for posterior inference, and apply the proposed model to real applications with constrained regression models subject to uncertainty.

6.
Copulas are powerful explanatory tools for studying dependence patterns in multivariate data. While the primary use of copula models is in multivariate dependence modelling, they also offer predictive value for regression analysis. This article investigates the utility of copula models for model-based predictions from two angles. We assess whether, where, and by how much various copula models differ in their predictions of a conditional mean and conditional quantiles. From a model selection perspective, we then evaluate the predictive discrepancy between copula models using in-sample and out-of-sample predictions, both in bivariate and higher-dimensional settings. Our findings suggest that some copula models are more difficult to distinguish in terms of their overall predictive power than others, and depending on the quantity of interest, the differences in predictions can be detected only in some targeted regions. The situations where copula-based regression approaches would be advantageous over traditional ones are discussed using simulated and real data. The Canadian Journal of Statistics 47: 8-26; 2019 © 2018 Statistical Society of Canada
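As a concrete instance of copula-based conditional quantiles, here is the closed-form conditional quantile for the Gaussian copula (a standard result, not this article's specific models): if (U, V) have a Gaussian copula with correlation rho, the tau-th quantile of V given U = u on the uniform scale is Phi(rho * Phi^{-1}(u) + sqrt(1 - rho^2) * Phi^{-1}(tau)).

```python
from scipy.stats import norm

def gaussian_copula_cond_quantile(u, tau, rho):
    """tau-th conditional quantile of V given U = u under a Gaussian copula
    with correlation rho, returned on the uniform (copula) scale."""
    z = rho * norm.ppf(u) + (1.0 - rho**2) ** 0.5 * norm.ppf(tau)
    return norm.cdf(z)
```

Composing this with the marginal quantile function of the response gives the response-scale conditional quantile; at rho = 0 the covariate is uninformative and the function returns tau itself.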

7.
Combining information from multiple samples is often needed in biomedical and economic studies, but differences between these samples must be appropriately taken into account in the analysis of the combined data. We study estimation for moment restriction models with data combined from two samples under an ignorability-type assumption, while allowing for different marginal distributions of variables common to both samples. Suppose that an outcome regression (OR) model and a propensity score (PS) model are specified. By leveraging semiparametric efficiency theory, we derive an augmented inverse probability weighted (AIPW) estimator that is locally efficient and doubly robust with respect to these models. Furthermore, we develop calibrated regression and likelihood estimators that are not only locally efficient and doubly robust but also intrinsically efficient, achieving smaller variances than the AIPW estimator when the PS model is correctly specified but the OR model may be misspecified. As an important application, we study the two-sample instrumental variable problem and derive the corresponding estimators while allowing for incompatible distributions of variables common to the two samples. Finally, we provide a simulation study and an econometric application on public housing projects to demonstrate the superior performance of our improved estimators. The Canadian Journal of Statistics 48: 259-284; 2020 © 2019 Statistical Society of Canada
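A minimal numerical illustration of the double robustness the abstract invokes, in the simplest missing-data setting rather than the paper's two-sample setup: with the true propensity score but a deliberately wrong outcome model, the AIPW estimator of E[Y] remains consistent while the complete-case mean is biased. All data-generating choices below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
x = rng.uniform(0.0, 1.0, n)
y = 2.0 * x + 0.5 * rng.standard_normal(n)     # true E[Y] = 1.0
e = 0.3 + 0.4 * x                              # true propensity of observing Y
robs = (rng.random(n) < e).astype(float)

# Deliberately WRONG outcome-regression prediction, but TRUE propensity score:
m_wrong = np.full(n, 5.0)
y_filled = np.where(robs == 1.0, y, 0.0)       # unobserved Y gets weight 0 anyway
aipw = np.mean(m_wrong + robs * (y_filled - m_wrong) / e)

# Complete-case mean, biased because observation probability depends on x.
cc = y[robs == 1.0].mean()
```

Swapping which nuisance model is correct gives the mirror-image protection; the paper's calibrated estimators additionally shrink the variance when the PS model is right.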

8.
Quantile regression (QR) is a natural alternative for depicting the impact of covariates on the conditional distribution of an outcome variable, rather than only on its mean. In this paper, we investigate Bayesian regularized QR for linear models with autoregressive errors. LASSO-type penalized priors are imposed on the regression coefficients and the autoregressive parameters of the model. A Gibbs sampler is employed to draw from the full posterior distributions of the unknown parameters. Finally, the proposed procedures are illustrated by simulation studies and applied to a real data analysis of electricity consumption.
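A small sketch of the loss that underlies QR (not the paper's Gibbs sampler): the check (pinball) loss, whose negative exponential gives the asymmetric-Laplace likelihood used in Bayesian QR. Minimizing it at tau = 0.5 recovers the median, which an outlier cannot drag around.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def check_loss(u, tau):
    """Check (pinball) loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])   # an outlier-contaminated sample
res = minimize_scalar(lambda b: check_loss(y - b, 0.5).sum(),
                      bounds=(0.0, 200.0), method="bounded")
# The tau = 0.5 minimiser is the sample median (3.0), unmoved by the outlier.
```

Other values of tau trace out the rest of the conditional distribution, which is the "more complete picture" QR offers over mean regression.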

9.
Sinh-normal/independent distributions are a class of symmetric heavy-tailed distributions that include the sinh-normal distribution as a special case, which has been used extensively in Birnbaum-Saunders regression models. Here, we explore the use of Markov chain Monte Carlo methods to develop a Bayesian analysis of nonlinear regression models in which sinh-normal/independent distributions are assumed for the random error term, providing a robust alternative to the sinh-normal nonlinear regression model. Bayesian mechanisms for parameter estimation, residual analysis and influence diagnostics are then developed, which extend the results of Farias and Lemonte [Bayesian inference for the Birnbaum-Saunders nonlinear regression model, Stat. Methods Appl. 20 (2011), pp. 423-438], who used sinh-normal/independent distributions with known scale parameter. Some special cases, based on the sinh-Student-t (sinh-St), sinh-slash (sinh-SL) and sinh-contaminated normal (sinh-CN) distributions, are discussed in detail. Two real datasets are finally analyzed to illustrate the developed procedures.
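For concreteness, the base case of this family can be written down directly. Under the usual Rieck-Nedelman parametrization (assumed here, not stated in the abstract), Y ~ SN(alpha, mu, sigma) means (2/alpha) * sinh((Y - mu)/sigma) is standard normal, and a change of variables gives the density below.

```python
import numpy as np

def sinh_normal_pdf(y, alpha, mu, sigma):
    """Density of the sinh-normal distribution SN(alpha, mu, sigma):
    if Y ~ SN, then (2/alpha) * sinh((Y - mu)/sigma) is standard normal."""
    u = (np.asarray(y) - mu) / sigma
    xi = (2.0 / alpha) * np.sinh(u)
    jac = (2.0 / (alpha * sigma)) * np.cosh(u)   # |d xi / d y|
    return np.exp(-0.5 * xi**2) / np.sqrt(2.0 * np.pi) * jac

# Sanity checks: the density is symmetric about mu and has unit total mass
# (rectangle rule on a wide grid; the tails decay extremely fast).
grid = np.linspace(-10.0, 10.0, 200_001)
dx = grid[1] - grid[0]
mass = float(sinh_normal_pdf(grid, 1.5, 0.0, 1.0).sum() * dx)
```

The sinh-normal/independent extensions replace the standard normal kernel by a scale mixture (Student-t, slash, contaminated normal), which is what buys the robustness discussed in the abstract.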

10.
Although the t-type estimator is a kind of M-estimator with scale optimization, it has some advantages over the M-estimator. In this article, we first propose a t-type joint generalized linear model as a robust extension to the classical joint generalized linear models for modeling data containing extreme or outlying observations. Next, we develop a t-type pseudo-likelihood (TPL) approach, which can be viewed as a robust version to the existing pseudo-likelihood (PL) approach. To determine which variables significantly affect the variance of the response variable, we then propose a unified penalized maximum TPL method to simultaneously select significant variables for the mean and dispersion models in t-type joint generalized linear models. Thus, the proposed variable selection method can simultaneously perform parameter estimation and variable selection in the mean and dispersion models. With appropriate selection of the tuning parameters, we establish the consistency and the oracle property of the regularized estimators. Simulation studies are conducted to illustrate the proposed methods.

11.
A method of regularized discriminant analysis for discrete data, denoted DRDA, is proposed. This method is related to the regularized discriminant analysis conceived by Friedman (1989) in a Gaussian framework for continuous data. Here, we are concerned with discrete data and consider the classification problem using the multinomial distribution. DRDA was conceived for the small-sample, high-dimensional setting and occupies an intermediate position between multinomial discrimination, the first-order independence model and kernel discrimination. DRDA is characterized by two parameters, the values of which are calculated by minimizing a cross-validated, sample-based estimate of future misclassification risk. The first is a complexity parameter, which forms the class-conditional probabilities as a convex combination of those derived from the full multinomial model and the first-order independence model. The second is a smoothing parameter associated with the discrete kernel of Aitchison and Aitken (1976). The optimal complexity parameter is calculated first; then, holding this parameter fixed, the optimal smoothing parameter is determined. A modified approach, in which the smoothing parameter is chosen first, is also discussed. The efficiency of the method is compared with that of other classical methods through applications to data.
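The complexity-parameter step can be sketched directly: cell probabilities as a convex combination of the full multinomial estimate and the first-order independence (product-of-marginals) estimate. This is a hypothetical, single-class simplification; the real DRDA applies it per class and adds the Aitchison-Aitken kernel smoother.

```python
from collections import Counter
from itertools import product

def class_conditional_probs(data, alpha):
    """Cell probabilities as (1 - alpha) * full-multinomial + alpha * independence,
    where alpha in [0, 1] is the complexity parameter."""
    n = len(data)
    d = len(data[0])
    full = Counter(map(tuple, data))                       # observed cell counts
    marg = [Counter(row[j] for row in data) for j in range(d)]  # marginal counts
    probs = {}
    for cell in product(*[sorted(m) for m in marg]):       # all observed-level cells
        p_full = full[cell] / n
        p_ind = 1.0
        for j, v in enumerate(cell):
            p_ind *= marg[j][v] / n
        probs[cell] = (1.0 - alpha) * p_full + alpha * p_ind
    return probs
```

At alpha = 0 this is the (high-variance) saturated multinomial; at alpha = 1 it is the (high-bias) independence model; cross-validation picks the point in between.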

12.
Children exposed to mixtures of endocrine disrupting compounds such as phthalates are at high risk of experiencing significant friction in their growth and sexual maturation. This article is primarily motivated by a study that aims to assess the toxicants-modified effects of risk factors related to the hazards of early or delayed onset of puberty among children living in Mexico City. To address the hypothesis of potential nonlinear modification of covariate effects, we propose a new Cox regression model with multiple functional covariate-environment interactions, which allows covariate effects to be altered nonlinearly by mixtures of exposed toxicants. This new class of models is rather flexible and includes many existing semiparametric Cox models as special cases. To achieve efficient estimation, we develop the global partial likelihood method of inference, in which we establish key large-sample results, including estimation consistency, asymptotic normality, semiparametric efficiency and the generalized likelihood ratio test for both parameters and nonparametric functions. The proposed methodology is examined via simulation studies and applied to the analysis of the motivating data, where maternal exposures to phthalates during the third trimester of pregnancy are found to be important risk modifiers for the age of attaining the first stage of puberty. The Canadian Journal of Statistics 47: 204-221; 2019 © 2019 Statistical Society of Canada

13.
We consider estimating the mode of a response given an error-prone covariate. It is shown that ignoring measurement error typically leads to inconsistent inference for the conditional mode of the response given the true covariate, as well as misleading inference for the regression coefficients in the conditional mode model. To account for measurement error, we first employ the Monte Carlo corrected score method (Novick & Stefanski, 2002) to obtain an unbiased score function, based on which the regression coefficients can be estimated consistently. To relax the normality assumption on the measurement error that this method requires, we propose a second method in which deconvoluting kernels are used to construct an objective function that is maximized to obtain consistent estimators of the regression coefficients. Besides a rigorous investigation of the asymptotic properties of the new estimators, we study their finite-sample performance via extensive simulation experiments, and find that the proposed methods substantially outperform a naive inference method that ignores measurement error. The Canadian Journal of Statistics 47: 262-280; 2019 © 2019 Statistical Society of Canada

14.
ABSTRACT

In this paper, we develop an efficient wavelet-based regularized linear quantile regression framework for coefficient estimation, where the responses are scalars and the predictors include both scalars and functions. The framework consists of two important parts: wavelet transformation and regularized linear quantile regression. The wavelet transform approximates functional data by representing it with finitely many wavelet coefficients, effectively capturing its local features. Quantile regression is robust to response outliers and heavy-tailed errors and, compared with other methods, provides a more complete picture of how responses change conditional on covariates. Meanwhile, regularization removes small wavelet coefficients to achieve sparsity and efficiency. An algorithm based on the Alternating Direction Method of Multipliers (ADMM) is derived to solve the optimization problems. We conduct numerical studies to investigate the finite-sample performance of our method and apply it to real data from ADHD studies.

15.
S. Huet, Statistics, 2015, 49(2): 239-266
We propose a procedure to test that the expectation of a Gaussian vector is linear, against a nonparametric alternative. We consider the case where the covariance matrix of the observations has a block diagonal structure. This framework encompasses regression models with autocorrelated errors, heteroscedastic regression models, mixed-effects models and growth curves. Our procedure does not depend on any prior information about the alternative. We prove that the test is asymptotically of the nominal level and consistent. We characterize the set of vectors on which the test is powerful and prove the classical √(log log n / n) convergence rate over directional alternatives. We propose a bootstrap version of the test as an alternative to the initial one and provide a simulation study in order to evaluate both procedures for small sample sizes, when the purpose is to test goodness of fit in a Gaussian mixed-effects model. Finally, we illustrate the procedures using a real data set.

16.
In this paper, we investigate robust parameter estimation and variable selection for binary regression models with grouped data, using estimation procedures based on the minimum-distance approach. In particular, we employ minimum Hellinger and minimum symmetric chi-squared distance criteria and propose regularized minimum-distance estimators. These estimators appear to possess a certain degree of automatic robustness against model misspecification and/or potential outliers. We show that the proposed non-penalized and penalized minimum-distance estimators are efficient under the model and simultaneously have excellent robustness properties. We study their asymptotic properties, such as consistency, asymptotic normality and oracle properties. Using Monte Carlo studies, we examine the small-sample and robustness properties of the proposed estimators and compare them with traditional likelihood estimators. We also present two real-data applications to illustrate our methods. The numerical studies indicate the satisfactory finite-sample performance of our procedures.
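A toy sketch of the minimum Hellinger distance idea for grouped binary data (not the paper's regression setting): estimate the success probability of a Binomial(m, p) model by matching square roots of cell frequencies, via a simple grid search. The simulation settings are invented for illustration.

```python
import numpy as np
from collections import Counter
from math import comb

def min_hellinger_binomial(counts, m, grid=None):
    """Minimum Hellinger distance estimate of p for Binomial(m, p) data:
    minimise sum_k (sqrt(f_hat_k) - sqrt(f_p(k)))^2 over a grid of p values."""
    if grid is None:
        grid = np.linspace(0.01, 0.99, 981)
    n = len(counts)
    freq = Counter(int(c) for c in counts)
    f_hat = np.array([freq[k] / n for k in range(m + 1)])
    best_p, best_d = None, np.inf
    for p in grid:
        f_mod = np.array([comb(m, k) * p**k * (1 - p) ** (m - k) for k in range(m + 1)])
        d = np.sum((np.sqrt(f_hat) - np.sqrt(f_mod)) ** 2)
        if d < best_d:
            best_p, best_d = p, d
    return best_p

rng = np.random.default_rng(3)
counts = rng.binomial(10, 0.3, size=500)
counts[:50] = 10                      # 10% gross outliers at the extreme cell
p_mhd = min_hellinger_binomial(counts, 10)
p_mle = counts.mean() / 10            # moment/ML-type estimate, dragged by outliers
```

The square-root scale automatically downweights cells with spurious mass, which is the "automatic robustness" the abstract refers to.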

17.
We derive an identity for nonparametric maximum likelihood estimators (NPMLE) and regularized MLEs in censored data models which expresses the standardized maximum likelihood estimator in terms of the standardized empirical process. This identity provides an effective starting point in proving both consistency and efficiency of NPMLE and regularized MLE. The identity and corresponding method for proving efficiency is illustrated for the NPMLE in the univariate right-censored data model, the regularized MLE in the current status data model and for an implicit NPMLE based on a mixture of right-censored and current status data. Furthermore, a general algorithm for estimation of the limiting variance of the NPMLE is provided. This revised version was published online in July 2006 with corrections to the Cover Date.

18.
A number of nonstationary models have been developed to estimate extreme events as functions of covariates. A quantile regression (QR) model is a statistical approach intended to estimate, and conduct inference about, conditional quantile functions. In this article, we focus on simultaneous variable selection and parameter estimation through penalized quantile regression, and we compare regularized quantile regression models with B-splines in a Bayesian framework. Regularization is based on a penalty and aims to favor parsimonious models, especially in large-dimensional spaces. The prior distributions related to the penalties are detailed. Five penalties (Lasso, Ridge, SCAD0, SCAD1 and SCAD2) are considered, with their equivalent expressions in the Bayesian framework. The regularized quantile estimates are then compared to the maximum likelihood estimates with respect to the sample size. Markov chain Monte Carlo (MCMC) algorithms are developed for each hierarchical model to simulate the conditional posterior distribution of the quantiles. Results indicate that SCAD0 and Lasso have the best performance for quantile estimation according to the Relative Mean Bias (RMB) and Relative Mean Error (RME) criteria, especially in the case of heavy-tailed errors. A case study of the annual maximum precipitation at Charlo, Eastern Canada, with the Pacific North Atlantic climate index as covariate is presented.
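Of the penalties listed, SCAD is the least standard, so its closed form is worth writing out (the classical Fan-Li SCAD; the abstract's SCAD0/1/2 variants are not specified here): linear near zero, quadratic blending in the middle, and constant beyond a*lambda, so large coefficients are not shrunk.

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty of Fan & Li (2001), evaluated elementwise:
    lam*|t|                              for |t| <= lam
    -(t^2 - 2*a*lam*|t| + lam^2)/(2(a-1)) for lam < |t| <= a*lam
    (a+1)*lam^2 / 2                      for |t| > a*lam
    """
    t = np.abs(theta)
    linear = lam * t
    quad = -(t**2 - 2.0 * a * lam * t + lam**2) / (2.0 * (a - 1.0))
    const = (a + 1.0) * lam**2 / 2.0
    return np.where(t <= lam, linear, np.where(t <= a * lam, quad, const))
```

The three pieces meet continuously at |t| = lam and |t| = a*lam, and the flat tail is what gives SCAD its (near-)unbiasedness for large effects, in contrast to the Lasso's constant shrinkage.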

19.
This paper addresses the problem of simultaneous variable selection and estimation in the random-intercepts model with a first-order lagged response. This type of model is commonly used for analyzing longitudinal data obtained through repeated measurements on individuals over time. The model uses random effects to capture the intra-class correlation and the first lagged response to address the serial correlation, which are two common sources of dependency in longitudinal data. We demonstrate that a conditional likelihood approach that ignores the correlation between the random effects and the initial responses can lead to biased regularized estimates, and that joint modeling of initial responses and subsequent observations in the structure of dynamic random-intercepts models yields both consistency and oracle properties of the regularized estimators. We present theoretical results in both low- and high-dimensional settings and evaluate the regularized estimators' performance by conducting simulation studies and analyzing a real dataset. Supporting information is available online.

20.
In rational regression models, G-optimal designs are very difficult to derive in general. Even when a G-optimal design can be found, it has, from the point of view of modern nonparametric regression, certain drawbacks, because the optimal design crucially depends on the model and hence can be used only when the model is given in advance. This leads to the problem of finding designs that are nearly optimal for a broad class of rational regression models. In this article, we show that the so-called continuous Chebyshev design is a practical solution to this problem.
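A small illustrative sketch related to this abstract's design: the abstract's "continuous Chebyshev design" is a specific arcsine-type design, and a common discrete approximation to Chebyshev-type designs places support at the extrema of the Chebyshev polynomial T_n on [-1, 1], clustered near the interval endpoints.

```python
import numpy as np

def chebyshev_design(n):
    """Support points x_j = cos(j*pi/n), j = 0..n: the extrema of the
    Chebyshev polynomial T_n on [-1, 1], clustered near the endpoints."""
    return np.cos(np.arange(n + 1) * np.pi / n)
```

These points include both endpoints, are symmetric about zero, and concentrate mass where polynomial and rational approximations are hardest to control, which is the intuition behind their near-optimality across model classes.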

