Sorted by relevance: 137 results found (search time: 46 ms)
101.
《Journal of Statistical Computation and Simulation》2012,82(2-4):333-341
A convenient recursive computational method for repeated measures analysis, provided by McGilchrist and Cullis (1990), has been extended by the authors to heterogeneous error structures and to the repeated measures model with random coefficients. The approach is outlined briefly in this paper. A computing program for the approach has been written and used to obtain results for simulated data having various error structures. A summary of the results is given. The computing program, together with some subroutines, is available from the authors.
102.
《Journal of Statistical Computation and Simulation》2012,82(2-4):263-279
We construct bootstrap confidence intervals for smoothing spline estimates based on Gaussian data, and penalized likelihood smoothing spline estimates based on data from exponential families. Several variations of bootstrap confidence intervals are considered and compared. We find that the commonly used bootstrap percentile intervals are inferior to the T intervals and to intervals based on bootstrap estimation of mean squared errors. The best variations of the bootstrap confidence intervals behave similarly to the well-known Bayesian confidence intervals. These bootstrap confidence intervals have an average coverage probability across the function being estimated, as opposed to a pointwise property.
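The residual-bootstrap percentile interval mentioned in the abstract above can be sketched in a few lines. This is not the authors' code: the smoothing-spline implementation (scipy's `UnivariateSpline`), the smoothing factor, and the simulation settings are all illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# Fit a smoothing spline; s is chosen near the expected residual sum of squares
s = x.size * 0.3 ** 2
fit = UnivariateSpline(x, y, s=s)(x)
resid = y - fit

# Residual bootstrap: refit the spline to pseudo-data built from resampled residuals
B = 300
boot = np.empty((B, x.size))
for b in range(B):
    y_star = fit + rng.choice(resid, size=resid.size, replace=True)
    boot[b] = UnivariateSpline(x, y_star, s=s)(x)

# Pointwise 95% percentile interval -- the variant the paper finds inferior
# to T and MSE-based intervals, but the simplest one to state
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
```

The T and MSE-based variants the paper prefers replace the percentile step with studentized or bias/variance-corrected limits computed from the same bootstrap fits.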
103.
Kingshuk Roy Choudhury Catharine Pettigrew 《Journal of statistical planning and inference》2012,142(1):12-24
Mismatch negativity (MMN) is a neurophysiological tool that can be used to investigate various facets of comprehension. Subjects are presented with different stimuli to elicit the MMN response, which is derived from electroencephalography (EEG) signals recorded at electrodes across the brain. We propose a methodology to extend single-electrode analyses of MMN data by generating smooth scalp maps of estimated experimental effects. It is shown that penalized least squares estimates of effect maps can be produced using a two-step procedure involving (a) ANOVA at each electrode and (b) spatial smoothing across electrodes. A Fisher-von Mises kernel is used to smooth scalp maps, with cross-validated bandwidth selection. The methodology is applied to a case-control study involving aphasics (language-disordered individuals). Analysis of residuals shows possible heteroscedasticity and non-Gaussian tail behavior. For robust inference, a semiparametric multivariate approach is proposed to determine the significance of parametric maps. A variety of global and regional test statistics are developed to investigate the significance of spatial patterns in treatment effects. The methodology is seen to confirm previous findings from single-electrode analysis and identifies some new significant spatial patterns of difference between controls and aphasics.
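The two-step procedure described in the abstract above can be sketched as follows. This is a minimal stand-in, not the paper's implementation: electrode positions are random unit vectors, the per-electrode "ANOVA" step is reduced to a two-group mean difference, and the bandwidth (concentration) parameter is fixed rather than cross-validated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_elec, n_subj = 32, 10

# Electrode positions as unit vectors on the upper hemisphere of a unit sphere
dirs = rng.normal(size=(n_elec, 3))
dirs[:, 2] = np.abs(dirs[:, 2])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Step (a): per-electrode effect estimate; with two groups the ANOVA
# contrast reduces to a difference of group means
controls = rng.normal(size=(n_subj, n_elec))
patients = rng.normal(loc=0.5, size=(n_subj, n_elec))
effect = patients.mean(axis=0) - controls.mean(axis=0)

# Step (b): smooth the effect map over the scalp with a von Mises-Fisher-type
# kernel; kappa acts as an inverse bandwidth (larger kappa = less smoothing)
def vmf_smooth(query, dirs, values, kappa=20.0):
    w = np.exp(kappa * (query @ dirs.T))   # unnormalized kernel weights
    w /= w.sum(axis=1, keepdims=True)      # normalize so each row sums to 1
    return w @ values

smoothed = vmf_smooth(dirs, dirs, effect)
```

Because the weights are normalized, smoothing a constant effect map returns the same constant, a basic sanity property of kernel regression on the sphere.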
104.
Won Son Jong Soo Lee Kyeong Eun Lee Johan Lim 《Journal of the Korean Statistical Society》2018,47(4):482-490
In this paper, we propose a new iterative sparse algorithm (ISA) to compute the maximum likelihood estimator (MLE) or penalized MLE of the mixed effects model. The sparse approximation based on the arrow-head (A-H) matrix is one solution that is popularly used in practice. The A-H method provides an easy computation of the inverse of the Hessian matrix and is computationally efficient. However, it often has non-negligible error in approximating the inverse of the Hessian matrix and in the estimation. Unlike the A-H method, the ISA applies the sparse approximation “iteratively” to reduce the approximation error at each Newton-Raphson step. The advantages of the ISA over the exact and A-H methods are illustrated using several synthetic and real examples.
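The computational appeal of the arrow-head structure mentioned above is that a system with a diagonal block bordered by one row and column can be solved in linear time via the scalar Schur complement, instead of a cubic-time dense solve. The sketch below illustrates this for a single arrow-head system; it is a generic linear-algebra identity, not the paper's ISA.

```python
import numpy as np

def solve_arrowhead(d, z, a, b1, b2):
    """Solve A x = b for the arrow-head matrix A = [[diag(d), z], [z^T, a]]
    in O(p) time by eliminating the diagonal block (scalar Schur complement)."""
    dinv_b1 = b1 / d
    dinv_z = z / d
    s = a - z @ dinv_z                  # Schur complement (a scalar here)
    x2 = (b2 - z @ dinv_b1) / s         # solve for the bordered coordinate
    x1 = dinv_b1 - dinv_z * x2          # back-substitute into the diagonal block
    return np.append(x1, x2)

# Small demonstration against a dense assembly of the same matrix
rng = np.random.default_rng(0)
p = 6
d = rng.uniform(1.0, 2.0, size=p)
z = rng.normal(size=p)
a = 10.0
A = np.zeros((p + 1, p + 1))
A[:p, :p] = np.diag(d)
A[:p, p] = z
A[p, :p] = z
A[p, p] = a
b = rng.normal(size=p + 1)
x = solve_arrowhead(d, z, a, b[:p], b[p])
```

The ISA of the paper goes further by re-applying a sparse approximation of this kind at every Newton-Raphson step to keep the approximation error from accumulating.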
105.
Although the t-type estimator is a kind of M-estimator with scale optimization, it has some advantages over the M-estimator. In this article, we first propose a t-type joint generalized linear model as a robust extension of the classical joint generalized linear models for modeling data containing extreme or outlying observations. Next, we develop a t-type pseudo-likelihood (TPL) approach, which can be viewed as a robust version of the existing pseudo-likelihood (PL) approach. To determine which variables significantly affect the variance of the response variable, we then propose a unified penalized maximum TPL method to simultaneously select significant variables for the mean and dispersion models in t-type joint generalized linear models. Thus, the proposed method can simultaneously perform parameter estimation and variable selection in the mean and dispersion models. With appropriate selection of the tuning parameters, we establish the consistency and the oracle property of the regularized estimators. Simulation studies are conducted to illustrate the proposed methods.
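The robustness mechanism behind t-type estimation can be illustrated with the classical EM/IRLS fit of a regression with t-distributed errors: observations with large standardized residuals receive small weights and barely influence the fit. This is a generic sketch of that idea for a linear mean model only, not the paper's joint mean-dispersion estimator; the degrees of freedom `nu` and the simulated data are assumptions.

```python
import numpy as np

def t_irls(X, y, nu=3.0, n_iter=50):
    """Regression with t-distributed errors via EM/IRLS; the E-step weights
    (nu + 1) / (nu + r^2 / sigma^2) downweight outlying observations."""
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start from ordinary LS
    sigma2 = np.mean((y - X @ beta) ** 2)
    for _ in range(n_iter):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)   # small weight for outliers
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # weighted normal equations
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / n
    return beta, sigma2

# Demonstration: a clean line contaminated by gross outliers
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)
y[:10] += 20.0                                   # contaminate ten observations
beta_t, _ = t_irls(X, y)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

Here the ordinary least squares slope is pulled far from the true value 2 by the contaminated points, while the t-type fit essentially ignores them.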
106.
This paper studies penalized quantile regression for dynamic panel data with fixed effects, where the penalty involves l1 shrinkage of the fixed effects. Using extensive Monte Carlo simulations, we present evidence that the penalty term reduces the dynamic panel bias and increases the efficiency of the estimators. The underlying intuition is that there is no need to use instrumental variables for the lagged dependent variable in the dynamic panel data model without fixed effects. This provides an additional use for shrinkage models beyond model selection and efficiency gains. We propose a Bayesian information criterion based estimator for the parameter that controls the degree of shrinkage. We illustrate the usefulness of this econometric technique by estimating a “target leverage” model that includes a speed of capital structure adjustment. Using the proposed penalized quantile regression model, the estimates of the adjustment speeds lie between 3% and 44% across the quantiles, giving strong evidence of substantial heterogeneity in the speed of adjustment among firms.
107.
X. Lin & D. Zhang 《Journal of the Royal Statistical Society. Series B, Statistical methodology》1999,61(2):381-400
Generalized additive mixed models are proposed for overdispersed and correlated data, which arise frequently in studies involving clustered, hierarchical and spatial designs. This class of models allows flexible functional dependence of an outcome variable on covariates by using nonparametric regression, while accounting for correlation between observations by using random effects. We estimate nonparametric functions by using smoothing splines and jointly estimate smoothing parameters and variance components by using marginal quasi-likelihood. Because numerical integration is often required in maximizing the objective functions, double penalized quasi-likelihood is proposed for approximate inference. Frequentist and Bayesian inferences are compared. A key feature of the proposed method is that it allows us to make systematic inference on all model components within a unified parametric mixed model framework, and it can be easily implemented by fitting a working generalized linear mixed model using existing statistical software. A bias correction procedure is also proposed to improve the performance of double penalized quasi-likelihood for sparse data. We illustrate the method with an application to infectious disease data and we evaluate its performance through simulation.
108.
109.
Patrizio Frederic 《Journal of statistical planning and inference》2011,141(8):2878-2890
We propose a new flexible procedure for modeling skew-symmetric (SS) distributions via B-splines. To avoid over-fitting we follow a penalized likelihood estimation method. The structure of “B-splines SS with penalties” provides a flexible and smooth semi-parametric setting, allowing estimates that capture many features of the target function such as asymmetry and multimodality. After outlining some theoretical results, we propose an effective computational strategy. Finally, we present some empirical results on both simulated and real data from chemical processing and banknote forgery.
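The skew-symmetric construction behind the abstract above takes a density of the form f(x) = 2 φ(x) Π(s(x)) with an odd skewing function s, and fits s by penalized likelihood. The sketch below uses a small odd polynomial basis as a stand-in for the paper's B-splines and a quadratic penalty as a stand-in for its roughness penalty; both substitutions, and the simulated skew-normal data, are assumptions.

```python
import numpy as np
from scipy.stats import norm, skewnorm
from scipy.optimize import minimize

# Data from a skew-normal, whose density is exactly 2 phi(x) Phi(3x),
# so the model is correctly specified with s(x) = 3x
x = skewnorm.rvs(a=3, size=500, random_state=1)
lam = 0.1

def neg_pen_loglik(coef):
    # Odd basis {x, x^3} keeps f a valid skew-symmetric density
    s = coef[0] * x + coef[1] * x ** 3
    loglik = np.sum(np.log(2.0) + norm.logpdf(x) + norm.logcdf(s))
    return -loglik + lam * np.sum(coef ** 2)   # quadratic penalty stand-in

res = minimize(neg_pen_loglik, np.zeros(2), method="BFGS")
```

With right-skewed data the fitted linear coefficient comes out positive, recovering the direction of asymmetry; the paper's B-spline basis would additionally let s bend freely enough to capture multimodality.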
110.
High dimensional models, which involve many parameters relative to a moderate amount of data, are receiving attention from diverse research fields. Model selection is an important issue in such high dimensional data analysis. Recent literature on the theoretical understanding of high dimensional models covers a wide range of penalized methods, including the LASSO and SCAD. This paper presents a systematic overview of recent developments in high dimensional statistical models. We provide a brief review of the recent development of theory, methods, and guidelines on applications of several penalized methods. The review includes appropriate settings in which each reviewed method can be implemented, along with its limitations and potential solutions. In particular, we provide a systematic review of the statistical theory of high dimensional methods by considering a unified high-dimensional modeling framework together with high-level conditions. This framework includes (generalized) linear regression and quantile regression as special cases. We hope our review helps researchers in this field to better understand the area and provides useful information for future study.
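As a concrete instance of the penalized methods surveyed above, the LASSO can be fitted by coordinate descent with a soft-thresholding update. This is a textbook sketch, not taken from any of the reviewed papers; the simulated design, sparsity pattern, and tuning parameter are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, g):
    """Soft-thresholding operator S(z, g) = sign(z) * max(|z| - g, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n) ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j
            r_j = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_ss[j]
    return beta

# Sparse truth: only the first two coefficients are nonzero
rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[0], beta_true[1] = 2.0, -3.0
y = X @ beta_true + rng.normal(scale=0.1, size=n)
beta_hat = lasso_cd(X, y, lam=0.1)
```

The l1 penalty sets the irrelevant coefficients to (near) zero while only mildly shrinking the active ones, which is the selection-plus-estimation behavior, and the attendant bias, that motivates nonconvex alternatives such as SCAD.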