Similar Literature
20 similar documents found.
1.
In this article, the two-way error component regression model is considered. For nonhomogeneous linear hypothesis testing of the regression coefficients, a parametric bootstrap (PB) approach is proposed. Simulation results indicate that the PB test, regardless of the sample sizes, maintains the Type I error rates very well and outperforms the existing generalized variable test, which may far exceed the intended significance level when the sample sizes are small or moderate. Real data examples show that the proposed approach works quite satisfactorily.
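The core PB recipe (estimate under the null, regenerate data from the fitted null model, and compare the observed statistic with its bootstrap distribution) can be sketched in a much simpler setting. The sketch below uses a single-regressor normal linear model, not the two-way error component model of the article; the data and function name are illustrative.

```python
import numpy as np

def pb_slope_test(x, y, B=2000, seed=0):
    """PB p-value for H0: slope = 0 in y = a + b*x + e, e ~ N(0, s^2)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # full-model OLS fit
    t_obs = abs(beta[1])                              # observed statistic
    sigma0 = np.std(y)                                # error sd under H0
    t_boot = np.empty(B)
    for b in range(B):
        # regenerate data from the fitted null model and refit
        y_b = y.mean() + sigma0 * rng.standard_normal(n)
        t_boot[b] = abs(np.linalg.lstsq(X, y_b, rcond=None)[0][1])
    return np.mean(t_boot >= t_obs)                   # bootstrap p-value

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 1.0 + 1.0 * x + rng.normal(size=100)
p = pb_slope_test(x, y)                               # small p: slope is real
```

Because the bootstrap samples are drawn from the fitted parametric null model rather than by resampling residuals, the test inherits its accuracy from the model assumptions, which is why PB tests can hold their Type I error rate even at small sample sizes.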

2.
To study the equality of regression coefficients in several heteroscedastic regression models, we propose a fiducial-based test and theoretically examine the frequency property of the proposed test. We numerically compare the performance of the proposed approach with the parametric bootstrap (PB) approach. Simulation results indicate that the fiducial approach controls the Type I error rates satisfactorily regardless of the number of regression models and sample sizes, whereas the PB approach tends to be somewhat liberal in some scenarios. Finally, the proposed approach is applied to the analysis of a real dataset for illustration.

3.
This article presents parametric bootstrap (PB) approaches for hypothesis testing and interval estimation for the regression coefficients and the variance components of panel data regression models with complete panels. The PB pivot variables are proposed based on sufficient statistics of the parameters. We also derive generalized inferences and improved generalized inferences for the variance components. Some simulation results are presented to compare the performance of the PB approaches with the generalized inferences. Our studies show that the PB approaches perform satisfactorily for various sample sizes and parameter configurations, and their performance is mostly the same as that of the generalized inferences with respect to expected lengths and powers. The PB inferences have almost exact coverage probabilities and Type I error rates. Furthermore, the PB procedure can be carried out by a few simple simulation steps, and the derivation is easier to understand and to extend to incomplete panels. Finally, the proposed approaches are illustrated using a real data example.

4.
Selection of the important variables is one of the key model selection problems in statistical applications. In this article, we address variable selection in finite mixtures of generalized semiparametric models. To overcome the computational burden, we introduce a class of penalized variable selection procedures for these models. The nonparametric component is estimated via multivariate kernel regression. The new method is shown to be consistent for variable selection, and its performance is assessed via simulation.

5.
We propose quantile regression (QR) in the Bayesian framework for a class of nonlinear mixed effects models with a known, parametric model form for longitudinal data. Estimation of the regression quantiles is based on a likelihood-based approach using the asymmetric Laplace density. Posterior computations are carried out via Gibbs sampling and the adaptive rejection Metropolis algorithm. To assess the performance of the Bayesian QR estimator, we compare it with the mean regression estimator using real and simulated data. Results show that the Bayesian QR estimator provides a fuller examination of the shape of the conditional distribution of the response variable. Our approach is proposed for parametric nonlinear mixed effects models, and therefore may not be generalized to models without a given model form.
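The asymmetric Laplace working likelihood is equivalent, in its location parameter, to the check (pinball) loss of classical quantile regression. A minimal non-Bayesian sketch of that connection, on simulated linear data (not the nonlinear mixed effects setting of the abstract; data and starting values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(beta, X, y, tau):
    """Pinball loss; minimizing it in beta is equivalent to maximizing
    an asymmetric Laplace likelihood in its location parameter."""
    u = y - X @ beta
    return np.sum(u * (tau - (u < 0)))

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 2, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)
X = np.column_stack([np.ones(n), x])

b0 = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS start for the optimizer
fits = {tau: minimize(check_loss, b0, args=(X, y, tau),
                      method="Nelder-Mead").x
        for tau in (0.25, 0.5, 0.75)}       # lower, median, upper quantiles
```

With homoscedastic normal errors the three fitted lines are parallel with shifted intercepts; a fanning-out pattern across quantiles is exactly the "fuller examination of the shape of the conditional distribution" the abstract refers to.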

6.
The inverse Gaussian distribution provides a flexible model for analyzing positive, right-skewed data. The generalized variable test for equality of several inverse Gaussian means with unknown and arbitrary variances has a satisfactory Type I error rate when the number of samples (k) is small (Tian, 2006). However, the Type I error rate tends to become inflated as k increases. In this article, we propose a parametric bootstrap (PB) approach for this problem. Simulation results show that the proposed test performs very satisfactorily regardless of the number of samples and sample sizes. The method is illustrated with an example.
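A minimal sketch of the two ingredients any PB test for inverse Gaussian means needs: simulating IG (Wald) data and computing the closed-form maximum-likelihood estimates. The parameter values and sample size below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.wald(mean=2.0, scale=5.0, size=500)   # IG(mu=2, lambda=5) sample

# closed-form MLEs for the inverse Gaussian IG(mu, lambda):
# mu_hat is the sample mean; lambda_hat = n / sum(1/x_i - 1/mu_hat)
mu_hat = x.mean()
lam_hat = len(x) / np.sum(1.0 / x - 1.0 / mu_hat)
```

A PB test for equality of k means would estimate (mu, lambda) in each sample, re-simulate all samples under a common fitted mean, and compare the observed test statistic with its bootstrap distribution, mirroring the recipe in item 1.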

7.
A method based on pseudo-observations has been proposed for direct regression modeling of functionals of interest with right-censored data, including the survival function, the restricted mean, and the cumulative incidence function in competing risks. Once the pseudo-observations have been computed, the models can be fitted using standard generalized estimating equation software. Regression models can, however, yield problematic results if the number of covariates is large relative to the number of events observed. Guidelines on the number of events per variable are often used in practice; these rules of thumb have primarily been established through simulation studies of the logistic and Cox regression models. In this paper we conduct a simulation study to examine the small-sample behavior of the pseudo-observation method for estimating risk differences and relative risks with right-censored data. We investigate how the coverage probabilities and relative bias of the pseudo-observation estimator interact with sample size, number of variables, and average number of events per variable.
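A minimal sketch of how pseudo-observations are computed: jackknife the Kaplan-Meier estimate of S(t), so that subject i receives the pseudo-value n*S(t) - (n-1)*S_{-i}(t), which can then be regressed on covariates with GEE software. The simulated exponential event and censoring times are illustrative only.

```python
import numpy as np

def km_surv(time, event, t):
    """Kaplan-Meier estimate of S(t) (assumes untied continuous times)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n, s = len(time), 1.0
    for i in range(n):
        if time[i] > t:
            break
        if event[i]:                       # an observed event at time[i]
            s *= 1.0 - 1.0 / (n - i)       # n - i subjects still at risk
    return s

def pseudo_obs(time, event, t):
    """Jackknife pseudo-observations for the survival probability S(t)."""
    n = len(time)
    s_full = km_surv(time, event, t)
    mask = np.ones(n, dtype=bool)
    po = np.empty(n)
    for i in range(n):
        mask[i] = False                    # leave subject i out
        po[i] = n * s_full - (n - 1) * km_surv(time[mask], event[mask], t)
        mask[i] = True
    return po

rng = np.random.default_rng(0)
n = 200
T = rng.exponential(2.0, size=n)           # latent event times
C = rng.exponential(4.0, size=n)           # censoring times
time, event = np.minimum(T, C), T <= C
po = pseudo_obs(time, event, 1.0)          # true S(1) = exp(-0.5) ~ 0.61
```

Each pseudo-value acts as a complete-data stand-in for the (possibly censored) indicator 1{T_i > t}, so their mean tracks the Kaplan-Meier estimate itself.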

8.
There are several procedures for fitting generalized additive models, i.e., regression models for an exponential family response where the influence of each single covariate is assumed to have an unknown, potentially non-linear shape. Simulated data are used to compare a smoothing parameter optimization approach for selection of smoothness and of covariates, a stepwise approach, a mixed model approach, and a procedure based on boosting techniques. In particular, it is investigated how the performance of the procedures is linked to the amount of information, the type of response, the total number of covariates, the number of influential covariates, and the extent of non-linearity. Measures for comparison are prediction performance, identification of influential covariates, and smoothness of the fitted functions. One result is that the mixed model approach returns sparse fits with frequently over-smoothed functions, while for the boosting approach the functions are less smooth and variable selection is less strict. The other approaches fall in between with respect to these measures. The boosting procedure performs very well when little information is available and/or when a large number of covariates is to be investigated. It is somewhat surprising that in scenarios with low information, fitting a linear model, even with stepwise variable selection, has little advantage over fitting an additive model when the true underlying structure is linear. In cases with more information the prediction performance of all procedures is very similar. So, in difficult data situations the boosting approach can be recommended; in others, the procedure can be chosen according to the aim of the analysis.

9.
Thin plate regression splines
I discuss the production of low rank smoothers for d ≥ 1 dimensional data, which can be fitted by regression or penalized regression methods. The smoothers are constructed by a simple transformation and truncation of the basis that arises from the solution of the thin plate spline smoothing problem and are optimal in the sense that the truncation is designed to result in the minimum possible perturbation of the thin plate spline smoothing problem given the dimension of the basis used to construct the smoother. By making use of Lanczos iteration the basis change and truncation are computationally efficient. The smoothers allow the use of approximate thin plate spline models with large data sets, avoid the problems that are associated with 'knot placement' that usually complicate modelling with regression splines or penalized regression splines, provide a sensible way of modelling interaction terms in generalized additive models, provide low rank approximations to generalized smoothing spline models, appropriate for use with large data sets, provide a means for incorporating smooth functions of more than one variable into non-linear models and improve the computational efficiency of penalized likelihood models incorporating thin plate splines. Given that the approach produces spline-like models with a sparse basis, it also provides a natural way of incorporating unpenalized spline-like terms in linear and generalized linear models, and these can be treated just like any other model terms from the point of view of model selection, inference and diagnostics.
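SciPy exposes a full-basis (not low-rank) penalized thin plate spline through `RBFInterpolator`, so the sketch below only illustrates the kind of smoother this paper approximates, not its truncated-basis construction (the paper's method is what R's mgcv package provides as `s(..., bs="tp")`). Data and smoothing value are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(300, 2))                 # 2-d covariates
z = np.sin(3 * xy[:, 0]) + xy[:, 1] ** 2 + rng.normal(scale=0.1, size=300)

# penalized thin plate spline; smoothing > 0 trades fidelity for smoothness
# (smoothing = 0 would interpolate the noisy observations exactly)
tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1.0)
z_hat = tps(xy)                                        # fitted surface values
```

The full-basis solve above costs O(n^3), which is exactly the bottleneck the paper's truncated eigenbasis (computed via Lanczos iteration) is designed to avoid for large n.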

10.
Semiparametric regression models with multiple covariates are commonly encountered. When some covariates are not associated with the response variable, variable selection can lead to sparser models, more lucid interpretations, and more accurate estimation. In this study, we adopt a sieve approach for the estimation of nonparametric covariate effects in semiparametric regression models, together with a two-step iterated penalization approach for variable selection. In the first step, a mixture of the Lasso and group Lasso penalties is employed to conduct the first-round variable selection and obtain an initial estimate. In the second step, a mixture of the weighted Lasso and weighted group Lasso penalties, with weights constructed from the initial estimate, is employed for variable selection. We show that the proposed iterated approach has the variable selection consistency property, even when the number of unknown parameters diverges with the sample size. Numerical studies, including simulation and the analysis of a diabetes dataset, show satisfactory performance of the proposed approach.
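The two-step weighted penalization idea can be sketched for a plain linear model with a scalar (not group) lasso: fit once, build weights inversely proportional to the initial coefficients, and refit. The coordinate-descent solver, data, and tuning value below are illustrative, not the sieve/group-lasso procedure of the paper.

```python
import numpy as np

def lasso_cd(X, y, lam, w=None, sweeps=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * sum_j w_j |b_j|."""
    n, p = X.shape
    w = np.ones(p) if w is None else w
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n
    for _ in range(sweeps):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]            # partial residual
            rho = X[:, j] @ r_j / n
            # soft-thresholding update for coordinate j
            b[j] = np.sign(rho) * max(abs(rho) - lam * w[j], 0.0) / col_ss[j]
    return b

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
y = X @ np.array([3.0, -2.0] + [0.0] * 8) + rng.normal(size=n)

b1 = lasso_cd(X, y, lam=0.2)              # step 1: plain lasso
w = 1.0 / (np.abs(b1) + 1e-3)             # weights from the initial estimate
b2 = lasso_cd(X, y, lam=0.2, w=w)         # step 2: weighted lasso
```

The second step penalizes small initial coefficients heavily (driving true zeros to exact zero) while barely shrinking large ones, which is what gives the weighted scheme its selection-consistency property.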

11.
Results from classical linear regression regarding the effects of covariate adjustment, with respect to the issues of confounding, the precision with which an exposure effect can be estimated, and the efficiency of hypothesis tests for no treatment effect in randomized experiments, are often assumed to apply more generally to other types of regression models. In this paper results pertaining to several generalized linear models involving a dichotomous response variable are given, demonstrating that with respect to the issues of confounding and precision, for models having a linear or log link function the results of classical linear regression do generally apply, whereas for other models, including those having a logit, probit, log-log, complementary log-log, or generalized logistic link function, the results of classical linear regression do not always apply. It is also shown, however, that for any link function, covariate adjustment results in improved efficiency of hypothesis tests for no treatment effect in randomized experiments, and hence that the classical linear regression results regarding efficiency do apply for all models having a dichotomous response variable.

12.
In real-data analysis, deciding the best subset of variables in regression models is an important problem. Akaike's information criterion (AIC) is often used to select variables in many fields. When the sample size is not large, the AIC has a non-negligible bias that will detrimentally affect variable selection. The present paper considers a bias correction of the AIC for selecting variables in the generalized linear model (GLM). The GLM can express a number of statistical models by changing the distribution and the link function, such as the normal linear regression model, the logistic regression model, and the probit model, which are commonly used in a number of applied fields. In the present study, we obtain a simple expression for a bias-corrected AIC (CAIC) in GLMs, and we provide 'R' code based on our formula. A numerical study reveals that the CAIC has better performance than the AIC for variable selection.
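The paper's CAIC formula is specific to GLMs and is not reproduced here; as a point of reference, the classical small-sample correction for the Gaussian linear model (the Hurvich-Tsai AICc) has the same flavor: AIC plus a penalty term that grows as k approaches n. Data below are illustrative.

```python
import numpy as np

def aic_aicc(X, y):
    """AIC (up to an additive constant) and the classical AICc for a
    Gaussian linear model; k counts coefficients plus the error variance."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = np.sum((y - X @ beta) ** 2)
    k = p + 1
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    return aic, aicc

rng = np.random.default_rng(0)
n = 30                                           # small n: correction matters
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X[:, 1] + rng.normal(size=n)
aic, aicc = aic_aicc(X, y)
```

Because the correction term 2k(k+1)/(n-k-1) is always positive and explodes as k nears n, the corrected criterion penalizes over-parameterized models more strongly exactly where the plain AIC is most biased.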

13.
In this article, we consider the three-factor unbalanced nested design model without the assumption of equal error variances. For the problem of testing the main effects of the three factors, we propose a parametric bootstrap (PB) approach and compare it with the existing generalized F (GF) test. The Type I error rates of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test performs better than the GF test: the PB test performs very satisfactorily even for small samples, while the GF test exhibits poor Type I error properties as the number of factorial combinations or treatments increases. The same tests can also be used to test the significance of the random effect variance component in a three-factor mixed effects nested model under unequal error variances.

14.
The Dantzig selector (DS) is a recent approach to estimation in high-dimensional linear regression models with a large number of explanatory variables and a relatively small number of observations. As in the least absolute shrinkage and selection operator (LASSO), this approach sets certain regression coefficients exactly to zero, thus performing variable selection. However, such a framework, contrary to the LASSO, has never been used in regression models for survival data with censoring. A key motivation of this article is to study the estimation problem for Cox's proportional hazards (PH) regression models using a framework that extends the theory, the computational advantages, and the optimal asymptotic rate properties of the DS to the class of Cox PH models under appropriate sparsity scenarios. We perform a detailed simulation study to compare our approach with other methods and illustrate it on a well-known microarray gene expression dataset for predicting survival from gene expressions.

15.
A generalized cumulative damage approach is presented which yields a large family of accelerated test inverse Gaussian-type models for strength of materials that incorporate the size effect as the acceleration variable. Previous models are generalized here in three aspects: the cumulative damage model is more general and can include damage functions other than the additive and multiplicative damages; the strength reduction function due to initial damage existing in the material is taken to be a very general function; and the initial damage process is a more general stochastic process that includes those previously assumed as special cases. The approach taken here is therefore the most general cumulative damage model obtained to date and yields a large number of potentially more useful accelerated test models for material strength. Estimation of model parameters by maximum likelihood methods is discussed, and two examples using real tensile strength data for carbon micro-composites and single carbon fibers are presented, illustrating the improvement of the new approach over previous models.

16.
An elicitation method is proposed for quantifying subjective opinion about the regression coefficients of a generalized linear model. Opinion between a continuous predictor variable and the dependent variable is modelled by a piecewise-linear function, giving a flexible model that can represent a wide variety of opinion. To quantify his or her opinions, the expert uses an interactive computer program, performing assessment tasks that involve drawing graphs and bar-charts to specify medians and other quantiles. Opinion about the regression coefficients is represented by a multivariate normal distribution whose parameters are determined from the assessments. It is practical to use the procedure with models containing a large number of parameters. This is illustrated through practical examples and the benefit from using prior knowledge is examined through cross-validation.

17.
Penalized methods for variable selection, such as the smoothly clipped absolute deviation (SCAD) penalty, have been increasingly applied to aid variable selection in regression analysis. Much of the literature has focused on parametric models, while a few recent studies have extended these applications to the popular semi-parametric, or distribution-free, generalized estimating equations (GEE) and weighted GEE (WGEE). However, although the WGEE is composed of one main module and one missing-data module, available methods focus only on the main module, with no variable selection for the missing-data module. In this paper, we develop a new approach that extends the existing methods to enable variable selection for both modules. The approach is illustrated with both real and simulated study data.

18.
This paper is concerned with the selection of explanatory variables in generalized linear models (GLMs). The class of GLMs is quite large and contains, for example, ordinary linear regression, binary logistic regression, the probit model, and Poisson regression with a linear or log-linear parameter structure. We show that, through an approximation of the log-likelihood and a certain data transformation, the variable selection problem in a GLM can be converted into variable selection in an ordinary (unweighted) linear regression model. As a consequence, no specific computer software for variable selection in GLMs is needed; instead, any suitable variable selection program for linear regression can be used. We also present a simulation study which shows that the log-likelihood approximation is very good in many practical situations. Finally, we briefly mention possible extensions to regression models outside the class of GLMs.
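The standard data transformation of this general kind is the IRLS working response: at each step, the GLM fit is exactly a weighted least squares regression on z = eta + (y - mu) / (dmu/deta), so linear-model machinery applies. A logistic-regression sketch of that equivalence (the paper's specific approximation and transformation may differ); data are illustrative:

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Logistic regression by IRLS: every step is a weighted least squares
    fit to the working response z = eta + (y - mu) / (mu * (1 - mu))."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)                 # IRLS weights
        z = eta + (y - mu) / w              # transformed "working" response
        WX = X * w[:, None]                 # weighted design matrix
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))
y = (rng.uniform(size=n) < p_true).astype(float)
beta_hat = irls_logistic(X, y)              # true coefficients (0.5, 1.5)
```

Since each IRLS step is just (weighted) linear regression on z, any variable selection routine written for linear models can in principle be pointed at the transformed data, which is the practical payoff the abstract describes.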

19.
The problem of heavy tails in regression models is studied. We propose estimating the regression model by a standard procedure and then conducting a statistical check for heavy tails on the residuals as a regression diagnostic. Using the peaks-over-threshold approach, the generalized Pareto distribution quantifies the degree of heavy-tailedness through the extreme value index. The number of excesses is determined by means of an innovative threshold model which partitions the random sample into extreme values and ordinary values. The overall decision on a significant heavy tail is justified by both a statistical test and a quantile-quantile plot. The usefulness of the approach includes justifying the goodness of fit of the estimated regression model and quantifying the occurrence of extremal events. The proposed methodology is illustrated with surface ozone levels in the city centre of Leeds.
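The peaks-over-threshold step can be sketched with SciPy's generalized Pareto fit. The fixed 95%-quantile threshold below is a simplification standing in for the paper's threshold model, and the simulated Student-t "residuals" are illustrative (for a t distribution with 4 degrees of freedom the true extreme value index is 1/4).

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
resid = rng.standard_t(df=4, size=2000)       # heavy-tailed "residuals"

u = np.quantile(resid, 0.95)                  # fixed-quantile threshold
excess = resid[resid > u] - u                 # peaks over the threshold
xi, _, sigma = genpareto.fit(excess, floc=0)  # xi = extreme value index
```

A clearly positive fitted xi signals a heavy (Pareto-type) tail in the residuals; xi near zero or negative suggests the light-tailed error assumption of the fitted regression is adequate.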

20.
This article presents parametric bootstrap (PB) approaches for hypothesis testing and interval estimation for the regression coefficients of panel data regression models with incomplete panels. Some simulation results are presented to compare the performance of the PB approaches with approximate inferences. Our studies show that the PB approaches perform satisfactorily for various sample sizes and parameter configurations, and their performance is mostly better than that of the approximate methods with respect to coverage probabilities and Type I error rates. The PB inferences have almost exact coverage probabilities and Type I error rates. Furthermore, the PB procedure can be carried out by a few simple simulation steps, and the derivation is easier to understand and to extend to multi-way error component regression models with unbalanced panels. Finally, the proposed approaches are illustrated using a real data example.
