Similar Literature
20 similar documents were retrieved.
1.
Empirical-likelihood-based inference for the parameters in the generalized partially linear single-index model (GPLSIM) is investigated. Based on local linear estimators of the nonparametric parts of the GPLSIM, an estimated empirical-likelihood statistic for the parametric components is proposed. We show that the resulting statistic is asymptotically standard chi-squared distributed, and confidence regions for the parametric components are constructed accordingly. Some simulations are conducted to illustrate the proposed method.
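The empirical-likelihood calibration behind such chi-squared confidence regions can be illustrated in its simplest scalar form. The Python sketch below is a minimal illustration of Owen's empirical log-likelihood ratio for a univariate mean, the basic building block rather than the authors' GPLSIM construction; all data and grid choices in it are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean of x at value mu."""
    d = x - mu
    if d.max() <= 0 or d.min() >= 0:
        return np.inf          # mu outside the convex hull of the data
    # Solve sum(d_i / (1 + lam * d_i)) = 0 for the Lagrange multiplier lam;
    # the implied weights w_i = 1 / (n * (1 + lam * d_i)) must stay positive.
    eps = 1e-10
    lo = -1.0 / d.max() + eps
    hi = -1.0 / d.min() - eps
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=200)

stat = el_log_ratio(x, mu=1.0)          # ~ chi-square(1) under the null
print("statistic:", stat, "p-value:", chi2.sf(stat, df=1))

# 95% confidence interval for the mean by inverting the chi-square calibration
grid = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 400)
ci = [m for m in grid if el_log_ratio(x, m) <= chi2.ppf(0.95, df=1)]
print("approx. 95% CI:", min(ci), max(ci))
```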

2.
In this article, procedures are proposed to test the hypothesis of equality of two or more regression functions. Tests based on p-values are first proposed under the homoscedastic regression model, with the p-values derived by a fiducial method based on cubic spline interpolation. We then construct a test for the heteroscedastic case based on Fisher's method of combining independent tests. We study the behavior of the tests in simulation experiments, in which comparisons with other tests are also given; the proposed tests perform well. Finally, an application to a data set is given to illustrate the usefulness of the proposed test in practice.
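Fisher's method of combining independent tests, used above for the heteroscedastic case, is standard and easy to sketch; the p-values below are made-up placeholders rather than values from the article.

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(pvals):
    """Combine independent p-values with Fisher's method.

    Under the global null, -2 * sum(log p_i) follows a chi-square
    distribution with 2k degrees of freedom, k = number of tests.
    """
    pvals = np.asarray(pvals, dtype=float)
    stat = -2.0 * np.sum(np.log(pvals))
    return stat, chi2.sf(stat, df=2 * len(pvals))

# Hypothetical p-values from independent sub-tests (placeholders):
stat, p_combined = fisher_combine([0.08, 0.21, 0.04])
print(f"combined statistic = {stat:.3f}, combined p-value = {p_combined:.4f}")
```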

3.
We study a semiparametric generalized additive coefficient model (GACM), in which the linear predictors in conventional generalized linear models are generalized to unknown functions depending on certain covariates, and approximate the non-parametric functions using polynomial splines. The asymptotic expansion with optimal rates of convergence for the estimators of the non-parametric part is established. A semiparametric generalized likelihood ratio test is also proposed to check whether a non-parametric coefficient can be simplified to a parametric one. A conditional bootstrap version is suggested to approximate the distribution of the test under the null hypothesis. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed methods. We further apply the proposed model and methods to a data set from a human visceral Leishmaniasis study conducted in Brazil from 1994 to 1997. Numerical results show that the proposed GACM outperforms the traditional generalized linear model.
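The idea of calibrating a test by a bootstrap under the null can be conveyed with a deliberately simplified stand-in: the sketch below uses a plain residual bootstrap of an F-type statistic for "is this coefficient constant in x?", with a quadratic alternative and simulated data that are assumptions of this illustration, not the GACM test of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(0, 1, n)
z = rng.normal(size=n)
y = 1.0 * z + rng.normal(scale=0.5, size=n)   # coefficient of z does not vary with x (H0 true)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2), beta

X0 = z[:, None]                                   # null: constant coefficient
X1 = np.column_stack([z, x * z, x**2 * z])        # alternative: coefficient varies with x

def statistic(y):
    r0, _ = rss(X0, y)
    r1, _ = rss(X1, y)
    return (r0 - r1) / r1

t_obs = statistic(y)

# Bootstrap under the fitted null model: keep fitted values, resample residuals.
_, beta0 = rss(X0, y)
fitted0 = X0 @ beta0
resid0 = y - fitted0
B = 500
t_boot = np.empty(B)
for b in range(B):
    y_star = fitted0 + rng.choice(resid0, size=n, replace=True)
    t_boot[b] = statistic(y_star)

p_value = np.mean(t_boot >= t_obs)
print(f"observed statistic = {t_obs:.4f}, bootstrap p-value = {p_value:.3f}")
```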

4.
Some asymptotic results on generalized penalized spline smoothing
The paper discusses asymptotic properties of penalized spline smoothing when the spline basis increases with the sample size. The proof is provided in a generalized smoothing model allowing for non-normal responses. The results are extended in two ways. First, assuming the spline coefficients to be a priori normally distributed links the smoothing framework to generalized linear mixed models. We consider the asymptotic rates such that the Laplace approximation is justified and the resulting fits in the mixed model correspond to penalized spline estimates. Second, we take a fully Bayesian viewpoint by imposing an a priori distribution on all parameters and coefficients. We argue that, with the postulated rates at which the spline basis dimension increases with the sample size, the posterior distribution of the spline coefficients is approximately normal. The validity of this result is investigated in finite samples by comparing Markov chain Monte Carlo results with their asymptotic approximation in a simulation study.
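A minimal penalized-spline fit of the kind analysed here can be sketched as follows; the hand-built truncated-power basis, the fixed smoothing parameter, and the simulated data are all assumptions of this sketch rather than anything prescribed by the paper.

```python
import numpy as np

def tp_basis(x, knots, degree=3):
    """Truncated-power spline basis: 1, x, ..., x^degree, (x - k)_+^degree."""
    cols = [x**d for d in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(2)
n = 400
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

K = 35                                    # many knots, growing basis in spirit
knots = np.quantile(x, np.linspace(0, 1, K + 2)[1:-1])
B = tp_basis(x, knots)

# Penalize only the truncated-power coefficients (a ridge-type penalty),
# leaving the polynomial part unpenalized; lam is a fixed smoothing parameter.
lam = 1.0
P = np.zeros(B.shape[1])
P[4:] = 1.0                               # degree 3 -> first 4 columns unpenalized
coef = np.linalg.solve(B.T @ B + lam * np.diag(P), B.T @ y)
fit = B @ coef

print("residual SD:", np.std(y - fit))
```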

5.

Regression spline smoothing is a popular approach for conducting nonparametric regression. An important issue associated with it is the choice of a "theoretically best" set of knots. Different statistical model selection methods, such as Akaike's information criterion and generalized cross-validation, have been applied to derive different "theoretically best" sets of knots. Typically these best knot sets are defined implicitly as the optimizers of some objective functions. Hence another equally important issue concerning regression spline smoothing is how to optimize such objective functions. In this article different numerical algorithms that are designed for carrying out such optimization problems are compared by means of a simulation study. Both the univariate and bivariate smoothing settings will be considered. Based on the simulation results, recommendations for choosing a suitable optimization algorithm under various settings will be provided.
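As a concrete example of a "theoretically best" knot set defined through an objective function, the sketch below evaluates generalized cross-validation over equally spaced knot sets of increasing size for an unpenalized cubic regression spline and picks the minimizer; this is a much cruder search than the dedicated optimization algorithms compared in the article, and all numerical choices are assumptions of this illustration.

```python
import numpy as np

def tp_basis(x, knots, degree=3):
    """Truncated-power cubic spline basis."""
    cols = [x**d for d in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

def gcv(x, y, knots):
    """GCV score of an unpenalized cubic regression spline with given knots."""
    B = tp_basis(x, knots)
    H = B @ np.linalg.pinv(B.T @ B) @ B.T          # hat matrix
    resid = y - H @ y
    n, trH = len(y), np.trace(H)
    return n * np.sum(resid**2) / (n - trH) ** 2

rng = np.random.default_rng(3)
n = 300
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(4 * np.pi * x**2) + rng.normal(scale=0.2, size=n)

scores = {}
for K in range(2, 21):
    knots = np.linspace(0, 1, K + 2)[1:-1]         # K interior, equally spaced
    scores[K] = gcv(x, y, knots)

best_K = min(scores, key=scores.get)
print("GCV-selected number of knots:", best_K)
```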

6.
Varying-coefficient models are useful extensions of classical linear models. They arise in multivariate nonparametric regression, nonlinear time series modeling and forecasting, longitudinal data analysis, and other settings. This article proposes penalized spline estimation for varying-coefficient models. Assuming a fixed but potentially large number of knots, the penalized spline estimators are shown to be strongly consistent and asymptotically normal. A systematic optimization algorithm for the selection of multiple smoothing parameters is developed. One advantage of penalized spline estimation is that it can accommodate varying degrees of smoothness among the coefficient functions because multiple smoothing parameters are used. Some simulation studies are presented to illustrate the proposed methods.
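The core construction can be sketched directly: expand each coefficient function in a spline basis in the index variable, multiply that basis column-wise by the corresponding covariate, and solve one penalized least-squares problem with a separate smoothing parameter per coefficient. The truncated-power basis and the fixed smoothing parameters below are assumptions of this illustration, not the article's selection algorithm.

```python
import numpy as np

def tp_basis(u, knots, degree=2):
    cols = [u**d for d in range(degree + 1)]
    cols += [np.clip(u - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(4)
n = 500
u = rng.uniform(0, 1, n)                      # index variable
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = np.sin(2 * np.pi * u) * x1 + (1 + u**2) * x2 + rng.normal(scale=0.3, size=n)

knots = np.linspace(0, 1, 12)[1:-1]
Bu = tp_basis(u, knots)                       # spline basis evaluated at the index
D1 = Bu * x1[:, None]                         # block for beta_1(u) * x1
D2 = Bu * x2[:, None]                         # block for beta_2(u) * x2
D = np.hstack([D1, D2])

# One smoothing parameter per coefficient function, penalizing only the
# truncated-power columns of each block (fixed values, for illustration only).
p = Bu.shape[1]
pen = np.zeros(2 * p)
pen[3:p] = 10.0                               # lambda_1 for beta_1(u)
pen[p + 3:] = 0.1                             # lambda_2 for beta_2(u)
coef = np.linalg.solve(D.T @ D + np.diag(pen), D.T @ y)

beta1_hat = Bu @ coef[:p]                     # fitted beta_1(u) at the data points
print("max error in beta_1:", np.max(np.abs(beta1_hat - np.sin(2 * np.pi * u))))
```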

7.

This article considers nonparametric regression problems and develops a model-averaging procedure for smoothing spline regression. Unlike most smoothing-parameter selection studies, which seek a single optimal smoothing parameter, our focus here is on prediction accuracy for the true conditional mean of Y given a predictor X. Our method consists of two steps. The first step is to construct a class of smoothing spline regression models based on nonparametric bootstrap samples, each with an appropriate smoothing parameter. The second step is to average bootstrap smoothing spline estimates of different smoothness to form a final improved estimate. To minimize the prediction error, we estimate the model weights using a delete-one-out cross-validation procedure. A simulation study, performed with a program written in R, compares the well-known cross-validation (CV) and generalized cross-validation (GCV) criteria with the proposed method. The new method is straightforward to implement and gives reliable performance in simulations.
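The two-step recipe can be sketched with standard tools. The version below replaces the article's bootstrap construction with a fixed grid of smoothing factors for scipy's UnivariateSpline (an assumption of this sketch) and then chooses nonnegative weights summing to one that minimize the leave-one-out prediction error of the averaged fit.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 120
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(3 * np.pi * x) + rng.normal(scale=0.3, size=n)

s_grid = [0.5, 1.0, 2.0, 5.0, 10.0]           # candidate smoothing factors
K = len(s_grid)

# Leave-one-out predictions for every candidate smoothness (brute force).
loo = np.empty((n, K))
for k, s in enumerate(s_grid):
    for i in range(n):
        keep = np.arange(n) != i
        spl = UnivariateSpline(x[keep], y[keep], s=s)
        loo[i, k] = spl(x[i])

# Weights minimizing the leave-one-out error of the weighted average.
def cv_err(w):
    return np.mean((y - loo @ w) ** 2)

res = minimize(cv_err, x0=np.full(K, 1.0 / K), method="SLSQP",
               bounds=[(0.0, 1.0)] * K,
               constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])
w = res.x

full_fits = np.column_stack([UnivariateSpline(x, y, s=s)(x) for s in s_grid])
averaged = full_fits @ w                      # the model-averaged estimate
print("weights:", np.round(w, 3))
```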

8.
Selecting an appropriate structure for a linear mixed model is an important problem in a number of applications, such as the modelling of longitudinal or clustered data. In this paper, we propose a variable selection procedure for simultaneously selecting and estimating the fixed and random effects. More specifically, a profile log-likelihood function, along with an adaptive penalty, is utilized for sparse selection, and the Newton-Raphson algorithm is used to carry out the parameter estimation. By jointly selecting the fixed and random effects, the proposed approach increases selection accuracy compared with two-stage procedures, and the use of the profile log-likelihood can improve the computational efficiency of one-stage procedures. We prove that the proposed procedure enjoys model selection consistency. A simulation study and a real data application are conducted to demonstrate the effectiveness of the proposed method.
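To convey the flavour of adaptive-penalty selection, the sketch below applies an adaptive lasso to the fixed effects only, ignoring the random-effect part and the profile log-likelihood of the paper; the column-rescaling trick, the scikit-learn calls, and the tuning value are assumptions of this simplified illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(6)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, 0.0, 1.5, 0.0, 0.0, 0.0, -1.0])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# Step 1: initial (unpenalized) estimate gives the adaptive weights.
beta_init = LinearRegression().fit(X, y).coef_
w = 1.0 / (np.abs(beta_init) + 1e-8)

# Step 2: adaptive lasso = ordinary lasso on columns rescaled by 1/w,
# then transform the coefficients back to the original scale.
X_scaled = X / w
fit = Lasso(alpha=0.1).fit(X_scaled, y)
beta_adaptive = fit.coef_ / w

print("selected fixed effects:", np.nonzero(np.abs(beta_adaptive) > 1e-8)[0])
print("estimates:", np.round(beta_adaptive, 3))
```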

9.
Tmax and Cmax are important pharmacokinetic parameters in drug development. A nonparametric procedure is often needed to estimate them when model independence is required. This paper proposes a simulation-based optimal design procedure for finding optimal sampling times for nonparametric estimates of Tmax and Cmax for each subject, assuming that the drug concentration follows a non-linear mixed model. The main difficulty in using standard optimal design procedures is that the properties of the nonparametric estimates are very complicated. The proposed procedure uses a sample-reuse simulation to calculate the design criterion, which is a multidimensional integral, so that effective optimization procedures such as Newton-type methods can be used directly to find optimal designs. The procedure is used to construct optimal designs for an open one-compartment model. An approximation based on a Taylor expansion is also derived and yields results consistent with those based on the sample-reuse simulation.
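The simulation-based idea can be sketched for an open one-compartment model with first-order absorption. The parameter values, candidate sampling schedules, and the mean-squared-error criterion for the nonparametric Tmax estimate below are all assumptions of this illustration, not the designs or criterion of the paper.

```python
import numpy as np

def conc(t, ka, ke, V, dose=100.0):
    """Open one-compartment model with first-order absorption."""
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

rng = np.random.default_rng(7)

def criterion(times, n_subjects=2000):
    """Simulation-based criterion: mean squared error of the nonparametric
    Tmax estimate (the sampling time with the largest observed concentration)
    over simulated subjects with between-subject variability."""
    times = np.asarray(times, dtype=float)
    se = 0.0
    for _ in range(n_subjects):
        ka = np.exp(rng.normal(np.log(1.5), 0.2))      # lognormal variability
        ke = np.exp(rng.normal(np.log(0.2), 0.2))
        V = np.exp(rng.normal(np.log(20.0), 0.2))
        c_obs = conc(times, ka, ke, V) * np.exp(rng.normal(0, 0.1, len(times)))
        tmax_hat = times[np.argmax(c_obs)]              # nonparametric Tmax
        tmax_true = np.log(ka / ke) / (ka - ke)         # analytic Tmax
        se += (tmax_hat - tmax_true) ** 2
    return se / n_subjects

sparse = [0.5, 2.0, 6.0, 12.0, 24.0]
dense_early = [0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 8.0, 24.0]
for design in (sparse, dense_early):
    print(design, "MSE(Tmax):", round(criterion(design), 4))
```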

10.
In this paper, we consider improved estimating equations for semiparametric partial linear models (PLM) for longitudinal data, or clustered data in general. We approximate the non-parametric function in the PLM by a regression spline, and utilize quadratic inference functions (QIF) in the estimating equations to achieve a more efficient estimation of the parametric part in the model, even when the correlation structure is misspecified. Moreover, we construct a test which is an analogue to the likelihood ratio inference function for inferring the parametric component in the model. The proposed methods perform well in simulation studies and real data analysis conducted in this paper.

11.
In many chemical data sets, the amount of radiation absorbed (absorbance) is related to the concentration of the element in the sample by Lambert–Beer's law. However, this relation changes abruptly when the concentration reaches an unknown threshold level, the so-called change point. In analytical chemistry, many methods describe the relationship between absorbance and concentration, but none of them provide inferential procedures to detect change points. In this paper, we propose partially linear models with a change point separating the parametric and nonparametric components. The Schwarz information criterion is used to locate the change point. A back-fitting algorithm is presented to obtain parameter estimates, and the penalized Fisher information matrix is used to calculate the standard errors of the parameter estimates. To examine the proposed method, we present a simulation study. Finally, we apply the method to data sets from chemistry. The partially linear models with a change point developed in this paper are useful supplements to other methods of absorbance–concentration analysis in chemical studies and in many other practical applications.
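The change-point location step can be illustrated with a grid search over candidate thresholds scored by the Schwarz (BIC) criterion. The sketch below deliberately replaces the partially linear specification and back-fitting with a fully parametric two-segment line, so the model and data are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 250
conc = np.sort(rng.uniform(0, 10, n))                  # concentration
true_cp = 6.0
absorb = 0.8 * conc + rng.normal(scale=0.3, size=n)
absorb[conc > true_cp] = (0.8 * true_cp
                          + 0.1 * (conc[conc > true_cp] - true_cp)
                          + rng.normal(scale=0.3, size=(conc > true_cp).sum()))

def bic_at(cp):
    """Schwarz criterion of a continuous two-segment linear fit with break cp."""
    X = np.column_stack([np.ones(n), conc, np.clip(conc - cp, 0.0, None)])
    beta, *_ = np.linalg.lstsq(X, absorb, rcond=None)
    rss = np.sum((absorb - X @ beta) ** 2)
    k = X.shape[1] + 1                                  # + 1 for the change point
    return n * np.log(rss / n) + k * np.log(n)

candidates = np.linspace(1.0, 9.0, 161)
scores = np.array([bic_at(cp) for cp in candidates])
cp_hat = candidates[np.argmin(scores)]
print("estimated change point:", round(cp_hat, 3), "(true:", true_cp, ")")
```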

12.
Existing research on mixtures of regression models is limited to directly observed predictors, and the estimation of mixtures of regressions from measurement-error data poses challenges for statisticians. For linear regression models with measurement-error data, the naive ordinary least squares method, which directly substitutes the observed surrogates for the unobserved error-prone variables, yields an inconsistent estimate of the regression coefficients. The same inconsistency also affects the naive mixture-of-regressions estimate, which is based on the traditional maximum likelihood estimator and simply ignores the measurement error. To resolve this inconsistency, we propose to use the deconvolution method to estimate the mixture likelihood of the observed surrogates; our proposed estimate is then found by maximizing the estimated mixture likelihood. In addition, a generalized EM algorithm is developed to find the estimate. Simulation results demonstrate that the proposed estimation procedures work well and perform much better than the naive estimates.
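For reference, the naive estimate criticised above, namely the ordinary maximum-likelihood fit of a two-component mixture of linear regressions that simply plugs in the observed surrogate, can be sketched with a standard EM algorithm. All tuning choices below are assumptions of this illustration, and the deconvolution-corrected estimator of the article is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
n = 600
x_true = rng.normal(size=n)
w = x_true + rng.normal(scale=0.4, size=n)              # surrogate with error
z = rng.random(n) < 0.5                                 # latent component labels
y = np.where(z, 1.0 + 2.0 * x_true, -1.0 - 1.5 * x_true) + rng.normal(scale=0.5, size=n)

# Naive EM for a 2-component mixture of regressions of y on the surrogate w.
X = np.column_stack([np.ones(n), w])
pi_ = np.array([0.5, 0.5])
beta = np.array([[0.0, 1.0], [0.0, -1.0]])
sigma = np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibilities for each component.
    dens = np.column_stack([pi_[k] * norm.pdf(y, X @ beta[k], sigma[k])
                            for k in range(2)])
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares and weighted variance per component.
    for k in range(2):
        Wk = resp[:, k]
        XtW = X.T * Wk
        beta[k] = np.linalg.solve(XtW @ X, XtW @ y)
        sigma[k] = np.sqrt(np.sum(Wk * (y - X @ beta[k]) ** 2) / Wk.sum())
    pi_ = resp.mean(axis=0)

# Note: the fitted slopes are attenuated toward zero because w, not x_true, was used.
print("mixing proportions:", np.round(pi_, 3))
print("component coefficients (intercept, slope):\n", np.round(beta, 3))
```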

13.
When constructing models to summarize clinical data for use in simulations, it is good practice to evaluate the models for their capacity to reproduce the data. This can be done by means of Visual Predictive Checks (VPC), which consist of several reproductions of the original study by simulation from the model under evaluation, calculating estimates of interest for each simulated study and comparing the distribution of those estimates with the estimate from the original study. This is a generic procedure that is, in general, straightforward to apply. Here we consider its application to time-to-event data, and in particular the special case in which a time-varying covariate is not known or cannot be approximated after the event time. In this case, simulations cannot be conducted beyond the end of the follow-up time (event or censoring time) in the original study, so the simulations must be censored at the end of the follow-up time. Since this censoring is not random, the standard Kaplan-Meier (KM) estimates from the simulated studies and the resulting VPC will be biased. We propose to use the inverse probability of censoring weighting (IPoC) method to correct the KM estimator for the simulated studies and obtain unbiased VPCs. For the analysis of the Cantos study, the IPoC weighting described here proved valuable and enabled the generation of VPCs to qualify PKPD models for simulations. Here, we use a generated data set, which allows illustration of the different situations and evaluation against the known truth.
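The inverse-probability-of-censoring idea can be sketched generically with a hand-rolled Kaplan-Meier estimator: estimate the censoring survival function by the "reverse" KM (treating censorings as events) and reweight observed events by its inverse. The code below is a textbook IPCW estimate of the event-time distribution on simulated data, not the VPC-specific correction described in the abstract.

```python
import numpy as np

def km(times, events):
    """Kaplan-Meier survival estimate; returns (event times, S(t) after each)."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    uniq = np.unique(t[d == 1])
    surv, s = [], 1.0
    for u in uniq:
        at_risk = np.sum(t >= u)
        deaths = np.sum((t == u) & (d == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def step_eval(grid_t, grid_s, x):
    """Evaluate a right-continuous step function (value 1 before the first jump)."""
    idx = np.searchsorted(grid_t, x, side="right") - 1
    return np.where(idx < 0, 1.0, np.asarray(grid_s)[np.clip(idx, 0, None)])

rng = np.random.default_rng(10)
n = 500
event_time = rng.exponential(5.0, n)
cens_time = rng.exponential(8.0, n)
obs = np.minimum(event_time, cens_time)
delta = (event_time <= cens_time).astype(int)

# Censoring survival G(t) from the "reverse" KM (censorings treated as events).
gt, gs = km(obs, 1 - delta)
G_at_obs = step_eval(gt, gs, obs - 1e-10)     # G evaluated just before each T_i

def F_ipcw(t):
    """IPCW estimate of P(event time <= t)."""
    return np.mean(delta * (obs <= t) / G_at_obs)

for t in (2.0, 5.0, 10.0):
    print(f"t={t}: IPCW F(t) = {F_ipcw(t):.3f}, true = {1 - np.exp(-t / 5.0):.3f}")
```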

14.
We develop a variance reduction method for smoothing splines. For a given point of estimation, we define a variance-reduced spline estimate as a linear combination of classical spline estimates at three nearby points. We first develop a variance reduction method for spline estimators in univariate regression models. We then develop an analogous variance reduction method for spline estimators in clustered/longitudinal models. Simulation studies are performed which demonstrate the efficacy of our variance reduction methods in finite sample settings. Finally, a real data analysis with the motorcycle data set is performed. Here we consider variance estimation and generate 95% pointwise confidence intervals for the unknown regression function.

15.
Spatially-adaptive Penalties for Spline Fitting
The paper studies spline fitting with a roughness penalty that adapts to spatial heterogeneity in the regression function. The estimates are pth-degree piecewise polynomials with p − 1 continuous derivatives. A large and fixed number of knots is used and smoothing is achieved by putting a quadratic penalty on the jumps of the pth derivative at the knots. To be spatially adaptive, the logarithm of the penalty is itself a linear spline but with relatively few knots and with values at the knots chosen to minimize the generalized cross validation (GCV) criterion. This locally-adaptive spline estimator is compared with other spline estimators in the literature such as cubic smoothing splines and knot-selection techniques for least squares regression. Our estimator can be interpreted as an empirical Bayes estimate for a prior allowing spatial heterogeneity. In cases of spatially heterogeneous regression functions, empirical Bayes confidence intervals using this prior achieve better pointwise coverage probabilities than confidence intervals based on a global-penalty parameter. The method is developed first for univariate models and then extended to additive models.

16.
The purpose of this article is to discuss the application of nonlinear models to price decisions in the framework of rating-based product preference models. As revealed by a comparative simulation study, when a nonlinear model is the true model, the traditional linear model fails to properly describe the true pattern. It appears to be unsatisfactory in comparison with nonlinear models, such as logistic and natural spline, which offer some advantages, the most important being the ability to take into account more than just linear and/or monotonic effects. Consequently, when we model the product preference with a nonlinear model, we are potentially able to detect its 'best' price level, i.e., the price at which consumer preference towards a given attribute is at its maximum. From an application point of view, this approach is very flexible in price decisions and may produce original managerial suggestions which might not be revealed by traditional methods.

17.
We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators, and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement, and reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case where error-prone replicates are available on the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial where a genetic marker, c-myc expression level, is subject to measurement error.

18.
The small-sample performance of Zeger and Liang's extended generalized linear models for the analysis of longitudinal data (Biometrics, 42, 121–130, 1986) is investigated for correlated gamma data. Results show that the confidence intervals do not provide the desired coverage of the true parameter because the point estimates are considerably biased. Improved estimates are proposed using the jackknife procedure. Simulations performed to evaluate the proposed estimates indicate properties superior to those of the previous estimates.
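The jackknife correction can be sketched generically for clustered data: drop one cluster at a time, recompute the estimate, and form the usual bias-corrected combination and standard error. The simple exponentiated-slope estimator and the simulated clustered data below are stand-ins assumed for this illustration, not the GEE estimators of the cited article.

```python
import numpy as np

rng = np.random.default_rng(11)
n_clusters, m = 30, 5
# Correlated positive responses within clusters (shared random effect).
u = rng.normal(0, 0.3, n_clusters)
y = np.exp(1.0 + u[:, None] + rng.normal(0, 0.5, (n_clusters, m)))
x = rng.normal(size=(n_clusters, m))

def estimator(y, x):
    """A deliberately simple (and biased in small samples) estimator:
    slope of log(y) on x by pooled least squares, then exponentiated."""
    X = np.column_stack([np.ones(x.size), x.ravel()])
    beta, *_ = np.linalg.lstsq(X, np.log(y).ravel(), rcond=None)
    return np.exp(beta[1])

theta_hat = estimator(y, x)

# Leave-one-cluster-out jackknife (clusters, not observations, are the unit).
loo = np.array([estimator(np.delete(y, i, axis=0), np.delete(x, i, axis=0))
                for i in range(n_clusters)])
theta_jack = n_clusters * theta_hat - (n_clusters - 1) * loo.mean()
se_jack = np.sqrt((n_clusters - 1) / n_clusters * np.sum((loo - loo.mean()) ** 2))

print("naive estimate:", round(theta_hat, 4))
print("jackknife bias-corrected estimate:", round(theta_jack, 4))
print("jackknife SE:", round(se_jack, 4))
```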

19.
Heteroscedasticity generally exists when a linear regression model is applied to real-world problems. Therefore, accurately estimating the variance function of the error term in a heteroscedastic linear regression model is of great importance for obtaining efficient estimates of the regression parameters and making valid statistical inferences. A method for estimating the variance function of heteroscedastic linear regression models is proposed in this article based on the variance-reduced local linear smoothing technique. Some simulations and comparisons with another method are conducted to assess the performance of the proposed method. The results demonstrate that the proposed method can accurately estimate the variance function and therefore produce more efficient estimates of the regression parameters.
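A basic residual-based approach conveys the goal: fit the mean, smooth the squared residuals against the covariate to estimate the variance function, and refit by weighted least squares. The plain local linear smoother below is not the variance-reduced technique of the article, and the bandwidth and data are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(12)
n = 400
x = np.sort(rng.uniform(0, 1, n))
sigma = 0.2 + 0.8 * x                               # heteroscedastic errors
y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)

# Step 1: ordinary least squares for the mean; squared residuals as responses.
X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = (y - X @ beta_ols) ** 2

def local_linear(x0, h=0.1):
    """Local linear estimate of E[r^2 | x = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    Z = np.column_stack([np.ones(n), x - x0])
    WZ = Z * w[:, None]
    coef = np.linalg.solve(Z.T @ WZ, WZ.T @ r2)
    return coef[0]                                   # intercept = fit at x0

var_hat = np.array([local_linear(xi) for xi in x])

# Step 2: weighted least squares using the estimated variance function.
wts = 1.0 / np.clip(var_hat, 1e-6, None)
XtW = X.T * wts
beta_wls = np.linalg.solve(XtW @ X, XtW @ y)
print("OLS coefficients:", np.round(beta_ols, 3))
print("WLS coefficients:", np.round(beta_wls, 3))
```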

20.
We consider a process that is observed as a mixture of two random distributions, where the mixing probability is an unknown function of time. The setup is built upon a wavelet-based mixture regression. Two linear wavelet estimators are proposed, and three regularizing procedures are considered for each of the two wavelet methods. We also discuss regularity conditions under which consistency of the wavelet methods is attained and derive rates of convergence for the proposed estimators. A Monte Carlo simulation study is conducted to illustrate the performance of the estimators, using various scenarios for the mixing probability function together with a range of sample sizes and resolution levels. We apply the proposed methods to a data set consisting of array Comparative Genomic Hybridization data from glioblastoma cancer studies.
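A rough sense of wavelet estimation of a time-varying mixing probability can be given with PyWavelets: form a crude pointwise proxy for p(t) from the observations and (assumed known, well-separated) components, then shrink its wavelet coefficients. Every choice below (the known components, the sign-based proxy, the db4 wavelet, and the ad hoc soft threshold) is an assumption of this sketch, not one of the linear estimators or regularisations studied in the article.

```python
import numpy as np
import pywt

rng = np.random.default_rng(13)
n = 512                                     # power of two keeps lengths tidy
t = np.arange(n) / n
p = 0.5 + 0.4 * np.sin(2 * np.pi * t)       # true time-varying mixing probability

# Mixture of two (assumed known) components: N(2, 1) with prob p(t), else N(-2, 1).
from_comp1 = rng.random(n) < p
y = np.where(from_comp1, rng.normal(2.0, 1.0, n), rng.normal(-2.0, 1.0, n))

# Crude pointwise proxy for p(t): with well-separated components, the sign of y
# almost recovers the component label, so E[proxy_t] is approximately p(t).
proxy = (y > 0).astype(float)

# Wavelet shrinkage of the noisy proxy.
coeffs = pywt.wavedec(proxy, "db4", level=5)
threshold = 0.2                             # ad hoc value, for illustration only
coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
p_hat = np.clip(pywt.waverec(coeffs, "db4")[:n], 0.0, 1.0)

print("mean absolute error of p_hat:", round(np.mean(np.abs(p_hat - p)), 4))
```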

