Similar Literature
20 similar documents found.
1.
A test statistic proposed by Li (1999) for testing the adequacy of heteroscedastic nonlinear regression models using nonparametric kernel smoothers is applied to testing for linearity in generalized linear models. Simulation results for models with centered gamma and inverse Gaussian errors are presented to illustrate the performance of the resulting test compared with log-likelihood ratio tests for specific parametric alternatives. The test is applied to a data set on coronary heart disease status (Hosmer and Lemeshow, 1990).
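A minimal sketch of the general idea behind such a kernel-based lack-of-fit check (not Li's exact statistic): fit a linear-logit GLM, kernel-smooth its Pearson residuals against the covariate, and calibrate the size of the smoothed residuals with a parametric bootstrap under the fitted null model. The simulated data, the Gaussian kernel, the bandwidth h, and the bootstrap calibration are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(-2, 2, n)
p_true = 1 / (1 + np.exp(-(x + 0.8 * x ** 2)))        # quadratic departure from a linear logit
y = rng.binomial(1, p_true)

def smoothed_residual_stat(x, y, h=0.3):
    """Fit a linear-logit GLM, kernel-smooth its Pearson residuals, return a lack-of-fit summary."""
    glm = LogisticRegression(C=1e6).fit(x.reshape(-1, 1), y)   # essentially unpenalized fit
    p_hat = glm.predict_proba(x.reshape(-1, 1))[:, 1]
    resid = (y - p_hat) / np.sqrt(p_hat * (1 - p_hat))
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)    # Gaussian kernel weights
    smoothed = (K @ resid) / K.sum(axis=1)                     # Nadaraya-Watson smooth of residuals
    return np.sum(smoothed ** 2), p_hat

obs_stat, p_null = smoothed_residual_stat(x, y)

# Parametric bootstrap under the fitted linear-logit null model
boot = []
for _ in range(200):
    y_star = rng.binomial(1, p_null)
    stat_star, _ = smoothed_residual_stat(x, y_star)
    boot.append(stat_star)

p_value = (np.sum(np.array(boot) >= obs_stat) + 1) / (len(boot) + 1)
print("bootstrap p-value for linearity:", round(float(p_value), 3))
```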

2.
3.
Simulation-based inference for partially observed stochastic dynamic models is currently receiving much attention due to the fact that direct computation of the likelihood is not possible in many practical situations. Iterated filtering methodologies enable maximization of the likelihood function using simulation-based sequential Monte Carlo filters. Doucet et al. (2013) developed an approximation for the first and second derivatives of the log likelihood via simulation-based sequential Monte Carlo smoothing and proved that the approximation has some attractive theoretical properties. We investigated an iterated smoothing algorithm carrying out likelihood maximization using these derivative approximations. Further, we developed a new iterated smoothing algorithm, using a modification of these derivative estimates, for which we establish both theoretical results and effective practical performance. On benchmark computational challenges, this method beat the first-order iterated filtering algorithm. The method’s performance was comparable to a recently developed iterated filtering algorithm based on an iterated Bayes map. Our iterated smoothing algorithm and its theoretical justification provide new directions for future developments in simulation-based inference for latent variable models such as partially observed Markov process models.

4.
Some asymptotic results on generalized penalized spline smoothing
The paper discusses asymptotic properties of penalized spline smoothing if the spline basis increases with the sample size. The proof is provided in a generalized smoothing model allowing for non-normal responses. The results are extended in two ways. First, assuming the spline coefficients to be a priori normally distributed links the smoothing framework to generalized linear mixed models. We consider the asymptotic rates such that the Laplace approximation is justified and the resulting fits in the mixed model correspond to penalized spline estimates. Second, we make use of a fully Bayesian viewpoint by imposing an a priori distribution on all parameters and coefficients. We argue that with the postulated rates at which the spline basis dimension increases with the sample size, the posterior distribution of the spline coefficients is approximately normal. The validity of this result is investigated in finite samples by comparing Markov chain Monte Carlo results with their asymptotic approximation in a simulation study.
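The generalized smoothing model in this abstract can be illustrated with a small penalized spline fit for a binary response via penalized iteratively reweighted least squares. The sketch below uses a truncated-line basis, a fixed number of knots, and a fixed smoothing parameter, all of which are illustrative choices rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = np.sort(rng.uniform(0, 1, n))
eta_true = np.sin(2 * np.pi * x)
y = rng.binomial(1, 1 / (1 + np.exp(-eta_true)))

# Truncated-line (linear spline) basis: intercept, x, (x - kappa)_+
knots = np.linspace(0, 1, 22)[1:-1]                # 20 interior knots (illustrative)
B = np.column_stack([np.ones(n), x, np.clip(x[:, None] - knots[None, :], 0, None)])
pen = np.diag([0.0, 0.0] + [1.0] * len(knots))     # penalize only the knot coefficients
lam = 1.0                                          # fixed smoothing parameter (illustrative)

theta = np.zeros(B.shape[1])
for _ in range(25):                                # penalized IRLS for the binomial likelihood
    eta = B @ theta
    mu = 1 / (1 + np.exp(-eta))
    w = mu * (1 - mu)
    z = eta + (y - mu) / np.maximum(w, 1e-10)      # working response
    BTW = B.T * w
    theta_new = np.linalg.solve(BTW @ B + lam * pen, BTW @ z)
    if np.max(np.abs(theta_new - theta)) < 1e-8:
        theta = theta_new
        break
    theta = theta_new

fit = 1 / (1 + np.exp(-(B @ theta)))               # fitted probabilities on the logit scale
print("max fitted probability:", round(float(fit.max()), 3))
```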

5.
Generalized additive mixed models are proposed for overdispersed and correlated data, which arise frequently in studies involving clustered, hierarchical and spatial designs. This class of models allows flexible functional dependence of an outcome variable on covariates by using nonparametric regression, while accounting for correlation between observations by using random effects. We estimate nonparametric functions by using smoothing splines and jointly estimate smoothing parameters and variance components by using marginal quasi-likelihood. Because numerical integration is often required to maximize the objective function, double penalized quasi-likelihood is proposed for approximate inference. Frequentist and Bayesian inferences are compared. A key feature of the proposed method is that it allows us to make systematic inference on all model components within a unified parametric mixed model framework and can be easily implemented by fitting a working generalized linear mixed model using existing statistical software. A bias correction procedure is also proposed to improve the performance of double penalized quasi-likelihood for sparse data. We illustrate the method with an application to infectious disease data and evaluate its performance through simulation.

6.
We present a Bayesian semiparametric approach to exponential family regression that extends the class of generalized linear regression models. Further, flexibility in the process of modelling is achieved by explicitly accounting for the discrepancy between the ‘true’ response-covariate regression surface and an assumed parametric functional relationship. An approximate full Bayesian analysis is provided, based upon the Gibbs sampling algorithm.

7.
In this paper, we consider the four-parameter bivariate generalized exponential distribution proposed by Kundu and Gupta [Bivariate generalized exponential distribution, J. Multivariate Anal. 100 (2009), pp. 581–593] and propose an expectation–maximization algorithm to find the maximum-likelihood estimators of the four parameters under random left censoring. A numerical experiment is carried out to discuss the properties of the estimators obtained iteratively.

8.
Robust automatic selection techniques for the smoothing parameter of a smoothing spline are introduced. They are based on a robust predictive error criterion and can be viewed as robust versions of Cp and cross-validation. They lead to smoothing splines which are stable and reliable in terms of mean squared error over a large spectrum of model distributions.
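A sketch in the same spirit, though not the paper's Cp-type criterion for smoothing splines: choose the bandwidth of a Nadaraya-Watson smoother by leave-one-out cross-validation with a Huber loss in place of squared error, so that outliers do not dominate the selection. The data, the Huber cutoff c, and the bandwidth grid are illustrative assumptions.

```python
import numpy as np

def nw_loo_fit(x, y, h):
    """Leave-one-out Nadaraya-Watson predictions with a Gaussian kernel."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(K, 0.0)                     # drop each point's own observation
    return (K @ y) / K.sum(axis=1)

def huber(r, c=1.345):
    """Huber loss: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r ** 2, c * a - 0.5 * c ** 2)

rng = np.random.default_rng(1)
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)
out = rng.choice(n, 10, replace=False)
y[out] += rng.choice([-5, 5], 10)                # inject heavy outliers

bandwidths = np.linspace(0.02, 0.3, 30)
scores = [np.mean(huber(y - nw_loo_fit(x, y, h))) for h in bandwidths]
h_robust = bandwidths[int(np.argmin(scores))]
print("robust CV bandwidth:", round(float(h_robust), 3))
```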

9.
Sasabuchi et al. (Biometrika 70(2):465–472, 1983) introduce a multivariate version of the well-known univariate isotonic regression, which plays a key role in the field of statistical inference under order restrictions. Their proposed algorithm for computing the multivariate isotonic regression, however, is guaranteed to converge only under special conditions (Sasabuchi et al., J Stat Comput Simul 73(9):619–641, 2003). In this paper, a more general framework for multivariate isotonic regression is given and an algorithm based on Dykstra’s method is used to compute the multivariate isotonic regression. Two numerical examples are given to illustrate the algorithm and to compare the result with the one published by Fernando and Kulatunga (Comput Stat Data Anal 52:702–712, 2007).
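A generic sketch of Dykstra's cyclic projection scheme, here projecting onto the intersection of two convex cones (the non-decreasing cone, via scikit-learn's isotonic regression, and the non-negative orthant). The paper's multivariate isotonic regression requires problem-specific projections that this toy example does not implement.

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

def proj_monotone(v):
    """Euclidean projection onto the non-decreasing cone (univariate isotonic regression)."""
    return isotonic_regression(v)

def proj_nonneg(v):
    """Euclidean projection onto the non-negative orthant."""
    return np.clip(v, 0.0, None)

def dykstra(y, projections, n_iter=200):
    """Dykstra's algorithm: project y onto the intersection of closed convex sets."""
    x = y.copy()
    increments = [np.zeros_like(y) for _ in projections]
    for _ in range(n_iter):
        for k, proj in enumerate(projections):
            x_prev = x + increments[k]
            x = proj(x_prev)
            increments[k] = x_prev - x     # correction term, the key difference from plain POCS
    return x

rng = np.random.default_rng(2)
y = rng.standard_normal(30).cumsum() * 0.3 + rng.standard_normal(30)
fit = dykstra(y, [proj_monotone, proj_nonneg])
print(np.round(fit[:10], 3))
```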

10.
In nonparametric regression the smoothing parameter can be selected by minimizing a Mean Squared Error (MSE) based criterion. For spline smoothing one can also rewrite the smooth estimation as a Linear Mixed Model where the smoothing parameter appears as the a priori variance of spline basis coefficients. This allows one to employ Maximum Likelihood (ML) theory to estimate the smoothing parameter as a variance component. In this paper the relation between the two approaches is illuminated for penalized spline smoothing (P-splines) as suggested by Eilers and Marx (Statist. Sci. 11(2), 1996, p. 89). Theoretical and empirical arguments are given showing that the ML approach is biased towards undersmoothing, i.e. it chooses too complex a model compared with the MSE criterion. The result is in line with classical spline smoothing, even though the asymptotic arguments are different. This is because in P-spline smoothing a finite dimensional basis is employed, while in classical spline smoothing the basis grows with the sample size.
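A sketch of the mixed-model view described above, under simplifying assumptions: Gaussian responses, a truncated-line basis whose knot coefficients act as random effects, and the smoothing parameter (the ratio of error variance to random-effect variance) estimated by maximizing the profiled marginal ML log-likelihood. The test function, knot count, and optimization bounds are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
n = 300
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

knots = np.linspace(0, 1, 22)[1:-1]
X = np.column_stack([np.ones(n), x])                       # fixed effects
Z = np.clip(x[:, None] - knots[None, :], 0, None)          # random-effect (knot) basis

def profile_negloglik(log_lam):
    """Negative profile marginal log-likelihood; lam = error variance / random-effect variance."""
    lam = np.exp(log_lam)
    V = np.eye(n) + (Z @ Z.T) / lam                        # marginal covariance / error variance
    Vinv = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y) # GLS estimate of the fixed effects
    r = y - X @ beta
    sig2 = (r @ Vinv @ r) / n                              # profiled error variance
    _, logdet = np.linalg.slogdet(V)
    return 0.5 * (n * np.log(sig2) + logdet)

res = minimize_scalar(profile_negloglik, bounds=(-5, 15), method="bounded")
lam_ml = np.exp(res.x)

# Penalized least-squares fit at the ML smoothing parameter (equals the mixed-model BLUP)
C = np.column_stack([X, Z])
D = np.diag([0.0, 0.0] + [1.0] * len(knots))
theta = np.linalg.solve(C.T @ C + lam_ml * D, C.T @ y)
print("ML smoothing parameter:", round(float(lam_ml), 2))
```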

11.
12.
Spline smoothing is a popular technique for curve fitting, in which selection of the smoothing parameter is crucial. Many methods such as Mallows’ Cp, generalized maximum likelihood (GML), and the extended exponential (EE) criterion have been proposed to select this parameter. Although Cp is shown to be asymptotically optimal, it is usually outperformed by other selection criteria for small to moderate sample sizes due to its high variability. On the other hand, GML and EE are more stable than Cp, but they do not possess the same asymptotic optimality as Cp. Instead of selecting this smoothing parameter directly using Cp, we propose to select among a small class of selection criteria based on Stein's unbiased risk estimate (SURE). Due to the selection effect, the spline estimate obtained from a criterion in this class is nonlinear. Thus, the effective degrees of freedom in SURE contains an adjustment term in addition to the trace of the smoothing matrix, which cannot be ignored in small to moderate sample sizes. The resulting criterion, which we call adaptive Cp, is shown to have an analytic expression, and hence can be efficiently computed. Moreover, adaptive Cp is not only demonstrated to be superior and more stable than commonly used selection criteria in a simulation study, but also shown to possess the same asymptotic optimality as Cp.
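A sketch of the plain Cp/SURE criterion for a linear smoother (here a ridge-type penalized spline), using the trace of the smoother matrix as the effective degrees of freedom. The adaptive Cp adjustment term described in the abstract is not implemented, and the error variance is treated as known; both are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = np.sort(rng.uniform(0, 1, n))
sigma = 0.3
y = np.sin(2 * np.pi * x) + sigma * rng.standard_normal(n)

knots = np.linspace(0, 1, 22)[1:-1]
B = np.column_stack([np.ones(n), x, np.clip(x[:, None] - knots[None, :], 0, None)])
D = np.diag([0.0, 0.0] + [1.0] * len(knots))       # penalty on the knot coefficients only

def cp(lam, sigma2):
    """Mallows' Cp / SURE for the linear smoother S = B (B'B + lam D)^{-1} B'."""
    S = B @ np.linalg.solve(B.T @ B + lam * D, B.T)
    resid = y - S @ y
    return np.mean(resid ** 2) + 2 * sigma2 * np.trace(S) / n

lams = np.exp(np.linspace(-8, 8, 60))
scores = [cp(l, sigma ** 2) for l in lams]
lam_cp = lams[int(np.argmin(scores))]
print("Cp-selected smoothing parameter:", round(float(lam_cp), 4))
```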

13.
The Kaplan–Meier (KM) estimator is ubiquitously used for estimating survival functions, but it provides only a discrete approximation at the observation times and does not deliver a proper distribution if the largest observation is censored. Using KM as a starting point, we devise an empirical saddlepoint approximation-based method for producing a smooth survival function that is unencumbered by choice of tuning parameters. The procedure inverts the moment generating function (MGF) defined through a Riemann–Stieltjes integral with respect to an underlying mixed probability measure consisting of the discrete KM mass function weights and an absolutely continuous exponential right-tail completion. Uniform consistency, and weak and strong convergence results are established for the resulting MGF and its derivatives, thus validating their usage as inputs into the saddlepoint routines. Relevant asymptotic results are also derived for the density and distribution function estimates. The performance of the resulting survival approximations is examined in simulation studies, which demonstrate a favourable comparison with the log spline method (Kooperberg & Stone, 1992) in small sample settings. For smoothing survival functions we argue that the methodology has no immediate competitors in its class, and we illustrate its application on several real data sets. The Canadian Journal of Statistics 47: 238–261, 2019.
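A sketch of two of the ingredients named above: the Kaplan-Meier jump masses at the uncensored times plus an exponential right-tail completion for the leftover mass, and the MGF of the resulting mixed measure evaluated as a Riemann-Stieltjes sum plus an analytic tail term. The saddlepoint inversion step itself is not shown, and the simulated data and tail rate are illustrative assumptions.

```python
import numpy as np

def km_masses(t, event):
    """Kaplan-Meier jump masses at the uncensored times (assumes no ties)."""
    order = np.argsort(t)
    t, event = t[order], event[order]
    n = len(t)
    surv, times, masses = 1.0, [], []
    for i in range(n):
        at_risk = n - i
        if event[i]:
            times.append(t[i])
            masses.append(surv / at_risk)      # S(t-) * d_i / n_i
            surv *= 1.0 - 1.0 / at_risk
    return np.array(times), np.array(masses), surv   # surv = leftover right-tail mass

rng = np.random.default_rng(5)
n = 100
life = rng.exponential(1.0, n)
cens = rng.exponential(1.5, n)
t = np.minimum(life, cens)
event = (life <= cens).astype(int)

times, masses, tail = km_masses(t, event)

def mgf(s, rate=1.0):
    """MGF of the mixed measure: KM point masses plus an Exp(rate) tail beyond max(times); valid for s < rate."""
    t_max = times.max()
    discrete = np.sum(masses * np.exp(s * times))
    continuous = tail * np.exp(s * t_max) * rate / (rate - s)
    return discrete + continuous

print("MGF at 0 (should be 1):", round(float(mgf(0.0)), 6))
```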

14.
We consider the problem of the computation of smoothed additive functionals, which are some integrals with respect to the joint smoothing distribution. It is a key issue in inference for general state-space models as these quantities appear naturally for maximum likelihood parameter inference. The computation of smoothed additive functionals is very challenging as exact computations are not possible for non-linear non-Gaussian state-space models. It becomes even more difficult when the hidden state lies in a high dimensional space because traditional numerical methods suffer from the curse of dimensionality. We propose a new algorithm to efficiently calculate the smoothed additive functionals in an online manner for a specific family of high-dimensional state-space models in discrete time, which is named the Space–Time Forward Smoothing (STFS) algorithm. The cost of this algorithm is at least O(N²d²T), which is polynomial in d, where T and N denote the number of time steps and the number of particles respectively, and d is the dimension of the hidden state space. Its superior performance over other existing methods is illustrated by various simulation studies. Moreover, the STFS algorithm is successfully applied to perform maximum likelihood estimation for static model parameters both in an online and an offline manner.

15.
We identify a role for smooth curve provision in the finite population context. The performance of kernel density estimates in this scenario is explored, and they are tailored to the finite population situation especially by developing a method of data-based selection of the smoothing parameter appropriate to this problem. Simulated examples are given, including some from the particular context of permutation distributions which first motivated this investigation.
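A sketch of data-based bandwidth selection for a Gaussian-kernel density estimate via least-squares (unbiased) cross-validation. The data, the bandwidth grid, and the absence of any finite-population adjustment are simplifying assumptions relative to the paper.

```python
import numpy as np

def lscv(h, x):
    """Least-squares cross-validation score for a Gaussian-kernel KDE."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    # integral of fhat^2 uses the N(0, 2) convolution kernel
    term1 = np.exp(-0.25 * d ** 2).sum() / (n ** 2 * h * np.sqrt(4 * np.pi))
    # leave-one-out density at each observation
    K = np.exp(-0.5 * d ** 2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(K, 0.0)
    loo = K.sum(axis=1) / ((n - 1) * h)
    return term1 - 2 * loo.mean()

def kde(xs, x, h):
    """Gaussian-kernel density estimate evaluated at the points xs."""
    return np.exp(-0.5 * ((xs[:, None] - x[None, :]) / h) ** 2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-1, 0.5, 150), rng.normal(2, 1.0, 100)])

grid = np.linspace(0.05, 1.0, 40)
h_cv = grid[int(np.argmin([lscv(h, x) for h in grid]))]

xs = np.linspace(-3, 5, 200)
f_hat = kde(xs, x, h_cv)
print("LSCV bandwidth:", round(float(h_cv), 3), "| density mode near x =", round(float(xs[np.argmax(f_hat)]), 2))
```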

16.
Parametric and permutation testing for multivariate monotonic alternatives
We are first interested in testing the homogeneity of k mean vectors against two-sided restricted alternatives separately in multivariate normal distributions. This problem is a multivariate extension of Bartholomew (Biometrika 46:328–335, 1959b) and an extension of Sasabuchi et al. (Biometrika 70:465–472, 1983) and Kulatunga and Sasabuchi (Mem. Fac. Sci., Kyushu Univ. Ser. A: Mathematics 38:151–161, 1984) to two-sided ordered hypotheses. We examine the problem of testing under two separate cases: one in which the covariance matrices are known, and one in which they are unknown but common. For the general case in which the covariance matrices are known, the test statistic is obtained using the likelihood ratio method. When the known covariance matrices are common and diagonal, the null distribution of the test statistic is derived and its critical values are computed at different significance levels. A Monte Carlo study is also presented to estimate the power of the test. A test statistic is proposed for the case when the common covariance matrices are unknown. Since it is difficult to compute the exact p-value for this testing problem with the classical method when the covariance matrices are completely unknown, we first present a reformulation of the test statistic based on orthogonal projections onto closed convex cones and then determine upper bounds for its p-values. We also provide a general nonparametric solution based on the permutation approach and nonparametric combination of dependent tests.
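A sketch of the permutation side of such a procedure in a deliberately simplified univariate setting: homogeneity of k group means is tested against a monotone ordering using the weighted distance between the isotonic fit of the group means and the grand mean, with significance assessed by permuting group labels. The multivariate, two-sided restricted problem treated in the paper requires the cone projections described there; the simulated data are an illustrative assumption.

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

def iso_stat(y, g, k):
    """Chi-bar-square type statistic: weighted distance of the isotonic group-mean fit from the grand mean."""
    means = np.array([y[g == j].mean() for j in range(k)])
    w = np.array([(g == j).sum() for j in range(k)], dtype=float)
    iso = isotonic_regression(means, sample_weight=w)
    grand = np.average(means, weights=w)
    return np.sum(w * (iso - grand) ** 2)

rng = np.random.default_rng(7)
k, n_per = 4, 25
g = np.repeat(np.arange(k), n_per)
y = 0.25 * g + rng.standard_normal(k * n_per)     # mild increasing trend across groups

obs = iso_stat(y, g, k)
perm = np.array([iso_stat(y, rng.permutation(g), k) for _ in range(2000)])
print("permutation p-value:", round(float((perm >= obs).mean()), 4))
```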

17.
Bilinear models in which the expectation of a two-way array is the sum of products of parameters are widely used in spectroscopy. In this paper we present an algorithm called combined-vector successive overrelaxation (COV-SOR) for bilinear models, and compare it with methods like alternating least squares, singular value decomposition, and the Marquardt procedure. Comparisons are done for missing data also.
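A sketch contrasting two of the fitting routes mentioned for a rank-one bilinear model E[Y] = ab': alternating least squares and the direct SVD solution. The COV-SOR algorithm itself is not reproduced, and the simulated two-way array is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(8)
I, J = 30, 20
a_true = rng.standard_normal(I)
b_true = rng.standard_normal(J)
Y = np.outer(a_true, b_true) + 0.1 * rng.standard_normal((I, J))

# Alternating least squares for Y ~ a b'
a = rng.standard_normal(I)
for _ in range(100):
    b = Y.T @ a / (a @ a)          # least-squares update of b given a
    a = Y @ b / (b @ b)            # least-squares update of a given b
fit_als = np.outer(a, b)

# Direct SVD solution: best rank-1 approximation (Eckart-Young)
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
fit_svd = s[0] * np.outer(U[:, 0], Vt[0])

print("max |ALS fit - SVD fit|:", float(np.max(np.abs(fit_als - fit_svd))))
```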

18.
Detecting local spatial clusters for count data is an important task in spatial epidemiology. Two broad approaches—moving window and disease mapping methods—have been suggested in some of the literature to find clusters. However, the existing methods employ somewhat arbitrarily chosen tuning parameters, and the local clustering results are sensitive to the choices. In this paper, we propose a penalized likelihood method to overcome the limitations of existing local spatial clustering approaches for count data. We start with a Poisson regression model to accommodate any type of covariates, and formulate the clustering problem as a penalized likelihood estimation problem to find change points of intercepts in two-dimensional space. The cost of developing a new algorithm is minimized by modifying an existing least absolute shrinkage and selection operator algorithm. The computational details on the modifications are shown, and the proposed method is illustrated with Seoul tuberculosis data.

19.
In this paper, we consider two well-known parametric long-term survival models, namely, the Bernoulli cure rate model and the promotion time (or Poisson) cure rate model. Assuming the long-term survival probability to depend on a set of risk factors, the main contribution is in the development of the stochastic expectation maximization (SEM) algorithm to determine the maximum likelihood estimates of the model parameters. We carry out a detailed simulation study to demonstrate the performance of the proposed SEM algorithm. For this purpose, we assume the lifetimes due to each competing cause to follow a two-parameter generalized exponential distribution. We also compare the results obtained from the SEM algorithm with those obtained from the well-known expectation maximization (EM) algorithm. Furthermore, we investigate a simplified estimation procedure for both the SEM and EM algorithms that allows the objective function being maximized to be split into simpler functions of lower dimension with respect to the model parameters. Moreover, we present examples where the EM algorithm fails to converge but the SEM algorithm still works. For illustrative purposes, we analyze breast cancer survival data. Finally, we use a graphical method to assess the goodness-of-fit of the model with generalized exponential lifetimes.
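A sketch of the stochastic EM idea for a deliberately simplified Bernoulli cure rate model with a constant cure probability and exponential lifetimes (the paper uses generalized exponential lifetimes and covariate-dependent cure probabilities): each censored subject's latent susceptibility indicator is drawn from its conditional distribution (the stochastic step), after which the complete-data likelihood is maximized in closed form. The simulated data and burn-in length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 500
pi_true, lam_true = 0.3, 1.0                    # cure fraction and exponential rate
cured = rng.random(n) < pi_true
life = np.where(cured, np.inf, rng.exponential(1 / lam_true, n))
cens = rng.exponential(2.0, n)
t = np.minimum(life, cens)
delta = (life <= cens).astype(int)              # 1 = observed event (only susceptible subjects)

pi_hat, lam_hat = 0.5, 0.5                      # starting values
draws = []
for it in range(300):
    # Stochastic step: impute susceptibility for censored subjects
    p_susc = (1 - pi_hat) * np.exp(-lam_hat * t)
    p_susc = p_susc / (pi_hat + p_susc)
    z = np.where(delta == 1, 1, (rng.random(n) < p_susc).astype(int))
    # M-step: closed-form complete-data MLEs
    pi_hat = 1.0 - z.mean()
    lam_hat = delta.sum() / t[z == 1].sum()
    if it >= 100:                               # discard burn-in, average later draws
        draws.append((pi_hat, lam_hat))

pi_est, lam_est = np.mean(draws, axis=0)
print("SEM estimates (cure fraction, rate):", round(float(pi_est), 3), round(float(lam_est), 3))
```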

20.
The simplification of complex models which were originally envisaged to explain some data is considered as a discrete form of smoothing. In this sense, data-based model selection techniques lead to minimal and unavoidable initial smoothing. The same techniques may also be used for further smoothing if this seems necessary. For deterministic data, parametric models, which are usually used for stochastic data, also provide convenient notches in the process of smoothing. The usual discrepancies can be used to measure the degree of smoothing. The methods for tables of means and tables of frequencies are described in more detail and examples of applications are given.
