Similar articles
20 similar articles found.
1.
Recursive estimation and recursive residuals are introduced for generalised linear models (GLIM). Their definitions parallel those of normal theory regression models and relate to one of the outlier model definitions of GLIM residuals. An example illustrates their use.
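A minimal sketch of the idea behind recursive (one-step-ahead) residuals, computed here by brute-force refitting of a Poisson GLM in statsmodels rather than by the paper's updating formulae; the model and data are simulated purely for illustration.

```python
# Illustrative sketch: recursive residuals for a GLM by refitting on the first
# t observations and standardising the one-step-ahead prediction error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
X = sm.add_constant(x)
y = rng.poisson(np.exp(0.3 + 0.6 * x))

rec_resid = []
for t in range(10, n):                       # start once a stable fit is possible
    fit = sm.GLM(y[:t], X[:t], family=sm.families.Poisson()).fit()
    mu_next = fit.predict(X[t:t + 1])[0]     # one-step-ahead predicted mean
    # Pearson-type standardisation; for the Poisson the variance equals the mean
    rec_resid.append((y[t] - mu_next) / np.sqrt(mu_next))

print("first few recursive residuals:", np.round(rec_resid[:5], 3))
```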

2.
A method is described for fitting the Weibull distribution to failure-time data which may be left, right or interval censored. The method generalizes the auxiliary Poisson approach and, as such, means that it can be easily programmed in statistical packages with macro programming capabilities. Examples are given of fitting such models and an implementation in the GLIM package is used for illustration.
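The auxiliary-Poisson/GLIM implementation is not reproduced here, but the underlying likelihood is easy to sketch directly: each censored observation contributes F(hi) − F(lo) to the Weibull likelihood. A minimal SciPy version, with made-up intervals:

```python
# Sketch of Weibull maximum likelihood with left-, right- and interval-censored
# failure times via direct optimisation (not the paper's auxiliary-Poisson route).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Each observation is an interval (lo, hi) known to contain the failure time:
#   exact failure  -> lo == hi
#   right-censored -> hi == np.inf
#   left-censored  -> lo == 0
intervals = np.array([
    [2.0, 2.0], [5.3, 5.3], [4.0, np.inf], [0.0, 1.5], [3.0, 6.0], [7.1, np.inf],
])

def neg_log_lik(theta):
    log_shape, log_scale = theta                      # optimise on the log scale
    shape, scale = np.exp(log_shape), np.exp(log_scale)
    lo, hi = intervals[:, 0], intervals[:, 1]
    exact = lo == hi
    ll = np.sum(weibull_min.logpdf(lo[exact], shape, scale=scale))
    # censored observations contribute F(hi) - F(lo)
    cens = ~exact
    p = (weibull_min.cdf(hi[cens], shape, scale=scale)
         - weibull_min.cdf(lo[cens], shape, scale=scale))
    ll += np.sum(np.log(np.clip(p, 1e-300, None)))
    return -ll

fit = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
print("Weibull shape:", shape_hat, "scale:", scale_hat)
```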

3.
This paper reviews the analysis of prospective epidemiological studies using generalized linear models to describe disease incidence. It is shown that, apart from problems arising from the large size of most studies of this type, these models may be fitted by maximum likelihood (using GLIM, for example) assuming a Poisson likelihood. Alternative methods for dealing with large-scale data are discussed, and some simple procedures for dealing with common problems are outlined. The relationship of the approach to multiple logistic analyses is indicated.
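As a rough modern analogue of the GLIM fit, the Poisson-likelihood approach for grouped cohort data can be sketched in statsmodels: event counts are modelled with a log link and log person-years as an offset, so coefficients are log incidence-rate ratios. Variable names and data below are illustrative, not from the paper.

```python
# Poisson regression for incidence rates with a person-years offset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "events":   [3, 7, 12, 20, 5, 11, 19, 31],                 # observed case counts
    "pyears":   [1200, 1100, 900, 700, 1300, 1150, 950, 720],  # person-years at risk
    "age_band": ["40-49", "50-59", "60-69", "70-79"] * 2,
    "exposed":  [0, 0, 0, 0, 1, 1, 1, 1],
})

model = smf.glm(
    "events ~ C(age_band) + exposed",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["pyears"]),
).fit()
print(model.summary())
print("rate ratio for exposure:", np.exp(model.params["exposed"]))
```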

4.
The inverse Gaussian-Poisson (two-parameter Sichel) distribution is useful in fitting overdispersed count data. We consider linear models on the mean of a response variable, where the response is in the form of counts exhibiting extra-Poisson variation, and assume an IGP error distribution. We show how maximum likelihood estimation may be carried out using iterative Newton-Raphson IRLS fitting, where GLIM is used for the IRLS part of the maximization. Approximate likelihood ratio tests are given.

5.
Preference decisions will usually depend on the characteristics of both the judges and the objects being judged. In the analysis of paired comparison data concerning European universities and students' characteristics, it is demonstrated how to incorporate subject-specific information into Bradley–Terry-type models. Using this information it is shown that preferences for universities and therefore university rankings are dramatically different for different groups of students. A log-linear representation of a generalized Bradley–Terry model is specified which allows simultaneous modelling of subject- and object-specific covariates and interactions between them. A further advantage of this approach is that standard software for fitting log-linear models, such as GLIM, can be used.
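A hedged sketch of the basic Bradley–Terry fit (without the subject-specific covariates and interactions the paper adds), expressed as a binomial GLM with a +1/−1 design matrix; the objects and comparison data are invented for illustration.

```python
# Basic Bradley-Terry model: P(i beats j) = exp(a_i) / (exp(a_i) + exp(a_j)),
# fitted as a logistic regression with a +1/-1 design and no intercept.
import numpy as np
import statsmodels.api as sm

objects = ["A", "B", "C", "D"]
# (winner, loser) pairs from paired-comparison judgements (illustrative only)
comparisons = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"),
               ("B", "A"), ("C", "D"), ("C", "A"), ("D", "C")]

X = np.zeros((len(comparisons), len(objects) - 1))
y = np.ones(len(comparisons))            # first listed object is always the winner
for i, (win, lose) in enumerate(comparisons):
    for obj, sign in ((win, 1.0), (lose, -1.0)):
        j = objects.index(obj)
        if j > 0:                        # object "A" is the reference (ability 0)
            X[i, j - 1] += sign

fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
abilities = np.concatenate(([0.0], np.asarray(fit.params)))   # log-worth parameters
print(dict(zip(objects, np.round(abilities, 3))))
```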

6.
This paper presents an EM algorithm for maximum likelihood estimation in generalized linear models with overdispersion. The algorithm is initially derived as a form of Gaussian quadrature assuming a normal mixing distribution, but with only slight variation it can be used for a completely unknown mixing distribution, giving a straightforward method for the fully non-parametric ML estimation of this distribution. This is of value because the ML estimates of the GLM parameters may be sensitive to the specification of a parametric form for the mixing distribution. A listing of a GLIM4 algorithm for fitting the overdispersed binomial logit model is given in an appendix. A simple method is given for obtaining correct standard errors for parameter estimates when using the EM algorithm. Several examples are discussed.
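The EM algorithm itself is not reproduced here; the sketch below instead maximizes the Gaussian-quadrature form of the likelihood directly, for an overdispersed binomial logit with a normal random intercept (the parametric special case the paper starts from). Data and parameter values are simulated for illustration only.

```python
# Direct ML for a binomial logit with a normal random intercept, integrating
# out the mixing distribution by Gauss-Hermite quadrature.
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(4)
m, trials = 150, 20
x = rng.normal(size=m)
X = np.column_stack([np.ones(m), x])
u = rng.normal(scale=0.8, size=m)                       # cluster-level random effects
y = rng.binomial(trials, expit(X @ np.array([-0.5, 1.0]) + u))

nodes, weights = hermgauss(15)                          # Gauss-Hermite rule

def neg_log_lik(theta):
    beta, log_sigma = theta[:2], theta[2]
    sigma = np.exp(log_sigma)
    eta = X @ beta
    # quadrature over the random intercept: u ~ N(0, sigma^2)
    lin = eta[:, None] + sigma * np.sqrt(2.0) * nodes[None, :]
    p = expit(lin)
    log_binom = gammaln(trials + 1) - gammaln(y + 1) - gammaln(trials - y + 1)
    logpmf = log_binom[:, None] + y[:, None] * np.log(p) + (trials - y)[:, None] * np.log1p(-p)
    lik = np.exp(logpmf) @ weights / np.sqrt(np.pi)
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=np.array([0.0, 0.0, -1.0]), method="BFGS")
print("beta:", fit.x[:2], " sigma:", np.exp(fit.x[2]))
```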

7.
This paper proposes an iterative process, that can be implemented using GLIM, for fitting generalized linear models with linear inequality parameter constraints, when the maximum likelihood estimates exist and are unique. A one-step estimate is also introduced and some diagnostic measures are obtained. Finally an example is given for illustration.
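A minimal sketch of inequality-constrained maximum likelihood for a Poisson log-linear model, using generic SciPy optimization rather than the paper's GLIM-based iterative process; the constraint beta_1 >= beta_2 >= 0 and the data are illustrative.

```python
# Constrained ML for a Poisson GLM with linear inequality constraints.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.8, 0.2])
y = rng.poisson(np.exp(X @ beta_true))

def neg_log_lik(beta):
    eta = X @ beta
    return -np.sum(y * eta - np.exp(eta))        # Poisson log-likelihood (up to a constant)

constraints = [
    {"type": "ineq", "fun": lambda b: b[1] - b[2]},   # beta_1 >= beta_2
    {"type": "ineq", "fun": lambda b: b[2]},          # beta_2 >= 0
]
fit = minimize(neg_log_lik, x0=np.zeros(3), method="SLSQP", constraints=constraints)
print("constrained estimates:", fit.x)
```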

8.
We describe how a log-linear model can be used to compute the nonparametric maximum likelihood estimate of the survival curve from interval-censored data. This permits such computation to be performed with the aid of readily available statistical software such as GLIM or SAS. The method is illustrated with reference to data from a cohort of Danish homosexual men, each of whom was tested for HIV positivity on one or more of six possible follow-up times.
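The log-linear/GLIM route is not shown here; the sketch below computes the same nonparametric MLE by the self-consistency (EM) iteration usually attributed to Turnbull, on a small invented set of censoring intervals.

```python
# Nonparametric MLE of a survival curve from interval-censored data via a
# self-consistency (EM) iteration over a candidate support set.
import numpy as np

# observation i is known to fail in (L_i, R_i]; R_i = inf means right-censored
L = np.array([0.0, 2.0, 1.0, 3.0, 0.0, 4.0])
R = np.array([2.0, 4.0, 3.0, np.inf, 1.0, np.inf])

# candidate support points: the finite, positive interval endpoints
support = np.unique(np.concatenate([L, R]))
support = support[np.isfinite(support) & (support > 0)]
# alpha[i, j] = 1 if support point j lies inside observation i's interval
alpha = (support[None, :] > L[:, None]) & (support[None, :] <= R[:, None])
# right-censored subjects with no admissible finite point get a dummy tail point
alpha = np.column_stack([alpha, ~alpha.any(axis=1)])

p = np.full(alpha.shape[1], 1.0 / alpha.shape[1])    # initial mass function
for _ in range(500):
    num = alpha * p                                   # E-step: allocate each subject's
    w = num / num.sum(axis=1, keepdims=True)          # mass over its admissible points
    p_new = w.mean(axis=0)                            # M-step: average the allocations
    if np.max(np.abs(p_new - p)) < 1e-8:
        break
    p = p_new

surv = 1.0 - np.cumsum(p[:-1])                        # S(t) just after each support point
print(dict(zip(support, np.round(surv, 3))))
```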

9.
This paper considers the problem of estimating the linear parameters of a Generalised Linear Model (GLM) when the explanatory variable is subject to measurement error. In this situation the induced model for dependence on the approximate explanatory variable is not usually of GLM form. However, when the distribution of measurement error is known or estimated from replicated measurements, application of the GLIM iteratively reweighted least squares algorithm with transformed data and weighting is shown to produce maximum quasi-likelihood estimates in many cases. Details of this approach are given for two particular generalized linear models; simulation results illustrate the usefulness of the theory for these models.

10.
A semi-parametric additive model for variance heterogeneity
This paper presents a flexible model for variance heterogeneity in a normal error model. Specifically, both the mean and variance are modelled using semi-parametric additive models. We call this model a Mean And Dispersion Additive Model (MADAM). A successive relaxation algorithm for fitting the model is described and justified as maximizing a penalized likelihood function with penalties for lack of smoothness in the additive non-parametric functions in both mean and variance models. The algorithm is implemented in GLIM4, allowing flexible and interactive modelling of variance heterogeneity. Two data sets are used for demonstration.
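A simplified, fully parametric analogue of the mean-and-dispersion idea: alternate a weighted least-squares fit for the mean with a gamma GLM (log link) for the squared residuals. The paper's MADAM uses semi-parametric additive terms and GLIM4; this statsmodels sketch only illustrates the alternating scheme, on simulated data.

```python
# Joint mean/dispersion modelling by successive relaxation (parametric sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(0, 1, n)
X = sm.add_constant(x)
sigma2 = np.exp(-1.0 + 2.0 * x)                 # true heteroscedastic variance
y = 1.0 + 3.0 * x + rng.normal(scale=np.sqrt(sigma2))

weights = np.ones(n)
for _ in range(20):
    mean_fit = sm.WLS(y, X, weights=weights).fit()          # mean model
    d = (y - mean_fit.fittedvalues) ** 2                    # squared residuals
    disp_fit = sm.GLM(d, X, family=sm.families.Gamma(sm.families.links.Log())).fit()
    new_weights = 1.0 / disp_fit.fittedvalues               # inverse fitted variances
    if np.max(np.abs(new_weights - weights)) < 1e-6:
        break
    weights = new_weights

print("mean coefficients:       ", mean_fit.params)
print("log-variance coefficients:", disp_fit.params)
```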

11.
Changes in survival rates during 1940–1992 for patients with Hodgkin's disease are studied by using population-based data. The aim of the analysis is to identify when the breakthrough in clinical trials of chemotherapy treatments started to increase population survival rates, and to find how long it took for the increase to level off, indicating that the full population effect of the breakthrough had been realized. A Weibull relative survival model is used because the model parameters are easily interpretable when assessing the effect of advances in clinical trials. However, the methods apply to any relative survival model that falls within the generalized linear models framework. The model is fitted by using modifications of existing software (SAS, GLIM) and profile likelihood methods. The results are similar to those from a cause-specific analysis of the data by Feuer and co-workers. Survival started to improve around the time that a major chemotherapy breakthrough (nitrogen mustard, Oncovin, prednisone and procarbazine) was publicized in the mid 1960s but did not level off for 11 years. For the analysis of data where the cause of death is obtained from death certificates, the relative survival approach has the advantage of providing the necessary adjustment for expected mortality from causes other than the disease without requiring information on the causes of death.

12.
A generalized linear empirical Bayes model is developed for empirical Bayes analysis of several means in natural exponential families. A unified approach is presented for all natural exponential families with quadratic variance functions (the Normal, Poisson, Binomial, Gamma, and two others). The hyperparameters are estimated using the extended quasi-likelihood of Nelder and Pregibon (1987), which is easily implemented via the GLIM package. The accuracy of these estimates is developed by asymptotic approximation of the variance. Two data examples are illustrated.

13.
Two useful statistical methods for generating a latent variable are described and extended to incorporate polytomous data and additional covariates. Item response analysis is not well-known outside its area of application, mainly because the procedures to fit the models are computer intensive and not routinely available within general statistical software packages. The linear score technique is less computer intensive, straightforward to implement and has been proposed as a good approximation to item response analysis. Both methods have been implemented in the standard statistical software package GLIM 4.0, and are compared to determine their effectiveness.

14.
A log-linear modelling approach is proposed for dealing with polytomous, unordered exposure variables in case-control epidemiological studies with matched pairs. Hypotheses concerning epidemiological parameters are shown to be expressible in terms of log-linear models for the expected frequencies of the case-by-control square concordance table representation of the matched data; relevant maximum likelihood estimates and goodness-of-fit statistics are presented. Possible extensions to account for ordered categorical risk factors and multiple controls are illustrated, and comparisons with previous work are discussed. Finally, the possibility of implementing the proposed method with GLIM is illustrated within the context of a data set already analyzed by other authors.

15.
Multilevel models have been widely applied to analyze data sets which present some hierarchical structure. In this paper we propose a generalization of the normal multilevel models, named elliptical multilevel models. This proposal suggests the use of distributions in the elliptical class, thus involving all symmetric continuous distributions, including the normal distribution as a particular case. Elliptical distributions may have lighter or heavier tails than the normal ones. In the case of normal error models with the presence of outlying observations, heavy-tailed error models may be applied to accommodate such observations. In particular, we discuss some aspects of the elliptical multilevel models, such as maximum likelihood estimation and residual analysis to assess features related to the fitting and the model assumptions. Finally, two motivating examples analyzed under normal multilevel models are reanalyzed under Student-t and power exponential multilevel models. Comparisons with the normal multilevel model are performed by using residual analysis.

16.
A Comparison of Frailty and Other Models for Bivariate Survival Data
Multivariate survival data arise when each study subject may experience multiple events or when study subjects are clustered into groups. Statistical analyses of such data need to account for the intra-cluster dependence through appropriate modeling. Frailty models are the most popular for such failure time data. However, there are other approaches which model the dependence structure directly. In this article, we compare the frailty models for bivariate data with the models based on bivariate exponential and Weibull distributions. Bayesian methods provide a convenient paradigm for comparing the two sets of models we consider. Our techniques are illustrated using two examples. One simulated example demonstrates model choice methods developed in this paper and the other example, based on a practical data set of onset of blindness among patients with diabetic retinopathy, considers Bayesian inference using different models.

17.
This case-study fits a variety of neural network (NN) models to the well-known airline data and compares the resulting forecasts with those obtained from the Box–Jenkins and Holt–Winters methods. Many potential problems in fitting NN models were revealed such as the possibility that the fitting routine may not converge or may converge to a local minimum. Moreover it was found that an NN model which fits well may give poor out-of-sample forecasts. Thus we think it is unwise to apply NN models blindly in 'black box' mode as has sometimes been suggested. Rather, the wise analyst needs to use traditional modelling skills to select a good NN model, e.g. to select appropriate lagged variables as the 'inputs'. The Bayesian information criterion is preferred to Akaike's information criterion for comparing different models. Methods of examining the response surface implied by an NN model are examined and compared with the results of alternative nonparametric procedures using generalized additive models and projection pursuit regression. The latter imposes less structure on the model and is arguably easier to understand.
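A hedged sketch of the general workflow the case-study describes (lagged inputs chosen by the analyst, a small hidden layer, out-of-sample checking), run on a synthetic monthly series with scikit-learn; the actual airline data, BIC-based model comparison and response-surface examination are not reproduced.

```python
# Lagged-input neural network for a univariate monthly series with a holdout check.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.arange(180)
series = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=2, size=t.size)

lags = [1, 12, 13]                               # lagged inputs chosen by the analyst
p = max(lags)
X = np.column_stack([series[p - l:-l] for l in lags])
y = series[p:]

split = len(y) - 24                              # hold out the last 24 months
nn = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
nn.fit(X[:split], y[:split])

in_sample_rmse = np.sqrt(np.mean((nn.predict(X[:split]) - y[:split]) ** 2))
out_sample_rmse = np.sqrt(np.mean((nn.predict(X[split:]) - y[split:]) ** 2))
print("in-sample RMSE:", round(in_sample_rmse, 2), " out-of-sample RMSE:", round(out_sample_rmse, 2))
```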

18.
Time series within fields such as finance and economics are often modelled using long memory processes. Alternative studies on the same data can suggest that series may actually contain a 'changepoint' (a point within the time series where the data generating process has changed). These models have been shown to have elements of similarity, such as within their spectrum. Without prior knowledge this leads to an ambiguity between these two models, meaning it is difficult to assess which model is most appropriate. We demonstrate that considering this problem in a time-varying environment using the time-varying spectrum removes this ambiguity. Using the wavelet spectrum, we then use a classification approach to determine the most appropriate model (long memory or changepoint). Simulation results are presented across a number of models followed by an application to stock cross-correlations and US inflation. The results indicate that the proposed classification outperforms an existing hypothesis testing approach on a number of models and performs comparatively across others.

19.
We generalize Wedderburn's (1974) notion of quasi-likelihood to define a quasi-Bayesian approach for nonlinear estimation problems by allowing the full distributional assumptions about the random component in the classical Bayesian approach to be replaced by much weaker assumptions in which only the first and second moments of the prior distribution are specified. The formulas given are based on the Gauss-Newton estimating procedure and require only the first and second moments of the distributions involved. The use of the GLIM package to solve the estimation problems considered is discussed. Applications are made to estimation problems in inverse linear regression, regression models with both variables subject to error and also to the estimation of the size of animal populations. Some numerical illustrations are reported. For the inverse linear regression problem, comparisons with ordinary Bayesian and other techniques are considered.

20.
In recent years, there has been considerable interest in regression models based on zero-inflated distributions. These models are commonly encountered in many disciplines, such as medicine, public health, and environmental sciences, among others. The zero-inflated Poisson (ZIP) model has been typically considered for these types of problems. However, the ZIP model can fail if the non-zero counts are overdispersed in relation to the Poisson distribution, hence the zero-inflated negative binomial (ZINB) model may be more appropriate. In this paper, we present a Bayesian approach for fitting the ZINB regression model. This model considers that an observed zero may come from a point mass distribution at zero or from the negative binomial model. The likelihood function is utilized to compute not only some Bayesian model selection measures, but also to develop Bayesian case-deletion influence diagnostics based on q-divergence measures. The approach can be easily implemented using standard Bayesian software, such as WinBUGS. The performance of the proposed method is evaluated with a simulation study. Further, a real data set is analyzed, where we show that the ZINB regression model seems to fit the data better than its Poisson counterpart.
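The paper's Bayesian fit (e.g. in WinBUGS) is not reproduced here; the sketch below simply writes down the zero-inflated negative binomial likelihood, with a constant zero-inflation probability, and maximizes it with SciPy on simulated data, to make the mixture structure of the model concrete.

```python
# Zero-inflated negative binomial likelihood: a zero comes either from a point
# mass at zero (with probability pi) or from the NB component.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
mu_true = np.exp(0.5 + 0.7 * x)
k_true, pi_true = 2.0, 0.3                      # NB size parameter, zero-inflation prob
y_nb = rng.negative_binomial(k_true, k_true / (k_true + mu_true))
y = np.where(rng.uniform(size=n) < pi_true, 0, y_nb)

def nb_logpmf(y, mu, k):
    # negative binomial with mean mu and size (dispersion) parameter k
    return (gammaln(y + k) - gammaln(k) - gammaln(y + 1)
            + k * np.log(k / (k + mu)) + y * np.log(mu / (k + mu)))

def neg_log_lik(theta):
    beta, log_k, logit_pi = theta[:2], theta[2], theta[3]
    mu, k, pi = np.exp(X @ beta), np.exp(log_k), expit(logit_pi)
    ll_nb = nb_logpmf(y, mu, k)
    ll_zero = np.log(pi + (1 - pi) * np.exp(ll_nb))   # zeros: point mass or NB
    ll_pos = np.log(1 - pi) + ll_nb                    # positives: NB only
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

fit = minimize(neg_log_lik, x0=np.zeros(4), method="BFGS")
beta_hat, k_hat, pi_hat = fit.x[:2], np.exp(fit.x[2]), expit(fit.x[3])
print("beta:", beta_hat, "size:", k_hat, "zero-inflation prob:", pi_hat)
```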
