Similar Articles
 20 records found
1.
Although generalized linear mixed models are recognized to be of major practical importance, it is also known that they can be computationally demanding. The problem is the evaluation of the integral in calculating the marginalized likelihood. The straightforward method is the Gauss–Hermite technique, which is based on Gaussian quadrature points. Another approach is provided by the class of penalized quasi-likelihood methods. It is commonly believed that the Gauss–Hermite method works relatively well in simple situations but fails in more complicated structures. However, we present here a strikingly simple example of a logistic random-intercepts model, in the context of a longitudinal clinical trial, where the method gives valid results only for a high number of quadrature points (Q). As a consequence, this result warns the practitioner to examine routinely the dependence of the results on Q. Adaptive Gaussian quadrature, as implemented in the new SAS procedure NLMIXED, offered the solution to our problem. However, even the adaptive version of Gaussian quadrature needs careful handling to ensure convergence.

2.
This paper introduces a new approach, based on dependent univariate GLMs, for fitting multivariate mixture models. This approach is a multivariate generalization of the method for univariate mixtures presented by Hinde (1982). Its accuracy and efficiency are compared with direct maximization of the log-likelihood. Using a simulation study, we also compare the efficiency of Monte Carlo and Gaussian quadrature methods for approximating the mixture distribution. The new approach with Gaussian quadrature outperforms the alternative methods considered. The work is motivated by the multivariate mixture models which have been proposed for modelling changes of employment states at an individual level. Similar formulations are of interest for modelling movement between other social and economic states and multivariate mixture models also occur in biostatistics and epidemiology.

3.
By means of a fractional factorial simulation experiment, we compare the performance of penalised quasi-likelihood (PQL), non-adaptive Gaussian quadrature and adaptive Gaussian quadrature in estimating parameters for multilevel logistic regression models. The comparison is done in terms of bias, mean-squared error (MSE), numerical convergence and computational efficiency. It turns out that in terms of MSE, standard versions of the quadrature methods perform relatively poorly in comparison with PQL.

4.
The importance of discrete spatial models cannot be overemphasized, especially when measuring living standards. The battery of measurements is generally categorical, with nearer geo-referenced observations featuring stronger dependencies. This study presents a Clipped Gaussian Geo-Classification (CGG-C) model for spatially-dependent ordered data, and compares its performance with existing methods in classifying household poverty using Ghana Living Standards Survey (GLSS 6) data. Bayesian inference was performed on data sampled by MCMC. Model evaluation was based on measures of classification and prediction accuracy. Spatial associations, given some household features, were quantified, and a poverty classification map for Ghana was developed. The estimation results showed that many of the statistically significant covariates were strongly related to the ordered response variable. Households at specific locations tended to uniformly experience specific levels of poverty, thus providing an empirical spatial characterization of poverty in Ghana. A comparative analysis of validation results showed that the CGG-C model (with a 14.2% misclassification rate) outperformed the cumulative probit (CP) model, whose misclassification rate was 17.4%. This approach to poverty analysis is relevant for policy design and the implementation of cost-effective programmes to reduce category- and site-specific poverty incidence, and to monitor changes in both category and geographical trends thereof.

KEYWORDS: Ordered responses; spatial correlation; Bayesian estimation via MCMC; Gaussian random fields; poverty classification

5.
In Dutt (1973, 1975), integral representations over (0, A) were obtained for upper and lower multivariate normal and t probabilities. It was pointed out that these integral representations, when evaluated by Gauss–Hermite quadrature, yield rapid and accurate numerical results.

Here integral representations, based on an integral formula due to Gurland (1948), are given for arbitrary multivariate probabilities. Application of this general representation to computing multivariate χ² probabilities is discussed, and numerical results using Gaussian quadrature are given for the bivariate and equicorrelated trivariate cases. Applications to the multivariate densities studied by Miller (1965) are also included.

6.
Routine implementation of the Bayesian paradigm requires an efficient approach to the calculation and display of posterior or predictive distributions for given likelihood and prior specifications. In this paper we shall review some of the analytic and numerical approaches currently available, describing in detail a numerical integration strategy based on Gaussian quadrature, and an associated strategy for the reconstruction and display of distributions based on spline techniques.
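As a minimal illustration of the quadrature strategy (a hypothetical one-parameter example, not the paper's implementation), a posterior moment under a N(0, 1) prior and a binomial likelihood with a logit link can be computed with a Gauss–Hermite rule:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def posterior_mean(y, n, Q=40):
    """Posterior mean of theta under theta ~ N(0, 1) and
    y successes in n Bernoulli trials with p = expit(theta),
    computed by a Q-point Gauss-Hermite rule."""
    nodes, weights = hermgauss(Q)
    theta = np.sqrt(2.0) * nodes                 # map nodes to the N(0, 1) prior scale
    p = 1.0 / (1.0 + np.exp(-theta))
    lik = p**y * (1.0 - p)**(n - y)              # binomial likelihood kernel
    norm = np.sum(weights * lik)                 # normalizing constant (common factors cancel)
    return np.sum(weights * theta * lik) / norm

print(posterior_mean(y=7, n=10))
```

The posterior mean is shrunk from the maximum-likelihood value logit(0.7) ≈ 0.85 toward the prior mean 0, as expected.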

7.
Random effects models play a critical role in modelling longitudinal data. However, there are few studies on kernel-based maximum likelihood methods for semiparametric random effects models. In this paper, based on kernel and likelihood methods, we propose a pooled global maximum likelihood method for partial linear random effects models. The pooled global maximum likelihood method employs local approximations of the nonparametric function at a group of grid points simultaneously, instead of at one point. Gaussian quadrature is used to approximate the integration of the likelihood with respect to the random effects. The asymptotic properties of the proposed estimators are rigorously studied. Simulation studies are conducted to demonstrate the performance of the proposed approach. We also apply the proposed method to analyse correlated medical costs in the Medical Expenditure Panel Survey data set.

8.
The main goal of the paper is to specify a suitable multivariate multilevel model for polytomous responses with a non-ignorable missing-data mechanism, in order to determine the factors that influence how graduates acquire their skills and to evaluate degree programmes on the basis of the adequacy of the skills they give their graduates. The application is based on data gathered by a telephone survey conducted, about two years after graduation, on the year-2000 graduates of the University of Florence. A multilevel multinomial logit model for the response of interest is fitted simultaneously with a multilevel logit model for the selection mechanism, by maximum likelihood with adaptive Gaussian quadrature. In the application the multilevel structure has a crucial role, while selection bias turns out to be negligible. The analysis of the empirical Bayes residuals allows some extreme degree programmes to be detected for further inspection.

9.
Taking time-of-use tiered pricing as an example, and against the background of population ageing and the gradual relaxation of fertility policy, this paper builds a structural econometric model to analyse empirically how nonlinear pricing affects the policy goal of income redistribution once household demographic characteristics are introduced. A Quadratic Almost Ideal Demand System (QUAIDS) incorporating household demographic characteristics is constructed, and a model for measuring the adjusted income-redistribution effect is established on the basis of a relative equivalent-compensation method. The electricity-bill compensation rate, and the corresponding amount, needed to keep a household's utility unchanged when members of different types are added are estimated. The results show that introducing household demographic characteristics strengthens the income-redistribution effect; that household demographic characteristics significantly influence consumers' behavioural choices; that the structure of electricity subsidies differs markedly across households of different sizes; and that, in a dynamic analysis, the influence of household demographic characteristics on the income-redistribution effect diminishes as the budget level rises.

10.
Longitudinal studies often entail categorical outcomes as primary responses. When dropout occurs, non-ignorability is frequently accounted for through shared parameter models (SPMs). In this context, several extensions from Gaussian to non-Gaussian longitudinal processes have been proposed. In this paper, we formulate an approach for non-Gaussian longitudinal outcomes in the framework of joint models. As an extension of SPMs based on shared latent effects, we assume that the history of the response up to the current time may influence the risk of dropout. This history is represented by the current expected value of the response. Since the time a subject spends in the study is continuous, we parametrize the dropout process through a proportional hazards model. The resulting model is referred to as the Generalized Linear Mixed Joint Model (GLMJM). To estimate the model parameters, we adopt a maximum likelihood approach via the EM algorithm. In this context, maximization of the observed-data log-likelihood requires numerical integration over the random-effect posterior distribution, which is usually not straightforward; under the assumption of Gaussian random effects, we compare Gauss–Hermite and pseudo-adaptive Gaussian quadrature rules. We investigate in a simulation study the behaviour of parameter estimates in the case of Poisson and binomial longitudinal responses, and apply the GLMJM to a benchmark dataset.

11.
Longitudinal studies often entail non-Gaussian primary responses. When dropout occurs, the missingness process may be non-ignorable, and a joint model for the primary response and a time-to-event may represent an appealing tool to account for dependence between the two processes. As an extension of the recently proposed GLMJM, which is based on Gaussian latent effects, we assume that the random effects follow a smooth, P-spline based density. To estimate model parameters, we adopt a two-step conditional Newton–Raphson algorithm. Since the maximization of the penalized log-likelihood requires numerical integration over the random effect, which is often cumbersome, we opt for a pseudo-adaptive Gaussian quadrature rule to approximate the model likelihood. We discuss the proposed model by analyzing an original dataset on dilated cardiomyopathies and through a simulation study.

12.
The zero-truncated inverse Gaussian–Poisson model, obtained by first mixing the Poisson model, assuming its expected value has an inverse Gaussian distribution, and then truncating the model at zero, is very useful when modelling frequency count data. A Bayesian analysis based on this statistical model is implemented on the word frequency counts of various texts, and its validity is checked by exploring the posterior distribution of the Pearson errors and by implementing posterior predictive consistency checks. The analysis based on this model is useful because it allows one to use the posterior distribution of the model's mixing density as an approximation of the posterior distribution of the density of the word frequencies in the author's vocabulary, which is useful for characterizing that author's style. The posterior distributions of the expectation and of measures of variability of that mixing distribution can be used to assess the size and diversity of the vocabulary. An alternative analysis is proposed based on the inverse Gaussian-zero-truncated Poisson mixture model, which is obtained by switching the order of the mixing and truncation stages. Even though this second model fits some of the word frequency data sets more accurately than the first, in practice the analysis based on it is not as useful, because it does not allow one to estimate the word frequency distribution of the vocabulary.
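The distinction between the two orders of mixing and truncation can be sketched numerically. In the toy computation below, the inverse Gaussian mixing density for the Poisson mean uses illustrative parameters (mu = 2, lam = 3, not values from the paper) and is integrated on a grid; `p_truncate_after_mixing` is the first model (mix, then truncate at zero) and `p_mix_truncated` the second (truncate each Poisson at zero, then mix):

```python
import numpy as np
from math import factorial

def ig_pdf(x, mu, lam):
    """Inverse Gaussian density, used here as the mixing distribution
    for the Poisson mean (parameters are illustrative)."""
    return np.sqrt(lam / (2 * np.pi * x**3)) * np.exp(-lam * (x - mu)**2 / (2 * mu**2 * x))

lam_grid = np.linspace(1e-6, 40, 40000)   # integration grid for the Poisson mean
dx = lam_grid[1] - lam_grid[0]
g = ig_pdf(lam_grid, mu=2.0, lam=3.0)

def pois(k, lam):
    return np.exp(-lam) * lam**k / factorial(k)

def p_truncate_after_mixing(k):
    # model 1: mix the Poisson over g first, then condition on a nonzero count
    num = np.sum(pois(k, lam_grid) * g) * dx
    p0 = np.sum(pois(0, lam_grid) * g) * dx
    return num / (1.0 - p0)

def p_mix_truncated(k):
    # model 2: truncate each Poisson at zero first, then mix over g
    return np.sum(pois(k, lam_grid) / (1.0 - np.exp(-lam_grid)) * g) * dx

for k in range(1, 6):
    print(k, p_truncate_after_mixing(k), p_mix_truncated(k))
```

The two probability mass functions generally differ, which is why the two analyses can fit the same word-frequency data differently.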

13.
This paper proposes a method for estimating the parameters in a generalized linear model with missing covariates. The missing covariates are assumed to come from a continuous distribution, and are assumed to be missing at random. In particular, Gaussian quadrature methods are used on the E-step of the EM algorithm, leading to an approximate EM algorithm. The parameters are then estimated using the weighted EM procedure given in Ibrahim (1990). This approximate EM procedure leads to approximate maximum likelihood estimates, whose standard errors and asymptotic properties are given. The proposed procedure is illustrated on a data set.

14.
Multiple comparison methods are widely implemented in statistical packages and heavily used. To obtain the critical value of a multiple comparison method for a given confidence level, a double integral equation must be solved. Current computer implementations evaluate one double integral for each candidate critical value using Gaussian quadrature. Consequently, iterative refinement of the critical value can slow the response time enough to hamper interactive data analysis. However, for balanced designs, to obtain the critical value for multiple comparisons with the best, subset selection, and one-sided multiple comparison with a control, if one regards the inner integral as a function of the outer integration variable, then this function can be obtained by discrete convolution using the Fast Fourier Transform (FFT). Exploiting the fact that this function need not be re-evaluated during iterative refinement of the critical value, it is shown that the FFT method obtains critical values at least four times as accurate and two to five times as fast as the Gaussian quadrature method.
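The computational device described here, obtaining the inner integral as a function of the outer variable by discrete convolution, rests on the standard FFT convolution identity. A generic sketch of that identity (not the authors' implementation), with zero-padding so the circular convolution reproduces the linear one:

```python
import numpy as np

def fft_convolve(a, b):
    """Linear convolution of two sequences via zero-padded FFTs:
    pad to length len(a)+len(b)-1, multiply spectra, invert."""
    n = len(a) + len(b) - 1
    A = np.fft.rfft(a, n)
    B = np.fft.rfft(b, n)
    return np.fft.irfft(A * B, n)

rng = np.random.default_rng(0)
a = rng.standard_normal(64)   # e.g. the tabulated inner-integrand values
b = rng.standard_normal(64)   # e.g. the tabulated density values
print(np.max(np.abs(fft_convolve(a, b) - np.convolve(a, b))))
```

Because the padded FFT product equals the direct convolution, the grid of inner-integral values can be computed once in O(n log n) and reused at every iteration of the critical-value search.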

15.
Mixed models are regularly used in the analysis of clustered data, but are only recently being used for imputation of missing data. In household surveys where multiple people are selected from each household, imputation of missing values should preserve the structure pertaining to people within households and should not artificially change the apparent intracluster correlation (ICC). This paper focuses on the use of multilevel models for imputation of missing data in household surveys. In particular, the performance of a best linear unbiased predictor for both stochastic and deterministic imputation using a linear mixed model is compared to imputation based on a single-level linear model, both with and without information about household respondents. In this paper an evaluation is carried out in the context of imputing the hourly wage rate in the Household, Income and Labour Dynamics in Australia Survey. Nonresponse is generated under various assumptions about the missingness mechanism for persons and households, and with low, moderate and high intra-household correlation to assess the benefits of the multilevel imputation model under different conditions. The mixed model and the single-level model with information about the household respondent lead to clear improvements when the ICC is moderate or high, and when there is informative missingness.

16.
The inverse Gaussian (IG) distribution is often applied in statistical modelling, especially with lifetime data. We present tests for outlying values of the parameters (μ, λ) of this distribution when data are available from a sample of independent units and possibly with more than one event per unit. Outlier tests are constructed from likelihood ratio tests for equality of parameters. The test for an outlying value of λ is based on an F-distributed statistic that is transformed to an approximate normal statistic when there are unequal numbers of events per unit. Simulation studies are used to confirm that Bonferroni tests have accurate size and to examine the powers of the tests. The application to first hitting time models, where the IG distribution is derived from an underlying Wiener process, is described. The tests are illustrated on data concerning the strength of different lots of insulating material.

17.
A full Bayesian approach based on ordinary differential equation (ODE)-penalized B-splines and penalized Gaussian mixture is proposed to jointly estimate ODE-parameters, state function and error distribution from the observation of some state functions involved in systems of affine differential equations. Simulations inspired by pharmacokinetic (PK) studies show that the proposed method provides comparable results to the method based on the standard ODE-penalized B-spline approach (i.e. with the Gaussian error distribution assumption) and outperforms the standard ODE-penalized B-splines when the distribution is not Gaussian. This methodology is illustrated on a PK data set.

18.
The present study proposes a method to estimate the yield of a crop. The proposed Gaussian quadrature (GQ) method makes it possible to estimate the crop yield from a smaller subsample. Identification of the plots, and of the corresponding weights to be assigned to the yields of the plots comprising a subsample, is done with the help of full-sample information on certain auxiliary variables relating to biometrical characteristics of the plant. Computational experience reveals that the proposed method leads to about a 78% reduction in sample size with an absolute percentage error of 2.7%. The performance of the proposed method has been compared with that of random sampling on the basis of the average absolute percentage error and the standard deviation of yield estimates obtained from 40 samples of comparable size. Interestingly, the average absolute percentage error as well as the standard deviation is considerably smaller for the GQ estimates than for the random-sample estimates. The proposed method is quite general and can be applied to other crops as well, provided information on auxiliary variables relating to yield-contributing biometrical characteristics is available.

19.
We study the problem of selecting a regularization parameter in penalized Gaussian graphical models. When the goal is to obtain a model with good predictive power, cross-validation is the gold standard. We present a new estimator of Kullback–Leibler loss in Gaussian graphical models which provides a computationally fast alternative to cross-validation. The estimator is obtained by approximating leave-one-out cross-validation. Our approach is demonstrated on simulated data sets for various types of graphs. The proposed formula exhibits superior performance, especially in the typical small-sample-size scenario, compared with other available alternatives to cross-validation, such as Akaike's information criterion and generalized approximate cross-validation. We also show that the estimator can be used to improve the performance of the Bayesian information criterion when the sample size is small.

20.
We propose and study by means of simulations and graphical tools a class of goodness-of-fit tests for ARCH models. The tests are based on the empirical distribution function of squared residuals and smooth (parametric) bootstrap. We examine empirical size and power by means of a simulation study. While the tests have overall correct size, their power strongly depends on the type of alternative and is particularly high when the assumption of Gaussian innovations is violated. As an example, the tests are applied to returns on Foreign Exchange rates.
