Similar literature (20 results)
1.
Effective implementation of likelihood inference in models for high‐dimensional data often requires a simplified treatment of nuisance parameters, with these replaced by convenient estimates. In addition, the likelihood function may have been simplified by means of a partial specification of the model, as is the case when composite likelihood is used. In such circumstances, tests and confidence regions for the parameter of interest may be constructed using Wald type and score type statistics, defined so as to account for nuisance parameter estimation or partial specification of the likelihood. In this paper a general analytical expression for the required asymptotic covariance matrices is derived, and suggestions for obtaining Monte Carlo approximations are presented. The same matrices are involved in a rescaling adjustment of the log likelihood ratio type statistic that we propose. This adjustment restores the usual chi‐squared asymptotic distribution, which is generally invalid after the simplifications considered. The practical implication is that, for a wide variety of likelihoods and nuisance parameter estimates, confidence regions for the parameters of interest are readily computable from the rescaled log likelihood ratio type statistic as well as from the Wald type and score type statistics. Two examples, a measurement error model with full likelihood and a spatial correlation model with pairwise likelihood, illustrate and compare the procedures. Wald type and score type statistics may give rise to confidence regions with unsatisfactory shape in small and moderate samples. In addition to having satisfactory shape, regions based on the rescaled log likelihood ratio type statistic show empirical coverage in reasonable agreement with nominal confidence levels.
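As a rough sketch of how the adjusted covariance enters a Wald type statistic, the fragment below plugs an estimated sensitivity matrix H (expected negative Hessian of the possibly composite log likelihood) and variability matrix J (variance of the score) into the inverse Godambe sandwich H^{-1} J H^{-1}; the function name and the assumption that H and J are supplied externally, for instance from a Monte Carlo approximation, are illustrative and not the authors' implementation.

```python
import numpy as np

def wald_type_statistic(theta_hat, theta0, H_hat, J_hat):
    """Wald type statistic with a sandwich (inverse Godambe) covariance.

    theta_hat : estimate of the interest parameter (length p)
    theta0    : value of the parameter under the null hypothesis
    H_hat     : estimated sensitivity matrix (expected negative Hessian)
    J_hat     : estimated variability matrix (variance of the score)

    Under standard conditions the statistic is asymptotically chi-squared
    with p degrees of freedom even when the likelihood is composite or the
    nuisance parameters are replaced by plug-in estimates.
    """
    H_inv = np.linalg.inv(H_hat)
    cov = H_inv @ J_hat @ H_inv          # sandwich covariance H^{-1} J H^{-1}
    diff = np.asarray(theta_hat) - np.asarray(theta0)
    return float(diff @ np.linalg.solve(cov, diff))
```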

2.
In linear mixed‐effects (LME) models, if a fitted model has more random‐effect terms than the true model, a regularity condition required in the asymptotic theory may not hold. In such cases, the marginal Akaike information criterion (AIC) is positively biased for −2 times the expected log‐likelihood. The asymptotic bias of the maximum log‐likelihood as an estimator of the expected log‐likelihood is evaluated for LME models with balanced design in the context of parameter‐constrained models. Moreover, bias‐reduced marginal AICs for LME models based on a Monte Carlo method are proposed. The performance of the proposed criteria is compared with existing criteria by using example data and by a simulation study. It was found that the bias of the proposed criteria was smaller than that of the existing marginal AIC when a larger model was fitted and that the probability of choosing a smaller model incorrectly was decreased.
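In generic notation (a sketch of the quantities involved, not the paper's exact derivation), the marginal AIC and a Monte Carlo bias-reduced variant can be written as

\[
\mathrm{mAIC} = -2\,\ell_m(\hat\theta; y) + 2k,
\qquad
\mathrm{mAIC}^{*} = -2\,\ell_m(\hat\theta; y) + 2\,\hat B_{\mathrm{MC}},
\]

where \(k\) is the number of parameters and \(\hat B_{\mathrm{MC}}\) is a simulation-based estimate of the bias \(B = \mathbb{E}\{\ell_m(\hat\theta(Y); Y)\} - \mathbb{E}\{\ell_m(\hat\theta(Y); Y^{*})\}\) of the maximized log-likelihood as an estimator of the expected log-likelihood, with \(Y^{*}\) an independent replicate. When redundant random-effect terms push variance parameters to the boundary, \(B\) falls below \(k\), so the usual penalty \(2k\) over-corrects and the plain marginal AIC is positively biased, as stated above.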

3.
Subgroup detection has received increasing attention recently in different fields such as clinical trials, public management and market segmentation analysis. In these fields, people often face time‐to‐event data, which are commonly subject to right censoring. This paper proposes a semiparametric Logistic‐Cox mixture model for subgroup analysis when the outcome of interest is an event time subject to right censoring. The proposed method mainly consists of a likelihood ratio‐based testing procedure for testing the existence of subgroups. The expectation–maximization iteration is applied to improve the testing power, and a model‐based bootstrap approach is developed to implement the testing procedure. When there exist subgroups, one can also use the proposed model to estimate the subgroup effect and construct predictive scores for the subgroup membership. The large sample properties of the proposed method are studied. The finite sample performance of the proposed method is assessed by simulation studies. A real data example is also provided for illustration.
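One common way to write such a Logistic‐Cox mixture (generic notation; the covariate sets and symbols are assumptions, not necessarily the authors' exact formulation) is

\[
\lambda(t \mid x_i, \eta_i) = \lambda_0(t)\exp\{x_i^{\top}\beta + \gamma\,\eta_i\},
\qquad
\Pr(\eta_i = 1 \mid z_i) = \frac{\exp(z_i^{\top}\alpha)}{1+\exp(z_i^{\top}\alpha)},
\]

where \(\eta_i\) is the latent subgroup indicator, \(\lambda_0\) is an unspecified baseline hazard, and the null hypothesis of no subgroup corresponds to \(\gamma = 0\); because \(\alpha\) is not identified under the null, the likelihood ratio test is non-standard, which is one reason an EM-assisted computation and a model-based bootstrap calibration are natural here.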

4.
Elimination of a nuisance variable is often non‐trivial and may involve the evaluation of an intractable integral. One approach to evaluate these integrals is to use the Laplace approximation. This paper concentrates on a new approximation, called the partial Laplace approximation, that is useful when the integrand can be partitioned into two multiplicative disjoint functions. The technique is applied to the linear mixed model and shows that the approximate likelihood obtained can be partitioned to provide a conditional likelihood for the location parameters and a marginal likelihood for the scale parameters equivalent to restricted maximum likelihood (REML). Similarly, the partial Laplace approximation is applied to the t‐distribution to obtain an approximate REML for the scale parameter. A simulation study reveals that, in comparison to maximum likelihood, the scale parameter estimates of the t‐distribution obtained from the approximate REML show reduced bias.
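For reference, the ordinary first‐order Laplace approximation that the partial variant refines is, in generic notation,

\[
\int e^{n h(u)}\,du \;\approx\; e^{n h(\hat u)}\left(\frac{2\pi}{n}\right)^{d/2}\left|-h''(\hat u)\right|^{-1/2},
\qquad \hat u = \arg\max_u h(u),
\]

where \(d\) is the dimension of \(u\) and \(|\cdot|\) denotes a determinant; the partial Laplace approximation instead applies the expansion after factorising the integrand into two multiplicative pieces, which is what yields the conditional/marginal (REML-like) partition described above.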

5.
Network meta‐analysis can be implemented by using arm‐based or contrast‐based models. Here we focus on arm‐based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial‐by‐treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi‐likelihood/pseudo‐likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi‐likelihood/pseudo‐likelihood and h‐likelihood reduce bias and yield satisfactory coverage rates. Sum‐to‐zero restriction and baseline contrasts for random trial‐by‐treatment interaction effects, as well as a residual ML‐like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi‐likelihood/pseudo‐likelihood and h‐likelihood are therefore recommended.

6.
The focus of this paper is objective priors for spatially correlated data with nugget effects. In addition to the Jeffreys priors and commonly used reference priors, two types of “exact” reference priors are derived based on improper marginal likelihoods. An “equivalence” theorem is developed in the sense that the expectation of any function of the score functions of the marginal likelihood function can be taken under marginal likelihoods. Interestingly, these two types of reference priors are identical.

7.
It is common practice to compare the fit of non‐nested models using the Akaike (AIC) or Bayesian (BIC) information criteria. The basis of these criteria is the log‐likelihood evaluated at the maximum likelihood estimates of the unknown parameters. For the general linear model (and the linear mixed model, which is a special case), estimation is usually carried out using residual or restricted maximum likelihood (REML). However, for models with different fixed effects, the residual likelihoods are not comparable and hence information criteria based on the residual likelihood cannot be used. For model selection, it is often suggested that the models be refitted using maximum likelihood to enable the criteria to be used. The first aim of this paper is to highlight that both the AIC and BIC can be used for the general linear model by using the full log‐likelihood evaluated at the REML estimates. The second aim is to provide a derivation of the criteria under REML estimation. This aim is achieved by noting that the full likelihood can be decomposed into a marginal (residual) and conditional likelihood, and this decomposition then incorporates aspects of both the fixed effects and variance parameters. Using this decomposition, the appropriate information criteria for model selection of models which differ in their fixed effects specification can be derived. An example is presented to illustrate the results, and code is available for analyses using the ASReml‐R package.
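A generic sketch of the decomposition being used (not the paper's exact derivation): for a general linear model with fixed effects \(\beta\) and variance parameters \(\theta\), the full log‐likelihood can be split as

\[
\ell(\beta,\theta; y) \;=\; \ell_R(\theta; y) \;+\; \ell_C(\beta \mid \theta; y),
\]

where \(\ell_R\) is the residual (marginal) log‐likelihood maximized by REML and \(\ell_C\) is a conditional log‐likelihood carrying the fixed effects. Evaluating the full log‐likelihood at the REML estimates then gives, for example, \(\mathrm{AIC} = -2\,\ell(\hat\beta, \hat\theta_R; y) + 2(p+q)\), with \(p\) fixed‐effect and \(q\) variance parameters and \(\hat\theta_R\) the REML estimate, which remains comparable across models that differ in their fixed effects.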

8.
We introduce two types of graphical log‐linear models: label‐ and level‐invariant models for triangle‐free graphs. These models generalise symmetry concepts in graphical log‐linear models and provide a tool with which to model symmetry in the discrete case. A label‐invariant model is category‐invariant and is preserved after permuting some of the vertices according to transformations that maintain the graph, whereas a level‐invariant model equates expected frequencies according to a given set of permutations. These new models can both be seen as instances of a new type of graphical log‐linear model termed the restricted graphical log‐linear model, or RGLL, in which equality restrictions on subsets of main effects and first‐order interactions are imposed. Their likelihood equations and graphical representation can be obtained from those derived for the RGLL models.

9.
The authors discuss a class of likelihood functions involving weak assumptions on data generating mechanisms. These likelihoods may be appropriate when it is difficult to propose models for the data. The properties of these likelihoods are given and it is shown how they can be computed numerically by use of the Blahut-Arimoto algorithm. The authors then show how these likelihoods can give useful inferences using a data set for which no plausible physical model is apparent. The plausibility of the inferences is enhanced by the extensive robustness analysis these likelihoods permit.
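For orientation only, the sketch below implements the classic Blahut-Arimoto iteration in its channel-capacity form (alternating input-distribution update and the standard lower bound on capacity); how the algorithm is embedded in the likelihood computations of the paper is not reproduced here, and all names are illustrative.

```python
import numpy as np

def blahut_arimoto(W, n_iter=500, tol=1e-12):
    """Classic Blahut-Arimoto iteration for a discrete memoryless channel.

    W : (nx, ny) array with W[x, y] = P(y | x); rows sum to one.
    Returns the capacity estimate (in nats) and the optimizing input law q.
    """
    nx = W.shape[0]
    q = np.full(nx, 1.0 / nx)            # start from the uniform input law
    safe_W = np.where(W > 0.0, W, 1.0)   # avoids log(0); zero terms drop out
    cap = 0.0
    for _ in range(n_iter):
        p_y = np.maximum(q @ W, np.finfo(float).tiny)   # current output law
        d = np.sum(W * np.log(safe_W / p_y), axis=1)    # KL(W(.|x) || p_y)
        c = np.exp(d)
        z = float(q @ c)
        cap = np.log(z)                  # lower bound, converges to capacity
        q_next = q * c / z
        if np.max(np.abs(q_next - q)) < tol:
            q = q_next
            break
        q = q_next
    return cap, q
```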

10.
In this paper, we investigate Bayesian generalized nonlinear mixed‐effects (NLME) regression models for zero‐inflated longitudinal count data. The methodology is motivated by and applied to colony forming unit (CFU) counts in extended bactericidal activity tuberculosis (TB) trials. Furthermore, for model comparisons, we present a generalized method for calculating the marginal likelihoods required to determine Bayes factors. A simulation study shows that the proposed zero‐inflated negative binomial regression model has good accuracy, precision, and credibility interval coverage. In contrast, conventional normal NLME regression models applied to log‐transformed count data, which handle zero counts as left censored values, may yield credibility intervals that undercover the true bactericidal activity of anti‐TB drugs. We therefore recommend that zero‐inflated NLME regression models should be fitted to CFU counts on the original scale, as an alternative to conventional normal NLME regression models on the logarithmic scale.
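As a minimal illustration of the zero-inflated negative binomial building block (a generic mean/dispersion parameterization; the function name and its separation from the paper's full Bayesian NLME machinery are assumptions), the log-pmf can be evaluated as follows.

```python
import numpy as np
from scipy.stats import nbinom

def zinb_logpmf(y, pi_zero, mu, size):
    """Log-pmf of a zero-inflated negative binomial count.

    pi_zero : probability of a structural zero
    mu      : negative binomial mean
    size    : negative binomial dispersion parameter
    Uses the mean parameterization p = size / (size + mu), so the NB mean is mu.
    """
    y = np.asarray(y)
    p = size / (size + mu)
    nb = nbinom.logpmf(y, size, p)
    # P(Y = 0) = pi_zero + (1 - pi_zero) * NB(0);  P(Y = k) = (1 - pi_zero) * NB(k) for k > 0
    log_zero = np.logaddexp(np.log(pi_zero),
                            np.log1p(-pi_zero) + nbinom.logpmf(0, size, p))
    return np.where(y == 0, log_zero, np.log1p(-pi_zero) + nb)
```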

11.
Two different forms of Akaike's information criterion (AIC) are compared for selecting the smooth terms in penalized spline additive mixed models. The conditional AIC (cAIC) has been used traditionally as a criterion for both estimating penalty parameters and selecting covariates in smoothing, and is based on the conditional likelihood given the smooth mean and on the effective degrees of freedom for a model fit. By comparison, the marginal AIC (mAIC) is based on the marginal likelihood from the mixed‐model formulation of penalized splines which has recently become popular for estimating smoothing parameters. To the best of the authors' knowledge, the use of mAIC for selecting covariates for smoothing in additive models is new. In the competing models considered for selection, covariates may have a nonlinear effect on the response, with the possibility of group‐specific curves. Simulations are used to compare the performance of cAIC and mAIC in model selection settings that have correlated and hierarchical smooth terms. In moderately large samples, both formulations of AIC perform extremely well at detecting the function that generated the data. The mAIC does better for simple functions, whereas the cAIC is more sensitive to detecting a true model that has complex and hierarchical terms.
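In one common simplified writing (generic notation, not the authors' exact definitions), the two criteria for a penalized-spline fit with a mixed-model representation are

\[
\mathrm{cAIC} = -2\,\ell_c\bigl(y \mid \hat\beta, \hat b\bigr) + 2\,\mathrm{edf},
\qquad
\mathrm{mAIC} = -2\,\ell_m\bigl(\hat\beta, \hat\theta\bigr) + 2\,r,
\]

where \(\ell_c\) conditions on the predicted spline coefficients \(\hat b\) (the fitted smooth mean) and is penalized by the effective degrees of freedom of the smoother, while \(\ell_m\) integrates the spline coefficients out as random effects and \(r\) counts the fixed‐effect and variance parameters of the marginal model.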

12.
When the unobservable Markov chain in a hidden Markov model is stationary, the marginal distribution of the observations is a finite mixture with the number of terms equal to the number of the states of the Markov chain. This suggests that the number of states of the unobservable Markov chain can be estimated by determining the number of mixture components in the marginal distribution. This paper presents new methods for estimating the number of states in a hidden Markov model, and coincidentally the unknown number of components in a finite mixture, based on penalized quasi‐likelihood and generalized quasi‐likelihood ratio methods constructed from the marginal distribution. The procedures advocated are simple to calculate, and results obtained in empirical applications indicate that they are as effective as currently available methods based on the full likelihood. Under fairly general regularity conditions, the methods proposed generate strongly consistent estimates of the unknown number of states or components.
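The finite‐mixture form being exploited is, in generic notation,

\[
f(y_t) \;=\; \sum_{k=1}^{K}\delta_k\, f_k(y_t;\theta_k),
\]

where \(\delta\) is the stationary distribution of the \(K\)-state chain and the \(f_k\) are the state‐dependent densities, so estimating \(K\) reduces to estimating the number of components of this marginal mixture.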

13.
The authors propose pseudo‐likelihood ratio tests for selecting semiparametric multivariate copula models in which the marginal distributions are unspecified, but the copula function is parameterized and can be misspecified. For the comparison of two models, the tests differ depending on whether the two copulas are generalized nonnested or generalized nested. For more than two models, the procedure is built on the reality check test of White (2000). Unlike White (2000), however, the test statistic is automatically standardized for generalized nonnested models (with the benchmark) and ignores generalized nested models asymptotically. The authors illustrate their approach with American insurance claim data.

14.
Markov networks are popular models for discrete multivariate systems where the dependence structure of the variables is specified by an undirected graph. To allow for more expressive dependence structures, several generalizations of Markov networks have been proposed. Here, we consider the class of contextual Markov networks which takes into account possible context‐specific independences among pairs of variables. Structure learning of contextual Markov networks is very challenging due to the extremely large number of possible structures. One of the main challenges has been to design a score by which a structure can be assessed in terms of model fit relative to complexity, without assuming chordality. Here, we introduce the marginal pseudo‐likelihood as an analytically tractable criterion for general contextual Markov networks. Our criterion is shown to yield a consistent structure estimator. Experiments demonstrate the favourable properties of our method in terms of predictive accuracy of the inferred models.
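For orientation, the sketch below evaluates an ordinary node-wise pseudo-log-likelihood for a plain binary pairwise Markov network; the marginal pseudo-likelihood of the paper additionally integrates the parameters out and accommodates context-specific structure, which is not reproduced here. The +/-1 coding and all names are assumptions.

```python
import numpy as np

def pairwise_pseudo_loglik(X, J, h):
    """Node-wise pseudo-log-likelihood of a binary pairwise Markov network.

    X : (n, d) array of observations coded as +/-1
    J : (d, d) symmetric interaction matrix with zero diagonal
    h : (d,)   vector of main effects

    For this coding, P(x_v | x_{-v}) = sigmoid(2 * x_v * eta_v) with
    eta_v = h_v + sum_u J_{uv} x_u, so the pseudo-log-likelihood is the sum
    of these conditional log-probabilities over all nodes and observations.
    """
    eta = X @ J + h                                # local field for every node
    return float(np.sum(-np.log1p(np.exp(-2.0 * X * eta))))
```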

15.
Composite marginal likelihoods are pseudolikelihoods constructed by compounding marginal densities. In several applications, they are convenient surrogates for the ordinary likelihood when it is too cumbersome or impractical to compute. This paper presents an overview of the topic with emphasis on applications.
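In its most common pairwise form, a composite marginal log-likelihood for an \(m\)-dimensional observation \(y=(y_1,\dots,y_m)\) is

\[
c\ell(\theta; y) \;=\; \sum_{i<j} w_{ij}\,\log f(y_i, y_j;\theta),
\]

so that only bivariate marginal densities \(f(y_i,y_j;\theta)\) need to be evaluated, with optional weights \(w_{ij}\); this is what makes the surrogate tractable when the full joint density is not.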

16.
Nonlinear mixed‐effects models are being widely used for the analysis of longitudinal data, especially from pharmaceutical research. They use random effects, which are latent and unobservable variables, so the random‐effects distribution is subject to misspecification in practice. In this paper, we first study the consequences of misspecifying the random‐effects distribution in nonlinear mixed‐effects models. Our study is focused on Gauss‐Hermite quadrature, which is now the routine method for calculation of the marginal likelihood in mixed models. We then present a formal diagnostic test to check the appropriateness of the assumed random‐effects distribution in nonlinear mixed‐effects models, which is very useful for real data analysis. Our findings show that the estimates of fixed‐effects parameters in nonlinear mixed‐effects models are generally robust to deviations from normality of the random‐effects distribution, but the estimates of variance components are very sensitive to the distributional assumption of random effects. Furthermore, a misspecified random‐effects distribution will either overestimate or underestimate the predictions of random effects. We illustrate the results using a real data application from an intensive pharmacokinetic study.
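As a minimal sketch of the Gauss‐Hermite step discussed here (a single normal random intercept and a Poisson response are assumed for concreteness; the names are illustrative), the marginal log-likelihood contribution of one cluster can be approximated as follows.

```python
import numpy as np
from scipy.stats import poisson

def gh_marginal_loglik(y, eta_fixed, sigma_b, n_nodes=20):
    """Gauss-Hermite approximation to one cluster's marginal log-likelihood in a
    Poisson model with a single normal random intercept b ~ N(0, sigma_b^2).

    y         : (n_obs,) observed counts for the cluster
    eta_fixed : (n_obs,) fixed-effect part of the linear predictor
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma_b * nodes                 # change of variables
    lam = np.exp(eta_fixed[:, None] + b[None, :])      # (n_obs, n_nodes)
    cond = poisson.pmf(y[:, None], lam).prod(axis=0)   # conditional likelihood at each node
    return float(np.log((weights * cond).sum() / np.sqrt(np.pi)))
```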

17.
In this work, we introduce a class of dynamic models for time series taking values on the unit interval. The proposed model follows a generalized linear model approach where the random component, conditioned on the past information, follows a beta distribution, while the conditional mean specification may include covariates and also an extra additive term given by the iteration of a map that can present chaotic behavior. The resulting model is very flexible and its systematic component can accommodate short‐ and long‐range dependence, periodic behavior, laminar phases, etc. We derive easily verifiable conditions for the stationarity of the proposed model, as well as conditions for the law of large numbers and a Birkhoff‐type theorem to hold. A Monte Carlo simulation study is performed to assess the finite sample behavior of the partial maximum likelihood approach for parameter estimation in the proposed model. Finally, an application to the proportion of stored hydroelectrical energy in Southern Brazil is presented.
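To make the idea of a chaotic map in the conditional mean concrete, the sketch below simulates a toy version with a logit link, one autoregressive term and an iterated logistic map; all parameter names and the specific map are assumptions, not the authors' exact specification.

```python
import numpy as np

def simulate_beta_series(n, beta0=0.2, phi=0.5, delta=0.3, prec=30.0,
                         r=3.9, c0=0.35, seed=None):
    """Simulate a unit-interval time series whose conditional distribution is
    beta with precision `prec` and whose conditional mean mixes an
    autoregressive term with an iterated logistic map (chaotic for r near 4)."""
    rng = np.random.default_rng(seed)
    logit = lambda u: np.log(u / (1.0 - u))
    expit = lambda v: 1.0 / (1.0 + np.exp(-v))
    y = np.empty(n)
    c, y_prev = c0, 0.5
    for t in range(n):
        c = r * c * (1.0 - c)                             # chaotic map iteration
        mu = expit(beta0 + phi * logit(y_prev) + delta * c)
        y[t] = rng.beta(mu * prec, (1.0 - mu) * prec)     # mean mu, precision prec
        y_prev = y[t]
    return y
```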

18.
Frailty models with a non‐parametric baseline hazard are widely used for the analysis of survival data. However, their maximum likelihood estimators can be substantially biased in finite samples, because the number of nuisance parameters associated with the baseline hazard increases with the sample size. The penalized partial likelihood based on a first‐order Laplace approximation still has non‐negligible bias. However, the second‐order Laplace approximation to a modified marginal likelihood for a bias reduction is infeasible because of the presence of too many complicated terms. In this article, we find adequate modifications of these likelihood‐based methods by using the hierarchical likelihood.

19.
We propose a new type of multivariate statistical model that permits non‐Gaussian distributions as well as the inclusion of conditional independence assumptions specified by a directed acyclic graph. These models feature a specific factorisation of the likelihood that is based on pair‐copula constructions and hence involves only univariate distributions and bivariate copulas, of which some may be conditional. We demonstrate maximum‐likelihood estimation of the parameters of such models and compare them to various competing models from the literature. A simulation study investigates the effects of model misspecification and highlights the need for non‐Gaussian conditional independence models. The proposed methods are finally applied to modeling financial return data. (The Canadian Journal of Statistics 40: 86–109; 2012)
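For three variables, one pair-copula (D-vine) factorisation of the kind referred to is, in generic notation,

\[
f(x_1,x_2,x_3)=\Bigl[\prod_{k=1}^{3} f_k(x_k)\Bigr]
\,c_{12}\bigl(F_1(x_1),F_2(x_2)\bigr)
\,c_{23}\bigl(F_2(x_2),F_3(x_3)\bigr)
\,c_{13\mid 2}\bigl(F(x_1\mid x_2),F(x_3\mid x_2)\bigr),
\]

so only univariate margins and (possibly conditional) bivariate copulas appear, each of which may be non-Gaussian; imposing conditional independence of \(X_1\) and \(X_3\) given \(X_2\), as a directed acyclic graph might require, amounts to setting \(c_{13\mid 2}\) to the independence copula.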

20.
This study gives a generalization of Birch's log‐linear model numerical invariance result. The generalization is given in the form of a sufficient condition for numerical invariance that is simple to verify in practice and is applicable for a much broader class of models than log‐linear models. Unlike Birch's log‐linear result, the generalization herein does not rely on any relationship between sufficient statistics and maximum likelihood estimates. Indeed the generalization does not rely on the existence of a reduced set of sufficient statistics. Instead, the concept of homogeneity takes centre stage. Several examples illustrate the utility of non‐log‐linear models, the invariance (and non‐invariance) of fitted values, and the invariance (and non‐invariance) of certain approximating distributions.
