Similar Literature
20 similar documents retrieved.
1.
A multi‐level model allows the possibility of marginalization across levels in different ways, yielding more than one possible marginal likelihood. Since log‐likelihoods are often used in classical model comparison, the question to ask is which likelihood should be chosen for a given model. The authors employ a Bayesian framework to shed some light on qualitative comparison of the likelihoods associated with a given model. They connect these results to related issues of the effective number of parameters, penalty function, and consistent definition of a likelihood‐based model choice criterion. In particular, with a two‐stage model they show that, very generally, regardless of hyperprior specification or how much data is collected or what the realized values are, a priori, the first‐stage likelihood is expected to be smaller than the marginal likelihood. A posteriori, these expectations are reversed and the disparities worsen with increasing sample size and with increasing number of model levels.
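To make the comparison concrete, here is a minimal sketch of a generic two‐stage hierarchy and the two likelihoods being contrasted; the notation is assumed here rather than taken from the paper:
\[
y_i \mid \theta_i \sim f(y_i \mid \theta_i), \qquad \theta_i \mid \eta \sim g(\theta_i \mid \eta), \qquad i = 1, \dots, n,
\]
\[
L_1(\theta_1,\dots,\theta_n) = \prod_{i=1}^n f(y_i \mid \theta_i), \qquad L_m(\eta) = \prod_{i=1}^n \int f(y_i \mid \theta)\, g(\theta \mid \eta)\, d\theta .
\]
The first‐stage likelihood L_1 conditions on the latent θ_i, while the marginal likelihood L_m integrates them out; the paper's results concern the prior and posterior expected ordering of these two quantities.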

2.
The authors extend the classical Cormack‐Jolly‐Seber mark‐recapture model to account for both temporal and spatial movement through a series of markers (e.g., dams). Survival rates are modeled as a function of (possibly) unobserved travel times. Because of the complex nature of the likelihood, they use a Bayesian approach based on the complete data likelihood, and integrate the posterior through Markov chain Monte Carlo methods. They test the model through simulations and also apply it to actual salmon data arising from the Columbia River system. The methodology was developed for use by the Pacific Ocean Shelf Tracking (POST) project.
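For reference, a sketch of the classical Cormack‐Jolly‐Seber structure that is being extended, in assumed notation: with survival probabilities φ_j, detection probabilities p_j, release at occasion r, capture indicators w_j, and last detection at occasion l, an animal contributes
\[
P(\text{history}) = \Big\{ \prod_{j=r+1}^{l} \phi_{j-1}\, p_j^{\,w_j} (1-p_j)^{1-w_j} \Big\}\, \chi_l, \qquad \chi_j = 1 - \phi_j\big(1 - (1-p_{j+1})\,\chi_{j+1}\big),
\]
where χ_j is the probability of never being detected after occasion j. The paper's extension layers spatial structure (the series of markers such as dams) and travel‐time‐dependent survival onto this likelihood.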

3.
The authors consider the empirical likelihood method for the regression model of mean quality‐adjusted lifetime with right censoring. They show that an empirical log‐likelihood ratio for the vector of the regression parameters is asymptotically a weighted sum of independent chi‐squared random variables. They adjust this empirical log‐likelihood ratio so that the limiting distribution is a standard chi‐square and construct corresponding confidence regions. Simulation studies lead them to conclude that empirical likelihood methods outperform the normal approximation methods in terms of coverage probability. They illustrate their methods with a data example from a breast cancer clinical trial study.
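For context, the generic empirical likelihood ratio underlying results of this type, in assumed notation: given an estimating function g_i(β) with zero mean at the true β_0,
\[
R(\beta) = \max\Big\{ \prod_{i=1}^n n p_i : p_i \ge 0,\ \sum_{i=1}^n p_i = 1,\ \sum_{i=1}^n p_i\, g_i(\beta) = 0 \Big\},
\]
and −2 log R(β_0) is the empirical log‐likelihood ratio. Under right censoring this ratio converges to a weighted sum of independent chi‐squared variables rather than a standard chi‐square, which is why the authors introduce an adjustment before constructing confidence regions.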

4.
In survival data analysis, the interval censoring problem has generally been treated via likelihood methods. Because this likelihood is complex, it is often assumed that the censoring mechanisms do not affect the mortality process. The authors specify conditions that ensure the validity of such a simplified likelihood. They prove the equivalence between different characterizations of noninformative censoring and define a constant‐sum condition analogous to the one derived in the context of right censoring. They also prove that when the noninformative or constant‐sum condition holds, the simplified likelihood can be used to obtain the nonparametric maximum likelihood estimator of the death time distribution function.
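The "simplified likelihood" in question is the one usually written down for interval‐censored data when the censoring mechanism is ignored; a sketch in assumed notation, with F the death‐time distribution function and (L_i, R_i] the observed interval for subject i (R_i = ∞ for right censoring):
\[
L_{\text{simp}}(F) = \prod_{i=1}^{n} \big\{ F(R_i) - F(L_i) \big\}.
\]
The noninformative and constant‐sum conditions discussed above are precisely the conditions under which maximizing this expression yields the nonparametric maximum likelihood estimator of F.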

5.
The authors define a class of “partially linear single‐index” survival models that are more flexible than the classical proportional hazards regression models in their treatment of covariates. The latter enter the proposed model either via a parametric linear form or a nonparametric single‐index form. It is then possible to model both linear and functional effects of covariates on the logarithm of the hazard function and if necessary, to reduce the dimensionality of multiple covariates via the single‐index component. The partially linear hazards model and the single‐index hazards model are special cases of the proposed model. The authors develop a likelihood‐based inference to estimate the model components via an iterative algorithm. They establish an asymptotic distribution theory for the proposed estimators, examine their finite‐sample behaviour through simulation, and use a set of real data to illustrate their approach.
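A sketch of the model class being described, in assumed notation: for covariates split into a linear part X and a single‐index part Z, the hazard is modeled as
\[
\lambda(t \mid X, Z) = \lambda_0(t)\, \exp\big\{ \beta^{\top} X + \psi(\theta^{\top} Z) \big\},
\]
where ψ is an unspecified smooth function and θ is normalized (e.g., unit length) for identifiability. Taking ψ linear recovers the Cox proportional hazards model; with a scalar Z the term ψ(Z) gives the partially linear hazards model, and dropping the linear part gives the single‐index hazards model, matching the special cases mentioned above.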

6.
Covariate measurement error problems have been extensively studied in the context of right‐censored data but less so for current status data. Motivated by the zebrafish basal cell carcinoma (BCC) study, where the occurrence time of BCC was only known to lie before or after a sacrifice time and where the covariate (Sonic hedgehog expression) was measured with error, the authors describe a semiparametric maximum likelihood method for analyzing current status data with mismeasured covariates under the proportional hazards model. They show that the estimator of the regression coefficient is asymptotically normal and efficient and that the profile likelihood ratio test is asymptotically Chi‐squared. They also provide an easily implemented algorithm for computing the estimators. They evaluate their method through simulation studies, and illustrate it with a real data example. The Canadian Journal of Statistics 39: 73–88; 2011 © 2011 Statistical Society of Canada
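For orientation, a sketch of the current status likelihood under the proportional hazards model, ignoring the measurement‐error component and using assumed notation: with monitoring (sacrifice) time C, event indicator Δ = 1{T ≤ C}, covariate Z, and baseline cumulative hazard Λ,
\[
P(T \le C \mid Z, C) = 1 - \exp\!\big\{-\Lambda(C)\, e^{\beta^{\top} Z}\big\}, \qquad
L(\beta, \Lambda) = \prod_{i=1}^{n} \big[1 - e^{-\Lambda(C_i)\, e^{\beta^{\top} Z_i}}\big]^{\Delta_i} \big[e^{-\Lambda(C_i)\, e^{\beta^{\top} Z_i}}\big]^{1-\Delta_i}.
\]
The contribution of the paper is to handle the case where Z (Sonic hedgehog expression) is observed only through an error‐prone surrogate.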

7.
Motivated by problems of modelling torsional angles in molecules, Singh, Hnizdo & Demchuk (2002) proposed a bivariate circular model which is a natural torus analogue of the bivariate normal distribution and a natural extension of the univariate von Mises distribution to the bivariate case. The authors present here a multivariate extension of the bivariate model of Singh, Hnizdo & Demchuk (2002). They study the conditional distributions and investigate the shapes of marginal distributions for a special case. The methods of moments and pseudo‐likelihood are considered for the estimation of parameters of the new distribution. The authors investigate the efficiency of the pseudo‐likelihood approach in three dimensions. They illustrate their methods with protein data on conformational angles.
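The bivariate model of Singh, Hnizdo & Demchuk (2002) referred to here is often called the sine model; a sketch of its density, up to the normalizing constant, for angles (θ, φ):
\[
f(\theta, \varphi) \propto \exp\big\{ \kappa_1 \cos(\theta - \mu) + \kappa_2 \cos(\varphi - \nu) + \lambda \sin(\theta - \mu)\sin(\varphi - \nu) \big\},
\]
with concentration parameters κ_1, κ_2 ≥ 0 and association parameter λ; setting λ = 0 gives independent von Mises marginals. A natural multivariate extension, presumably of the kind studied here, replaces the single λ with a matrix of pairwise sine‐interaction terms.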

8.
The authors show how an adjusted pseudo‐empirical likelihood ratio statistic that is asymptotically distributed as a chi‐square random variable can be used to construct confidence intervals for a finite population mean or a finite population distribution function from complex survey samples. They consider both non‐stratified and stratified sampling designs, with or without auxiliary information. They examine the behaviour of estimates of the mean and the distribution function at specific points using simulations calling on the Rao‐Sampford method of unequal probability sampling without replacement. They conclude that the pseudo‐empirical likelihood ratio confidence intervals are superior to those based on the normal approximation, whether in terms of coverage probability, tail error rates or average length of the intervals.
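A sketch of the pseudo‐empirical likelihood being adjusted, with assumed notation: for a sample s with design weights d_i,
\[
\ell_{\text{PEL}}(p) = \sum_{i \in s} d_i \log p_i, \qquad \text{maximized subject to } p_i > 0,\ \sum_{i \in s} p_i = 1,
\]
plus benchmark constraints when auxiliary information is available; profiling in the additional constraint Σ_{i∈s} p_i y_i = θ gives the ratio statistic for a mean θ. Because of the design weights, the raw ratio is not asymptotically chi‐square, and, roughly speaking, a design‐effect‐type rescaling produces the adjusted statistic described above.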

9.
We propose a spline‐based semiparametric maximum likelihood approach to analysing the Cox model with interval‐censored data. With this approach, the baseline cumulative hazard function is approximated by a monotone B‐spline function. We extend the generalized Rosen algorithm to compute the maximum likelihood estimate. We show that the estimator of the regression parameter is asymptotically normal and semiparametrically efficient, although the estimator of the baseline cumulative hazard function converges at a rate slower than root‐n. We also develop an easy‐to‐implement method for consistently estimating the standard error of the estimated regression parameter, which facilitates the proposed inference procedure for the Cox model with interval‐censored data. The proposed method is evaluated by simulation studies regarding its finite sample performance and is illustrated using data from a breast cosmesis study.
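A sketch of the spline parameterization and observed‐data likelihood, in assumed notation, with (L_i, R_i] the censoring interval for event time T_i and Z_i the covariates:
\[
\Lambda_0(t) = \sum_{k=1}^{K} \gamma_k B_k(t), \qquad 0 \le \gamma_1 \le \gamma_2 \le \cdots \le \gamma_K,
\]
\[
L(\beta, \gamma) = \prod_{i=1}^{n} \Big[ \exp\!\big\{-\Lambda_0(L_i)\, e^{\beta^{\top} Z_i}\big\} - \exp\!\big\{-\Lambda_0(R_i)\, e^{\beta^{\top} Z_i}\big\} \Big],
\]
with the convention Λ_0(∞) = ∞ for right‐censored subjects. Non‐decreasing B‐spline coefficients guarantee a monotone Λ_0, and the order constraints on γ are what the generalized Rosen algorithm handles during maximization.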

10.
The authors consider a weighted version of the classical likelihood that applies when the need is felt to diminish the role of some of the data in order to trade bias for precision. They propose an axiomatic derivation of the weighted likelihood, for which they show that aspects of classical theory continue to obtain. They suggest a data‐based method of selecting the weights and show that it leads to the James‐Stein estimator in various contexts. They also provide applications.
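A sketch of the weighted log‐likelihood in question, with assumed notation: for data y_1, …, y_n and non‐negative weights w_1, …, w_n,
\[
\ell_w(\theta) = \sum_{i=1}^{n} w_i \log f(y_i; \theta),
\]
so the ordinary likelihood corresponds to w_i ≡ 1. Down‐weighting some observations deliberately introduces bias in exchange for reduced variance, and the data‐based choice of the w_i is what connects the resulting estimator to James‐Stein‐type shrinkage in the contexts the authors consider.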

11.
Quantitative cancer dose-response models play an important role in cancer risk assessment. They also play a role in regulatory processes associated with potential occupational or environmental exposures. The multistage model is currently the most widely used cancer dose-response model. This paper describes the construction of the likelihood function in the special case of the multistage cancer dose-response model. The concavity of the likelihood function is also established. A criterion is developed to determine the degree of the polynomial portion of the multistage model. Finally, the restricted and unrestricted maximum likelihood estimators are considered and applied to some experimental data sets.
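As a concrete illustration of the likelihood construction for quantal bioassay data, here is a minimal sketch of restricted (non‐negative coefficient) maximum likelihood fitting of the multistage model P(d) = 1 − exp{−(q_0 + q_1 d + … + q_k d^k)} with binomial tumour counts. The data values, the polynomial degree, and the optimizer settings are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Illustrative quantal bioassay data: dose, number of animals, number with tumours.
dose    = np.array([0.0, 1.0, 2.0, 4.0])
n_total = np.array([50,  50,  50,  50])
n_tum   = np.array([2,   6,   14,  30])

def neg_log_lik(q, d, n, y):
    """Binomial negative log-likelihood of the multistage model.

    P(d) = 1 - exp(-(q0 + q1*d + ... + qk*d^k)), with all q_j >= 0,
    i.e. the restricted parameter space referred to in the abstract.
    """
    poly = np.polyval(q[::-1], d)           # q0 + q1*d + ... + qk*d^k
    p = 1.0 - np.exp(-poly)
    p = np.clip(p, 1e-12, 1 - 1e-12)        # guard the logs at the boundary
    ll = (y * np.log(p) + (n - y) * np.log(1.0 - p)
          + gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1))
    return -np.sum(ll)

degree = 2                                   # fixed here for illustration
q_start = np.full(degree + 1, 0.1)           # crude starting values
fit = minimize(neg_log_lik, q_start, args=(dose, n_total, n_tum),
               bounds=[(0.0, None)] * (degree + 1), method="L-BFGS-B")
print("restricted MLE of q:", fit.x)
```

Here the polynomial degree is fixed in advance; the paper develops a criterion for choosing it and also studies the unrestricted estimator, neither of which is reproduced in this sketch.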

12.
The linear regression model for right censored data, also known as the accelerated failure time model using the logarithm of survival time as the response variable, is a useful alternative to the Cox proportional hazards model. Empirical likelihood as a non‐parametric approach has been demonstrated to have many desirable merits thanks to its robustness against model misspecification. However, the linear regression model with right censored data cannot directly benefit from the empirical likelihood for inferences mainly because of dependent elements in estimating equations of the conventional approach. In this paper, we propose an empirical likelihood approach with a new estimating equation for linear regression with right censored data. A nested coordinate algorithm with majorization is used for solving the optimization problems with non‐differentiable objective function. We show that the Wilks' theorem holds for the new empirical likelihood. We also consider the variable selection problem with empirical likelihood when the number of predictors can be large. Because the new estimating equation is non‐differentiable, a quadratic approximation is applied to study the asymptotic properties of penalized empirical likelihood. We prove the oracle properties and evaluate the properties with simulated data. We apply our method to a Surveillance, Epidemiology, and End Results small intestine cancer dataset.
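For orientation, the model underlying this work in assumed notation: the accelerated failure time (log‐linear) regression with right censoring,
\[
\log T_i = x_i^{\top} \beta + \varepsilon_i, \qquad Y_i = \min\{\log T_i, \log C_i\}, \qquad \delta_i = 1\{T_i \le C_i\},
\]
with the error distribution left unspecified. The empirical likelihood ratio then takes the same product form sketched under item 3, but built from the authors' new estimating equation; the non‐differentiability of that equation is what motivates the majorization and quadratic‐approximation steps described above.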

13.
This paper deals with a longitudinal semi‐parametric regression model in a generalised linear model setup for repeated count data collected from a large number of independent individuals. To accommodate the longitudinal correlations, we consider a dynamic model for repeated counts which has decaying auto‐correlations as the time lag increases between the repeated responses. The semi‐parametric regression function involved in the model contains a specified regression function in some suitable time‐dependent covariates and a non‐parametric function in some other time‐dependent covariates. As far as the inference is concerned, because the non‐parametric function is of secondary interest, we estimate this function consistently using the well‐known quasi‐likelihood approach based on an independence assumption. Next, the proposed longitudinal correlation structure and the estimate of the non‐parametric function are used to develop a semi‐parametric generalised quasi‐likelihood approach for consistent and efficient estimation of the regression effects in the parametric regression function. The finite sample performance of the proposed estimation approach is examined through an intensive simulation study based on both large and small samples. Both balanced and unbalanced cluster sizes are incorporated in the simulation study. The asymptotic properties of the estimators are also given. The estimation methodology is illustrated by reanalysing the well‐known health care utilisation data consisting of counts of yearly visits to a physician by 180 individuals for four years and several important primary and secondary covariates.
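A sketch of the semi‐parametric mean structure being described, in assumed notation: for counts y_it on individual i at time t, with parametric covariates x_it and non‐parametric covariates z_it,
\[
E(y_{it}) = \mu_{it} = \exp\big\{ x_{it}^{\top} \beta + \psi(z_{it}) \big\},
\]
with longitudinal correlations corr(y_it, y_is) that decay as |t − s| grows, for example of autoregressive type (the exact dynamic model is not reproduced here). The non‐parametric function ψ is estimated first under a working independence quasi‐likelihood, and β is then estimated by generalised quasi‐likelihood that exploits the decaying correlation structure, as stated above.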

14.
The authors study a varying‐coefficient regression model in which some of the covariates are measured with additive errors. They find that the usual local linear estimator (LLE) of the coefficient functions is biased and that the usual correction for attenuation fails to work. They propose a corrected LLE and show that it is consistent and asymptotically normal, and they also construct a consistent estimator for the model error variance. They then extend the generalized likelihood technique to develop a goodness of fit test for the model. They evaluate these various procedures through simulation studies and use them to analyze data from the Framingham Heart Study.
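A sketch of the model class, with assumed notation: the varying‐coefficient regression with additively mismeasured covariates is
\[
Y = \sum_{j=1}^{p} a_j(U)\, X_j + \varepsilon, \qquad W_j = X_j + e_j,
\]
where the coefficient functions a_j(·) are unknown smooth functions of an index variable U, the X_j are observed only through the surrogates W_j, and the e_j are additive measurement errors independent of (X, U, ε). Naively substituting W for X in the usual local linear estimator is what produces the bias referred to above.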

15.
The authors consider regression analysis for binary data collected repeatedly over time on members of numerous small clusters of individuals sharing a common random effect that induces dependence among them. They propose a mixed model that can accommodate both these structural and longitudinal dependencies. They estimate the parameters of the model consistently and efficiently using generalized estimating equations. They show through simulations that their approach yields significant gains in mean squared error when estimating the random effects variance and the longitudinal correlations, while providing estimates of the fixed effects that are just as precise as under a generalized penalized quasi‐likelihood approach. Their method is illustrated using smoking prevention data.
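A sketch of the kind of mixed model being described, in assumed notation: for the binary response y_ijt of member j of cluster i at time t,
\[
\operatorname{logit} P(y_{ijt} = 1 \mid b_i) = x_{ijt}^{\top} \beta + b_i, \qquad b_i \sim N(0, \sigma_b^2),
\]
with the shared random effect b_i inducing the structural (within‐cluster) dependence and an additional serial correlation structure over t capturing the longitudinal dependence. The generalized estimating equations then target β, σ_b^2 and the longitudinal correlation parameters jointly.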

16.
The authors address the problem of estimating an inter‐event distribution on the basis of count data. They derive a nonparametric maximum likelihood estimate of the inter‐event distribution utilizing the EM algorithm both in the case of an ordinary renewal process and in the case of an equilibrium renewal process. In the latter case, the iterative estimation procedure follows the basic scheme proposed by Vardi for estimating an inter‐event distribution on the basis of time‐interval data; it combines the outputs of the E‐step corresponding to the inter‐event distribution and to the length‐biased distribution. The authors also investigate a penalized likelihood approach to provide the proposed estimation procedure with regularization capabilities. They evaluate the practical estimation procedure using simulated count data and apply it to real count data representing the elongation of coffee‐tree leafy axes.
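For orientation, the distinction between the two observation schemes, in assumed notation: if f is the inter‐event density with mean μ and survivor function S(x) = 1 − F(x), then under an ordinary renewal process observation starts at an event, whereas under an equilibrium (stationary) renewal process the first interval follows the forward recurrence‐time density
\[
f_e(x) = \frac{S(x)}{\mu} = \frac{1 - F(x)}{\mu},
\]
and the length‐biased density x f(x)/μ is what enters the E‐step in Vardi's scheme. The EM algorithm treats the unobserved inter‐event times underlying each count as missing data.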

17.
The authors propose a robust bounded‐influence estimator for binary regression with continuous outcomes, an alternative to logistic regression when the investigator's interest focuses on the proportion of subjects who fall below or above a cut‐off value. The authors show both theoretically and empirically that in this context, the maximum likelihood estimator is sensitive to model misspecifications. They show that their robust estimator is more stable and nearly as efficient as maximum likelihood when the model assumptions are satisfied. Moreover, it leads to safer inference. The authors compare the different estimators in a simulation study and present an analysis of hypertension based on data from the Harlem survey.

18.
Right, left or interval censored multivariate data can be represented by an intersection graph. Focussing on the bivariate case, the authors relate the structure of such an intersection graph to the support of the nonparametric maximum likelihood estimate (NPMLE) of the cumulative distribution function (CDF) for such data. They distinguish two types of non‐uniqueness of the NPMLE: representational, arising when the likelihood is unaffected by the distribution of the estimated probability mass within regions, and mixture, arising when the masses themselves are not unique. The authors provide a brief overview of estimation techniques and examine three data sets.

19.
Non-Gaussian spatial responses are usually modeled using a spatial generalized linear mixed model with spatial random effects. The likelihood function of this model usually cannot be given in closed form, so the maximum likelihood approach is very challenging. There are numerical ways to maximize the likelihood function, such as the Monte Carlo Expectation Maximization and Quadrature Pairwise Expectation Maximization algorithms, but they may be computationally very slow or even prohibitive. The Gauss–Hermite quadrature approximation is only suitable for low-dimensional latent variables, and its accuracy depends on the number of quadrature points. Here, we propose a new approximate pairwise maximum likelihood method for inference in the spatial generalized linear mixed model. This approximate method is fast and deterministic, using no sampling-based strategies. The performance of the proposed method is illustrated through two simulation examples, and practical aspects are investigated through a case study on a rainfall data set.
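A sketch of the pairwise (composite) likelihood idea that the proposed method approximates, in assumed notation: with latent Gaussian spatial effects u(s_i) and responses y_i conditionally independent given the u's,
\[
\operatorname{p}\ell(\theta) = \sum_{(i,j):\ \|s_i - s_j\| \le d} \log \iint f\big(y_i \mid u_i\big)\, f\big(y_j \mid u_j\big)\, \phi_2\big(u_i, u_j; \Sigma_{ij}(\theta)\big)\, du_i\, du_j,
\]
where φ_2 is the bivariate normal density of the pair of random effects and d is a distance cutoff. Each term involves only a two‐dimensional integral, which is what makes a fast, deterministic approximation feasible in place of full‐likelihood Monte Carlo methods.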

20.
In a recent study, dynamic mixed‐effects regression models for count data were extended to a semi‐parametric context. However, when one deals with other discrete data such as binary responses, the results based on count data models are not directly applicable. In this paper, we therefore begin with existing binary dynamic mixed models and generalise them to the semi‐parametric context. For inference, we use a new semi‐parametric conditional quasi‐likelihood (SCQL) approach for the estimation of the non‐parametric function involved in the semi‐parametric model, and a semi‐parametric generalised quasi‐likelihood (SGQL) approach for the estimation of the main regression, dynamic dependence and random effects variance parameters. A semi‐parametric maximum likelihood (SML) approach is also used as a comparison to the SGQL approach. The properties of the estimators are examined both asymptotically and empirically. More specifically, the consistency of the estimators is established and finite sample performances of the estimators are examined through an intensive simulation study.
