Similar Articles (20 results)
1.
The authors describe a method for assessing model inadequacy in maximum likelihood estimation of a generalized linear mixed model. They treat the latent random effects in the model as missing data and develop the influence analysis on the basis of a Q-function which is associated with the conditional expectation of the complete-data log-likelihood function in the EM algorithm. They propose a procedure to detect influential observations in six model perturbation schemes. They also illustrate their methodology in a hypothetical situation and in two real cases.

2.
The authors consider a novel class of nonlinear time series models based on local mixtures of regressions of exponential family models, where the covariates include functions of lags of the dependent variable. They give conditions to guarantee consistency of the maximum likelihood estimator for correctly specified models, with stationary and nonstationary predictors. They show that consistency of the maximum likelihood estimator still holds under model misspecification. They also provide probabilistic results for the proposed model when the vector of predictors contains only lags of transformations of the modeled time series. They illustrate the consistency of the maximum likelihood estimator and the probabilistic properties via Monte Carlo simulations. Finally, they present an application using real data.

3.
The authors propose the local likelihood method for the time-varying coefficient additive hazards model. They use the Newton-Raphson algorithm to maximize the likelihood into which a local polynomial expansion has been incorporated. They establish the asymptotic properties for the time-varying coefficient estimators and derive explicit expressions for the variance and bias. The authors present simulation results describing the performance of their approach for finite sample sizes. Their numerical comparisons show the stability and efficiency of the local maximum likelihood estimator. They finally illustrate their proposal with data from a laryngeal cancer clinical study.
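The general recipe of maximizing a kernel-weighted local-polynomial likelihood by Newton-Raphson can be sketched in a simpler setting than the additive hazards model of the paper. The block below fits a local-linear Poisson log-likelihood at a target point; the Gaussian kernel, the Poisson model, and all function names are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def local_linear_poisson(t, y, t0, h):
    """Estimate f(t0) in y_i ~ Poisson(exp(f(t_i))) by Newton-Raphson on a
    kernel-weighted local-linear log-likelihood (illustrative sketch)."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)           # Gaussian kernel weights
    X = np.column_stack([np.ones_like(t), t - t0])   # local-linear expansion
    beta = np.zeros(2)                               # start at (0, 0)
    for _ in range(50):
        mu = np.exp(X @ beta)
        grad = X.T @ (w * (y - mu))                  # weighted score
        if np.linalg.norm(grad) < 1e-8:
            break
        hess = -(X * (w * mu)[:, None]).T @ X        # weighted Hessian
        beta = beta - np.linalg.solve(hess, grad)    # Newton-Raphson step
    return beta                                      # (f(t0), f'(t0)) estimates
```

At convergence the weighted score is (numerically) zero, which is the defining property of the local maximum likelihood estimator.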

4.
The authors consider the empirical likelihood method for the regression model of mean quality-adjusted lifetime with right censoring. They show that an empirical log-likelihood ratio for the vector of the regression parameters is asymptotically a weighted sum of independent chi-squared random variables. They adjust this empirical log-likelihood ratio so that the limiting distribution is a standard chi-square and construct corresponding confidence regions. Simulation studies lead them to conclude that empirical likelihood methods outperform the normal approximation methods in terms of coverage probability. They illustrate their methods with a data example from a breast cancer clinical trial study.

5.
The authors extend the classical Cormack-Jolly-Seber mark-recapture model to account for both temporal and spatial movement through a series of markers (e.g., dams). Survival rates are modeled as a function of (possibly) unobserved travel times. Because of the complex nature of the likelihood, they use a Bayesian approach based on the complete data likelihood and integrate the posterior through Markov chain Monte Carlo methods. They test the model through simulations and also apply it to actual salmon data from the Columbia River system. The methodology was developed for use by the Pacific Ocean Shelf Tracking (POST) project.

6.
A multi-level model allows the possibility of marginalization across levels in different ways, yielding more than one possible marginal likelihood. Since log-likelihoods are often used in classical model comparison, the question to ask is which likelihood should be chosen for a given model. The authors employ a Bayesian framework to shed some light on qualitative comparison of the likelihoods associated with a given model. They connect these results to related issues of the effective number of parameters, penalty function, and consistent definition of a likelihood-based model choice criterion. In particular, with a two-stage model they show that, very generally, regardless of hyperprior specification or how much data is collected or what the realized values are, a priori, the first-stage likelihood is expected to be smaller than the marginal likelihood. A posteriori, these expectations are reversed and the disparities worsen with increasing sample size and with increasing number of model levels.

7.
Gaussian mixture model-based clustering is now a standard tool to determine a hypothetical underlying structure in continuous data. However, many usual parsimonious models, despite either their appealing geometrical interpretation or their ability to deal with high dimensional data, suffer from major drawbacks due to scale dependence or unsustainability of the constraints after projection. In this work we present a new family of parsimonious Gaussian models based on a variance-correlation decomposition of the covariance matrices. These new models are stable when projected into the canonical planes and are thus faithfully representable in low dimension. They are also stable under a change of the measurement units of the data, and such a change does not affect model selection based on likelihood criteria. We highlight all these stability properties by a specific graphical representation of each model. A detailed Generalized EM (GEM) algorithm is also provided for inference in every model. Then, on biological and geological data, we compare our stable models to standard ones (geometrical models and factor analyzer models), which highlights the benefit of unit-free models.
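The variance-correlation decomposition underlying these models writes each covariance matrix as Sigma = T R T, where T is the diagonal matrix of standard deviations and R the correlation matrix. The short numpy sketch below (illustrative only, not the authors' parameterization of the full model family) shows why constraints placed on R survive a change of measurement units.

```python
import numpy as np

def variance_correlation(cov):
    """Decompose a covariance matrix as cov = T @ R @ T,
    with T diagonal (standard deviations) and R a correlation matrix."""
    sd = np.sqrt(np.diag(cov))
    T = np.diag(sd)
    R = cov / np.outer(sd, sd)
    return T, R

cov = np.array([[4.0, 1.2], [1.2, 9.0]])
T, R = variance_correlation(cov)
assert np.allclose(T @ R @ T, cov)

# Changing measurement units rescales the data by a diagonal matrix D,
# which changes the covariance to D @ cov @ D ...
D = np.diag([1000.0, 0.01])
_, R_rescaled = variance_correlation(D @ cov @ D)
# ... but leaves the correlation part untouched: it is unit-free.
assert np.allclose(R, R_rescaled)
```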

8.
Longitudinal data often contain missing observations, and it is in general difficult to justify particular missing data mechanisms, whether random or not, that may be hard to distinguish. The authors describe a likelihood-based approach to estimating both the mean response and association parameters for longitudinal binary data with drop-outs. They specify marginal and dependence structures as regression models which link the responses to the covariates. They illustrate their approach using a data set from the Waterloo Smoking Prevention Project. They also report the results of simulation studies carried out to assess the performance of their technique under various circumstances.

9.
If a population contains many zero values and the sample size is not very large, the traditional normal approximation-based confidence intervals for the population mean may have poor coverage probabilities. This problem is substantially reduced by constructing parametric likelihood ratio intervals when an appropriate mixture model can be found. In the context of survey sampling, however, there is a general preference for making minimal assumptions about the population under study. The authors have therefore investigated the coverage properties of nonparametric empirical likelihood confidence intervals for the population mean. They show that under a variety of hypothetical populations, these intervals often outperformed parametric likelihood intervals by having more balanced coverage rates and larger lower bounds. The authors illustrate their methodology using data from the Canadian Labour Force Survey for the year 2000.
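Owen-style empirical likelihood for a population mean reduces to a one-dimensional dual problem in a Lagrange multiplier. The sketch below (plain Newton iteration on the dual, without the survey weights a Labour Force Survey analysis would require) is an assumption-laden illustration rather than the authors' procedure; an EL confidence interval is the set of mu values where the returned statistic stays below a chi-squared quantile.

```python
import numpy as np

def el_log_ratio(x, mu, tol=1e-10, max_iter=100):
    """-2 log empirical likelihood ratio for the population mean mu
    (illustrative sketch of Owen's construction)."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                      # mu outside the convex hull of x
    lam = 0.0                              # Lagrange multiplier
    for _ in range(max_iter):
        w = 1.0 + lam * z
        grad = np.sum(z / w)               # dual estimating equation
        if abs(grad) < tol:
            break
        hess = -np.sum(z ** 2 / w ** 2)    # strictly negative
        step = grad / hess
        # halve the step until all implied weights stay positive
        while np.any(1.0 + (lam - step) * z <= 0):
            step /= 2.0
        lam -= step
    return 2.0 * np.sum(np.log(1.0 + lam * z))
```

The statistic is zero at the sample mean and grows as mu moves away from it, which is what makes zero-inflated samples tractable without a parametric mixture.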

10.
The authors define a class of "partially linear single-index" survival models that are more flexible than the classical proportional hazards regression models in their treatment of covariates. The latter enter the proposed model either via a parametric linear form or a nonparametric single-index form. It is then possible to model both linear and functional effects of covariates on the logarithm of the hazard function and, if necessary, to reduce the dimensionality of multiple covariates via the single-index component. The partially linear hazards model and the single-index hazards model are special cases of the proposed model. The authors develop a likelihood-based inference to estimate the model components via an iterative algorithm. They establish an asymptotic distribution theory for the proposed estimators, examine their finite-sample behaviour through simulation, and use a set of real data to illustrate their approach.

11.
The authors consider regression analysis for binary data collected repeatedly over time on members of numerous small clusters of individuals sharing a common random effect that induces dependence among them. They propose a mixed model that can accommodate both these structural and longitudinal dependencies. They estimate the parameters of the model consistently and efficiently using generalized estimating equations. They show through simulations that their approach yields significant gains in mean squared error when estimating the random effects variance and the longitudinal correlations, while providing estimates of the fixed effects that are just as precise as under a generalized penalized quasi-likelihood approach. Their method is illustrated using smoking prevention data.

12.
Covariate measurement error problems have been extensively studied in the context of right-censored data but less so for current status data. Motivated by the zebrafish basal cell carcinoma (BCC) study, where the occurrence time of BCC was only known to lie before or after a sacrifice time and where the covariate (Sonic hedgehog expression) was measured with error, the authors describe a semiparametric maximum likelihood method for analyzing current status data with mismeasured covariates under the proportional hazards model. They show that the estimator of the regression coefficient is asymptotically normal and efficient and that the profile likelihood ratio test is asymptotically chi-squared. They also provide an easily implemented algorithm for computing the estimators. They evaluate their method through simulation studies, and illustrate it with a real data example. The Canadian Journal of Statistics 39: 73-88; 2011 © 2011 Statistical Society of Canada

13.
Two methods of estimating the intraclass correlation coefficient (ρ) for the one-way random effects model were compared in several simulation experiments using balanced and unbalanced designs. Estimates based on a Bayes approach and a maximum likelihood approach were compared on the basis of their biases and mean squared errors in each of the simulation experiments. The Bayes approach used the median of a conditional posterior density as its estimator.
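For a balanced design with k groups of n replicates, the classical ANOVA estimator of ρ is (MSB - MSW) / (MSB + (n - 1) MSW), a natural frequentist benchmark for the comparison described above (the Bayes estimator, by contrast, uses a posterior median and is not reproduced here). The sketch below is illustrative; the simulation settings are assumptions of this example.

```python
import numpy as np

def icc_anova(y):
    """ANOVA estimator of the intraclass correlation for a balanced
    one-way random effects design; y has shape (groups, replicates)."""
    k, n = y.shape
    group_means = y.mean(axis=1)
    grand_mean = y.mean()
    msb = n * np.sum((group_means - grand_mean) ** 2) / (k - 1)   # between
    msw = np.sum((y - group_means[:, None]) ** 2) / (k * (n - 1))  # within
    return (msb - msw) / (msb + (n - 1) * msw)

rng = np.random.default_rng(1)
k, n = 30, 5
effects = rng.normal(0.0, 2.0, size=(k, 1))        # between-group sd 2
y = effects + rng.normal(0.0, 1.0, size=(k, n))    # within-group sd 1
rho_hat = icc_anova(y)                              # true rho = 4 / 5 = 0.8
```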

14.
The mixture maximum likelihood approach to clustering is used to allocate treatments from a randomized complete block design into relatively homogeneous groups. The implementation of this approach is straightforward for fixed but not random block effects. The density function in each underlying group is assumed to be normal, and clustering is performed on the basis of the estimated posterior probabilities of group membership. A test based on the log likelihood under the mixture model can be used to assess the actual number of groups present. The technique is demonstrated by using it to cluster data from a randomized complete block experiment.
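The allocation step of mixture maximum likelihood clustering can be illustrated with a toy two-component univariate normal mixture fitted by EM, assigning each observation to the component with the larger estimated posterior probability. Block effects and the number-of-groups test are omitted; all settings and names here are illustrative, not the paper's design.

```python
import numpy as np

def em_two_normals(x, n_iter=200):
    """EM for a two-component univariate normal mixture; returns the
    estimated posterior probability of membership in the second component."""
    # crude initialization from the lower and upper halves of the sorted data
    xs = np.sort(x)
    mu = np.array([xs[: len(x) // 2].mean(), xs[len(x) // 2:].mean()])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior membership probabilities (responsibilities)
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return resp[:, 1]
```

Thresholding the returned posterior probabilities at 0.5 gives the hard allocation into groups described in the abstract.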

15.
Motivated by problems of modelling torsional angles in molecules, Singh, Hnizdo & Demchuk (2002) proposed a bivariate circular model which is a natural torus analogue of the bivariate normal distribution and a natural extension of the univariate von Mises distribution to the bivariate case. The authors present here a multivariate extension of the bivariate model of Singh, Hnizdo & Demchuk (2002). They study the conditional distributions and investigate the shapes of marginal distributions for a special case. The methods of moments and pseudo-likelihood are considered for the estimation of parameters of the new distribution. The authors investigate the efficiency of the pseudo-likelihood approach in three dimensions. They illustrate their methods with protein conformational angle data.
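The sine variant of the bivariate von Mises model has unnormalized density exp{k1 cos(theta - mu) + k2 cos(psi - nu) + lam sin(theta - mu) sin(psi - nu)}, and is unimodal with mode at (mu, nu) when k1 k2 > lam^2. The sketch below only evaluates this density on a grid and locates its mode; it does not implement the moment or pseudo-likelihood estimators of the paper, and the parameter values are illustrative assumptions.

```python
import numpy as np

def sine_model_density(theta, psi, mu, nu, k1, k2, lam):
    """Unnormalized 'sine model' bivariate von Mises density."""
    return np.exp(k1 * np.cos(theta - mu) + k2 * np.cos(psi - nu)
                  + lam * np.sin(theta - mu) * np.sin(psi - nu))

# Evaluate on a grid; with k1 * k2 > lam ** 2 the density is unimodal,
# so the grid argmax should land next to (mu, nu).
grid = np.linspace(-np.pi, np.pi, 721)
TH, PS = np.meshgrid(grid, grid, indexing="ij")
dens = sine_model_density(TH, PS, mu=0.5, nu=-0.3, k1=2.0, k2=2.0, lam=1.0)
i, j = np.unravel_index(np.argmax(dens), dens.shape)
mode = (grid[i], grid[j])   # lies near (0.5, -0.3)
```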

16.
Marginal hazard models for multivariate failure time data have been studied extensively in recent literature. However, standard hypothesis test statistics based on the likelihood method are not exactly appropriate for this kind of model. In this paper, extensions of the three commonly used likelihood hypothesis test statistics are discussed. Generalized Wald, generalized score and generalized likelihood ratio tests for hazard ratio parameters in a marginal hazard model for multivariate failure time data are proposed and their asymptotic distributions examined. The finite sample properties of these statistics are studied through simulations. The proposed method is applied to data from Busselton Population Health Surveys.
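The idea behind a generalized Wald test is to replace the model-based information with a sandwich variance so that the statistic remains valid when observations within a cluster are correlated. The sketch below does this for an ordinary least-squares coefficient with a cluster-robust sandwich variance, as a simplified stand-in for the marginal hazard setting; the function name and the linear model are illustrative assumptions.

```python
import numpy as np

def cluster_robust_wald(X, y, cluster, j=1):
    """Generalized Wald statistic for H0: beta_j = 0 in y = X beta + error,
    with a cluster-robust sandwich variance (illustrative sketch)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # "meat": sum over clusters of outer products of cluster score sums
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster):
        idx = cluster == g
        s = X[idx].T @ resid[idx]
        meat += np.outer(s, s)
    V = XtX_inv @ meat @ XtX_inv      # sandwich covariance of beta
    return beta[j] ** 2 / V[j, j]     # approx. chi-squared(1) under H0
```

The generalized score and likelihood ratio versions follow the same pattern, correcting the reference distribution rather than assuming the naive information matrix.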

17.
The authors consider a weighted version of the classical likelihood that applies when the need is felt to diminish the role of some of the data in order to trade bias for precision. They propose an axiomatic derivation of the weighted likelihood, for which they show that aspects of classical theory continue to obtain. They suggest a data-based method of selecting the weights and show that it leads to the James-Stein estimator in various contexts. They also provide applications.
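For independent normal observations with known unit variance, maximizing the weighted log-likelihood sum_i w_i log f(x_i; mu) gives the weighted average mu = sum_i w_i x_i / sum_i w_i, so downweighting suspect observations trades bias for precision exactly as the abstract describes. A minimal sketch, with an ad hoc weight choice rather than the authors' data-based selection rule:

```python
import numpy as np

def weighted_mle_normal_mean(x, w):
    """Maximizer of sum_i w_i * log N(x_i; mu, 1) over mu:
    setting the weighted score to zero gives a weighted average."""
    return np.sum(w * x) / np.sum(w)

x = np.array([0.9, 1.1, 1.0, 0.8, 1.2, 6.0])        # last point looks aberrant
w_equal = np.ones_like(x)                            # classical likelihood
w_down = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 0.1])   # ad hoc downweighting
```

With equal weights the estimate is the ordinary sample mean; downweighting the aberrant point pulls the estimate back toward the bulk of the data.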

18.
Compared to tests for localized clusters, the tests for global clustering only collect evidence for clustering throughout the study region without evaluating the statistical significance of the individual clusters. The weighted likelihood ratio (WLR) test based on the weighted sum of likelihood ratios represents an important class of tests for global clustering. Song and Kulldorff (Likelihood based tests for spatial randomness. Stat Med. 2006;25(5):825-839) developed a wide variety of weight functions with the WLR test for global clustering. However, these weight functions are often defined based on the cell population size or on geographic information such as area size and distance between cells. They do not make use of the information from the observed count, although the likelihood ratio of a potential cluster depends on both the observed count and its population size. In this paper, we develop a self-adjusted weight function to directly allocate weights onto the likelihood ratios according to their values. The power of the test was evaluated and compared with existing methods based on a benchmark data set. The comparison results favour the suggested test, especially under global chain clustering models.

19.
In many medical comparative studies (e.g., comparison of two treatments in an otolaryngological study), subjects may produce either bilateral (e.g., responses from a pair of ears) or unilateral (response from only one ear) data. In bilateral cases, it is reasonable to assume that the responses from the two ears of the same subject are highly correlated. In this article, we test the equality of the successful cure rates between two treatments in the presence of combined unilateral and bilateral data. Based on the dependence and independence models, we study ten test statistics which utilize both the unilateral and bilateral data. The performance of these statistics is evaluated with respect to their empirical Type I error rates and powers under different configurations. We find that both Rosner's and Wald-type statistics based on the dependence model and constrained maximum likelihood estimates (under the null hypothesis) perform satisfactorily for small to large samples and are hence recommended. We illustrate our methodologies with a real data set from an otolaryngology study.

20.
Nonignorable missing data are a common problem in longitudinal studies. Latent class models are attractive for simplifying the modeling of missing data when the data are subject to either a monotone or an intermittent missing data pattern. In our study, we propose a new two-latent-class model for categorical data with informative dropouts, dividing the observed data into two latent classes: one in which the outcomes are deterministic and a second in which the outcomes can be modeled using logistic regression. In the model, the latent classes connect the longitudinal responses and the missingness process under the assumption of conditional independence. Parameters are estimated by maximum likelihood based on the above assumptions and the tetrachoric correlation between responses within the same subject. We compare the proposed method with the shared parameter model and the weighted GEE model using the areas under the ROC curves, both in simulations and in an application to a smoking cessation data set. The simulation results indicate that the proposed two-latent-class model performs well under different missingness mechanisms. The application results show that our proposed method outperforms the shared parameter model and the weighted GEE model.

