Similar documents
20 similar documents found (search time: 15 ms).
1.
The author describes the relationship between the extended generalized estimating equations (EGEEs) of Hall & Severini (1998) and various similar methods. He proposes a true extended quasi‐likelihood approach for the clustered data case and explores restricted maximum likelihood‐like versions of the EGEE and extended quasi‐likelihood estimating equations. He also presents simulation results comparing the various estimators in terms of mean squared error of estimation based on three moderate-sample-size, discrete-data situations.

2.
We address the issue of performing inference on the parameters that index the modified extended Weibull (MEW) distribution. We show that numerical maximization of the MEW log-likelihood function can be problematic. It is even possible to encounter maximum likelihood estimates that are not finite, that is, it is possible to encounter monotonic likelihood functions. We consider different penalization schemes to improve maximum likelihood point estimation. A penalization scheme based on the Jeffreys’ invariant prior is shown to be particularly useful. Simulation results on point estimation, interval estimation, and hypothesis testing inference are presented. Two empirical applications are presented and discussed.
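The Jeffreys-prior penalization scheme mentioned above can be sketched in a few lines. The MEW density is not reproduced in this abstract, so the sketch below applies a Jeffreys-type penalty to an ordinary two-parameter Weibull instead, and uses the observed information (a numerical Hessian) as a practical stand-in for the expected Fisher information; the data and all settings are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
data = weibull_min.rvs(c=1.5, scale=2.0, size=30, random_state=rng)

def negloglik(theta, x):
    shape, scale = theta
    if shape <= 0 or scale <= 0:
        return np.inf
    return -np.sum(weibull_min.logpdf(x, c=shape, scale=scale))

def observed_info(theta, x, eps=1e-4):
    # Numerical Hessian of the negative log-likelihood, used here as a
    # stand-in for the expected Fisher information in the Jeffreys penalty.
    k = len(theta)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = eps
            ej = np.zeros(k); ej[j] = eps
            H[i, j] = (negloglik(theta + ei + ej, x)
                       - negloglik(theta + ei - ej, x)
                       - negloglik(theta - ei + ej, x)
                       + negloglik(theta - ei - ej, x)) / (4 * eps ** 2)
    return H

def penalized_negloglik(theta, x):
    # Penalized objective: negative log-likelihood minus 0.5 * log det I(theta),
    # i.e. the negative log-posterior under a Jeffreys-type prior.
    if np.any(np.asarray(theta) <= 1e-3):
        return np.inf
    H = observed_info(theta, x)
    sign, logdet = np.linalg.slogdet(H)
    if sign <= 0:
        return np.inf
    return negloglik(theta, x) - 0.5 * logdet

mle = minimize(negloglik, x0=[1.0, 1.0], args=(data,), method="Nelder-Mead")
pmle = minimize(penalized_negloglik, x0=[1.0, 1.0], args=(data,), method="Nelder-Mead")
print("plain MLE     (shape, scale):", np.round(mle.x, 3))
print("penalized MLE (shape, scale):", np.round(pmle.x, 3))
```

The penalty pulls the shape estimate away from the boundary, which is exactly the behaviour that prevents the monotone-likelihood problem the paper describes.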

3.
Motivated by a recent tuberculosis (TB) study, this paper is concerned with covariates missing not at random (MNAR) and models the potential intracluster correlation by a frailty. We consider the regression analysis of right‐censored event times from clustered subjects under a Cox proportional hazards frailty model and present the semiparametric maximum likelihood estimator (SPMLE) of the model parameters. An easy‐to‐implement pseudo‐SPMLE is then proposed to accommodate more realistic situations using readily available supplementary information on the missing covariates. Algorithms are provided to compute the estimators and their consistent variance estimators. We demonstrate that both the SPMLE and the pseudo‐SPMLE are consistent and asymptotically normal by arguments based on the theory of modern empirical processes. The proposed approach is examined numerically via simulation and illustrated with an analysis of the motivating TB study data.

4.
We consider the corrective approach (Theoretical Statistics, Chapman & Hall, London, 1974, p. 310) and preventive approach (Biometrika 80 (1993) 27) to bias reduction of maximum likelihood estimators under the logistic regression model based on case–control data. The proposed bias-corrected maximum likelihood estimators are based on the semiparametric profile log likelihood function under a two-sample semiparametric model, which is equivalent to the assumed logistic regression model. We show that the prospective and retrospective analyses on the basis of the corrective approach to bias reduction produce identical bias-corrected maximum likelihood estimators of the odds ratio parameter, but this does not hold when using the preventive approach unless the case and control sample sizes are identical. We present some results on simulation and on the analysis of two real data sets.

5.
We develop the empirical likelihood approach for a class of vector‐valued, not necessarily Gaussian, stationary processes with unknown parameters. In time series analysis, it is known that the Whittle likelihood is one of the most fundamental tools with which to obtain a good estimator of unknown parameters, and that the score functions are asymptotically normal. Motivated by the Whittle likelihood, we apply the empirical likelihood approach to its derivative with respect to unknown parameters. We also consider the empirical likelihood approach to minimum contrast estimation based on a spectral disparity measure, and apply the approach to the derivative of the spectral disparity. This paper provides rigorous proofs on the convergence of our two empirical likelihood ratio statistics to sums of gamma distributions. Because the fitted spectral model may be different from the true spectral structure, the results enable us to construct confidence regions for various important time series parameters without assuming specified spectral structures and the Gaussianity of the process.

6.
Two equivalent methods (gene counting and maximum likelihood) for estimating gene frequencies in a general genetic marker system based on observed phenotype data are derived. Under the maximum likelihood approach, an expression is given for the estimated covariance matrix from which estimated standard errors of the estimators can be found. In addition, consideration is given to the problem of estimating gene frequencies when several independent population data sets are available.
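The gene-counting algorithm this entry refers to is an EM iteration: split the observed phenotype counts into expected genotype counts, then count alleles. The classical ABO system (alleles A, B, O; phenotypes A, B, AB, O) gives a compact concrete case; the counts below are illustrative, not from the paper.

```python
import numpy as np

# Observed ABO phenotype counts (illustrative data)
n_A, n_B, n_AB, n_O = 186, 38, 13, 284
n = n_A + n_B + n_AB + n_O

p, q, r = 1 / 3, 1 / 3, 1 / 3  # allele frequencies for A, B, O
for _ in range(100):
    # E-step: split ambiguous phenotype counts into expected genotype counts
    n_AA = n_A * p**2 / (p**2 + 2 * p * r)
    n_AO = n_A - n_AA
    n_BB = n_B * q**2 / (q**2 + 2 * q * r)
    n_BO = n_B - n_BB
    # M-step: gene counting, since each individual carries two alleles
    p = (2 * n_AA + n_AO + n_AB) / (2 * n)
    q = (2 * n_BB + n_BO + n_AB) / (2 * n)
    r = 1 - p - q

print(f"p(A)={p:.4f}, p(B)={q:.4f}, p(O)={r:.4f}")
```

The fixed point of this iteration is the maximum likelihood estimate, which is the equivalence the abstract states; standard errors would come from the information matrix at convergence.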

7.
In many applications, decisions are made on the basis of a function of the parameters, g(θ). When the value of g(θ) is calculated using estimated values for the parameters, it is important to have a measure of the uncertainty associated with that value of g(θ). Likelihood ratio approaches to finding likelihood intervals for functions of parameters have been shown to be more reliable, in terms of coverage probability, than the linearization approach. Two approaches to the generalization of the profiling algorithm have been proposed in the literature to enable construction of likelihood intervals for a function of parameters (Chen and Jennrich, 1996; Bates and Watts, 1988). In this paper we show the equivalence of these two methods. We also provide an analysis of cases in which neither profiling algorithm is appropriate. For one of these cases an alternative approach is suggested. Whereas generalized profiling is based on maximizing the likelihood function given a constraint on the value of g(θ), the alternative algorithm is based on optimizing g(θ) given a constraint on the value of the likelihood function.
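The first of the two formulations, maximizing the likelihood subject to g(θ) = c and keeping the c values whose constrained maximum stays within the likelihood-ratio cutoff, can be sketched directly with a constrained optimizer. The model (normal data) and the function g(θ) = μ/σ below are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint
from scipy.stats import norm, chi2

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=50)

def negloglik(theta):
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    return -np.sum(norm.logpdf(x, mu, sigma))

def g(theta):
    # function of the parameters we want an interval for (illustrative)
    return theta[0] / theta[1]

bounds = [(None, None), (1e-6, None)]
mle = minimize(negloglik, [np.mean(x), np.std(x)], method="Nelder-Mead")
lmax = -mle.fun
cutoff = lmax - 0.5 * chi2.ppf(0.95, df=1)   # likelihood-ratio cutoff

# Profile: maximize the likelihood subject to g(theta) = c and keep
# the c values whose constrained maximum stays above the cutoff.
ghat = g(mle.x)
inside = []
for c in np.linspace(ghat - 2.0, ghat + 2.0, 81):
    con = NonlinearConstraint(g, c, c)
    res = minimize(negloglik, mle.x, method="SLSQP",
                   bounds=bounds, constraints=[con])
    if -res.fun >= cutoff:
        inside.append(c)

print(f"MLE of g: {ghat:.3f}")
print(f"approx. 95% likelihood interval: [{min(inside):.3f}, {max(inside):.3f}]")
```

The alternative algorithm the abstract describes would swap the roles: optimize g(θ) subject to the log-likelihood staying above the cutoff, which finds the same interval endpoints without a grid.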

8.
The authors derive empirical likelihood confidence regions for the comparison distribution of two populations whose distributions are to be tested for equality using random samples. Another application they consider is to ROC curves, which are used to compare measurements of a diagnostic test from two populations. The authors investigate the smoothed empirical likelihood method for estimation in this context, and empirical likelihood based confidence intervals are obtained by means of the Wilks theorem. A bootstrap approach allows for the construction of confidence bands. The method is illustrated with data analysis and a simulation study.

9.
The Bayesian analysis based on the partial likelihood for Cox's proportional hazards model is frequently used because of its simplicity. The Bayesian partial likelihood approach is often justified by showing that it approximates the full Bayesian posterior of the regression coefficients with a diffuse prior on the baseline hazard function. This, however, may not be appropriate when ties exist among uncensored observations. In that case, the full Bayesian and Bayesian partial likelihood posteriors can be much different. In this paper, we propose a new Bayesian partial likelihood approach for many tied observations and justify its use.

10.
This paper is concerned with developing procedures for constructing confidence intervals, which would hold approximately equal tail probabilities and coverage probabilities close to the nominal level, for the scale parameter θ of the two-parameter exponential lifetime model when the data are time censored. We use a conditional approach to eliminate the nuisance parameter and develop several procedures based on the conditional likelihood. The methods are (a) a method based on the likelihood ratio, (b) a method based on the skewness corrected score (Bartlett, Biometrika 40 (1953), 12–19), (c) a method based on an adjustment to the signed root likelihood ratio (DiCiccio, Field et al., Biometrika 77 (1990), 77–95), and (d) a method based on parameter transformation to the normal approximation. The performances of these procedures are then compared, through simulations, with the usual likelihood based procedure. The skewness corrected score procedure performs best in terms of holding both equal tail probabilities and nominal coverage probabilities even for small samples.
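For the simpler one-parameter exponential model (no threshold parameter, so no conditioning is needed to eliminate a nuisance parameter), the likelihood-ratio method (a) reduces to a root-finding problem: under Type-I censoring at time τ the log-likelihood is l(θ) = -d log θ - T/θ, with d the number of observed failures and T the total time on test. A sketch under that simplification, with illustrative data and censoring time:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

rng = np.random.default_rng(2)
theta_true, tau = 10.0, 12.0           # true scale and fixed censoring time
t = rng.exponential(theta_true, size=40)
obs = np.minimum(t, tau)               # observed (possibly censored) times
d = int(np.sum(t <= tau))              # number of uncensored failures
T = obs.sum()                          # total time on test

def loglik(theta):
    # censored exponential log-likelihood up to an additive constant
    return -d * np.log(theta) - T / theta

theta_hat = T / d                      # MLE of the scale parameter
cut = loglik(theta_hat) - 0.5 * chi2.ppf(0.95, df=1)
f = lambda th: loglik(th) - cut        # zero at the interval endpoints
lo = brentq(f, 1e-3, theta_hat)
hi = brentq(f, theta_hat, 1e3)
print(f"MLE {theta_hat:.2f}, 95% LR interval ({lo:.2f}, {hi:.2f})")
```

The two-parameter case in the paper follows the same pattern applied to the conditional likelihood for θ after the threshold parameter is conditioned away.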

11.
Sample entropy based tests, methods of sieves and Grenander estimation type procedures are known to be very efficient tools for assessing normality of underlying data distributions, in one-dimensional nonparametric settings. Recently, it has been shown that the density based empirical likelihood (EL) concept extends and standardizes these methods, presenting a powerful approach for approximating optimal parametric likelihood ratio test statistics, in a distribution-free manner. In this paper, we discuss difficulties related to constructing density based EL ratio techniques for testing bivariate normality and propose a solution to this problem. Toward this end, a novel bivariate sample entropy expression is derived and shown to be consistent with the known framework of bivariate histogram density estimation. Monte Carlo results show that the new density based EL ratio tests for bivariate normality behave very well for finite sample sizes. To exemplify the applicability of the proposed approach, we present a real data example.

12.
Methods for interval estimation and hypothesis testing about the ratio of two independent inverse Gaussian (IG) means based on the concept of the generalized variable approach are proposed. As assessed by simulation, the coverage probabilities of the proposed approach are found to be very close to the nominal level even for small samples. The proposed new approaches are conceptually simple and easy to use. Similar procedures are developed for constructing confidence intervals and hypothesis testing about the difference between two independent IG means. Monte Carlo comparison studies show that the results based on the generalized variable approach are as good as those based on the modified likelihood ratio test. The methods are illustrated using two examples.

13.
Sun W, Li H. Lifetime Data Analysis 2004, 10(3): 229–245
The additive genetic gamma frailty model has been proposed for genetic linkage analysis of complex diseases to account for variable age of onset and possible covariate effects. To avoid ascertainment biases in parameter estimates, retrospective likelihood ratio tests are often used, which may result in loss of efficiency due to conditioning. This paper considers the case in which sibships are ascertained by having at least two sibs affected with the disease before a given age, and provides two approaches for estimating the parameters in the additive gamma frailty model. One approach is based on the likelihood function conditioning on the ascertainment event; the other is based on maximizing a full ascertainment-adjusted likelihood. Explicit forms for these likelihood functions are derived. Simulation studies indicate that when the baseline hazard function can be correctly pre-specified, both approaches give accurate estimates of the model parameters. However, when the baseline hazard function has to be estimated simultaneously, only the ascertainment-adjusted likelihood method gives an unbiased estimate of the parameters. These results imply that the ascertainment-adjusted likelihood ratio test in the context of the additive genetic gamma frailty model may be used for genetic linkage analysis.

14.
Models described as using quasi-likelihood (QL) are often using a different approach based on the normal likelihood, which I call pseudo-likelihood. The two approaches are described and contrasted, and an example is used to illustrate the advantages of the QL approach proper.

15.
We use the two‐state Markov regime‐switching model to explain the behaviour of the WTI crude‐oil spot prices from January 1986 to February 2012. We investigated the use of methods based on the composite likelihood and the full likelihood. We found that the composite‐likelihood approach can better capture the general structural changes in world oil prices. The two‐state Markov regime‐switching model based on the composite‐likelihood approach closely depicts the cycles of the two postulated states: fall and rise. These two states persist on average for 8 and 15 months, which matches the observed cycles during the period. According to the fitted model, drops in oil prices are more volatile than rises. We believe that this information can be useful for financial officers working in related areas. The model based on the full‐likelihood approach was less satisfactory. We attribute its failure to the fact that the two‐state Markov regime‐switching model is too rigid and overly simplistic. In comparison, the composite likelihood requires only that the model correctly specifies the joint distribution of two adjacent price changes. Thus, model violations in other areas do not invalidate the results. The Canadian Journal of Statistics 41: 353–367; 2013 © 2013 Statistical Society of Canada

16.
Inference in generalized linear mixed models with multivariate random effects is often made cumbersome by the high-dimensional intractable integrals involved in the marginal likelihood. This article presents an inferential methodology based on the GEE approach. This method involves approximations of the marginal likelihood and of the joint moments of the variables. Approximate Akaike and Bayesian information criteria, based on the approximate marginal likelihood with parameters estimated by the GEE approach, are also proposed. The different results are illustrated with a simulation study and with an analysis of real data on health-related quality of life.

17.
Clinical trials often use paired binomial data as their clinical endpoint. The confidence interval is frequently used to estimate the treatment performance. Tang et al. (2009) have proposed exact and approximate unconditional methods for constructing a confidence interval in the presence of incomplete paired binary data. The approach proposed by Tang et al. can be overly conservative with large expected confidence interval width (ECIW) in some situations. We propose a profile likelihood‐based method with a Jeffreys' prior correction to construct the confidence interval. This approach generates confidence intervals with much better coverage probability and shorter ECIWs. The performance of the method along with the corrections is demonstrated through extensive simulation. Finally, three real world data sets are analyzed by all the methods. Statistical Analysis System (SAS) codes to execute the profile likelihood‐based methods are also presented. Copyright © 2013 John Wiley & Sons, Ltd.

18.
There exists a recent study where dynamic mixed‐effects regression models for count data have been extended to a semi‐parametric context. However, when one deals with other discrete data such as binary responses, the results based on count data models are not directly applicable. In this paper, we therefore begin with existing binary dynamic mixed models and generalise them to the semi‐parametric context. For inference, we use a new semi‐parametric conditional quasi‐likelihood (SCQL) approach for the estimation of the non‐parametric function involved in the semi‐parametric model, and a semi‐parametric generalised quasi‐likelihood (SGQL) approach for the estimation of the main regression, dynamic dependence and random effects variance parameters. A semi‐parametric maximum likelihood (SML) approach is also used as a comparison to the SGQL approach. The properties of the estimators are examined both asymptotically and empirically. More specifically, the consistency of the estimators is established and finite sample performances of the estimators are examined through an intensive simulation study.

19.
Estimating the parameters of multivariate mixed Poisson models is an important problem in image processing applications, especially for active imaging or astronomy. The classical maximum likelihood approach cannot be used for these models since the corresponding masses cannot be expressed in a simple closed form. This paper studies a maximum pairwise likelihood approach to estimate the parameters of multivariate mixed Poisson models when the mixing distribution is a multivariate Gamma distribution. The consistency and asymptotic normality of this estimator are derived. Simulations conducted on synthetic data illustrate these results and show that the proposed estimator outperforms classical estimators based on the method of moments. An application to change detection in low-flux images is also investigated.

20.
We propose a hidden Markov model for longitudinal count data where sources of unobserved heterogeneity arise, making data overdispersed. The observed process, conditionally on the hidden states, is assumed to follow an inhomogeneous Poisson kernel, where the unobserved heterogeneity is modeled in a generalized linear model (GLM) framework by adding individual-specific random effects in the link function. Due to the complexity of the likelihood within the GLM framework, model parameters may be estimated by numerical maximization of the log-likelihood function or by simulation methods; we propose a more flexible approach based on the Expectation Maximization (EM) algorithm. Parameter estimation is carried out using a non-parametric maximum likelihood (NPML) approach in a finite mixture context. Simulation results and two empirical examples are provided.
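The full hidden-Markov machinery is too long for a short sketch, but the NPML idea, replacing the random-effect distribution by a discrete distribution on a few mass points fitted by EM, can be shown for plain overdispersed counts with a two-point Poisson mixture. Everything below (data, number of mass points, starting values) is an illustrative assumption, not the paper's model.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
# Overdispersed counts: a mixture of Poisson(2) and Poisson(9) observations
y = np.concatenate([rng.poisson(2, 150), rng.poisson(9, 50)])

K = 2
lam = np.array([1.0, 5.0])        # mass-point locations
pi = np.full(K, 1.0 / K)          # mass-point probabilities
for _ in range(200):
    # E-step: posterior probability of each mass point for each observation
    w = pi * poisson.pmf(y[:, None], lam)     # shape (n, K)
    w /= w.sum(axis=1, keepdims=True)
    # M-step: update mass probabilities and locations
    pi = w.mean(axis=0)
    lam = (w * y[:, None]).sum(axis=0) / w.sum(axis=0)

order = np.argsort(lam)
lam, pi = lam[order], pi[order]
print("mass points:", np.round(lam, 2), "probs:", np.round(pi, 2))
```

In the paper's setting the same E- and M-steps run inside the forward-backward recursions of the hidden Markov chain, with the mass points entering the Poisson mean through the GLM link.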

