Similar Documents
20 similar documents found.
1.
Likelihood-ratio tests (LRTs) are often used for inference on one or more logistic regression coefficients. Conventionally, for given parameters of interest, the nuisance parameters of the likelihood function are replaced by their maximum likelihood estimates; the resulting function, called the profile likelihood, is then used for the LRT. In small samples, the LRT based on the profile likelihood does not follow a χ2 distribution, and several corrections have been proposed to improve it. Additionally, complete or quasi-complete separation is a common geometric feature of small-sample binary data. In this article, for small-sample binary data, we derive explicit correction factors of the LRT for models with and without separation, and propose an algorithm to construct confidence intervals. We investigate the performance of the different LRT corrections and the corresponding confidence intervals through simulations and, based on the results, propose an empirical rule of thumb for choosing among these methods. Our simulation findings are also supported by real-world data.
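To make the uncorrected baseline concrete, here is a minimal sketch of the profile likelihood-ratio test for a single logistic coefficient, using statsmodels and the standard χ2 calibration; the small-sample correction factors derived in the paper are not implemented, and the data and sample size are illustrative.

```python
# Minimal sketch: profile likelihood-ratio test for one logistic
# regression coefficient (the uncorrected chi-square version).
import numpy as np
from scipy.stats import chi2
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 30                                   # deliberately small sample
x1, x2 = rng.normal(size=(2, n))
p = 1 / (1 + np.exp(-(0.5 + 1.0 * x1)))  # x2 has no true effect
y = rng.binomial(1, p)

X_full = sm.add_constant(np.column_stack([x1, x2]))
X_red = sm.add_constant(x1)              # null model: beta_2 = 0

ll_full = sm.Logit(y, X_full).fit(disp=0).llf
ll_red = sm.Logit(y, X_red).fit(disp=0).llf

lrt = 2 * (ll_full - ll_red)             # profile LRT statistic
print("LRT =", lrt, "p =", chi2.sf(lrt, df=1))
```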

2.
The non-parametric maximum likelihood estimator (NPMLE) of the distribution function with doubly censored data can be computed using the self-consistent algorithm (Turnbull, 1974). We extend the self-consistent algorithm to include a constraint on the NPMLE. We then show how to construct confidence intervals and test hypotheses based on the NPMLE via the empirical likelihood ratio. Finally, we present numerical comparisons of this method with an alternative method based on influence functions.
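A minimal sketch of Turnbull's self-consistent (EM) iteration on a fixed support grid, with doubly censored observations coded as intervals [L_i, R_i]; the constraint extension and empirical-likelihood calibration proposed in the paper are not shown, and the toy data are illustrative.

```python
# Minimal sketch of Turnbull's self-consistent algorithm: the NPMLE of
# a distribution from censored data coded as intervals [L_i, R_i]
# (exact: L == R, right-censored: R = inf, left-censored: L = 0).
import numpy as np

intervals = np.array([[2., 2.], [3., 3.], [1., np.inf],   # toy data
                      [0., 4.], [5., 5.], [2., np.inf]])
support = np.unique(intervals[np.isfinite(intervals)])    # candidate mass points
A = (intervals[:, [0]] <= support) & (support <= intervals[:, [1]])

p = np.full(support.size, 1 / support.size)  # initial mass at each point
for _ in range(500):                          # self-consistency iteration
    denom = A @ p                             # prob. mass compatible with obs i
    p_new = (A * p).T @ (1 / denom) / len(intervals)
    if np.max(np.abs(p_new - p)) < 1e-10:
        break
    p = p_new
print(dict(zip(support, np.round(p, 4))))
```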

3.
In this article, we study the performance of maximum likelihood estimates of the Burr XII parameters for constant-stress partially accelerated life tests under multiply censored data. Two maximum likelihood estimation methods are considered: one maximizes the observed-data likelihood function with a quasi-Newton algorithm, while the other maximizes the complete-data likelihood function with the expectation–maximization (EM) algorithm. The variance–covariance matrices are derived to construct confidence intervals for the parameters. The two algorithms are compared in a simulation study, which shows that maximum likelihood estimation via the EM algorithm outperforms the quasi-Newton algorithm in terms of bias, absolute relative bias, root mean squared error, and coverage rate. Finally, a numerical example illustrates the performance of the proposed methods.
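The observed-data-likelihood route can be sketched directly: a quasi-Newton (here L-BFGS-B) maximization of the right-censored Burr XII log-likelihood. This is a generic sketch under simple Type-I censoring, not the paper's partially-accelerated-test design; data and parameter values are illustrative.

```python
# Minimal sketch: quasi-Newton MLE for the Burr XII shapes (c, k) from
# right-censored data, maximizing the observed-data log-likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, c_true, k_true = 200, 2.0, 1.5
u = rng.uniform(size=n)
t = ((1 - u) ** (-1 / k_true) - 1) ** (1 / c_true)  # Burr XII via inverse CDF
y = np.minimum(t, 3.0)                               # Type-I censoring at 3.0
delta = (t <= 3.0).astype(float)                     # 1 = observed failure

def neg_loglik(theta):
    c, k = np.exp(theta)                             # log-parameters keep c, k > 0
    log_f = np.log(c * k) + (c - 1) * np.log(y) - (k + 1) * np.log1p(y ** c)
    log_S = -k * np.log1p(y ** c)                    # Burr XII survival function
    return -np.sum(delta * log_f + (1 - delta) * log_S)

res = minimize(neg_loglik, x0=np.zeros(2), method="L-BFGS-B")
print("c_hat, k_hat =", np.exp(res.x))
```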

4.
In this article the author investigates empirical-likelihood-based inference for the parameters of a varying-coefficient single-index model (VCSIM). Unlike in standard settings, without bias correction the asymptotic distribution of the empirical likelihood ratio fails to attain the standard chi-squared distribution. A bias-corrected empirical likelihood method is therefore employed to construct confidence regions (intervals) for the regression parameters. Compared with regions based on the normal approximation, these have two advantages: (1) they impose no prior constraints on the shape of the regions; (2) they require no pivotal quantity, and the regions are range preserving and transformation respecting. A simulation study compares the empirical likelihood with the normal approximation in terms of coverage accuracy and average area/length of the confidence regions/intervals, and a real data example illustrates the proposed approach. The Canadian Journal of Statistics 38: 434–452; 2010 © 2010 Statistical Society of Canada
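The bias-corrected construction is model-specific, but the underlying empirical-likelihood mechanics can be shown in their simplest form: Owen's empirical likelihood ratio for a scalar mean, with the Lagrange multiplier found numerically and the χ2(1) quantile used to invert a confidence interval. This generic sketch is not the VCSIM method itself.

```python
# Minimal sketch of Owen's empirical likelihood ratio for a scalar mean:
# -2 log R(mu) is compared with chi-square(1) to form a confidence interval.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_elr(x, mu):
    z = x - mu
    if z.min() * z.max() >= 0:          # mu outside convex hull: EL undefined
        return np.inf
    # Lagrange multiplier solves sum z_i / (1 + lam * z_i) = 0
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log1p(lam * z))

x = np.random.default_rng(2).exponential(size=50)   # illustrative data
mu_grid = np.linspace(x.mean() - 0.5, x.mean() + 0.5, 201)
inside = [m for m in mu_grid if neg2_log_elr(x, m) <= chi2.ppf(0.95, 1)]
print("approx. 95%% EL interval: [%.3f, %.3f]" % (min(inside), max(inside)))
```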

5.
We implement profile empirical-likelihood-based inference for censored median regression models. Inference for any specified subvector is carried out by profiling out the nuisance parameters from the "plug-in" empirical likelihood ratio function proposed by Qin and Tsao. To obtain the critical value of the profile empirical likelihood ratio statistic, we first investigate its asymptotic distribution, which is a weighted sum of chi-squared distributions. Unlike for the full empirical likelihood, however, this limiting distribution has an intractable covariance structure. We therefore employ the bootstrap to obtain the critical value, and compare the resulting confidence intervals with those obtained through Basawa and Koul's minimum dispersion statistic. Finally, we obtain confidence intervals for the age and treatment effects in a lung cancer data set.
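A minimal sketch of the bootstrap-calibration step: when a statistic's limiting law is intractable, its critical value is estimated from the resampling distribution. The simple median statistic below is only a stand-in for the profile empirical likelihood ratio of the paper.

```python
# Minimal sketch of bootstrap calibration: when a statistic's limiting
# law is intractable, estimate its critical value from resamples.
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(size=60)              # illustrative data

def statistic(sample, theta0):
    # illustrative statistic for the median (stand-in for the
    # profile empirical likelihood ratio used in the paper)
    return abs(np.median(sample) - theta0)

theta_hat = np.median(x)
boot = np.array([statistic(rng.choice(x, size=x.size, replace=True), theta_hat)
                 for _ in range(2000)])
crit = np.quantile(boot, 0.95)            # bootstrap 95% critical value
ci = (theta_hat - crit, theta_hat + crit) # inverted into a confidence interval
print("95% bootstrap CI for the median:", np.round(ci, 3))
```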

6.
This paper proposes the use of the integrated likelihood for inference on the mean effect in small-sample meta-analysis for continuous outcomes. The method eliminates the nuisance parameters given by the variance components through integration with respect to a suitable weight function, with no need to estimate them. The integrated likelihood approach properly accounts for the estimation uncertainty of within-study variances, thus providing confidence intervals with empirical coverage closer to nominal levels than standard likelihood methods. The improvement is remarkable when either (i) the number of studies is small to moderate or (ii) the studies are too small for the within-study variances to be treated as known, as is common in applications. Moreover, the integrated likelihood avoids numerical pitfalls related to the estimation of variance components that can affect alternative likelihood approaches. The methodology is illustrated via simulation and applied to a meta-analysis study in nutritional science.
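A minimal sketch of the integrated-likelihood idea for the random-effects model y_i ~ N(mu, s_i^2 + tau^2): the heterogeneity parameter tau is eliminated by numerical integration rather than estimation. The flat weight on tau and the data are illustrative assumptions, not the paper's recommended weight function.

```python
# Minimal sketch: integrated likelihood for the mean effect mu in a
# normal random-effects meta-analysis, eliminating the heterogeneity
# parameter tau by numerical integration (illustrative flat weight).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

y = np.array([0.30, 0.12, 0.55, 0.21, 0.40])   # study effect estimates (toy)
s2 = np.array([0.04, 0.09, 0.05, 0.12, 0.06])  # within-study variances (toy)

def lik(mu, tau):
    return np.prod(norm.pdf(y, loc=mu, scale=np.sqrt(s2 + tau ** 2)))

def integrated_lik(mu):
    val, _ = quad(lambda tau: lik(mu, tau), 0, 5)  # upper limit wide enough here
    return val

mu_grid = np.linspace(-0.5, 1.0, 301)
L = np.array([integrated_lik(m) for m in mu_grid])
mu_hat = mu_grid[L.argmax()]
# likelihood-ratio style 95% interval from the integrated likelihood
inside = mu_grid[2 * (np.log(L.max()) - np.log(L)) <= 3.84]
print("mu_hat = %.3f, 95%% interval [%.3f, %.3f]"
      % (mu_hat, inside.min(), inside.max()))
```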

7.
This article proposes maximum likelihood estimation based on the bare bones particle swarm optimization (BBPSO) algorithm for the parameters of the Weibull distribution with censored data, which is widely used in lifetime data analysis. The approach produces more accurate parameter estimates for the Weibull distribution, and confidence intervals for the estimators are also obtained. Simulation results show that the BBPSO algorithm outperforms the Newton–Raphson method in most cases in terms of bias, root mean squared error, and coverage rate. Two examples demonstrate the performance of the proposed approach and show that the maximum likelihood estimates obtained via the BBPSO algorithm perform well for estimating the Weibull parameters with censored data.
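A minimal sketch of the bare-bones PSO update applied to a right-censored Weibull log-likelihood: each particle is redrawn from a Gaussian centred midway between its personal best and the global best, with standard deviation equal to their distance, so no velocity terms are needed. Data, swarm size, and iteration counts are illustrative.

```python
# Minimal sketch of bare-bones PSO for censored Weibull MLE: each
# particle is redrawn from N((pbest + gbest)/2, |pbest - gbest|).
import numpy as np

rng = np.random.default_rng(4)
t = rng.weibull(1.8, size=150) * 2.0        # shape 1.8, scale 2.0 (toy data)
obs = np.minimum(t, 3.0)                    # Type-I censoring at 3.0
delta = (t <= 3.0)

def loglik(theta):
    k, lam = theta
    if k <= 0 or lam <= 0:
        return -np.inf                      # invalid particle position
    z = obs / lam
    log_f = np.log(k / lam) + (k - 1) * np.log(z) - z ** k
    log_S = -z ** k                         # Weibull survival function
    return np.sum(np.where(delta, log_f, log_S))

n_part = 20
pos = rng.uniform(0.1, 5.0, size=(n_part, 2))
pbest, pbest_val = pos.copy(), np.array([loglik(p) for p in pos])
for _ in range(200):
    g = pbest[pbest_val.argmax()]            # global best
    mu = (pbest + g) / 2
    sd = np.abs(pbest - g) + 1e-12
    pos = rng.normal(mu, sd)                 # bare-bones update: no velocities
    val = np.array([loglik(p) for p in pos])
    better = val > pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
print("k_hat, lambda_hat =", np.round(pbest[pbest_val.argmax()], 3))
```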

8.
Based on hybrid censored data, we take up the problem of statistical inference on the parameters of a two-parameter Burr Type XII distribution. Maximum likelihood estimates of the unknown parameters are developed using the EM algorithm. The Fisher information matrix is obtained by applying the missing-value principle and is then used to construct approximate confidence intervals. Bayes estimates and the corresponding highest posterior density intervals of the unknown parameters are also obtained; Lindley's approximation method and a Markov chain Monte Carlo (MCMC) technique are applied to evaluate these Bayes estimates, and the MCMC samples are further used to construct the highest posterior density intervals. A numerical comparison is made between the proposed estimates in terms of their mean squared errors, and comments are given. Finally, two data sets are analyzed using the proposed methods.
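The MCMC step can be sketched generically with a random-walk Metropolis sampler; a toy exponential likelihood with a Gamma(1, 1) prior stands in for the Burr Type XII hybrid-censoring posterior of the paper.

```python
# Minimal sketch of random-walk Metropolis sampling from a posterior,
# as used (with a richer likelihood) for the Bayes estimates above.
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=40)      # toy data, true rate 0.5

def log_post(rate):
    if rate <= 0:
        return -np.inf
    # exponential likelihood + Gamma(1, 1) prior (illustrative choice)
    return x.size * np.log(rate) - rate * x.sum() - rate

chain, cur, cur_lp = [], 1.0, log_post(1.0)
for _ in range(20000):
    prop = cur + rng.normal(scale=0.1)       # random-walk proposal
    lp = log_post(prop)
    if np.log(rng.uniform()) < lp - cur_lp:  # Metropolis accept/reject
        cur, cur_lp = prop, lp
    chain.append(cur)
draws = np.array(chain[5000:])               # drop burn-in
print("posterior mean %.3f, 95%% equal-tailed credible interval %s"
      % (draws.mean(), np.round(np.quantile(draws, [0.025, 0.975]), 3)))
```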

9.
The classical approach to statistical analysis is usually based upon finding values for model parameters that maximize the likelihood function. Model choice in this context is often also based on the likelihood function, but with the addition of a penalty term for the number of parameters. Though models may be compared pairwise, for example by likelihood ratio tests, various criteria such as the Akaike information criterion have been proposed as alternatives when multiple models need to be compared. In practical terms, the classical approach to model selection usually involves maximizing the likelihood function associated with each competing model and then calculating the corresponding criterion values. However, when large numbers of models are possible, this quickly becomes infeasible unless a method that simultaneously maximizes over both parameter and model space is available. We propose an extension of the traditional simulated annealing algorithm that allows moves that not only change parameter values but also move between competing models. This transdimensional simulated annealing algorithm can therefore locate models and parameters that minimize criteria such as the Akaike information criterion within a single algorithm, removing the need to run large numbers of separate maximizations. We discuss the implementation of the transdimensional simulated annealing algorithm and use simulation studies to examine its performance in realistically complex modelling situations. We illustrate our ideas with a pedagogic example based on the analysis of an autoregressive time series and with two more detailed examples: one on variable selection for logistic regression and the other on model selection for the analysis of integrated recapture–recovery data.
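A simplified sketch of annealing over model space to minimize AIC in variable selection for linear regression; unlike the paper's transdimensional algorithm, the parameters here are profiled out by an exact least-squares fit at each visited model rather than annealed jointly.

```python
# Simplified sketch: simulated annealing over subsets of regressors to
# minimize AIC, with parameters profiled out by an exact OLS fit.
import numpy as np

rng = np.random.default_rng(6)
n, p = 100, 8
X = rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 3] + rng.normal(size=n)   # truth uses variables 0 and 3

def aic(mask):
    k = mask.sum()
    Xm = np.column_stack([np.ones(n), X[:, mask]]) if k else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
    rss = np.sum((y - Xm @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)     # Gaussian AIC up to a constant

mask = np.zeros(p, dtype=bool)                    # start from the null model
cur = aic(mask)
T = 5.0
for _ in range(3000):
    j = rng.integers(p)
    cand = mask.copy()
    cand[j] = ~cand[j]                            # flip one variable in/out
    new = aic(cand)
    if new < cur or rng.uniform() < np.exp((cur - new) / T):
        mask, cur = cand, new                     # accept downhill or by chance
    T *= 0.999                                    # geometric cooling schedule
print("selected variables:", np.flatnonzero(mask), "AIC =", round(cur, 2))
```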

10.
Sun W, Li H. Lifetime Data Analysis, 2004, 10(3): 229–245.
The additive genetic gamma frailty model has been proposed for genetic linkage analysis of complex diseases to account for variable age of onset and possible covariate effects. To avoid ascertainment biases in parameter estimates, retrospective likelihood ratio tests are often used, which may lose efficiency due to conditioning. This paper considers the setting where sibships are ascertained by having at least two sibs affected with the disease before a given age, and provides two approaches for estimating the parameters of the additive gamma frailty model: one based on the likelihood function conditional on the ascertainment event, the other based on maximizing a full ascertainment-adjusted likelihood. Explicit forms for these likelihood functions are derived. Simulation studies indicate that when the baseline hazard function can be correctly pre-specified, both approaches give accurate estimates of the model parameters; when the baseline hazard function must be estimated simultaneously, however, only the ascertainment-adjusted likelihood method gives unbiased parameter estimates. These results imply that the ascertainment-adjusted likelihood ratio test in the context of the additive genetic gamma frailty model may be used for genetic linkage analysis.
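The ascertainment adjustment itself, conditioning each sampled unit's likelihood contribution on the event that it could be observed, can be sketched with a toy model: zero-truncated Poisson counts, where ascertainment means observing at least one event. This stands in for the frailty-model likelihood, which is far more involved.

```python
# Minimal sketch of an ascertainment-adjusted likelihood: each sampled
# unit is conditioned on the ascertainment event A, so the log-likelihood
# subtracts log P(A; theta).  Toy version: zero-truncated Poisson counts.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

x = np.array([1, 2, 1, 3, 1, 2, 4, 1, 2, 1])   # units observed only if x >= 1

def neg_adj_loglik(lam):
    log_pA = np.log1p(-np.exp(-lam))            # P(X >= 1) = 1 - exp(-lam)
    return -np.sum(poisson.logpmf(x, lam) - log_pA)

res = minimize_scalar(neg_adj_loglik, bounds=(1e-6, 20), method="bounded")
print("ascertainment-adjusted MLE of lambda: %.3f" % res.x)
```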

11.
Adaptive Type-II progressive censoring schemes have been shown to strike a useful balance between statistical estimation efficiency and the time spent on a life-testing experiment. In this article, some general statistical properties of an adaptive Type-II progressive censoring scheme are first investigated, and a bias correction procedure is proposed to reduce the bias of the maximum likelihood estimators (MLEs). We then focus on extreme-value-distributed lifetimes and derive the Fisher information matrix for the MLEs based on these properties. Four different approaches to constructing confidence intervals for the parameters of the extreme value distribution are proposed, and their performance is compared through an extensive Monte Carlo simulation.
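One of the standard interval constructions, a Wald interval from the numerically inverted observed information matrix, can be sketched for complete extreme-value (Gumbel) samples; the adaptive progressive censoring scheme itself is not simulated here, and the data are illustrative.

```python
# Minimal sketch: Wald confidence intervals for extreme-value (Gumbel)
# parameters from the numerically inverted observed information matrix.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r, norm

x = gumbel_r.rvs(loc=2.0, scale=1.5, size=150, random_state=10)
nll = lambda th: -np.sum(gumbel_r.logpdf(x, loc=th[0], scale=np.exp(th[1])))
res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sig_hat = res.x[0], np.exp(res.x[1])

def hessian(f, th, eps=1e-4):             # central finite differences
    d = len(th)
    H = np.zeros((d, d))
    th = np.asarray(th, float)
    for i in range(d):
        for j in range(d):
            e_i, e_j = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (f(th + e_i + e_j) - f(th + e_i - e_j)
                       - f(th - e_i + e_j) + f(th - e_i - e_j)) / (4 * eps ** 2)
    return H

cov = np.linalg.inv(hessian(nll, res.x))  # inverse observed information
se = np.sqrt(np.diag(cov))
z = norm.ppf(0.975)
print("mu: %.3f +/- %.3f" % (mu_hat, z * se[0]))
print("sigma 95%% CI: [%.3f, %.3f]"       # back-transformed from log scale
      % (np.exp(res.x[1] - z * se[1]), np.exp(res.x[1] + z * se[1])))
```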

12.
In spatial generalized linear mixed models (SGLMMs), statistical inference is difficult because the random effects in the model make the marginal likelihood a high-dimensional integral. In this article, we temporarily treat the parameters as random variables and express the marginal likelihood function as a posterior expectation; the marginal likelihood is then approximated using samples from the posterior density of the latent variables and parameters given the data. In this setting, however, misspecification of the prior distribution of the correlation-function parameter and convergence problems in Markov chain Monte Carlo (MCMC) methods can adversely affect the likelihood approximation. To avoid these difficulties, we use an empirical Bayes approach to estimate the prior hyperparameters, together with a computationally efficient hybrid algorithm combining the inverse Bayes formula (IBF) and Gibbs sampling. A simulation study assesses the performance of the method, and we illustrate it with standard penetration test data for soil from an area in southern Iran.

13.
The lognormal distribution is quite commonly used as a lifetime distribution, and data arising from life-testing and reliability studies are often left truncated and right censored. Here, the EM algorithm is used to estimate the parameters of the lognormal model based on left-truncated and right-censored data. The maximization step of the algorithm is carried out by two alternative methods: one involves approximation using a Taylor series expansion (leading to an approximate maximum likelihood estimate), and the other is based on the EM gradient algorithm (Lange, 1995). These two methods are compared through Monte Carlo simulations. The Fisher scoring method for obtaining the maximum likelihood estimates exhibits convergence problems in this setup, except when the truncation percentage is small. The asymptotic variance–covariance matrix of the MLEs is derived using the missing information principle (Louis, 1982), and the resulting asymptotic confidence intervals for the scale and shape parameters are compared with the corresponding bootstrap confidence intervals. Finally, some numerical examples illustrate all the methods of inference developed here.
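A minimal sketch of the EM iteration for a normal model on log-lifetimes (i.e., a lognormal lifetime model) under right censoring only; the left truncation handled in the paper is omitted to keep the E-step formulas short, and the data are simulated.

```python
# Minimal sketch: EM for normal log-lifetimes with right censoring.
# E-step uses truncated-normal moments; M-step is the complete-data MLE.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
z = rng.normal(1.0, 0.5, size=200)           # latent log-lifetimes (toy)
c = np.log(4.0)                              # log of censoring time
y = np.minimum(z, c)
delta = z <= c                               # True = observed failure

mu, sig = y.mean(), y.std()
for _ in range(200):
    a = (c - mu) / sig
    h = norm.pdf(a) / norm.sf(a)             # inverse Mills ratio
    e1 = mu + sig * h                        # E[Z | Z > c]
    e2 = sig ** 2 * (1 + a * h) + mu ** 2 + 2 * mu * sig * h  # E[Z^2 | Z > c]
    m = np.where(delta, y, e1)               # E-step: fill in censored moments
    s2 = np.where(delta, y ** 2, e2)
    mu_new = m.mean()                        # M-step: complete-data MLE
    sig_new = np.sqrt(s2.mean() - mu_new ** 2)
    if abs(mu_new - mu) + abs(sig_new - sig) < 1e-8:
        break
    mu, sig = mu_new, sig_new
print("mu_hat = %.3f, sigma_hat = %.3f" % (mu, sig))
```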

14.
Suppose some quantiles of the prior distribution of a nonnegative parameter θ are specified. Instead of eliciting a single prior density, consider the class Γ of all density functions compatible with the quantile specification. Given a likelihood function, the goal is to find the posterior upper and lower bounds for the expected value of any real-valued function h(θ) as the density varies in Γ. Such a scheme agrees with a robust Bayesian viewpoint. Under mild regularity conditions on h(θ) and the likelihood, a procedure for finding the bounds is derived and applied to an example, after transforming the given functional optimisation problems into finite-dimensional ones.

15.
In this paper we review some results on record values for well-known probability density functions and, based on m records from Kumaraswamy's distribution, obtain estimators for the two parameters and for the future sth record value. These estimates are derived using the maximum likelihood and Bayesian approaches. In the Bayesian approach, the two parameters are treated as random variables, and estimators for the parameters and for the future sth record value, given m observed past record values, are obtained under the well-known squared error loss (SEL) function and a linear-exponential (LINEX) loss function. The findings are illustrated with real and computer-generated data.
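The two loss functions lead to different point estimates from the same posterior draws: under SEL the Bayes estimator is the posterior mean, while under LINEX loss with parameter a it is -(1/a) log E[exp(-aθ) | data]. A minimal sketch, with Gamma draws standing in for MCMC output from a Kumaraswamy-parameter posterior:

```python
# Minimal sketch: point estimates from posterior draws under squared
# error (SEL) and LINEX loss.
import numpy as np

# illustrative stand-in for posterior draws of a Kumaraswamy parameter
draws = np.random.default_rng(8).gamma(shape=5.0, scale=0.4, size=10000)

theta_sel = draws.mean()                     # Bayes estimate under SEL
a = 1.0                                      # LINEX asymmetry parameter
theta_linex = -np.log(np.mean(np.exp(-a * draws))) / a
print("SEL: %.3f  LINEX(a=1): %.3f" % (theta_sel, theta_linex))
```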

16.
This paper investigates the problem of parameter estimation in statistical models when the observations are intervals assumed to be related to underlying crisp realizations of a random sample. The proposed approach relies on extending the likelihood function to the interval setting: a maximum likelihood estimate of the parameter of interest is defined as a crisp value maximizing the generalized likelihood function. Using the expectation–maximization (EM) algorithm to solve this maximization problem yields the so-called interval-valued EM (IEM) algorithm, which makes it possible to solve a wide range of statistical problems involving interval-valued data. To show the performance of IEM, two classical problems are illustrated: univariate normal mean and variance estimation from interval-valued samples, and multiple linear/nonlinear regression with crisp inputs and interval output.

17.
Three general algorithms using different strategies are proposed for computing the maximum likelihood estimate of a semiparametric mixture model. They seek to maximize the likelihood function by, respectively, alternating over the parameters, profiling the likelihood, and modifying the support set. All three algorithms make direct use of the recently proposed fast and stable constrained Newton method for computing the nonparametric maximum likelihood estimate of a mixing distribution, and additionally employ an optimization algorithm for unconstrained problems. The performance of the algorithms is numerically investigated and compared on the Neyman–Scott problem, on overcoming overdispersion in logistic regression models, and on fitting two-level mixed-effects logistic regression models, with satisfactory results.

18.
The plug-in solution is usually not entirely adequate for computing prediction intervals, as their coverage probability may differ substantially from the nominal value. Prediction intervals with improved coverage probability can be defined by adjusting the plug-in ones, using rather complicated asymptotic procedures or suitable simulation techniques. Other approaches are based on the concept of predictive likelihood for a future random variable. The contribution of this paper is the definition of a relatively simple predictive distribution function giving improved prediction intervals. This distribution function is specified as a first-order unbiased modification of the plug-in predictive distribution function based on the constrained maximum likelihood estimator. Applications of the results to the Gaussian and the generalized extreme-value distributions are presented.
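The opening claim, that the plug-in predictive can undercover, is easy to verify by Monte Carlo in the Gaussian case by comparing the plug-in interval with the exact Student-t prediction interval; this sketch does not implement the paper's first-order unbiased modification.

```python
# Minimal sketch: coverage of the plug-in Gaussian prediction interval
# versus the exact Student-t interval for a future observation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, reps, hits_plug, hits_t = 10, 20000, 0, 0
z = stats.norm.ppf(0.975)
tq = stats.t.ppf(0.975, df=n - 1)
for _ in range(reps):
    x = rng.normal(size=n)
    future = rng.normal()                    # future observation to predict
    m, s = x.mean(), x.std(ddof=1)
    hits_plug += abs(future - m) <= z * s                      # plug-in
    hits_t += abs(future - m) <= tq * s * np.sqrt(1 + 1 / n)   # exact
print("plug-in coverage: %.3f, exact coverage: %.3f"
      % (hits_plug / reps, hits_t / reps))
```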

19.
Maximum likelihood and Bayesian approaches are considered for the two-parameter generalized exponential distribution based on record values together with the number of trials following each record (inter-record times). The maximum likelihood estimates are obtained under both the inverse sampling and the random sampling schemes, and the maximum likelihood estimator of the shape parameter is shown to converge in mean square to the true value when the scale parameter is known. Because explicit forms are not available, the Bayes estimates of the parameters are developed using Lindley's approximation and Markov chain Monte Carlo methods, under the squared error and linear-exponential loss functions. Confidence intervals for the parameters are constructed by asymptotic and Bayesian methods. The Bayes and maximum likelihood estimators are compared in terms of estimated risk by Monte Carlo simulation, which is also used to compare estimators based on record values alone with those based on record values together with their inter-record times.

20.
In this paper, the estimation of the parameters of a generalized inverted exponential distribution based on a progressively first-failure Type-II right-censored sample is studied. An expectation–maximization (EM) algorithm is developed to obtain maximum likelihood estimates of the unknown parameters as well as of the reliability and hazard functions. Using the missing-value principle, the Fisher information matrix is obtained and used to construct asymptotic confidence intervals; an exact interval and an exact confidence region for the parameters are also constructed. Bayesian procedures based on Markov chain Monte Carlo methods are developed to approximate the posterior distribution of the parameters of interest and to derive the corresponding credible intervals. The performance of the maximum likelihood and Bayes estimators is compared in terms of mean squared error through a simulation study. Furthermore, Bayes two-sample point and interval predictors are obtained when the future sample consists of ordinary order statistics. The squared error, linear-exponential, and general entropy loss functions are considered for obtaining the Bayes estimators and predictors. To illustrate the procedures, a set of real data is analyzed.
