Similar articles
 20 similar articles found.
1.
A progressive hybrid censoring scheme is a mixture of Type-I and Type-II progressive censoring schemes. In this paper, we mainly consider the analysis of progressive Type-II hybrid censored data when the lifetimes of the individual items follow the normal and extreme value distributions. Since the maximum likelihood estimators (MLEs) of the parameters cannot be obtained in closed form, we propose to use the expectation-maximization (EM) algorithm to compute the MLEs. The Newton-Raphson method is also used to estimate the model parameters. The asymptotic variance-covariance matrix of the MLEs under the EM framework is obtained from the Fisher information matrix using the missing information principle, and asymptotic confidence intervals for the parameters are then constructed. The study concludes by comparing the two estimation methods, and the coverage probabilities of the asymptotic confidence intervals based on the missing information principle and on the observed information matrix, through a simulation study, illustrative examples, and a real data analysis.
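As a rough illustration of the EM machinery described above, the sketch below fits a normal model to right-censored data under an ordinary fixed censoring time rather than the progressive hybrid scheme of the paper; the truncated-normal moment formulas in the E-step, the starting values, and the stopping tolerance are illustrative choices only.

```python
import numpy as np
from scipy.stats import norm

def em_censored_normal(x_obs, c_cens, n_iter=200, tol=1e-8):
    """EM for N(mu, sigma^2) with right-censored observations.

    x_obs  : exact (observed) failure times
    c_cens : right-censoring times (only X > c is known for these units)
    """
    data = np.concatenate([x_obs, c_cens])
    mu, sigma = data.mean(), data.std()                  # crude starting values
    n = len(x_obs) + len(c_cens)
    for _ in range(n_iter):
        a = (c_cens - mu) / sigma
        h = norm.pdf(a) / norm.sf(a)                     # standard normal hazard
        ex = mu + sigma * h                              # E[X | X > c]
        ex2 = mu**2 + sigma**2 + sigma * (c_cens + mu) * h   # E[X^2 | X > c]
        s1 = x_obs.sum() + ex.sum()
        s2 = (x_obs**2).sum() + ex2.sum()
        mu_new = s1 / n
        sigma_new = np.sqrt(s2 / n - mu_new**2)
        if abs(mu_new - mu) + abs(sigma_new - sigma) < tol:
            mu, sigma = mu_new, sigma_new
            break
        mu, sigma = mu_new, sigma_new
    return mu, sigma

rng = np.random.default_rng(1)
x = rng.normal(10, 2, size=100)
cens = x > 12                                            # fixed censoring point, for illustration
print(em_censored_normal(x[~cens], np.full(cens.sum(), 12.0)))
```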

2.
We discuss the maximum likelihood estimates (MLEs) of the parameters of the log-gamma distribution based on progressively Type-II censored samples. We use the profile likelihood approach to tackle the estimation of the shape parameter κ. We derive approximate maximum likelihood estimators of the parameters μ and σ and use them as initial values in the determination of the MLEs through the Newton-Raphson method. Next, we discuss the EM algorithm and propose a modified EM algorithm for the determination of the MLEs. A simulation study is conducted to evaluate the bias and mean square error of these estimators and to examine their behavior as the progressive censoring scheme and the shape parameter vary. We also discuss the interval estimation of the parameters μ and σ and show that the intervals based on the asymptotic normality of the MLEs have very poor coverage probabilities for small values of m. Finally, we present two examples to illustrate all the methods of inference discussed in this paper.
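The profile-likelihood idea for a shape parameter can be sketched on a much simpler, uncensored problem: a gamma model in which, for each fixed shape k, the scale MLE has a closed form and is plugged back into the likelihood. This is an assumed, simplified stand-in for the paper's log-gamma, progressively censored setting, not its actual procedure.

```python
import numpy as np
from scipy.special import gammaln

def gamma_profile_loglik(k, x):
    """Profile log-likelihood of the gamma shape k:
    for fixed k the scale MLE is theta = mean(x)/k, plugged back in."""
    theta = x.mean() / k
    return np.sum((k - 1) * np.log(x) - x / theta - k * np.log(theta) - gammaln(k))

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.5, scale=1.3, size=300)
grid = np.linspace(0.5, 6.0, 400)                  # grid over the shape parameter
prof = np.array([gamma_profile_loglik(k, x) for k in grid])
k_hat = grid[prof.argmax()]
print(k_hat, x.mean() / k_hat)                     # profile MLEs of shape and scale
```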

3.
This article considers inference for the log-normal distribution based on progressive Type-I interval censored data by both frequentist and Bayesian methods. First, the maximum likelihood estimates (MLEs) of the unknown model parameters are computed by the expectation-maximization (EM) algorithm. The asymptotic standard errors (ASEs) of the MLEs are obtained by applying the missing information principle. Next, the Bayes estimates of the model parameters are obtained by the Gibbs sampling method under both symmetric and asymmetric loss functions. The Gibbs sampling scheme is facilitated by adopting a data augmentation scheme similar to that of the EM algorithm. The performance of the MLEs and the various Bayesian point estimates is judged via a simulation study. A real dataset is analyzed for the purpose of illustration.
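A minimal sketch of this kind of data-augmentation Gibbs sampler is given below for conventional (non-progressive) interval censoring of lognormal lifetimes, with the improper prior p(mu, sigma^2) proportional to 1/sigma^2; the prior, the inspection intervals, and the burn-in length are assumptions made for illustration, not the paper's exact specification.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_interval_lognormal(lo, hi, n_draws=2000, seed=0):
    """Data-augmentation Gibbs sampler for a lognormal model in which each
    log-lifetime is only known to lie in (log lo_i, log hi_i].
    Prior: p(mu, sigma^2) proportional to 1/sigma^2 (improper)."""
    rng = np.random.default_rng(seed)
    a_log, b_log = np.log(lo), np.log(hi)
    n = len(lo)
    mu, sigma = a_log.mean(), a_log.std() + 0.1          # crude initial values
    out = []
    for _ in range(n_draws):
        # 1) impute latent log-lifetimes from truncated normals (E-like step)
        a = (a_log - mu) / sigma
        b = (b_log - mu) / sigma
        y = truncnorm.rvs(a, b, loc=mu, scale=sigma, random_state=rng)
        # 2) draw mu | sigma^2, y  and  sigma^2 | mu, y  from their conditionals
        mu = rng.normal(y.mean(), sigma / np.sqrt(n))
        ss = np.sum((y - mu) ** 2)
        sigma = np.sqrt(1.0 / rng.gamma(n / 2.0, 2.0 / ss))   # inverse-gamma draw
        out.append((mu, sigma))
    return np.array(out)

rng = np.random.default_rng(1)
t = rng.lognormal(mean=1.5, sigma=0.4, size=150)
lo = np.maximum(np.floor(t * 2) / 2, 1e-6)               # 0.5-wide inspection intervals
hi = lo + 0.5
draws = gibbs_interval_lognormal(lo, hi)
print(draws[500:].mean(axis=0))                          # posterior means after burn-in
```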

4.
The maximum likelihood estimates (MLEs) of the parameters of a two-parameter lognormal distribution with left truncation and right censoring are developed through the expectation-maximization (EM) algorithm. For comparative purposes, the MLEs are also obtained by the Newton-Raphson method. The asymptotic variance-covariance matrix of the MLEs is obtained by using the missing information principle under the EM framework. Then, using the asymptotic normality of the MLEs, asymptotic confidence intervals for the parameters are constructed. Asymptotic confidence intervals are also obtained using the estimated variance of the MLEs from the observed information matrix, and by using the parametric bootstrap technique. The different confidence intervals are then compared in terms of coverage probabilities through a Monte Carlo simulation study. A prediction problem concerning the future lifetime of a right censored unit is also considered. A numerical example is given to illustrate all the inferential methods developed here.
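A percentile parametric bootstrap interval of the kind mentioned above can be sketched as follows for the simpler complete-data lognormal case (no truncation or censoring); the confidence level and bootstrap size are illustrative choices.

```python
import numpy as np

def lognormal_mle(t):
    """MLE of (mu, sigma) for a complete lognormal sample."""
    y = np.log(t)
    return y.mean(), y.std()               # np.std uses 1/n, matching the MLE

def bootstrap_ci(t, level=0.95, n_boot=2000, seed=0):
    """Parametric percentile bootstrap CI for the lognormal parameters."""
    rng = np.random.default_rng(seed)
    mu_hat, sig_hat = lognormal_mle(t)
    boots = np.array([lognormal_mle(rng.lognormal(mu_hat, sig_hat, size=len(t)))
                      for _ in range(n_boot)])
    alpha = (1 - level) / 2
    return np.quantile(boots, [alpha, 1 - alpha], axis=0)   # rows: lower, upper

rng = np.random.default_rng(42)
sample = rng.lognormal(mean=2.0, sigma=0.4, size=80)
print(bootstrap_ci(sample))                # columns: mu, sigma
```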

5.
In this article, we consider a competing cause scenario and assume the wider family of Conway-Maxwell-Poisson (COM-Poisson) distributions to model the number of competing causes. Assuming the data to be interval censored, the main contribution is in developing the steps of the expectation-maximization (EM) algorithm to determine the maximum likelihood estimates (MLEs) of the model parameters. A profile likelihood approach within the EM framework is proposed to estimate the COM-Poisson shape parameter. An extensive simulation study is conducted to evaluate the performance of the proposed EM algorithm. Model selection within the wider class of COM-Poisson distributions is carried out using the likelihood ratio test and information-based criteria. A study to demonstrate the effect of model mis-specification is also carried out. Finally, the proposed estimation method is applied to data on smoking cessation, and a detailed analysis of the obtained results is presented.
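For readers unfamiliar with the COM-Poisson family, the sketch below evaluates its probability mass function, whose normalizing constant has no closed form and is truncated after j_max terms here; j_max and the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import poisson

def com_poisson_pmf(y, lam, nu, j_max=200):
    """COM-Poisson pmf P(Y=y) = lam^y / ((y!)^nu * Z), with the normalizing
    constant Z truncated at j_max terms (adequate unless nu is very small)."""
    j = np.arange(j_max)
    log_terms = j * np.log(lam) - nu * gammaln(j + 1)
    log_z = np.logaddexp.reduce(log_terms)          # log of the truncated sum
    return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - log_z)

# nu = 1 recovers the Poisson; nu > 1 is under-dispersed, nu < 1 over-dispersed
print(com_poisson_pmf(3, lam=2.0, nu=1.0), poisson.pmf(3, 2.0))
```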

6.
Based on progressive Type-I hybrid censored data, statistical analysis in a constant-stress accelerated life test (CS-ALT) for the generalized exponential (GE) distribution is discussed. The maximum likelihood estimates (MLEs) of the parameters and the reliability function are obtained with the EM algorithm, as well as the observed Fisher information matrix, the asymptotic variance-covariance matrix of the MLEs, and the asymptotically unbiased estimate (AUE) of the scale parameter. Confidence intervals (CIs) for the parameters are derived using the asymptotic normality of the MLEs and the percentile bootstrap (Boot-p) method. Finally, the point estimates and interval estimates of the parameters are compared separately through a Monte Carlo simulation study.

7.
This paper gives a comparative study of the K-means algorithm and the mixture model (MM) method for clustering normal data. The EM algorithm is used to compute the maximum likelihood estimators (MLEs) of the parameters of the MM model. These parameters include mixing proportions, which may be thought of as the prior probabilities of different clusters; the maximum posterior (Bayes) rule is used for clustering. Hence, asymptotically the MM method approaches the Bayes rule for known parameters, which is optimal in terms of minimizing the expected misclassification rate (EMCR).
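A quick illustration of this comparison using scikit-learn (not the paper's code): the Gaussian mixture is fitted by EM and clustered with the posterior (Bayes) rule, while K-means imposes a spherical, equal-weight partition. The simulated cluster shapes and sizes below are chosen arbitrarily to make the difference visible.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# two elongated, unequally weighted normal clusters: a case where the
# mixture model's Bayes rule tends to beat the spherical K-means partition
n1, n2 = 300, 100
x1 = rng.multivariate_normal([0, 0], [[4.0, 1.8], [1.8, 1.0]], size=n1)
x2 = rng.multivariate_normal([3, 0], [[4.0, 1.8], [1.8, 1.0]], size=n2)
X = np.vstack([x1, x2])
labels = np.r_[np.zeros(n1), np.ones(n2)]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
gm = GaussianMixture(n_components=2, covariance_type="full",
                     random_state=0).fit(X)          # EM under the hood

print("K-means ARI :", adjusted_rand_score(labels, km.labels_))
print("mixture ARI :", adjusted_rand_score(labels, gm.predict(X)))
```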

8.
Estimators derived from the expectation-maximization (EM) algorithm are not robust, since they are based on the maximization of the likelihood function. We propose an iterative proximal-point algorithm based on the EM algorithm to minimize a divergence criterion between a mixture model and the unknown distribution that generates the data. In each iteration, the algorithm estimates the proportions and the parameters of the mixture components in two separate steps. The resulting estimators are generally robust against outliers and misspecification of the model. Convergence properties of our algorithm are studied. The convergence of the proposed algorithm is discussed for a two-component Weibull mixture, entailing a condition on the initialization of the EM algorithm in order for the latter to converge. Simulations on Gaussian and Weibull mixture models using different statistical divergences are provided to confirm the validity of our work and the robustness of the resulting estimators against outliers in comparison with the EM algorithm. An application to a dataset of velocities of galaxies is also presented. The Canadian Journal of Statistics 47: 392-408; 2019 © 2019 Statistical Society of Canada

9.
In this article, we propose mixtures of skew Laplace normal (SLN) distributions to model both skewness and heavy-tailedness in heterogeneous data sets, as an alternative to mixtures of skew Student-t-normal (STN) distributions. We give the expectation-maximization (EM) algorithm to obtain the maximum likelihood (ML) estimators for the parameters of interest. We also analyze the mixture regression model based on the SLN distribution and provide the ML estimators of the parameters using the EM algorithm. The performance of the proposed mixture model is illustrated by a simulation study and two real data examples.

10.
We propose a method for estimating parameters in generalized linear models with missing covariates and a non-ignorable missing data mechanism. We use a multinomial model for the missing data indicators and propose a joint distribution for them which can be written as a sequence of one-dimensional conditional distributions, with each one-dimensional conditional distribution consisting of a logistic regression. We allow the covariates to be either categorical or continuous. The joint covariate distribution is also modelled via a sequence of one-dimensional conditional distributions, and the response variable is assumed to be completely observed. We derive the E- and M-steps of the EM algorithm with non-ignorable missing covariate data. For categorical covariates, we derive a closed form expression for the E- and M-steps of the EM algorithm for obtaining the maximum likelihood estimates (MLEs). For continuous covariates, we use a Monte Carlo version of the EM algorithm to obtain the MLEs via the Gibbs sampler. Computational techniques for Gibbs sampling are proposed and implemented. The parametric form of the assumed missing data mechanism itself is not "testable" from the data, and thus the non-ignorable modelling considered here can be viewed as a sensitivity analysis concerning a more complicated model. Therefore, although a model may have "passed" the tests for a certain missing data mechanism, this does not mean that we have captured, even approximately, the correct missing data mechanism. Hence, model checking for the missing data mechanism and sensitivity analyses play an important role in this problem and are discussed in detail. Several simulations are given to demonstrate the methodology. In addition, a real data set from a melanoma cancer clinical trial is presented to illustrate the methods proposed.

11.
This article focuses on parameter estimation for experimental items/units from the Weibull-Poisson model under progressive Type-II censoring with binomial removals (PT-II CBRs). The expectation-maximization algorithm is used to obtain the maximum likelihood estimates (MLEs). The MLEs and Bayes estimators are obtained under symmetric and asymmetric loss functions, and the performance of the competing estimators is studied through their simulated risks. One-sample Bayes prediction and the expected experiment time are also studied. Furthermore, the suitability of the considered model and the proposed methodology is illustrated with a real bladder cancer data set.
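The removal mechanism itself is easy to simulate. The sketch below draws a progressive Type-II censored sample with binomial removals using one common rule, R_i ~ Bin(n - m - R_1 - ... - R_{i-1}, p); plain Weibull lifetimes are used instead of the paper's Weibull-Poisson model, so this illustrates the censoring scheme only.

```python
import numpy as np

def pt2_binomial_removals(lifetimes, m, p, seed=0):
    """Simulate a progressive Type-II censored sample with binomial removals:
    at the i-th failure (i = 1, ..., m-1), R_i ~ Bin(n - m - sum of earlier R's, p)
    surviving units are withdrawn at random; all the rest are removed at the m-th failure."""
    rng = np.random.default_rng(seed)
    alive = np.sort(np.asarray(lifetimes, dtype=float))
    n = len(alive)
    observed, removals, removed_so_far = [], [], 0
    for i in range(m):
        observed.append(alive[0])                    # smallest remaining lifetime fails
        alive = alive[1:]
        if i < m - 1:
            r = rng.binomial(n - m - removed_so_far, p)
        else:
            r = n - m - removed_so_far               # withdraw everything at the end
        drop = rng.choice(len(alive), size=r, replace=False)
        alive = np.delete(alive, drop)
        removals.append(r)
        removed_so_far += r
    return np.array(observed), np.array(removals)

rng = np.random.default_rng(1)
t = rng.weibull(1.5, size=30) * 2.0                  # Weibull lifetimes, for illustration
obs, rem = pt2_binomial_removals(t, m=12, p=0.2)
print(obs, rem)
```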

12.
For multivariate normal data with non-monotone (i.e. arbitrary) missing data patterns, lattice conditional independence (LCI) models determined by the observed data patterns can be used to obtain closed-form MLEs (Andersson and Perlman, 1991, 1993). In this paper, three procedures, namely LCI models, the EM algorithm, and the complete-data method, are compared by means of a Monte Carlo experiment. When the LCI model is accepted by the LR test, the LCI estimate is more efficient than those based on the EM algorithm and the complete-data method. When the LCI model is not accepted, the LCI estimate may lose efficiency, but may still be more efficient than the EM estimate if the observed data are sparse. When the LCI model appears too restrictive, it may be possible to obtain a less restrictive LCI model by discarding only a small portion of the incomplete observations. LCI models appear to be especially useful when the observed data are sparse, even in cases where the suitability of the LCI model is uncertain.

13.
This article focuses on data analyses under the scenario of missing at random within discrete-time Markov chain models. The naive method, the nonlinear (NL) method, and the expectation-maximization (EM) algorithm are discussed. We extend the NL method into a Bayesian framework, using an adjusted rejection algorithm to sample the posterior distribution and estimating the transition probabilities with a Monte Carlo algorithm. We compare the Bayesian nonlinear (BNL) method with the naive method and the EM algorithm under various missing rates, and comprehensively evaluate the estimators in terms of biases, variances, mean square errors, and coverage probabilities (CPs). Our simulation results show that the EM algorithm usually offers the smallest variances but the poorest CP, while the BNL method has smaller variances and better or similar CP compared to the naive method. When the missing rate is low (about 9%, MAR), the three methods are comparable; when the missing rate is high (about 25%, MAR), the BNL method overall performs slightly but consistently better than the naive method regarding variances and CP. Data from a longitudinal study of stress levels among caregivers of individuals with Alzheimer's disease are used to illustrate these methods.

14.
We propose an iterative method of estimation for discrete missing data problems that is conceptually different from the expectation-maximization (EM) algorithm and that does not in general yield the observed data maximum likelihood estimate (MLE). The proposed approach is based conceptually upon weighting the set of possible complete-data MLEs. Its implementation avoids the expectation step of EM, which can sometimes be problematic. In the simple case of Bernoulli trials missing completely at random, the iterations of the proposed algorithm are equivalent to the EM iterations. For a familiar genetics-oriented multinomial problem with missing count data and for the motivating example with epidemiologic applications that involves a mixture of a left-censored normal distribution with a point mass at zero, we investigate the finite sample performance of the proposed estimator and find it to be competitive with that of the MLE. We give some intuitive justification for the method, and we explore an interesting connection between our algorithm and multiple imputation in order to suggest an approach for estimating standard errors.

15.
In the lifetime analysis of electric transformers, maximum likelihood estimation has been proposed with the EM algorithm. However, it is not clear whether the EM algorithm offers a better solution than the simpler Newton-Raphson (NR) algorithm. In this article, the first objective is a systematic comparison of the EM algorithm with the NR algorithm in terms of convergence performance. The second objective is to examine, via simulations, the performance of Akaike's information criterion (AIC) for selecting a suitable distribution among candidate models. These methods are illustrated through the electric power transformer dataset.
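AIC-based selection among candidate lifetime distributions can be sketched with scipy's generic maximum likelihood fitting; the candidate set, the zero location parameters held fixed, and the synthetic data are assumptions, and this is a complete-data illustration rather than the transformer analysis itself.

```python
import numpy as np
from scipy import stats

def aic(dist, data, **fixed):
    """AIC = 2k - 2*loglik; parameters fixed via fXXX keywords do not count in k."""
    params = dist.fit(data, **fixed)
    k = len(params) - len(fixed)
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * k - 2 * loglik

rng = np.random.default_rng(3)
data = rng.weibull(2.0, size=200) * 10.0           # synthetic "lifetimes"

candidates = {
    "weibull":   (stats.weibull_min, dict(floc=0)),   # location fixed at 0
    "lognormal": (stats.lognorm,     dict(floc=0)),
    "gamma":     (stats.gamma,       dict(floc=0)),
}
for name, (dist, kw) in candidates.items():
    print(name, round(aic(dist, data, **kw), 1))      # smaller AIC is preferred
```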

16.
This article puts forward a new method for solving linear quantile regression problems based on the EM algorithm, using a location-scale mixture representation of the asymmetric Laplace error distribution. A closed-form expression for the EM-based estimator of the unknown parameter vector β is obtained. In addition, simulations are conducted to illustrate the performance of the proposed method, and the results demonstrate that the proposed algorithm performs well. Finally, the classical Engel data are fitted and bootstrap confidence intervals for the estimators are provided.
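As a benchmark for such an estimator, standard check-loss quantile regression on the Engel data is readily available in statsmodels; the sketch below is not the EM-based method of the paper, only the conventional fit one would compare it against.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Engel food-expenditure data shipped with statsmodels
data = sm.datasets.engel.load_pandas().data

# standard check-loss quantile regression, useful as a benchmark for an
# EM-based estimator of beta at several quantile levels
for q in (0.25, 0.5, 0.75):
    res = smf.quantreg("foodexp ~ income", data).fit(q=q)
    print(q, res.params.values.round(3))
```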

17.
In most applications, the parameters of a mixture of linear regression models are estimated by maximum likelihood using the expectation maximization (EM) algorithm. In this article, we propose the comparison of three algorithms to compute maximum likelihood estimates of the parameters of these models: the EM algorithm, the classification EM algorithm and the stochastic EM algorithm. The comparison of the three procedures was done through a simulation study of the performance (computational effort, statistical properties of estimators and goodness of fit) of these approaches on simulated data sets.

Simulation results show that the choice of the approach depends essentially on the configuration of the true regression lines and the initialization of the algorithms.
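A compact implementation of the plain (non-classification, non-stochastic) EM variant for a mixture of linear regressions is sketched below; the two-component simulated data, the random starting values, and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

def em_mixreg(X, y, K=2, n_iter=200, seed=0):
    """EM for a K-component mixture of linear regressions with normal errors.
    X should already contain an intercept column."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = rng.normal(size=(K, p))
    sigma = np.full(K, y.std())
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: posterior component probabilities (responsibilities)
        resid = y[:, None] - X @ beta.T                              # n x K residuals
        logdens = (np.log(pi) - np.log(sigma)
                   - 0.5 * np.log(2 * np.pi) - 0.5 * (resid / sigma) ** 2)
        logdens -= logdens.max(axis=1, keepdims=True)
        gamma = np.exp(logdens)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: weighted least squares for each component
        for k in range(K):
            w = gamma[:, k]
            Xw = X * w[:, None]
            beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
            sigma[k] = np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum())
        pi = gamma.mean(axis=0)
    return beta, sigma, pi

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=400)
z = rng.random(400) < 0.6                                            # latent labels
y = np.where(z, 1.0 + 2.0 * x, 8.0 - 1.0 * x) + rng.normal(0, 1.0, 400)
X = np.column_stack([np.ones_like(x), x])
print(em_mixreg(X, y))      # note: component labels may come back in either order
```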

18.
In this paper, we study estimation and inference for a class of semiparametric mixtures of partially linear models. We prove that the proposed models are identifiable under mild conditions, and then give a profile-likelihood EM (PL-EM) algorithm estimation procedure. The asymptotic properties of the resulting estimators and the ascent property of the PL-EM algorithm are investigated. Furthermore, we develop a test statistic for testing whether the nonparametric component has a linear structure. Monte Carlo simulations and a real data application illustrate the usefulness of the proposed procedures.

19.
In this paper, we consider the analysis of hybrid censored competing risks data, based on Cox's latent failure time model assumptions. It is assumed that the lifetime distributions of the latent causes of failure are Weibull with a common shape parameter but different scale parameters. Maximum likelihood estimators (MLEs) of the unknown parameters can be obtained by solving a one-dimensional optimization problem, and we propose a fixed-point type algorithm to solve this optimization problem. Approximate MLEs, which have explicit expressions, are proposed based on a Taylor series expansion. Bayesian inference for the unknown parameters is obtained under the assumption that the shape parameter has a log-concave prior density and that, given the shape parameter, the scale parameters have Beta-Gamma priors. We propose to use Markov chain Monte Carlo samples to compute Bayes estimates and also to construct highest posterior density credible intervals. Monte Carlo simulations are performed to investigate the performance of the different estimators, and two data sets are analysed for illustrative purposes.

20.
The lognormal distribution is quite commonly used as a lifetime distribution. Data arising from life-testing and reliability studies are often left truncated and right censored. Here, the EM algorithm is used to estimate the parameters of the lognormal model based on left truncated and right censored data. The maximization step of the algorithm is carried out by two alternative methods: one involving approximation using a Taylor series expansion (leading to an approximate maximum likelihood estimate) and the other based on the EM gradient algorithm (Lange, 1995). These two methods are compared through Monte Carlo simulations. The Fisher scoring method for obtaining the maximum likelihood estimates exhibits convergence problems in this setup, except when the truncation percentage is small. The asymptotic variance-covariance matrix of the MLEs is derived by using the missing information principle (Louis, 1982), and asymptotic confidence intervals for the scale and shape parameters are then obtained and compared with the corresponding bootstrap confidence intervals. Finally, some numerical examples are given to illustrate all the methods of inference developed here.
