Similar Documents (20 results)
1.
We consider estimation of unknown parameters and reliability characteristics of a Burr type-III distribution under progressive censoring. Predictive estimates for censored observations and the associated prediction intervals are also obtained. We derive maximum-likelihood estimators of the unknown quantities using the EM algorithm and then also obtain the observed Fisher information matrix. We provide various Bayes estimators for the unknown parameters under the squared error loss function. Highest posterior density and asymptotic intervals are also constructed. We evaluate the performance of the proposed methods using simulations. Finally, an illustrative example is presented in support of the methods discussed.
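The EM strategy used in several of these abstracts — impute what censoring hides in the E-step, refit by complete-data maximum likelihood in the M-step — can be illustrated on a deliberately simple case. The sketch below applies it to right-censored exponential data rather than the Burr type-III/progressive-censoring setting of the paper; the function name and simulation settings are illustrative only.

```python
import random

def em_exponential_censored(obs, cens, iters=200):
    """EM for the rate of an exponential lifetime model with right censoring.

    obs  : fully observed failure times
    cens : censoring times (the true lifetime exceeds each of these)
    """
    n = len(obs) + len(cens)
    lam = 1.0  # arbitrary starting value for the rate
    for _ in range(iters):
        # E-step: expected lifetime of a censored unit, given survival past c
        imputed = [c + 1.0 / lam for c in cens]
        # M-step: complete-data MLE of the rate
        lam = n / (sum(obs) + sum(imputed))
    return lam

# Illustrative simulation: true rate 2, all units censored at time 0.6
random.seed(1)
data = [random.expovariate(2.0) for _ in range(2000)]
cut = 0.6
obs = [t for t in data if t <= cut]
cens = [cut] * (len(data) - len(obs))
lam_hat = em_exponential_censored(obs, cens)
```

By memorylessness, the conditional expectation of a lifetime censored at c is c + 1/λ, so the fixed point of this iteration coincides with the censored-data MLE (number of failures divided by total time on test).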

2.
We consider estimation of the unknown parameters of a Burr XII distribution based on progressively Type-I hybrid censored data. The maximum likelihood estimates are obtained using an expectation–maximization algorithm. Asymptotic interval estimates are constructed from the Fisher information matrix. We obtain Bayes estimates under the squared error loss function using the Lindley method and the Metropolis–Hastings algorithm. The predictive estimates of censored observations are obtained and the corresponding prediction intervals are also constructed. We compare the performance of the different methods using simulations. Two real datasets are analyzed for illustrative purposes.

3.
The inverse Gaussian distribution has been used widely as a model in analysing lifetime data. In this regard, estimation of the parameters of the two-parameter (IG2) and three-parameter (IG3) inverse Gaussian distributions based on complete and censored samples has been discussed in the literature. In this paper, we develop estimation methods based on progressively Type-II censored samples from the IG3 distribution. In particular, we use the EM algorithm, as well as some other numerical methods, for determining the maximum-likelihood estimates (MLEs) of the parameters. The asymptotic variances and covariances of the MLEs from the EM algorithm are derived by using the missing information principle. We also consider some simplified alternative estimators. The inferential methods developed are then illustrated with some numerical examples. We also discuss interval estimation of the parameters based on large-sample theory and examine the true coverage probabilities of these confidence intervals for small samples by means of Monte Carlo simulations.

4.
In this article, we deal with a two-parameter exponentiated half-logistic distribution. We consider the estimation of the unknown parameters, the associated reliability function and the hazard rate function under progressive Type-II censoring. Maximum likelihood estimates (MLEs) are proposed for the unknown quantities. Bayes estimates are derived with respect to squared error, LINEX and entropy loss functions. Approximate explicit expressions for all Bayes estimates are obtained using the Lindley method. We also use an importance sampling scheme to compute the Bayes estimates. Markov chain Monte Carlo samples are further used to produce credible intervals for the unknown parameters. Asymptotic confidence intervals are constructed using the normality property of the MLEs. For comparison purposes, bootstrap-p and bootstrap-t confidence intervals are also constructed. A comprehensive numerical study is performed to compare the proposed estimates. Finally, a real-life data set is analysed to illustrate the proposed methods of estimation.

5.
In this study we investigate the problem of estimation and testing of hypotheses in multivariate linear regression models when the errors involved are assumed to be non-normally distributed. We consider the class of heavy-tailed distributions for this purpose. Although our method is applicable to any distribution in this class, we take the multivariate t-distribution for illustration. This distribution has applications in many fields of applied research such as economics, business, and finance. For estimation purposes, we use the modified maximum likelihood method to obtain the so-called modified maximum likelihood estimates, which are available in closed form. We show that these estimates are substantially more efficient than least-squares estimates. They are also found to be robust to reasonable deviations from the assumed distribution and to data anomalies such as the presence of outliers in the sample. We further provide test statistics for testing the relevant hypotheses regarding the regression coefficients.

6.
We consider point and interval estimation of the unknown parameters of a generalized inverted exponential distribution in the presence of hybrid censoring. The maximum likelihood estimates are obtained using the EM algorithm. We then compute the Fisher information matrix using the missing information principle. Bayes estimates are derived under squared error and general entropy loss functions. Furthermore, approximate Bayes estimates are obtained using the Tierney–Kadane method as well as an importance sampling approach. Asymptotic and highest posterior density intervals are also constructed. The proposed estimates are compared numerically using Monte Carlo simulations and a real data set is analyzed for illustrative purposes.

7.
We consider estimation of the unknown parameters of the Chen distribution [Chen Z. A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function. Statist Probab Lett. 2000;49:155–161] with bathtub shape using progressively censored samples. We obtain maximum likelihood estimates by making use of an expectation–maximization algorithm. Different Bayes estimates are derived under squared error and balanced squared error loss functions. Since the associated posterior distribution appears in an intractable form, we use an approximation method to compute these estimates. A Metropolis–Hastings algorithm is also proposed and some further approximate Bayes estimates are obtained. An asymptotic confidence interval is constructed using the observed Fisher information matrix, and bootstrap intervals are proposed as well. Samples generated from the MH algorithm are further used in the construction of HPD intervals. We also obtain prediction intervals and estimates for future observations in one- and two-sample situations. A numerical study is conducted to compare the performance of the proposed methods using simulations. Finally, we analyse real data sets for illustrative purposes.

8.
This paper describes Bayesian inference and prediction for the two-parameter Weibull distribution when the data are Type-II censored. The aim of this paper is twofold. First, we consider Bayesian inference of the unknown parameters under different loss functions. The Bayes estimates cannot be obtained in closed form, so we use a Gibbs sampling procedure to draw Markov chain Monte Carlo (MCMC) samples, which are then used to compute the Bayes estimates and to construct symmetric credible intervals. Second, we consider Bayesian prediction of the future order statistics based on the observed sample. We consider the posterior predictive density of the future observations and also construct a predictive interval with a given coverage probability. Monte Carlo simulations are performed to compare the different methods and one data analysis is performed for illustrative purposes.
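As a concrete illustration of the MCMC machinery such papers rely on, here is a minimal random-walk Metropolis sampler (rather than the paper's Gibbs procedure) for the posterior of the Weibull shape k and scale s under Type-II censoring, assuming a flat prior on (log k, log s); all names and tuning constants are illustrative.

```python
import math, random

def loglik_weibull_type2(k, s, x_obs, n):
    """Type-II censored Weibull log-likelihood: the r smallest of n
    lifetimes are observed; the remaining n - r survive past x_(r)."""
    r = len(x_obs)
    ll = sum(math.log(k / s) + (k - 1) * math.log(x / s) - (x / s) ** k
             for x in x_obs)
    return ll - (n - r) * (max(x_obs) / s) ** k

def mh_weibull(x_obs, n, steps=5000, seed=0):
    """Random-walk Metropolis on (log k, log s); with a flat prior on the
    logs the proposal is symmetric, so acceptance uses only the likelihood."""
    rng = random.Random(seed)
    k = s = 1.0
    cur = loglik_weibull_type2(k, s, x_obs, n)
    draws = []
    for _ in range(steps):
        kp = k * math.exp(rng.gauss(0.0, 0.1))  # multiplicative proposal
        sp = s * math.exp(rng.gauss(0.0, 0.1))
        cand = loglik_weibull_type2(kp, sp, x_obs, n)
        if math.log(rng.random()) < cand - cur:  # Metropolis acceptance
            k, s, cur = kp, sp, cand
        draws.append((k, s))
    return draws

# Simulate n = 200 Weibull(shape 2, scale 1) lifetimes; observe the 150 smallest
random.seed(1)
full = sorted((-math.log(random.random())) ** 0.5 for _ in range(200))
draws = mh_weibull(full[:150], 200)
k_post = sum(d[0] for d in draws[2000:]) / len(draws[2000:])  # posterior mean of k
```

Discarding the first 2000 draws as burn-in, the posterior mean of the shape should sit near the data-generating value of 2.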

9.
In this paper we consider the problems of estimation and prediction when the observed data from a lognormal distribution are based on lower record values and lower record values with inter-record times. We compute maximum likelihood estimates and asymptotic confidence intervals for the model parameters. We also obtain Bayes estimates and the highest posterior density (HPD) intervals using noninformative and informative priors under squared error and LINEX loss functions. Furthermore, for the problem of Bayesian prediction under the one-sample and two-sample frameworks, we obtain predictive estimates and the associated predictive equal-tail and HPD intervals. Finally, for illustrative purposes a real data set is analyzed and a simulation study is conducted to compare the methods of estimation and prediction.

10.
Based on a progressively Type-I interval censored sample, the problem of estimating the unknown parameters of a two-parameter generalized half-normal (GHN) distribution is considered. Different methods of estimation are discussed. They include maximum likelihood estimation, the midpoint approximation method, approximate maximum likelihood estimation, the method of moments, and estimation based on a probability plot. Several Bayes estimates with respect to different symmetric and asymmetric loss functions, such as squared error, LINEX, and general entropy, are calculated. Lindley's approximation method is applied to determine the Bayes estimates. Monte Carlo simulations are performed to compare the performances of the different methods. Finally, an analysis is also carried out for a real dataset.

11.
We consider the estimation of the 90th and 95th percentiles of a normal distribution and also the construction of one-sided 90% and 95% normal ranges. Three methods are proposed: the sample percentile method, and two based on kernel estimates of the density function, using Fryer's method and the leaving-one-out method for choosing the smoothing parameter.

A simulation study compares the methods in terms of bias, variance and mean square error of the population percentile estimates and of the coverage of the consequent normal ranges.
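Of the three methods, the sample percentile method is the simplest to state. A minimal version (linear interpolation between order statistics, as commonly defined, not necessarily the exact variant compared in the paper) is:

```python
import math, random

def sample_percentile(data, p):
    """Sample percentile by linear interpolation between order statistics."""
    xs = sorted(data)
    h = (len(xs) - 1) * p          # fractional position in the sorted sample
    lo = math.floor(h)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (h - lo) * (xs[hi] - xs[lo])

# Estimate the normal 95th percentile (true value about 1.645)
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(20000)]
p95 = sample_percentile(data, 0.95)
```

A one-sided 95% normal range is then simply the interval below the estimated 95th percentile.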

12.
Two methods of estimation, based on a non-linear least-squares approach, are presented for the parameters of an AR(1) process. On the basis of simulation results they are compared with two maximum likelihood estimates and their relative merits are discussed.
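The paper's specific non-linear least-squares schemes are not detailed in the abstract, but the baseline conditional least-squares estimate of the AR(1) coefficient, against which such methods are usually judged, is a one-line regression of x_t on x_{t-1}:

```python
import random

def ar1_cls(x):
    """Conditional least-squares estimate of phi in x_t = phi x_{t-1} + e_t
    (zero-mean case): regress x_t on x_{t-1} without an intercept."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

# Simulate an AR(1) path with phi = 0.6 and standard normal innovations
random.seed(3)
phi, x = 0.6, [0.0]
for _ in range(5000):
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))
phi_hat = ar1_cls(x)
```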

13.
In this paper, we consider the estimation of reliability in the multicomponent stress–strength (MSS) model when both the stress and strengths are drawn from the Topp–Leone (TL) distribution. The maximum likelihood (ML) and Bayesian methods are used in the estimation procedure. Bayesian estimates are obtained by using Lindley's approximation and Gibbs sampling methods, since they cannot be obtained in explicit form in the context of the TL distribution. Asymptotic confidence intervals are constructed based on the ML estimators, and Bayesian credible intervals are constructed using Gibbs sampling. The reliability estimates are compared via an extensive Monte Carlo simulation study. Finally, a real data set is analysed for illustrative purposes.

14.
Some work has been done in the past on estimation for the three-parameter gamma distribution based on complete and censored samples. In this paper, we develop estimation methods based on progressively Type-II censored samples from a three-parameter gamma distribution. In particular, we develop some iterative methods for the determination of the maximum likelihood estimates (MLEs) of all three parameters, and show that the proposed iterative scheme converges to the MLEs. In this context, we propose another method of estimation based on the missing information principle and moment estimators. Simple alternatives to the above two methods are also suggested. The proposed estimation methods are then illustrated with a numerical example. We also consider interval estimation based on large-sample theory and examine the actual coverage probabilities of these confidence intervals for small samples using a Monte Carlo simulation study.

15.
We consider the problem of density estimation when the data arrive as a continuous stream with no fixed length. In this setting, implementations of the usual methods of density estimation, such as kernel density estimation, are problematic. We propose a method of density estimation for massive datasets that is based upon taking the derivative of a smooth curve that has been fit through a set of quantile estimates. To achieve this, a low-storage, single-pass, sequential method is proposed for simultaneous estimation of multiple quantiles for massive datasets; these quantile estimates form the basis of the density estimator. For comparison, we also consider a sequential kernel density estimator. The proposed methods are shown through a simulation study to perform well and to have several distinct advantages over existing methods.
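To give a flavor of what "low-storage, single-pass" quantile estimation looks like, the stochastic-approximation recursion below tracks one quantile in O(1) memory. This is a generic stand-in, not the paper's algorithm, which estimates multiple quantiles simultaneously.

```python
import random

class StreamingQuantile:
    """Single-pass quantile tracker via stochastic approximation:
    q <- q + c_n * (p - 1{x <= q}), with gains c_n = n^(-3/4) chosen so
    that sum c_n diverges while sum c_n^2 converges."""
    def __init__(self, p, q0=0.0):
        self.p, self.q, self.n = p, q0, 0

    def update(self, x):
        self.n += 1
        gain = self.n ** -0.75
        # Nudge the estimate up when x exceeds it, down otherwise
        self.q += gain * (self.p - (1.0 if x <= self.q else 0.0))
        return self.q

# Track the median of a standard normal stream from a poor starting value
random.seed(7)
sq = StreamingQuantile(0.5, q0=-2.0)
for _ in range(100000):
    sq.update(random.gauss(0.0, 1.0))
```

After 100,000 stream elements the estimate settles near the true median of 0, having stored only three numbers throughout.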

16.
The generalized half-normal (GHN) distribution and progressive Type-II censoring are considered in this article for studying statistical inference in constant-stress accelerated life testing. The EM algorithm is used to calculate the maximum likelihood estimates. The Fisher information matrix is formed using the missing information principle and is utilized for constructing asymptotic confidence intervals. Further, interval estimation is discussed through bootstrap intervals. The Tierney–Kadane method, an importance sampling procedure and the Metropolis–Hastings algorithm are utilized to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the related prediction intervals are obtained. We consider three optimality criteria to find the optimal stress level. A real data set is used to illustrate the value of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided with discussion.

17.
The beauty of the Bayesian approach is its ability to structure complicated models, inferential goals and analyses. To take full advantage of it, methods should be linked to an inferential goal via a loss function. For example, in the two-stage, compound sampling model the posterior means are optimal under squared error loss. However, they can perform poorly in estimating the histogram of the parameters or in ranking them. 'Triple-goal' estimates are motivated by the desire to have a set of estimates that produce good ranks, a good parameter histogram and good co-ordinate-specific estimates. No set of estimates can simultaneously optimize these three goals and we seek a set that strikes an effective trade-off. We evaluate and compare three candidate approaches: the posterior means, the constrained Bayes estimates of Louis and Ghosh, and a new approach that optimizes estimation of the histogram and the ranks. Mathematical and simulation-based analyses support the superiority of the new approach and document its excellent performance for the three inferential goals.

18.
We consider the problem of making inferences about extreme values from a sample. The underlying model distribution is the generalized extreme-value (GEV) distribution, and our interest is in estimating the parameters and quantiles of the distribution robustly. In doing this we find estimates for the GEV parameters based on that part of the data which is well fitted by a GEV distribution. The robust procedure assigns weights between 0 and 1 to each data point. A weight near 0 indicates that the data point is not well modelled by the GEV distribution which fits the points with weights at or near 1. On the basis of these weights we are able to assess the validity of a GEV model for our data. It is important that the observations with low weights be carefully assessed to determine whether they are valid observations or not. If they are, we must examine whether our data could be generated by a mixture of GEV distributions or whether some other process is involved in generating the data. This will require careful consideration of the subject-matter area which led to the data. The robust estimation techniques are based on optimal B-robust estimates. Their performance is compared to that of the probability-weighted moment estimates of Hosking et al. (1985) on both simulated and real data.

19.
In this work, we study the maximum likelihood (ML) estimation problem for the parameters of the two-piece (TP) distribution based on the scale mixtures of normal (SMN) distributions. This is a family of skewed distributions that includes the scale mixtures of normal class itself, and is flexible enough for modeling both symmetric and asymmetric data. The ML estimates of the proposed model parameters are obtained via an expectation–maximization (EM)-type algorithm.

20.
In this paper we consider the problem of estimating the parameters of the generalized Pareto distribution. Neither the method of moments nor probability-weighted moments guarantees that the resulting estimates will be consistent with the observed data. We present simple programs to predict the probability of obtaining such nonfeasible estimates. Our estimation techniques are based on results from intensive simulations and on successful modelling of the lower tail of the distribution of the upper bound of the support. Further simulations are performed to validate the new procedure.
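The feasibility issue the abstract refers to is easy to exhibit with the method of moments: when the shape estimate is negative, the fitted GPD has a finite upper endpoint, which the sample maximum may exceed. A sketch, using the usual GPD parameterization and illustrative simulation settings:

```python
import random

def gpd_mom(x):
    """Method-of-moments estimates (xi, sigma) for the generalized Pareto
    distribution, using mean = sigma/(1 - xi) and
    var = sigma^2 / ((1 - xi)^2 (1 - 2 xi))."""
    n = len(x)
    m = sum(x) / n
    v = sum((t - m) ** 2 for t in x) / (n - 1)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = m * (1.0 - xi)
    return xi, sigma

def feasible(xi, sigma, x):
    """For xi < 0 the support is bounded above by sigma / (-xi); the fit is
    nonfeasible if any observation exceeds that endpoint."""
    return xi >= 0 or max(x) <= sigma / (-xi)

# Simulate GPD(xi = -0.25, sigma = 1) by inverse transform:
# Q(u) = sigma/xi * ((1 - u)^(-xi) - 1)
random.seed(5)
xi0, s0 = -0.25, 1.0
data = [s0 / xi0 * ((1.0 - random.random()) ** -xi0 - 1.0) for _ in range(5000)]
xi_hat, sigma_hat = gpd_mom(data)
ok = feasible(xi_hat, sigma_hat, data)
```

Repeating the simulation many times and tabulating how often `feasible` returns False is the kind of nonfeasibility probability the paper models.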


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号