Similar Documents
20 similar documents found.
1.
In this paper, we consider the problem of estimating the scale parameter of the inverse Rayleigh distribution based on general progressively Type-II censored samples and progressively Type-II censored samples. The pivotal quantity method is used to derive the estimator of the scale parameter. In addition, since the maximum likelihood estimator is difficult to obtain explicitly for this distribution, we derive an explicit estimator of the scale parameter by approximating the likelihood equation with a Taylor expansion. Interval estimation is also studied based on pivotal inference. We then conduct Monte Carlo simulations and compare the performance of the different estimators, demonstrating that the pivotal inference is simpler and more effective. Further applications of the pivotal quantity method are also discussed theoretically. Finally, two real data sets are analyzed using our methods.
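The abstract does not reproduce the pivotal quantity itself. As a rough sketch of the general idea in the simpler complete-sample case (an assumption; the paper works with progressively censored data), take the inverse Rayleigh CDF to be F(x; σ) = exp(−(σ/x)²), so that 2σ²Σxᵢ⁻² follows a chi-square distribution with 2n degrees of freedom and can serve as a pivot for σ. The parameterization and all names below are illustrative, not the paper's derivation.

```python
import numpy as np
from scipy import stats

def inv_rayleigh_sample(sigma, n, rng):
    """Draw a complete sample from F(x) = exp(-(sigma/x)^2) via inverse-CDF sampling."""
    u = rng.uniform(size=n)
    return sigma / np.sqrt(-np.log(u))

def pivotal_ci(x, level=0.95):
    """CI for sigma from the pivot Q = 2*sigma^2 * sum(1/x^2) ~ chi2(2n)."""
    n = len(x)
    s = np.sum(1.0 / x**2)
    lo_q, hi_q = stats.chi2.ppf([(1 - level) / 2, (1 + level) / 2], df=2 * n)
    return np.sqrt(lo_q / (2 * s)), np.sqrt(hi_q / (2 * s))

rng = np.random.default_rng(0)
x = inv_rayleigh_sample(sigma=2.0, n=30, rng=rng)
print(pivotal_ci(x))
```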

2.
The exact inference and prediction intervals for the K-sample exponential scale parameter under doubly Type-II censored samples are derived using an algorithm of Huffer and Lin [Huffer, F.W. and Lin, C.T., 2001, Computing the joint distribution of general linear combinations of spacings or exponential variates. Statistica Sinica, 11, 1141–1157]. This approach provides a simple way to determine the exact percentage points of the pivotal quantity based on the best linear unbiased estimator, in order to develop exact inference for the scale parameter as well as to construct exact prediction intervals for failure times unobserved in the ith sample. Similarly, exact prediction intervals for failure times of units from a future sample can also be obtained easily.

3.
In this paper, we make use of an algorithm of Huffer & Lin (2001) in order to develop exact prediction intervals for failure times from one-parameter and two-parameter exponential distributions based on doubly Type-II censored samples. We show that this method yields the same results as those of Lawless (1971, 1977) and Likeš (1974) in the case when the available sample is Type-II right censored. We present a computational algorithm for the determination of the exact percentage points of the pivotal quantities used in the construction of these prediction intervals. We also present some tables of these percentage points for the prediction of the ℓth order statistic in a sample of size n for both one- and two-parameter exponential distributions, assuming that the available sample is doubly Type-II censored. Finally, we present two examples to illustrate the methods of inference developed here.

4.
We develop exact inference for the location and scale parameters of the two-parameter exponential distribution and the Pareto distribution based on their maximum likelihood estimators from doubly Type-II and progressively Type-II censored samples. Based on some pivotal quantities, exact confidence intervals and tests of hypotheses are constructed. Exact distributions of the pivotal quantities are expressed as mixtures of linear combinations and of ratios of linear combinations of standard exponential random variables, which facilitates the computation of quantiles of these pivotal quantities. We also provide a bootstrap method for constructing a confidence interval. Some simulation studies are carried out to assess their performance. Using the exact distribution of the scale parameter, we establish an acceptance sampling procedure based on the lifetime of the unit. Some numerical results are tabulated for illustration. One biometrical example is also given to illustrate the proposed methods.

5.
In this paper, we make use of an algorithm of Huffer and Lin (2000) in order to develop exact interval estimation for the scale parameter of an exponential distribution based on doubly Type-II censored samples. We also evaluate the accuracy of a chi-square approximation proposed by Balakrishnan and Gupta (1998). We present the MAPLE program for the determination of the exact percentage points of the pivotal quantity based on the best linear unbiased estimator. Finally, we present a couple of examples to illustrate the method of inference developed here.

6.
In this article, we provide some suitable pivotal quantities for constructing prediction intervals for the jth future ordered observation from the two-parameter Weibull distribution based on censored samples. Our method is more general in the sense that it can be applied to any data scheme. We present a simulation study to assess its performance. Two illustrative examples are also included. For further study, our method is easily applied to other location-scale family distributions.

7.

In this paper, we make use of an algorithm of Huffer and Lin (2001) in order to develop exact interval estimation for the location and scale parameters of an exponential distribution based on general progressively Type-II censored samples. The exact prediction intervals for failure times of the items censored at the last observation are also presented for one-parameter and two-parameter exponential distributions. Finally, we give two examples to illustrate the methods of inference developed here.

8.
In this paper, we address estimation and prediction problems for extreme value distributions under the assumption that the only available data are the record values. We provide some properties and pivotal quantities, and derive unbiased estimators for the location and rate parameters based on these properties and pivotal quantities. In addition, we discuss the mean squared errors of the proposed estimators and exact confidence intervals for the rate parameter. In Bayesian inference, we develop objective Bayesian analysis by deriving noninformative priors such as the Jeffreys, reference, and probability-matching priors for the location and rate parameters. We examine the validity of the proposed methods through Monte Carlo simulations for various numbers of record values and present a real data set for illustration purposes.

9.
The maximum likelihood (ML) estimation of the location and scale parameters of an exponential distribution based on singly and doubly censored samples is given. When the sample is multiply censored (some middle observations being censored), however, the ML method does not admit explicit solutions. In this case, we present a simple approximation to the likelihood equation and derive explicit estimators which are linear functions of order statistics. Finally, we present some examples to illustrate this method of estimation.
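For the singly (right) Type-II censored case mentioned in the abstract, the exponential MLEs do have the standard closed form sketched below. This is a textbook result, not the paper's approximation for multiply censored samples, which is not reproduced here; the function name and example data are illustrative.

```python
import numpy as np

def exp_mle_type2_right(x_obs, n):
    """MLEs of location mu and scale sigma for a two-parameter exponential
    distribution when only the r smallest of n order statistics are observed.
    x_obs: the r observed order statistics."""
    x = np.sort(np.asarray(x_obs, dtype=float))
    r = len(x)
    mu_hat = x[0]  # likelihood is maximized at the smallest observation
    total_time = np.sum(x - mu_hat) + (n - r) * (x[-1] - mu_hat)
    sigma_hat = total_time / r
    return mu_hat, sigma_hat

# Example: 8 smallest failure times from a sample of n = 12 units
obs = [1.3, 1.7, 2.1, 2.6, 3.0, 3.4, 4.2, 5.1]
print(exp_mle_type2_right(obs, n=12))
```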

10.
In this paper, we discuss the problem of predicting times to the latent failures of units censored in multiple stages in a progressively Type-II censored competing risks model. The latent failure times are assumed to be independent and exponentially distributed with different scale parameters. Several classical point predictors, such as the maximum likelihood predictor, the best unbiased predictor, the best linear unbiased predictor, the median unbiased predictor and the conditional median predictor, are obtained. The Bayesian point predictors are derived under the squared error loss criterion. Moreover, the point estimators of the unknown parameters are obtained using the observed data and the different point predictors of the latent failure times. Finally, Monte Carlo simulations are carried out to compare the performances of the different methods of prediction and estimation, and one real data set is used to illustrate the proposed procedures.

11.
We develop exact inference for the location and scale parameters of the Laplace (double exponential) distribution based on their maximum likelihood estimators from a Type-II censored sample. Based on some pivotal quantities, exact confidence intervals and tests of hypotheses are constructed. Upon conditioning first on the number of observations that are below the population median, exact distributions of the pivotal quantities are expressed as mixtures of linear combinations and of ratios of linear combinations of standard exponential random variables, which facilitates the computation of quantiles of these pivotal quantities. Tables of quantiles are presented for the complete sample case.

12.
The extreme value distribution has been extensively used to model natural phenomena such as rainfall and floods, and also in modeling lifetimes and material strengths. Maximum likelihood estimation (MLE) for the parameters of the extreme value distribution leads to likelihood equations that have to be solved numerically, even when the complete sample is available. In this paper, we discuss point and interval estimation based on progressively Type-II censored samples. Through an approximation in the likelihood equations, we obtain explicit estimators which are approximations to the MLEs. Using these approximate estimators as starting values, we obtain the MLEs using an iterative method and examine numerically their bias and mean squared error. The approximate estimators compare quite favorably to the MLEs in terms of both bias and efficiency. Results of the simulation study, however, show that the probability coverages of the pivotal quantities (for location and scale parameters) based on asymptotic normality are unsatisfactory for both these estimators and particularly so when the effective sample size is small. We, therefore, suggest the use of unconditional simulated percentage points of these pivotal quantities for the construction of confidence intervals. The results are presented for a wide range of sample sizes and different progressive censoring schemes. We conclude with an illustrative example.
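The idea of using unconditional simulated percentage points of the pivotal quantities (μ̂ − μ)/σ̂ and σ̂/σ can be sketched as follows for a complete Gumbel (extreme value) sample. The progressive censoring scheme and the paper's approximate estimators are not reproduced; using scipy's generic fit routine here is an assumption made purely for illustration.

```python
import numpy as np
from scipy import stats

def simulated_pivot_points(n, n_rep=2000, level=0.95, seed=1):
    """Empirical percentage points of P1 = (mu_hat - mu)/sigma_hat and
    P2 = sigma_hat/sigma from standard Gumbel samples (mu = 0, sigma = 1)."""
    rng = np.random.default_rng(seed)
    p1, p2 = [], []
    for _ in range(n_rep):
        y = rng.gumbel(loc=0.0, scale=1.0, size=n)
        mu_hat, sigma_hat = stats.gumbel_r.fit(y)
        p1.append(mu_hat / sigma_hat)   # (mu_hat - 0) / sigma_hat
        p2.append(sigma_hat)            # sigma_hat / 1
    a = (1 - level) / 2
    return np.quantile(p1, [a, 1 - a]), np.quantile(p2, [a, 1 - a])

def pivot_ci(x, level=0.95):
    """Confidence intervals for mu and sigma using the simulated pivot quantiles."""
    mu_hat, sigma_hat = stats.gumbel_r.fit(x)
    (p1_lo, p1_hi), (p2_lo, p2_hi) = simulated_pivot_points(len(x), level=level)
    ci_mu = (mu_hat - p1_hi * sigma_hat, mu_hat - p1_lo * sigma_hat)
    ci_sigma = (sigma_hat / p2_hi, sigma_hat / p2_lo)
    return ci_mu, ci_sigma

x = np.random.default_rng(0).gumbel(loc=10.0, scale=2.0, size=25)
print(pivot_ci(x))
```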

13.
The likelihood equations based on a progressively Type II censored sample from a Type I generalized logistic distribution do not provide explicit solutions for the location and scale parameters. We present a simple method of deriving explicit estimators by approximating the likelihood equations appropriately. We examine numerically the bias and variance of these estimators and show that these estimators are as efficient as the maximum likelihood estimators (MLEs). The probability coverages of the pivotal quantities (for location and scale parameters) based on asymptotic normality are shown to be unsatisfactory, especially when the effective sample size is small. Therefore we suggest using unconditional simulated percentage points of these pivotal quantities for the construction of confidence intervals. A wide range of sample sizes and progressive censoring schemes have been considered in this study. Finally, we present a numerical example to illustrate the methods of inference developed here.

14.
For the complete sample and the right Type II censored sample, Chen [Joint confidence region for the parameters of Pareto distribution. Metrika 44 (1996), pp. 191–197] proposed interval estimation of the parameter θ and a joint confidence region for the two parameters of the Pareto distribution. This paper proposes two methods to construct a confidence region for the two parameters of the Pareto distribution from a progressively Type II censored sample. A simulation study comparing the performance of the two methods concludes that Method 1 is superior to Method 2, as it yields a smaller confidence area. Interval estimation of the parameter ν is also given under progressive Type II censoring. In addition, prediction intervals for a future observation and for the ratio of two future consecutive failure times based on the progressively Type II censored sample are also proposed. Finally, one example is given to illustrate all the interval estimations in this paper.
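For orientation only: in the complete-sample setting of Chen (1996), a chi-square pivot of the kind below underlies interval estimation for the Pareto shape parameter ν. The paper's progressively Type-II censored construction and its joint confidence regions are not reproduced, and the parameterization F(x) = 1 − (θ/x)^ν for x ≥ θ is an assumption.

```python
import numpy as np
from scipy import stats

def pareto_nu_ci(x, level=0.95):
    """CI for the Pareto shape nu from the complete-sample pivot
    2 * nu * sum(log(x_i / x_min)) ~ chi2 with 2(n-1) degrees of freedom."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    t = np.sum(np.log(x / x[0]))
    lo_q, hi_q = stats.chi2.ppf([(1 - level) / 2, (1 + level) / 2], df=2 * (n - 1))
    return lo_q / (2 * t), hi_q / (2 * t)

# Simulated Pareto(theta = 1, nu = 3) data via inverse-CDF sampling, for illustration
rng = np.random.default_rng(2)
x = 1.0 * (1 - rng.uniform(size=40)) ** (-1 / 3.0)
print(pareto_nu_ci(x))
```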

15.
Log-location-scale distributions are widely used parametric models that have fundamental importance in both parametric and semiparametric frameworks. The likelihood equations based on a Type II censored sample from location-scale distributions do not provide explicit solutions for the parameters. Statistical software is widely available and is based on iterative methods (such as the Newton-Raphson algorithm, the EM algorithm, etc.), which require starting values near the global maximum. There are also many situations that the specialized software does not handle. This paper provides a method for determining explicit estimators for the location and scale parameters by approximating the likelihood function, where the method does not require any starting values. The performance of the proposed approximate method for the Weibull and log-logistic distributions is compared with that of iterative methods through simulation studies for a wide range of sample sizes and Type II censoring schemes. We also examine the probability coverages of the pivotal quantities based on asymptotic normality. In addition, two examples are given.

16.
The scaled (two-parameter) Type I generalized logistic distribution (GLD) is considered with a known shape parameter. The ML method does not yield an explicit estimator for the scale parameter even in complete samples. In this article, we therefore construct a new linear estimator for the scale parameter, based on complete and doubly Type-II censored samples, by making linear approximations to the intractable terms of the likelihood equation using the least-squares (LS) method, a new approach to linearization. We call this the linear approximate maximum likelihood estimator (LAMLE). We also construct a LAMLE based on the Taylor series method of linear approximation and find that this estimator is slightly more biased than the one based on the LS method. A Monte Carlo simulation is used to investigate the performance of the LAMLE, which is found to be almost as efficient as the MLE, though more biased than the MLE. We also compare the unbiased LAMLE with the BLUE based on the exact variances of the estimators and, interestingly, this new unbiased LAMLE is found to be just as efficient as the BLUE in both complete and Type-II censored samples. Since the MLE is known to be asymptotically unbiased, in large samples we compare the unbiased LAMLE with the MLE and find that this estimator is almost as efficient as the MLE. We also discuss interval estimation of the scale parameter from complete and Type-II censored samples. Finally, we present some numerical examples to illustrate the construction of the new estimators developed here.

17.
This paper compares methods of estimation for the parameters of a Pareto distribution of the first kind to determine which method provides the better estimates when the observations are censored. The unweighted least squares (LS) estimates and the maximum likelihood estimates (MLE) are presented for both censored and uncensored data. The MLEs are obtained using two methods. In the first, called the ML method, it is shown that the log-likelihood is maximized when the scale parameter equals the minimum sample value. In the second method, called the modified ML (MML) method, the estimates are found by utilizing the maximum likelihood value of the shape parameter in terms of the scale parameter and the equation for the mean of the first order statistic as a function of both parameters. Since censored data often occur in applications, we study two types of censoring for their effects on the methods of estimation: Type II censoring and multiple random censoring. In this study we consider different sample sizes and several values of the true shape and scale parameters.

Comparisons are made in terms of the bias and the mean squared error of the estimates. We propose that the LS method be generally preferred over the ML and MML methods for estimating the Pareto parameter γ for all sample sizes, all values of the parameter, and for both complete and censored samples. In many cases, however, the ML estimates are comparable in their efficiency, so that either estimator can effectively be used. For estimating the parameter α, the LS method is also generally preferred for smaller values of the parameter (α ≤ 4). For larger values of the parameter, and for censored samples, the MML method appears superior to the other methods, with a slight advantage over the LS method. For larger values of the parameter α, for censored samples and all methods, underestimation can be a problem.
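One common unweighted least-squares formulation for the Pareto distribution of the first kind regresses the empirical log-survival function (using plotting positions) on the logarithm of the order statistics. Whether this matches the paper's exact LS setup, and whether γ and α denote the scale and shape parameters respectively, are assumptions made for this sketch.

```python
import numpy as np

def pareto_ls_fit(x):
    """Unweighted least-squares fit of a Pareto(gamma, alpha) of the first kind:
    ln S(x) = alpha*ln(gamma) - alpha*ln(x), with S estimated by the
    plotting positions 1 - i/(n+1) at the order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    log_s = np.log(1 - np.arange(1, n + 1) / (n + 1))   # empirical log-survival
    slope, intercept = np.polyfit(np.log(x), log_s, deg=1)
    alpha_hat = -slope
    gamma_hat = np.exp(intercept / alpha_hat)
    return gamma_hat, alpha_hat

rng = np.random.default_rng(3)
x = 2.0 * (1 - rng.uniform(size=50)) ** (-1 / 4.0)   # Pareto with gamma = 2, alpha = 4
print(pareto_ls_fit(x))
```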

18.

In this paper, we consider some problems of point estimation and point prediction when competing risks data from a class of exponential distributions are progressively Type-I interval censored. Maximum likelihood estimation and a mid-point approximation method are proposed for estimating the parameters. Several point predictors of the censored units, such as the maximum likelihood predictor, the best unbiased predictor and the conditional median predictor, are also obtained. The methods discussed here are applied when the latent failure times are independent and Weibull-distributed. Finally, a Monte Carlo simulation study is carried out to compare the performances of the different methods, and one data analysis is presented for illustrative purposes.

19.
In the present article, we study the estimation of the entropy, that is, the function ln σ of the scale parameter of an exponential distribution, based on a doubly censored sample when the location parameter is restricted to the positive real line. The estimation problem is studied under a general class of bowl-shaped, non-monotone, location-invariant loss functions. It is established that the best affine equivariant estimator (BAEE) is inadmissible by deriving an improved estimator. This estimator is non-smooth. Further, we obtain a smooth improved estimator. A class of estimators is considered and sufficient conditions are derived under which these estimators improve upon the BAEE. In particular, using these results, we obtain the improved estimators for the squared error and the LINEX loss functions. Finally, we compare the risk performance of the proposed estimators numerically. One data analysis is performed for illustrative purposes.

20.
The article focuses mainly on a conditional quantile-filling imputation algorithm for analyzing a new kind of censored data: mixed interval-censored and complete data related to an interval-censored sample. With the algorithm, the imputed failure times, which are conditional quantiles, are obtained within the censoring intervals that contain some exact failure times. The algorithm is feasible for parameter estimation with general distributions, for instance the Weibull distribution, which has a closed-form moment estimator after log-transformation. Furthermore, an interval-censored sample is a special case of the new censored sample, so the conditional imputation algorithm can also be used to handle interval-censored failure data. Comparing interval-censored data with the new censored data under the imputation algorithm, in terms of estimation bias, we find that the new censored data perform better than interval-censored data.
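The abstract does not spell out the imputation step. A minimal reading of quantile-filling, sketched below under a working Weibull model, replaces each censoring interval (L, R] by the conditional q-quantile of the fitted distribution restricted to that interval and then refits on the completed data; the model choice, the single-pass refit, and all names are illustrative assumptions rather than the authors' algorithm.

```python
import numpy as np
from scipy import stats

def conditional_quantile_impute(intervals, shape, scale, q=0.5):
    """Impute one failure time per censoring interval (L, R] as the conditional
    q-quantile of a working Weibull(shape, scale) model restricted to (L, R]."""
    imputed = []
    for lo, hi in intervals:
        p_lo = stats.weibull_min.cdf(lo, shape, scale=scale)
        p_hi = stats.weibull_min.cdf(hi, shape, scale=scale)
        p = p_lo + q * (p_hi - p_lo)              # conditional quantile level
        imputed.append(stats.weibull_min.ppf(p, shape, scale=scale))
    return np.array(imputed)

# Mixed data: exact failure times plus interval-censored observations
exact = np.array([1.2, 2.5, 3.1, 4.0])
intervals = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# Start from a crude Weibull fit on the exact times, impute, then refit on the filled data
shape0, _, scale0 = stats.weibull_min.fit(exact, floc=0)
filled = np.concatenate([exact, conditional_quantile_impute(intervals, shape0, scale0)])
shape1, _, scale1 = stats.weibull_min.fit(filled, floc=0)
print(shape1, scale1)
```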
