Similar Documents (20 results)
1.
This paper presents methods of estimation of the parameters and acceleration factor for the Nadarajah–Haghighi distribution based on constant-stress partially accelerated life tests. Based on progressive Type-II censoring, maximum likelihood and Bayes estimates of the model parameters and acceleration factor are derived. In addition, approximate confidence intervals are constructed via the asymptotic variance–covariance matrix, and Bayesian credible intervals are obtained based on an importance sampling procedure. For comparison purposes, alternative bootstrap confidence intervals for the unknown parameters and acceleration factor are also presented. Finally, extensive simulation studies are conducted to investigate the performance of the proposed methods, and two data sets are analyzed to show their applicability.
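As a rough, self-contained illustration (not the authors' accelerated-test procedure), the sketch below assumes the standard Nadarajah–Haghighi form F(x) = 1 − exp{1 − (1 + λx)^α} and shows inverse-CDF sampling together with a complete-sample maximum likelihood fit; the function names, starting values, and parameter settings are hypothetical, and censoring and the acceleration factor are omitted.

```python
import numpy as np
from scipy.optimize import minimize

def nh_sample(alpha, lam, size, rng):
    """Inverse-CDF sampling from F(x) = 1 - exp(1 - (1 + lam*x)**alpha)."""
    u = rng.uniform(size=size)
    return ((1.0 - np.log(1.0 - u)) ** (1.0 / alpha) - 1.0) / lam

def nh_negloglik(theta, x):
    """Negative log-likelihood for a complete (uncensored) sample."""
    alpha, lam = theta
    if alpha <= 0 or lam <= 0:
        return np.inf
    z = 1.0 + lam * x
    # density: f(x) = alpha*lam*(1 + lam*x)**(alpha-1) * exp(1 - (1 + lam*x)**alpha)
    return -np.sum(np.log(alpha) + np.log(lam) + (alpha - 1.0) * np.log(z) + 1.0 - z ** alpha)

rng = np.random.default_rng(0)
x = nh_sample(alpha=1.5, lam=0.8, size=200, rng=rng)   # hypothetical true values
fit = minimize(nh_negloglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print("complete-sample MLE of (alpha, lambda):", fit.x)
```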

2.
The problem of selecting the Bernoulli population which has the highest "success" probability is considered. It has been noted in several articles that the probability of a correct selection is the same, uniformly in the Bernoulli p-vector (p1, p2, …, pk), for two or more different selection procedures. We give a general theorem which explains this phenomenon.

An application of particular interest arises when "strong" curtailment of a single-stage procedure (as introduced by Bechhofer and Kulkarni (1982a)) is employed; the corresponding result for "weak" curtailment of a single-stage procedure needs no proof. The use of strong curtailment in place of weak curtailment requires no more (and usually many fewer) observations to achieve the same probability of a correct selection.
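A simplified sketch of the curtailment idea only (a sufficient stopping rule for vector-at-a-time sampling, not the exact Bechhofer–Kulkarni procedures): sampling stops as soon as the current leader's margin over every rival exceeds the number of remaining rounds, so the final selection can no longer change; all settings below are illustrative.

```python
import numpy as np

def curtailed_selection(p, n, rng):
    """Select the Bernoulli population with the largest success probability,
    sampling one observation per population per round and stopping early
    (roughly in the spirit of 'strong' curtailment) once the leader can no
    longer be caught or tied by any rival."""
    k = len(p)
    successes = np.zeros(k, dtype=int)
    for stage in range(1, n + 1):
        successes += rng.random(k) < p          # one observation from each population
        order = np.sort(successes)
        if order[-1] - order[-2] > n - stage:    # leader's margin exceeds remaining rounds
            return int(np.argmax(successes)), stage * k
    return int(np.argmax(successes)), n * k      # ties broken arbitrarily here

rng = np.random.default_rng(5)
choice, used = curtailed_selection(p=[0.3, 0.5, 0.7], n=30, rng=rng)
print(f"selected population {choice}, total observations used: {used}")
```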

3.
In the problem of selecting the best of k populations, Olkin, Sobel, and Tong (1976) have introduced the idea of estimating the probability of correct selection. In an attempt to improve on their estimator, we consider an empirical Bayes approach. We compare the two estimators via analytic results and a simulation study.

4.
The paper aims to select a suitable prior for the Bayesian analysis of the two-component mixture of the Topp–Leone model under doubly censored samples, with left-censored samples for the first component and right-censored samples for the second component. The posterior analysis has been carried out under the assumption of a class of informative and noninformative priors using a couple of loss functions. The comparison among the different Bayes estimators has been made through a simulation study and a real-life example. A model comparison criterion has been used to select a suitable prior for the Bayesian analysis. The hazard rate of the Topp–Leone mixture model has been compared for a range of parametric values.

5.
Surles and Padgett [Inference for reliability and stress–strength for a scaled Burr type X distribution. Lifetime Data Anal. 2001;7:187–200] introduced a two-parameter Burr-type X distribution, which can be described as a generalized Rayleigh distribution. In this paper, we consider the estimation of the stress–strength parameter R=P[Y<X], when X and Y both follow three-parameter generalized Rayleigh distributions with the same scale and location parameters but different shape parameters. It is assumed that they are independently distributed. It is observed that the maximum-likelihood estimators (MLEs) do not exist, and we propose a modified MLE of R. We obtain the asymptotic distribution of the modified MLE of R, which can be used to construct the asymptotic confidence interval of R. We also propose the Bayes estimate of R and the construction of the associated credible interval based on an importance sampling technique. Analyses of two data sets, one simulated and one real, have been performed for illustrative purposes.

6.
Confidence statements about location (or scale) parameters associated with K populations, which may be used in making selection decisions about those populations, are investigated. When a subset of fixed size t is selected from the K populations, a lower bound is obtained for the minimum selected parameter as a function of the maximum non-selected parameter. Tables are produced for the normal means case when the variance is common but unknown. It is pointed out that these tables may be used to find confidence intervals discussed by Hsu (1984).

7.
This paper deals with the estimation of the stress–strength parameter R=P(Y<X), when X and Y are independent exponential random variables, and the data obtained from both distributions are progressively type-II censored. The uniformly minimum variance unbiased estimator and the maximum-likelihood estimator (MLE) are obtained for the stress–strength parameter. Based on the exact distribution of the MLE of R, an exact confidence interval of R has been obtained. The Bayes estimate of R and the associated credible interval are also obtained under the assumption of independent inverse gamma priors. An extensive computer simulation is used to compare the performances of the proposed estimators. One data analysis has been performed for illustrative purposes.
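For independent exponential stress and strength, R has a closed form, R = λ_Y/(λ_X + λ_Y) in terms of the rates, and the complete-sample MLE reduces to a ratio of sample means. The sketch below is a minimal uncensored illustration with hypothetical rates; the paper's progressively type-II censored setting is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative complete samples (the paper uses progressively censored data).
lam_x, lam_y = 0.5, 1.5                      # hypothetical exponential rates
x = rng.exponential(1.0 / lam_x, size=40)    # strength observations
y = rng.exponential(1.0 / lam_y, size=50)    # stress observations

# For independent exponentials, R = P(Y < X) = lam_y / (lam_x + lam_y).
# Plugging in the complete-sample MLEs (rate = 1 / sample mean) gives:
r_true = lam_y / (lam_x + lam_y)
r_mle = x.mean() / (x.mean() + y.mean())
print(f"true R = {r_true:.3f}, MLE of R from complete samples = {r_mle:.3f}")
```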

8.
In this article, we consider the problem of estimation of the stress–strength parameter δ = P(Y < X) based on progressively first-failure-censored samples, when X and Y both follow two-parameter generalized inverted exponential distributions with different and unknown shape and scale parameters. The maximum likelihood estimator of δ and its asymptotic confidence interval based on observed Fisher information are constructed. Two parametric bootstrap confidence intervals, boot-p and boot-t, are proposed. We also apply Markov chain Monte Carlo techniques to carry out Bayes estimation procedures. The Bayes estimate under the squared error loss function and the HPD credible interval of δ are obtained using informative and non-informative priors. A Monte Carlo simulation study is carried out to compare the proposed methods of estimation. Finally, the methods developed are illustrated with a couple of real data examples.

9.
Ranking objects by a panel of judges is commonly used in situations where objective attributes cannot easily be measured or interpreted. Under the assumption that the judges independently arrive at their rankings by making pairwise comparisons among the objects in an attempt to reproduce a common baseline ranking w0, we develop and explore confidence regions and Bayesian highest posterior density credible regions for w0, with emphasis on very small sample sizes.

10.
The Block and Basu bivariate exponential distribution is one of the most popular absolutely continuous bivariate distributions. Recently, Kundu and Gupta [A class of absolute continuous bivariate distributions. Statist Methodol. 2010;7:464–477] introduced the Block and Basu bivariate Weibull (BBBW) distribution, which is a generalization of the Block and Basu bivariate exponential distribution, and provided the maximum likelihood estimators using the EM algorithm. In this paper, we consider the Bayesian inference of the unknown parameters of the BBBW distribution. The Bayes estimators are obtained with respect to the squared error loss function, and the prior distributions allow for prior dependence among the unknown parameters; prior independence can also be obtained as a special case. It is observed that the Bayes estimators of the unknown parameters cannot be obtained in explicit forms. We propose to use the importance sampling technique to compute the Bayes estimates and also to construct the associated highest posterior density credible intervals. The analysis of two data sets has been performed for illustrative purposes. The performances of the proposed estimators are quite satisfactory. Finally, we generalize the results to the multivariate case.
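The importance-sampling step can be illustrated generically (this is not the BBBW posterior): draw from a convenient proposal, reweight by posterior-over-proposal density ratios, and average to obtain the Bayes estimate under squared error loss. The exponential-data/gamma-prior toy model and all settings below are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Toy posterior: exponential data with a Gamma(2, 1) prior on the rate.
data = rng.exponential(scale=2.0, size=30)

def log_posterior(lam):
    return (stats.gamma.logpdf(lam, a=2.0, scale=1.0)
            + np.sum(stats.expon.logpdf(data, scale=1.0 / lam)))

# Importance proposal: a gamma distribution roughly matched to the data.
proposal = stats.gamma(a=len(data), scale=1.0 / data.sum())
draws = proposal.rvs(size=5000, random_state=rng)

log_w = np.array([log_posterior(l) for l in draws]) - proposal.logpdf(draws)
w = np.exp(log_w - log_w.max())      # stabilise before normalising
w /= w.sum()

bayes_estimate = np.sum(w * draws)   # posterior mean = Bayes estimate under squared error loss
print("importance-sampling Bayes estimate of the rate:", round(bayes_estimate, 3))
```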

11.
This paper proposes the use of the likelihood ratio statistic in choosing between a Weibull or gamma model; values of the probability of correct selection are obtained by Monte Carlo simulation. This method provides some basis for decision even when the sample size is small. The technique is applied to four sets of data from the literature.
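A minimal sketch of the discrimination statistic itself (the paper's Monte Carlo evaluation of the probability of correct selection is not reproduced): fit both candidate models by maximum likelihood and compare the maximized log-likelihoods. The synthetic data and sample size are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.gamma(shape=2.0, scale=1.5, size=60)   # synthetic lifetimes

# Fit both candidate models by maximum likelihood (location fixed at 0).
wb_shape, _, wb_scale = stats.weibull_min.fit(data, floc=0)
ga_shape, _, ga_scale = stats.gamma.fit(data, floc=0)

loglik_weibull = np.sum(stats.weibull_min.logpdf(data, wb_shape, 0, wb_scale))
loglik_gamma = np.sum(stats.gamma.logpdf(data, ga_shape, 0, ga_scale))

# Positive values favour the gamma model, negative values the Weibull model.
lr = loglik_gamma - loglik_weibull
print(f"log-likelihood ratio (gamma minus Weibull): {lr:.2f}")
```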

12.
Let X1, …, Xn be a random sample from a normal distribution with mean θ and variance σ². The problem is to estimate θ with loss function L(θ, e) = v(e − θ), where v(x) = b(exp(ax) − ax − 1) and where a, b are constants with b > 0 and a ≠ 0. Zellner (1986) showed that X̄ − σ²a/(2n) dominates X̄, and hence X̄ is inadmissible. The question of what values of c and d render cX̄ + d admissible is studied here.
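A rough Monte Carlo check of Zellner's domination result under this LINEX loss (not part of the paper's admissibility analysis); the constants a, b, n, and σ below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, sigma, n = 0.0, 1.0, 5
a, b = 2.0, 1.0                               # LINEX constants: b > 0, a != 0

def linex_loss(estimate, theta):
    d = estimate - theta
    return b * (np.exp(a * d) - a * d - 1.0)

reps = 200_000
xbar = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)
shifted = xbar - sigma**2 * a / (2 * n)       # Zellner's dominating estimator

print("estimated LINEX risk of Xbar:         ", linex_loss(xbar, theta).mean())
print("estimated LINEX risk of shifted Xbar: ", linex_loss(shifted, theta).mean())
```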

13.
The problem of selecting the best population from among a finite number of populations in the presence of uncertainty arises in many scientific investigations and has been studied extensively. Many selection procedures have been derived for different selection goals. However, most of these selection procedures, being frequentist in nature, do not indicate how to incorporate the information in a particular sample to give a data-dependent measure of the correct selection achieved for that particular sample. They often assign the same decision and probability of correct selection to two different sample values, one of which actually seems intuitively much more conclusive than the other. The methodology of conditional inference offers an approach which achieves both frequentist interpretability and a data-dependent measure of conclusiveness. By partitioning the sample space into a family of subsets, the achieved probability of correct selection is computed by conditioning on which subset the sample falls in. In this paper, the partition considered is the so-called continuum partition, while the selection rules are both the fixed-size and random-size subset selection rules. Under the distributional assumption of a monotone likelihood ratio, results on the least favourable configuration and alpha-correct selection are established. These results are not only useful in themselves, but are also used to design a new sequential procedure with elimination for selecting the best of k binomial populations. Comparisons between this new procedure and some other sequential selection procedures with regard to total expected sample size and some risk functions are carried out by simulations.

14.
We consider the problem of maximum likelihood estimation of the parameters of the bivariate binomial distribution. In the statistical literature, this problem is solved when the observed sample is available in the form of a 2×2 contingency table, that is, with all four cell frequencies given. The present paper provides a solution for this problem when only the marginal totals of the 2×2 table are observed, which is the natural set-up in a bivariate sampling situation. Thus, based on a sample [(Xi, Yi), i = 1, …, k] from a bivariate binomial population, we derive maximum likelihood (ML) estimators for the two marginal parameters p1, p2 and the covariance parameter p11. It turns out that the ML estimators for p1 and p2 are expressed explicitly in terms of the sample values, whereas the ML estimator for p11 can only be obtained numerically by iterative methods. Two numerical illustrations are also presented.

15.
A smoothing procedure for discrete time failure data is proposed which allows for the inclusion of covariates. This purely nonparametric method is based on discrete or continuous kernel smoothing techniques that give a compromise between the data and smoothness. The method may be used as an exploratory tool to uncover the underlying structure or as an alternative to parametric methods when prediction is the primary objective. Confidence intervals are considered, and alternative cross-validation-based choices of smoothing parameters are investigated.
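As a generic illustration of the smoothing idea (not the authors' estimator; covariates and the cross-validation choice of bandwidth are omitted), the sketch below smooths crude life-table hazard estimates across discrete time points with a Gaussian kernel, weighting each point by its number at risk. All data and the bandwidth are hypothetical.

```python
import numpy as np

# Hypothetical discrete-time failure data: failures[t] out of n_at_risk[t] at time t.
times = np.arange(1, 11)
n_at_risk = np.array([100, 92, 80, 71, 60, 52, 41, 33, 25, 18])
failures = np.array([5, 7, 4, 6, 3, 6, 4, 3, 4, 2])

raw_hazard = failures / n_at_risk            # crude life-table hazard estimates

def kernel_smooth(t_grid, t_obs, values, weights, bandwidth):
    """Nadaraya-Watson style smoother with a Gaussian kernel,
    weighting each crude estimate by its number at risk."""
    out = np.empty_like(t_grid, dtype=float)
    for i, t in enumerate(t_grid):
        k = np.exp(-0.5 * ((t - t_obs) / bandwidth) ** 2) * weights
        out[i] = np.sum(k * values) / np.sum(k)
    return out

smooth_hazard = kernel_smooth(times, times, raw_hazard, n_at_risk, bandwidth=1.5)
print(np.round(smooth_hazard, 3))
```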

16.
In this work, we investigate an alternative bootstrap approach based on a result of Ramsey [F.L. Ramsey, Characterization of the partial autocorrelation function, Ann. Statist. 2 (1974), pp. 1296–1301] and on the Durbin–Levinson algorithm to obtain a surrogate series from linear Gaussian processes with long-range dependence. We compare this bootstrap method with other existing procedures in a wide Monte Carlo experiment by estimating, parametrically and semi-parametrically, the memory parameter d. We consider Gaussian and non-Gaussian processes to demonstrate the robustness of the method to deviations from normality. The approach is also useful for estimating confidence intervals for the memory parameter d by improving the coverage level of the interval.
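Not the full surrogate-series bootstrap, but a sketch of the Durbin–Levinson recursion it relies on: mapping an autocovariance sequence to partial autocorrelations, one-step prediction coefficients, and innovation variances. The AR(1) autocovariances used to check the output are illustrative.

```python
import numpy as np

def durbin_levinson(gamma):
    """Durbin-Levinson recursion.

    gamma: autocovariances gamma[0..m] of a stationary process.
    Returns (phi, pacf, v): phi[k] holds the order-(k+1) prediction
    coefficients, pacf the partial autocorrelations, v the innovation variances.
    """
    gamma = np.asarray(gamma, dtype=float)
    m = len(gamma) - 1
    phi = [np.zeros(k + 1) for k in range(m)]
    pacf = np.zeros(m)
    v = np.zeros(m + 1)
    v[0] = gamma[0]
    for k in range(m):
        if k == 0:
            a = gamma[1] / gamma[0]
        else:
            a = (gamma[k + 1] - phi[k - 1] @ gamma[k:0:-1]) / v[k]
        pacf[k] = a
        phi[k][k] = a
        if k > 0:
            phi[k][:k] = phi[k - 1] - a * phi[k - 1][::-1]
        v[k + 1] = v[k] * (1.0 - a * a)
    return phi, pacf, v

# AR(1) with coefficient 0.6: gamma(h) = 0.6**h / (1 - 0.36)
gamma = 0.6 ** np.arange(6) / (1 - 0.36)
_, pacf, _ = durbin_levinson(gamma)
print("partial autocorrelations:", np.round(pacf, 3))   # approx [0.6, 0, 0, 0, 0]
```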

17.
We introduce a new class of flexible hazard rate distributions which have constant, increasing, decreasing, and bathtub-shaped hazard functions. This class of distributions is obtained by compounding the power and exponential hazard rate functions; it is called the power-exponential hazard rate distribution and contains several important lifetime distributions. We obtain some distributional properties of the new family of distributions. The estimation of parameters is carried out using the maximum likelihood and Bayesian methods under squared error, linear-exponential, and Stein's loss functions. Also, approximate confidence intervals and HPD credible intervals of the parameters are presented. An application to a real dataset is provided to show that the new hazard rate distribution has a better fit than the other existing hazard rate distributions and some four-parameter distributions. Finally, to compare the performance of the proposed estimators and confidence intervals, an extensive Monte Carlo simulation study is conducted.

18.
This paper considers the problem of identifying which treatments are strictly worse than the best treatment or treatments in a one-way layout, which has many important applications in screening trials for new product development. A procedure is proposed that selects a subset of the treatments containing only treatments that are known to be strictly worse than the best treatment or treatments. In addition, simultaneous confidence intervals are obtained which provide upper bounds on how inferior the treatments are compared with these best treatments. In this way, the new procedure shares the characteristics of both subset selection procedures and multiple comparison procedures. Some tables of critical points are provided for implementing the new procedure, and some examples of its use are given.

19.
Minimax squared error risk estimators of the mean of a multivariate normal distribution are characterized which have smallest Bayes risk with respect to a spherically symmetric prior distribution for (i) squared error loss, and (ii) zero-one loss depending on whether or not estimates are consistent with the hypothesis that the mean is null. In (i), the optimal estimators are the usual Bayes estimators for prior distributions with special structure. In (ii), preliminary test estimators are optimal. The results are obtained by applying the theory of minimax-Bayes-compromise decision problems.

20.
In this work, improved point and interval estimation of the smallest scale parameter of independent gamma distributions with known shape parameters is studied in an integrated fashion. The approach followed is based on formulating the model in such a way that enables us to treat the estimation of the smallest scale parameter as a problem of estimating an unrestricted scale parameter in the presence of a nuisance parameter. The class of improved point and interval estimators is enriched. Within this class, a subclass of generalized Bayes estimators of a simple form is identified.
