Similar Articles
 A total of 20 similar articles were found.
1.

In this paper, the stress-strength reliability R is estimated from type II censored samples drawn from Pareto distributions. The classical inference includes the maximum likelihood estimator, an exact confidence interval, and confidence intervals based on the Wald and signed log-likelihood ratio statistics. The Bayesian inference includes the Bayes estimator, an equi-tailed credible interval, and the highest posterior density (HPD) interval under both informative and non-informative prior distributions. The Bayes estimator of R is obtained using four methods: Lindley's approximation, the Tierney-Kadane method, Monte Carlo integration, and MCMC. We compare the proposed methods through a simulation study and provide a real example to illustrate them.
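As a minimal illustration of the Monte Carlo ingredient mentioned above, the sketch below estimates R = P(Y < X) for two complete (uncensored) Pareto samples with a common unit scale; the shape parameters, sample size and seed are made up, and the censoring and Bayesian machinery of the paper are not reproduced.

```python
# Hypothetical sketch: Monte Carlo estimate of R = P(Y < X) for two Pareto
# populations, assuming a common scale x_m = 1 and illustrative shapes a_x, a_y.
import numpy as np

rng = np.random.default_rng(seed=1)
a_x, a_y, n = 2.5, 4.0, 100_000

# numpy's pareto() samples the Lomax form; adding 1 gives the classical Pareto(a, x_m=1).
x = rng.pareto(a_x, n) + 1.0
y = rng.pareto(a_y, n) + 1.0

r_hat = np.mean(y < x)                  # Monte Carlo estimate of P(Y < X)
# For a common scale the closed form is R = a_y / (a_x + a_y); useful as a check.
print(r_hat, a_y / (a_x + a_y))
```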

2.
Confidence interval construction for the difference in mean event rates between two independent Poisson samples is discussed. Intervals are derived by considering Bayes estimates of the mean event rates under a family of noninformative priors. The coverage probabilities of the proposed intervals are compared to those of the standard Wald interval for various numbers of observed events. A compromise method of constructing the interval based on the data is suggested and its properties are evaluated. The method is illustrated in several examples.
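For reference, a small sketch of the standard Wald interval that the proposed intervals are compared against; the counts and exposure times used are hypothetical.

```python
# Wald interval for the difference of two Poisson mean event rates;
# x1, x2 are observed counts over exposures t1, t2 (illustrative values below).
from scipy.stats import norm

def wald_poisson_diff(x1, t1, x2, t2, conf=0.95):
    d = x1 / t1 - x2 / t2                      # estimated rate difference
    se = (x1 / t1**2 + x2 / t2**2) ** 0.5      # plug-in standard error
    z = norm.ppf(0.5 + conf / 2)
    return d - z * se, d + z * se

print(wald_poisson_diff(x1=12, t1=100.0, x2=5, t2=80.0))
```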

3.
In this article, we consider the problem of estimation of the stress–strength parameter δ = P(Y < X) based on progressively first-failure-censored samples, when X and Y both follow the two-parameter generalized inverted exponential distribution with different and unknown shape and scale parameters. The maximum likelihood estimator of δ and its asymptotic confidence interval based on the observed Fisher information are constructed. Two parametric bootstrap confidence intervals, boot-p and boot-t, are proposed. We also apply Markov chain Monte Carlo techniques to carry out the Bayes estimation procedures. The Bayes estimate under the squared error loss function and the HPD credible interval of δ are obtained using informative and non-informative priors. A Monte Carlo simulation study is carried out to compare the proposed methods of estimation. Finally, the methods developed are illustrated with a couple of real data examples.

4.
The well-known Wilson and Agresti–Coull confidence intervals for a binomial proportion p are centered around a Bayesian estimator. Using this as a starting point, similarities between frequentist confidence intervals for proportions and Bayesian credible intervals based on low-informative priors are studied using asymptotic expansions. A Bayesian motivation for a large class of frequentist confidence intervals is provided. It is shown that the likelihood ratio interval for p approximates a Bayesian credible interval based on Kerman's neutral noninformative conjugate prior up to O(n−1) in the confidence bounds. For significance levels α ≤ 0.317, the Bayesian interval based on the Jeffreys prior is then shown to be a compromise between the likelihood ratio and Wilson intervals. Supplementary materials for this article are available online.
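To make the objects being compared concrete, here is a brief sketch of the Wilson, Agresti–Coull and Jeffreys intervals for a binomial proportion in their textbook forms; the observed counts are illustrative only.

```python
# Wilson, Agresti-Coull and Jeffreys (Beta(1/2, 1/2)) intervals for a binomial p.
from scipy.stats import norm, beta

def wilson(x, n, conf=0.95):
    z = norm.ppf(0.5 + conf / 2)
    p = x / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * ((p * (1 - p) + z**2 / (4 * n)) / n) ** 0.5 / (1 + z**2 / n)
    return centre - half, centre + half

def agresti_coull(x, n, conf=0.95):
    z = norm.ppf(0.5 + conf / 2)
    n_t = n + z**2                              # "add z^2/2 successes and failures"
    p_t = (x + z**2 / 2) / n_t
    half = z * (p_t * (1 - p_t) / n_t) ** 0.5
    return p_t - half, p_t + half

def jeffreys(x, n, conf=0.95):
    a = (1 - conf) / 2
    return beta.ppf(a, x + 0.5, n - x + 0.5), beta.ppf(1 - a, x + 0.5, n - x + 0.5)

print(wilson(7, 20), agresti_coull(7, 20), jeffreys(7, 20))
```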

5.
Guogen Shan, Statistics, 2018, 52(5): 1086–1095
In addition to the point estimate of the probability of response in a two-stage design (e.g. Simon's two-stage design for binary endpoints), confidence limits should be computed and reported. The current method of inverting the p-value function to compute the confidence interval does not guarantee the coverage probability in a two-stage setting. The existing exact approach to calculating one-sided limits orders the sample space by the overall number of responses. This approach can be conservative because many sample points receive the same limits. We propose a new exact one-sided interval that uses the p-value to order the sample space. Exact intervals are computed from binomial distributions directly, instead of a normal approximation. Both exact intervals preserve the nominal confidence level. The proposed exact interval based on the p-value generally performs better than the other exact interval with regard to the expected length and simple average length of the confidence intervals.
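As background, the single-stage analogue of an exact one-sided limit can be computed directly from the binomial-beta relationship, as sketched below; the two-stage limits discussed in the paper additionally require an ordering of the (stage 1, stage 2) sample space, which this sketch does not attempt, and the counts shown are invented.

```python
# Exact (Clopper-Pearson type) one-sided lower confidence limit for a
# single-stage binomial response probability, with no normal approximation.
from scipy.stats import beta

def exact_lower_limit(x, n, conf=0.95):
    # The lower limit p_L solves P(X >= x | p_L) = 1 - conf, i.e. a Beta quantile.
    if x == 0:
        return 0.0
    return beta.ppf(1 - conf, x, n - x + 1)

print(exact_lower_limit(x=8, n=27))   # illustrative counts
```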

6.
This paper discusses the classic but still current problem of interval estimation of a binomial proportion. Bootstrap methods are presented for constructing such confidence intervals in a routine, automatic way. Three confidence intervals for a binomial proportion are compared and studied by means of a simulation study, namely the Wald confidence interval, the Agresti–Coull interval, and the bootstrap-t interval. A new confidence interval, the Agresti–Coull interval with bootstrap critical values, is also introduced, and its good behaviour with respect to average coverage probability is established by means of simulations.

7.
Large-sample Wilson-type confidence intervals (CIs) are derived for a parameter of interest in many clinical trial situations: the log-odds-ratio in a two-sample experiment comparing binomial success proportions, say between cases and controls. The methods cover several scenarios: (i) results embedded in a single 2 × 2 contingency table; (ii) a series of K 2 × 2 tables with a common parameter; or (iii) K tables where the parameter may change across tables under the influence of a covariate. The calculation of the Wilson CI requires only simple numerical assistance and, for example, is easily carried out using Excel. The main competitor, the exact CI, has two disadvantages: it requires burdensome search algorithms for the multi-table case and exhibits strong over-coverage associated with long confidence intervals. All the application cases are illustrated through a well-known example. A simulation study then investigates how the Wilson CI performs among several competing methods. The Wilson interval is shortest, except for very large odds ratios, while maintaining coverage similar to Wald-type intervals. An alternative to the Wald CI is the Agresti-Coull CI, calculated from the Wilson and Wald CIs, which has the same length as the Wald CI but improved coverage.
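For orientation, the Wald-type comparator for a single 2 × 2 table can be written down in a few lines, as in the sketch below; this is not the paper's Wilson-type score construction, and the cell counts are made up.

```python
# Wald-type CI for the log odds ratio from one 2x2 table with cells a, b
# (cases) and c, d (controls); all counts assumed positive.
from math import log, exp, sqrt
from scipy.stats import norm

def wald_log_or(a, b, c, d, conf=0.95):
    lor = log(a * d / (b * c))                 # estimated log odds ratio
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # delta-method standard error
    z = norm.ppf(0.5 + conf / 2)
    return lor - z * se, lor + z * se

lo, hi = wald_log_or(a=15, b=25, c=8, d=32)    # made-up counts
print(exp(lo), exp(hi))                        # back-transformed to the OR scale
```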

8.
One critical issue in the Bayesian approach is choosing the priors when there is not enough prior information to specify the hyperparameters. Several improper noninformative priors for capture-recapture models have been proposed in the literature. It is known that the Bayesian estimate can be sensitive to the choice of priors, especially when the sample size is small to moderate. Yet how to choose a noninformative prior for a given model remains a question. In this paper, as a first step, we consider the problem of estimating the population size for the Mt model using noninformative priors. The Mt model has wide application in wildlife management, ecology, software reliability, epidemiological studies, census under-count, and other research areas. Four commonly used noninformative priors are considered. We find that the choice of noninformative prior depends only on the number of sampling occasions. Guidelines on the choice of noninformative priors are provided based on the simulation results. The propriety of applying improper noninformative priors is discussed. Simulation studies are developed to inspect the frequentist performance of Bayesian point and interval estimates under different noninformative priors for various population sizes, capture probabilities, and numbers of sampling occasions. The simulation results show that the Bayesian approach can provide more accurate estimates of the population size than the MLE for small samples. Two real-data examples are given to illustrate the method.

9.
The use of Mathematica in deriving mean likelihood estimators is discussed. Comparisons are made between the mean likelihood estimator, the maximum likelihood estimator, and the Bayes estimator based on a Jeffreys noninformative prior. These estimators are compared using the mean-square error criterion and the Pitman measure of closeness. In some cases it is possible, using Mathematica, to derive exact results for these criteria. Using Mathematica, simulation comparisons among the criteria can be made for any model for which we can readily obtain the estimators. In the binomial and exponential distribution cases, these criteria are evaluated exactly. In the first-order moving-average model, analytical comparisons are possible only for n = 2. In general, we find that for the binomial distribution and the first-order moving-average time series model the mean likelihood estimator outperforms the maximum likelihood estimator and the Bayes estimator with a Jeffreys noninformative prior. Mathematica was used for symbolic and numeric computations as well as for the graphical display of results. A Mathematica notebook providing the Mathematica code used in this article is available at http://www.stats.uwo.ca/mcleod/epubs/mele. Our article concludes with our opinions and criticisms of the relative merits of some of the popular computing environments for statistics researchers.
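A quick numerical check of the binomial case discussed above: the mean likelihood estimator is the mean of the normalized likelihood, which for x successes in n trials reduces to (x + 1)/(n + 2), while the MLE is x/n and the Jeffreys-prior posterior mean is (x + 1/2)/(n + 1). The sketch below verifies this numerically (in Python rather than Mathematica) for illustrative x and n.

```python
# Mean likelihood estimator for the binomial model via numerical integration,
# compared with its closed form, the MLE, and the Jeffreys-prior Bayes estimate.
from scipy.integrate import quad

x, n = 3, 10
lik = lambda p: p**x * (1 - p)**(n - x)
mele = quad(lambda p: p * lik(p), 0, 1)[0] / quad(lik, 0, 1)[0]
print(mele, (x + 1) / (n + 2), x / n, (x + 0.5) / (n + 1))
```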

10.
Let X and Y be independent random variables distributed as the generalized Lindley distribution of type 5 (GLD5). This article deals with the estimation of the stress–strength parameter R = P(Y < X), which plays an important role in reliability analysis. For this purpose, the maximum likelihood and uniformly minimum variance unbiased estimators are presented in explicit form. Moreover, considering Arnold and Strauss' bivariate gamma distribution as an informative prior and Jeffreys' prior as a noninformative one, the Bayes estimators are derived. Various bootstrap confidence intervals are also proposed and, finally, the presented methods are compared using a simulation study.

11.
In this article, we deal with a two-parameter exponentiated half-logistic distribution. We consider the estimation of the unknown parameters, the associated reliability function and the hazard rate function under progressive Type II censoring. Maximum likelihood estimates (MLEs) are proposed for the unknown quantities. Bayes estimates are derived with respect to squared error, linex and entropy loss functions. Approximate explicit expressions for all Bayes estimates are obtained using the Lindley method. We also use an importance sampling scheme to compute the Bayes estimates. Markov chain Monte Carlo samples are further used to produce credible intervals for the unknown parameters. Asymptotic confidence intervals are constructed using the normality property of the MLEs. For comparison purposes, bootstrap-p and bootstrap-t confidence intervals are also constructed. A comprehensive numerical study is performed to compare the proposed estimates. Finally, a real-life data set is analysed to illustrate the proposed methods of estimation.

12.
We consider confidence intervals for the stress–strength reliability Pr(X < Y) in the two-parameter exponential distribution. We derive the Bayesian highest posterior density interval using non-informative prior distributions and compare its performance with intervals based on the generalized pivotal variable, in terms of coverage probabilities and expected lengths. Our simulation study shows that the Bayesian interval performs better according to the criteria used, especially when the sample sizes are very small. An example is given.
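As a simplified illustration only: if the location parameters are dropped so that X and Y are one-parameter exponentials with rates λx and λy, then Pr(X < Y) = λx/(λx + λy), and a plug-in point estimate follows from the MLEs. The two-parameter case treated in the paper, and the HPD and generalized-pivot intervals, are not reproduced here; the simulated data are illustrative.

```python
# Plug-in estimate of Pr(X < Y) for one-parameter exponentials (no location shift).
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(scale=2.0, size=30)   # true rate 0.5
y = rng.exponential(scale=5.0, size=25)   # true rate 0.2
lam_x, lam_y = 1 / x.mean(), 1 / y.mean()
print(lam_x / (lam_x + lam_y))            # estimate of Pr(X < Y); true value 0.5/0.7
```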

13.
In this article, point and interval estimation of the parameters α and β of the inverse Weibull distribution (IWD) is studied based on Balakrishnan's unified hybrid censoring scheme (UHCS); see Balakrishnan et al. For point estimation, the maximum likelihood (ML) and Bayes (B) methods are used. The Bayes estimates are computed under the squared error loss (SEL) and linex loss functions using a Markov chain Monte Carlo (MCMC) algorithm. For interval estimation, (1 − τ) × 100% approximate, bootstrap-p, credible and highest posterior density (HPD) confidence intervals (CIs) for the parameters α and β are introduced. Based on a Monte Carlo simulation, the Bayes estimates are compared with the corresponding maximum likelihood estimates by computing the mean squared errors (MSEs) of all estimators. Finally, point and interval estimation of all parameters is studied on a real data set as an illustrative example.

14.
In this paper we consider the problems of estimation and prediction when the observed data from a lognormal distribution are based on lower record values and lower record values with inter-record times. We compute maximum likelihood estimates and asymptotic confidence intervals for the model parameters. We also obtain Bayes estimates and the highest posterior density (HPD) intervals using noninformative and informative priors under squared error and LINEX loss functions. Furthermore, for the problem of Bayesian prediction in the one-sample and two-sample frameworks, we obtain predictive estimates and the associated predictive equal-tail and HPD intervals. Finally, for illustration purposes, a real data set is analyzed and a simulation study is conducted to compare the methods of estimation and prediction.

15.
In this paper, we consider the problem of making statistical inference for a truncated normal distribution under progressive type I interval censoring. We obtain maximum likelihood estimators of the unknown parameters using the expectation-maximization algorithm and, subsequently, also compute the corresponding midpoint estimates of the parameters. Estimation based on the probability plot method is also considered. Asymptotic confidence intervals of the unknown parameters are constructed based on the observed Fisher information matrix. We obtain Bayes estimators of the parameters with respect to informative and non-informative prior distributions under squared error and linex loss functions, and compute these estimates using an importance sampling procedure. The highest posterior density intervals of the unknown parameters are constructed as well. We present a Monte Carlo simulation study to compare the performance of the proposed point and interval estimators. An analysis of a real data set is also performed for illustration purposes. Finally, inspection times and optimal censoring plans based on the expected Fisher information matrix are discussed.

16.
In this paper we consider and propose some confidence intervals for estimating the mean, or the difference of means, of skewed populations. We extend the median t interval to the two-sample problem and further suggest using the bootstrap to find the critical points used in the calculation of median t intervals. A simulation study has been carried out to compare the performance of the intervals, and a real-life example is considered to illustrate the application of the methods.
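A compact sketch of the ordinary one-sample bootstrap-t interval for the mean of a skewed sample is given below; the paper's median t construction and its two-sample extension are not shown, and the data and bootstrap settings are illustrative.

```python
# One-sample bootstrap-t interval for a mean: resample, studentize each
# resample, and use the bootstrap quantiles of the t statistic as critical points.
import numpy as np

def bootstrap_t_mean(x, conf=0.95, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, xbar = len(x), x.mean()
    se = x.std(ddof=1) / n ** 0.5
    t_star = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=n, replace=True)
        t_star[b] = (xb.mean() - xbar) / (xb.std(ddof=1) / n ** 0.5)
    lo, hi = np.quantile(t_star, [(1 - conf) / 2, (1 + conf) / 2])
    return xbar - hi * se, xbar - lo * se   # note the reversed quantiles

x = np.random.default_rng(3).lognormal(mean=0.0, sigma=1.0, size=40)  # skewed toy data
print(bootstrap_t_mean(x))
```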

17.
In this paper we consider confidence intervals for the ratio of two population variances. We propose a confidence interval for the ratio of two variances based on the t-statistic, obtained by deriving its Edgeworth expansion and considering Hall's and Johnson's transformations. We then examine the coverage accuracy of the suggested intervals and of intervals based on the F-statistic for several distributions.
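For comparison, the classical F-statistic interval for σ1²/σ2² that the proposed intervals are benchmarked against can be sketched as follows; the simulated inputs are illustrative.

```python
# Classical F-based confidence interval for the ratio of two normal variances.
import numpy as np
from scipy.stats import f

def var_ratio_ci(x, y, conf=0.95):
    n1, n2 = len(x), len(y)
    ratio = np.var(x, ddof=1) / np.var(y, ddof=1)
    a = (1 - conf) / 2
    return ratio / f.ppf(1 - a, n1 - 1, n2 - 1), ratio / f.ppf(a, n1 - 1, n2 - 1)

rng = np.random.default_rng(11)
print(var_ratio_ci(rng.normal(size=25), rng.normal(scale=2.0, size=30)))
```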

18.
The confidence interval (CI) for the difference between two proportions has been an important and active research topic, especially in the context of non-inferiority hypothesis testing. Issues concerning the Type I error rate, power, coverage rate and aberrations have been extensively studied for non-stratified cases. However, stratified confidence intervals are frequently used in non-inferiority trials and similar settings. In this paper, several methods for stratified confidence intervals for the difference between two proportions, including existing methods and novel extensions of unstratified CIs, are evaluated across different scenarios. When sparsity across the strata is not a concern, adding imputed observations to the stratified analysis can strengthen Type I error control without substantial loss of power. When sparseness of the data is a concern, most of the evaluated methods fail to control the Type I error; the modified stratified t-test CI is an exception. We recommend the modified stratified t-test CI as the most useful and flexible method across the respective scenarios; the modified stratified Wald CI may be useful in settings where sparsity is unlikely. These findings substantially contribute to the application of stratified CIs for non-inferiority testing of differences between two proportions.

19.
This article deals with the construction of an X̄ control chart from the Bayesian perspective. We obtain new control limits for the X̄ chart for exponentially distributed data-generating processes through the sequential use of Bayes' theorem and credible intervals. Construction of the control chart is illustrated using a simulated data example. The performance of the proposed limits and of the standard, tolerance interval, exponential cumulative sum (CUSUM) and exponential exponentially weighted moving average (EWMA) control limits is examined and compared via a Monte Carlo simulation study. The proposed Bayesian control limits are found to perform better than the standard, tolerance interval, exponential EWMA and exponential CUSUM control limits for exponentially distributed processes.
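A stylized sketch (not the paper's exact sequential construction): with exponential observations and a conjugate gamma prior on the rate, the posterior is available in closed form, and a credible interval for the process mean can serve as Bayesian control limits; the prior hyperparameters and phase-I data below are invented.

```python
# Bayesian control limits for an exponential process via a conjugate gamma
# prior on the rate: posterior is Gamma(a0 + n, b0 + sum(x)) in the rate,
# and a credible interval for the mean 1/rate gives the limits.
import numpy as np
from scipy.stats import gamma

def bayes_exponential_limits(x, a0=0.001, b0=0.001, conf=0.9973):  # hypothetical vague prior
    a_post, b_post = a0 + len(x), b0 + np.sum(x)
    alpha = (1 - conf) / 2
    lam_lo = gamma.ppf(alpha, a_post, scale=1 / b_post)
    lam_hi = gamma.ppf(1 - alpha, a_post, scale=1 / b_post)
    return 1 / lam_hi, 1 / lam_lo      # limits on the mean scale

phase1 = np.random.default_rng(5).exponential(scale=4.0, size=50)  # toy phase-I data
print(bayes_exponential_limits(phase1))
```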

20.
