Similar Documents (20 results)
1.
In this article, point and interval estimations of the parameters α and β of the inverse Weibull distribution (IWD) have been studied based on Balakrishnan's unified hybrid censoring scheme (UHCS); see Balakrishnan et al. In point estimation, the maximum likelihood (ML) and Bayes (B) methods have been used. The Bayes estimates have been computed based on the squared error loss (SEL) function and the Linex loss function, using a Markov Chain Monte Carlo (MCMC) algorithm. In interval estimation, (1 − τ) × 100% approximate, bootstrap-p, credible and highest posterior density (HPD) confidence intervals (CIs) for the parameters α and β have been introduced. Based on Monte Carlo simulation, Bayes estimates have been compared with their corresponding maximum likelihood estimates by computing the mean squared errors (MSEs) of all estimators. Finally, point and interval estimations of all parameters have been studied based on a real data set as an illustrative example.
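Once an MCMC sample from the posterior is available, the SEL and Linex Bayes estimates named in abstract 1 reduce to simple functionals of the draws. A minimal sketch, using hypothetical gamma-distributed posterior draws in place of a real IWD posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for a parameter alpha; in practice these
# would come from an MCMC sampler targeting the actual posterior.
draws = rng.gamma(shape=4.0, scale=0.5, size=10_000)

# Bayes estimate under squared-error loss (SEL): the posterior mean.
alpha_sel = draws.mean()

# Bayes estimate under Linex loss with shape c: -(1/c) * log E[exp(-c*alpha)].
def linex_estimate(samples, c):
    return -np.log(np.mean(np.exp(-c * samples))) / c

alpha_linex = linex_estimate(draws, c=1.0)
print(alpha_sel, alpha_linex)
```

With c > 0, Linex loss penalizes overestimation more heavily, so its estimate falls below the posterior mean; SEL recovers the posterior mean exactly.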

2.
A Bayesian estimator based on Franklin's randomized response procedure is proposed for proportion estimation in surveys dealing with a sensitive character. The method is simple to implement and avoids the usual drawback of Franklin's estimator, i.e., the occurrence of negative estimates when the population proportion is small. A simulation study is conducted to assess the performance of the proposed estimator as well as the corresponding credible interval.
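The negative-estimate drawback mentioned in abstract 2 is easy to reproduce with a moment-type estimator. A toy sketch of a Franklin-style scrambled-response design (the two reporting distributions and all constants are illustrative assumptions, not the article's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy design: a respondent with the sensitive trait reports a draw
# from N(mu1, 1), a respondent without it from N(mu0, 1), with mu1 and mu0
# known design constants. No individual answer reveals the trait.
mu1, mu0 = 5.0, 1.0
pi_true = 0.10          # small true proportion, the problematic case
n = 200

has_trait = rng.random(n) < pi_true
reports = np.where(has_trait,
                   rng.normal(mu1, 1.0, n),
                   rng.normal(mu0, 1.0, n))

# Moment-type estimator: solve E[report] = pi*mu1 + (1-pi)*mu0 for pi.
pi_hat = (reports.mean() - mu0) / (mu1 - mu0)

# With small pi and small n, pi_hat can fall below 0 -- the drawback the
# Bayesian estimator in the article is designed to avoid.
print(pi_hat)
```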

3.
This paper addresses the problems of frequentist and Bayesian estimation for the unknown parameters of the generalized Lindley distribution based on lower record values. We first derive exact explicit expressions for the single and product moments of lower record values, and then use these results to compute the means, variances and covariance between two lower record values. We next obtain the maximum likelihood estimators and associated asymptotic confidence intervals. Furthermore, we obtain Bayes estimators under the assumption of gamma priors on both the shape and the scale parameters of the generalized Lindley distribution, along with the associated highest posterior density interval estimates. The Bayesian estimation is studied with respect to both symmetric (squared error) and asymmetric (linear-exponential (LINEX)) loss functions. Finally, we compute Bayesian predictive estimates and predictive interval estimates for future record values. To illustrate the findings, one real data set is analyzed, and Monte Carlo simulations are performed to compare the performances of the proposed methods of estimation and prediction.

4.
In the literature, statistical estimation of the stress–strength parameter R = P(X > Y) has been intensively investigated under the assumption that the random variables X and Y are independent. However, in some real applications, the strength variable X can be highly dependent on the stress variable Y. In this paper, unlike the common practice in the literature, we discuss estimation of the parameter R in the more realistic setting where X and Y are dependent random variables following a bivariate Rayleigh model. We derive the Bayes estimates and highest posterior density credible intervals of the parameters using suitable priors. Because the Bayes estimates have no closed forms, we use an approximation based on Laplace's method and a Markov Chain Monte Carlo technique to obtain the Bayes estimates of R and the unknown parameters. Finally, simulation studies are conducted to evaluate the performance of the proposed estimators, and analyses of two data sets are provided.
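The Monte Carlo evaluation of R = P(X > Y) under dependence can be illustrated with the classical construction of dependent Rayleigh variables from correlated Gaussian components; this is a stand-in for the article's bivariate Rayleigh model, and all parameter values here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Dependent strength X and stress Y with Rayleigh margins: each variable is
# the norm of two standard normals, and dependence enters through the
# correlation rho of the paired normal components.
n = 200_000
rho = 0.6                     # correlation of the underlying Gaussians
sigma_x, sigma_y = 1.5, 1.0   # strength and stress scales (assumed)

cov = [[1.0, rho], [rho, 1.0]]
a = rng.multivariate_normal([0.0, 0.0], cov, size=n)
b = rng.multivariate_normal([0.0, 0.0], cov, size=n)

x = sigma_x * np.hypot(a[:, 0], b[:, 0])   # strength, Rayleigh(sigma_x)
y = sigma_y * np.hypot(a[:, 1], b[:, 1])   # stress, Rayleigh(sigma_y)

r_hat = np.mean(x > y)        # Monte Carlo estimate of R = P(X > Y)
print(r_hat)
```

Positive dependence raises R above its independent-case value sigma_x²/(sigma_x² + sigma_y²), which is why estimators assuming independence can be badly biased here.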

5.
The main goal of this paper is to develop the approximate Bayes estimation of the five-dimensional vector of the parameters and reliability function of a mixture of two inverse Weibull distributions (MTIWD) under Type-2 censoring. Usually, the posterior distribution is complicated under the scheme of Type-2 censoring and the integrals that are involved cannot be obtained in a simple explicit form. In this study, we use Lindley's [Approximate Bayesian method, Trabajos Estadist. 31 (1980), pp. 223–237] approximate form of Bayes estimation in the case of an MTIWD under Type-2 censoring. Later, we calculate the estimated risks (ERs) of the Bayes estimates and compare them with the corresponding ERs of the maximum-likelihood estimates through Monte Carlo simulation. Finally, we analyse a real data set using the findings.

6.
In this article, we deal with a two-parameter exponentiated half-logistic distribution. We consider the estimation of the unknown parameters, the associated reliability function and the hazard rate function under progressive Type II censoring. Maximum likelihood estimates (MLEs) are proposed for the unknown quantities. Bayes estimates are derived with respect to squared error, LINEX and entropy loss functions. Approximate explicit expressions for all Bayes estimates are obtained using the Lindley method. We also use an importance sampling scheme to compute the Bayes estimates. Markov Chain Monte Carlo samples are further used to produce credible intervals for the unknown parameters. Asymptotic confidence intervals are constructed using the normality property of the MLEs. For comparison purposes, bootstrap-p and bootstrap-t confidence intervals are also constructed. A comprehensive numerical study is performed to compare the proposed estimates. Finally, a real-life data set is analysed to illustrate the proposed methods of estimation.

7.
This study focuses on the classical and Bayesian analysis of a k-components load-sharing parallel system in which components have time-dependent failure rates. In the classical set up, the maximum likelihood estimates of the load-share parameters with their standard errors (SEs) are obtained. (1 − γ)100% simultaneous and two bootstrap confidence intervals for the parameters and the system reliability and hazard functions have been constructed. Further, recognizing that life-testing experiments are very time consuming, the parameters involved in the failure time distribution of the system can be expected to exhibit random variation. Therefore, Bayes estimates, along with their posterior SEs, of the parameters and the system reliability and hazard functions are obtained by assuming gamma and Jeffrey's priors for the unknown parameters. Markov chain Monte Carlo techniques such as the Gibbs sampler have been used to obtain the Bayes estimates and highest posterior density credible intervals.

8.
9.
Skewed distributions have attracted significant attention in the last few years. In this article, a skewed Bessel function distribution with probability density function (pdf) f(x) = 2g(x)G(λx) is introduced, where g(·) and G(·) are taken, respectively, to be the pdf and the cumulative distribution function of the Bessel function distribution [McKay, A.T., 1932, A Bessel function distribution, Biometrika, 24, 39–44]. Several particular cases of this distribution are identified and various representations for its moments derived. Estimation procedures by the method of maximum likelihood are also derived. Finally, an application is provided to rainfall data from Orlando, Florida.

10.
The present study deals with the estimation of the parameters of a k-components load-sharing parallel system model in which each component's failure time distribution is assumed to be geometric. The maximum likelihood estimates of the load-share parameters with their standard errors are obtained. (1 − γ)100% joint, Bonferroni simultaneous and two bootstrap confidence intervals for the parameters have been constructed. Further, recognizing that life testing experiments are time consuming, it seems realistic to treat the load-share parameters as random variables. Therefore, Bayes estimates, along with their standard errors, are obtained by assuming Jeffrey's invariant and gamma priors for the unknown parameters. Since the Bayes estimators cannot be found in closed form, Tierney and Kadane's approximation method has been used to compute the Bayes estimates and standard errors of the parameters. A Markov Chain Monte Carlo technique, the Gibbs sampler, is also used to obtain Bayes estimates and highest posterior density credible intervals of the load-share parameters, with the Metropolis–Hastings algorithm used to generate samples from the posterior distributions of the unknown parameters.

11.
This paper shows how procedures for computing moments and cumulants may themselves be computed from a few elementary identities. Many parameters, such as variance, may be expressed or approximated as linear combinations of products of expectations. The estimates of such parameters may be expressed as the same linear combinations of products of averages. The moments and cumulants of such estimates may be computed in a straightforward way if the terms of the estimates, moments and cumulants are represented as lists and the expectation operation defined as a transformation of lists. Vector space considerations lead to a unique representation of terms and hence to a simplification of results. Basic identities relating variables and their expectations induce transformations of lists, which transformations may be computed from the identities. In this way procedures for complex calculations are computed from basic identities. The procedures permit the calculation of results which would otherwise involve complementary set partitions, k-statistics, and pattern functions. The examples include the calculation of unbiased estimates of cumulants, of cumulants of these, and of moments of bootstrap estimates.

12.
A nonparametric mixture model specifies that observations arise from a mixture distribution, ∫ f(x, θ) dG(θ), where the mixing distribution G is completely unspecified. A number of algorithms have been developed to obtain unconstrained maximum-likelihood estimates of G, but none of these algorithms lead to estimates when functional constraints are present. In many cases, there is a natural interest in functionals φ(G) of the mixing distribution, such as its mean and variance, and profile likelihoods and confidence intervals for φ(G) are desired. In this paper we develop a penalized generalization of the ISDM algorithm of Kalbfleisch and Lesperance (1992) that can be used to solve the problem of constrained estimation. We also discuss its use in various different applications. Convergence results and numerical examples are given for the generalized ISDM algorithm, and asymptotic results are developed for the likelihood-ratio test statistics in the multinomial case.

13.
A compound class of zero truncated Poisson and lifetime distributions is introduced. A specialization leads to a new three-parameter distribution, called the doubly Poisson-exponential distribution, which may represent the lifetime of units connected in a series-parallel system. The new distribution can be obtained by compounding two zero truncated Poisson distributions with an exponential distribution. Among its motivations is that its hazard rate function can take different shapes, such as decreasing, increasing and upside-down bathtub, depending on the values of its parameters. Several properties of the new distribution are discussed. Based on progressive type-II censoring, six estimation methods [maximum likelihood, moments, least squares, weighted least squares and Bayes (under linear-exponential and general entropy loss functions) estimation] are used to estimate the involved parameters. The performance of these methods is investigated through a simulation study. The Bayes estimates are obtained using a Markov chain Monte Carlo algorithm. In addition, confidence intervals, symmetric credible intervals and highest posterior density credible intervals of the parameters are obtained. Finally, an application to a real data set is used to compare the new distribution with five other distributions.

14.
To summarize a set of data by a distribution function in Johnson's translation system, we use a least-squares approach to parameter estimation wherein we seek to minimize the distance between the vector of "uniformized" order statistics and the corresponding vector of expected values. We use the software package FITTRI to apply this technique to three problems arising respectively in medicine, applied statistics, and civil engineering. Compared to traditional methods of distribution fitting based on moment matching, percentile matching, L1 estimation, and L∞ estimation, the least-squares technique is seen to yield fits of similar accuracy and to converge more rapidly and reliably to a set of acceptable parameter estimates.
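The "uniformized order statistics" criterion of abstract 14 can be demonstrated on a simpler family than the Johnson system. A sketch that fits an exponential rate by least squares against the expected uniform order statistics i/(n+1); the model and the grid-search optimizer are illustrative choices, not the FITTRI implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data from an exponential with rate 0.5 (scale 2.0).
x = np.sort(rng.exponential(scale=2.0, size=500))
n = len(x)
targets = np.arange(1, n + 1) / (n + 1)   # E[U_(i)] = i/(n+1)

def loss(lam):
    # "Uniformize" the order statistics through the candidate CDF,
    # then measure squared distance from the expected uniform values.
    u = 1.0 - np.exp(-lam * x)            # F(x_(i); lam)
    return np.sum((u - targets) ** 2)

# A coarse grid search stands in for a proper optimizer in this sketch.
grid = np.linspace(0.01, 5.0, 2000)
lam_hat = grid[np.argmin([loss(l) for l in grid])]
print(lam_hat)
```

The same idea carries over to the Johnson system, where F additionally involves the normal CDF and the translation parameters.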

15.
In this paper, the Bayesian approach is applied to the estimation problem in the case of step stress partially accelerated life tests with two stress levels and type-I censoring. Gompertz distribution is considered as a lifetime model. The posterior means and posterior variances are derived using the squared-error loss function. The Bayes estimates cannot be obtained in explicit forms. Approximate Bayes estimates are computed using the method of Lindley [D.V. Lindley, Approximate Bayesian methods, Trabajos Estadistica 31 (1980), pp. 223–237]. The advantage of this proposed method is shown. The approximate Bayes estimates obtained under the assumption of non-informative priors are compared with their maximum likelihood counterparts using Monte Carlo simulation.  相似文献   

16.
The proportional odds model (POM) is commonly used in regression analysis to predict the outcome for an ordinal response variable. The maximum likelihood estimation (MLE) approach is typically used to obtain the parameter estimates. The likelihood estimates do not exist when the number of parameters, p, is greater than the number of observations, n. The MLE also does not exist if there are no overlapping observations in the data. In situations where the number of parameters is less than the sample size but p approaches n, the likelihood estimates may not exist, and when they do exist they may have quite large standard errors. An estimation method is proposed to address the last two issues, i.e. complete separation and the case when p approaches n, but not the case when p > n. The proposed method does not use a penalty term but uses pseudo-observations to regularize the observed responses, downgrading their effect so that they become close to the underlying probabilities. The estimates can be computed easily with all commonly used statistical packages that support fitting POMs with weights. The estimates are compared with the MLE in a simulation study and in an application to real data.

17.
Various exact tests for statistical inference are available for powerful and accurate decision rules provided that corresponding critical values are tabulated or evaluated via Monte Carlo methods. This article introduces a novel hybrid method for computing p-values of exact tests by combining Monte Carlo simulations and statistical tables generated a priori. To use the data from Monte Carlo generations and tabulated critical values jointly, we employ kernel density estimation within Bayesian-type procedures. The p-values are linked to the posterior means of quantiles. In this framework, we present relevant information from the Monte Carlo experiments via likelihood-type functions, whereas tabulated critical values are used to reflect prior distributions. The local maximum likelihood technique is employed to compute functional forms of prior distributions from statistical tables. Empirical likelihood functions are proposed to replace parametric likelihood functions within the structure of the posterior mean calculations to provide a Bayesian-type procedure with a distribution-free set of assumptions. We derive the asymptotic properties of the proposed nonparametric posterior means of quantiles process. Using the theoretical propositions, we calculate the minimum number of Monte Carlo resamples needed for a desired level of accuracy on the basis of distances between actual data characteristics (e.g. sample sizes) and the characteristics of the data used to present corresponding critical values in a table. The proposed approach makes practical applications of exact tests simple and rapid. Implementations of the proposed technique are easily carried out via the recently developed STATA and R statistical packages.
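The plain Monte Carlo p-value that the hybrid method of abstract 17 refines can be sketched in a few lines; the test statistic and all constants here are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate the null distribution of a test statistic (here, the mean of
# n standard normals) and compare the observed value against the
# simulated replicates.
n, n_mc = 30, 50_000
observed = 0.45                            # assumed observed statistic
null_stats = rng.normal(0.0, 1.0, size=(n_mc, n)).mean(axis=1)

p_value = np.mean(null_stats >= observed)  # one-sided Monte Carlo p-value
print(p_value)
```

The hybrid method's contribution is to sharpen such estimates by folding tabulated critical values into the calculation as prior information, reducing the number of resamples needed.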

18.
The Dirichlet process prior allows flexible nonparametric mixture modeling. The number of mixture components is not specified in advance and can grow as new data arrive. However, analyses based on the Dirichlet process prior are sensitive to the choice of the parameters, including an infinite-dimensional distributional parameter G0. Most previous applications have either fixed G0 as a member of a parametric family or treated G0 in a Bayesian fashion, using parametric prior specifications. In contrast, we have developed an adaptive nonparametric method for constructing smooth estimates of G0. We combine this method with a technique for estimating α, the other Dirichlet process parameter, that is inspired by an existing characterization of its maximum-likelihood estimator. Together, these estimation procedures yield a flexible empirical Bayes treatment of Dirichlet process mixtures. Such a treatment is useful in situations where smooth point estimates of G0 are of intrinsic interest, or where the structure of G0 cannot be conveniently modeled with the usual parametric prior families. Analysis of simulated and real-world datasets illustrates the robustness of this approach.

19.
In this article, we consider a Bayesian analysis of a possible change in the parameters of autoregressive time series of known order p, AR(p). An unconditional Bayesian test based on highest posterior density (HPD) credible sets is determined. The test is useful to detect a change in any one of the parameters separately. Using the Gibbs sampler algorithm, we approximate the posterior densities of the change point and other parameters to calculate the p-values that define our test.

20.
For a normal model with a conjugate prior, we provide an in-depth examination of the effects of the hyperparameters on the long-run frequentist properties of posterior point and interval estimates. Under an assumed sampling model for the data-generating mechanism, we examine how hyperparameter values affect the mean-squared error (MSE) of posterior means and the true coverage of credible intervals. We develop two types of hyperparameter optimality. MSE optimal hyperparameters minimize the MSE of posterior point estimates. Credible interval optimal hyperparameters result in credible intervals that have a minimum length while still retaining nominal coverage. A poor choice of hyperparameters has a worse consequence on the credible interval coverage than on the MSE of posterior point estimates. We give an example to demonstrate how our results can be used to evaluate the potential consequences of hyperparameter choices.
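The long-run frequentist check described in abstract 20 amounts to repeating the experiment many times under a fixed truth and averaging. A sketch for the conjugate normal-mean model with known variance; all hyperparameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# For a N(theta, sigma^2) sample with known sigma and a conjugate
# N(m, tau2) prior on theta, compare the long-run MSE of the posterior
# mean for a well-centred and a badly mis-centred prior.
theta_true, sigma, n, reps = 2.0, 1.0, 10, 20_000

def posterior_mean_mse(m, tau2):
    # Simulate 'reps' repetitions of the experiment (only the sample mean
    # matters for the posterior of theta).
    data_means = theta_true + rng.normal(0.0, sigma / np.sqrt(n), size=reps)
    w = (n / sigma**2) / (n / sigma**2 + 1.0 / tau2)   # weight on the data
    post_means = w * data_means + (1.0 - w) * m
    return np.mean((post_means - theta_true) ** 2)

mse_good = posterior_mean_mse(m=2.0, tau2=1.0)   # prior centred at the truth
mse_bad = posterior_mean_mse(m=-3.0, tau2=1.0)   # badly mis-centred prior
print(mse_good, mse_bad)
```

A well-centred prior shrinks the MSE below the sampling variance sigma²/n of the sample mean, while a mis-centred prior of the same precision inflates it through squared bias, which is exactly the trade-off the hyperparameter-optimality criteria formalize.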
