Similar Documents
20 similar documents found.
1.
ABSTRACT

The gamma distribution has been widely used in many research areas, such as engineering and survival analysis. We present an extension of this distribution, called the Kummer beta gamma distribution, which has greater flexibility for modeling skewed data. We derive analytical expressions for some of its mathematical quantities. Parameter estimation is approached by the maximum likelihood method and Bayesian analysis. The likelihood ratio and formal goodness-of-fit tests are used to compare the presented distribution with some of its sub-models and with non-nested models. A real data set is used to illustrate the importance of the distribution.

2.
Abstract

This article introduces a parametric robust way of comparing two population means and two population variances. With large samples, the comparison of two means under model misspecification is less of a problem, since the validity of the inference is protected by the central limit theorem. However, the assumption of normality is generally required so that inference for the ratio of two variances can be carried out with the familiar F statistic. A parametric robust approach that is insensitive to the distributional assumption will be proposed here. More specifically, it will be demonstrated that the normal likelihood function can be adjusted to yield asymptotically valid inferences for all underlying distributions with finite fourth moments. The normal likelihood function, on the other hand, is itself robust for the comparison of two means, so no adjustment is needed.

3.
ABSTRACT

In this article we reconsider an estimator of population size previously advocated for use when sampling from a population subdivided into different types. We show that it may be usefully adopted in the simple equal-catchability model used in mark-recapture. Unlike the commonly used maximum likelihood estimator, this conditionally unbiased estimator is always finite-valued. Except in situations in which the data contain little relevant information, its performance, in terms of bias and precision, is seen to be at least as good as that of the maximum likelihood estimator. Two estimators of the standard deviation of the conditionally unbiased estimator are considered.
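For orientation (this is standard background, not stated in the abstract itself): in the two-sample mark-recapture setting, the classic example of a conditionally unbiased, always-finite alternative to the Lincoln-Petersen MLE is Chapman's estimator,

$$\hat{N} = \frac{(n_1 + 1)(n_2 + 1)}{m + 1} - 1,$$

where $n_1$ and $n_2$ are the two sample sizes and $m$ is the number of marked animals recaptured. Whether the estimator studied in the article coincides with this one cannot be determined from the abstract alone.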

4.
ABSTRACT

Tukey's gh distribution is widely used in situations where skewness and elongation are important features of the data. Because the distribution is defined through a quantile transformation of the normal, the likelihood function cannot be written in closed form and exact maximum likelihood estimation is infeasible. In this paper we exploit a novel approach based on a frequentist reinterpretation of Approximate Bayesian Computation for approximating the maximum likelihood estimates of the gh distribution. This method is appealing because it only requires the ability to sample from the distribution. We discuss the choice of the input parameters by means of simulation experiments and provide evidence of superior performance, in terms of root mean squared error, with respect to the standard quantile estimator. Finally, we give an application to operational risk measurement.
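The gh distribution is defined by a quantile transformation of a standard normal variate, which makes forward simulation trivial even though the likelihood is intractable. Below is a minimal sketch of the simulation step that any ABC-type approximation of the MLE relies on; the location A, scale B, and the transformation follow the usual textbook definition and are not taken from the paper:

```python
import numpy as np

def sample_gh(n, A=0.0, B=1.0, g=0.5, h=0.1, rng=None):
    """Draw from Tukey's gh distribution via its defining quantile
    transformation X = A + B * tau_{g,h}(Z), with Z ~ N(0, 1)."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(n)
    # tau_{g,h}(z) = (exp(g z) - 1) / g * exp(h z^2 / 2); g -> 0 reduces to z
    core = z if g == 0 else (np.exp(g * z) - 1.0) / g
    return A + B * core * np.exp(h * z * z / 2.0)

x = sample_gh(10_000, g=0.8, h=0.2)  # skewed, heavy-tailed pseudo-data
```

Because only this sampler is needed, simulated and observed summary statistics (e.g., quantiles) can be matched, which is the ingredient the ABC-based approximation exploits.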

5.
ABSTRACT

We introduce a universal robust likelihood approach for regression analysis of general count data. The robust likelihood function accommodates a wide range of dispersion and is insensitive to model failures. We use simulations and real data analysis to demonstrate the merit of the robust procedure.

6.
ABSTRACT

A Bayesian analysis for the superposition of two dependent nonhomogeneous Poisson processes is studied by means of a bivariate Poisson distribution. This particular distribution yields a new likelihood function that takes into account the correlation between the two nonhomogeneous Poisson processes. A numerical example using a Markov chain Monte Carlo method with data augmentation is considered.
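For context, the most common way to build a bivariate Poisson distribution with correlated margins is the trivariate reduction, in which a shared Poisson component induces the dependence; whether this is exactly the construction used in the article is not clear from the abstract. A minimal sketch:

```python
import numpy as np

def bivariate_poisson(n, lam1, lam2, lam0, rng=None):
    """Trivariate reduction: X = X1 + X0 and Y = X2 + X0 with independent
    Poisson components, so the margins are Poisson(lam1 + lam0) and
    Poisson(lam2 + lam0), and Cov(X, Y) = lam0 >= 0."""
    rng = np.random.default_rng(rng)
    x0 = rng.poisson(lam0, n)  # shared component carrying the correlation
    return rng.poisson(lam1, n) + x0, rng.poisson(lam2, n) + x0

x, y = bivariate_poisson(5_000, lam1=2.0, lam2=3.0, lam0=1.0)
```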

7.
We propose an estimation procedure for time-series regression models under the Bayesian inference framework. With the exact method of Wise [Wise, J. (1955). The autocorrelation function and spectral density function. Biometrika, 42, 151–159], an exact likelihood function can be obtained instead of the likelihood conditional on initial observations. The constraints on the parameter space arising from the stationarity conditions are handled by a reparametrization, which was not taken into consideration by Chib [Chib, S. (1993). Bayes regression with autoregressive errors: A Gibbs sampling approach. J. Econometrics, 58, 275–294] or Chib and Greenberg [Chib, S. and Greenberg, E. (1994). Bayes inference in regression models with ARMA(p, q) errors. J. Econometrics, 64, 183–206]. Simulation studies show that our method leads to better inferential results than theirs.
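One standard stationarity-preserving reparametrization (e.g., Monahan, 1984) expresses the AR coefficients through partial autocorrelations constrained to (-1, 1) and maps them back with the Durbin-Levinson recursion; the abstract does not say which reparametrization the authors adopt, so the sketch below is illustrative only:

```python
import numpy as np

def pacf_to_ar(pacf):
    """Durbin-Levinson map from partial autocorrelations r_k in (-1, 1)
    to AR coefficients; its image is exactly the stationary region."""
    phi = np.array([])
    for r in pacf:
        # phi_j <- phi_j - r * phi_{k-j} for j < k, then phi_k = r
        phi = np.append(phi - r * phi[::-1], r)
    return phi

print(pacf_to_ar([0.5, -0.3]))  # AR(2) coefficients, guaranteed stationary
```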

8.
We consider a Bayesian method for analyzing paired survival data using the bivariate exponential model proposed by Moran (1967, Biometrika 54:385–394). Important features of Moran's model are that the marginal distributions are exponential and that the correlation coefficient ranges between 0 and 1. These properties contrast with those of the popular exponential model with gamma frailty. Despite these nice properties, statistical analysis with Moran's model has been hampered by the lack of a closed-form likelihood function. In this paper, we introduce a latent variable to circumvent the difficulty in the Bayesian computation. We also consider a model-checking procedure using the predictive Bayesian P-value.
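As background, the Moran-Downton bivariate exponential is usually constructed from two correlated normal pairs, which yields unit exponential margins and a correlation of rho squared in [0, 1); the latent-variable scheme introduced in the paper may differ from this. A sketch of the construction as commonly presented:

```python
import numpy as np

def moran_bivexp(n, rho, rng=None):
    """Unit-exponential margins; Corr(X, Y) = rho**2, which lies in [0, 1)."""
    rng = np.random.default_rng(rng)
    cov = [[1.0, rho], [rho, 1.0]]
    u1, v1 = rng.multivariate_normal([0.0, 0.0], cov, n).T
    u2, v2 = rng.multivariate_normal([0.0, 0.0], cov, n).T
    # a sum of squares of two standard normals is chi^2_2, i.e. 2 * Exp(1)
    return (u1**2 + u2**2) / 2.0, (v1**2 + v2**2) / 2.0
```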

9.

Item response models are essential tools for analyzing results from many educational and psychological tests. Such models are used to quantify the probability of a correct response as a function of unobserved examinee ability and of other parameters describing the difficulty and the discriminatory power of the test questions. Some of these models also incorporate a threshold parameter in the probability of a correct response, to account for the effect of guessing in multiple-choice tests. In this article we consider fitting such models using the Gibbs sampler. A data augmentation method for analyzing a normal-ogive model incorporating a threshold guessing parameter is introduced and compared with a Metropolis-Hastings sampling method. The proposed method is an order of magnitude more efficient than the existing method. Another objective of this paper is to develop Bayesian model choice techniques for model discrimination. A predictive approach based on a variant of the Bayes factor is used and compared with another decision-theoretic method that minimizes an expected loss function on the predictive space. A classical model choice technique based on a modified likelihood ratio test statistic is shown to be one component of the second criterion. As a consequence, the Bayesian methods proposed in this paper are contrasted with the classical approach based on the likelihood ratio test. Several examples are given to illustrate the methods.
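For reference, the normal-ogive model with a guessing (threshold) parameter is conventionally written as P(y = 1 | theta) = c + (1 - c) * Phi(a * (theta - b)); the parameter names below are the conventional ones, not necessarily the paper's notation. A minimal simulation sketch:

```python
import numpy as np
from scipy.stats import norm

def p_correct(theta, a, b, c):
    """Three-parameter normal-ogive item response function:
    P(correct) = c + (1 - c) * Phi(a * (theta - b))."""
    return c + (1.0 - c) * norm.cdf(a * (theta - b))

rng = np.random.default_rng(0)
theta = rng.standard_normal(1_000)                       # latent abilities
y = rng.random(1_000) < p_correct(theta, 1.2, 0.0, 0.2)  # simulated responses
```

Data augmentation for the Gibbs sampler typically introduces latent normal variables underlying each response, together with indicators of whether a correct response was guessed.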

10.
ABSTRACT

Given a sample from a finite population, we provide a nonparametric Bayesian prediction interval for the finite population mean when a standard normality assumption may be tenuous. We do so using a Dirichlet process (DP), a nonparametric Bayesian procedure which is currently receiving much attention. An asymptotic Bayesian prediction interval is well known, but it does not incorporate all the features of the DP. We show how to compute the exact prediction interval under the full Bayesian DP model. However, under the DP, when the population size is much larger than the sample size, the computational task becomes expensive. Therefore, for simplicity, one might still want to consider useful and accurate approximations to the prediction interval. For this purpose, we provide a Bayesian procedure which approximates the distribution using the exchangeability property (correlation) of the DP together with normality. We compare the exact interval and our approximate interval with three standard intervals, namely the design-based interval under simple random sampling, an empirical Bayes interval, and a moment-based interval which uses the mean and variance under the DP. These latter three intervals, however, do not fully utilize the posterior distribution of the finite population mean under the DP. Using several numerical examples and a simulation study, we show that our approximate Bayesian interval is a good competitor to the exact Bayesian interval for different combinations of sample sizes and population sizes.
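The exact computation can be pictured through the DP's generalized Polya-urn predictive rule: each unsampled unit is a fresh draw from the base measure with probability alpha / (alpha + m) and a copy of a previously seen value otherwise. A brute-force Monte Carlo sketch under these standard DP facts (illustrative only; the paper's exact and approximate intervals are more refined):

```python
import numpy as np

def dp_pop_mean_draws(y, N, alpha, g0_sampler, n_draws=2000, rng=None):
    """Monte Carlo draws of the finite population mean under a DP(alpha, G0)
    model, extending the observed sample y to population size N via the
    generalized Polya urn."""
    rng = np.random.default_rng(rng)
    y = np.asarray(y, dtype=float)
    n = len(y)
    means = np.empty(n_draws)
    for d in range(n_draws):
        pool = list(y)
        for m in range(n, N):
            # new base-measure draw w.p. alpha/(alpha+m), else reuse a value
            if rng.random() < alpha / (alpha + m):
                pool.append(g0_sampler(rng))
            else:
                pool.append(pool[rng.integers(m)])
        means[d] = np.mean(pool)
    return means

draws = dp_pop_mean_draws(np.random.default_rng(1).normal(size=30), N=300,
                          alpha=1.0, g0_sampler=lambda r: r.normal())
print(np.percentile(draws, [2.5, 97.5]))  # rough 95% prediction interval
```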

11.
ABSTRACT

The maximum likelihood and Bayesian approaches to estimating the parameters and predicting future record values of the Kumaraswamy distribution are considered when the lower record values, along with the numbers of observations following the record values (inter-record times), have been observed. The Bayes estimates are obtained based on a joint bivariate prior for the shape parameters. Because explicit forms are unavailable under the squared-error and linear-exponential loss functions, the Bayes estimates of the parameters are developed using Lindley's approximation and the Markov chain Monte Carlo (MCMC) method. The MCMC method is also used to construct highest posterior density credible intervals. The Bayes and maximum likelihood estimates are compared through their estimated risks in Monte Carlo simulations. We further consider non-Bayesian and Bayesian prediction of future lower record values arising from the Kumaraswamy distribution, based both on record values with their corresponding inter-record times and on record values alone. The comparison of the derived predictors is carried out using Monte Carlo simulations. Real data are analysed for an illustration of the findings.
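Lower record values from the Kumaraswamy distribution are easy to simulate, since its quantile function is available in closed form; a minimal sketch (records only, without the inter-record times used in the paper):

```python
import numpy as np

def kumaraswamy_sample(n, a, b, rng):
    """Inverse-CDF sampling from F(x) = 1 - (1 - x**a)**b on (0, 1)."""
    u = rng.random(n)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def lower_records(x):
    """Observations that fall strictly below every preceding observation."""
    records, current_min = [], np.inf
    for v in x:
        if v < current_min:
            records.append(v)
            current_min = v
    return records

rng = np.random.default_rng(42)
print(lower_records(kumaraswamy_sample(50, a=2.0, b=3.0, rng=rng)))
```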

12.
Confidence intervals for a single parameter are spanned by quantiles of a confidence distribution, and one-sided p-values are cumulative confidences. Confidence distributions are thus a unifying format for representing frequentist inference for a single parameter. The confidence distribution, which depends on the data, is exact (unbiased) when its cumulative distribution function evaluated at the true parameter is uniformly distributed over the unit interval. A new version of the Neyman–Pearson lemma is given, showing that the confidence distribution based on the natural statistic in exponential models with continuous data is less dispersed than all other confidence distributions, regardless of how dispersion is measured. Approximations are necessary for discrete data, and also in many models with nuisance parameters. Approximate pivots might then be useful. A pivot based on a scalar statistic determines a likelihood in the parameter of interest along with a confidence distribution. This proper likelihood is freed of all nuisance parameters and is appropriate for meta-analysis and for updating information. The reduced likelihood is generally different from the confidence density. Confidence distributions and reduced likelihoods are rooted in Fisher–Neyman statistics. This frequentist methodology has many of the Bayesian attractions, and the two approaches are briefly compared. Concepts, methods and techniques of this brand of Fisher–Neyman statistics are presented. Asymptotics and bootstrapping are used to find pivots and their distributions, and hence reduced likelihoods and confidence distributions. A simple form of inverting bootstrap distributions to approximate pivots of the abc type is proposed. Our material is illustrated in a number of examples and in an application to multiple-capture data for bowhead whales.
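A concrete one-parameter illustration (standard, not taken from the paper): for a normal sample with known standard deviation sigma, the confidence distribution for the mean is

$$C(\mu) = \Phi\!\left(\frac{\sqrt{n}\,(\mu - \bar{x})}{\sigma}\right),$$

whose alpha and 1 - alpha quantiles span the usual equi-tailed confidence interval of level 1 - 2 alpha, and $C(\mu_0)$ is the one-sided p-value for testing $\mu \le \mu_0$.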

13.

A Bayesian approach is considered for detecting the number of change points in simple linear regression models. A normal-gamma empirical prior for the regression parameters, based on the maximum likelihood estimator (MLE), is employed in the analysis. Under mild conditions, consistency for the number of change points and boundedness of the distance between the estimated and the true locations of the change points are established. The Bayesian approach to detecting the number of change points is suitable whether the switching simple regression is continuous or discontinuous. Some simulation results are given to confirm the accuracy of the proposed estimator.

14.
Abstract

This paper investigates the statistical analysis of grouped data from accelerated temperature cycling tests when the product lifetime follows a Weibull distribution. A log-linear acceleration equation is derived from the Coffin-Manson model. The problem is transformed into a constant-stress accelerated life test with grouped data and multiple acceleration variables. The Jeffreys prior and reference priors are derived. Maximum likelihood estimates and Bayesian estimates with objective priors are obtained by applying the technique of data augmentation. A simulation study shows that both methods perform well when the sample size is large, and that the Bayesian method performs better for small sample sizes.
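In its simplest thermal-cycling form, the Coffin-Manson relation states that the number of cycles to failure decays as a power of the temperature range, which is what yields a log-linear acceleration equation (the basic form is shown below; the paper's derivation may include additional terms):

$$N_f = A\,(\Delta T)^{-\beta} \quad\Longleftrightarrow\quad \log N_f = \log A - \beta\,\log \Delta T .$$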

15.

Consider the logistic linear model with some explanatory variables overlooked. Those explanatory variables may be quantitative or qualitative. In either case, the resulting true response variable is not binomial or beta-binomial but a sum of binomials. Hence, standard computer packages for logistic regression can be inappropriate even if an overdispersion factor is incorporated. Therefore, a discrete exponential family assumption is considered to broaden the class of sampling models. Likelihood and Bayesian analyses are discussed. Bayesian computation techniques such as Laplace approximations and Markov chain simulations are used to compute posterior densities and moments. Approximate conditional distributions are derived and are shown to be accurate. The Markov chain simulations are performed efficiently to calculate the posterior moments by using the approximate conditional distributions. The methodology is applied to Keeler's hardness of winter wheat data for checking the binomial assumptions and to Matsumura's accounting exams data for detailed likelihood and Bayesian analyses.

16.
Abstract

This work deals with the problem of Bayesian estimation of the transition probabilities associated with a multistate Markov chain. The model is based on Jeffreys' noninformative prior. The Bayesian estimator is approximated by means of MCMC techniques. A simulation study is carried out to compare the Bayesian estimator with the maximum likelihood estimator.
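If, as is common, the rows of the transition matrix are treated as independent multinomials, the Jeffreys prior is Dirichlet(1/2, ..., 1/2) on each row and the posterior is available in closed form; the paper's use of MCMC suggests its Jeffreys prior may be the more involved joint version, so the sketch below covers only the row-wise conjugate special case:

```python
import numpy as np

def sample_rows(counts, a=0.5, rng=None):
    """Row-wise conjugate update: a Dirichlet(a, ..., a) prior per row
    (a = 0.5 is the multinomial Jeffreys prior) combined with transition
    counts n_ij gives a Dirichlet(n_i1 + a, ..., n_iK + a) posterior row."""
    rng = np.random.default_rng(rng)
    counts = np.asarray(counts, dtype=float)
    return np.array([rng.dirichlet(row + a) for row in counts])

P_draw = sample_rows([[10, 2, 1], [3, 8, 4], [0, 5, 9]])  # one posterior draw
```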

17.
ABSTRACT

We propose a generalization of the one-dimensional Jeffreys' rule in order to obtain noninformative prior distributions for nonregular models, taking into account the comments made by Jeffreys in his 1946 article. These noninformative priors are parameterization invariant, and the resulting Bayesian intervals behave well in frequentist inference. In some important cases, we can generate noninformative distributions for multi-parameter models with nonregular parameters. In nonregular models, the Bayesian method offers a satisfactory solution to the inference problem and also avoids the difficulties that the maximum likelihood estimator encounters in these models. Finally, we obtain noninformative distributions for job-search and deterministic frontier production homogeneous models.

18.
Abstract

This paper deals with Bayesian estimation and prediction for the inverse Weibull distribution with shape parameter α and scale parameter λ under general progressive censoring. We prove that the posterior conditional density functions of α and λ are both log-concave under the assumption that λ has a gamma prior distribution and α follows a prior distribution with log-concave density. We then present a Gibbs sampling strategy to estimate, under squared-error loss, any function of the unknown parameter vector (α, λ) and to find credible intervals, as well as to obtain prediction intervals for future order statistics. Monte Carlo simulations are given to compare the performance of the Bayesian estimators derived via Gibbs sampling with the corresponding maximum likelihood estimators, and a real data analysis is discussed to illustrate the proposed procedure. Finally, we extend the developed methodology to other two-parameter distributions, including the Weibull, Burr type XII, and flexible Weibull distributions, and also to general progressive hybrid censoring.
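Under the common parameterization f(x | α, λ) = αλ x^(-(α+1)) exp(-λ x^(-α)), the gamma prior on λ is conjugate given α for a complete (uncensored) sample, which is what makes one half of the Gibbs step simple; with a Gamma(a, b) prior and n complete observations,

$$\lambda \mid \alpha, x \;\sim\; \mathrm{Gamma}\!\Big(a + n,\; b + \sum_{i=1}^{n} x_i^{-\alpha}\Big).$$

This is shown only as a simplified illustration of the structure, not the paper's exact conditional: under progressive censoring the survival-function factors break this conjugacy, which is why the log-concavity of the conditionals established in the paper matters for sampling.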

19.
We obtain adjustments to the profile likelihood function in Weibull regression models with and without censoring. Specifically, we consider two different modified profile likelihoods: (i) the one proposed by Cox and Reid [Cox, D.R. and Reid, N., 1987, Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society B, 49, 1–39], and (ii) an approximation to the one proposed by Barndorff-Nielsen [Barndorff-Nielsen, O.E., 1983, On a formula for the distribution of the maximum likelihood estimator. Biometrika, 70, 343–365], the approximation having been obtained using the results of Fraser and Reid [Fraser, D.A.S. and Reid, N., 1995, Ancillaries and third-order significance. Utilitas Mathematica, 47, 33–53] and of Fraser et al. [Fraser, D.A.S., Reid, N. and Wu, J., 1999, A simple formula for tail probabilities for frequentist and Bayesian inference. Biometrika, 86, 655–661]. We focus on point estimation and on likelihood ratio tests for the shape parameter in the class of Weibull regression models. We derive some distributional properties of the different maximum likelihood estimators and likelihood ratio tests. The numerical evidence presented in the paper favors the approximation to Barndorff-Nielsen's adjustment.
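For reference, the Cox-Reid adjustment penalizes the profile log-likelihood of the interest parameter ψ with the information for the (orthogonalized) nuisance parameter λ:

$$\ell_{CR}(\psi) = \ell_p(\psi) - \tfrac{1}{2}\,\log\big|\, j_{\lambda\lambda}(\psi, \hat{\lambda}_\psi)\big| ,$$

where $\ell_p$ is the profile log-likelihood and $j_{\lambda\lambda}$ is the observed information for λ evaluated at its constrained maximum likelihood estimate.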

20.