Similar Literature (20 results)
1.
This paper is concerned with the well-known Jeffreys–Lindley paradox. In a Bayesian setup, the so-called paradox arises when a point null hypothesis is tested and an objective prior is sought for the alternative hypothesis. In particular, the posterior probability of the null hypothesis tends to one as the uncertainty about the parameter value, i.e., the prior variance, goes to infinity. We argue that the appropriate way to deal with the paradox is to use simple mathematics, and that any philosophical argument is to be regarded as irrelevant.
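The mechanics are easy to reproduce numerically. The sketch below is a generic illustration of the phenomenon, not the paper's own analysis: it tests H0: θ = 0 for a normal mean against an N(0, τ²) prior under the alternative, and shows the posterior probability of H0 approaching one as τ grows, even for data pinned at the 5% significance boundary.

```python
# A generic numerical illustration (not the paper's own analysis): test
# H0: theta = 0 for a normal mean against H1: theta ~ N(0, tau^2), and
# watch P(H0 | data) tend to one as the prior variance tau^2 grows.
import math

def posterior_prob_h0(xbar, n, sigma=1.0, tau=1.0, prior_h0=0.5):
    """P(H0 | xbar) for X_i ~ N(theta, sigma^2) with theta ~ N(0, tau^2) under H1."""
    se2 = sigma**2 / n
    # Marginal densities of xbar: N(0, se2) under H0, N(0, se2 + tau^2) under H1.
    m0 = math.exp(-xbar**2 / (2 * se2)) / math.sqrt(2 * math.pi * se2)
    m1 = math.exp(-xbar**2 / (2 * (se2 + tau**2))) / math.sqrt(2 * math.pi * (se2 + tau**2))
    odds = prior_h0 / (1 - prior_h0) * (m0 / m1)  # posterior odds of H0
    return odds / (1 + odds)

# Data fixed at the 5% significance boundary for every tau.
n = 100
xbar = 1.96 / math.sqrt(n)
for tau in (1.0, 10.0, 100.0, 1000.0):
    print(f"tau = {tau:7.1f}   P(H0 | data) = {posterior_prob_h0(xbar, n, tau=tau):.4f}")
```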

2.
Some alternative Bayes factors, intrinsic, posterior, and fractional, have been proposed to overcome the difficulties that arise when prior information is weak and improper priors are used. Additional difficulties appear when the models are separated, i.e., non-nested. This article presents both simulation results and some illustrative example analyses comparing these alternative Bayes factors for discriminating among the lognormal, Weibull, gamma, and exponential distributions. Simulation results are obtained for different sample sizes generated from each distribution. The simulations indicate that these alternative Bayes factors are useful for comparing non-nested models, that they behave similarly, and that when both models are true they choose the simpler model. Some illustrative examples are also presented.
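As a rough stand-in for this kind of comparison, the sketch below fits all four families to one simulated sample and uses the large-sample BIC approximation to a Bayes factor; the intrinsic, posterior, and fractional factors the article actually studies are not implemented here.

```python
# A rough stand-in using the large-sample BIC approximation to a Bayes
# factor; the intrinsic/posterior/fractional factors themselves are not
# implemented here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=200)  # sample truly from a gamma

candidates = {
    "lognormal":   (stats.lognorm,     {"floc": 0}),
    "weibull":     (stats.weibull_min, {"floc": 0}),
    "gamma":       (stats.gamma,       {"floc": 0}),
    "exponential": (stats.expon,       {"floc": 0}),
}

for name, (dist, fixed) in candidates.items():
    params = dist.fit(x, **fixed)
    loglik = np.sum(dist.logpdf(x, *params))
    k = len(params) - len(fixed)               # number of free parameters
    bic = k * np.log(len(x)) - 2.0 * loglik
    print(f"{name:12s} log-lik = {loglik:9.2f}   BIC = {bic:8.2f}")
# exp(-(BIC_i - BIC_j) / 2) approximates the Bayes factor of model i over j.
```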

3.
In this article, we consider Bayes prediction in a finite population under the simple location error-in-variables superpopulation model. The Bayes predictor of the finite population mean under Zellner's balanced loss function, the corresponding relative losses, and the relative savings loss are derived. The prior distribution of the unknown location parameter of the model is assumed to be non-normal, belonging to the class of Edgeworth series distributions. The effects of non-normality of the "true" prior distribution, and of a possible misspecification of the loss function, on the Bayes predictor are illustrated for a hypothetical population.

4.
In this paper, we consider a Bayesian mixture model that allows us to integrate out the weights of the mixture in order to obtain a procedure in which the number of clusters is an unknown quantity. To determine clusters and estimate parameters of interest, we develop an MCMC algorithm termed the sequential data-driven allocation sampler. In this algorithm, a single observation has a non-null probability of creating a new cluster, and a set of observations may create a new cluster through split-merge moves. The split-merge moves use a sequential allocation procedure whose allocation probabilities are calculated from the Kullback–Leibler divergence between the posterior distribution based on the observations already allocated and the posterior distribution that also includes a 'new' observation. We verify the performance of the proposed algorithm on simulated data and then illustrate its use on three publicly available real data sets.

5.
We consider Khamis' (1960) Laguerre expansion with gamma weight function as a class of “near-gamma” priors (K-priors) to obtain the Bayes predictor of a finite population mean under the Poisson regression superpopulation model using Zellner's balanced loss function (BLF). The Kullback–Leibler (K-L) distance between the gamma prior and some K-priors is tabulated to examine quantitative prior robustness. Some numerical investigations are also conducted to illustrate the effects of a change in skewness and/or kurtosis on the Bayes predictor and the corresponding minimal Bayes predictive expected loss (MBPEL). Loss robustness with respect to the class of BLFs is also examined in terms of the relative savings loss (RSL).
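The kind of tabulation described, a K-L distance between the gamma prior and a nearby density, can be sketched with simple quadrature. The perturbed density below is an illustrative stand-in, not Khamis' actual Laguerre-expansion K-prior.

```python
# Numerical K-L distance between a gamma prior and a perturbed density.
# The perturbation is an illustrative stand-in, not Khamis' Laguerre
# expansion itself.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def kl_divergence(p, q, lo=0.0, hi=np.inf):
    """KL(p || q) = integral of p(x) * log(p(x)/q(x)) over the support."""
    def integrand(x):
        px, qx = p(x), q(x)
        return px * np.log(px / qx) if px > 0.0 and qx > 0.0 else 0.0
    value, _ = quad(integrand, lo, hi, limit=200)
    return value

a = 3.0                                           # gamma shape parameter
p = lambda x: stats.gamma.pdf(x, a)
q = lambda x: 0.9 * stats.gamma.pdf(x, a) + 0.1 * stats.gamma.pdf(x, a + 1.0)

print(f"KL(gamma || perturbed) = {kl_divergence(p, q):.6f}")
```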

6.
The aim of this paper is to quantify the amount of information contained in the Pareto distribution in the presence of outliers. To this end, the Shannon entropy, ?-entropy, Fisher information, and Kullback–Leibler distance are computed. A section is then devoted to comparing these quantities between the two cases of the Pareto distribution: the case with outliers and the homogeneous case. The paper closes with two real examples related to insurance companies and a brief summary of the work.
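For the homogeneous (outlier-free) case, the Shannon entropy and the Kullback–Leibler distance of the Pareto distribution have simple closed forms, which the sketch below checks by Monte Carlo; the outlier case in the paper replaces the single density with a mixture and is not reproduced here.

```python
# Closed-form Shannon entropy and K-L distance for the homogeneous Pareto
# case, checked by Monte Carlo. The contaminated (outlier) case of the
# paper is not reproduced here.
import numpy as np

def pareto_entropy(alpha, sigma):
    """Shannon entropy of Pareto(shape=alpha, scale=sigma)."""
    return np.log(sigma / alpha) + 1.0 / alpha + 1.0

def pareto_kl(a1, a2):
    """KL( Pareto(a1, s) || Pareto(a2, s) ) for a common scale s."""
    return np.log(a1 / a2) + a2 / a1 - 1.0

rng = np.random.default_rng(1)
a1, a2, sigma = 2.5, 1.5, 1.0
x = sigma * (1.0 + rng.pareto(a1, size=200_000))  # draws from Pareto(a1, sigma)

logpdf = lambda a, x: np.log(a) + a * np.log(sigma) - (a + 1.0) * np.log(x)
print("entropy closed form:", pareto_entropy(a1, sigma))
print("entropy Monte Carlo:", -np.mean(logpdf(a1, x)))
print("KL closed form     :", pareto_kl(a1, a2))
print("KL Monte Carlo     :", np.mean(logpdf(a1, x) - logpdf(a2, x)))
```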

7.
In this article, we estimate the parameters of the exponential Pareto II distribution by two new methods. The first is based on the principle of maximum entropy (POME) and the second on the Kullback–Leibler divergence of survival functions (KLS). Monte Carlo simulated data are used to evaluate these methods and compare them with the maximum likelihood method. Finally, we fit the distribution to a set of real data using these estimation procedures.

8.
In this paper, we first present two characterizations of the exponential distribution and then introduce three exact goodness-of-fit tests for exponentiality. By simulation, the powers of the proposed tests under various alternatives are compared with those of existing tests.

9.
The authors propose a procedure for determining the unknown number of components in mixtures by generalizing a Bayesian testing method proposed by Mengersen & Robert (1996). The testing criterion they propose involves a Kullback–Leibler distance, which may be weighted or not. They give explicit formulas for the weighted distance for a number of mixture distributions and propose a stepwise testing procedure to select the minimum number of components adequate for the data. Their procedure, which is implemented using the BUGS software, exploits a fast collapsing approach which accelerates the search for the minimum number of components by avoiding full refitting at each step. The performance of their method is compared, using both distances, to the Bayes factor approach.

10.
This paper deals with the nonparametric estimation of the mean and variance functions of univariate time series data. We propose a nonparametric dimension-reduction technique for both the mean and variance functions of a time series. The method requires no model specification; instead, we seek directions in both the mean and variance functions such that the conditional distribution of the current observation given the vector of past observations is the same as the conditional distribution given a few linear combinations of the past observations, so no inferential information is lost. The directions of the mean and variance functions are estimated by maximizing a Kullback–Leibler distance function. The consistency of the proposed estimators is established. A computational procedure is introduced to detect the lags of the conditional mean and variance functions in practice. Numerical examples and simulation studies are performed to illustrate and evaluate the performance of the proposed estimators.

11.
We compare different Bayesian strategies for testing a parametric model against a nonparametric alternative on the grounds of their ability to resolve the inconsistency problems that arise when the Bayes factor is used under certain conditions. A preliminary critical discussion of this inconsistency is provided.

12.
Bayesian alternatives to classical tests for several testing problems are considered. One-sided and two-sided sets of hypotheses are tested concerning an exponential parameter, a binomial proportion, and a normal mean. Hierarchical Bayes and noninformative Bayes procedures are compared with the appropriate classical procedure, either the uniformly most powerful test or the likelihood ratio test, in the different situations. The hierarchical prior employed is the conjugate prior at the first stage, with its mean being the test parameter, and a noninformative prior at the second stage for the hyperparameter(s) of the first-stage prior. Fair comparisons are attempted, where "fair" means the probability of making a Type I error is approximately the same for the different testing procedures; once this condition is satisfied, the powers of the different tests are compared, the larger the power the better the test. This comparison is difficult in the two-sided case due to the unsurprising discrepancy between Bayesian and classical measures of evidence that has been discussed for years. The hierarchical Bayes tests appear to compete well with the typical classical test in the one-sided cases.
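For the one-sided normal-mean problem, the notion of a "fair" comparison can be sketched directly: match the Bayesian test's Type I error to the classical z-test's, then compare power. With a flat (noninformative) prior the two procedures coincide exactly, which the code below verifies numerically; the paper's hierarchical prior is not reproduced.

```python
# Match the Bayesian test's Type I error to the classical z-test's, then
# compare power. Under a flat prior on theta the two rules coincide, so
# the calibration is automatic; the paper's hierarchical prior is not used.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, sigma, alpha, reps = 25, 1.0, 0.05, 100_000

def rejection_rates(theta):
    xbar = rng.normal(theta, sigma / np.sqrt(n), size=reps)
    z = xbar * np.sqrt(n) / sigma
    classical = z > norm.ppf(1 - alpha)   # UMP z-test of H0: theta <= 0
    post_h0 = norm.cdf(-z)                # P(theta <= 0 | xbar) under a flat prior
    bayesian = post_h0 < alpha            # reject when H0 is improbable
    return classical.mean(), bayesian.mean()

print("Type I error at theta = 0.0:", rejection_rates(0.0))
print("Power        at theta = 0.5:", rejection_rates(0.5))
```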

13.
We consider an approach to prediction in the linear model when values of the future explanatory variables are unavailable: we predict a future response y_f at a future sample point x_f when some components of x_f are unavailable. We consider both the case where the components of x_f are dependent and the case where they are independent, in each case normally distributed. A Taylor expansion is used to derive an approximation to the predictive density, and the influence of the missing future explanatory variables (the loss, or discrepancy) is assessed using the Kullback–Leibler measure of divergence. This discrepancy is compared across different scenarios, including the situation where the missing variables are dropped entirely.
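When both predictive densities are (approximately) normal, the Kullback–Leibler discrepancy has a familiar closed form, sketched below with hypothetical predictive means and variances; the paper's Taylor-expansion approximation is not reproduced.

```python
# Closed-form K-L discrepancy between two normal predictive densities;
# the means and variances below are hypothetical, standing in for the
# full predictive density and the one with missing components dropped.
import math

def kl_normal(mu1, var1, mu2, var2):
    """KL( N(mu1, var1) || N(mu2, var2) )."""
    return 0.5 * (math.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

full    = (2.0, 1.0)   # predictive mean and variance, all of x_f observed
dropped = (1.6, 1.4)   # hypothetical values after dropping the missing components
print("discrepancy:", kl_normal(*full, *dropped))
```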

14.
For comparing two cumulative hazard functions, we consider an extension of the Kullback–Leibler information to cumulative hazard functions, which concerns the ratio of the two cumulative hazards. We then use its estimate as a goodness-of-fit statistic for Type II censored data. For an exponential null distribution, the proposed test statistic is shown to outperform other test statistics based on the empirical distribution function in the heavy-censoring case against increasing hazard alternatives.

15.
This article presents methods for testing a covariate effect in the Cox proportional hazards model based on the Kullback–Leibler divergence and Rényi's information measure. Rényi's measure is referred to as the information divergence of order γ (γ ≠ 1) between two distributions; in the limiting case γ → 1, it becomes the Kullback–Leibler divergence. In our case, the two distributions correspond to the baseline and to one possibly altered by a covariate effect. Our proposed statistics are simple transformations of the parameter vector in the Cox proportional hazards model, and they are compared with the Wald, likelihood ratio, and score tests that are widely used in practice. Finally, the methods are illustrated using two real-life data sets.
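The γ → 1 limit is easy to check in the simplest hazard setting: two exponential distributions, standing in for a constant baseline hazard and a covariate-scaled one. The sketch below is not the paper's Cox-model statistic, just an illustration of the order-γ divergence and its Kullback–Leibler limit.

```python
# Renyi divergence of order gamma for two exponential distributions
# (constant baseline hazard lam1 vs covariate-scaled hazard lam2), an
# illustration of the gamma -> 1 limit, not the paper's Cox statistics.
import math

def renyi_exp(lam1, lam2, gamma):
    """D_gamma( Exp(lam1) || Exp(lam2) ), gamma != 1, gamma*lam1 + (1-gamma)*lam2 > 0."""
    num = gamma * math.log(lam1) + (1 - gamma) * math.log(lam2)
    return (num - math.log(gamma * lam1 + (1 - gamma) * lam2)) / (gamma - 1)

def kl_exp(lam1, lam2):
    """KL( Exp(lam1) || Exp(lam2) )."""
    return math.log(lam1 / lam2) + lam2 / lam1 - 1.0

lam1, lam2 = 1.0, 2.5
for gamma in (0.5, 0.9, 0.99, 0.999):
    print(f"gamma = {gamma:5.3f}   D_gamma = {renyi_exp(lam1, lam2, gamma):.6f}")
print(f"KL limit        = {kl_exp(lam1, lam2):.6f}")
```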

16.
In this paper, we first consider the entropy estimators introduced by Vasicek [A test for normality based on sample entropy. J R Statist Soc, Ser B. 1976;38:54–59], Ebrahimi et al. [Two measures of sample entropy. Stat Probab Lett. 1994;20:225–234], Yousefzadeh and Arghami [Testing exponentiality based on type II censored data and a new cdf estimator. Commun Stat – Simul Comput. 2008;37:1479–1499], Alizadeh Noughabi and Arghami [A new estimator of entropy. J Iran Statist Soc. 2010;9:53–64], and Zamanzade and Arghami [Goodness-of-fit test based on correcting moments of modified entropy estimator. J Statist Comput Simul. 2011;81:2077–2093], together with the nonparametric distribution functions corresponding to them. We then introduce goodness-of-fit test statistics for the Laplace distribution based on the moments of the nonparametric distribution functions of the aforementioned estimators. We obtain power estimates of the proposed test statistics by Monte Carlo simulation and compare them with those of competing test statistics against various alternatives. The performance of the proposed test statistics is illustrated on real data sets.
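The first estimator in the list, Vasicek's sample-spacings estimator, is simple enough to state in a few lines. A minimal sketch follows, with the usual clamping of boundary order statistics and an arbitrary choice of the window parameter m.

```python
# Vasicek's (1976) sample-spacings entropy estimator, the first estimator
# listed above. Boundary order statistics are clamped in the usual way;
# the window parameter m = 10 is an arbitrary illustrative choice.
import numpy as np

def vasicek_entropy(x, m):
    """H_{m,n} = (1/n) * sum_i log( n/(2m) * (X_(i+m) - X_(i-m)) )."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    upper = x[np.minimum(np.arange(n) + m, n - 1)]   # X_(i+m), clamped at X_(n)
    lower = x[np.maximum(np.arange(n) - m, 0)]       # X_(i-m), clamped at X_(1)
    return np.mean(np.log(n / (2.0 * m) * (upper - lower)))

rng = np.random.default_rng(3)
x = rng.laplace(loc=0.0, scale=1.0, size=500)
# True entropy of Laplace(0, 1) is 1 + log(2) ~ 1.6931.
print("Vasicek estimate:", vasicek_entropy(x, m=10))
```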

17.
Entropy-based goodness-of-fit test statistics can be constructed by estimating the entropy difference or the Kullback–Leibler information, and several such statistics based on various entropy estimators have been proposed. In this article, we first comment on some problems that result from the moment constraints not being satisfied. We then study the choice of entropy estimator, noting why a test based on a better entropy estimator does not necessarily provide better power.

18.
Bayesian alternatives to the sign test are proposed which incorporate the number of ties observed. These alternatives arise from different strategies for dealing with ties; one strategy incorporates the true proportion of ties into the hypotheses of interest. The Bayesian methods are compared with each other and with the usual sign test in a simulation study. The new methods are also compared with the version of the sign test proposed by Coakley and Heise (1996), which was shown to perform especially well when the probability of observing a tie is very high. Although one of the Bayesian methods appears to perform best overall in the simulation study, its performance is not dominating, and the easy-to-use ordinary sign test generally performs very well.
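One hedged version of such a strategy, not necessarily the exact priors used in the paper: model the counts (minus, tie, plus) as trinomial with a Dirichlet(1, 1, 1) prior and report the posterior probability that a positive difference is more likely than a negative one.

```python
# An illustrative Bayesian sign test that keeps the ties: trinomial counts
# (minus, tie, plus) with a Dirichlet(1, 1, 1) prior. Not necessarily the
# exact priors used in the paper.
import numpy as np

def bayes_sign_test(n_minus, n_tie, n_plus, draws=100_000, seed=4):
    rng = np.random.default_rng(seed)
    # Dirichlet(1,1,1) prior  ->  Dirichlet posterior with counts added.
    post = rng.dirichlet([1 + n_minus, 1 + n_tie, 1 + n_plus], size=draws)
    return np.mean(post[:, 2] > post[:, 0])  # P( p_plus > p_minus | data )

# 40 pairs: 8 negative differences, 12 ties, 20 positive differences.
print("P(p+ > p- | data) =", bayes_sign_test(8, 12, 20))
```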

19.
This article deals with Bayes factors as useful Bayesian tools for frequentist testing of a precise hypothesis. A result and several examples are included to justify the definition of the Bayes factor for point null hypotheses without merging the initial distribution with a degenerate distribution on the null hypothesis. Of special interest are the problem of testing a proportion (together with a natural criterion for comparing different tests), the possible presence of nuisance parameters, and the influence of Bayesian sufficiency on this problem. The problem of testing a precise hypothesis from a Bayesian perspective is also considered, and two alternative methods for dealing with it are given.
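For the proportion problem, the point-null Bayes factor has a closed Beta-function form when the alternative carries a Beta(a, b) prior. The sketch below uses an illustrative uniform prior; it is the standard construction, not necessarily the article's exact formulation.

```python
# The standard point-null Bayes factor for a proportion: H0: p = p0 versus
# H1: p ~ Beta(a, b). The uniform prior (a = b = 1) below is illustrative.
import math
from scipy.special import betaln

def bf01_proportion(x, n, p0, a=1.0, b=1.0):
    """B01 = f(x | p0) / m1(x); the binomial coefficient cancels."""
    log_m0 = x * math.log(p0) + (n - x) * math.log(1.0 - p0)
    log_m1 = betaln(a + x, b + n - x) - betaln(a, b)
    return math.exp(log_m0 - log_m1)

# 60 successes in 100 trials, testing p0 = 0.5.
print("B01 =", bf01_proportion(60, 100, 0.5))
```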

20.
Competing models arise naturally in many research fields, such as survival analysis and economics, when the same phenomenon of interest is explained by different researchers using different theories or according to different experiences. The model selection problem is therefore of great importance, since inference under a misspecified or inappropriate model is risky. Existing model selection tests, such as Vuong's tests [Q.H. Vuong, Likelihood ratio tests for model selection and non-nested hypotheses, Econometrica 57 (1989), pp. 307–333] and Shi's non-degenerate tests [X. Shi, A non-degenerate Vuong test, Quant. Econ. 6 (2015), pp. 85–121], suffer from the variance estimation and from departures of the likelihood ratios from normality. To circumvent these dilemmas, we propose in this paper empirical likelihood ratio (ELR) tests for model selection. Following Shi (2015), a bias correction method is proposed for the ELR tests to enhance their performance. A simulation study and a real-data analysis are provided to illustrate the performance of the proposed ELR tests.
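For orientation, the classical Vuong statistic that the ELR tests are meant to improve on is only a few lines of code: a studentized mean of pointwise log-likelihood ratios, referred to a standard normal. The empirical-likelihood and bias-correction machinery of the paper is not reproduced in this sketch.

```python
# A minimal sketch of Vuong's (1989) likelihood-ratio statistic for
# non-nested model selection; the paper's ELR tests and bias correction
# are not reproduced here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.lognormal(mean=0.0, sigma=0.8, size=300)

# Pointwise log-likelihoods of the two fitted competing models.
ln_params = stats.lognorm.fit(x, floc=0)
wb_params = stats.weibull_min.fit(x, floc=0)
d = stats.lognorm.logpdf(x, *ln_params) - stats.weibull_min.logpdf(x, *wb_params)

n = len(d)
z = np.sqrt(n) * d.mean() / d.std(ddof=1)   # ~ N(0,1) when the models tie
p = 2 * stats.norm.sf(abs(z))
print(f"Vuong z = {z:.3f}, two-sided p = {p:.4f}  (positive z favours lognormal)")
```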
