Similar Documents
1.
Modern theory for statistical hypothesis testing can broadly be classified as Bayesian or frequentist. Unfortunately, one can reach divergent conclusions if Bayesian and frequentist approaches are applied in parallel to analyze the same data set. This is a serious impasse, since there is no consensus on when to use one approach rather than the other. However, this conflict can be resolved. The present paper shows the existence of a perfect equivalence between Bayesian and frequentist methods for testing. Hence, Bayesian and frequentist decision rules can always be calibrated, in both directions, so as to produce concordant results.

2.
Many credit risk models are based on the selection of a single logistic regression model on which to base parameter estimation. When many competing models are available, and without enough guidance from economic theory, model averaging represents an appealing alternative to the selection of a single model. Although model averaging approaches have been present in statistics for many years, they have only recently begun to receive attention in economics and finance applications. This contribution shows how Bayesian model averaging can be applied to credit risk estimation, a research area that has received a great deal of attention recently, especially in the light of the global financial crisis of the last few years and the related attempts to regulate international finance. The paper considers the use of logistic regression models under the Bayesian model averaging paradigm. We argue that Bayesian model averaging is not only more correct from a theoretical viewpoint, but also slightly superior to single selected models in terms of predictive performance.
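A minimal sketch of what such averaging over candidate logistic regressions could look like is given below. The BIC approximation to posterior model probabilities, the simulated default data, and all names are illustrative assumptions, not details taken from the paper.

```python
# Sketch: Bayesian model averaging over all logistic regression submodels,
# weighting each by the BIC approximation to its posterior model probability.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 500, 3
X = rng.normal(size=(n, p))                                        # hypothetical borrower covariates
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X[:, 0] - X[:, 1]))))  # hypothetical default flag

models, bics, preds = [], [], []
for k in range(1, p + 1):
    for subset in itertools.combinations(range(p), k):
        Xs = sm.add_constant(X[:, subset])
        fit = sm.Logit(y, Xs).fit(disp=0)
        models.append(subset)
        bics.append(fit.bic)
        preds.append(fit.predict(Xs))

# Posterior model probabilities proportional to exp(-BIC/2), normalized.
w = np.exp(-(np.array(bics) - min(bics)) / 2)
w /= w.sum()
bma_pred = np.average(np.vstack(preds), axis=0, weights=w)  # model-averaged default probabilities
print(dict(zip(models, np.round(w, 3))))
```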

3.
Recent approaches to the statistical analysis of adverse event (AE) data in clinical trials have proposed the use of groupings of related AEs, such as by system organ class (SOC). These methods have opened up the possibility of scanning large numbers of AEs while controlling for multiple comparisons, making the comparative performance of the different methods in terms of AE detection and error rates of interest to investigators. We apply two Bayesian models and two procedures for controlling the false discovery rate (FDR), all of which use groupings of AEs, to real clinical trial safety data. We find that while the Bayesian models are appropriate for the full data set, the error-controlling methods only give similar results to the Bayesian methods when low-incidence AEs are removed. A simulation study is used to compare the relative performance of the methods. We investigate the differences between the methods over full trial data sets, and over data sets with low-incidence AEs and SOCs removed. We find that while the removal of low-incidence AEs increases the power of the error-controlling procedures, the estimated power of the Bayesian methods remains relatively constant over all data sizes. Automatic removal of low-incidence AEs does, however, affect the error rates of all the methods, and a clinically guided approach to their removal is needed. Overall, we found the Bayesian approaches particularly useful for scanning the large amounts of AE data gathered.
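For reference, here is a minimal sketch of the standard Benjamini-Hochberg step-up procedure applied to a vector of per-AE p-values; the grouped (SOC-level) procedures the paper studies build on this idea but are more elaborate, and the p-values below are invented placeholders.

```python
# Sketch: Benjamini-Hochberg step-up FDR control on per-adverse-event p-values.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m          # step-up thresholds k*q/m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True                    # reject the k smallest p-values
    return rejected

pvals = [0.001, 0.008, 0.04, 0.2, 0.6, 0.9]       # hypothetical per-AE p-values
print(benjamini_hochberg(pvals, q=0.05))
```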

4.
This study investigates the Bayesian approach to the analysis of paired responses when the responses are categorical. Using resampling and analytical procedures, inferences for homogeneity and agreement are developed. The posterior analysis is based on the Dirichlet distribution, from which repeated samples can be generated with a random number generator. Resampling and analytical techniques are employed to make Bayesian inferences, and when it is not appropriate to use analytical procedures, resampling techniques are easily implemented. The Bayesian methodology is illustrated with several examples, and the results show that these are exact small-sample procedures that can easily solve inference problems for matched designs.
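A minimal sketch of the kind of Dirichlet resampling described here might look as follows. The 2x2 matched-pairs table, the uniform Dirichlet(1,1,1,1) prior, and the focus on marginal homogeneity (p12 = p21) are illustrative assumptions rather than the paper's exact examples.

```python
# Sketch: Dirichlet posterior resampling for a 2x2 matched-pairs table with
# cell counts (n11, n12, n21, n22). With a Dirichlet(1,1,1,1) prior, the
# posterior over cell probabilities is Dirichlet(counts + 1); marginal
# homogeneity holds when p12 = p21, so we examine the posterior of p12 - p21.
import numpy as np

rng = np.random.default_rng(1)
counts = np.array([45, 12, 5, 38])             # hypothetical matched-pairs counts
draws = rng.dirichlet(counts + 1, size=20000)  # posterior samples of (p11,p12,p21,p22)

diff = draws[:, 1] - draws[:, 2]               # p12 - p21
print("P(p12 > p21 | data) =", (diff > 0).mean())
print("95% credible interval:", np.quantile(diff, [0.025, 0.975]))
```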

5.
Common software release procedures based on statistical techniques try to optimize the trade-off between further testing costs and costs due to remaining errors. We propose new software release procedures where the aim is to certify with a certain confidence level that the software does not contain errors. The underlying model is a discrete time model similar to the geometric Moranda model. The decisions are based on a mix of classical and Bayesian approaches to sequential testing and do not require any assumption on the initial number of errors.

6.
In statistical practice, inferences on standardized regression coefficients are often required, but complicated by the fact that they are nonlinear functions of the parameters, so that standard textbook results are simply wrong. Within the frequentist domain, asymptotic delta methods can be used to construct confidence intervals for the standardized coefficients with proper coverage probabilities. Alternatively, Bayesian methods solve these and other inferential problems by simulating data from the posterior distribution of the coefficients. In this paper, we present Bayesian procedures that provide comprehensive solutions for inference on the standardized coefficients. Simple computing algorithms are developed to generate posterior samples with no autocorrelation, based on both noninformative improper and informative proper prior distributions. Simulation studies show that Bayesian credible intervals constructed by our approaches have comparable and even better statistical properties than their frequentist counterparts, particularly in the presence of collinearity. In addition, our approaches solve some meaningful inferential problems that are difficult if not impossible from the frequentist standpoint, including identifying joint rankings of multiple standardized coefficients and making optimal decisions concerning their sizes and comparisons. We illustrate applications of our approaches through examples and make sample R functions available for implementing our proposed methods.
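The paper supplies R functions; as a rough illustration in Python, the sketch below draws exact (autocorrelation-free) posterior samples of standardized coefficients under the standard noninformative prior for the normal linear model. The simulated collinear data and all settings are assumptions, not the paper's algorithms.

```python
# Sketch: direct posterior sampling of standardized regression coefficients
# under the noninformative prior p(beta, sigma^2) proportional to 1/sigma^2.
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 2
X = rng.normal(size=(n, p)); X[:, 1] += 0.8 * X[:, 0]     # induce some collinearity
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

Xc = np.column_stack([np.ones(n), X])
XtX_inv = np.linalg.inv(Xc.T @ Xc)
beta_hat = XtX_inv @ Xc.T @ y
resid = y - Xc @ beta_hat
df = n - Xc.shape[1]
s2 = resid @ resid / df

ndraw = 10000
sigma2 = s2 * df / rng.chisquare(df, size=ndraw)          # scaled inverse-chi-square
L = np.linalg.cholesky(XtX_inv)
z = rng.normal(size=(ndraw, Xc.shape[1]))
beta = beta_hat + np.sqrt(sigma2)[:, None] * (z @ L.T)    # N(beta_hat, sigma2*(X'X)^-1)

std_coef = beta[:, 1:] * X.std(axis=0) / y.std()          # standardized coefficients
print(np.quantile(std_coef, [0.025, 0.975], axis=0))      # 95% credible intervals
```

Because both conditional draws are exact, no MCMC burn-in or thinning is needed, which is the practical point of the no-autocorrelation claim.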

7.
We introduce classical approaches for testing hypotheses on the meiosis I non-disjunction fraction in trisomies, such as the likelihood-ratio, bootstrap, and Monte Carlo procedures. To calculate the p-values for the bootstrap and Monte Carlo procedures, different transformations of the data are considered. Bootstrap confidence intervals are also used as a tool to perform hypothesis tests. A Monte Carlo study is carried out to compare the proposed test procedures with two Bayesian ones: the Jeffreys and Pereira-Stern tests. The results show that the likelihood-ratio and Bayesian tests present the best performance. Down syndrome data are analyzed to illustrate the procedures.
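As a flavor of the Monte Carlo procedure, here is a minimal sketch of a simulated p-value for the likelihood-ratio statistic about a fraction. A plain binomial model stands in for the paper's meiosis I non-disjunction model, and the counts are invented.

```python
# Sketch: Monte Carlo (parametric bootstrap) p-value for H0: f = f0 using the
# likelihood-ratio statistic under a binomial stand-in model.
import numpy as np
from scipy.special import xlogy   # xlogy(0, 0) = 0, avoiding log(0) issues

rng = np.random.default_rng(3)
n, x, f0 = 120, 52, 0.5            # hypothetical data; null fraction f0

def loglik(f, x, n):
    return xlogy(x, f) + xlogy(n - x, 1 - f)

def lr_stat(x, n, f0):
    return 2 * (loglik(x / n, x, n) - loglik(f0, x, n))

t_obs = lr_stat(x, n, f0)
t_sim = np.array([lr_stat(xs, n, f0) for xs in rng.binomial(n, f0, size=5000)])
print("Monte Carlo p-value:", (t_sim >= t_obs).mean())
```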

8.
There has recently been increasing demand for better designs to conduct first‐into‐man dose‐escalation studies more efficiently, more accurately and more quickly. The authors look into the Bayesian decision‐theoretic approach and use simulation as a tool to investigate the impact of compromises with conventional practice that might make the procedures more acceptable for implementation.

9.
In this paper, we propose a Bayesian two-stage design with a changing hypothesis test, bridging a single-arm study and a double-arm randomized trial within one phase II clinical trial based on continuous rather than binary endpoints. The design is also calibrated with respect to frequentist and Bayesian error rates. The proposed design minimizes the Bayesian expected sample size when the new candidate has low or high efficacy, subject to constraints on the error rates from both frequentist and Bayesian perspectives. Tables of designs for various combinations of design parameters are also provided.

10.
One of the common approaches to the problem of scaling and combining examination marks has its roots in the least squares tradition. In this framework, examiners' preconceptions about the transformations must be dealt with in an ad hoc way. This paper investigates another, likelihood-based, approach which allows preconceptions to be handled by standard Bayesian techniques. The likelihood and least squares approaches are not directly parallel (essentially because a Jacobian must be included in the likelihood). Nonetheless, the device of introducing fictitious candidates to deal with prior beliefs in the least squares set-up can be understood in a Bayesian sense.
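To make the least-squares side concrete, here is one simple additive formulation fitted by ordinary least squares. This is purely an illustrative assumption about the model, not necessarily the paper's exact set-up; "fictitious candidates" would enter as extra rows encoding prior beliefs about subjects.

```python
# Sketch: additive least-squares mark scaling, mark[c, s] = ability[c] +
# subject_effect[s] + noise, fitted with np.linalg.lstsq.
import numpy as np

rng = np.random.default_rng(4)
n_cand, n_subj = 50, 4
ability = rng.normal(60, 10, n_cand)
subj_eff = np.array([0.0, -5.0, 3.0, 8.0])     # hypothetical subject leniency
marks = ability[:, None] + subj_eff + rng.normal(0, 5, (n_cand, n_subj))

# Design matrix: one indicator column per candidate and per subject.
A = np.zeros((n_cand * n_subj, n_cand + n_subj))
y = marks.ravel()
for c in range(n_cand):
    for s in range(n_subj):
        A[c * n_subj + s, c] = 1.0
        A[c * n_subj + s, n_cand + s] = 1.0

# The model is over-parameterized by one constant; lstsq returns the
# minimum-norm solution, which resolves that indeterminacy.
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
est = theta[n_cand:]
print("estimated subject effects (centered):", np.round(est - est.mean(), 2))
```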

11.
Failure time models are considered when there is a subpopulation of individuals that is immune, or not susceptible, to an event of interest. Such models are of considerable interest in biostatistics. The most common approach is to postulate a proportion p of immunes or long-term survivors and to use a mixture model [5]. This paper introduces the defective inverse Gaussian model as a cure model and examines the use of the Gibbs sampler together with a data augmentation algorithm to study Bayesian inferences both for the cured fraction and the regression parameters. The results of the Bayesian and likelihood approaches are illustrated on two real data sets.
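For orientation, the sketch below fits the classic mixture formulation mentioned in the abstract, with a cured fraction p and exponential survival for susceptibles, by maximum likelihood. The paper itself studies a defective inverse Gaussian model with Gibbs sampling; this simpler exponential mixture on simulated data is only an assumption-laden illustration of the idea.

```python
# Sketch: MLE for the mixture cure model S(t) = p + (1 - p) * exp(-lam * t)
# with right censoring at a fixed time.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, p_true, lam_true, cens = 400, 0.3, 0.5, 8.0
cured = rng.random(n) < p_true
t_event = np.where(cured, np.inf, rng.exponential(1 / lam_true, n))
t = np.minimum(t_event, cens)
d = (t_event <= cens).astype(float)            # 1 = event observed, 0 = censored

def negloglik(theta):
    # Unconstrained parameterization: logit(p) and log(lam).
    p, lam = 1 / (1 + np.exp(-theta[0])), np.exp(theta[1])
    ll_event = np.log(1 - p) + np.log(lam) - lam * t          # density for events
    ll_cens = np.log(p + (1 - p) * np.exp(-lam * t))          # survival for censored
    return -np.sum(d * ll_event + (1 - d) * ll_cens)

fit = minimize(negloglik, x0=[0.0, 0.0])
p_hat, lam_hat = 1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1])
print(f"cured fraction ~ {p_hat:.3f}, rate ~ {lam_hat:.3f}")
```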

12.
A method for combining forecasts may or may not account for dependence and differing precision among forecasts. In this article we test a variety of such methods in the context of combining forecasts of GNP from four major econometric models. The methods include one in which forecasting errors are jointly normally distributed, several variants of this model, some simpler procedures, and a Bayesian approach with a prior distribution based on exchangeability of forecasters. The results indicate that a simple average, the normal model with an independence assumption, and the Bayesian model perform better than the other approaches studied here.
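Two of the simpler schemes are sketched below: a plain average and precision (inverse-variance) weights estimated from past errors under an independence assumption. The simulated forecasters are stand-ins, not the GNP forecasts analyzed in the article.

```python
# Sketch: simple-average vs inverse-variance forecast combination.
import numpy as np

rng = np.random.default_rng(6)
T, K = 60, 4                                   # periods, forecasters
truth = rng.normal(size=T).cumsum()
sd = np.array([0.5, 1.0, 1.5, 2.0])            # forecasters differ in precision
forecasts = truth[:, None] + rng.normal(size=(T, K)) * sd

err_var = ((forecasts - truth[:, None]) ** 2).mean(axis=0)
w = (1 / err_var) / (1 / err_var).sum()        # inverse-variance weights

combos = {"simple average": forecasts.mean(axis=1), "precision-weighted": forecasts @ w}
for name, comb in combos.items():
    print(name, "RMSE:", round(float(np.sqrt(((comb - truth) ** 2).mean())), 3))
```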

13.
A major recent development in statistics has been the use of fast computational methods of Markov chain Monte Carlo. These procedures allow Bayesian methods to be used in quite complex modelling situations. In this paper, we shall use a range of real data examples involving lapwings, shags, teal, dippers, and herring gulls to illustrate the power and range of Bayesian techniques. The topics include: prior sensitivity; the use of reversible-jump MCMC for constructing model probabilities and comparing models, with particular reference to models with random effects; model averaging; and the construction of Bayesian measures of goodness-of-fit. Throughout, there will be discussion of the practical aspects of the work, for instance explaining when and when not to use the BUGS package.
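As a reminder of the basic machinery underlying such analyses, here is a minimal random-walk Metropolis sampler for a toy posterior. Real analyses of this kind use BUGS or reversible-jump MCMC; this assumed toy model only shows the core accept/reject step.

```python
# Sketch: random-walk Metropolis for a normal mean with a flat prior.
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(2.0, 1.0, size=50)

def log_post(mu):                              # flat prior => log-likelihood
    return -0.5 * np.sum((data - mu) ** 2)

mu, chain = 0.0, []
for _ in range(20000):
    prop = mu + rng.normal(scale=0.5)          # symmetric proposal
    if np.log(rng.random()) < log_post(prop) - log_post(mu):
        mu = prop                              # accept
    chain.append(mu)
print("posterior mean ~", round(float(np.mean(chain[5000:])), 3))
```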

14.
Model selection procedures often depend explicitly on the sample size n of the experiment. One example is the Bayesian information criterion (BIC), and another is the use of Zellner–Siow priors in Bayesian model selection. Sample size is well-defined if one has i.i.d. real observations, but is not well-defined for vector observations or in non-i.i.d. settings; extending criteria such as BIC to such settings thus requires a definition of effective sample size that applies in those cases as well. A definition of effective sample size that applies to fairly general linear models is proposed and illustrated in a variety of situations. The definition is also used to propose a suitable 'scale' for default proper priors in Bayesian model selection.
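The sketch below shows the mechanical point: BIC computed with an effective sample size in place of the raw n. The equicorrelated-data formula used for n_eff here is a standard back-of-envelope choice and only an illustrative assumption, not the paper's proposed definition.

```python
# Sketch: BIC with an effective sample size n_eff replacing the raw n.
import numpy as np

def bic(loglik, k, n_eff):
    return -2.0 * loglik + k * np.log(n_eff)

def n_eff_equicorrelated(n, rho):
    # Effective number of independent observations when each pair of the n
    # observations has correlation rho.
    return n / (1 + (n - 1) * rho)

n, rho = 100, 0.3
n_eff = n_eff_equicorrelated(n, rho)
print("raw n:", n, "-> effective n:", round(n_eff, 1))
print("BIC:", round(bic(loglik=-150.0, k=3, n_eff=n_eff), 1))
```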

15.
This paper compares the Bayesian and frequentist approaches to testing a one-sided hypothesis about a multivariate mean. First, the paper proposes a simple way to assign a Bayesian posterior probability to one-sided hypotheses about a multivariate mean. The approach is to use (almost) the exact posterior probability under the assumption that the data have a multivariate normal distribution, under either a conjugate prior in large samples or under a vague Jeffreys prior. This is also approximately the Bayesian posterior probability of the hypothesis based on a suitably flat Dirichlet process prior over an unknown distribution generating the data. The Bayesian approach and a frequentist approach to testing the one-sided hypothesis are then compared, with results that show a major difference between Bayesian and frequentist reasoning: the Bayesian posterior probability can be substantially smaller than the frequentist p-value. A class of examples is given where the Bayesian posterior probability is essentially 0, while the frequentist p-value is essentially 1. The Bayesian posterior probability in these examples seems to be more reasonable. Other drawbacks of the frequentist p-value as a measure of whether the one-sided hypothesis is true are also discussed.
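A minimal sketch of the posterior-probability side of the comparison: under the large-sample normal posterior mean | data ~ N(xbar, S/n), the posterior probability that every component of the mean is non-positive can be estimated by simulation. The data-generating settings are invented stand-ins for the examples in the paper.

```python
# Sketch: Monte Carlo posterior probability of the one-sided hypothesis
# H: all components of the mean are <= 0.
import numpy as np

rng = np.random.default_rng(8)
n, dim = 200, 3
X = rng.multivariate_normal([-0.05, -0.05, -0.05], np.eye(dim), size=n)

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)
draws = rng.multivariate_normal(xbar, S / n, size=50000)   # posterior draws of the mean
print("P(all means <= 0 | data) ~", (draws <= 0).all(axis=1).mean())
```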

16.
The lasso is a popular technique for simultaneous estimation and variable selection in many research areas. The marginal posterior mode of the regression coefficients is equivalent to the estimates given by the non-Bayesian lasso when the regression coefficients have independent Laplace priors. Because of its flexibility for statistical inference, the Bayesian approach has attracted a growing body of research in recent years. Current approaches either conduct a fully Bayesian analysis using a Markov chain Monte Carlo (MCMC) algorithm, or use Monte Carlo expectation maximization (MCEM) methods with an MCMC algorithm in each E-step. However, MCMC-based Bayesian methods carry a heavy computational burden and converge slowly. Tan et al. [An efficient MCEM algorithm for fitting generalized linear mixed models for correlated binary data. J Stat Comput Simul. 2007;77:929–943] proposed a non-iterative sampling approach, the inverse Bayes formula (IBF) sampler, for computing posteriors of a hierarchical model in the structure of MCEM. Motivated by their paper, we develop an IBF sampler in the structure of MCEM to give the marginal posterior mode of the regression coefficients for the Bayesian lasso, by adjusting the weights of importance sampling when the full conditional distribution is not explicit. Simulation experiments show that the computational time is much reduced with our method based on the expectation maximization algorithm, and that our algorithms behave comparably with other Bayesian lasso methods in both prediction and variable selection accuracy, and can even do better, especially when the sample size is relatively large.
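The lasso/Laplace-prior equivalence stated at the start can be made concrete: with known noise variance sigma^2 and independent Laplace priors of scale tau, the posterior mode equals the lasso estimate with penalty alpha = sigma^2 / (n * tau) in scikit-learn's parameterization. The sketch below only illustrates this equivalence on simulated data; it is not the paper's IBF/MCEM algorithm.

```python
# Sketch: posterior mode under Laplace priors == lasso with a matched penalty.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(9)
n, p, sigma = 100, 10, 1.0
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]        # sparse truth
y = X @ beta + rng.normal(scale=sigma, size=n)

tau = 0.5                                              # Laplace prior scale
alpha = sigma**2 / (n * tau)                           # matched sklearn penalty
fit = Lasso(alpha=alpha, fit_intercept=False).fit(X, y)
print("MAP / lasso coefficients:", np.round(fit.coef_, 2))
```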

17.
Robust Bayesian testing of point null hypotheses is considered for problems involving the presence of nuisance parameters. The robust Bayesian approach seeks answers that hold for a range of prior distributions. Three techniques for handling the nuisance parameter are studied and compared. They are (i) utilize a noninformative prior to integrate out the nuisance parameter; (ii) utilize a test statistic whose distribution does not depend on the nuisance parameter; and (iii) use a class of prior distributions for the nuisance parameter. These approaches are studied in two examples, the univariate normal model with unknown mean and variance, and a multivariate normal example.
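The robust-Bayes idea of answers that hold over a class of priors can be sketched as follows: compute the Bayes factor for a point null over a whole family of N(0, tau^2) priors on the mean and report its range. To keep the sketch short, sigma is treated as known, which deliberately sidesteps the nuisance-parameter problem that is the paper's actual subject.

```python
# Sketch: range of the Bayes factor BF01 for H0: mu = 0 over a class of
# N(0, tau^2) priors on mu, with sigma assumed known.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
x = rng.normal(0.3, 1.0, size=40)
n, sigma = len(x), 1.0
xbar = x.mean()

taus = np.linspace(0.1, 5.0, 100)              # class of prior scales
# Under H0, xbar ~ N(0, sigma^2/n); under H1, marginally xbar ~ N(0, sigma^2/n + tau^2).
bf01 = norm.pdf(xbar, 0, sigma / np.sqrt(n)) / \
       norm.pdf(xbar, 0, np.sqrt(sigma**2 / n + taus**2))
print(f"BF01 ranges from {bf01.min():.3f} to {bf01.max():.3f} over the class")
```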

18.
This paper develops a natural conjugate prior for the non-homogeneous Poisson process (NHPP) with a power law intensity function. This prior allows for dependence between the scale factor and the aging rate of the NHPP. The proposed prior has relatively simple closed-form expressions for its moments, facilitating the assessment of prior parameters. The use of this prior in Bayesian estimation is compared to other estimation approaches using Monte Carlo simulation. The results show that Bayesian estimation using the proposed prior generally performs at least as well as either maximum likelihood estimation or Bayesian estimation using independent priors.
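For context, the maximum likelihood benchmark for this model is simple: with intensity lambda(t) = (beta/theta)(t/theta)^(beta-1) observed on (0, T], the time-truncated MLEs have closed forms. The sketch below simulates such a process and applies them; the parameter values are invented, and the paper's conjugate-prior Bayesian estimators are not reproduced.

```python
# Sketch: closed-form MLEs for the time-truncated power-law NHPP.
import numpy as np

rng = np.random.default_rng(11)
beta_true, theta_true, T = 1.8, 10.0, 50.0

# Given N(T) = n, event times are order statistics with CDF (t/T)^beta.
n = rng.poisson((T / theta_true) ** beta_true)
t = np.sort(T * rng.random(n) ** (1 / beta_true))

beta_hat = n / np.sum(np.log(T / t))           # standard time-truncated MLEs
theta_hat = T / n ** (1 / beta_hat)
print(f"beta ~ {beta_hat:.2f} (true {beta_true}), theta ~ {theta_hat:.2f} (true {theta_true})")
```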

19.
In latent variable models, parameter estimation can be implemented using either the joint or the marginal likelihood, based on independence or conditional independence assumptions. The same dilemma occurs within the Bayesian framework with respect to the estimation of the Bayesian marginal (or integrated) likelihood, which is the main tool for model comparison and averaging. In most cases, the Bayesian marginal likelihood is a high-dimensional integral that cannot be computed analytically, and a plethora of methods based on Monte Carlo integration (MCI) are used for its estimation. In this work, it is shown that the joint MCI approach makes subtle use of the properties of the adopted model, leading to increased error and bias in finite settings. The sources and the components of the error associated with estimators under the two approaches are identified here and provided in exact forms. Additionally, the effect of the sample covariation on the Monte Carlo estimators is examined. In particular, even under independence assumptions the sample covariance will be close to (but not exactly) zero, which surprisingly has a severe effect on the estimated values and their variability. To address this problem, an index of the sample's divergence from independence is introduced as a multivariate extension of covariance. The implications addressed here are important in the majority of practical problems appearing in Bayesian inference of multi-parameter models with analogous structures.
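A minimal sketch of the naive MCI estimator under discussion: the marginal likelihood p(y) = E_prior[p(y | theta)], estimated by averaging the likelihood over prior draws. The toy normal-normal model is an assumption chosen because it has an exact answer, which makes the Monte Carlo error visible; the joint-vs-marginal decomposition studied in the paper is not reproduced.

```python
# Sketch: naive Monte Carlo integration of a marginal likelihood, compared
# with the exact value in a conjugate normal-normal toy model.
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(12)
y = rng.normal(1.0, 1.0, size=20)              # data; sigma = 1 assumed known
prior_mu, prior_sd = 0.0, 2.0

theta = rng.normal(prior_mu, prior_sd, size=50000)                 # prior draws
loglik = norm.logpdf(y[None, :], loc=theta[:, None], scale=1.0).sum(axis=1)
est = np.exp(loglik).mean()                                        # MCI estimate of p(y)

# Exact marginal: y ~ N(0, I + prior_sd^2 * J) in this conjugate model.
n = len(y)
cov = np.eye(n) + prior_sd**2 * np.ones((n, n))
exact = multivariate_normal(mean=np.zeros(n), cov=cov).pdf(y)
print(f"MCI estimate: {est:.3e}, exact: {exact:.3e}")
```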
