Similar Articles (20 results)
1.
Modern theory for statistical hypothesis testing can broadly be classified as Bayesian or frequentist. Unfortunately, one can reach divergent conclusions if Bayesian and frequentist approaches are applied in parallel to analyze the same data set. This is a serious impasse, since there is no consensus on when to use one approach to the detriment of the other. However, this conflict can be resolved. The present paper shows the existence of a perfect equivalence between Bayesian and frequentist methods for testing. Hence, Bayesian and frequentist decision rules can always be calibrated, in both directions, so as to produce concordant results.
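As a toy illustration of such a calibration (our own example, not the paper's construction): for a one-sided normal test with a flat prior, the posterior probability of the null equals the frequentist p-value, so thresholding either quantity yields the same decision rule.

```python
# A toy calibration (our example, not the paper's construction): for the
# one-sided test H0: theta <= 0 vs H1: theta > 0 with X ~ N(theta, 1) and a
# flat prior on theta, the posterior probability of H0 equals the frequentist
# p-value, so thresholding either quantity gives the same test.
from scipy.stats import norm

x = 1.7  # observed value

p_value = 1 - norm.cdf(x)          # P(X >= x | theta = 0)
post_prob_h0 = norm.cdf(0, loc=x)  # P(theta <= 0 | x); posterior is N(x, 1)

print(p_value, post_prob_h0)  # both equal Phi(-x), approximately 0.0446
```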

2.
In the classical approach to qualitative reliability demonstration, system failure probabilities are estimated based on a binomial sample drawn from the running production. In this paper, we show how to take into account additional available sampling information for some or even all subsystems of a current system under test with a serial reliability structure. In that connection, we present two approaches, a frequentist and a Bayesian one, for assessing an upper bound for the failure probability of serial systems under binomial subsystem data. In the frequentist approach, we introduce (i) a new way of deriving the probability distribution for the number of system failures, which might be randomly assembled from the failed subsystems, and (ii) a more accurate estimator for the Clopper–Pearson upper bound using a beta mixture distribution. In the Bayesian approach, we infer the posterior distribution for the system failure probability on the basis of the system/subsystem testing results and a prior distribution for the subsystem failure probabilities. We propose three different prior distributions and compare their performance in the context of high-reliability testing. Finally, we apply the proposed methods to reduce the effort of semiconductor burn-in studies by exploiting synergies, such as comparable chip layers, among different chip technologies.
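For orientation, a rough sketch of the classical building blocks (the subsystem data and the naive serial combination below are ours; the paper's beta-mixture estimator is sharper than this conservative bound):

```python
# A rough sketch of the classical building blocks (not the paper's beta-mixture
# refinement): one-sided Clopper-Pearson upper bounds per subsystem, naively
# combined into a conservative upper bound for a serial system, which fails
# if and only if at least one subsystem fails.
import math
from scipy.stats import beta

def clopper_pearson_upper(failures, n, conf=0.95):
    """One-sided Clopper-Pearson upper confidence bound for a binomial p."""
    if failures == n:
        return 1.0
    return beta.ppf(conf, failures + 1, n - failures)

# hypothetical subsystem test results: (observed failures, sample size)
subsystems = [(0, 800), (2, 1200), (1, 500)]
uppers = [clopper_pearson_upper(x, n) for x, n in subsystems]

# serial structure: the system survives only if every subsystem survives
p_sys_upper = 1.0 - math.prod(1.0 - u for u in uppers)
print(p_sys_upper)
```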

3.
This article addresses various properties and different methods of estimation of the unknown parameter of the length- and area-biased Maxwell distributions. Although our main focus is on estimation from both the frequentist and Bayesian points of view, various mathematical and statistical properties of the length- and area-biased Maxwell distributions (such as moments, the moment-generating function (mgf), the hazard rate function, the mean residual lifetime function, the residual lifetime function, the reversed residual life function, conditional moments and the conditional mgf, stochastic ordering, and measures of uncertainty) are also derived. We briefly describe different frequentist approaches, namely the maximum likelihood estimator, the moments estimator, the least-squares and weighted least-squares estimators, and the maximum product of spacings estimator, and compare them using extensive numerical simulations. Next, we consider Bayes estimation under different types of loss function (symmetric and asymmetric) using an inverted gamma prior for the scale parameter. Furthermore, Bayes estimators and their respective posterior risks are computed and compared using a Markov chain Monte Carlo (MCMC) algorithm. Bootstrap confidence intervals based on the frequentist approaches are also provided for comparison with the Bayes credible intervals. Finally, a real dataset is analyzed for illustrative purposes.
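As a hedged illustration, under one common Maxwell parameterization the length-biased maximum likelihood estimator has a simple closed form; the paper's parameterization and estimators may differ.

```python
# A hedged sketch, assuming the Maxwell parameterization
# f(x; s) = sqrt(2/pi) x^2 exp(-x^2 / (2 s^2)) / s^3 (the paper's may differ).
# Its length-biased version is proportional to x^3 exp(-x^2 / (2 s^2)), for
# which the MLE has the closed form s_hat = sqrt(sum(x_i^2) / (4 n)).
import numpy as np

rng = np.random.default_rng(0)
s_true = 2.0
# under this parameterization, X^2 / (2 s^2) ~ Gamma(shape=2, scale=1) for
# length-biased Maxwell data, which gives a direct simulator
x = s_true * np.sqrt(2.0 * rng.gamma(shape=2.0, size=5000))

s_hat = np.sqrt(np.sum(x**2) / (4 * len(x)))
print(s_hat)  # close to s_true = 2.0
```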

4.
A density estimation method in a Bayesian nonparametric framework is presented for the case where the recorded data do not come directly from the distribution of interest but from a length-biased version of it. From a Bayesian perspective, efforts to computationally evaluate posterior quantities conditional on length-biased data have been hindered by the problem of an intractable normalizing constant. In this article, we present a novel Bayesian nonparametric approach to the length-biased sampling problem that circumvents the issue of the normalizing constant. Numerical illustrations as well as a real data example are presented, and the estimator is compared against its frequentist counterpart, Jones's kernel density estimator for indirect data.
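For reference, a minimal sketch of the frequentist comparator: Jones's estimator down-weights each observation by 1/x_i and normalizes by a harmonic-mean estimate of the mean. The Gaussian kernel and bandwidth below are ad hoc choices.

```python
# A minimal sketch of Jones's kernel density estimator for length-biased data:
# observations are down-weighted by 1/x_i, with the weights normalized so they
# sum to one (equivalently, scaled by the harmonic-mean estimate of E[X]).
import numpy as np
from scipy.stats import norm

def jones_kde(x_grid, data, h):
    weights = (1.0 / data) / np.sum(1.0 / data)             # sums to one
    kernels = norm.pdf((x_grid[:, None] - data[None, :]) / h) / h
    return kernels @ weights                                # weighted kernel sum
```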

5.
This article mainly considers interval estimation of the scale and shape parameters of the generalized exponential (GE) distribution. We adopt the generalized fiducial method to construct new confidence intervals for the parameters of interest and compare them with the frequentist and Bayesian methods. In addition, we compare point estimation based on the frequentist, generalized fiducial, and Bayesian methods. Simulation results show that the new procedure based on generalized fiducial inference is more applicable than the non-fiducial methods for point and interval estimation of the GE distribution. Finally, two lifetime data sets are used to illustrate the application of the new procedure.

6.
In many studies a large number of variables is measured, and the identification of relevant variables influencing an outcome is an important task. Several procedures for variable selection are available. However, focusing on one model only neglects the fact that other, equally appropriate models usually exist. Bayesian and frequentist model averaging approaches have been proposed to improve the development of a predictor. With a larger number of variables (say, more than ten) the resulting class of models can be very large. For Bayesian model averaging, Occam's window is a popular approach to reducing the model space. As this approach may not eliminate any variables, a variable screening step was proposed for a frequentist model averaging procedure: based on the results of selected models in bootstrap samples, variables are eliminated before deriving a model averaging predictor. As a simple alternative, backward elimination can be used for screening. Through two examples and by means of simulation, we investigate some properties of the screening step. In the simulation study we consider situations with 15 and 25 variables, respectively, of which seven influence the outcome. The screening step eliminates most of the non-influential variables, but also some variables with a weak effect. Variable screening leads to more applicable models without eliminating models that are more strongly supported by the data. Furthermore, we give recommendations for important parameters of the screening step.
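A minimal sketch of the screening idea (the selection method, bootstrap count, and cutoff below are illustrative choices, not the paper's exact procedure):

```python
# A minimal sketch of bootstrap-based variable screening: run a simple
# backward elimination on bootstrap resamples and keep only variables whose
# bootstrap inclusion frequency exceeds a threshold, before model averaging.
import numpy as np
import statsmodels.api as sm

def backward_eliminate(X, y, alpha=0.05):
    cols = list(range(X.shape[1]))
    while cols:
        fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
        pvals = fit.pvalues[1:]            # skip the intercept
        worst = int(np.argmax(pvals))
        if pvals[worst] <= alpha:          # all remaining variables significant
            break
        cols.pop(worst)                    # drop the least significant variable
    return set(cols)

def screen(X, y, n_boot=200, keep_freq=0.3, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)        # bootstrap resample
        for j in backward_eliminate(X[idx], y[idx]):
            counts[j] += 1
    return np.where(counts / n_boot >= keep_freq)[0]
```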

7.
The likelihood function is often used for parameter estimation. Its use, however, may cause difficulties in specific situations. In order to circumvent these difficulties, we propose a parameter estimation method based on replacing the likelihood in the formula for the Bayesian posterior distribution by a function that depends on a contrast measuring the discrepancy between the observed data and a parametric model. The properties of the contrast-based (CB) posterior distribution are studied to understand the consequences of incorporating a contrast into the Bayes formula. We show that the CB-posterior distribution can be used to make frequentist inference and to assess the asymptotic variance matrix of the estimator with limited analytical calculation compared to the classical contrast approach. Although the primary focus of this paper is on frequentist estimation, it is shown that for specific contrasts the CB-posterior distribution can also be used to make inference in the Bayesian way. The method was used to estimate the parameters of a variogram (simulated data), a Markovian model (simulated data), and a cylinder-based autosimilar model describing soil roughness (real data). Although the method is presented from a spatial statistics perspective, it can be applied to non-spatial data.
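A minimal sketch of the construction (our toy example: a least-squares contrast for a mean, a flat prior, and a random-walk Metropolis sampler; the paper's contrasts are for spatial models):

```python
# A minimal sketch of a contrast-based posterior: replace the likelihood in
# Bayes' formula with exp(-n * contrast(theta)) and sample the result with a
# random-walk Metropolis step. The contrast here is a least-squares
# discrepancy for a toy mean model; the prior is taken flat.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, size=100)

def contrast(theta):
    return np.mean((data - theta) ** 2)       # least-squares contrast

def log_cb_posterior(theta):
    return -len(data) * contrast(theta)       # flat prior assumed

samples, theta = [], 0.0
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.3)       # random-walk proposal
    if np.log(rng.uniform()) < log_cb_posterior(prop) - log_cb_posterior(theta):
        theta = prop
    samples.append(theta)

print(np.mean(samples[1000:]))  # concentrates near the true mean, 3.0
```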

8.
Interval-censored data arise when a failure time, say T, cannot be observed directly but can only be determined to lie in an interval obtained from a series of inspection times. The frequentist approach to analysing interval-censored data has been developed for some time now. Owing to the unavailability of software, it is very common in biological, medical, and reliability studies to simplify the interval-censoring structure of the data into the more standard right-censoring situation by imputing the midpoints of the censoring intervals. In this paper, we apply the Bayesian approach by employing Lindley's (1980) and Tierney and Kadane's (1986) numerical approximation procedures when the survival data under consideration are interval-censored. The Bayesian approach to interval-censored data has barely been discussed in the literature. The essence of this study is to explore and promote Bayesian methods when the survival data being analysed are interval-censored. We consider only a parametric approach, assuming that the survival data follow a log-logistic distribution. We illustrate the proposed methods with two real data sets, and a simulation study is carried out to compare the performances of the methods.
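For concreteness, a hedged sketch of the exact interval-censored log-likelihood under a log-logistic model, i.e. the quantity that midpoint imputation approximates. The data below are made up; scipy's `fisk` is the log-logistic distribution.

```python
# Exact interval-censored log-likelihood under a log-logistic model:
# each interval (l_i, r_i] contributes log[F(r_i) - F(l_i)], in contrast to
# the common shortcut of imputing midpoints and treating them as exact.
import numpy as np
from scipy.stats import fisk
from scipy.optimize import minimize

intervals = np.array([[1.0, 3.0], [2.0, 5.0], [0.5, 2.0], [4.0, 7.0]])  # toy data

def neg_loglik(params):
    shape, scale = np.exp(params)  # optimize on the log scale for positivity
    probs = fisk.cdf(intervals[:, 1], c=shape, scale=scale) - \
            fisk.cdf(intervals[:, 0], c=shape, scale=scale)
    return -np.sum(np.log(np.clip(probs, 1e-300, None)))

fit = minimize(neg_loglik, x0=np.log([1.0, 2.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
print(shape_hat, scale_hat)
```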

9.
Frequentist and Bayesian methods differ in many aspects but share some basic optimal properties. In real-life prediction problems, situations exist in which a model based on one of these paradigms is preferable, depending on some subjective criteria. Nonparametric classification and regression techniques, such as decision trees and neural networks, have both frequentist (classification and regression trees (CARTs) and artificial neural networks) and Bayesian (Bayesian CART and Bayesian neural networks) approaches to learning from data. In this paper, we present two hybrid models combining the Bayesian and frequentist versions of CART and neural networks, which we call Bayesian neural tree (BNT) models. BNT models can simultaneously perform feature selection and prediction, are highly flexible, and generalise well in settings with limited training observations. We study the statistical consistency of the proposed approaches and derive the optimal value of a key model parameter. The excellent performance of the newly proposed BNT models is shown in simulation studies. We also provide illustrative examples using a wide variety of standard regression datasets from a publicly available machine-learning repository to show the superiority of the proposed models over popularly used Bayesian CART and Bayesian neural network models.

10.
In the frailty Cox model, frequentist approaches often present problems of numerical resolution, convergence, and variance calculation. The Bayesian approach offers an alternative. The goal of this study was to compare, using real (calf gastroenteritis) and simulated data, the results obtained with the MCMC method used in the Bayesian approach against two frequentist approaches: the Newton–Raphson algorithm applied to a penalized likelihood, and the EM algorithm. The results showed that as the number of groups in the population decreases, the Bayesian approach gives a less biased estimate of the frailty variance and of the group fixed effect than the frequentist approaches.

11.
In statistical practice, inferences on standardized regression coefficients are often required but complicated by the fact that they are nonlinear functions of the parameters, so standard textbook results are simply wrong. Within the frequentist domain, asymptotic delta methods can be used to construct confidence intervals for the standardized coefficients with proper coverage probabilities. Alternatively, Bayesian methods solve similar and other inferential problems by simulating data from the posterior distribution of the coefficients. In this paper, we present Bayesian procedures that provide comprehensive solutions for inference on the standardized coefficients. Simple computing algorithms are developed to generate posterior samples with no autocorrelation, based on both noninformative improper and informative proper prior distributions. Simulation studies show that Bayesian credible intervals constructed by our approaches have comparable and even better statistical properties than their frequentist counterparts, particularly in the presence of collinearity. In addition, our approaches solve some meaningful inferential problems that are difficult, if not impossible, from the frequentist standpoint, including identifying joint rankings of multiple standardized coefficients and making optimal decisions concerning their sizes and comparisons. We illustrate applications of our approaches through examples and make sample R functions available for implementing the proposed methods.
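A minimal sketch of the kind of computation involved (a standard conjugate sampler, not necessarily the paper's algorithm): under the noninformative prior p(β, σ²) ∝ 1/σ², exact independent posterior draws of β are available, and each draw transforms into standardized coefficients β_j · sd(x_j) / sd(y).

```python
# Exact, independent posterior draws of regression coefficients under the
# standard noninformative prior p(beta, sigma^2) ∝ 1/sigma^2, transformed
# into standardized coefficients. Because the draws are i.i.d., there is no
# MCMC autocorrelation, echoing the point made in the abstract above.
import numpy as np

def standardized_coef_posterior(X, y, n_draws=4000, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])          # add intercept
    XtX_inv = np.linalg.inv(Xd.T @ Xd)
    beta_hat = XtX_inv @ Xd.T @ y
    resid = y - Xd @ beta_hat
    s2 = resid @ resid
    nu = n - p - 1
    # sigma^2 | y ~ s2 / chi^2_nu ;  beta | sigma^2, y ~ N(beta_hat, sigma^2 (X'X)^-1)
    sigma2 = s2 / rng.chisquare(nu, size=n_draws)
    L = np.linalg.cholesky(XtX_inv)
    z = rng.standard_normal((n_draws, p + 1))
    betas = beta_hat + np.sqrt(sigma2)[:, None] * (z @ L.T)
    # drop the intercept and standardize each draw
    return betas[:, 1:] * X.std(axis=0) / y.std()
```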

12.
The mixed random-effects model is commonly used in longitudinal data analysis within either the frequentist or the Bayesian framework. Here we consider the case in which we have prior knowledge about some of the parameters but no such information about the rest. We therefore take a hybrid approach to the random-effects model: the parameters with prior information are estimated via a Bayesian procedure, and the remaining parameters by frequentist maximum likelihood estimation (MLE), simultaneously on the same model. In practice, partial prior information, for example on covariates such as age and gender, is often available; using it yields more accurate estimates in the mixed random-effects model. A series of simulation studies was performed to compare the results with the commonly used random-effects model with and without partial prior information. The results of hybrid estimation (HYB) and MLE were very close to each other, but the θ estimates from the model with partial prior information (HYB) were closer to the true θ values and showed smaller variances than those from MLE without prior information; the mean squared errors were much smaller for HYB than for MLE. This advantage of HYB is most pronounced in longitudinal data with small sample sizes. The HYB and MLE methods are applied to a real longitudinal data set for illustration.

13.
We propose a generalization of the one-dimensional Jeffreys' rule in order to obtain noninformative prior distributions for non-regular models, taking into account the comments made by Jeffreys in his 1946 article. These noninformative priors are parameterization invariant, and the resulting Bayesian intervals have good frequentist behavior. In some important cases, we can generate noninformative distributions for multi-parameter models with non-regular parameters. In non-regular models, the Bayesian method offers a satisfactory solution to the inference problem and also avoids the problems that the maximum likelihood estimator has with these models. Finally, we obtain noninformative distributions for job-search and deterministic frontier production homogeneous models.
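For reference, the classical regular-model rule being generalized is Jeffreys' square-root-information prior; the non-regular extension itself is the paper's contribution and is not reproduced here.

```latex
% One-dimensional Jeffreys rule for regular models:
\pi(\theta) \;\propto\; \sqrt{I(\theta)},
\qquad
I(\theta) \;=\; \mathbb{E}_{\theta}\!\left[\left(\frac{\partial}{\partial \theta}
\log f(X \mid \theta)\right)^{\!2}\right].
```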

14.
Structural equation models (SEMs) have been extensively used in behavioral, social, and psychological research to model relations between latent variables and observations. Most software packages for fitting SEMs rely on frequentist methods, and traditional models and software are not appropriate for the analysis of dependent observations such as time-series data. In this study, a structural equation model with a time-series feature is introduced. A Bayesian approach is used to fit the model with the aid of Markov chain Monte Carlo methods. Bayesian inference, as well as prediction with the proposed time-series structural equation model, can also reveal certain unobserved relationships among the observations. The approach is successfully applied to real Asian, American, and European stock return data.

15.
In this paper, we consider the Bayesian analysis of binary time series under different priors, namely normal, Student's t, and Jeffreys priors, and compare the results with frequentist methods through simulation experiments and one real data set on daily rainfall, in inches, at Mount Washington, NH. Among the Bayesian methods, our results show that the Jeffreys prior performs better in most situations for both the simulated and the rainfall data. Furthermore, among the weakly informative priors considered, the Student's t prior with 7 degrees of freedom fits the data most adequately.

16.
Prediction limits for the Poisson distribution are useful in practice when predicting the occurrences of some phenomenon, for example, the number of infections from a disease per year among school children, or the number of hospitalizations per year among patients with cardiovascular disease. In order to allocate the right resources and estimate the associated cost, one would want to know the worst (i.e., an upper limit) and the best (i.e., a lower limit) scenarios. Under the Poisson distribution, we construct the optimal frequentist and Bayesian prediction limits and assess the frequentist properties of the Bayesian prediction limits. We show that the Bayesian upper prediction limit derived from a uniform prior distribution and the Bayesian lower prediction limit derived from a modified Jeffreys noninformative prior coincide with their respective frequentist limits. This is not the case for the Bayesian lower prediction limit derived from a uniform prior or the Bayesian upper prediction limit derived from a modified Jeffreys prior distribution. Furthermore, it is shown that not all Bayesian prediction limits derived from a proper prior can be interpreted in a frequentist context. Using a counterexample, we state a sufficient condition and show that Bayesian prediction limits derived from proper priors satisfying our condition cannot be interpreted in a frequentist context. Analyses of simulated data and of data on Atlantic tropical storm occurrences are presented.
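A simple special case of the Bayesian side (our own illustration, not the paper's full construction): with one past count x ~ Poisson(λ) and a uniform prior, the posterior is Gamma(x + 1, rate 1) and the predictive distribution of a future count is negative binomial, from which prediction limits follow.

```python
# Bayesian prediction limits for a future Poisson count under a uniform prior:
# the posterior for lambda is Gamma(x + 1, rate 1), and the Poisson-Gamma
# mixture gives a negative binomial predictive, Y ~ NB(r = a, p = b/(b + 1)).
from scipy.stats import nbinom

x = 12             # observed past count
a, b = x + 1, 1.0  # Gamma posterior (shape, rate) under a uniform prior

upper_95 = nbinom.ppf(0.95, a, b / (b + 1.0))
lower_05 = nbinom.ppf(0.05, a, b / (b + 1.0))
print(lower_05, upper_95)
```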

17.
We formulate Bayesian approaches to the problems of determining the required sample size for Bayesian interval estimators of a predetermined length for a single Poisson rate, for the difference between two Poisson rates, and for the ratio of two Poisson rates. We demonstrate the efficacy of our Bayesian-based sample-size determination method with two real-data quality-control examples and compare the results to frequentist sample-size determination methods.
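One way to operationalize such a criterion (a hedged sketch using an average-length rule; the paper's exact criteria may differ): increase n until the expected 95% credible-interval length, averaged over the prior predictive, falls below a target.

```python
# Bayesian sample-size determination for a single Poisson rate via an
# average-length criterion: with a Gamma(a0, b0) prior, the posterior after
# observing a total count over n units is Gamma(a0 + total, rate b0 + n).
import numpy as np
from scipy.stats import gamma

def expected_length(n, a0=2.0, b0=1.0, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    lam = rng.gamma(a0, 1.0 / b0, size=n_sim)   # draw rates from the prior
    total = rng.poisson(lam * n)                # prior predictive totals
    lo = gamma.ppf(0.025, a0 + total, scale=1.0 / (b0 + n))
    hi = gamma.ppf(0.975, a0 + total, scale=1.0 / (b0 + n))
    return np.mean(hi - lo)

n = 1
while expected_length(n) > 0.5:  # target interval length 0.5
    n += 1
print(n)
```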

18.
This article presents a nonparametric Bayesian procedure for estimating a survival curve in a proportional hazards model when some of the data are censored on the left and some are censored on the right. The method works under the assumption of a Dirichlet process prior on the observable variable. Strong consistency of the estimator is proved, and an example is given. Finally, a simulation study is presented to analyze the estimator.

19.
Whilst innovative Bayesian approaches are increasingly used in clinical studies, in the preclinical area Bayesian methods appear to be rarely used in the reporting of pharmacology data. This is particularly surprising in the context of regularly repeated in vivo studies, where there is a considerable amount of data from historical control groups with potential value. This paper describes our experience with introducing Bayesian analysis for such studies using a Bayesian meta-analytic predictive approach. This leads naturally either to an informative prior for a control group as part of a full Bayesian analysis of the next study, or to a predictive distribution that replaces a control group entirely. We use quality control charts to illustrate study-to-study variation to the scientists and describe informative priors in terms of their approximate effective numbers of animals. We describe two case studies of animal models: the lipopolysaccharide-induced cytokine release model used in inflammation, and the novel object recognition model used to screen cognitive enhancers, both of which show the advantage of a Bayesian approach over the standard frequentist analysis. We conclude that using Bayesian methods in stable repeated in vivo studies can result in a more effective use of animals, either by reducing the total number of animals used or by increasing the precision of key treatment differences. This will lead to clearer results and supports the "3Rs initiative" to Refine, Reduce and Replace animals in research. Copyright © 2016 John Wiley & Sons, Ltd.
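A back-of-envelope version of the "effective number of animals" idea (our own illustration with made-up numbers, not the paper's derivation): for a normal prior on a group mean with known per-animal variance σ², a prior with variance v carries roughly the information of σ²/v animals.

```python
# Effective sample size of an informative prior on a group mean: the prior
# N(m, v) contributes the same precision as sigma2 / v independent animals
# with per-animal variance sigma2. Numbers below are hypothetical.
sigma2 = 4.0   # hypothetical per-animal variance from historical data
v_prior = 0.5  # variance of the meta-analytic predictive prior

effective_n = sigma2 / v_prior
print(effective_n)  # this prior is "worth" about 8 animals
```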

20.
Bayesian Survival Analysis Using Bernstein Polynomials
Bayesian survival analysis of right-censored survival data is studied using priors on Bernstein polynomials and Markov chain Monte Carlo methods. These priors easily take into consideration geometric information such as convexity or an initial guess at the cumulative hazard function, select only smooth functions, can have large enough support, and can be easily specified and generated. Certain frequentist asymptotic properties of the posterior distribution are established. Simulation studies indicate that these Bayes methods are quite satisfactory.
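For context, the standard Bernstein polynomial expansion underlying such priors (the paper's specific prior on the coefficients is not reproduced here):

```latex
% Bernstein polynomial of order N with coefficients w_k; a prior on
% nondecreasing (w_0, ..., w_N) induces a prior on smooth nondecreasing
% functions, such as a cumulative hazard on a bounded interval.
B_N(t) \;=\; \sum_{k=0}^{N} w_k \binom{N}{k}\, t^{k} (1-t)^{N-k},
\qquad t \in [0,1].
```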
