Similar articles (20 results)
1.
In the Bayesian analysis of a multiple-recapture census, different diffuse prior distributions can lead to markedly different inferences about the population size N. Through consideration of the Fisher information matrix it is shown that the number of captures in each sample typically provides little information about N. This suggests that if there is no prior information about capture probabilities, then knowledge of just the sample sizes and not the number of recaptures should leave the distribution of N unchanged. A prior model that has this property is identified and the posterior distribution is examined. In particular, asymptotic estimates of the posterior mean and variance are derived. Differences between Bayesian and classical point and interval estimators are illustrated through examples.
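The contrast between Bayesian and classical estimates of N can be made concrete with a toy two-sample census. The following is a minimal sketch, not the paper's prior model: a hypergeometric likelihood for the recaptures combined with an illustrative 1/N prior on a truncated grid.

```python
from math import comb

def posterior_N(n1, n2, m, n_max=2000):
    """Grid posterior for population size N in a two-sample
    capture-recapture study: hypergeometric likelihood for the
    m recaptures, with an illustrative 1/N prior."""
    n_min = n1 + n2 - m          # every distinct animal seen at least once
    grid = range(n_min, n_max + 1)
    w = []
    for N in grid:
        like = comb(n1, m) * comb(N - n1, n2 - m) / comb(N, n2)
        w.append(like / N)       # multiply by the 1/N prior
    total = sum(w)
    return {N: wi / total for N, wi in zip(grid, w)}

# 60 marked, 50 in the second sample, 12 recaptured
post = posterior_N(n1=60, n2=50, m=12)
mean_N = sum(N * p for N, p in post.items())
```

The classical Lincoln-Petersen point estimate here is n1*n2/m = 250; the posterior mean sits nearby but the posterior also quantifies the heavy right tail in N.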

2.
This paper provides methods of obtaining Bayesian D-optimal Accelerated Life Test (ALT) plans for series systems with independent exponential component lives under the Type-I censoring scheme. Two different Bayesian D-optimality design criteria are considered. For both the criteria, first optimal designs for a given number of experimental points are found by solving a finite-dimensional constrained optimization problem. Next, the global optimality of such an ALT plan is ensured by applying the General Equivalence Theorem. A detailed sensitivity analysis is also carried out to investigate the effect of different planning inputs on the resulting optimal ALT plans. Furthermore, these Bayesian optimal plans are also compared with the corresponding (frequentist) locally D-optimal ALT plans.

3.
Sinh-normal/independent distributions are a class of symmetric heavy-tailed distributions that include the sinh-normal distribution as a special case, which has been used extensively in Birnbaum–Saunders regression models. Here, we explore the use of Markov chain Monte Carlo methods to develop a Bayesian analysis of nonlinear regression models when sinh-normal/independent distributions are assumed for the random error terms, which provides a robust alternative to the sinh-normal nonlinear regression model. Bayesian mechanisms for parameter estimation, residual analysis and influence diagnostics are then developed, which extend the results of Farias and Lemonte [Bayesian inference for the Birnbaum-Saunders nonlinear regression model, Stat. Methods Appl. 20 (2011), pp. 423-438], who used the sinh-normal/independent distributions with known scale parameter. Some special cases, based on the sinh-Student-t (sinh-St), sinh-slash (sinh-SL) and sinh-contaminated normal (sinh-CN) distributions, are discussed in detail. Two real datasets are finally analyzed to illustrate the developed procedures.

4.
A Bayesian analysis is provided for the Wilcoxon signed-rank statistic (T+). The Bayesian analysis is based on a sign-bias parameter φ on the (0, 1) interval. For the case of a uniform prior probability distribution for φ and for small sample sizes (i.e., 6 ≤ n ≤ 25), values for the statistic T+ are computed that enable probabilistic statements about φ. For larger sample sizes, approximations are provided for the asymptotic likelihood function P(T+|φ) as well as for the posterior distribution P(φ|T+). Power analyses are examined both for properly specified Gaussian sampling and for misspecified non-Gaussian models. The new Bayesian metric has high power efficiency in the range of 0.9–1 relative to a standard t test when there is Gaussian sampling. But if the sampling is from an unknown and misspecified distribution, then the new statistic still has high power; in some cases, the power can be higher than the t test (especially for probability mixtures and heavy-tailed distributions). The new Bayesian analysis is thus a useful and robust method for applications where the usual parametric assumptions are questionable. These properties further enable a way to do a generic Bayesian analysis for many non-Gaussian distributions that currently lack a formal Bayesian model.
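For small n, the likelihood P(T+ | φ) can be computed exactly by dynamic programming, since under a sign-bias model each rank carries a positive sign independently with probability φ. The sketch below, including the grid posterior under a uniform prior, is an illustration of that idea, not the paper's tabulated values.

```python
def wilcoxon_likelihood(t_plus, n, phi):
    """P(T+ = t_plus | phi) under the sign-bias model: rank i is
    positive independently with probability phi; DP over ranks."""
    max_t = n * (n + 1) // 2
    p = [0.0] * (max_t + 1)
    p[0] = 1.0
    for i in range(1, n + 1):
        new = [0.0] * (max_t + 1)
        for t, pt in enumerate(p):
            if pt:
                new[t] += pt * (1 - phi)   # rank i gets a negative sign
                new[t + i] += pt * phi     # rank i gets a positive sign
        p = new
    return p[t_plus]

# Grid posterior for phi under a uniform prior: n = 10, observed T+ = 30
grid = [k / 200 for k in range(1, 200)]
like = [wilcoxon_likelihood(30, 10, phi) for phi in grid]
total = sum(like)
post = [l / total for l in like]
post_mean = sum(phi * p for phi, p in zip(grid, post))
```

Since E(T+ | φ) = φ n(n+1)/2, the posterior mass concentrates near φ ≈ 30/55 ≈ 0.55 in this example.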

5.
A Bayesian model consists of two elements: a sampling model and a prior density. The problem of selecting a prior density is nothing but the problem of selecting a Bayesian model where the sampling model is fixed. A predictive approach is used through a decision problem where the loss function is the squared L2 distance between the sampling density and the posterior predictive density, since the aim of the method is to choose the prior that yields the best possible posterior predictive density. An algorithm is developed for solving the problem; this algorithm is based on Lavine's linearization technique.

6.
This paper sets out to implement the Bayesian paradigm for fractional polynomial models under the assumption of normally distributed error terms. Fractional polynomials widen the class of ordinary polynomials and offer an additive and transportable modelling approach. The methodology is based on a Bayesian linear model with a quasi-default hyper-g prior and combines variable selection with parametric modelling of additive effects. A Markov chain Monte Carlo algorithm for the exploration of the model space is presented. This theoretically well-founded stochastic search constitutes a substantial improvement to ad hoc stepwise procedures for the fitting of fractional polynomial models. The method is applied to a data set on the relationship between ozone levels and meteorological parameters, previously analysed in the literature.
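For context, fractional polynomials draw their exponents from a small conventional power set, with log x standing in for power 0 and a repeated power contributing an extra log x factor. A minimal sketch of the basis construction (function names are mine, not the paper's):

```python
from math import log, isclose

# Conventional fractional-polynomial power set (Royston-Altman style)
FP_POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)

def fp_term(x, p):
    """Single fractional-polynomial term: x**p, with log(x) for p == 0."""
    return log(x) if p == 0 else x ** p

def fp2_basis(x, p1, p2):
    """Degree-2 fractional-polynomial basis at x; a repeated power
    contributes an extra log(x) factor."""
    t1 = fp_term(x, p1)
    t2 = fp_term(x, p1) * log(x) if p2 == p1 else fp_term(x, p2)
    return (t1, t2)
```

Model selection then amounts to searching over pairs (p1, p2) from FP_POWERS together with the regression coefficients, which is what the stochastic search above explores.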

7.
Combined Bayesian estimates for equicorrelation covariance matrices are considered. The case of a common equicorrelation ρ and possibly different standard deviations σ1, …, σk among k experimental groups is examined first, and the Bayesian estimation of (ρ, σ1, …, σk) is discussed. Secondly, under the assumption of a common standard deviation σ and possibly different equicorrelations, the Bayesian estimation of (ρ1, …, ρk, σ) is considered.

8.
This paper gives an exposition of the use of the posterior likelihood ratio for testing point null hypotheses in a fully Bayesian framework. Connections between the frequentist P-value and the posterior distribution of the likelihood ratio are used to interpret and calibrate P-values in a Bayesian context, and examples are given to show the use of simple posterior simulation methods to provide Bayesian tests of common hypotheses.
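The posterior-simulation idea can be sketched in the simplest setting: a normal mean with known variance and a flat prior, where the posterior distribution of the likelihood ratio L(μ0)/L(μ) is obtained by drawing μ from its posterior. This is an illustrative setting of my choosing, not one of the paper's examples.

```python
import random, math

def posterior_LR_sim(xbar, n, sigma=1.0, mu0=0.0, draws=20000, seed=1):
    """Simulate the posterior distribution of the likelihood ratio
    L(mu0)/L(mu) for a normal mean, flat prior on mu, known sigma."""
    rng = random.Random(seed)
    sd = sigma / math.sqrt(n)           # posterior sd of mu given xbar
    out = []
    for _ in range(draws):
        mu = rng.gauss(xbar, sd)        # draw mu from its posterior
        loglr = -n * ((xbar - mu0) ** 2 - (xbar - mu) ** 2) / (2 * sigma ** 2)
        out.append(math.exp(loglr))
    return out

lrs = posterior_LR_sim(xbar=0.5, n=25)
prob_small = sum(lr < 1 for lr in lrs) / len(lrs)  # posterior P(LR < 1)
```

A large posterior probability that the likelihood ratio is small is evidence against the point null, and quantities like prob_small are what get related to the frequentist P-value.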

9.
Dealing with incomplete data is a pervasive problem in statistical surveys. Bayesian networks have recently been used in missing data imputation. In this research, we propose a new methodology for the multivariate imputation of missing data using discrete Bayesian networks and conditional Gaussian Bayesian networks. Results from imputing missing values in a coronary artery disease data set and a milk composition data set, as well as a simulation study from the cancer-neapolitan network, are presented to demonstrate and compare the performance of three Bayesian network-based imputation methods with those of multivariate imputation by chained equations (MICE) and the classical hot-deck imputation method. To assess the effect of the structure learning algorithm on the performance of the Bayesian network-based methods, two methods, the Peter-Clark algorithm and greedy search-and-score, have been applied. The Bayesian network-based methods are: first, the method introduced by Di Zio et al. [Bayesian networks for imputation, J. R. Stat. Soc. Ser. A 167 (2004), 309–322], in which each missing item of a variable is imputed using the information given in the parents of that variable; second, the method of Di Zio et al. [Multivariate techniques for imputation based on Bayesian networks, Neural Netw. World 15 (2005), 303–310], which uses the information in the Markov blanket of the variable to be imputed; and finally, our newly proposed method, which applies all available knowledge about the variables of interest, comprising the Markov blanket and hence the parent set, to impute a missing item. Results indicate the high quality of the new method, especially in the presence of high missingness percentages and more connected networks. The new method has also been shown to be more efficient than MICE for small sample sizes with high missing rates.

10.
The display of data by means of contingency tables is used in different approaches to statistical inference, for example, to address the test of homogeneity of independent multinomial distributions. We develop a Bayesian procedure to test simple null hypotheses against two-sided alternatives in contingency tables. Given independent samples from two binomial distributions and taking a mixed prior distribution, we calculate the posterior probability that the proportion of successes in the first population equals that in the second. This posterior probability is compared with the p-value of the classical method, yielding a reconciliation between the classical and Bayesian results. The results are then generalized to r × s tables.
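For the 2 × 2 case, the posterior probability of the null under a mixed prior has a closed form via Beta functions. The sketch below assumes a point mass π0 on p1 = p2 with a shared Beta(a, b) prior under the null and independent Beta(a, b) priors under the alternative; these specific prior choices are mine for illustration.

```python
from math import lgamma, exp

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def post_prob_H0(x1, n1, x2, n2, a=1.0, b=1.0, pi0=0.5):
    """Posterior probability that two binomial proportions are equal,
    under a point mass pi0 on H0 and Beta(a, b) priors elsewhere.
    Binomial coefficients cancel between the two marginal likelihoods."""
    log_m0 = log_beta(a + x1 + x2, b + n1 + n2 - x1 - x2) - log_beta(a, b)
    log_m1 = (log_beta(a + x1, b + n1 - x1) - log_beta(a, b)
              + log_beta(a + x2, b + n2 - x2) - log_beta(a, b))
    bf = exp(log_m0 - log_m1)        # Bayes factor in favour of H0
    return pi0 * bf / (pi0 * bf + 1 - pi0)

p_same = post_prob_H0(10, 20, 10, 20)   # identical observed proportions
p_diff = post_prob_H0(2, 20, 18, 20)    # very different proportions
```

Comparing such posterior probabilities with the classical p-value at the same data is exactly the kind of reconciliation the abstract describes.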

11.
A Bayesian approach is considered for the interval estimation of a binomial proportion in doubly sampled data. The coverage probability and the expected width of the Bayesian confidence interval are compared with likelihood-related confidence intervals. It is shown that a hierarchical Bayesian approach provides relatively simple and effective confidence intervals. In addition, it is shown that the Agresti–Coull type confidence interval, discussed by Lee and Choi (2009), can be justified within the Bayesian framework.

12.
We can use wavelet shrinkage to estimate a possibly multivariate regression function g under the general regression setup y = g + ε. We propose an enhanced wavelet-based denoising methodology based on Bayesian adaptive multiresolution shrinkage, which combines an effective Bayesian shrinkage rule with a semi-supervised learning mechanism. The Bayesian shrinkage rule is advanced by the semi-supervised learning method, in which the neighboring structure of a wavelet coefficient is adopted and an appropriate decision function is derived. According to the decision function, wavelet coefficients follow one of two prespecified Bayesian rules obtained using varying related parameters. The decision for a wavelet coefficient depends not only on its magnitude, but also on the neighboring structure in which the coefficient is located. We discuss the theoretical properties of the suggested method and provide recommended parameter settings. We show that the proposed method is often superior to several existing wavelet denoising methods through extensive experimentation.
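To illustrate the flavor of neighborhood-dependent shrinkage, here is a crude stand-in: one level of the Haar transform plus a soft-threshold rule whose threshold is relaxed when neighboring detail coefficients are large. This is not the authors' decision function or Bayesian rule, just a toy showing how a coefficient's fate can depend on its neighbors as well as its magnitude.

```python
import math

def haar_step(x):
    """One level of the orthonormal Haar transform (len(x) even)."""
    s = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return s, d

def inv_haar_step(s, d):
    """Invert one level of the Haar transform."""
    x = []
    for si, di in zip(s, d):
        x += [(si + di) / math.sqrt(2), (si - di) / math.sqrt(2)]
    return x

def neighbor_shrink(d, lam):
    """Soft-threshold each detail coefficient, using a milder threshold
    when the local neighborhood contains a large coefficient."""
    out = []
    for i, di in enumerate(d):
        nb = max(abs(d[j]) for j in range(max(0, i - 1), min(len(d), i + 2)))
        t = lam if nb < 2 * lam else lam / 2   # big neighbor -> shrink less
        out.append(math.copysign(max(abs(di) - t, 0.0), di))
    return out

s, d = haar_step([1.0, 2.0, 3.0, 4.0])
recon = inv_haar_step(s, d)                     # exact reconstruction
shrunk = neighbor_shrink([0.1, 5.0, 0.2], 1.0)  # small coeffs near a spike
```

In the real method the two "rules" are Bayesian shrinkage functions and the switch is a learned decision function, but the neighbor-dependent structure is the same.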

13.
Summary This paper introduces a Bayesian nonparametric estimator for an unknown distribution function based on left-censored observations. Hjort (1990) and Lo (1993) introduced Bayesian nonparametric estimators derived from beta and beta-neutral processes, which allow for right censoring. These processes are taken as priors from the class of neutral-to-the-right processes (Doksum, 1974). The Kaplan-Meier nonparametric product-limit estimator can be obtained from these Bayesian nonparametric estimators in the limiting case of a vague prior. The present paper introduces what can be seen as the corresponding left beta/beta-neutral process prior, which allows for left censoring. In the limiting case, the Bayesian nonparametric estimator yields the corresponding product-limit estimator based on left-censored data.
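The classical limiting case mentioned above, the Kaplan-Meier product-limit estimator for right-censored data, can be sketched as follows (a plain frequentist implementation, not the beta-process machinery of the paper):

```python
def kaplan_meier(times, events):
    """Product-limit estimator of the survival function S(t) from
    right-censored data; events[i] = 1 for an observed failure,
    0 for a censored observation. Returns (time, S(t)) at failure times."""
    data = sorted(zip(times, events))
    n = len(data)
    at_risk = n
    surv, curve = 1.0, []
    i = 0
    while i < n:
        t = data[i][0]
        d = r = 0                      # failures and removals at time t
        while i < n and data[i][0] == t:
            d += data[i][1]
            r += 1
            i += 1
        if d:
            surv *= 1 - d / at_risk    # product-limit update
            curve.append((t, surv))
        at_risk -= r
    return curve

curve = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])
```

The Bayesian estimators of Hjort and Lo reduce to this curve as the prior becomes vague; the paper's contribution is the mirror-image construction for left censoring.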

14.
In this article, Bayesian inference for the half-normal and half-t distributions using uninformative priors is considered. It is shown that exact Bayesian inference can be undertaken for the half-normal distribution without the need for Gibbs sampling. Simulation is then used to compare the sampling properties of Bayesian point and interval estimators with those of their maximum likelihood based counterparts. Inference for the half-t distribution based on the use of Gibbs sampling is outlined, and an approach to model comparison based on the use of Bayes factors is discussed. The fitting of the half-normal and half-t models is illustrated using real data on the body fat measurements of elite athletes.
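The claim that exact inference needs no Gibbs sampling can be illustrated: under the scale-invariant prior p(σ²) ∝ 1/σ² (my choice of uninformative prior; the paper's may differ), the posterior of σ² for half-normal data is inverse gamma and can be sampled directly.

```python
import random

def sample_sigma2_posterior(data, draws=5000, seed=0):
    """Exact posterior draws of sigma^2 for half-normal data under the
    prior p(sigma^2) ∝ 1/sigma^2:
        sigma^2 | data ~ Inverse-Gamma(n/2, sum(x_i^2)/2).
    Since 1/sigma^2 ~ Gamma(shape=n/2, scale=2/sum(x_i^2)),
    we invert a gamma draw."""
    rng = random.Random(seed)
    n = len(data)
    ss = sum(x * x for x in data)
    return [1.0 / rng.gammavariate(n / 2, 2.0 / ss) for _ in range(draws)]

sig2 = sample_sigma2_posterior([1.0] * 10)   # toy data with sum x^2 = 10
post_mean = sum(sig2) / len(sig2)
```

For these toy data the posterior is Inverse-Gamma(5, 5), whose mean is 5/4, so the Monte Carlo average should sit near 1.25; no Markov chain is involved.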

15.
A Bayesian test for the point null testing problem in the multivariate case is developed. A procedure to get the mixed distribution using the prior density is suggested. For comparisons between the Bayesian and classical approaches, lower bounds on posterior probabilities of the null hypothesis, over some reasonable classes of prior distributions, are computed and compared with the p-value of the classical test. With our procedure, a better approximation is obtained because the p-value is in the range of the Bayesian measures of evidence.

16.
In this article, the problem of parameter estimation and variable selection in the Tobit quantile regression model is considered. A Tobit quantile regression with the elastic net penalty is proposed from a Bayesian perspective. Independent gamma priors are placed on the l1-norm penalty parameters. A novel aspect of the Bayesian elastic net Tobit quantile regression is to treat the hyperparameters of the gamma priors as unknowns and let the data estimate them along with the other parameters. A Bayesian Tobit quantile regression with the adaptive elastic net penalty is also proposed. The Gibbs sampling computational technique is adapted to simulate the parameters from the posterior distributions. The proposed methods are demonstrated with both simulated and real data examples.

17.
We propose a Bayesian nonparametric instrumental variable approach under additive separability that allows us to correct for endogeneity bias in regression models where the covariate effects enter with unknown functional form. Bias correction relies on a simultaneous equations specification with flexible modeling of the joint error distribution implemented via a Dirichlet process mixture prior. Both the structural and the instrumental variable equation are specified in terms of additive predictors comprising penalized splines for nonlinear effects of continuous covariates. Inference is fully Bayesian, employing efficient Markov chain Monte Carlo simulation techniques. The resulting posterior samples not only provide point estimates but also allow us to construct simultaneous credible bands for the nonparametric effects, including data-driven smoothing parameter selection. In addition, improved robustness properties are achieved due to the flexible error distribution specification. Both these features are challenging in the classical framework, making the Bayesian one advantageous. In simulations we investigate small-sample properties, and an investigation of the effect of class size on student performance in Israel illustrates the proposed approach, which is implemented in the R package bayesIV. Supplementary materials for this article are available online.

18.
Modelling of HIV dynamics in AIDS research has greatly improved our understanding of the pathogenesis of HIV-1 infection and has guided the treatment of AIDS patients and the evaluation of antiretroviral therapies. Some of the model parameters have practical meanings with prior knowledge available, but others might not. Incorporating priors can improve the statistical inference. Although there are extensive Bayesian and frequentist estimation methods for viral dynamic models, little work has been done on making simultaneous inference about the Bayesian and frequentist parameters. In this article, we propose a hybrid Bayesian inference approach for viral dynamic nonlinear mixed-effects models using the Bayesian-frequentist hybrid theory developed in Yuan [Bayesian frequentist hybrid inference, Ann. Statist. 37 (2009), pp. 2458–2501]. Compared with frequentist inference in a real example and two simulation examples, the hybrid Bayesian approach improves inference accuracy without compromising the computational load.

19.
Modelling time-varying and frequency-specific relationships between two brain signals is becoming an essential methodological tool for answering theoretical questions in experimental neuroscience. In this article, we propose to estimate a frequency Granger causality statistic that may vary in time in order to evaluate the functional connections between two brain regions during a task. For that purpose we use an adaptive Kalman-filter-type estimator of a linear Gaussian vector autoregressive model with coefficients evolving over time. The estimation procedure is achieved through variational Bayesian approximation and is extended to multiple trials. This Bayesian State Space (BSS) model provides a dynamical Granger causality statistic that is quite natural. We propose to extend the BSS model to include the à trous Haar decomposition. This wavelet-based forecasting method is based on a multiscale resolution decomposition of the signal using the redundant à trous wavelet transform and allows us to capture short- and long-range dependencies between signals. Equally importantly, it allows us to derive the desired dynamical and frequency-specific Granger causality statistic. The application of these models to intracranial local field potential data recorded during a psychological experimental task reveals the complex frequency-based cross-talk between the amygdala and the medial orbito-frontal cortex.
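A scalar version of the idea, a Kalman filter tracking a time-varying AR(1) coefficient under a random-walk state equation, can be sketched as follows; the paper's vector model, variational Bayes updates and wavelet decomposition are not reproduced here.

```python
import random

def tv_ar1_kalman(y, q=0.01, r=1.0):
    """Scalar Kalman filter for a time-varying AR(1) coefficient:
        state:       a_t = a_{t-1} + w_t,          w_t ~ N(0, q)
        observation: y_t = a_t * y_{t-1} + v_t,    v_t ~ N(0, r)
    Returns the filtered path of a_t."""
    a, p = 0.0, 1.0                 # state mean and variance
    path = []
    for t in range(1, len(y)):
        p += q                       # predict step (random-walk state)
        h = y[t - 1]                 # time-varying "design" value
        s = h * h * p + r            # innovation variance
        k = p * h / s                # Kalman gain
        a += k * (y[t] - h * a)      # update mean with the innovation
        p *= 1 - k * h               # update variance
        path.append(a)
    return path

# Simulated check: data generated from a constant a = 0.8
rng = random.Random(0)
y = [0.0]
for _ in range(500):
    y.append(0.8 * y[-1] + rng.gauss(0, 1))
path = tv_ar1_kalman(y, q=1e-4, r=1.0)
```

With a small state-noise variance q the filter behaves like recursive least squares and the coefficient estimate settles near the true 0.8; larger q lets the estimate track genuine changes in the coefficient over the trial.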

20.
In this paper, we consider the Bayesian analysis of binary time series with different priors, namely normal, Student's t, and Jeffreys priors, and compare the results with frequentist methods through simulation experiments and one real data set on daily rainfall in inches at Mount Washington, NH. Among the Bayesian methods, our results show that the Jeffreys prior performs better in most situations for both the simulation and the rainfall data. Furthermore, among the weakly informative priors considered, the Student's t prior with 7 degrees of freedom fits the data most adequately.
