Similar Articles
20 similar articles found.
1.
This study investigates the Bayesian approach to the analysis of paired responses when the responses are categorical. Using resampling and analytical procedures, inferences for homogeneity and agreement are developed. The posterior analysis is based on the Dirichlet distribution, from which repeated samples can be generated with a random number generator. Resampling and analytical techniques are employed to make Bayesian inferences, and when it is not appropriate to use analytical procedures, resampling techniques are easily implemented. The Bayesian methodology is illustrated with several examples, and the results show that these are exact small-sample procedures that can easily solve inference problems for matched designs.
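The Dirichlet resampling step described above can be sketched in a few lines. This is a hedged illustration, not the paper's code: the 2x2 matched-pairs counts, the uniform Dirichlet(1,1,1,1) prior, and the particular summaries are all invented for the example.

```python
import numpy as np

# Hypothetical 2x2 matched-pairs table (counts invented for illustration),
# flattened in cell order (n11, n12, n21, n22)
counts = np.array([45, 12, 5, 38])

rng = np.random.default_rng(0)
# Under a uniform Dirichlet(1,1,1,1) prior, the posterior of the cell
# probabilities is Dirichlet(counts + 1); sample it directly
post = rng.dirichlet(counts + 1, size=20000)

# Marginal homogeneity holds iff p12 = p21
delta = post[:, 1] - post[:, 2]
prob_positive = (delta > 0).mean()          # posterior P(p12 > p21)
ci = np.percentile(delta, [2.5, 97.5])      # 95% credible interval for delta

# A simple agreement summary: posterior of P(both responses agree)
agreement = post[:, 0] + post[:, 3]
```

Because the Dirichlet posterior can be sampled directly, the draws are independent, so no MCMC convergence checks are needed.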

2.
In hypothesis testing involving censored lifetime data that are independently distributed according to an accelerated-failure-time model, it is often of interest to predict whether continuation of the experiment will significantly alter the inferences drawn at an interim point. Approaching the problem from a Bayesian viewpoint, we suggest a possible solution based on Laplace approximations to the posterior distribution of the parameters of interest and on Markov-chain Monte Carlo. We apply our results to Weibull data from a carcinogenesis experiment on mice.
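The flavor of the Laplace approximation can be shown on a deliberately simpler model than the paper's Weibull setting: exponential lifetimes with censoring, a flat prior on the log-rate, and invented totals d and T. The approximation replaces the posterior by a normal density at the mode, and its normalizing constant can be checked against the exact Gamma integral.

```python
import math

# Censored-exponential sketch (an assumption for illustration, not the
# paper's Weibull model): d uncensored failures, total time on test T,
# flat prior on theta = log(rate)
d, T = 40, 125.0

# Log posterior kernel: l(theta) = d*theta - T*exp(theta)
theta_hat = math.log(d / T)                  # posterior mode, from l'(theta) = 0
neg_hessian = T * math.exp(theta_hat)        # -l''(theta_hat); equals d here
laplace_sd = 1.0 / math.sqrt(neg_hessian)    # approximate posterior sd of theta

# Laplace estimate of the log normalizing constant vs the exact Gamma integral
log_Z_laplace = (d * theta_hat - T * math.exp(theta_hat)
                 + 0.5 * math.log(2 * math.pi / neg_hessian))
log_Z_exact = math.lgamma(d) - d * math.log(T)
log_err = abs(log_Z_laplace - log_Z_exact)   # should be of order 1/(12*d)
```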

3.
In statistical practice, inferences on standardized regression coefficients are often required, but complicated by the fact that they are nonlinear functions of the parameters, and thus standard textbook results are simply wrong. Within the frequentist domain, asymptotic delta methods can be used to construct confidence intervals of the standardized coefficients with proper coverage probabilities. Alternatively, Bayesian methods solve similar and other inferential problems by simulating data from the posterior distribution of the coefficients. In this paper, we present Bayesian procedures that provide comprehensive solutions for inferences on the standardized coefficients. Simple computing algorithms are developed to generate posterior samples with no autocorrelation and based on both noninformative improper and informative proper prior distributions. Simulation studies show that Bayesian credible intervals constructed by our approaches have comparable or even better statistical properties than their frequentist counterparts, particularly in the presence of collinearity. In addition, our approaches solve some meaningful inferential problems that are difficult if not impossible from the frequentist standpoint, including identifying joint rankings of multiple standardized coefficients and making optimal decisions concerning their sizes and comparisons. We illustrate applications of our approaches through examples and make sample R functions available for implementing our proposed methods.
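The direct-simulation idea can be sketched under the standard noninformative prior p(beta, sigma^2) proportional to 1/sigma^2. The data, sample sizes, and summaries below are invented, and the paper's algorithms and priors may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated data (illustrative only): two correlated predictors
n = 200
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(scale=0.7, size=n)
y = 1.0 + 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
p = X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
s2 = resid @ resid / (n - p)

# Direct (i.i.d., no autocorrelation) posterior draws:
#   sigma^2 | y ~ Inv-chi2(n-p, s2);  beta | sigma^2, y ~ N(beta_hat, sigma^2 (X'X)^-1)
m = 10000
sigma2 = (n - p) * s2 / rng.chisquare(n - p, size=m)
L = np.linalg.cholesky(XtX_inv)
z = rng.normal(size=(m, p))
beta = beta_hat + np.sqrt(sigma2)[:, None] * (z @ L.T)

# Standardized coefficients: beta_j * sd(x_j) / sd(y)
sx = np.array([x1.std(ddof=1), x2.std(ddof=1)])
sy = y.std(ddof=1)
std_beta = beta[:, 1:] * sx / sy
ci = np.percentile(std_beta, [2.5, 97.5], axis=0)

# A joint inference that is awkward for frequentist methods: the posterior
# probability that the first standardized coefficient dominates the second
prob_b1_larger = (np.abs(std_beta[:, 0]) > np.abs(std_beta[:, 1])).mean()
```

Because sigma^2 and beta are drawn from their exact posteriors, the samples are i.i.d., matching the abstract's "no autocorrelation" claim.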

4.
This paper provides a new method and algorithm for making inferences about the parameters of a two-level multivariate normal hierarchical model. One has observed J p-dimensional vector outcomes, distributed at level 1 as multivariate normal with unknown mean vectors and with known covariance matrices. At level 2, the unknown mean vectors also have normal distributions, with common unknown covariance matrix A and with means depending on known covariates and on unknown regression coefficients. The algorithm samples independently from the marginal posterior distribution of A by using rejection procedures. Functions such as posterior means and covariances of the level 1 mean vectors and of the level 2 regression coefficients are estimated by averaging over posterior values calculated conditionally on each value of A drawn. This estimation accounts for the uncertainty in A, unlike standard restricted maximum likelihood empirical Bayes procedures. It is based on independent draws from the exact posterior distributions, unlike Gibbs sampling. The procedure is demonstrated for profiling hospitals based on patients' responses concerning p = 2 types of problems (non-surgical and surgical). The frequency operating characteristics of the rule corresponding to a particular vague multivariate prior distribution are shown via simulation to achieve their nominal values in that setting.

5.
The paper proposes a Bayesian quantile regression method for hierarchical linear models. Existing approaches to hierarchical linear quantile regression are scarce, and most do not take a Bayesian perspective, which is important for hierarchical models. In this paper, based on Bayesian theory and Markov chain Monte Carlo methods, we introduce asymmetric-Laplace-distributed errors to simulate the joint posterior distributions of the population parameters and the across-unit parameters, and then derive their posterior quantile inferences. We run a simulation of the proposed method to examine the effects on the parameters induced by units and quantile levels; the method is also applied to study the relationship between Chinese rural residents' annual family income and their cultivated areas. Both the simulation and the real data analysis indicate that the method is effective and accurate.
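The asymmetric-Laplace working likelihood can be sketched on a single-level model (the paper's model is hierarchical, and it presumably samples the scale too; here the scale is fixed and a random-walk Metropolis sampler stands in for the paper's MCMC scheme; all data are invented).

```python
import numpy as np

rng = np.random.default_rng(2)
tau = 0.5                                   # quantile level of interest
# Illustrative single-level data
n = 150
x = rng.uniform(0, 2, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)
X = np.column_stack([np.ones(n), x])

def check_loss(u, tau):
    # quantile check loss rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

def log_post(beta, sigma=0.5):
    # Asymmetric-Laplace working log-likelihood with a flat prior on beta
    return -check_loss(y - X @ beta, tau).sum() / sigma

# Random-walk Metropolis over the regression coefficients
beta = np.zeros(2)
lp = log_post(beta)
draws = []
for _ in range(6000):
    prop = beta + rng.normal(scale=0.08, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    draws.append(beta.copy())
draws = np.array(draws[1000:])              # drop burn-in
post_mean = draws.mean(axis=0)              # should sit near the true median line
```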

6.
It has long been asserted that in univariate location-scale models, when concerned with inference for either the location or scale parameter, the use of the inverse of the scale parameter as a Bayesian prior yields posterior credible sets that have exactly the correct frequentist confidence set interpretation. This claim dates back at least to Peers, and has subsequently been noted by various authors, with varying degrees of justification. We present a simple, direct demonstration of the exact matching property of the posterior credible sets derived under use of this prior in the univariate location-scale model. This is done by establishing an equivalence between the conditional frequentist and posterior densities of the pivotal quantities on which conditional frequentist inferences are based.
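The exact-matching claim can be checked numerically in the normal case: under the prior proportional to 1/sigma, the posterior credible interval for the location parameter should have exactly nominal frequentist coverage. This is a simulation sketch with invented parameter values, not the paper's analytical argument.

```python
import numpy as np

rng = np.random.default_rng(3)
# Build the 95% posterior credible interval for mu under pi(mu, sigma) ~ 1/sigma
# and record its frequentist coverage over repeated samples
n, mu_true, sigma_true = 10, 5.0, 2.0
reps, m = 2000, 4000                        # data replications, posterior draws each
hits = 0
for _ in range(reps):
    y = rng.normal(mu_true, sigma_true, size=n)
    ybar, s2 = y.mean(), y.var(ddof=1)
    # exact posterior: sigma^2 | y ~ Inv-chi2(n-1, s2); mu | sigma^2, y ~ N(ybar, sigma^2/n)
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=m)
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    lo, hi = np.percentile(mu, [2.5, 97.5])
    hits += (lo <= mu_true <= hi)
coverage = hits / reps                      # should be very close to 0.95
```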

7.
This paper considers the problem of making statistical inferences about a parameter when a narrow interval centred at a given value of the parameter is considered special, which is interpreted as meaning that there is a substantial degree of prior belief that the true value of the parameter lies in this interval. A clear justification of the practical importance of this problem is provided. The main difficulty with the standard Bayesian solution to this problem is discussed and, as a result, a pseudo-Bayesian solution is put forward based on determining lower limits for the posterior probability of the parameter lying in the special interval by means of a sensitivity analysis. Since it is not assumed that prior beliefs necessarily need to be expressed in terms of prior probabilities, nor that post-data probabilities must be Bayesian posterior probabilities, hybrid methods of inference are also proposed that are based on specific ways of measuring and interpreting the classical concept of significance. The various methods that are outlined are compared and contrasted at both a foundational level, and from a practical viewpoint by applying them to real data from meta-analyses that appeared in a well-known medical article.
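The sensitivity-analysis idea can be sketched with a standard robust-Bayes calculation: approximate the special interval by a point mass with prior weight eps, spread the rest as a normal, and scan eps to get lower limits on the posterior probability. The z-statistic, spread prior, and grid of eps values are all assumptions for this sketch.

```python
import math

# With prior mass eps on the special interval (approximated by a point mass at
# theta0) and the rest spread as N(theta0, tau^2), the posterior probability of
# the interval given an observed z-statistic is
#   P = eps * L0 / (eps * L0 + (1 - eps) * L1),
# where L0 is the likelihood at theta0 and L1 the marginal likelihood under the
# spread component. Scanning eps gives a sensitivity band with a lower limit.
def posterior_prob(z, eps, tau=2.0):
    L0 = math.exp(-0.5 * z * z)                    # N(0,1) likelihood at theta0
    v = 1.0 + tau * tau                            # marginal variance under spread prior
    L1 = math.exp(-0.5 * z * z / v) / math.sqrt(v)
    return eps * L0 / (eps * L0 + (1 - eps) * L1)

z_obs = 1.96                                       # "just significant" at the 5% level
lower = min(posterior_prob(z_obs, e) for e in [0.1, 0.25, 0.5, 0.75])
p_half = posterior_prob(z_obs, 0.5)                # roughly 0.32, far above 0.05
```

The familiar contrast with the p-value (about 0.32 posterior probability at z = 1.96 when eps = 0.5) is one reason such lower limits are informative.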

8.
In the problem of parametric statistical inference with a finite parameter space, we propose some simple rules for defining posterior upper and lower probabilities directly from the observed likelihood function, without using any prior information. The rules satisfy the likelihood principle and a basic consistency principle ('avoiding sure loss'), they produce vacuous inferences when the likelihood function is constant, and they have other symmetry, monotonicity and continuity properties. One of the rules also satisfies fundamental frequentist principles. The rules can be used to eliminate nuisance parameters, to interpret the likelihood function, and to use it in making decisions. To compare the rules, they are applied to the problem of sampling from a finite population. Our results indicate that there are objective statistical methods which can reconcile three general approaches to statistical inference: likelihood inference, coherent inference and frequentist inference.
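One natural rule of this kind takes the normalized likelihood as an upper probability: upper(A) is the maximum relative likelihood over A, and lower(A) is its conjugate. The finite parameter grid and binomial data below are invented for illustration, and this is only one candidate rule, not necessarily the paper's preferred one.

```python
import numpy as np

# Finite parameter space: a grid of binomial success probabilities
thetas = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
n_trials, successes = 20, 14

lik = thetas**successes * (1 - thetas)**(n_trials - successes)
norm_lik = lik / lik.max()                  # relative likelihood, max = 1

def upper(indices):
    # upper probability of the event {theta in thetas[indices]}
    return norm_lik[list(indices)].max()

def lower(indices):
    # conjugate lower probability: 1 - upper(complement)
    complement = [i for i in range(len(thetas)) if i not in indices]
    return 1.0 - upper(complement) if complement else 1.0

A = [3, 4]                                  # event: theta in {0.7, 0.9}
up, lo = upper(A), lower(A)                 # lo <= up always; vacuous if lik constant
```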

9.
Bayesian Inference Under Partial Prior Information
Partial prior information on the marginal distribution of an observable random variable is considered. When this information is incorporated into the statistical analysis of an assumed parametric model, the posterior inference is typically non-robust, so that no inferential conclusion is obtained. To overcome this difficulty, a method based on the standard default prior associated with the model and an intrinsic procedure is proposed. Posterior robustness of the resulting inferences is analysed and some illustrative examples are provided.

10.
Bayesian analysis often requires the researcher to employ Markov chain Monte Carlo (MCMC) techniques to draw samples from a posterior distribution, which in turn are used to make inferences. Several approaches have been developed to determine convergence of the chain, as well as the sensitivity of the resulting inferences. This work develops a Hellinger-distance approach to MCMC diagnostics. An approximation to the Hellinger distance between two distributions f and g based on sampling is introduced. This approximation is studied via simulation to determine its accuracy. A criterion for using this Hellinger distance to determine chain convergence is proposed, as well as a criterion for sensitivity studies. These criteria are illustrated using a dataset concerning Anguilla australis, an eel native to New Zealand.
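A histogram-based approximation is one simple way to estimate a Hellinger distance from samples, and it can serve as a convergence heuristic by comparing the two halves of a chain. This is a stand-in for the paper's sampling-based approximation; the bin count, chain, and thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def hellinger(x, y, bins=50):
    # Histogram approximation of the Hellinger distance between the
    # distributions that generated samples x and y:
    # H^2 = 1 - sum_i sqrt(p_i * q_i) over shared bins
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p, _ = np.histogram(x, bins=bins, range=(lo, hi))
    q, _ = np.histogram(y, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    return np.sqrt(max(0.0, 1.0 - np.sqrt(p * q).sum()))

# Convergence heuristic: compare the two halves of a (mock) chain; a small
# distance suggests the chain is sampling from a stable distribution
chain = rng.normal(size=10000)              # stand-in for a converged MCMC chain
d_same = hellinger(chain[:5000], chain[5000:])
d_diff = hellinger(chain, rng.normal(loc=3.0, size=10000))
```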

11.
In a previous study, the effect of rounding on classical statistical techniques was considered. Here, we consider how rounded data may affect the posterior distribution and, thus, any Bayesian inferences made. The results in this paper indicate that Bayesian inferences can be sensitive to the rounding process.
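A toy illustration of the sensitivity (all values invented): for a normal mean with known sigma and a flat prior, rounding on a grid much coarser than sigma can shift the naively computed posterior mean by several posterior standard deviations.

```python
import numpy as np

rng = np.random.default_rng(5)
# Normal mean, known sigma, flat prior: posterior is N(ybar, sigma^2/n).
# Compare the posterior computed from exact data with the one computed
# naively from coarsely rounded data.
sigma, n = 0.2, 25
y = rng.normal(loc=2.17, scale=sigma, size=n)
y_rounded = np.round(y)                      # rounding grid (1.0) much coarser than sigma

post_sd = sigma / np.sqrt(n)                 # posterior sd of the mean, either way
mean_exact = y.mean()                        # posterior mean from exact data
mean_rounded = y_rounded.mean()              # posterior mean from rounded data
shift_in_sds = abs(mean_exact - mean_rounded) / post_sd
```

A principled treatment would use the interval-censored likelihood for the rounded observations rather than plugging in the rounded values.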

12.
To analyse the risk factors of coronary heart disease (CHD), we apply the Bayesian model averaging approach that formalizes the model selection process and deals with model uncertainty in a discrete-time survival model to the data from the Framingham Heart Study. We also use the Alternating Conditional Expectation algorithm to transform the risk factors, such that their relationships with CHD are best described, overcoming the problem of coding such variables subjectively. For the Framingham Study, the Bayesian model averaging approach, which makes inferences about the effects of covariates on CHD based on an average of the posterior distributions of the set of identified models, outperforms the stepwise method in predictive performance. We also show that age, cholesterol, and smoking are nonlinearly associated with the occurrence of CHD and that P-values from models selected from stepwise methods tend to overestimate the evidence for the predictive value of a risk factor and ignore model uncertainty.
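The model-averaging mechanics can be sketched with BIC-approximated posterior model weights. This is hedged: the paper works with a discrete-time survival model on the Framingham data, while the toy below uses a linear model with two invented "risk factors", one of which is pure noise.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
x1 = rng.normal(size=n)                      # a genuine risk factor
x2 = rng.normal(size=n)                      # a noise "risk factor"
y = 0.8 * x1 + rng.normal(size=n)

def fit_bic(cols):
    # OLS fit of y on an intercept plus the given columns, returning the BIC
    X = np.column_stack([np.ones(n)] + list(cols))
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = ((y - X @ beta) ** 2).sum()
    return n * np.log(rss / n) + X.shape[1] * np.log(n)

models = [(), (x1,), (x2,), (x1, x2)]
bics = np.array([fit_bic(m) for m in models])
w = np.exp(-0.5 * (bics - bics.min()))       # approximate posterior model weights
w = w / w.sum()

# Posterior inclusion probabilities, averaging over all four models
incl_x1 = w[1] + w[3]                        # should be near 1
incl_x2 = w[2] + w[3]                        # should be modest
```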

13.
A new method is proposed for drawing coherent statistical inferences about a real-valued parameter in problems where there is little or no prior information. Prior ignorance about the parameter is modelled by the set of all continuous probability density functions for which the derivative of the log-density is bounded by a positive constant. This set is translation-invariant, it contains density functions with a wide variety of shapes and tail behaviour, and it generates prior probabilities that are highly imprecise. Statistical inferences can be calculated by solving a simple type of optimal control problem whose general solution is characterized. Detailed results are given for the problems of calculating posterior upper and lower means, variances, distribution functions and probabilities of intervals. In general, posterior upper and lower expectations are achieved by prior density functions that are piecewise exponential. The results are illustrated by normal and binomial examples.

14.
Both philosophically and in practice, statistics is dominated by frequentist and Bayesian thinking. Under those paradigms, our courses and textbooks talk about the accuracy with which true model parameters are estimated or the posterior probability that they lie in a given set. In nonparametric problems, they talk about convergence to the true function (density, regression, etc.) or the probability that the true function lies in a given set. But the usual paradigms' focus on learning the true model and parameters can distract the analyst from another important task: discovering whether there are many sets of models and parameters that describe the data reasonably well. When we discover many good models we can see in what ways they agree. Points of agreement give us more confidence in our inferences, but points of disagreement give us less. Further, the usual paradigms' focus seduces us into judging and adopting procedures according to how well they learn the true values. An alternative is to judge models and parameter values, not procedures, and judge them by how well they describe data, not how close they come to the truth. The latter is especially appealing in problems without a true model.

15.
16.
It is shown how various exact non-parametric inferences based on order statistics in one or two random samples can be generalized to situations with progressive type-II censoring, which is a kind of evolutionary right censoring. Ordinary type-II right censoring is a special case of such progressive censoring. These inferences include confidence intervals for a given parent quantile, prediction intervals for a given order statistic of a future sample, and related two-sample inferences based on exceedance probabilities. The proposed inferences are valid for any parent distribution with continuous distribution function. The key result is that each observable uncensored order statistic that becomes available with progressive type-II censoring can be represented as a mixture, with known weights, of underlying ordinary order statistics. The importance of this mixture representation lies in the fact that various properties of such observable order statistics can be deduced immediately from well-known properties of ordinary order statistics.
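Progressively type-II censored uniform order statistics can be simulated with a standard product-of-powers construction, which makes the connection to ordinary order statistics easy to check numerically. The censoring scheme below is invented, and this sketches the simulation algorithm rather than the paper's mixture-weight derivations.

```python
import numpy as np

rng = np.random.default_rng(7)

def progressive_type2_uniform(R, rng):
    # Simulate one progressively type-II censored sample of U(0,1) order
    # statistics for scheme R, where R[i-1] surviving items are withdrawn
    # at the i-th observed failure (a standard algorithm; sketch only)
    m = len(R)
    W = rng.uniform(size=m)
    gammas = np.array([i + sum(R[m - i:]) for i in range(1, m + 1)])
    V = W ** (1.0 / gammas)
    return 1.0 - np.cumprod(V[::-1])        # U_{1:m:n} <= ... <= U_{m:m:n}

R = [2, 0, 3]                               # m = 3 observed out of n = 8 on test
n_items = len(R) + sum(R)
sims = np.array([progressive_type2_uniform(R, rng) for _ in range(20000)])

# Sanity check: mapping to Exp(1) times, the first observed failure is the
# minimum of all n items, so its mean is 1/n regardless of the scheme
x = -np.log(1.0 - sims)
first_mean = x[:, 0].mean()
```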

17.
The authors explore likelihood-based methods for making inferences about the components of variance in a general normal mixed linear model. In particular, they use local asymptotic approximations to construct confidence intervals for the components of variance when the components are close to the boundary of the parameter space. In the process, they explore the question of how to profile the restricted likelihood (REML). Also, they show that general REML estimates are less likely to fall on the boundary of the parameter space than maximum-likelihood estimates and that the likelihood-ratio test based on the local asymptotic approximation has higher power than the likelihood-ratio test based on the usual chi-squared approximation. They examine the finite-sample properties of the proposed intervals by means of a simulation study.
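The boundary claim can be illustrated in the simplest special case, a balanced one-way random-effects model, where ML and REML estimates of the between-group variance have closed forms. The parameter values are invented, and this toy case stands in for the paper's general mixed-model results.

```python
import numpy as np

rng = np.random.default_rng(10)
# Balanced one-way random effects: y_ij = mu + a_i + e_ij. With a small
# between-group variance, boundary estimates (variance estimate <= 0) are
# common, and ML should hit the boundary more often than REML.
k, m = 8, 5                                  # groups, observations per group
sig_a2, sig_e2 = 0.05, 1.0
reps = 5000
ml_boundary = reml_boundary = 0
for _ in range(reps):
    a = rng.normal(scale=np.sqrt(sig_a2), size=k)
    y = a[:, None] + rng.normal(scale=np.sqrt(sig_e2), size=(k, m))
    msb = m * ((y.mean(axis=1) - y.mean()) ** 2).sum() / (k - 1)
    msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (m - 1))
    reml_est = (msb - msw) / m               # REML solution before truncation at 0
    ml_est = ((1 - 1 / k) * msb - msw) / m   # ML solution before truncation at 0
    reml_boundary += reml_est <= 0
    ml_boundary += ml_est <= 0
frac_ml = ml_boundary / reps
frac_reml = reml_boundary / reps             # frac_reml <= frac_ml by construction
```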

18.
In finance, inferences about future asset returns are typically quantified with the use of parametric distributions and single-valued probabilities. It is attractive to use less restrictive inferential methods, including nonparametric methods which do not require distributional assumptions about variables, and imprecise probability methods which generalize the classical concept of probability to set-valued quantities. Main attractions include the flexibility of the inferences to adapt to the available data and that the level of imprecision in inferences can reflect the amount of data on which these are based. This paper introduces nonparametric predictive inference (NPI) for stock returns. NPI is a statistical approach based on few assumptions, with inferences strongly based on data and with uncertainty quantified via lower and upper probabilities. NPI is presented for inference about future stock returns, as a measure for risk and uncertainty, and for pairwise comparison of two stocks based on their future aggregate returns. The proposed NPI methods are illustrated using historical stock market data.
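The basic NPI calculation rests on Hill's assumption A(n): the next observation falls in each of the n+1 intervals created by the n observed values with probability 1/(n+1), which yields lower and upper probabilities for events about the next return. The mock return series and interval below are invented; the paper's aggregate-return comparisons go beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(8)
returns = np.sort(rng.normal(loc=0.0005, scale=0.01, size=250))  # mock daily returns
n = len(returns)

def npi_bounds(a, b, data):
    # Lower/upper probability that the next observation lies in (a, b):
    # count intervals between consecutive order statistics that are fully
    # contained (lower) versus merely intersecting (upper)
    n = len(data)
    edges = np.concatenate(([-np.inf], data, [np.inf]))
    contained = intersecting = 0
    for i in range(n + 1):
        lo, hi = edges[i], edges[i + 1]
        if a <= lo and hi <= b:
            contained += 1
        if hi > a and lo < b:
            intersecting += 1
    return contained / (n + 1), intersecting / (n + 1)

low, up = npi_bounds(-0.01, 0.01, returns)   # chance of a "moderate" day next
```

The gap up - low is at most 2/(n+1), so the imprecision shrinks as more data accumulate, which is the behaviour the abstract highlights.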

19.
Missing data, a common but challenging issue in most studies, may lead to biased and inefficient inferences if handled inappropriately. As a natural and powerful way of dealing with missing data, the Bayesian approach has received much attention in the literature. This paper reviews recent developments and applications of Bayesian methods for dealing with ignorable and non-ignorable missing data. We first introduce missing-data mechanisms and the Bayesian framework for dealing with missing data, and then introduce models for ignorable and non-ignorable missing data based on the literature. After that, important issues of Bayesian inference, including prior construction, posterior computation, model comparison and sensitivity analysis, are discussed. Finally, several open issues that deserve further research are summarized.

20.
Hybrid censoring is a mixture of the Type-I and Type-II censoring schemes. This article presents statistical inferences on the Weibull parameters when the data are hybrid censored. The maximum likelihood estimators (MLEs) and approximate maximum likelihood estimators are developed for estimating the unknown parameters. Asymptotic distributions of the MLEs are used to construct approximate confidence intervals. Bayes estimates and the corresponding highest posterior density credible intervals of the unknown parameters are obtained under suitable priors and using the Gibbs sampling procedure. A method for obtaining the optimum censoring scheme based on the maximum information measure is also developed. Monte Carlo simulations are performed to compare the performances of the different methods, and one data set is analyzed for illustrative purposes.
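The likelihood setup can be sketched directly: under hybrid censoring the test stops at min(T, time of the r-th failure), exact failures contribute density terms, and survivors contribute a survival term at the stopping time. All design values are invented, and a generic Nelder-Mead optimizer stands in for the paper's estimation procedures.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
# Hybrid censoring: n items on test, stop at min(T, r-th failure time)
shape_true, scale_true = 1.5, 2.0
n, r, T = 60, 45, 2.5
t = np.sort(rng.weibull(shape_true, size=n) * scale_true)
stop = min(T, t[r - 1])                      # hybrid stopping time
observed = t[t <= stop]                      # exact failure times
n_cens = n - len(observed)                   # survivors, censored at `stop`

def neg_loglik(params):
    k, lam = params                          # Weibull shape and scale
    if k <= 0 or lam <= 0:
        return np.inf
    z = observed / lam
    # density terms: log f(t) = log(k/lam) + (k-1) log(t/lam) - (t/lam)^k
    ll = len(observed) * np.log(k / lam) + (k - 1) * np.log(z).sum() - (z**k).sum()
    # censored items contribute log S(stop) = -(stop/lam)^k each
    ll -= n_cens * (stop / lam) ** k
    return -ll

mle = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead").x
```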
