Similar Documents
20 similar documents found.
1.
A major recent development in statistics has been the use of fast computational methods of Markov chain Monte Carlo. These procedures allow Bayesian methods to be used in quite complex modelling situations. In this paper, we shall use a range of real data examples involving lapwings, shags, teal, dippers, and herring gulls, to illustrate the power and range of Bayesian techniques. The topics include: prior sensitivity; the use of reversible-jump MCMC for constructing model probabilities and comparing models, with particular reference to models with random effects; model-averaging; and the construction of Bayesian measures of goodness-of-fit. Throughout, there will be discussion of the practical aspects of the work, for instance explaining when and when not to use the BUGS package.

2.
Missing data, a common but challenging issue in most studies, may lead to biased and inefficient inferences if handled inappropriately. As a natural and powerful way of dealing with missing data, the Bayesian approach has received much attention in the literature. This paper reviews recent developments and applications of Bayesian methods for dealing with ignorable and non-ignorable missing data. We first introduce missing data mechanisms and the Bayesian framework for dealing with missing data, and then introduce missing data models under ignorable and non-ignorable missingness based on the literature. After that, important issues of Bayesian inference, including prior construction, posterior computation, model comparison and sensitivity analysis, are discussed. Finally, several issues that deserve further research are summarized.

3.
This article considers Bayesian estimation methods for categorical data with misclassification. To adjust for misclassification, double sampling schemes are utilized. Observations are represented in a contingency table categorized by error-free categorical variables and error-prone categorical variables. Posterior means of the cell probabilities are considered as estimates. In some cases, the posterior means can be calculated exactly. In other cases, the exact calculation may be too difficult to perform, but the expectation-maximization (EM) algorithm can easily be used to obtain approximate posterior means.
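As a hedged illustration of the kind of computation involved (not the paper's exact scheme), the sketch below approximates the posterior mean of a misclassified binary proportion under a double-sampling design by Gibbs-style data augmentation rather than by the exact or EM calculation; the counts, the Beta(1,1) priors and all variable names are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Double sampling, binary case: a validation subsample observes the true class X
# and the error-prone class Y; the main sample observes Y only (invented counts).
n_val = np.array([[40, 10],    # rows: X = 0, 1; columns: Y = 0, 1
                  [5, 45]])
m_y = np.array([300, 200])     # main-sample counts of Y = 0 and Y = 1

pi, sens, fpr = 0.5, 0.8, 0.2  # initial values for P(X=1), P(Y=1|X=1), P(Y=1|X=0)
draws = []
for it in range(4000):
    # Impute the true class for main-sample units given current parameters.
    p1_given_y1 = pi * sens / (pi * sens + (1 - pi) * fpr)
    p1_given_y0 = pi * (1 - sens) / (pi * (1 - sens) + (1 - pi) * (1 - fpr))
    x1_y1 = rng.binomial(m_y[1], p1_given_y1)   # imputed X = 1 among Y = 1
    x1_y0 = rng.binomial(m_y[0], p1_given_y0)   # imputed X = 1 among Y = 0
    # Combine validation and imputed counts, then update with Beta(1,1) priors.
    n1 = n_val[1].sum() + x1_y1 + x1_y0
    n0 = n_val[0].sum() + (m_y.sum() - x1_y1 - x1_y0)
    pi = rng.beta(1 + n1, 1 + n0)
    sens = rng.beta(1 + n_val[1, 1] + x1_y1, 1 + n_val[1, 0] + x1_y0)
    fpr = rng.beta(1 + n_val[0, 1] + (m_y[1] - x1_y1), 1 + n_val[0, 0] + (m_y[0] - x1_y0))
    if it >= 1000:
        draws.append(pi)

print("approximate posterior mean of P(X = 1):", np.mean(draws))
```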

4.
5.
This review considers current and potential uses of ring-recovery and mark-recapture methods for conservation-oriented research by European ringing schemes. These schemes are concerned mainly with large-scale studies of the demography and movements of widespread species, much of the data being gathered by volunteers. The data holdings and data-gathering potential of the 33 European ringing schemes are outlined. Over 110 million birds have been ringed in Europe, giving rise to 1.8 million recoveries. Some 64% of these recoveries are held in the computerized EURING data bank. Passerines comprise 43% of all recoveries and only 15% are of waterfowl. Currently, about 4 million birds are ringed each year and 90 000 recoveries are reported. Ringing effort is much higher in northern and western Europe than in southern and eastern Europe. Most schemes have recorded the ringing and recovery details of recovered birds in computer files, but most ringing data are held only on paper. These ringing data must be computerized for rigorous analyses of survival or movements; without such computerization, future widespread ringing will be of little value. Five key areas of conservation-oriented research using data from European ringing schemes are identified: monitoring, investigating the causes of population declines, impacts of hunting, flyway networks and seabird studies. Current work and future research opportunities in these areas are discussed, and conservation priorities are identified. Demographic studies for monitoring and identifying the causes of declines are developing well, and are likely to be enhanced further by more gathering of mark-recapture data from standardized projects (such as constant effort sites), by improved access to computerized ringing data and by the development of more flexible, user-friendly software for ring-recovery analysis. Interest in studies of movements is reviving, with quantitative methods starting to be applied to population turnover at migratory stop-over sites (Jolly-Seber models) and to movements between sites (multi-state models). Future planned ringing studies will be important for testing ideas about the dynamics of meta-populations. European ringing schemes have the opportunity to enhance greatly their contribution to conservation over the next decade. This will require better access to computerized data and the development of more planned, cooperative projects at European and national scales. The close collaboration of biologists and statisticians in the analysis of previously collected data should be extended to the review of existing sampling strategies and to the development of new projects.

6.
An important practical issue in applying heavy-tailed distributions is how to choose the sample fraction or threshold, since only a fraction of the upper order statistics can be employed in the inference. Recently, Guillou & Hall (2001; Journal of the Royal Statistical Society B, 63, 293–305) proposed a simple way to choose the threshold in estimating a tail index. In this article, the author first gives an intuitive explanation of the approach of Guillou & Hall (2001) and then proposes an alternative method, which can be extended to other settings such as extreme value index estimation and tail dependence function estimation. Further, the author proposes to combine this threshold selection method with a bias-reduction estimator to improve the performance of tail index estimation, interval estimation of a tail index, and high quantile estimation. Simulation studies of both point estimation and interval estimation for a tail index show that the two selection procedures are comparable and that bias-reduction estimation with the threshold selected by either method is preferred.
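For context (this is not the Guillou-Hall selection rule itself, which is not reproduced here), the sketch below shows the quantity that any such threshold choice feeds into: the Hill estimate of the tail index computed from the top k order statistics, traced over candidate values of k for simulated Pareto data. The simulation settings and the grid of k values are invented for illustration.

```python
import numpy as np

def hill_estimate(x, k):
    """Hill estimate of the extreme value index gamma from the top k order statistics."""
    xs = np.sort(x)[::-1]                      # descending order statistics
    return np.mean(np.log(xs[:k] / xs[k]))     # (1/k) * sum of log(X_(i) / X_(k+1))

rng = np.random.default_rng(42)
alpha = 2.0                                    # Pareto tail index, so gamma = 1/alpha = 0.5
x = rng.pareto(alpha, size=5000) + 1.0         # classical Pareto(alpha) sample with x_min = 1

for k in (50, 100, 200, 500, 1000):
    print(f"k = {k:4d}  Hill gamma-hat = {hill_estimate(x, k):.3f}  (true gamma = 0.5)")
```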

7.
This paper deals with the analysis of multivariate survival data from a Bayesian perspective using Markov chain Monte Carlo methods. The Metropolis algorithm, along with the Gibbs sampler, is used to calculate some of the marginal posterior distributions. A multivariate survival model is proposed, since survival times within the same group are correlated as a consequence of a frailty random block effect. The conditional proportional-hazards model of Clayton and Cuzick is used with a martingale-structured prior process (Arjas and Gasbarra) for the discretized baseline hazard. Besides the calculation of the marginal posterior distributions of the parameters of interest, this paper presents some Bayesian EDA diagnostic techniques for assessing model adequacy. The methodology is illustrated with kidney infection data, where the times to infection within the same patient are expected to be correlated.

8.
We consider the problem of estimating the size of a closed population based on the results of a certain type of mark-resighting sampling design. The design is similar to the commonly used multiple capture-recapture design, yet in some cases it is economically more feasible and easier to use. Sampling is done by first tagging a number of randomly selected animals with visible markers and later randomly sighting animals (for instance, for large animals, by visually sampling from a helicopter) and counting the number of tagged animals among them. In this paper, we look at Bayesian methods for point and interval estimation of population size under this design. An example involving estimation of a mountain sheep population, a couple of simulated examples and simulation studies are given to demonstrate the advantages of the proposed procedure over other available approximate procedures.
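Not the authors' estimator, but a minimal sketch of a Bayesian point and interval estimate for a closed population under a simple mark-resight assumption: if T animals are tagged and a sighting survey records s animals of which m are tagged, then m given N is taken as hypergeometric and a discrete uniform prior is placed on N over a grid. The counts, prior range and grid are all invented.

```python
import numpy as np
from scipy.stats import hypergeom

T, s, m = 100, 250, 30                            # tagged, total sighted, tagged among sighted
N_grid = np.arange(max(T, s, T + s - m), 5001)    # candidate population sizes

# Likelihood: m | N ~ Hypergeometric(N, T, s); prior: discrete uniform on the grid.
like = hypergeom.pmf(m, N_grid, T, s)
post = like / like.sum()

post_mean = np.sum(N_grid * post)
cdf = np.cumsum(post)
lower = N_grid[np.searchsorted(cdf, 0.025)]       # approximate 2.5% posterior quantile
upper = N_grid[np.searchsorted(cdf, 0.975)]       # approximate 97.5% posterior quantile
print(f"posterior mean N = {post_mean:.0f}, 95% credible interval = ({lower}, {upper})")
```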

9.
Summary. A typical microarray experiment attempts to ascertain which genes display differential expression in different samples. We model the data using a two-component mixture model and develop an empirical Bayesian thresholding procedure, originally introduced for thresholding wavelet coefficients, as an alternative to existing methods for determining differential expression across thousands of genes. The method is built on sound theoretical properties and is easy to implement in the R statistical package. Furthermore, we consider improvements to the standard empirical Bayesian procedure when replication is present, to increase the robustness and reliability of the method. We provide an introduction to microarrays for those who are unfamiliar with the field, and the proposed procedure is demonstrated with applications to two-channel complementary DNA microarray experiments.

10.
Bayesian cubature provides a flexible framework for numerical integration, in which a priori knowledge on the integrand can be encoded and exploited. This additional...
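The abstract above is truncated; as a hedged, generic illustration of the idea (not the paper's method), the sketch below computes the Bayesian quadrature posterior mean of an integral over [0, 1]: a Gaussian-process prior with an RBF kernel is conditioned on a few function evaluations, and the integral estimate is z^T K^{-1} y, with the kernel means z_i approximated on a dense grid. The kernel, length-scale, nodes and test integrand are all invented.

```python
import numpy as np

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel matrix between point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def bq_posterior_mean(f, nodes, ell=0.2, jitter=1e-10):
    m = 4000
    t = (np.arange(m) + 0.5) / m                      # midpoint grid on [0, 1]
    K = rbf(nodes, nodes, ell) + jitter * np.eye(len(nodes))
    z = rbf(nodes, t, ell).sum(axis=1) / m            # z_i approximates the integral of k(x_i, t) over [0, 1]
    weights = np.linalg.solve(K, z)                   # quadrature weights K^{-1} z
    return weights @ f(nodes)                         # posterior mean of the integral of f

f = lambda x: np.sin(3 * x) + x ** 2                  # exact integral: (1 - cos 3)/3 + 1/3
nodes = np.linspace(0.05, 0.95, 10)
print("Bayesian quadrature estimate:", bq_posterior_mean(f, nodes))
print("exact integral              :", (1 - np.cos(3)) / 3 + 1 / 3)
```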

11.
Summary. A method of incorporating prior opinion in contingency tables is described. The method can be used to incorporate beliefs of independence or symmetry, but extensions are straightforward. Logistic normal distributions that express such beliefs are used as priors for the cell probabilities, and posterior estimates are derived. Empirical Bayes methods are also discussed and approximate posterior variances are provided. The methods are illustrated by a numerical example.
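As a hedged sketch of what a logistic normal prior on cell probabilities looks like (generic notation, not taken from the paper), the probabilities of a table with J cells can be written as

```latex
p_j = \frac{\exp(\gamma_j)}{\sum_{k=1}^{J}\exp(\gamma_k)},
\qquad
(\gamma_1,\dots,\gamma_J) \sim N(\boldsymbol{\mu}, \boldsymbol{\Sigma}),
```

with one component fixed (e.g. the last one set to zero) for identifiability; beliefs such as independence or symmetry can then be expressed through the structure chosen for the mean vector and covariance matrix.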

12.
We propose a Bayesian computation and inference method for the Pearson-type chi-squared goodness-of-fit test with right-censored survival data. Our test statistic is derived from the classical Pearson chi-squared test using the differences between the observed and expected counts in the partitioned bins. In the Bayesian paradigm, we generate posterior samples of the model parameter using a Markov chain Monte Carlo procedure. By replacing the maximum likelihood estimator in the quadratic form with a random observation from the posterior distribution of the model parameter, we can easily construct a chi-squared test statistic. The degrees of freedom of the test equal the number of bins and are thus independent of the dimensionality of the underlying parameter vector. The test statistic recovers the conventional Pearson-type chi-squared structure. Moreover, the proposed algorithm circumvents the burden of evaluating the Fisher information matrix, its inverse and the rank of the variance–covariance matrix. We examine the proposed model diagnostic method using simulation studies and illustrate it with a real data set from a prostate cancer study.
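A hedged rendering of the quadratic form described above, in generic notation (the bin labels j = 1, ..., J and the counts O_j and E_j are my notation, not necessarily the paper's): with a posterior draw used in place of the MLE,

```latex
X^2\!\left(\theta^{(m)}\right)
  = \sum_{j=1}^{J} \frac{\left(O_j - E_j\!\left(\theta^{(m)}\right)\right)^{2}}{E_j\!\left(\theta^{(m)}\right)},
\qquad \theta^{(m)} \sim p(\theta \mid \text{data}),
```

which, per the abstract, is referred to a chi-squared distribution whose degrees of freedom equal the number of bins J.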

13.
Bayesian inference for categorical data analysis
This article surveys Bayesian methods for categorical data analysis, with primary emphasis on contingency table analysis. Early innovations were proposed by Good (1953, 1956, 1965) for smoothing proportions in contingency tables and by Lindley (1964) for inference about odds ratios. These approaches primarily used conjugate beta and Dirichlet priors. Altham (1969, 1971) presented Bayesian analogs of small-sample frequentist tests for 2 x 2 tables using such priors. An alternative approach using normal priors for logits received considerable attention in the 1970s by Leonard and others (e.g., Leonard 1972). Adopted usually in a hierarchical form, the logit-normal approach allows greater flexibility and scope for generalization. The 1970s also saw considerable interest in loglinear modeling. The advent of modern computational methods since the mid-1980s has led to a growing literature on fully Bayesian analyses with models for categorical data, with main emphasis on generalized linear models such as logistic regression for binary and multi-category response variables.

14.
Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S_1, …, S_k; random effects can then be a useful model: S_i = E(S) + ε_i. Here, the temporal variation in survival probability is treated as random, with E(ε_i²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for the process variation σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional sampling-variance component var(Ŝ | S). Furthermore, the random effects model leads to shrinkage estimates of the S_i that are improved (in mean square error) compared to the MLEs Ŝ_i from the unrestricted time-effects model. Appropriate confidence intervals based on the shrinkage estimates are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of the estimator of σ², confidence interval coverage on σ², coverage and mean square error comparisons for inference about the S_i based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: S_i = S (no effects), S_i = E(S) + ε_i (random effects), and S_1, …, S_k (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than the fixed-effects MLE for the S_i.
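For intuition only (this is a simplified moments-based sketch, not the procedure implemented in program MARK), the snippet below estimates a process variance σ² from yearly survival MLEs Ŝ_i and their sampling variances, then forms the corresponding shrinkage estimates; all input numbers are invented.

```python
import numpy as np

# Invented yearly survival MLEs and their sampling variances (e.g. from a CJS fit).
S_hat = np.array([0.62, 0.55, 0.70, 0.58, 0.66, 0.49, 0.64])
samp_var = np.array([0.004, 0.006, 0.005, 0.004, 0.007, 0.006, 0.005])

mu_hat = S_hat.mean()                                   # simple estimate of E(S)
# Moment estimate of process variance: total spread minus average sampling noise.
sigma2_hat = max(0.0, S_hat.var(ddof=1) - samp_var.mean())

# Shrinkage estimates: pull each MLE toward mu_hat according to its noise level.
shrink = sigma2_hat / (sigma2_hat + samp_var)
S_tilde = mu_hat + shrink * (S_hat - mu_hat)

print("estimated E(S):", round(mu_hat, 3))
print("estimated process variance sigma^2:", round(sigma2_hat, 4))
print("shrinkage estimates:", np.round(S_tilde, 3))
```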

15.
Estimates of mean response for a developmental toxicity study are developed using the technique of the Bayesian bootstrap. Using this method, a joint posterior distribution of the mean response is simulated, providing a means for determining estimated variances and making confidence statements. The approach allows effects on litter size to be taken into consideration in the estimation of mean response. In addition, a method is given for incorporating prior information into the analysis. The prior information may concern the mean response as well as the litter-size distribution. Results are compared with likelihood-based estimates.
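The paper's litter-size adjustment is not reproduced here; the sketch below is only the basic Bayesian bootstrap idea applied to a mean response: each replicate reweights the observed responses with Dirichlet(1, ..., 1) weights and records the weighted mean, giving an approximate posterior for the mean. The data are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.binomial(1, 0.15, size=120).astype(float)   # invented per-fetus response indicators

B = 5000
post_means = np.empty(B)
for b in range(B):
    w = rng.dirichlet(np.ones(len(y)))              # Bayesian bootstrap weights
    post_means[b] = np.sum(w * y)                   # weighted mean response

print("posterior mean of the mean response:", post_means.mean().round(3))
print("95% interval:", np.quantile(post_means, [0.025, 0.975]).round(3))
```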

16.
This paper considers quantile regression models using an asymmetric Laplace distribution from a Bayesian point of view. We develop a simple and efficient Gibbs sampling algorithm for fitting the quantile regression model based on a location-scale mixture representation of the asymmetric Laplace distribution. It is shown that the resulting Gibbs sampler can be accomplished by sampling from either a normal or a generalized inverse Gaussian distribution. We also discuss some possible extensions of our approach, including the incorporation of a scale parameter, the use of a double-exponential prior, and a Bayesian analysis of Tobit quantile regression. The proposed methods are illustrated with both simulated and real data.
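As a hedged illustration of the mixture representation such samplers rely on (the constants below are the standard ones used in Bayesian quantile regression; the quantile level p = 0.25 and the sample size are arbitrary, and this is not the authors' code): an asymmetric Laplace error can be written as theta*z + tau*sqrt(z)*u with z exponential and u standard normal, so that its p-th quantile sits at zero.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.25                                   # quantile level of interest
theta = (1 - 2 * p) / (p * (1 - p))        # location constant of the mixture
tau2 = 2 / (p * (1 - p))                   # scale constant of the mixture

z = rng.exponential(1.0, size=200_000)     # latent exponential scales
u = rng.normal(size=200_000)               # standard normal shocks
eps = theta * z + np.sqrt(tau2 * z) * u    # asymmetric Laplace draws via the mixture

# Check: for an asymmetric Laplace error at level p, P(eps <= 0) should equal p.
print("empirical P(eps <= 0):", np.mean(eps <= 0).round(3), " target:", p)
```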

17.
Traditionally, analyses in hydrology employ only one hydrological variable. Recently, Nadarajah [A bivariate distribution with gamma and beta marginals with application to drought data. J Appl Stat. 2009;36:277–301] proposed a bivariate model with gamma and beta marginal distributions to analyse drought duration and the proportion of drought events. However, the validity of this method hinges on the fulfilment of stringent assumptions. We propose a robust likelihood approach which can be used to make inferences for general bivariate continuous and proportion data. Unlike the gamma–beta (GB) model, which is sensitive to model misspecification, the new method provides legitimate inference without knowing the true underlying distribution of the bivariate data. Simulations and an analysis of drought data from the State of Nebraska, USA, are provided to contrast this robust approach with the GB model.

18.
We consider a continuous-time model for the evolution of social networks. A social network is here conceived as a (di)graph on a set of vertices representing actors, and the changes of interest are the creation and disappearance over time of edges (arcs) in the graph. Hence we model a collection of random edge indicators that are not, in general, independent; we explicitly model the interdependencies between edge indicators that arise from interaction between social entities. A Markov chain is defined in terms of an embedded chain with holding times and transition probabilities. Data are observed at fixed points in time, and hence we are not able to observe the embedded chain directly. Introducing a prior distribution for the parameters, we may implement an MCMC algorithm for exploring the posterior distribution of the parameters by simulating the evolution of the embedded process between observations.

19.
In this paper, we discuss fully Bayesian quantile inference using Markov chain Monte Carlo (MCMC) methods for longitudinal data models with random effects. Under the assumption that the error term follows an asymmetric Laplace distribution, we establish a hierarchical Bayesian model and obtain the posterior distribution of the unknown parameters at the τ-th quantile level. We overcome the current computational limitations using two approaches: a general MCMC technique with the Metropolis–Hastings algorithm, and Gibbs sampling from the full conditional distributions. These two methods outperform traditional frequentist methods under a wide array of simulated data models and are flexible enough to easily accommodate changes in the number of random effects and in their assumed distribution. We apply the Gibbs sampling method to analyse mouse growth data, and some conclusions different from those in the literature are obtained.

20.
Since the pioneering work by Koenker and Bassett [27], quantile regression models and their applications have become increasingly popular and important for research in many areas. In this paper, a random effects ordinal quantile regression model is proposed for the analysis of longitudinal data with an ordinal outcome of interest. An efficient Gibbs sampling algorithm is derived for fitting the model to the data, based on a location-scale mixture representation of the skewed double-exponential distribution. The proposed approach is illustrated using simulated data and a real data example. This is the first work to discuss quantile regression for the analysis of longitudinal data with an ordinal outcome.
