Similar Documents
20 similar documents found.
1.
Starting with a decision-theoretic formulation of simultaneous testing of null hypotheses against two-sided alternatives, a procedure controlling the Bayesian directional false discovery rate (BDFDR) is developed through controlling the posterior directional false discovery rate (PDFDR). This is an alternative to Lewis and Thayer [2004. A loss function related to the FDR for random effects multiple comparison. J. Statist. Plann. Inference 125, 49–58] with better control of the BDFDR. Moreover, it is optimal in the sense of being the non-randomized part of the procedure that maximizes the posterior expectation of the directional per-comparison power rate given the data, while controlling the PDFDR. A corresponding empirical Bayes method is proposed in the context of a one-way random effects model. A simulation study shows that the proposed Bayes and empirical Bayes methods perform much better from a Bayesian perspective than the procedures available in the literature.
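The control step can be illustrated with a generic sketch, not the authors' exact rule: given posterior sign probabilities for each effect, claim the better-supported direction and enlarge the rejection set as long as the average posterior probability of a directional error stays below the target level. The function name and selection rule below are illustrative assumptions.

```python
import numpy as np

def control_pdfdr(p_neg, p_pos, q=0.05):
    """Hypothetical sketch: claim the better-supported sign for each effect and
    grow the rejection set while the average posterior probability of a
    directional error among the selected hypotheses stays below q."""
    p_neg, p_pos = np.asarray(p_neg, float), np.asarray(p_pos, float)
    direction = np.where(p_pos >= p_neg, 1, -1)   # claimed sign of each effect
    err = 1.0 - np.maximum(p_neg, p_pos)          # posterior prob. of a directional error
    order = np.argsort(err)                       # most convincing hypotheses first
    avg_err = np.cumsum(err[order]) / np.arange(1, err.size + 1)
    k = int(np.sum(avg_err <= q))                 # largest set keeping the average below q
    selected = np.zeros(err.size, dtype=bool)
    selected[order[:k]] = True
    return selected, direction

# The sign probabilities would come from posterior draws, e.g.
# p_pos = (theta_draws > 0).mean(axis=0); p_neg = (theta_draws < 0).mean(axis=0)
```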

2.
Semiparametric Bayesian models are nowadays a popular tool in event history analysis. An important area of research concerns the investigation of frequentist properties of posterior inference. In this paper, we propose novel semiparametric Bayesian models for the analysis of competing risks data and investigate the Bernstein–von Mises theorem for differentiable functionals of model parameters. The model is specified by expressing the cause-specific hazard as the product of the conditional probability of a failure type and the overall hazard rate. We take the conditional probability as a smooth function of time and leave the cumulative overall hazard unspecified. A prior distribution is defined on the joint parameter space, which includes a beta process prior for the cumulative overall hazard. We first develop the large-sample properties of maximum likelihood estimators by giving simple sufficient conditions for them to hold. Then, we show that, under the chosen priors, the posterior distribution for any differentiable functional of interest is asymptotically equivalent to the sampling distribution derived from maximum likelihood estimation. A simulation study is provided to illustrate the coverage properties of credible intervals on cumulative incidence functions.

3.
We exploit Bayesian criteria for designing M/M/c//r queueing systems with spares. To illustrate our approach, we use a real problem from aeronautic maintenance, where the numbers of repair crews and spare planes must be sufficiently large to meet the necessary operational capacity. Bayesian guarantees for this to happen can be given using predictive or posterior distributions.
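For a concrete sense of the queueing side, the sketch below evaluates the steady-state distribution of an M/M/c//r machine-repair system (r planes, each failing at rate lam while operational, c repair crews each working at rate mu) and the probability that a required number of planes are operational. A Bayesian design check in the spirit of the paper would average this probability over posterior or predictive draws of lam and mu; the function names and the Gamma-posterior comment are assumptions, not the authors' code.

```python
import numpy as np
from math import prod

def mmcr_steady_state(lam, mu, c, r):
    """Steady-state distribution of the number of failed units in an M/M/c//r
    machine-repair queue: r units, each failing at rate lam while operational,
    c repair crews each completing repairs at rate mu."""
    weights = [1.0]
    for n in range(1, r + 1):
        up = prod((r - k) * lam for k in range(n))            # failure rates on the way up to state n
        down = prod(min(k, c) * mu for k in range(1, n + 1))  # repair-completion rates on the way down
        weights.append(up / down)
    w = np.array(weights)
    return w / w.sum()

def prob_capacity_met(lam, mu, c, r, required_up):
    """P(at least `required_up` planes operational) in steady state."""
    pi = mmcr_steady_state(lam, mu, c, r)
    max_down = r - required_up
    return pi[: max_down + 1].sum()

# A Bayesian guarantee in the spirit of the paper (assuming posterior draws of
# lam and mu, e.g. from Gamma posteriors) averages the probability over draws:
# np.mean([prob_capacity_met(l, m, c=3, r=10, required_up=8)
#          for l, m in zip(lam_draws, mu_draws)])
```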

4.
The purpose of this paper is to develop a Bayesian analysis for the zero-inflated hyper-Poisson model. Markov chain Monte Carlo methods are used to develop a Bayesian procedure for the model, and the Bayes estimators are compared by simulation with the maximum-likelihood estimators. Regression modeling and model selection are also discussed, and case-deletion influence diagnostics are developed for the joint posterior distribution based on the functional Bregman divergence, which includes the ψ-divergence and several other divergence measures, such as the Itakura–Saito, Kullback–Leibler, and χ2 divergence measures. The performance of our approach is illustrated with artificial data and with real data from an apple cultivation experiment.
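As a rough sketch of the sampling model (the paper's parameterization may differ), the zero-inflated hyper-Poisson pmf can be written with a confluent hypergeometric normalizer, and the maximum-likelihood comparator can be fit numerically; a Bayesian analysis would instead place priors on the three parameters and run MCMC. Function names and starting values below are illustrative.

```python
import numpy as np
from scipy.special import hyp1f1, poch
from scipy.optimize import minimize

def zihp_logpmf(y, pi, lam, gamma):
    """Sketch of a zero-inflated hyper-Poisson log-pmf.
    Hyper-Poisson: P(Y=k) = lam**k / (poch(gamma, k) * 1F1(1; gamma; lam))."""
    y = np.asarray(y)
    norm = hyp1f1(1.0, gamma, lam)                  # normalizing constant of the hyper-Poisson
    log_hp = y * np.log(lam) - np.log(poch(gamma, y)) - np.log(norm)
    p0 = pi + (1.0 - pi) / norm                     # inflated probability mass at zero
    return np.where(y == 0, np.log(p0), np.log1p(-pi) + log_hp)

def fit_zihp(y):
    """Maximum-likelihood fit, used here only as the frequentist comparator."""
    nll = lambda t: -zihp_logpmf(y, *t).sum()
    res = minimize(nll, x0=np.array([0.2, 1.0, 1.0]),
                   bounds=[(1e-6, 1 - 1e-6), (1e-6, None), (1e-6, None)])
    return res.x  # (pi_hat, lam_hat, gamma_hat)
```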

5.
We consider a continuous-time model for the evolution of social networks. A social network is here conceived as a (di)graph on a set of vertices, representing actors, and the changes of interest are the creation and disappearance over time of edges (arcs) in the graph. Hence we model a collection of random edge indicators that are not, in general, independent. We explicitly model the interdependencies between edge indicators that arise from interaction between social entities. A Markov chain is defined in terms of an embedded chain with holding times and transition probabilities. Data are observed at fixed points in time, so we are not able to observe the embedded chain directly. Introducing a prior distribution for the parameters, we can implement an MCMC algorithm for exploring the posterior distribution of the parameters by simulating the evolution of the embedded process between observations.
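A minimal sketch of the simulation step: evolve a binary adjacency matrix between two observation times by drawing the embedded jump chain with exponential holding times. Here the toggle rates are constants for simplicity, whereas the model described above lets them depend on the current graph (e.g., reciprocity effects); the function and rate names are illustrative assumptions.

```python
import numpy as np

def simulate_edges(adj, rate_add, rate_del, t_obs, rng=None):
    """Simulate the continuous-time evolution of a square 0/1 adjacency matrix
    up to time t_obs via the embedded jump chain with exponential holding times."""
    rng = rng or np.random.default_rng()
    adj = adj.copy()
    n = adj.shape[0]
    t = 0.0
    while True:
        rates = np.where(adj == 1, rate_del, rate_add)  # toggle rate of each ordered dyad
        np.fill_diagonal(rates, 0.0)                    # no self-ties
        total = rates.sum()
        t += rng.exponential(1.0 / total)               # holding time in the current graph
        if t > t_obs:
            return adj                                  # state at the next observation time
        flat = rng.choice(n * n, p=rates.ravel() / total)
        i, j = divmod(flat, n)                          # transition of the embedded chain
        adj[i, j] = 1 - adj[i, j]
```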

6.
A model for directional data in q dimensions is studied. The data are assumed to arise from a distribution with a density on a sphere of q − 1 dimensions. The density is unimodal and rotationally symmetric, but otherwise of unknown form. The posterior distribution of the unknown mode (mean direction) is derived, and small-sample posterior inference is discussed. The posterior mean of the density is also given. A numerical method for evaluating posterior quantities based on sampling a Markov chain is introduced. This method is generally applicable to problems involving unknown monotone functions.

7.
We propose a method for the analysis of a spatial point pattern, which is assumed to arise as a set of observations from a spatial nonhomogeneous Poisson process. The spatial point pattern is observed in a bounded region, which, for most applications, is taken to be a rectangle in the space where the process is defined. The method is based on modeling a density function, defined on this bounded region, that is directly related to the intensity function of the Poisson process. We develop a flexible nonparametric mixture model for this density using a bivariate Beta distribution for the mixture kernel and a Dirichlet process prior for the mixing distribution. Using posterior simulation methods, we obtain full inference for the intensity function and any other functional of the process that might be of interest. We discuss applications to problems where inference for clustering in the spatial point pattern is of interest. Moreover, we consider applications of the methodology to extreme value analysis problems. We illustrate the modeling approach with three previously published data sets. Two of the data sets are from forestry and consist of locations of trees. The third data set consists of extremes from the Dow Jones index over a period of 1303 days.
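To make the intensity/density link concrete, the sketch below writes the intensity on the unit square as a total mass times a mixture density and simulates the point pattern by thinning. The kernel here is a product of independent Beta densities and the mixture is finite, which is a simplification of the bivariate-Beta kernel and Dirichlet process prior described above; names and signatures are illustrative.

```python
import numpy as np
from scipy.stats import beta

def intensity(x, y, total_mass, comps):
    """Intensity on the unit square: (total mass) x (mixture density).
    `comps` is a list of (weight, ax, bx, ay, by) tuples, a finite stand-in
    for the Dirichlet process mixture."""
    dens = sum(w * beta.pdf(x, ax, bx) * beta.pdf(y, ay, by)
               for w, ax, bx, ay, by in comps)
    return total_mass * dens

def simulate_nhpp(total_mass, comps, lam_max, rng=None):
    """Simulate the point pattern by thinning a homogeneous process of rate
    lam_max, which must bound the intensity from above on [0, 1]^2."""
    rng = rng or np.random.default_rng()
    n = rng.poisson(lam_max)                 # candidate points
    pts = rng.random((n, 2))
    keep = rng.random(n) < intensity(pts[:, 0], pts[:, 1], total_mass, comps) / lam_max
    return pts[keep]
```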

8.
A multivariate frailty model in which the survival function depends on the baseline distributions of the components and the frailty random variable is considered. Since misspecification in the choice of frailty distribution and/or baseline distribution may affect the distribution of the multivariate frailty model, we use the theory of stochastic orders to compare multivariate frailty models arising from different choices of frailty distribution.

9.
In this article, an importance sampling (IS) method for the posterior expectation of a nonlinear function in a Bayesian vector autoregressive (VAR) model is developed. Most Bayesian inference problems involve the evaluation of the expectation of a function of interest, usually a nonlinear function of the model parameters, under the posterior distribution. Nonlinear functions in the Bayesian VAR setting are difficult to estimate and usually require numerical methods for their evaluation. A weighted IS estimator is used for the evaluation of the posterior expectation. With the cross-entropy (CE) approach, the IS density is chosen from a specified family of densities such that the CE distance, or Kullback–Leibler divergence, between the optimal IS density and the importance density is minimal. The performance of the proposed algorithm is assessed in iterated multistep forecasting of US macroeconomic time series.
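The weighted IS estimator itself is simple to state. The sketch below shows the self-normalized form for a scalar function of interest, with the importance density taken as given (in the setting above it would be the CE-optimized member of the chosen family); names are illustrative.

```python
import numpy as np

def weighted_is_estimate(theta_draws, log_post_unnorm, log_imp_dens, h):
    """Self-normalized importance-sampling estimate of E[h(theta) | y] for a
    scalar function h. theta_draws are draws from the importance density;
    log_post_unnorm and log_imp_dens evaluate the unnormalized log posterior
    and the log importance density at a single draw."""
    log_w = np.array([log_post_unnorm(t) - log_imp_dens(t) for t in theta_draws])
    w = np.exp(log_w - log_w.max())        # stabilize before normalizing
    w /= w.sum()
    vals = np.array([h(t) for t in theta_draws])
    ess = 1.0 / np.sum(w ** 2)             # effective sample size diagnostic
    return float(np.sum(w * vals)), ess
```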

10.
In this article, two-stage hierarchical Bayesian models are used for the observed occurrences of events in a rectangular region. Two Bayesian variable window scan statistics are introduced to test the null hypothesis that the observed events follow a specified two-stage hierarchical model versus an alternative that indicates a local increase in the average number of observed events in a subregion (clustering). Both procedures are based on a sequence of Bayes factors and their p-values, which are generated via simulation of posterior samples of the parameters under the null and alternative hypotheses. The posterior samples of the parameters are generated by employing Gibbs sampling via the introduction of auxiliary variables. Numerical results are presented to evaluate the performance of these variable window scan statistics.

11.
When analyzing incomplete longitudinal clinical trial data, it is often inappropriate to assume that the occurrence of missingness is at random, especially in cases where visits are entirely missed. We present a framework that simultaneously models multivariate incomplete longitudinal data and a non-ignorable missingness mechanism using a Bayesian approach. A criterion measure is presented for comparing models. We demonstrate the feasibility of the methodology through reanalysis of two of the longitudinal measures from a clinical trial of penicillamine treatment for scleroderma patients. We compare the results for univariate and bivariate, ignorable and non-ignorable missingness models.

12.
This paper investigates statistical issues that arise in interlaboratory studies known as Key Comparisons when one has to link several comparisons to or through existing studies. An approach to the analysis of such data is proposed using Gaussian distributions with heterogeneous variances. We develop conditions for the set of sufficient statistics to be complete and for the uniqueness of uniformly minimum variance unbiased estimators (UMVUE) of the contrast parametric functions. New procedures are derived for estimating these functions together with estimates of their uncertainty. These estimates lead to associated confidence intervals for the laboratory (or study) contrasts. Several examples demonstrate statistical inference for contrasts based on linkage through the pilot laboratories. Monte Carlo simulation results on the performance of approximate confidence intervals are also reported.
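A rough illustration of linkage through a pilot laboratory, assuming independent Gaussian errors: the contrast between two laboratories that took part in different comparisons is estimated via their differences from the common pilot, with uncertainties added in quadrature. This is a simplification of the UMVUE-based intervals developed in the paper; the function and argument names are assumptions.

```python
import numpy as np
from scipy.stats import norm

def linked_contrast(lab_i, pilot_1, lab_j, pilot_2, conf=0.95):
    """Each argument is a (mean, standard uncertainty) pair: labs i and j come
    from different key comparisons, and the pilot laboratory was measured in
    both. Returns the linked contrast and an approximate normal interval."""
    est = (lab_i[0] - pilot_1[0]) - (lab_j[0] - pilot_2[0])
    se = np.sqrt(lab_i[1] ** 2 + pilot_1[1] ** 2 + lab_j[1] ** 2 + pilot_2[1] ** 2)
    z = norm.ppf(0.5 + conf / 2.0)
    return est, (est - z * se, est + z * se)
```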

13.
There are two different systems of contrast parameterization for analyzing the interaction effects among factors with more than two levels: the linear-quadratic system and the orthogonal components system. Based on the former system and an ANOVA model, Xu and Wu (2001) introduced the generalized wordlength pattern for general factorial designs. This paper shows that the generalized wordlength pattern exactly measures the balance pattern of the interaction columns of a symmetrical design based on the orthogonal components system, thus providing an alternative angle from which to view the generalized minimum aberration criterion. This work is partially supported by NNSF of China grant No. 10231030.

14.
The author extends to the Bayesian nonparametric context the multinomial goodness-of-fit tests due to Cressie & Read (1984). Her approach is suitable when the model of interest is a discrete distribution. She provides an explicit form for the tests, which are based on power-divergence measures between a prior Dirichlet process that is highly concentrated around the model of interest and the corresponding posterior Dirichlet process. In addition to providing interesting special cases and useful approximations, she discusses calibration and the choice of test through examples.
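For reference, the classical Cressie–Read power-divergence statistic between observed and expected multinomial counts is sketched below (λ = 1 gives Pearson's χ², λ → 0 the likelihood-ratio statistic); in the Bayesian nonparametric test described above, the same divergence family is evaluated between the prior and posterior Dirichlet processes rather than between counts.

```python
import numpy as np

def power_divergence(obs, expected, lam=2.0 / 3.0):
    """Cressie-Read power-divergence statistic between observed and expected
    multinomial counts: 2/(lam*(lam+1)) * sum obs*((obs/expected)**lam - 1)."""
    obs = np.asarray(obs, float)
    expected = np.asarray(expected, float)
    return 2.0 / (lam * (lam + 1.0)) * np.sum(obs * ((obs / expected) ** lam - 1.0))
```

The same family (for counts) is also available as scipy.stats.power_divergence, which can serve as a cross-check of this sketch.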

15.
The authors develop default priors for the Gaussian random field model that includes a nugget parameter accounting for the effects of microscale variations and measurement errors. They present the independence Jeffreys prior, the Jeffreys-rule prior and a reference prior, and study posterior propriety of these and related priors. They show that the uniform prior for the correlation parameters yields an improper posterior. In the case of known regression and variance parameters, they derive the Jeffreys prior for the correlation parameters. They prove posterior propriety and show that the predictive distributions at ungauged locations have finite variance. Moreover, they show that the proposed priors have good frequentist properties, except for those based on the marginal Jeffreys-rule prior for the correlation parameters, and illustrate their approach by analyzing a dataset of zinc concentrations along the river Meuse. The Canadian Journal of Statistics 40: 304–327; 2012 © 2012 Statistical Society of Canada

16.
In this paper we propose a general cure rate aging model. Our approach allows for different underlying activation mechanisms that lead to the event of interest. The number of competing causes of the event of interest is assumed to follow a logarithmic distribution. The model is parameterized in terms of the cured fraction, which is then linked to covariates. We explore the use of Markov chain Monte Carlo methods to develop a Bayesian analysis for the proposed model. Moreover, model selection for comparing the fitted models is discussed, and case-deletion influence diagnostics are developed for the joint posterior distribution based on the ψ-divergence, which has several divergence measures as particular cases, such as the Kullback–Leibler (K-L), J-distance, L1 norm, and χ2 divergence measures. Simulation studies are performed and the approach is illustrated with real malignant melanoma data.

17.
We introduce the Hausdorff α-entropy to study the strong Hellinger consistency of posterior distributions. We obtain general Bayesian consistency theorems which extend the well-known results of Barron et al. [1999. The consistency of posterior distributions in nonparametric problems. Ann. Statist. 27, 536–561], Ghosal et al. [1999. Posterior consistency of Dirichlet mixtures in density estimation. Ann. Statist. 27, 143–158] and Walker [2004. New approaches to Bayesian consistency. Ann. Statist. 32, 2028–2043]. As an application we strengthen previous results on Bayesian consistency of (normal) mixture models.

18.
Equivalent factorial designs have identical statistical properties for estimation of factorial contrasts and for model fitting. Non-equivalent designs, however, may have the same statistical properties under one particular model but different properties under a different model. In this paper, we describe known methods for the determination of equivalence or non-equivalence of two-level factorial designs, whether they be regular factorial designs, non-regular orthogonal arrays, or have no particular structure. In addition, we evaluate a number of potential fast screening methods for detecting non-equivalence of designs. Although the paper concentrates mainly on symmetric designs with factors at two levels, we also evaluate methods of determining combinatorial equivalence and non-equivalence of three-level designs and indicate extensions to larger numbers of levels and to asymmetric designs.
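One simple fast screen, sketched below under the assumption of two-level designs coded 0/1, compares the multisets of pairwise row Hamming distances, which are invariant under row permutations, column permutations, and level switches within a column. Differing profiles prove non-equivalence; matching profiles are only a necessary condition, so a full combinatorial check would still be needed. The function names are illustrative, and this is one screen of the general kind evaluated in the paper, not necessarily the authors' preferred one.

```python
import numpy as np
from collections import Counter

def hamming_profile(design):
    """Multiset of pairwise row Hamming distances of a two-level design
    (rows = runs, columns = factors, levels coded 0/1)."""
    d = np.asarray(design)
    n = d.shape[0]
    dists = [int((d[i] != d[j]).sum()) for i in range(n) for j in range(i + 1, n)]
    return Counter(dists)

def possibly_equivalent(d1, d2):
    """Fast screen: False means the designs are certainly non-equivalent;
    True only means the screen cannot rule equivalence out."""
    return hamming_profile(d1) == hamming_profile(d2)
```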

19.
Ranked set sampling (RSS) was first proposed by McIntyre [1952. A method for unbiased selective sampling, using ranked sets. Australian J. Agricultural Res. 3, 385–390] as an effective way to estimate the unknown population mean. Chuiv and Sinha [1998. On some aspects of ranked set sampling in parametric estimation. In: Balakrishnan, N., Rao, C.R. (Eds.), Handbook of Statistics, vol. 17. Elsevier, Amsterdam, pp. 337–377] and Chen et al. [2004. Ranked Set Sampling—Theory and Application. Lecture Notes in Statistics, vol. 176. Springer, New York] have provided excellent surveys of RSS and various inferential results based on RSS. In this paper, we use the idea of order statistics from independent and non-identically distributed (INID) random variables to propose ordered ranked set sampling (ORSS) and then develop optimal linear inference based on ORSS. We determine the best linear unbiased estimators based on ORSS (BLUE-ORSS) and show that they are more efficient than BLUE-RSS for the two-parameter exponential, normal and logistic distributions. Although this is not the case for the one-parameter exponential distribution, the relative efficiency of the BLUE-ORSS (to BLUE-RSS) is very close to 1. Furthermore, we compare both BLUE-ORSS and BLUE-RSS with the BLUE based on order statistics from a simple random sample (BLUE-OS). We show that BLUE-ORSS is uniformly better than BLUE-OS, while BLUE-RSS is not as efficient as BLUE-OS for small sample sizes (n < 5).
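A small simulation sketch of plain RSS (perfect rankings assumed) against simple random sampling of the same total size illustrates the efficiency gain that motivates ORSS; this is not the BLUE-ORSS itself, and the names and settings below are illustrative.

```python
import numpy as np

def rss_mean(draw, set_size, cycles, rng):
    """RSS mean estimator: in each cycle, for i = 1..m draw m units, keep the
    i-th smallest (perfect rankings assumed), and average all retained units."""
    kept = []
    for _ in range(cycles):
        for i in range(set_size):
            sample = np.sort(draw(set_size, rng))
            kept.append(sample[i])          # i-th order statistic of an independent set
    return np.mean(kept)

# Compare with simple random sampling of the same total size m * cycles.
rng = np.random.default_rng(1)
draw = lambda n, r: r.normal(size=n)        # standard-normal population as an example
m, k, reps = 4, 5, 2000
rss = [rss_mean(draw, m, k, rng) for _ in range(reps)]
srs = [draw(m * k, rng).mean() for _ in range(reps)]
print(np.var(rss), np.var(srs))             # the RSS variance is typically smaller
```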

20.
Pairwise comparisons for proportions estimated by pooled testing
When estimating the prevalence of a rare trait, pooled testing can confer substantial benefits when compared to individual testing. In addition to screening experiments for infectious diseases in humans, pooled testing has also been exploited in other applications such as drug testing, epidemiological studies involving animal disease, plant disease assessment, and screening for rare genetic mutations. Within a pooled-testing context, we consider situations wherein different strata or treatments are to be compared with the goals of assessing significant and practical differences between strata and ranking strata in terms of prevalence. To achieve these goals, we first present two simultaneous pairwise interval estimation procedures for use with pooled data. Our procedures rely on asymptotic results, so we investigate small-sample behavior and compare the two procedures in terms of simultaneous coverage probability and mean interval length. We then present a unified approach to determine pool sizes which deliver desired coverage properties while taking testing costs and interval precision into account. We illustrate our methods using data from an observational HIV study involving heterosexual males who use intravenous drugs.
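As a back-of-the-envelope companion to the interval procedures, the sketch below gives the standard MLE and delta-method standard error of a prevalence from equal-sized pools with a perfect assay, plus Bonferroni-adjusted Wald intervals for all pairwise differences between strata. This is a rough illustration under those assumptions, not the simultaneous procedures developed in the paper; function and argument names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def pooled_prevalence(y_pos, n_pools, pool_size):
    """MLE and delta-method SE of prevalence p from pooled testing with equal
    pool size s and a perfect assay: P(pool positive) = 1 - (1 - p)**s.
    Estimates near 0 or 1 positive pools need special care."""
    theta = y_pos / n_pools                                  # proportion of positive pools
    p_hat = 1.0 - (1.0 - theta) ** (1.0 / pool_size)
    dpdtheta = (1.0 / pool_size) * (1.0 - theta) ** (1.0 / pool_size - 1.0)
    se = dpdtheta * np.sqrt(theta * (1.0 - theta) / n_pools)
    return p_hat, se

def pairwise_wald_cis(data, conf=0.95):
    """Bonferroni-adjusted Wald intervals for all pairwise prevalence differences.
    `data` maps a stratum label to (positive pools, number of pools, pool size)."""
    labels = list(data)
    n_pairs = len(labels) * (len(labels) - 1) // 2
    z = norm.ppf(1.0 - (1.0 - conf) / (2.0 * n_pairs))
    out = {}
    for a in range(len(labels)):
        for b in range(a + 1, len(labels)):
            p1, se1 = pooled_prevalence(*data[labels[a]])
            p2, se2 = pooled_prevalence(*data[labels[b]])
            d, se = p1 - p2, np.hypot(se1, se2)
            out[(labels[a], labels[b])] = (d - z * se, d + z * se)
    return out
```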
