Similar Articles
20 similar articles found (search time: 375 ms)
1.
We study the association between bone mineral density (BMD) and body mass index (BMI) when contingency tables are constructed from several U.S. counties, where BMD has three levels (normal, osteopenia and osteoporosis) and BMI has four levels (underweight, normal, overweight and obese). We use the Bayes factor (posterior odds divided by prior odds, or equivalently the ratio of the marginal likelihoods) to construct the new test. Like the chi-squared test and Fisher's exact test, we have a direct Bayes test, which is a standard test using data from each county. As our main contribution, techniques of small area estimation are used to borrow strength across counties, and a pooled test of independence of BMD and BMI is obtained using a hierarchical Bayesian model. Our pooled Bayes test is computed by performing a Monte Carlo integration using random samples rather than Gibbs samples. We have seen important differences among the pooled Bayes test, the direct Bayes test and the Cressie-Read test that allows for some degree of sparseness, when the degree of evidence against independence is studied. As expected, we also found that the direct Bayes test is sensitive to the prior specifications, but the pooled Bayes test is not so sensitive. Moreover, the pooled Bayes test has competitive power properties, and it is superior when the cell counts are small to moderate.
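The pooled hierarchical test above is paper-specific, but the basic Bayes-factor test of independence it builds on can be sketched via Monte Carlo integration over Dirichlet priors, as a minimal illustration. All function names, the symmetric Dirichlet(alpha = 1) prior, and the example tables below are assumptions for illustration, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_marginal_mc(counts, alpha=1.0, n_draws=20000):
    # Monte Carlo estimate, on the log scale, of the prior expectation of the
    # (unnormalized) multinomial likelihood under a symmetric Dirichlet prior.
    counts = np.asarray(counts, float).ravel()
    p = rng.dirichlet(np.full(counts.size, alpha), size=n_draws)
    loglik = np.log(p) @ counts
    m = loglik.max()                     # log-sum-exp for numerical stability
    return m + np.log(np.mean(np.exp(loglik - m)))

def bayes_factor_independence(table, alpha=1.0):
    # Bayes factor of the saturated multinomial model against independence.
    # Under independence the likelihood factorizes into row- and column-margin
    # terms, and the shared multinomial coefficient cancels in the ratio.
    table = np.asarray(table, float)
    log_m_indep = (log_marginal_mc(table.sum(axis=1), alpha)
                   + log_marginal_mc(table.sum(axis=0), alpha))
    log_m_full = log_marginal_mc(table, alpha)
    return float(np.exp(log_m_full - log_m_indep))
```

A strongly associated table yields a Bayes factor well above one, while a table with proportional margins yields a small one.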

2.
We consider Bayesian testing for independence of two categorical variables with covariates for a two-stage cluster sample. This is a difficult problem because we have a complex sample (i.e. a cluster sample), not a simple random sample. Our approach is to convert the cluster sample with covariates into an equivalent simple random sample without covariates, which provides a surrogate of the original sample. Then, this surrogate sample is used to compute the Bayes factor to make an inference about independence. We apply our methodology to data from the Trends in International Mathematics and Science Study [30] for fourth-grade US students to assess the association between the mathematics and science scores represented as categorical variables. We show that if there is strong association between the two categorical variables, there is no significant difference between the tests with and without the covariates. We also performed a simulation study to further understand the effect of covariates in various situations. We found that for borderline cases (moderate association between the two categorical variables), there are noticeable differences between the tests with and without covariates.

3.
We propose a Bayesian computation and inference method for the Pearson-type chi-squared goodness-of-fit test with right-censored survival data. Our test statistic is derived from the classical Pearson chi-squared test using the differences between the observed and expected counts in the partitioned bins. In the Bayesian paradigm, we generate posterior samples of the model parameter using the Markov chain Monte Carlo procedure. By replacing the maximum likelihood estimator in the quadratic form with a random observation from the posterior distribution of the model parameter, we can easily construct a chi-squared test statistic. The degrees of freedom of the test equal the number of bins and thus are independent of the dimensionality of the underlying parameter vector. The test statistic recovers the conventional Pearson-type chi-squared structure. Moreover, the proposed algorithm circumvents the burden of evaluating the Fisher information matrix, its inverse and the rank of the variance–covariance matrix. We examine the proposed model diagnostic method using simulation studies and illustrate it with a real data set from a prostate cancer study.
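The core idea, a posterior draw replacing the MLE in the Pearson quadratic form, can be sketched for complete (uncensored) exponential data rather than the paper's right-censored setting. The function name, the Gamma(1, 1) conjugate prior, and the bin edges are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def bayes_pearson_stat(x, edges, a0=1.0, b0=1.0):
    # Draw the exponential rate from its conjugate posterior:
    # rate | x ~ Gamma(a0 + n, b0 + sum(x)) under a Gamma(a0, b0) prior.
    x = np.asarray(x, float)
    n = x.size
    lam = rng.gamma(a0 + n, 1.0 / (b0 + x.sum()))
    # Expected bin counts evaluated at the posterior draw instead of the MLE;
    # the statistic keeps the conventional Pearson form sum (O - E)^2 / E.
    bins = np.concatenate(([0.0], np.asarray(edges, float), [np.inf]))
    probs = np.diff(1.0 - np.exp(-lam * bins))
    observed, _ = np.histogram(x, bins=bins)
    expected = n * probs
    return float(np.sum((observed - expected) ** 2 / expected))
```

Per the abstract, the reference distribution is chi-squared with degrees of freedom equal to the number of bins (here `len(edges) + 1`), regardless of the parameter dimension.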

4.
Prediction of random effects is an important problem with expanding applications. In the simplest context, the problem corresponds to prediction of the latent value (the mean) of a realized cluster selected via two-stage sampling. Recently, Stanek and Singer [Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 119–130] developed best linear unbiased predictors (BLUP) under a finite population mixed model that outperform BLUPs from mixed models and superpopulation models. Their setup, however, does not allow for unequally sized clusters. To overcome this drawback, we consider an expanded finite population mixed model based on a larger set of random variables that span a higher dimensional space than those typically applied to such problems. We show that BLUPs for linear combinations of the realized cluster means derived under such a model have considerably smaller mean squared error (MSE) than those obtained from mixed models, superpopulation models, and finite population mixed models. We motivate our general approach by an example developed for two-stage cluster sampling and show that it faithfully captures the stochastic aspects of sampling in the problem. We also consider simulation studies to illustrate the increased accuracy of the BLUP obtained under the expanded finite population mixed model.
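The expanded finite population model is the paper's contribution; as background, the textbook BLUP of a realized cluster mean in a balanced one-way random effects model can be sketched as below. This assumes known variance components and equal cluster sizes (the very restriction the paper relaxes), and the function name is illustrative:

```python
import numpy as np

def blup_cluster_means(y, sigma2_b, sigma2_e):
    # BLUP of realized cluster means in the balanced one-way random effects
    # model y_ij = mu + b_i + e_ij: shrink each cluster mean toward the grand
    # mean by k = sigma2_b / (sigma2_b + sigma2_e / m).
    y = np.asarray(y, float)          # shape (clusters, units per cluster)
    m = y.shape[1]
    grand = y.mean()
    k = sigma2_b / (sigma2_b + sigma2_e / m)
    return grand + k * (y.mean(axis=1) - grand)
```

Shrinking noisy cluster means toward the grand mean is what gives the BLUP its smaller MSE than the raw cluster means.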

5.
Bayes credibility limits for small proportions from stratified and fixed-size cluster samples are discussed. Ericson's (1969, JRSS B) Beta-Binomial and Dirichlet-Multinomial priors are used. Approximate limits that are appropriate for large samples and small proportions are derived in both cases. These allow asymptotic comparisons of the efficacy of stratified and cluster sampling relative to simple random sampling for estimating small proportions. Procedures for the selection of hyperparameters are also presented.
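For the simple-random-sampling baseline, exact Beta-Binomial credibility limits for a small proportion can be sketched as posterior quantiles (the function name and the uniform Beta(1, 1) default are illustrative; the paper's stratified and cluster versions are more involved):

```python
from scipy.stats import beta

def beta_binomial_limits(y, n, a=1.0, b=1.0, level=0.95):
    # Beta(a, b) prior with y successes out of n gives a
    # Beta(a + y, b + n - y) posterior; the credibility limits are its
    # equal-tail quantiles.
    post = beta(a + y, b + n - y)
    tail = (1.0 - level) / 2.0
    return post.ppf(tail), post.ppf(1.0 - tail)
```

For small proportions and large n, these limits are well approximated by gamma-based limits, which is the kind of approximation the asymptotic comparisons rely on.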

6.
Starting with a decision-theoretic formulation of simultaneous testing of null hypotheses against two-sided alternatives, a procedure controlling the Bayesian directional false discovery rate (BDFDR) is developed through controlling the posterior directional false discovery rate (PDFDR). This is an alternative to Lewis and Thayer [2004. A loss function related to the FDR for random effects multiple comparison. J. Statist. Plann. Inference 125, 49–58] with better control of the BDFDR. Moreover, it is optimal in the sense of being the non-randomized part of the procedure maximizing the posterior expectation of the directional per-comparison power rate given the data, while controlling the PDFDR. A corresponding empirical Bayes method is proposed in the context of a one-way random effects model. A simulation study shows that the proposed Bayes and empirical Bayes methods perform much better from a Bayesian perspective than the procedures available in the literature.

7.
The problem of sequential estimation of the mean with quadratic loss and fixed cost per observation is considered within the Bayesian framework. Instead of fully sequential sampling, a two-stage sampling technique is introduced to solve the problem. The proposed two-stage procedure is robust in the sense that it does not depend on the distribution of the outcome variables or on the prior. It is shown to be asymptotically no worse than the optimal fixed-sample-size procedures for arbitrary distributions, and to be asymptotically Bayes for distributions in a one-parameter exponential family.

8.
Let X1, X2, … denote independent and identically distributed random vectors whose common distribution belongs to a multiparameter exponential family, and consider the problem of sequentially testing separated hypotheses. It is known that the sequential procedure which continues sampling until the likelihood ratio statistic for testing one of the hypotheses exceeds a given level approximates the optimal Bayesian procedure, under general conditions on the loss function and prior distribution. Here we ask whether the approximate procedure is Bayes risk efficient, that is, whether the ratio of the Bayes risk of the approximate procedure to the Bayes risk of the optimal procedure approaches one as the cost of sampling approaches zero. We show that the answer depends on the choice of certain parameters in the approximation and on the dimensions of the hypotheses.
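The building block of such procedures, sampling until a log likelihood ratio leaves a band, can be sketched as a Wald-style sequential test of two simple normal-mean hypotheses. This is far simpler than the paper's multiparameter separated-hypotheses setting; the function name and the symmetric boundary are illustrative:

```python
import math

def sprt_normal(xs, mu0, mu1, a=100.0):
    # Sequential test of N(mu0, 1) vs N(mu1, 1): accumulate the log
    # likelihood ratio and stop as soon as it leaves (-log a, log a).
    llr, n = 0.0, 0
    for x in xs:
        n += 1
        # Per-observation log likelihood-ratio increment for unit variance.
        llr += (mu1 - mu0) * x - (mu1 ** 2 - mu0 ** 2) / 2.0
        if abs(llr) >= math.log(a):
            return (1 if llr > 0 else 0), n   # decision, sample size
    return None, n                            # no decision within the data
```

The boundary `log a` plays the role of the "given level" above; the Bayes-risk-efficiency question concerns how this choice behaves as the sampling cost tends to zero.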

9.
This paper studies the problem of designing a curtailed Bayesian sampling plan (CBSP) with Type-II censored data. We first derive the Bayesian sampling plan (BSP) for exponential distributions based on Type-II censored samples under a general loss function. For the conjugate prior with a quadratic loss function, an explicit expression for the Bayes decision function is derived. Using the monotonicity of the Bayes decision function, a new Bayesian sampling plan modified by the curtailment procedure, called a CBSP, is proposed. It is shown that the risk of the CBSP is less than or equal to that of the BSP. Comparisons among some existing BSPs and the proposed CBSP are given. Monte Carlo simulations are conducted, and numerical results indicate that the CBSP outperforms the existing sampling plans when the time loss is considered in the loss function.

10.
A generalized version of the inverted exponential distribution (IED) is considered in this paper. This lifetime distribution is capable of modeling various shapes of failure rates, and hence various aging criteria. The model can be considered as another useful two-parameter generalization of the IED. Maximum likelihood and Bayes estimates for the two parameters of the generalized inverted exponential distribution (GIED) are obtained on the basis of a progressively Type-II censored sample. We also show the existence, uniqueness and finiteness of the maximum likelihood estimates of the parameters of the GIED based on progressively Type-II censored data. Bayesian estimates are obtained under a squared error loss function, evaluated both by Lindley's approximation method and via an importance sampling technique; the latter is also used to compute the associated credible intervals. We further consider the Bayes prediction problem based on the observed samples, and provide the appropriate predictive intervals. Monte Carlo simulations are performed to compare the performances of the proposed methods, and a data set is analyzed for illustrative purposes.

11.
In this article two-stage hierarchical Bayesian models are used for the observed occurrences of events in a rectangular region. Two Bayesian variable window scan statistics are introduced to test the null hypothesis that the observed events follow a specified two-stage hierarchical model versus an alternative that indicates a local increase in the average number of observed events in a subregion (clustering). Both procedures are based on a sequence of Bayes factors and their p-values that have been generated via simulation of posterior samples of the parameters, under the null and alternative hypotheses. The posterior samples of the parameters have been generated by employing Gibbs sampling via the introduction of auxiliary variables. Numerical results are presented to evaluate the performance of these variable window scan statistics.

12.
This paper describes Bayesian inference and prediction for the two-parameter Weibull distribution when the data are Type-II censored. The aim of this paper is twofold. First, we consider Bayesian inference of the unknown parameters under different loss functions. The Bayes estimates cannot be obtained in closed form, so we use a Gibbs sampling procedure to draw Markov chain Monte Carlo (MCMC) samples, which are used to compute the Bayes estimates and to construct symmetric credible intervals. Further, we consider Bayes prediction of the future order statistics based on the observed sample. We consider the posterior predictive density of the future observations and also construct a predictive interval with a given coverage probability. Monte Carlo simulations are performed to compare the different methods, and one data analysis is performed for illustration purposes.

13.
The performance of the sampling strategy used in the Botswana AIDS Impact Survey II (BAISII) has been studied in detail under a randomized response technique. We have shown that alternative strategies based on the Rao–Hartley–Cochran (RHC) sampling scheme for the selection of first stage units perform much better than other strategies. In particular, the combination of RHC for the selection of first stage units (fsu's) and systematic sampling for the selection of second stage units (ssu's) performs best when the sample size is small, whereas RHC combined with SRSWOR performs best when the sample size is large. In view of the present findings it is recommended that the BAISII survey be studied in more detail, incorporating more indicators and increased sample sizes, because the BAISII survey design is extensively used for large-scale surveys in Southern African countries.

14.
This paper considers Bayesian sampling plans for the exponential distribution with random censoring. The efficient Bayesian sampling plan for a general loss function is derived. This sampling plan may make decisions prior to the end of the life test experiment, yet its decision function is the same as the Bayes decision function based on data collected at the end of the life test experiment. Compared with the optimal Bayesian sampling plan of Chen et al. (2004), the efficient Bayesian sampling plan has a smaller Bayes risk due to the shorter duration of the life test experiment. Computations of the efficient Bayes risks for the conjugate prior are given. Numerical comparisons between the proposed efficient Bayesian sampling plan and the optimal Bayesian sampling plan of Chen et al. (2004) under two special decision losses, including the quadratic decision loss, are provided. Numerical results also demonstrate that the performance of the proposed efficient sampling plan is superior to that of the optimal sampling plan of Chen et al. (2004).

15.
Stochastic volatility models have been widely applied in empirical finance, for example in option pricing and risk management. Recent advances in Markov chain Monte Carlo (MCMC) techniques have made it possible to fit stochastic volatility models of increasing complexity within a Bayesian framework. In this article, we propose a new Bayesian model selection procedure based on the Bayes factor and a classical thermodynamic integration technique named path sampling to select an appropriate stochastic volatility model. The performance of the developed procedure is illustrated with an application to the daily pound/dollar exchange rate data set.
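Path sampling itself can be illustrated on a toy conjugate model where the exact log marginal likelihood is known in closed form, rather than on a stochastic volatility model. The identity is log m = ∫₀¹ E_t[log p(x | θ)] dt, where E_t is taken under the power posterior p_t(θ) ∝ prior(θ)·likelihood(θ)^t. The function name, the N(0, 1) prior, and the linear temperature grid below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_marginal_path_sampling(x, n_temps=50, n_draws=4000):
    # Thermodynamic integration for the toy model x_i ~ N(theta, 1) with a
    # N(0, 1) prior on theta, integrating E_t[log likelihood] over t in [0, 1].
    x = np.asarray(x, float)
    n, S = x.size, x.sum()
    ts = np.linspace(0.0, 1.0, n_temps)
    means = np.empty(n_temps)
    for i, t in enumerate(ts):
        # The power posterior is conjugate here, so we draw from it directly;
        # in a stochastic volatility model this step would require MCMC.
        prec = 1.0 + t * n
        theta = rng.normal(t * S / prec, np.sqrt(1.0 / prec), size=n_draws)
        loglik = (-0.5 * n * np.log(2.0 * np.pi)
                  - 0.5 * ((x[None, :] - theta[:, None]) ** 2).sum(axis=1))
        means[i] = loglik.mean()
    # Trapezoidal rule over the temperature grid.
    return float(np.sum(np.diff(ts) * (means[1:] + means[:-1]) / 2.0))
```

For this model the exact answer is log m = -(n/2)·log(2π) - (1/2)·log(n+1) - (1/2)·(Σx² - S²/(n+1)), so the estimate can be checked directly.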

16.
We propose a survey weighted quadratic inference function method for the analysis of data collected from longitudinal surveys, as an alternative to the survey weighted generalized estimating equation method. The procedure yields estimators of model parameters, which are shown to be consistent and have a limiting normal distribution. Furthermore, based on the inference function, a pseudolikelihood ratio type statistic for testing a composite hypothesis on model parameters and a statistic for testing the goodness of fit of the assumed model are proposed. We establish their asymptotic distributions as weighted sums of independent chi-squared random variables and obtain Rao–Scott corrections to those statistics leading, approximately, to a chi-squared distribution. We examine the performance of the proposed methods in a simulation study.

17.
We consider the adjustment, based upon a sample of size n, of collections of vectors drawn from either an infinite or a finite population. The vectors may be judged to be either normally distributed or, more generally, second-order exchangeable. We develop the work of Goldstein and Wooff (1998) to show how the familiar univariate finite population corrections (FPCs) naturally generalise to individual quantities in the multivariate population. The types of information we gain by sampling are identified with the orthogonal canonical variable directions derived from a generalised eigenvalue problem. These canonical directions share the same coordinate representation for all sample sizes and, for equally defined individuals, all population sizes, enabling simple comparisons between the effects of different sample sizes and of different population sizes. We conclude by considering how the FPC is modified for multivariate cluster sampling with exchangeable clusters. In univariate two-stage cluster sampling, we may decompose the variance of the population mean into the sum of the variance of cluster means and the variance of cluster members within clusters. The first term has an FPC relating to the sampling fraction of clusters; the second has an FPC relating to the within-cluster sampling fraction. We illustrate how this generalises in the multivariate case. We decompose the variance into two terms: the first relating to multivariate finite population sampling of clusters, and the second to multivariate finite population sampling within clusters. We solve two generalised eigenvalue problems to show how to generalise the univariate result to the multivariate case: each of the two FPCs attaches to one, and only one, of the two eigenbases.
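The familiar univariate FPC that this abstract generalises can be stated in two lines of code (the function name is illustrative):

```python
import numpy as np

def fpc_var_mean(y, N):
    # Estimated variance of the sample mean under simple random sampling
    # without replacement from a population of size N: the usual s^2 / n,
    # deflated by the finite population correction (1 - n/N).
    y = np.asarray(y, float)
    n = y.size
    return (1.0 - n / N) * y.var(ddof=1) / n
```

As the sampling fraction n/N approaches one, the correction drives the variance to zero, since the whole population has then been observed.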

18.
In this article, we study the problem of selecting the best population from among several exponential populations based on interval censored samples using a Bayesian approach. A Bayes selection procedure and a curtailed Bayes selection procedure are derived, and we show that these two procedures are equivalent. A numerical example is provided to illustrate the application of the two selection procedures, and a Monte Carlo simulation is used to study their performance. The numerical results of the simulation study demonstrate that the curtailed Bayes selection procedure performs well because it can substantially reduce the duration of the life test experiment.

19.
Given a sample from a finite population, we provide a nonparametric Bayesian prediction interval for a finite population mean when the standard normality assumption may be tenuous. We do so using a Dirichlet process (DP), a nonparametric Bayesian procedure which is currently receiving much attention. An asymptotic Bayesian prediction interval is well known, but it does not incorporate all the features of the DP. We show how to compute the exact prediction interval under the full Bayesian DP model. However, under the DP, when the population size is much larger than the sample size, the computational task becomes expensive. Therefore, for simplicity, one might still want useful and accurate approximations to the prediction interval. For this purpose, we provide a Bayesian procedure which approximates the distribution using the exchangeability property (correlation) of the DP together with normality. We compare the exact interval and our approximate interval with three standard intervals, namely the design-based interval under simple random sampling, an empirical Bayes interval, and a moment-based interval which uses the mean and variance under the DP. These latter three intervals do not fully utilize the posterior distribution of the finite population mean under the DP. Using several numerical examples and a simulation study, we show that our approximate Bayesian interval is a good competitor to the exact Bayesian interval for different combinations of sample sizes and population sizes.
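A crude sketch of the nonparametric flavour of such intervals uses the Bayesian bootstrap (a limiting case of the DP as its precision tends to zero) to represent posterior uncertainty about the unsampled units' mean. This is not the paper's exact or approximate interval, which account for the DP precision and within-draw variability; the function name and settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def bb_prediction_interval(y, N, level=0.95, n_rep=10000):
    # Draw Dirichlet(1, ..., 1) weights on the n observed values to represent
    # the posterior over the unseen units' mean, then combine with the
    # observed total to get draws of the finite population mean.
    y = np.asarray(y, float)
    n = y.size
    w = rng.dirichlet(np.ones(n), size=n_rep)
    unseen_mean = w @ y
    pop_mean = (y.sum() + (N - n) * unseen_mean) / N
    tail = (1.0 - level) / 2.0
    return np.quantile(pop_mean, [tail, 1.0 - tail])
```

As N grows relative to n, the interval is dominated by uncertainty about the unseen units, mirroring the computational regime the abstract highlights.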

20.
A two-stage stepup procedure is defined and an explicit formula for the FDR of this procedure is derived under any distributional setting. Sets of critical values are determined that provide control of the FDR of a two-stage stepup procedure under an i.i.d. mixture model. A class of two-stage FDR procedures modifying the Benjamini–Hochberg (BH) procedure and containing the one given in Storey et al. [2004. Strong control, conservative point estimation and simultaneous conservative consistency of false discovery rates: a unified approach. J. Roy. Statist. Soc. Ser. B 66, 187–205] is obtained. The FDR controlling property of the Storey–Taylor–Siegmund procedure is proved under independence only, by an argument different from that presented by these authors. A single-stage stepup procedure controlling the FDR under any form of dependence, which is different from and in some situations performs better than the Benjamini–Yekutieli (BY) procedure, is given before discussing how to obtain two-stage versions of the BY and the new procedure. Simulations reveal that the procedures proposed in this article under the mixture model can perform quite well in terms of improving the FDR control of the BH procedure. However, the similar idea of improving the FDR control of a stepup procedure under any form of dependence does not seem to work.
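The baseline that the two-stage procedures modify is the single-stage BH stepup, which can be sketched directly from its definition (the function name is illustrative):

```python
import numpy as np

def bh_stepup(pvals, q=0.05):
    # Benjamini-Hochberg stepup at level q: reject the hypotheses with the
    # k smallest p-values, where k is the largest i such that the i-th
    # ordered p-value satisfies p_(i) <= i * q / m.
    p = np.asarray(pvals, float)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True     # stepup: reject all smaller p-values
    return reject
```

Note the stepup character: a p-value above its own threshold can still be rejected if some larger ordered p-value falls below its threshold.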
