Similar literature: 20 similar documents found.
1.
Abstract. An optimal Bayesian decision procedure for hypothesis testing in normal linear models, based on intrinsic model posterior probabilities, is considered. It is proven that these posterior probabilities are simple functions of the classical F-statistic, so the evaluation of the procedure can be carried out analytically through the frequentist analysis of the posterior probability of the null. An asymptotic analysis proves that, under mild conditions on the design matrix, the procedure is consistent. For any hypothesis test it is also shown that there is a one-to-one mapping – which we call the calibration curve – between the posterior probability of the null hypothesis and the classical p-value. This curve adds substantial knowledge about the possible discrepancies between the Bayesian and the p-value measures of evidence, and it permits a better understanding of the serious difficulties encountered when interpreting p-values in linear models. A specific illustration of the variable selection problem is given.
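As a quick, hedged illustration of how far a p-value can sit from a posterior probability of the null, the sketch below computes the well-known Sellke–Bayarri–Berger lower bound on P(H0 | data) under equal prior odds (valid for p < 1/e). It is not the intrinsic-prior calibration curve of the paper above, only a standard reference point for the same kind of discrepancy.

```python
import math

def sbb_lower_bound(p):
    """Sellke-Bayarri-Berger lower bound on P(H0 | data) under equal prior
    odds, valid for p < 1/e: the Bayes factor in favour of H0 is at least
    -e * p * ln(p), so P(H0 | data) >= bf / (1 + bf)."""
    bf = -math.e * p * math.log(p)   # lower bound on the Bayes factor for H0
    return bf / (1.0 + bf)

for p in (0.05, 0.01, 0.001):
    print(f"p = {p:.3f}  ->  P(H0 | data) >= {sbb_lower_bound(p):.3f}")
```

For example, a p-value of 0.05 corresponds to a posterior probability of the null of at least about 0.29 under this bound, which is one concrete way the two evidence scales can disagree.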

2.
We propose a new meta-analysis method for pooling univariate p-values across independent studies and compare it with the methods of Fisher, Stouffer, and George through simulations. We identify sub-spaces in which each of these methods is optimal and propose a strategy for choosing the best meta-analysis method under different sub-spaces. We then compare these meta-analysis approaches using p-values from periodicity tests of 4,940 S. pombe genes from 10 independent time-course experiments and show that our new approach ranks the periodic, conserved, and cycling genes much higher, and detects at least as many genes among the top 1,000, as the other methods.
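For reference, here is a minimal sketch of two of the classical pooling rules named above (Fisher's chi-square combination and an unweighted Stouffer z-method). This is the textbook version of those baselines, not the new method proposed in the paper.

```python
import numpy as np
from scipy import stats

def fisher_combine(pvals):
    """Fisher: -2 * sum(log p_i) ~ chi-square with 2k degrees of freedom under H0."""
    pvals = np.asarray(pvals, dtype=float)
    stat = -2.0 * np.sum(np.log(pvals))
    return stats.chi2.sf(stat, df=2 * len(pvals))

def stouffer_combine(pvals):
    """Stouffer: sum(Phi^-1(1 - p_i)) / sqrt(k) ~ N(0, 1) under H0 (unweighted)."""
    pvals = np.asarray(pvals, dtype=float)
    z = stats.norm.isf(pvals)              # Phi^-1(1 - p_i)
    return stats.norm.sf(np.sum(z) / np.sqrt(len(pvals)))

p = [0.02, 0.20, 0.07, 0.45]
print(fisher_combine(p), stouffer_combine(p))
```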

3.
Combining p-values from statistical tests across different studies is the most commonly used approach in meta-analysis for evolutionary biology. The most commonly used p-value combination methods include the z-transform tests (e.g., the unweighted z-test and the weighted z-test) and the gamma-transform tests (e.g., the CZ method [Z. Chen, W. Yang, Q. Liu, J.Y. Yang, J. Li, and M.Q. Yang, A new statistical approach to combining p-values using gamma distribution and its application to genomewide association study, Bioinformatics 15 (2014), p. S3]). However, none of these existing p-value combination methods is uniformly most powerful in all situations [Chen et al. 2014]. In this paper, we propose a meta-analysis method based on the gamma distribution, MAGD, which pools the p-values from independent studies. The newly proposed test, MAGD, allows flexible accommodation of different levels of heterogeneity of effect sizes across individual studies, and it simultaneously retains the characteristics of the z-transform tests and the gamma-transform tests. We also propose an easy-to-implement resampling approach for estimating the empirical p-values of MAGD in finite samples. Simulation studies and two data applications show that the proposed method MAGD is essentially as powerful as the z-transform tests (respectively, the gamma-transform tests) when effect sizes are homogeneous (respectively, heterogeneous) across studies.
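A hedged sketch of the generic gamma-transform idea referenced above: each p-value is mapped to a Gamma(a, 1) upper quantile and the quantiles are summed, so under the null the sum is Gamma(k·a, 1); Fisher's method is the special case a = 1 up to a factor of 2. The shape parameter below is illustrative only, and this is not the MAGD statistic itself.

```python
import numpy as np
from scipy import stats

def gamma_combine(pvals, shape=1.0):
    """Map each p_i to a Gamma(shape, 1) upper quantile and sum them.
    Under H0 each transform is Gamma(shape, 1) and they are independent,
    so the sum is Gamma(k * shape, 1); return the combined upper-tail p-value."""
    pvals = np.asarray(pvals, dtype=float)
    t = stats.gamma.isf(pvals, a=shape)           # per-study transforms
    return stats.gamma.sf(np.sum(t), a=shape * len(pvals))

p = [0.02, 0.20, 0.07, 0.45]
print(gamma_combine(p, shape=1.0), gamma_combine(p, shape=3.0))
```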

4.
A permutation testing approach in multivariate mixed models is presented. The proposed solutions allow for testing between-unit effects; they are exact under some assumptions and approximate in the more general case. The classes of models covered by this approach include generalized linear models, vector generalized additive models and other nonparametric models based on smoothing. Moreover, the approach does not assume that observations from different units have the same distribution. The extensions to a multivariate framework are presented and discussed. The proposed multivariate tests exploit the dependence among variables, hence increasing power relative to other standard solutions (e.g. Bonferroni correction) that combine many univariate tests into an overall one. Two applications to real data from psychological and ecological studies are given, and a simulation study provides some insight into the unbiasedness of the tests and their power. The methods were implemented in the R package flip, freely available on CRAN.
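As an elementary, hedged illustration of the permutation principle on which the approach above builds (not the between-unit mixed-model tests themselves, and unrelated to the flip implementation), the sketch below approximates a two-sample permutation p-value by reshuffling group labels.

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=10_000, seed=None):
    """Two-sample permutation test for a difference in means: reshuffle the
    pooled observations over the two groups and count how often the permuted
    |mean difference| is at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = abs(np.mean(x) - np.mean(y))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = abs(np.mean(perm[:len(x)]) - np.mean(perm[len(x):]))
        count += diff >= observed
    return (count + 1) / (n_perm + 1)   # add-one correction keeps the p-value positive

data_rng = np.random.default_rng(0)
x = data_rng.normal(0.5, 1.0, 30)
y = data_rng.normal(0.0, 1.0, 30)
print(permutation_pvalue(x, y, seed=1))
```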

5.
SOME MODELS FOR OVERDISPERSED BINOMIAL DATA
Various models are currently used to model overdispersed binomial data, and it is not always clear which model is appropriate for a given situation. Here we examine the assumptions and discuss the problems and pitfalls of some of these models. We focus on clustered data with one level of nesting, briefly touching on more complex strata and longitudinal data. The estimation procedures are illustrated and some critical comments are made about the various models. We indicate which models are restrictive, and in what ways, and which can be extended to model more complex situations. In addition, some inadequacies in testing procedures are noted. Recommendations as to which models should be used, and when, are made.
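For concreteness, the beta-binomial model is a standard example of the kind of overdispersed binomial model discussed above (cited here as a representative case, not necessarily one the paper singles out): if $Y \mid \pi \sim \mathrm{Bin}(n, \pi)$ and $\pi \sim \mathrm{Beta}(\alpha, \beta)$ with mean $\mu = \alpha/(\alpha+\beta)$ and intracluster correlation $\rho = 1/(\alpha+\beta+1)$, the marginal variance is inflated relative to the plain binomial:

```latex
\operatorname{Var}(Y) = n\mu(1-\mu)\,\bigl[\,1 + (n-1)\rho\,\bigr]
\quad\text{versus}\quad
\operatorname{Var}_{\mathrm{Bin}}(Y) = n\mu(1-\mu).
```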

6.
Abstract

In statistical hypothesis testing, a p-value is expected to be uniformly distributed on the interval (0, 1) under the null hypothesis. However, some p-values, such as the generalized p-value and the posterior predictive p-value, cannot be assured of this property. In this paper, we propose an adaptive p-value calibration approach and show that the calibrated p-value is asymptotically uniformly distributed. For the Behrens–Fisher problem and a goodness-of-fit test under a normal model, the calibrated p-values are constructed and their behavior is evaluated numerically. Simulations show that the calibrated p-values are superior to the original ones.
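A minimal, hedged sketch of the diagnostic motivating the calibration above: simulate data under a null hypothesis, compute a p-value for each replicate, and check the empirical distribution against Uniform(0, 1) with a Kolmogorov–Smirnov test. The one-sample t-test used here is just a stand-in whose p-value is exactly uniform; a generalized or posterior predictive p-value would be plugged into the same loop.

```python
import numpy as np
from scipy import stats

def null_pvalues(n_rep=2_000, n=20, seed=0):
    """Simulate N(0, 1) samples under H0: mu = 0 and collect t-test p-values."""
    rng = np.random.default_rng(seed)
    return np.array([stats.ttest_1samp(rng.normal(size=n), 0.0).pvalue
                     for _ in range(n_rep)])

pvals = null_pvalues()
# If the p-value is well calibrated, this KS test should not reject uniformity.
print(stats.kstest(pvals, "uniform"))
```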

7.
In this article, we consider exact tests in panel data regression models with one-way and two-way error components, for which no exact tests are available. Exact inferences using generalized p-values are obtained. When there are several groups of panel data, tests for equal coefficients under one-way and two-way error components are derived.

8.
Just as frequentist hypothesis tests have been developed to check model assumptions, prior predictive p-values and other Bayesian p-values check prior distributions as well as other model assumptions. These model checks not only suffer from the usual threshold dependence of p-values, but also from the suppression of model uncertainty in subsequent inference. One solution is to transform Bayesian and frequentist p-values for model assessment into a fiducial distribution across the models. Averaging the Bayesian or frequentist posterior distributions with respect to the fiducial distribution can reproduce results from Bayesian model averaging or classical fiducial inference.

9.
Hierarchical models are popular in many applied statistics fields, including small area estimation. One well-known model employed in this particular field is the Fay–Herriot model, in which the unobservable parameters are assumed to be Gaussian. In hierarchical models, assumptions about unobservable quantities are difficult to check. For a special case of the Fay–Herriot model, Sinharay and Stern [2003. Posterior predictive model checking in Hierarchical models. J. Statist. Plann. Inference 111, 209–221] showed that violations of the assumptions about the random effects are difficult to detect using posterior predictive checks. In the present paper we consider two extensions of the Fay–Herriot model in which the random effects are assumed to be distributed according to either an exponential power (EP) distribution or a skewed EP distribution. We aim to explore the robustness of the Fay–Herriot model for the estimation of individual area means as well as the empirical distribution function of their ‘ensemble’. Our findings, which are based on a simulation experiment, are largely consistent with those of Sinharay and Stern as far as the efficient estimation of individual small area parameters is concerned. However, when estimating the empirical distribution function of the ‘ensemble’ of small area parameters, results are more sensitive to the failure of distributional assumptions.

10.
We consider portmanteau tests for testing the adequacy of structural vector autoregressive moving-average (VARMA) models under the assumption that the errors are uncorrelated but not necessarily independent. The structural forms are mainly used in econometrics to introduce instantaneous relationships between economic variables. We first study the joint distribution of the quasi-maximum likelihood estimator (QMLE) and the noise empirical autocovariances. We then derive the asymptotic distribution of residual empirical autocovariances and autocorrelations under weak assumptions on the noise. We deduce the asymptotic distribution of the Ljung-Box (or Box-Pierce) portmanteau statistics in this framework. It is shown that the asymptotic distribution of the portmanteau tests is that of a weighted sum of independent chi-squared random variables, which can be quite different from the usual chi-squared approximation used under independent and identically distributed (iid) assumptions on the noise. Hence, we propose a method to adjust the critical values of the portmanteau tests. Monte Carlo experiments illustrate the finite-sample performance of the modified portmanteau test.
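As a hedged reminder of the classical recipe discussed above, the sketch below computes the standard Ljung-Box statistic from residual autocorrelations and refers it to the usual chi-square distribution. Under the weak (uncorrelated but dependent) noise studied in the paper, that chi-square reference is exactly what may fail, so this is the baseline procedure only, not the paper's adjusted test.

```python
import numpy as np
from scipy import stats

def ljung_box(residuals, max_lag, fitted_params=0):
    """Classical Ljung-Box statistic Q = n(n+2) * sum_k r_k^2 / (n - k),
    referred to a chi-square with (max_lag - fitted_params) degrees of freedom."""
    e = np.asarray(residuals, dtype=float)
    e = e - e.mean()
    n = len(e)
    denom = np.sum(e**2)
    q = 0.0
    for k in range(1, max_lag + 1):
        r_k = np.sum(e[k:] * e[:-k]) / denom     # lag-k sample autocorrelation
        q += r_k**2 / (n - k)
    q *= n * (n + 2)
    return q, stats.chi2.sf(q, df=max_lag - fitted_params)

rng = np.random.default_rng(0)
print(ljung_box(rng.normal(size=200), max_lag=10))
```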

11.
Accurate and efficient methods to detect unusual clusters of abnormal activity are needed in many fields such as medicine and business. Often the size of clusters is unknown; hence, multiple (variable) window scan statistics are used to identify clusters using a set of different potential cluster sizes. We give an efficient method to compute the exact distribution of multiple window discrete scan statistics for higher-order, multi-state Markovian sequences. We define a Markov chain to efficiently keep track of probabilities needed to compute p-values for the statistic. The state space of the Markov chain is set up by a criterion developed to identify strings that are associated with observing the specified values of the statistic. Using our algorithm, we identify cases where the available approximations do not perform well. We demonstrate our methods by detecting unusual clusters of made free throw shots by National Basketball Association players during the 2009–2010 regular season.

12.
In biological, medical, and social sciences, multilevel structures are very common. Hierarchical models that take into account the dependencies among subjects within the same level are necessary. In this article, we introduce a semiparametric hierarchical composite quantile regression model for hierarchical data. This model (i) keeps the easy interpretability of the simple parametric model; (ii) retains some of the flexibility of the complex nonparametric model; (iii) relaxes the assumptions that the noise variances and higher-order moments exist and are finite; and (iv) takes the dependencies among subjects within the same hierarchy into consideration. We establish the asymptotic properties of the proposed estimators. Our simulation results show that the proposed method is more efficient than the least-squares-based method for many non-normally distributed errors. We illustrate our methodology with a real biometric data set.
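As background for the method above, the usual composite quantile regression criterion (stated here in its basic linear form, not the paper's semiparametric hierarchical version) minimizes a sum of check losses over several quantile levels $\tau_1 < \dots < \tau_K$:

```latex
(\hat b_1,\dots,\hat b_K,\hat\beta)
  = \arg\min_{b_1,\dots,b_K,\beta}
    \sum_{k=1}^{K}\sum_{i=1}^{n}
    \rho_{\tau_k}\!\bigl(y_i - b_k - x_i^{\top}\beta\bigr),
\qquad
\rho_{\tau}(u) = u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr).
```

Because the check loss does not involve the error variance, this criterion needs no finite-variance assumption, which is the property the abstract refers to.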

13.
Summary. Microarrays are a powerful new technology that allows for the measurement of the expression of thousands of genes simultaneously. Owing to relatively high costs, sample sizes tend to be quite small. If investigators apply a correction for multiple testing, a very small p-value will be required to declare significance. We use modifications to Chebyshev's inequality to develop a testing procedure that is nonparametric and yields p-values on the interval [0, 1]. We evaluate its properties via simulation and show that it both holds the type I error rate below nominal levels in almost all conditions and can yield p-values denoting significance even with very small sample sizes and stringent corrections for multiple testing.
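A hedged sketch of the basic idea behind a Chebyshev-type p-value (the plain one-sample bound with a known standard deviation, not the paper's modified procedure): Chebyshev's inequality gives P(|X̄ − μ0| ≥ kσ/√n) ≤ 1/k², so 1/k² clipped to [0, 1] is a valid, if conservative, p-value for any distribution with finite variance.

```python
import numpy as np

def chebyshev_pvalue(x, mu0, sigma):
    """Conservative two-sided p-value for H0: mean = mu0, assuming the
    population standard deviation sigma is known (an illustrative
    simplification). Uses P(|Xbar - mu0| >= k * sigma / sqrt(n)) <= 1 / k^2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = abs(x.mean() - mu0) / (sigma / np.sqrt(n))
    return min(1.0, 1.0 / k**2) if k > 0 else 1.0

rng = np.random.default_rng(0)
sample = rng.normal(loc=1.5, scale=1.0, size=8)
print(chebyshev_pvalue(sample, mu0=0.0, sigma=1.0))
```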

14.
We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated, i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

15.
We show that smoothing splines, intrinsic autoregressions (IAR) and state-space models can be formulated as partially specified random-effect models with singular precision (SP). Various fitting methods have been suggested for these models, and this paper investigates the relationships among them once the models have been placed under a single framework. Some methods have previously been shown to give the best linear unbiased predictors (BLUPs) under some random-effect models, and here we show that they are in fact uniformly BLUPs (UBLUPs) under a class of models generated by the SP of the random effects. We offer some new interpretations of the UBLUPs under models of SP and define the BLUE and BLUP in these partially specified models without having to specify the covariance. We also show how full likelihood inference for random-effect models can be carried out for these models, so that the maximum likelihood (ML) and restricted maximum likelihood (REML) estimators can be used for the smoothing parameters in splines, etc.

16.
The particle Gibbs sampler is a systematic way of using a particle filter within Markov chain Monte Carlo. This results in an off‐the‐shelf Markov kernel on the space of state trajectories, which can be used to simulate from the full joint smoothing distribution for a state space model in a Markov chain Monte Carlo scheme. We show that the particle Gibbs Markov kernel is uniformly ergodic under rather general assumptions, which we will carefully review and discuss. In particular, we provide an explicit rate of convergence, which reveals that (i) for fixed number of data points, the convergence rate can be made arbitrarily good by increasing the number of particles and (ii) under general mixing assumptions, the convergence rate can be kept constant by increasing the number of particles superlinearly with the number of observations. We illustrate the applicability of our result by studying in detail a common stochastic volatility model with a non‐compact state space.

17.
In this paper we consider a more realistic aspect of accelerated life testing wherein the stress on an unfailed item is allowed to increase at a preassigned test time. Such tests are known as step-stress tests. Our approach is nonparametric in that we do not make any assumptions about the underlying distribution of life lengths. We introduce a model for step-stress testing based on the ideas of shock models and of wear processes. This model unifies and generalizes two previously proposed models for step-stress testing. We propose an estimator for the life distribution under the use-conditions stress and show that this estimator is strongly consistent.

18.
In this article, two-stage hierarchical Bayesian models are used for the observed occurrences of events in a rectangular region. Two Bayesian variable window scan statistics are introduced to test the null hypothesis that the observed events follow a specified two-stage hierarchical model against an alternative that indicates a local increase in the average number of observed events in a subregion (clustering). Both procedures are based on a sequence of Bayes factors and their p-values, which have been generated via simulation of posterior samples of the parameters under the null and alternative hypotheses. The posterior samples of the parameters have been generated by employing Gibbs sampling via the introduction of auxiliary variables. Numerical results are presented to evaluate the performance of these variable window scan statistics.

19.
In this paper, we consider testing the location parameter with multilevel (or hierarchical) data. A general family of weighted test statistics is introduced. This family includes extensions to multilevel data of familiar procedures such as the t, sign and Wilcoxon signed-rank tests. Under mild assumptions, the test statistics have a null limiting normal distribution, which facilitates their use. An investigation of the relative merits of selected members of the family of tests is carried out theoretically, by deriving their asymptotic relative efficiency (ARE), and empirically, via a simulation study. It is shown that the performance of a test depends on the cluster configurations and on the intracluster correlations. Explicit formulas for optimal weights and a discussion of the impact of omitting a level are provided for 2- and 3-level data. It is shown that using appropriate weights can greatly improve the performance of the tests. Finally, the use of the new tests is illustrated with a real data example.
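A hedged sketch in the spirit of the weighted family above, using its simplest member: a cluster-level sign test with user-supplied weights and a self-normalized normal approximation. The optimal weights derived in the paper are not reproduced here, and the equal weights used by default are purely illustrative.

```python
import numpy as np
from scipy import stats

def clustered_sign_test(clusters, theta0=0.0, weights=None):
    """Weighted sign test for H0: common median = theta0 with clustered data.
    clusters: list of 1-D arrays, one per cluster. Under H0 each cluster sum
    of signs S_i has mean 0 and the S_i are independent across clusters, so
    Z = sum(w_i * S_i) / sqrt(sum((w_i * S_i)^2)) is approximately N(0, 1)
    when the number of clusters is large."""
    s = np.array([np.sum(np.sign(np.asarray(c) - theta0)) for c in clusters])
    w = np.ones_like(s, dtype=float) if weights is None else np.asarray(weights, float)
    z = np.sum(w * s) / np.sqrt(np.sum((w * s) ** 2))
    return z, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(0)
data = [rng.normal(0.3, 1.0, size=rng.integers(3, 8)) for _ in range(40)]
print(clustered_sign_test(data))
```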

20.
Exact conditional p-values based on the likelihood-ratio statistic in logistic regression require accurate computation of the supremum of the likelihood function, particularly for outcomes in the sample space that represent completely-separated or quasi-completely-separated data sets. Current software does not always handle these cases well. Three simple solutions are proposed.
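A hedged numerical illustration of why separation matters for the likelihood supremum (a generic demonstration, not one of the three solutions proposed in the paper): for a completely separated data set, pushing the coefficient along the separating direction drives the logistic log-likelihood toward 0, so the likelihood supremum is 1 and is not attained at any finite maximum likelihood estimate.

```python
import numpy as np

def logistic_loglik(beta0, beta1, x, y):
    """Log-likelihood of a simple logistic regression P(Y=1|x) = sigmoid(b0 + b1*x)."""
    eta = beta0 + beta1 * np.asarray(x, dtype=float)
    # log p = -log(1 + exp(-eta)) when y = 1; log(1 - p) = -log(1 + exp(eta)) when y = 0
    return float(np.sum(np.where(np.asarray(y) == 1,
                                 -np.logaddexp(0.0, -eta),
                                 -np.logaddexp(0.0, eta))))

# Completely separated toy data: y = 1 exactly when x > 0.
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])

for b1 in (1.0, 5.0, 25.0, 125.0):
    print(f"beta1 = {b1:7.1f}  log-likelihood = {logistic_loglik(0.0, b1, x, y):.6f}")
```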
