Similar Articles
19 similar articles found.
1.
In this paper we investigate the asymptotic critical value behaviour of certain multiple decision procedures, such as simultaneous confidence intervals and simultaneous as well as stepwise multiple test procedures. Supposing that n hypotheses or parameters of interest are under consideration, we investigate the critical value behaviour as n increases. More specifically, we answer, for example, the question of how much the lengths of confidence intervals increase when an additional parameter is added to the statistical analysis. Furthermore, critical values of different multiple decision procedures, such as step-down and step-up procedures, are compared. Some general theoretical results are derived and applied to various distributions.
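As a toy illustration of this kind of critical value growth, the following Python sketch computes two-sided Bonferroni critical values for n simultaneous normal confidence intervals and the increment incurred by adding one more parameter. This is only a hedged baseline: the paper treats more general step-down/step-up procedures and other distributions, none of which are reproduced here.

# Sketch: growth of a Bonferroni-type critical value as the number of
# simultaneous parameters n increases (illustrative only; the paper's
# procedures are more general).
from scipy.stats import norm

alpha = 0.05
for n in [1, 2, 5, 10, 20, 50, 100]:
    # two-sided Bonferroni critical value for n simultaneous normal intervals
    c_n = norm.ppf(1 - alpha / (2 * n))
    # increment when one additional parameter enters the analysis
    c_next = norm.ppf(1 - alpha / (2 * (n + 1)))
    print(f"n={n:4d}  c_n={c_n:.4f}  increase for n+1: {c_next - c_n:.5f}")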

2.
We are concerned with three different types of multivariate chi-square distributions. Their members play important roles as limiting distributions of vectors of test statistics in several applications of multiple hypotheses testing. We explain these applications and consider the computation of multiplicity-adjusted p-values under the respective global hypothesis. By means of numerical examples, we demonstrate how much gain in level exhaustion or, equivalently, power can be achieved with the corresponding multivariate multiple tests compared with approaches that are based only on univariate marginal distributions and do not take the dependence structure among the test statistics into account. As a further contribution of independent value, we provide an overview of essentially all analytic formulas available to date for computing multivariate chi-square probabilities of the considered types. These formulas were scattered across the previous literature and are presented here in a unified manner.
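To make the gain over marginal-only adjustments concrete, here is a hedged Monte Carlo sketch for one simple special case: a multivariate chi-square vector with 1 degree of freedom built from squared equicorrelated normals (the correlation rho, dimension k, and observed statistic t_obs are all assumed for illustration, and Monte Carlo stands in for the paper's analytic formulas).

# Sketch: dependence-aware multiplicity-adjusted p-value P(max_i T_i >= t)
# under an equicorrelated multivariate chi-square(1) model, versus the
# Bonferroni adjustment that ignores the dependence structure.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
k, rho, B = 5, 0.6, 200_000                  # assumed dimension, correlation
cov = rho * np.ones((k, k)) + (1 - rho) * np.eye(k)
z = rng.multivariate_normal(np.zeros(k), cov, size=B)
max_chi2 = (z ** 2).max(axis=1)              # max of k dependent chi2(1) stats

t_obs = 6.5                                  # hypothetical observed statistic
p_adj_mc = (max_chi2 >= t_obs).mean()        # dependence-aware adjusted p-value
p_adj_bonf = min(1.0, k * chi2.sf(t_obs, df=1))
print(f"Monte Carlo adjusted p: {p_adj_mc:.4f}  Bonferroni: {p_adj_bonf:.4f}")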

3.
Recently, the field of multiple hypothesis testing has experienced a great expansion, largely because of the new methods developed in the field of genomics. These new methods allow scientists to process thousands of hypothesis tests simultaneously. The frequentist approach to this problem uses different testing error measures that allow the Type I error rate to be controlled at a certain desired level. Alternatively, in this article, a Bayesian hierarchical model based on mixture distributions and an empirical Bayes approach are proposed in order to produce a list of rejected hypotheses that will be declared significant and interesting for more detailed follow-up analysis. In particular, we develop a straightforward implementation of a Gibbs sampling scheme in which all the conditional posterior distributions are explicit. The results are compared with the frequentist False Discovery Rate (FDR) methodology. Simulation examples show that our model improves on the FDR procedure in the sense that it diminishes the percentage of false negatives while keeping an acceptable percentage of false positives.
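The following minimal Gibbs sampler shows what "all conditional posteriors are explicit" can look like for a toy stand-in model: test statistics drawn from a two-component normal mixture, with conjugate Beta and Normal updates. The priors, effect size, and data are all assumed here; this is not the authors' exact hierarchical specification.

# Toy Gibbs sampler for z_i ~ (1-pi) N(0,1) + pi N(mu,1): explicit full
# conditionals (Bernoulli indicators, Beta for pi, Normal for mu).
import numpy as np

rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])  # simulated
n = z.size

pi, mu = 0.5, 1.0                 # initial values
a, b = 1.0, 9.0                   # assumed Beta prior on pi
m0, v0 = 0.0, 100.0               # assumed Normal prior on mu
keep = []
for it in range(2000):
    # 1) indicators gamma_i | rest ~ Bernoulli (explicit)
    w1 = pi * np.exp(-0.5 * (z - mu) ** 2)
    w0 = (1 - pi) * np.exp(-0.5 * z ** 2)
    gamma = rng.random(n) < w1 / (w0 + w1)
    k = gamma.sum()
    # 2) pi | rest ~ Beta (conjugate update)
    pi = rng.beta(a + k, b + n - k)
    # 3) mu | rest ~ Normal (conjugate update, unit observation variance)
    v = 1.0 / (1.0 / v0 + k)
    mu = rng.normal(v * (m0 / v0 + z[gamma].sum()), np.sqrt(v))
    if it >= 500:
        keep.append((pi, mu))
post = np.array(keep)
print("posterior means: pi =", post[:, 0].mean(), " mu =", post[:, 1].mean())
# hypotheses with high posterior inclusion probability would be flagged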

4.
Multiple Hypotheses Testing with Weights
In this paper we offer a multiplicity of approaches and procedures for multiple testing problems with weights. Some rationales for incorporating weights in multiple hypotheses testing are discussed. Various Type I error rates and different possible formulations are considered, for both the intersection hypothesis testing problem and the multiple hypotheses testing problem. An optimal per-family weighted error-rate controlling procedure à la Spjøtvoll (1972) is obtained. This model serves as a vehicle for demonstrating the different implications of the approaches to weighting. Alternative approaches to that of Holm (1979) for family-wise error-rate control with weights are discussed, one involving an alternative procedure for family-wise error-rate control, and the other involving the control of a weighted family-wise error rate. Extensions and modifications of the procedures based on Simes (1986) are given. These include a test of the overall intersection hypothesis with general weights, and weighted sequentially rejective procedures for testing the individual hypotheses. The false discovery rate controlling approach and procedure of Benjamini & Hochberg (1995) are extended to allow for different weights.
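One common way to bring weights into FDR control is to apply the Benjamini-Hochberg step-up rule to weight-adjusted p-values p_i / w_i with weights averaging 1 (a Genovese-Roeder-style scheme). The sketch below implements that variant; the weighted extensions in this paper may differ in detail, so treat this as an assumed, representative example rather than the authors' procedure.

# Sketch of a weighted BH procedure: BH applied to p_i / w_i, weights
# averaging 1 (one common weighting scheme, not necessarily the paper's).
import numpy as np

def weighted_bh(p, w, q=0.05):
    """Return a boolean rejection vector; weights w should average 1."""
    p, w = np.asarray(p, float), np.asarray(w, float)
    m = p.size
    pw = p / w                                 # weight-adjusted p-values
    order = np.argsort(pw)
    thresh = q * np.arange(1, m + 1) / m       # BH step-up boundary
    below = pw[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        cutoff = below.nonzero()[0].max()      # largest i with pw_(i) <= iq/m
        reject[order[: cutoff + 1]] = True
    return reject

p = [0.001, 0.009, 0.04, 0.06, 0.2, 0.9]       # hypothetical p-values
w = [2.0, 2.0, 0.5, 0.5, 0.5, 0.5]             # weights averaging 1
print(weighted_bh(p, w))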

5.
Multiple comparison procedures are extended to designs consisting of several groups, where the treatment means are to be compared within each group. This situation may arise in two-factor experiments with a significant interaction term, when one is interested in comparing the levels of one factor at each level of the other factor. A general approach for deriving the distributions and calculating critical points is presented, following three papers which dealt with two specific procedures. These points are used to construct simultaneous confidence intervals over some restricted set of contrasts among the treatment means in each of the groups. Tables of critical values are provided for two procedures and an application is demonstrated. Some extensions are presented for the case of possibly different sets of contrasts and also for unequal variances across the groups.
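For flavour, here is a hedged sketch of one related (but simpler) construction: Tukey-type studentized-range critical points for all pairwise contrasts within each of g groups, with a Bonferroni split across groups. The design constants are assumed, the pooled standard deviation is taken as known, and this is not one of the paper's two tabulated procedures; it only illustrates the kind of critical points involved. It uses scipy.stats.studentized_range (SciPy 1.7+).

# Sketch: within-group Tukey critical points with a Bonferroni split
# across g groups (illustrative assumptions throughout).
import numpy as np
from scipy.stats import studentized_range

alpha, g = 0.05, 3          # overall level, number of groups (assumed)
k, n_per = 4, 10            # treatments per group, replicates per mean
df = g * k * (n_per - 1)    # pooled error degrees of freedom
q_crit = studentized_range.ppf(1 - alpha / g, k, df)  # level alpha/g per group
s = 1.0                     # pooled SD, assumed known here
half_width = q_crit * s / np.sqrt(n_per)  # CI half-width for a mean difference
print(f"critical point q = {q_crit:.3f}, CI half-width = {half_width:.3f}")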

6.
The Benjamini-Hochberg procedure is widely used in multiple comparisons. Previous power results for this procedure have been based on simulations. This article produces theoretical expressions for expected power. To derive them, we make assumptions about the number of hypotheses being tested, which null hypotheses are true, which are false, and the distributions of the test statistics under each null and alternative. We use these assumptions to derive bounds for multidimensional rejection regions. With these bounds and a permanent-based representation of the joint density function of the largest p-values, we use the law of total probability to derive the distribution of the total number of rejections. We derive the joint distribution of the total number of rejections and the number of rejections when the null hypothesis is true. We give an analytic expression for the expected power for a false discovery rate procedure under the assumption that the hypotheses are independent.
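The simulation baseline the paper improves on is easy to reproduce. The sketch below estimates expected power of the BH procedure (fraction of false nulls rejected) under independence, with an assumed configuration of m hypotheses, m1 false nulls, and a normal mean shift; the paper's analytic expressions could be checked against output like this.

# Sketch: simulated expected power of BH under independence
# (assumed m, m1, effect size mu, and level q).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
m, m1, mu, q, B = 20, 5, 2.5, 0.05, 5000
power = []
for _ in range(B):
    z = rng.normal(0, 1, m)
    z[:m1] += mu                      # the first m1 hypotheses are false
    p = norm.sf(z)                    # one-sided p-values
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        reject[order[: below.nonzero()[0].max() + 1]] = True
    power.append(reject[:m1].mean())  # fraction of false nulls rejected
print("simulated expected power:", np.mean(power))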

7.
In this paper, we propose a new generalized multiple frequency model to analyze non-stationary signals. Under the assumption of additive stationary errors, the model can be used quite effectively to analyze different signals. We propose the usual least-squares estimators for the unknown parameters, show that these estimators are strongly consistent, and obtain their asymptotic distributions. The performance of the proposed model is compared with that of the multiple frequency model using Monte Carlo simulations. Finally, several real data sets are analyzed using both the proposed model and the multiple frequency model.
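The comparison baseline, the classical multiple frequency model, fits a sum of sinusoids by least squares. Below is a hedged sketch of that baseline fit; the frequencies, amplitudes, noise level, and starting values are all assumed, and the paper's generalized (non-stationary) model is not reproduced.

# Sketch: least-squares fit of the classical multiple frequency model
# y(t) = sum_k A_k cos(w_k t) + B_k sin(w_k t) + noise.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
t = np.arange(200, dtype=float)
true = dict(w=[0.20, 0.55], A=[2.0, 1.0], B=[0.5, -1.0])  # assumed signal
y = sum(a * np.cos(w * t) + b * np.sin(w * t)
        for w, a, b in zip(true["w"], true["A"], true["B"]))
y += rng.normal(0, 0.5, t.size)

def resid(theta):
    w1, a1, b1, w2, a2, b2 = theta
    fit = (a1 * np.cos(w1 * t) + b1 * np.sin(w1 * t)
           + a2 * np.cos(w2 * t) + b2 * np.sin(w2 * t))
    return y - fit

# periodogram peaks would give starting frequencies; crude values here
res = least_squares(resid, x0=[0.19, 1, 1, 0.56, 1, 1])
print("estimated frequencies:", res.x[0], res.x[3])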

8.
The author considers studies with multiple dependent primary endpoints. Testing hypotheses with multiple primary endpoints may require unmanageably large populations. Composite endpoints consisting of several binary events may be used to reduce a trial to a manageable size. The primary difficulties with composite endpoints are that different endpoints may have different clinical importance and that higher-frequency variables may overwhelm the effects of smaller, but equally important, primary outcomes. To compensate for these inconsistencies, we weight each type of event, and the total number of weighted events is counted. To reflect the mutual dependency of the primary endpoints and to make the weighting method effective in small clinical trials, we use a Bayesian approach. We assume a multinomial distribution for the multiple endpoints with Dirichlet priors and apply a Bayesian test of noninferiority to the calculation of the weighting parameters. We use composite endpoints to test hypotheses of superiority in single-arm and two-arm clinical trials. The composite endpoints have a beta distribution. We illustrate this technique with an example. The results provide a statistical procedure for creating composite endpoints.
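The Dirichlet-multinomial core of such an analysis is simple to sketch. Below, posterior draws for the event-type probabilities are combined with clinical-importance weights into a posterior for a weighted composite event rate. The counts, weights, and threshold are assumed placeholders; the paper's noninferiority-based calibration of the weights is not shown.

# Sketch: Dirichlet posterior for multinomial event types and a weighted
# composite endpoint (illustrative weights and data).
import numpy as np

rng = np.random.default_rng(4)
counts = np.array([12, 30, 8, 50])         # 3 event types + "no event" (assumed)
prior = np.ones(4)                         # Dirichlet(1,...,1) prior
weights = np.array([1.0, 0.4, 0.7, 0.0])   # assumed clinical importance weights

draws = rng.dirichlet(prior + counts, size=20_000)
composite = draws @ weights                # posterior of the weighted event rate
print("posterior mean composite rate:", composite.mean())
print("P(composite rate < 0.30):", (composite < 0.30).mean())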

9.
We consider multiple regression (MR) model averaging using the focused information criterion (FIC). Our approach is motivated by the problem of implementing a mean-variance portfolio choice rule. The usual approach is to estimate parameters while ignoring the intention to use them in portfolio choice. We develop an estimation method that focuses on the trading rule of interest. Asymptotic distributions of submodel estimators in the MR case are derived using a localization framework, in which both the regression coefficients and the error covariances are localized. The distributions of the submodel estimators are used for model selection with the FIC, which allows submodels to be compared through the risk of the portfolio rule estimators. FIC model averaging estimators are then characterized; this extension further improves the risk properties. We show in simulations that applying these methods to the portfolio choice problem yields improved estimates compared with several competitors. An application to futures data shows superior performance as well.

10.
We discuss a method for combining different but related longitudinal studies to improve predictive precision. The motivation is to borrow strength across clinical studies in which the same measurements are collected at different frequencies. Key features of the data are heterogeneous populations and an unbalanced design across the three studies of interest. The first two studies are phase I studies with very detailed observations on a relatively small number of patients. The third study is a large phase III study with over 1500 enrolled patients, but with relatively few measurements on each patient. Patients receive different doses of several drugs in the studies, with the phase III study containing significantly less toxic treatments. Thus, the main challenges for the analysis are to accommodate heterogeneous population distributions and to formalize borrowing strength across the studies and across the various treatment levels. We describe a hierarchical extension of suitable semiparametric longitudinal data models to achieve the inferential goal. A nonparametric random-effects model accommodates the heterogeneity of the patient population. A hierarchical extension allows borrowing strength across different studies and different levels of treatment by introducing dependence across these nonparametric random-effects distributions. Dependence is introduced by building an analysis-of-variance (ANOVA)-like structure over the random-effects distributions for different study and treatment combinations. The model structure and parameter interpretation are similar to those of standard ANOVA models. Instead of the unknown normal means of standard ANOVA models, however, the basic objects of inference are random distributions, namely the unknown population distributions under each study. The analysis is based on a mixture of Dirichlet processes model as the underlying semiparametric model.

11.
We are concerned with a situation in which we would like to test multiple hypotheses with tests whose p-values cannot be computed explicitly but can be approximated using Monte Carlo simulation. This scenario occurs widely in practice. We are interested in obtaining the same rejections and non-rejections as would be obtained if the p-values for all hypotheses were available. The present article introduces a framework for this scenario by providing a generic algorithm for a general multiple testing procedure. We establish conditions which guarantee that the rejections and non-rejections obtained through Monte Carlo simulations are identical to the ones obtained with the p-values. Our framework is applicable to a general class of step-up and step-down procedures, which includes many established multiple testing corrections such as those of Bonferroni, Holm, Sidak, Hochberg and Benjamini-Hochberg. Moreover, we show how to use our framework to improve algorithms available in the literature in such a way as to yield theoretical guarantees on their results. These modifications can easily be implemented in practice and lead to a particular way of reporting multiple testing results as three sets together with an error bound on their correctness, demonstrated with a real biological dataset.
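The three-set reporting idea can be illustrated with a deliberately simplified sketch: a hypothesis is classified as surely rejected or surely not rejected only when a whole confidence interval for its Monte-Carlo-estimated p-value lies on one side of the threshold; otherwise it stays undecided. A single Bonferroni threshold and Clopper-Pearson intervals are assumed here; the paper's algorithm handles general step-up and step-down procedures with formal guarantees.

# Sketch: three-set classification of Monte Carlo p-values against a
# fixed Bonferroni threshold (simplified relative to the paper).
import numpy as np
from scipy.stats import beta

def classify(exceed, B, threshold, eps=1e-3):
    """exceed: # of Monte Carlo samples at least as extreme as observed."""
    # Clopper-Pearson interval for the true p-value at confidence 1 - eps
    lo = beta.ppf(eps / 2, exceed, B - exceed + 1) if exceed > 0 else 0.0
    hi = beta.ppf(1 - eps / 2, exceed + 1, B - exceed)
    if hi <= threshold:
        return "reject"
    if lo > threshold:
        return "accept"
    return "undecided"

B, m, alpha = 10_000, 5, 0.05
for ex in [2, 40, 98, 480, 7000]:          # hypothetical exceedance counts
    print(ex, "->", classify(ex, B, alpha / m))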

12.
Sequential regression multiple imputation has emerged as a popular approach for handling incomplete data with complex features. In this approach, imputations for each missing variable are produced from a regression model using the other variables as predictors, in a cyclic manner. A normality assumption is frequently imposed on the error distributions in the conditional regression models for continuous variables, even though it rarely holds in real scenarios. We use a simulation study to investigate the performance of several sequential regression imputation methods when the error distribution is flat or heavy-tailed. The methods evaluated include sequential normal imputation and several of its extensions which adjust for non-normal error terms. The results show that all methods perform well for estimating the marginal mean and proportion, as well as the regression coefficient, when the error distribution is flat or moderately heavy-tailed. When the error distribution is strongly heavy-tailed, all methods retain their good performance for the mean, and the adjusted methods perform robustly for the proportion; but all methods can perform poorly for the regression coefficient because they cannot accommodate the extreme values well. We caution against the mechanical use of sequential regression imputation without model checking and diagnostics.
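A stripped-down sketch of the cyclic idea, for one continuous variable with missing values imputed from a normal linear conditional model, is shown below. The data-generating setup is assumed, proper multiple-imputation posterior draws are omitted, and, as the abstract cautions, a real analysis would pair this with model checking and diagnostics.

# Sketch: one-variable sequential regression (chained-equations) imputation
# under the normal-error assumption (toy setup).
import numpy as np

rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=n)                      # fully observed predictor
y = 1.0 + 2.0 * x + rng.normal(size=n)      # normal errors (the assumption)
y[rng.random(n) < 0.3] = np.nan             # 30% of y missing at random

y_imp = np.where(np.isnan(y), np.nanmean(y), y)   # crude initial fill
obs = ~np.isnan(y)
X = np.column_stack([np.ones(n), x])
for _ in range(10):                         # cycle the conditional model
    beta_hat, *_ = np.linalg.lstsq(X[obs], y_imp[obs], rcond=None)
    sigma = (y_imp[obs] - X[obs] @ beta_hat).std(ddof=2)
    # draw imputations from the fitted normal conditional distribution
    y_imp[~obs] = X[~obs] @ beta_hat + rng.normal(0, sigma, (~obs).sum())
print("slope from imputed data:", np.polyfit(x, y_imp, 1)[0])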

13.
We propose a confidence envelope for false discovery control when testing multiple hypotheses of association simultaneously. The method is valid under arbitrary and unknown dependence between the test statistics and allows for an exploratory approach when choosing suitable rejection regions while still retaining strong control over the proportion of false discoveries.

14.
We split the survival probability into components corresponding to the disability-free survival probability and the disability survival probability. Our analysis is conducted separately for men and women, for age groups over 64 years. We discuss the estimation of a multiple state model under several scenarios when only a single survey of cross-sectional data is available. The conclusions are used to discuss the disability level of the Spanish elderly population and are helpful for developing welfare programs and insurance products.

15.
In practical settings such as microarray data analysis, one often needs to test multiple hypotheses with dependence within, but not between, equal-sized blocks. We consider an adaptive Benjamini-Hochberg (BH) procedure to test the hypotheses. Under the condition of positive regression dependence on a subset of the true null hypotheses, the proposed adaptive procedure is shown to control the false discovery rate. The proposed approach is compared to existing methods in simulations under block dependence and under totally uniform pairwise dependence, and is observed to perform better than the existing methods in several situations.
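A generic adaptive BH procedure plugs an estimate of the proportion of true nulls, pi0, into the BH level. The sketch below uses Storey's estimator with lambda = 0.5 as an assumed, common choice; the adaptive procedure analysed in this paper may use a different pi0 estimator tailored to block dependence.

# Sketch: adaptive BH with a Storey-type pi0 estimate (assumed variant).
import numpy as np

def adaptive_bh(p, q=0.05, lam=0.5):
    p = np.asarray(p, float)
    m = p.size
    pi0 = min(1.0, (1 + np.sum(p > lam)) / (m * (1 - lam)))
    order = np.argsort(p)
    below = p[order] <= (q / pi0) * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        reject[order[: below.nonzero()[0].max() + 1]] = True
    return reject, pi0

rng = np.random.default_rng(6)
p = np.concatenate([rng.uniform(size=80), rng.beta(0.5, 20, size=20)])
rej, pi0 = adaptive_bh(p)
print(f"pi0_hat = {pi0:.2f}, rejections = {rej.sum()}")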

16.
Several estimators are examined for the simple linear regression model under a controlled, experimental situation with multiple observations at each design point. The model is examined under normal and non-normal error distributions and mild heterogeneity of variances across the chosen design points. We consider the ordinary, generalized, and estimated generalized least squares estimators and several examples of M-estimators. The asymptotic properties of the M-estimator using the Huber ψ are presented under these conditions for the multiple regression model. A simulation study is also presented which indicates that the M-estimator possesses strong robustness properties in the presence of both non-normality and mild heteroscedasticity of errors. Finally, the M-estimates are compared to the least squares estimates in two examples.
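A Huber M-estimator of this kind is commonly computed by iteratively reweighted least squares. The sketch below does this for a simple linear regression with multiple observations per design point and heavy-tailed errors; the tuning constant 1.345 and the MAD scale estimate are standard defaults assumed here, not necessarily the paper's choices.

# Sketch: Huber M-estimation of simple linear regression via IRLS.
import numpy as np

rng = np.random.default_rng(7)
x = np.repeat(np.arange(1.0, 6.0), 8)        # multiple obs per design point
y = 2.0 + 1.5 * x + rng.standard_t(df=3, size=x.size)  # heavy-tailed errors

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting values
c = 1.345                                    # common Huber tuning constant
for _ in range(50):
    r = y - X @ beta
    s = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale estimate
    u = np.abs(r / s)
    w = np.where(u <= c, 1.0, c / np.maximum(u, 1e-12))  # Huber weights
    sw = np.sqrt(w)
    beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    if np.allclose(beta_new, beta, atol=1e-10):
        beta = beta_new
        break
    beta = beta_new
print("OLS:  ", np.linalg.lstsq(X, y, rcond=None)[0])
print("Huber:", beta)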

17.
Multi-arm trials are an efficient way of simultaneously testing several experimental treatments against a shared control group. As well as reducing the sample size required compared to running each trial separately, they have important administrative and logistical advantages. There has been debate over whether multi-arm trials should correct for the fact that multiple null hypotheses are tested within the same experiment. Previous opinions have ranged from no correction being required to a stringent correction (controlling the probability of making at least one type I error, the family-wise error rate or FWER) being needed, with regulators arguing for the latter in confirmatory settings. In this article, we propose that controlling the false discovery rate (FDR) is a suitable compromise, with an appealing interpretation in multi-arm clinical trials. We investigate the properties of the different correction methods in terms of the positive and negative predictive value (respectively, how confident we are that a recommended treatment is effective and that a non-recommended treatment is ineffective), varying the number of arms and the proportion of treatments that are truly effective. Controlling the FDR provides good properties: it retains the high positive predictive value of FWER correction in situations where a low proportion of treatments is effective, and it also has a good negative predictive value in situations where a high proportion of treatments is effective. In a multi-arm trial testing distinct treatment arms, we recommend that sponsors and trialists consider use of the FDR.
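This kind of PPV/NPV comparison is straightforward to simulate. The sketch below contrasts a Bonferroni (FWER) correction with BH (FDR) while varying the number of truly effective arms; the number of arms, effect size, and level are assumed illustrative values, not the paper's design.

# Sketch: simulated PPV and NPV of Bonferroni vs BH in a multi-arm trial.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
K, q, B, mu = 8, 0.05, 4000, 2.8   # arms, level, replications, assumed effect

def bh(p, q):
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    rej = np.zeros(m, dtype=bool)
    if below.any():
        rej[order[: below.nonzero()[0].max() + 1]] = True
    return rej

for n_eff in [1, 4, 7]:                              # truly effective arms
    totals = {"Bonferroni": np.zeros(4), "BH": np.zeros(4)}  # TP, FP, TN, FN
    for _ in range(B):
        z = rng.normal(0, 1, K)
        z[:n_eff] += mu                              # shift the effective arms
        p = norm.sf(z)                               # one-sided p-values
        truth = np.arange(K) < n_eff
        for name, rej in (("Bonferroni", p <= q / K), ("BH", bh(p, q))):
            totals[name] += [(rej & truth).sum(), (rej & ~truth).sum(),
                             (~rej & ~truth).sum(), (~rej & truth).sum()]
    for name, (TP, FP, TN, FN) in totals.items():
        print(f"n_eff={n_eff} {name:10s} PPV={TP / (TP + FP):.3f} "
              f"NPV={TN / (TN + FN):.3f}")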

18.
A computational problem in many fields is to evaluate multiple integrals and expectations simultaneously. Consider probability distributions with unnormalized density functions indexed by parameters on a two-dimensional grid, and assume that samples are simulated from the distributions on a subgrid. Examples of such unnormalized density functions include the observed-data likelihoods in the presence of missing data and the prior times the likelihood in Bayesian inference. There are various methods that use a single sample only, or multiple samples jointly, to compute each integral. Path sampling can be seen as a compromise, using samples along a one-dimensional path to compute each integral. However, different choices of the path lead to different estimators, which should ideally be identical. We propose calibrated estimators, obtained by the method of control variates, that exploit such constraints for variance reduction. We also propose biquadratic interpolation to approximate integrals with parameters outside the subgrid, consistently with the calibrated estimators on the subgrid. These methods can be extended to compute differences of expectations through an auxiliary identity for path sampling. Furthermore, we develop stepwise bridge-sampling methods that parallel and complement the path-sampling methods. In three simulation studies, the proposed methods lead to substantially reduced mean squared errors compared with existing methods.
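For readers unfamiliar with path sampling, here is a minimal sketch of the basic (uncalibrated) estimator for the log ratio of two normalizing constants, along a path between N(0, s0^2) and N(0, s1^2), where the exact answer log(s1/s0) is known. The path, grid, and sample sizes are assumed; the paper's control-variate calibration, interpolation, and bridge-sampling refinements are not shown.

# Sketch: path sampling (thermodynamic integration) for log(Z1/Z0)
# between two zero-mean normals with different scales.
import numpy as np

rng = np.random.default_rng(9)
s0, s1, T, N = 1.0, 3.0, 21, 5000           # path endpoints, grid, draws
ts = np.linspace(0.0, 1.0, T)
U_bar = []
for t in ts:
    s_t = s0 + t * (s1 - s0)                # scale parameter along the path
    x = rng.normal(0.0, s_t, N)             # draws from the t-th distribution
    # d/dt log q_t(x) for unnormalized q_t(x) = exp(-x^2 / (2 s_t^2))
    U_bar.append(np.mean(x ** 2 * (s1 - s0) / s_t ** 3))
U = np.array(U_bar)
log_ratio = np.sum((U[1:] + U[:-1]) / 2 * np.diff(ts))  # trapezoidal rule
print("path sampling estimate:", log_ratio, " exact:", np.log(s1 / s0))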

19.
A computational problem in many fields is to estimate multiple integrals and expectations simultaneously, assuming that the data are generated by some Monte Carlo algorithm. Consider two scenarios in which draws are simulated from multiple distributions, but the normalizing constants of those distributions may be either known or unknown. For each scenario, existing estimators can be classified as using the individual samples separately or using all the samples jointly. The latter pooled-sample estimators are statistically more efficient but computationally more costly to evaluate than the separate-sample estimators. We develop a cluster-sample approach to obtain computationally effective estimators after draws are generated for each scenario. We divide all the samples into mutually exclusive clusters and combine the samples from each cluster separately. Furthermore, we exploit a relationship between estimators based on samples from different clusters to achieve variance reduction. The resulting estimators, compared with the pooled-sample estimators, typically yield similar statistical efficiency but at reduced computational cost. We illustrate the value of the new approach with two examples: an Ising model and a censored Gaussian random field. The Canadian Journal of Statistics 41: 151-173; 2013 © 2012 Statistical Society of Canada.
