Similar Articles
1.
In this article, we propose a factor-adjusted multiple testing (FAT) procedure based on factor-adjusted p-values in a linear factor model involving some observable and unobservable factors, for the purpose of selecting skilled funds in empirical finance. The factor-adjusted p-values are obtained after extracting the latent common factors by the principal component method. Under some mild conditions, the false discovery proportion can be consistently estimated even if the idiosyncratic errors are allowed to be weakly correlated across units. Furthermore, by appropriately setting a sequence of threshold values approaching zero, the proposed FAT procedure enjoys model selection consistency. Extensive simulation studies and a real data analysis for selecting skilled funds in the U.S. financial market are presented to illustrate the practical utility of the proposed method. Supplementary materials for this article are available online.
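The general idea of factor adjustment can be illustrated with a small sketch (not the authors' exact FAT procedure): regress fund returns on the observed factors, estimate latent factors from the residuals by principal components, and compute p-values for the fund alphas from the augmented regression. All function and variable names below are hypothetical.

```python
import numpy as np
from scipy import stats

def factor_adjusted_pvalues(R, F_obs, n_latent=2):
    """Illustrative sketch: p-values for fund alphas after removing observed
    factors F_obs and latent factors estimated by PCA of the residuals.
    R is a (T, N) matrix of fund returns, F_obs a (T, k) matrix of factors."""
    T, N = R.shape
    X = np.column_stack([np.ones(T), F_obs])            # intercept + observed factors
    resid = R - X @ np.linalg.lstsq(X, R, rcond=None)[0]
    # principal components of the residuals proxy the latent common factors
    U, s, _ = np.linalg.svd(resid, full_matrices=False)
    F_lat = U[:, :n_latent] * s[:n_latent]
    Xa = np.column_stack([X, F_lat])                     # augmented regressors
    beta = np.linalg.lstsq(Xa, R, rcond=None)[0]
    E = R - Xa @ beta
    dof = T - Xa.shape[1]
    sigma2 = (E ** 2).sum(axis=0) / dof                  # per-fund error variance
    XtX_inv = np.linalg.inv(Xa.T @ Xa)
    se_alpha = np.sqrt(sigma2 * XtX_inv[0, 0])           # s.e. of each alpha (intercept)
    t_alpha = beta[0] / se_alpha
    return 2 * stats.t.sf(np.abs(t_alpha), dof)          # two-sided factor-adjusted p-values
```

These p-values would then be fed to a multiple testing rule with a sequence of thresholds, as described in the abstract.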

2.
The power-generalized Weibull probability distribution is very often used in survival analysis, mainly because different values of its parameters allow for various shapes of hazard rate such as monotone increasing/decreasing, ∩-shaped, ∪-shaped, or constant. Modified chi-squared tests based on maximum likelihood estimators of the parameters, which are shown to be √n-consistent, are proposed. The power of these tests against exponentiated Weibull, three-parameter Weibull, and generalized Weibull distributions is studied using Monte Carlo simulations. The left-tailed rejection region is recommended because these tests are biased with respect to the above alternatives when the right-tailed rejection region is used. It is also shown that the power of the McCulloch test investigated here can be two to three times higher than that of the Nikulin–Rao–Robson test with respect to the alternatives considered when expected cell frequencies are about 5.
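To illustrate the range of hazard shapes mentioned above, the sketch below evaluates the power-generalized Weibull hazard under one commonly used parameterization, S(t) = exp{1 − (1 + (t/σ)^ν)^(1/γ)}; the parameter names and this specific form are assumptions, and other parameterizations appear in the literature.

```python
import numpy as np

def pgw_hazard(t, sigma, nu, gamma):
    """Hazard rate of the power-generalized Weibull under the assumed
    parameterization S(t) = exp{1 - (1 + (t/sigma)**nu)**(1/gamma)}."""
    u = (t / sigma) ** nu
    return (nu / (gamma * sigma)) * (t / sigma) ** (nu - 1) * (1.0 + u) ** (1.0 / gamma - 1.0)

t = np.linspace(0.01, 5.0, 200)
# different (nu, gamma) combinations produce increasing, decreasing,
# ∩-shaped, or ∪-shaped hazards
for nu, gamma in [(2.0, 1.0), (0.5, 1.0), (2.0, 4.0), (0.5, 0.3)]:
    h = pgw_hazard(t, sigma=1.0, nu=nu, gamma=gamma)
    print(nu, gamma, "min", round(h.min(), 3), "max", round(h.max(), 3))
```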

3.
The problem of outliers in statistical data has attracted many researchers for a long time. Consequently, numerous outlier detection methods have been proposed in the statistical literature. However, no consensus has emerged as to which method is uniformly better than the others or which one is recommended for use in practical situations. In this article, we perform an extensive comparative Monte Carlo simulation study to assess the performance of the multiple outlier detection methods that are either recently proposed or frequently cited in the outlier detection literature. Our simulation experiments include a wide variety of realistic and challenging regression scenarios. We give recommendations on which method is superior to others under what conditions.

4.
We consider the problem of simultaneously estimating Poisson rate differences via application of the Hsu and Berger stepwise confidence interval method (termed HBM), where comparisons to a common reference group are performed. We discuss continuity-corrected confidence intervals (CIs) and investigate the performance of the HBM with a moment-based CI and with Wald and pooled CIs, both uncorrected and corrected for continuity. Using simulations, we compare nine individual CIs in terms of coverage probability, and the HBM with the nine intervals in terms of family-wise error rate (FWER) and overall and local power. The simulations show that these statistical properties depend strongly on the parameter settings.
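For reference, here is a minimal sketch of the uncorrected Wald CI for a difference of two Poisson rates, one of the interval types compared above; the continuity-corrected, pooled, and moment-based intervals and the HBM stepwise adjustment are not reproduced, and the function name is hypothetical.

```python
import math
from scipy.stats import norm

def wald_ci_poisson_diff(x1, t1, x2, t2, alpha=0.05):
    """Uncorrected Wald CI for lambda1 - lambda2, given counts x_i observed
    over exposures t_i, using Var(lambda_hat_i) ~= lambda_hat_i / t_i."""
    lam1, lam2 = x1 / t1, x2 / t2
    se = math.sqrt(lam1 / t1 + lam2 / t2)
    z = norm.ppf(1 - alpha / 2)
    d = lam1 - lam2
    return d - z * se, d + z * se

# e.g., 30 events over 100 units of exposure vs. 18 events over 100 units
print(wald_ci_poisson_diff(30, 100.0, 18, 100.0))
```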

5.
This article deals with the use of the Closed Testing approach in Multiple Comparison Procedures (MCPs). MCPs arise when, after rejection of a global hypothesis of no effect of a given treatment, a set of pairwise comparisons between levels of that treatment is performed in order to identify significant differences between levels. Given a set of partial hypotheses, such as the pairwise comparisons of an MCP, the Closed Testing approach concentrates on testing the family of all non-empty intersections of these partial hypotheses. The results of our simulation study highlight the advantages of closed testing methods and show that they are more powerful than other classic MCPs controlling the FWER.
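A minimal sketch of the closure principle, using a Bonferroni test for each intersection hypothesis (one simple choice of local test; the MCPs compared in the article may use other local tests). Names are illustrative.

```python
from itertools import combinations

def closed_testing_rejections(pvals, alpha=0.05):
    """Closure principle: reject elementary H_i iff every non-empty intersection
    hypothesis containing i is rejected; each intersection is tested here with a
    Bonferroni local test (reject if min p-value <= alpha / |subset|)."""
    m = len(pvals)
    all_subsets = [s for r in range(1, m + 1) for s in combinations(range(m), r)]
    rejected_subsets = {s for s in all_subsets
                        if min(pvals[i] for i in s) <= alpha / len(s)}
    return [i for i in range(m)
            if all(s in rejected_subsets for s in all_subsets if i in s)]

# example: the third hypothesis is not rejected because its own p-value survives
print(closed_testing_rejections([0.001, 0.02, 0.4]))   # -> [0, 1]
```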

6.
Normal theory separation and allocation problems are discussed from a predictive point of view. Influence statistics are defined and employed to ascertain the impact that particular observations will have on the inferential goals—allocation of future observations, separation between populations, and the determination of probabilities for future cases. Methods are illustrated on a collection of financial data taken from Johnson and Wichern (1982).

7.
Group testing procedures, in which groups containing several units are tested without testing each unit, are widely used as cost-effective procedures for estimating the proportion of defective units in a population. A problem arises when we apply these procedures to the detection of genetically modified organisms (GMOs), because the analytical instrument for detecting GMOs has a threshold of detection. If the group size (i.e., the number of units within a group) is large, the GMOs in a group are not detected, owing to dilution, even if the group contains one unit of GMOs. Thus, a small group size (which we call the conventional group size) is conventionally used so that the existence of defective units is surely detected whenever at least one unit of GMOs is included in the group. However, we show that the proportion of defective units can be estimated for any group size even if a threshold of detection exists; the estimate is easily obtained using functions implemented in a spreadsheet. We then show that the conventional group size is not always optimal for controlling the consumer's risk, because such a group size requires a larger number of groups for testing.
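In the simplest setting, where every group containing at least one defective unit tests positive (no dilution threshold), the standard group-testing estimator is easy to compute; a minimal sketch follows. The threshold-of-detection adjustment developed in the article is not reproduced here, and the function name is hypothetical.

```python
def group_testing_estimate(n_positive_groups, n_groups, group_size):
    """MLE of the per-unit defective proportion p from group-test results,
    assuming a group tests positive iff it contains at least one defective unit:
    P(group negative) = (1 - p)**k, so p_hat = 1 - (1 - d/n)**(1/k)."""
    q_hat = 1.0 - n_positive_groups / n_groups       # estimated P(group is negative)
    return 1.0 - q_hat ** (1.0 / group_size)

# e.g., 4 positive groups out of 50 tested, each group containing 20 units;
# in a spreadsheet this is simply =1-(1-D/N)^(1/K)
print(group_testing_estimate(4, 50, 20))
```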

8.
Recently, the field of multiple hypothesis testing has experienced a great expansion, largely because of the new methods developed in the field of genomics. These new methods allow scientists to simultaneously process thousands of hypothesis tests. The frequentist approach to this problem uses different testing error measures that allow one to control the Type I error rate at a desired level. Alternatively, in this article, a Bayesian hierarchical model based on mixture distributions and an empirical Bayes approach are proposed in order to produce a list of rejected hypotheses that will be declared significant and interesting for a more detailed posterior analysis. In particular, we develop a straightforward implementation of a Gibbs sampling scheme in which all the conditional posterior distributions are explicit. The results are compared with the frequentist False Discovery Rate (FDR) methodology. Simulation examples show that our model improves on the FDR procedure in the sense that it reduces the percentage of false negatives while keeping an acceptable percentage of false positives.

9.
Twenty-one volunteers tested the usability of revisions to the Texas A&M University Libraries' SFX® OpenURL link resolver menus, including the addition of Ex Libris' new bX™ recommendation service and a plug-in which pulls additional information about the journal into the menu. The volunteers also evaluated the quality and desirability of the bX recommendations and discussed their preferences for help options and full-text format. Results of the usability testing are reported along with the resultant menu changes. This study will be of interest to librarians implementing or redesigning OpenURL menus as well as those interested in the user experience.

10.
The aim of this article is to discuss homogeneity testing of the exponential distribution. We introduce the exact likelihood ratio test of homogeneity in the subpopulation model, ELR, and the exact likelihood ratio test of homogeneity against the two-components subpopulation alternative, ELR2. The ELR test is asymptotically optimal in the Bahadur sense when the alternative consists of sampling from a fixed number of components. Thus, in some setups the ELR is superior to frequently used tests for exponential homogeneity which are based on the EM algorithm (like the MLRT, ADDS, and D-tests). One important example of superiority of ELR and ELR2 tests is the case of lower contamination. We demonstrate this fact by both theoretical comparisons and simulations.

11.
This article presents the “centered” method for establishing cell boundaries in the X² goodness-of-fit test, which, when applied to common stock returns, significantly reduces the high bias of the test statistic associated with the traditional Mann–Wald equiprobable approach. A modified null hypothesis is proposed to incorporate explicitly the usually implicit assumption that the observed discrete returns are “approximated” by the hypothesized continuous density. Simulation results indicate extremely biased X² values resulting from the traditional approach, particularly for low-priced and low-volatility stocks. Daily stock returns for 114 firms are tested to determine whether they are approximated by a normal density or one of several normal mixture densities. Results indicate a significantly higher degree of fit than that reported elsewhere to date.
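For contrast with the article's "centered" boundaries, here is a minimal sketch of the traditional Mann–Wald equiprobable-cell chi-squared test of normality, the approach the article argues is biased for discrete returns; the "centered" boundary construction itself is not reproduced, and names are illustrative.

```python
import numpy as np
from scipy import stats

def equiprobable_chi2_normality(x, n_cells=10):
    """Traditional equiprobable-cell chi-squared GOF test: cell boundaries are
    equal-probability quantiles of a normal fitted to the data; degrees of
    freedom reduced by the two estimated parameters."""
    x = np.asarray(x)
    mu, sigma = x.mean(), x.std(ddof=1)
    probs = np.arange(1, n_cells) / n_cells
    edges = stats.norm.ppf(probs, loc=mu, scale=sigma)    # interior cell boundaries
    observed = np.bincount(np.searchsorted(edges, x), minlength=n_cells)
    expected = len(x) / n_cells
    chi2 = ((observed - expected) ** 2 / expected).sum()
    df = n_cells - 1 - 2
    return chi2, stats.chi2.sf(chi2, df)

rng = np.random.default_rng(0)
print(equiprobable_chi2_normality(rng.normal(size=500)))
```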

12.
In this paper, Anbar's (1983) approach for estimating the difference between two binomial proportions is discussed with respect to a hypothesis testing problem. The approach results in two possible testing strategies. While the results of the tests are expected to agree for large sample sizes when the two proportions are equal, the tests are shown to perform quite differently in terms of their probabilities of a Type I error for selected sample sizes. Moreover, the tests can lead to different conclusions, which is illustrated via a simple example, and the probability of such cases can be relatively large. In an attempt to improve the tests while preserving their relative simplicity, a modified test is proposed. The performance of this test and of a conventional test based on the normal approximation is assessed. It is shown that the modified Anbar's test better controls the probability of a Type I error for moderate sample sizes.

13.
Traditional multiple hypothesis testing procedures fix an error rate and determine the corresponding rejection region. In 2002 Storey proposed a fixed rejection region procedure and showed numerically that it can gain more power than the fixed error rate procedure of Benjamini and Hochberg while controlling the same false discovery rate (FDR). In this paper it is proved that when the number of alternatives is small compared to the total number of hypotheses, Storey's method can be less powerful than that of Benjamini and Hochberg. Moreover, the two procedures are compared by setting them to produce the same FDR. The difference in power between Storey's procedure and that of Benjamini and Hochberg is near zero when the distance between the null and alternative distributions is large, but Benjamini and Hochberg's procedure becomes more powerful as the distance decreases. It is shown that modifying the Benjamini and Hochberg procedure to incorporate an estimate of the proportion of true null hypotheses as proposed by Black gives a procedure with superior power.
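For concreteness, minimal sketches of the two building blocks compared here: the Benjamini–Hochberg step-up rule and Storey's λ-based estimate of the proportion of true nulls (the kind of estimate used in the adaptive modification mentioned above). The λ value and names are illustrative, not the article's exact implementation.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """BH step-up: reject the k smallest p-values, where k is the largest i
    such that p_(i) <= i * q / m."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    return order[:k]                                   # indices of rejected hypotheses

def storey_pi0(pvals, lam=0.5):
    """Estimate of the proportion of true nulls: #{p_i > lambda} / (m (1 - lambda))."""
    p = np.asarray(pvals)
    return np.mean(p > lam) / (1.0 - lam)

# an adaptive variant runs BH at level q / storey_pi0(pvals)
```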

14.
This article proposes new model checks for dynamic count models. Both portmanteau and omnibus-type tests for lack of residual autocorrelation are considered. The resulting test statistics are asymptotically pivotal when the innovations are uncorrelated but possibly exhibit higher-order serial dependence. Moreover, the tests are able to detect local alternatives converging to the null at the parametric rate T^(-1/2), with T the sample size. The finite-sample performance of the test statistics is examined by means of Monte Carlo experiments. Using a dataset on U.S. corporate bankruptcies, the proposed tests are applied to check whether different risk models are correctly specified. Supplementary materials for this article are available online.
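A generic portmanteau statistic on residuals illustrates the first type of test; the sketch below computes a Ljung–Box style statistic from residual autocorrelations. The article's statistics are robustified to higher-order dependence, which this plain version is not, and the names are illustrative.

```python
import numpy as np
from scipy import stats

def ljung_box(resid, n_lags=10, n_fitted_params=0):
    """Ljung-Box portmanteau test for residual autocorrelation up to n_lags:
    Q = T (T + 2) * sum_k acf_k**2 / (T - k), compared to chi2(n_lags - n_fitted_params)."""
    e = np.asarray(resid, dtype=float) - np.mean(resid)
    T = len(e)
    denom = np.sum(e ** 2)
    acf = np.array([np.sum(e[k:] * e[:-k]) / denom for k in range(1, n_lags + 1)])
    q_stat = T * (T + 2) * np.sum(acf ** 2 / (T - np.arange(1, n_lags + 1)))
    df = n_lags - n_fitted_params
    return q_stat, stats.chi2.sf(q_stat, df)
```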

15.
Two test statistics are proposed for the change-point problem with repeated values when the data follow an exponential distribution. The properties of these two statistics have been studied and their asymptotic distributions under the alternative have been derived. The powers of the two test statistics are compared. Real-data examples are presented to illustrate the application of these tests.

16.
Building on Skaug and Tjøstheim's approach, this paper proposes a new test for serial independence by comparing the pairwise empirical distribution functions of a time series with the products of its marginals for various lags, where the number of lags increases with the sample size and different lags are assigned different weights. Typically, the more recent information receives a larger weight. The test has some appealing attributes. It is consistent against all pairwise dependences and is powerful against alternatives whose dependence decays to zero as the lag increases. Although the test statistic is a weighted sum of degenerate Cramér–von Mises statistics, it has a null asymptotic N(0, 1) distribution. The test statistic and its limit distribution are invariant to any order-preserving transformation. The test applies to time series whose distributions can be discrete or continuous, with possibly infinite moments. Finally, the test statistic only involves ranking the observations and is computationally simple; it has the advantage of avoiding smoothed nonparametric estimation. A simulation experiment is conducted to study the finite-sample performance of the proposed test in comparison with some related tests.

17.
Estimation of the number or proportion of true null hypotheses in multiple-testing problems has become an interesting area of research. The first important work in this field was performed by Schweder and Spjøtvoll. Among others, they proposed to use plug-in estimates of the proportion of true null hypotheses in multiple-test procedures to improve power. We investigate the problem of controlling the familywise error rate (FWER) when such estimators are used as plug-in estimators in single-step or step-down multiple-test procedures. First we investigate the case of independent p-values under the null hypotheses and show that a suitable choice of plug-in estimates leads to control of the FWER in single-step procedures. We also investigate the power and study the asymptotic behaviour of the number of false rejections. Although step-down procedures are more difficult to handle, we briefly consider a possible solution to this problem; however, plug-in step-down procedures are not recommended here. For dependent p-values we derive a condition for asymptotic control of the FWER and provide some simulations with respect to FWER and power for various models and hypotheses.
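A minimal sketch of the kind of plug-in single-step procedure discussed here: estimate the proportion of true nulls via a Schweder–Spjøtvoll style λ-threshold count and use m·π̂0 in place of m in a Bonferroni-type cutoff. The λ value and names are illustrative, and this sketch omits the refinements needed for FWER control, especially under dependence.

```python
import numpy as np

def plugin_single_step(pvals, alpha=0.05, lam=0.5):
    """Single-step test with a plug-in estimate of the number of true nulls:
    pi0_hat = #{p_i > lambda} / ((1 - lambda) m), reject H_i if p_i <= alpha / (m * pi0_hat)."""
    p = np.asarray(pvals)
    m = len(p)
    pi0_hat = min(1.0, np.sum(p > lam) / ((1.0 - lam) * m))
    cutoff = alpha / (m * pi0_hat) if pi0_hat > 0 else 1.0
    return np.nonzero(p <= cutoff)[0]        # indices of rejected hypotheses
```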

18.
The problem of testing independence in the multinormal case is considered in this paper. The non-null distribution of the likelihood ratio criterion is obtained for the case of two subvectors by using a simple straightforward technique. The null case as well as the known cases are also verified.
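As a reference point, the likelihood ratio criterion for independence of two subvectors is the determinant ratio Λ = |S| / (|S11| |S22|); the sketch below uses only the basic asymptotic chi-squared approximation, not the exact or non-null distributions derived in the paper.

```python
import numpy as np
from scipy import stats

def lr_independence_two_blocks(X, p):
    """LR test that the first p coordinates of a multivariate normal vector are
    independent of the remaining q, via Lambda = |S| / (|S11| |S22|).
    Uses the plain -n*log(Lambda) ~ chi2(p*q) approximation (no Bartlett correction)."""
    n, d = X.shape
    q = d - p
    S = np.cov(X, rowvar=False)
    lam = np.linalg.det(S) / (np.linalg.det(S[:p, :p]) * np.linalg.det(S[p:, p:]))
    stat = -n * np.log(lam)
    return stat, stats.chi2.sf(stat, p * q)
```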

19.
The modified Tiku test and the modified likelihood ratio test proposed by Tiku and Vaughan (1991) for k = 2 exponential populations are extended to k > 2 populations. These tests are shown to be more powerful than the test proposed by Kambo and Awad (1985). Unlike the Kambo–Awad test, the proposed tests are shown to have almost symmetric power functions. Further, these tests can be applied when either left or right censoring is present, in contrast to the tests of Sukhatme (1937), Bain and Englehardt (1991), and Elewa et al. (1992), which assume that there is no left censoring.

20.
In multiple hypothesis testing, an important problem is estimating the proportion of true null hypotheses. Existing methods are mainly based on the p-values of the individual tests. In this paper, we propose two new estimators of this proportion. One is a natural extension of the commonly used p-value-based methods, and the other is based on a mixture distribution. Simulations show that the first method is comparable with existing methods and performs better in some cases, while the method based on a mixture distribution can produce accurate estimates even if the variance of the data is large or the difference between the null and alternative hypotheses is very small.
