Similar Documents
20 similar documents found.
1.
In multiple hypothesis testing, an important problem is estimating the proportion of true null hypotheses. Existing methods are mainly based on the p-values of the individual tests. In this paper, we propose two new estimators of this proportion. One is a natural extension of the commonly used p-value-based methods, and the other is based on a mixture distribution. Simulations show that the first estimator is comparable with existing methods and performs better in some cases, while the mixture-based estimator remains accurate even when the variance of the data is large or the difference between the null and alternative hypotheses is very small.
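The two new estimators are not specified in the abstract, but the commonly used p-value-based approach they extend can be illustrated by a Storey-type threshold estimator. The sketch below is a minimal illustration of that baseline idea, not the proposed method; the threshold `lam` and the simulated p-value distributions are arbitrary choices.

```python
import numpy as np

def storey_pi0(pvalues, lam=0.5):
    """Storey-type estimate of the proportion of true nulls.

    Under the null, p-values are uniform on [0, 1], so the fraction of
    p-values above a threshold `lam` estimates pi0 * (1 - lam).
    """
    pvalues = np.asarray(pvalues)
    return min(1.0, np.mean(pvalues > lam) / (1.0 - lam))

# Example: 80% true nulls (uniform p-values) mixed with 20% signals.
rng = np.random.default_rng(0)
p_null = rng.uniform(size=800)
p_alt = rng.beta(0.2, 5.0, size=200)   # p-values concentrated near 0
print(storey_pi0(np.concatenate([p_null, p_alt])))  # roughly 0.8
```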

2.

In this paper, the problem of testing for homogeneity in the mixture exponential family is considered. The model is irregular in the sense that each parameter of interest forms part of the null hypothesis (a sub-null hypothesis), and the null hypothesis is the union of these sub-null hypotheses. The generalized likelihood ratio test does not distinguish between the sub-null hypotheses. A supplementary score test is proposed by combining two orthogonalized score tests obtained, after proper reparameterization, for the two sub-null hypotheses. The test is easy to construct and, in numerical comparisons, performs better than the generalized likelihood ratio test and other alternative tests.

3.
It is generally assumed that, under some regularity conditions, the likelihood ratio statistic for testing the null hypothesis that data arise from a homoscedastic normal mixture against the alternative that they arise from a heteroscedastic normal mixture has an asymptotic χ² reference distribution with degrees of freedom equal to the difference in the number of parameters estimated under the alternative and null models. Simulations show that the χ² reference distribution gives a reasonable approximation for the likelihood ratio test only when the sample size is 2000 or more, the mixture components are well separated, and the restrictions suggested by Hathaway (Ann. Stat. 13:795–800, 1985) are imposed on the component variances to ensure that the likelihood is bounded under the alternative. For small and medium sample sizes, parametric bootstrap tests appear to work well for determining whether data arise from a normal mixture with equal variances or one with unequal variances.
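As a hedged illustration of the parametric bootstrap alternative recommended for small and medium samples, the sketch below fits homoscedastic (tied-covariance) and heteroscedastic (full-covariance) normal mixtures with scikit-learn and bootstraps the likelihood ratio statistic from the fitted null model. It is not the authors' implementation, and Hathaway-type variance restrictions are not enforced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def lrt_stat(x, k=2, seed=0):
    """-2 log LR comparing equal-variance (tied) vs unequal-variance (full)
    k-component normal mixture fits; score() returns mean log-likelihood."""
    x = np.asarray(x).reshape(-1, 1)
    null = GaussianMixture(k, covariance_type="tied", n_init=5, random_state=seed).fit(x)
    alt = GaussianMixture(k, covariance_type="full", n_init=5, random_state=seed).fit(x)
    return 2 * len(x) * (alt.score(x) - null.score(x))

def bootstrap_pvalue(x, k=2, B=199, seed=0):
    """Parametric bootstrap p-value: resample from the fitted null (tied) model."""
    x = np.asarray(x).reshape(-1, 1)
    t_obs = lrt_stat(x, k, seed)
    null_fit = GaussianMixture(k, covariance_type="tied", n_init=5,
                               random_state=np.random.RandomState(seed)).fit(x)
    t_boot = []
    for _ in range(B):
        xb, _ = null_fit.sample(len(x))   # draw a bootstrap sample from the null fit
        t_boot.append(lrt_stat(xb, k, seed))
    return (1 + np.sum(np.array(t_boot) >= t_obs)) / (B + 1)

# Example: two well-separated components with unequal variances.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(5, 2, 150)])
print(bootstrap_pvalue(x))
```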

4.
This article is concerned with testing multiple hypotheses, one for each of a large number of small data sets; such data are sometimes referred to as high-dimensional, low-sample-size data. Our model assumes that each observation within a randomly selected small data set follows a mixture of C shifted and rescaled versions of an arbitrary density f. A novel kernel density estimation scheme, in conjunction with clustering methods, is applied to estimate f. The Bayes information criterion and a new criterion, the weighted mean of within-cluster variances, are used to estimate C, the number of mixture components or clusters. These results are applied to the multiple testing problem: the null sampling distribution of each test statistic is determined by f, and hence a bootstrap procedure that resamples from an estimate of f is used to approximate this null distribution.

5.
This article deals with Bayes factors as useful Bayesian tools in frequentist testing of a precise hypothesis. A result and several examples are included to justify the definition of the Bayes factor for point null hypotheses without merging the initial distribution with a degenerate distribution on the null hypothesis. Of special interest are the problem of testing a proportion (together with a natural criterion for comparing different tests), the possible presence of nuisance parameters, and the influence of Bayesian sufficiency on this problem. The problem of testing a precise hypothesis from a Bayesian perspective is also considered, and two alternative methods for dealing with it are given.
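For the proportion-testing problem mentioned, a Bayes factor for a point null can be written down directly: with a Beta(a, b) prior under the alternative, the marginal likelihood of the count is beta-binomial. The sketch below is a generic illustration under that assumption, not the article's own construction; the data and prior parameters are placeholders.

```python
from scipy.stats import binom, betabinom

def bf01_proportion(x, n, p0=0.5, a=1.0, b=1.0):
    """Bayes factor B01 for H0: p = p0 against p ~ Beta(a, b) under H1."""
    m0 = binom.pmf(x, n, p0)          # likelihood under the point null
    m1 = betabinom.pmf(x, n, a, b)    # marginal likelihood under H1
    return m0 / m1

# Example: 60 successes out of 100 trials, point null p0 = 0.5, uniform prior under H1.
print(bf01_proportion(60, 100, p0=0.5))
```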

6.
Residual-marked empirical-process-based tests are commonly used in regression models. However, they suffer from data sparseness in high-dimensional space when there are many covariates. This paper has three purposes. First, we suggest a partial-dimension-reduction, adaptive-to-model testing procedure that is omnibus against general global alternative models while fully using the dimension-reduction structure under the null hypothesis. This is because the procedure automatically adapts to the null and alternative models, and thus it largely overcomes the dimensionality problem. Second, to achieve this goal, we propose a ridge-type eigenvalue ratio estimate that automatically determines the number of linear combinations of the covariates under the null and alternative hypotheses. Third, a Monte Carlo approximation to the sampling null distribution is suggested. Unlike existing bootstrap approximation methods, it yields an approximation as close to the sampling null distribution as possible by fully utilising the dimension-reduction model structure under the null model. Simulation studies and real data analysis are conducted to illustrate the performance of the new test and to compare it with existing tests.
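A hedged sketch of what a generic ridge-type eigenvalue ratio criterion can look like is given below; the target eigenvalues, the ridge constant, and the minimization form are placeholders chosen for illustration and are not taken from the paper.

```python
import numpy as np

def ridge_eigen_ratio(eigvals, c=None):
    """Estimate a structural dimension by minimizing the ridge-penalized
    ratio (lambda_{i+1} + c) / (lambda_i + c) over i.

    `eigvals` are sorted in decreasing order; the ridge constant `c`
    guards against ratios of near-zero eigenvalues.
    """
    lam = np.sort(np.asarray(eigvals))[::-1]
    if c is None:
        c = 0.1 * lam.sum() / len(lam)   # placeholder choice, not the paper's
    ratios = (lam[1:] + c) / (lam[:-1] + c)
    return int(np.argmin(ratios)) + 1    # estimated number of directions

# Example: three dominant directions followed by noise eigenvalues.
print(ridge_eigen_ratio([9.0, 5.5, 3.2, 0.08, 0.05, 0.03]))  # -> 3
```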

7.
We study two new omnibus goodness-of-fit tests for exponentiality, each based on a characterization of the exponential distribution via the mean residual life function. The limiting null distributions of the test statistics are the same as those of the Kolmogorov–Smirnov and Cramér–von Mises statistics for testing the simple hypothesis that the sample variables are uniformly distributed on the interval [0, 1]. (Work supported by the Deutsche Forschungsgemeinschaft.)

8.

In statistical hypothesis testing, a p-value is expected to be distributed as the uniform distribution on the interval (0, 1) under the null hypothesis. However, some p-values, such as the generalized p-value and the posterior predictive p-value, cannot be guaranteed to have this property. In this paper, we propose an adaptive p-value calibration approach and show that the calibrated p-value is asymptotically distributed as the uniform distribution. For the Behrens–Fisher problem and a goodness-of-fit test under a normal model, the calibrated p-values are constructed and their behavior is evaluated numerically. Simulations show that the calibrated p-values are superior to the original ones.
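One generic way to calibrate a non-uniform p-value, sketched below, is to transform it by the empirical CDF of p-values simulated under the null; this only illustrates the calibration idea and is not the adaptive procedure proposed in the paper. The simulated conservative p-values are an assumption for the example.

```python
import numpy as np

def calibrate_pvalue(p_obs, p_null_sim):
    """Calibrate a p-value by the empirical null CDF of simulated p-values.

    If `p_null_sim` are p-values generated under the null hypothesis, the
    calibrated value Pr(P_null <= p_obs) is approximately uniform under H0.
    """
    p_null_sim = np.sort(np.asarray(p_null_sim))
    return np.searchsorted(p_null_sim, p_obs, side="right") / len(p_null_sim)

# Example: a conservative "p-value" that is stochastically larger than uniform.
rng = np.random.default_rng(1)
p_null_sim = rng.uniform(size=10_000) ** 0.5    # conservative null p-values
print(calibrate_pvalue(0.30, p_null_sim))        # recalibrated toward uniform
```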

9.
This paper investigates a new family of goodness-of-fit tests based on negative exponential disparities. This family includes the popular Pearson chi-square as a member and is a subclass of the general class of disparity tests (Basu and Sarkar, 1994), which also contains the family of power divergence statistics. Pitman efficiency and finite-sample power comparisons between different members of this new family are made. Three asymptotic approximations of the exact null distributions of the negative exponential disparity family of tests are discussed. Some numerical results on the small-sample performance of this family of tests are presented for the symmetric null hypothesis. It is shown that the negative exponential disparity family, like the power divergence family, produces goodness-of-fit test statistics that can be very attractive alternatives to Pearson's chi-square. Some numerical results suggest that, as an alternative to Pearson's chi-square, this test statistic could be preferable to the I^{2/3} statistic of Cressie and Read (1984) when chi-square critical values are used.
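The Pearson chi-square and the Cressie–Read I^{2/3} statistic used as comparators are available through SciPy's power-divergence family, as sketched below with hypothetical cell counts; the negative exponential disparity statistics themselves are not implemented here.

```python
from scipy.stats import power_divergence

observed = [18, 26, 31, 25]          # hypothetical cell counts
expected = [25, 25, 25, 25]          # equiprobable (symmetric) null

pearson = power_divergence(observed, expected, lambda_="pearson")
cressie_read = power_divergence(observed, expected, lambda_="cressie-read")  # I^{2/3}

print(pearson.statistic, pearson.pvalue)
print(cressie_read.statistic, cressie_read.pvalue)
```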

10.
S. Zhou and R. A. Maller, Statistics, 2013, 47(1–2): 181–201
Models for populations containing immune or cured individuals, alongside others subject to failure, are important in many areas, such as medical statistics and criminology. One method of analysing data from such populations involves estimating an immune proportion 1 − p and the parameter(s) of a failure distribution for those individuals subject to failure. We use the exponential distribution with parameter λ for the latter, and a mixture of this distribution with a mass of 1 − p at infinity to model the complete data. This paper develops the asymptotic theory of a test for whether an immune proportion is indeed present in the population, i.e., for H0: p = 1. This involves testing at the boundary of the parameter space for p. We use a likelihood ratio test for H0 and prove that minus twice the logarithm of the likelihood ratio has, as its asymptotic distribution, not the chi-square distribution but a 50–50 mixture of a chi-square distribution with 1 degree of freedom and a point mass at 0. The result is proved under an independent censoring assumption with very mild restrictions.
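The limiting 50–50 mixture translates into a simple p-value rule for the observed likelihood ratio statistic, sketched below as a generic illustration rather than code from the paper.

```python
from scipy.stats import chi2

def boundary_lrt_pvalue(lrt):
    """p-value for -2 log LR when the limit is a 50-50 mixture of a point
    mass at 0 and a chi-square with 1 degree of freedom (boundary testing)."""
    if lrt <= 0:
        return 1.0
    return 0.5 * chi2.sf(lrt, df=1)

print(boundary_lrt_pvalue(2.71))   # roughly 0.05, the mixture's 95% point
```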

11.
The classical unconditional exact p-value test can be used to compare two multinomial distributions with small samples. This general hypothesis requires parameter estimation under the null, which makes the test severely conservative. A similar property has been observed for Fisher's exact test, with Barnard and Boschloo providing distinct adjustments that produce more powerful testing approaches. In this study, we develop a novel adjustment for the conservativeness of the unconditional multinomial exact p-value test that produces a nominal type I error rate and increased power compared with all alternative approaches. We used a large simulation study to empirically estimate the 5th percentiles of the distributions of the p-values of the exact test over a range of scenarios, and implemented a regression model to predict these values for two-sample multinomial settings. Our results show that the new test is uniformly more powerful than Fisher's, Barnard's, and Boschloo's tests, with gains in power as large as several hundred percent in certain scenarios. Lastly, we provide a real-life data example where the unadjusted unconditional exact test wrongly fails to reject the null hypothesis while the corrected unconditional exact test rejects it appropriately.
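The paper's regression-based correction cannot be reconstructed from the abstract, but the existing tests it is compared against are readily available in SciPy for 2×2 tables, as the sketch below shows with hypothetical counts.

```python
from scipy.stats import fisher_exact, barnard_exact, boschloo_exact

table = [[7, 12],    # hypothetical 2x2 counts: successes/failures per group
         [15, 4]]

print("Fisher  :", fisher_exact(table)[1])
print("Barnard :", barnard_exact(table).pvalue)
print("Boschloo:", boschloo_exact(table).pvalue)
```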

12.
The two-sided power (TSP) distribution is a flexible two-parameter distribution having the uniform, power-function, and triangular distributions as sub-models, and it is a reasonable alternative to the beta distribution in some cases. In this work, we introduce the TSP-binomial model, defined as a mixture of binomial distributions in which the binomial parameter p has a TSP distribution. We study its distributional properties and demonstrate its use on some data. It is shown that the newly defined model is a useful candidate for overdispersed binomial data.
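A hedged sketch of sampling from a TSP-binomial mixture is given below, assuming the van Dorp–Kotz parameterization of the TSP distribution on [0, 1] with mode theta and shape n (uniform at n = 1, triangular at n = 2); the parameterization and parameter values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def rtsp(size, theta, n, rng):
    """Inverse-CDF samples from a two-sided power distribution on [0, 1]
    with mode `theta` and shape `n` (assumed van Dorp-Kotz parameterization)."""
    u = rng.uniform(size=size)
    return np.where(
        u <= theta,
        theta * (u / theta) ** (1.0 / n),
        1.0 - (1.0 - theta) * ((1.0 - u) / (1.0 - theta)) ** (1.0 / n),
    )

def rtsp_binomial(size, m, theta, n, rng):
    """TSP-binomial draws: p ~ TSP(theta, n), then X | p ~ Binomial(m, p)."""
    p = rtsp(size, theta, n, rng)
    return rng.binomial(m, p)

rng = np.random.default_rng(42)
x = rtsp_binomial(5000, m=20, theta=0.3, n=4.0, rng=rng)
print(x.mean(), x.var())   # overdispersed relative to a plain binomial with the same mean
```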

13.
A new method for estimating the proportion of null effects is proposed for solving large-scale multiple comparison problems. It utilises maximum likelihood estimation of nonparametric mixtures, which also provides a density estimate of the test statistics. It overcomes the problem that the usual nonparametric maximum likelihood estimator cannot place positive probability at the location of the null effects when estimating a mixing distribution nonparametrically. The profile likelihood is further used to produce a range of null-proportion values for which the corresponding density estimates are all consistent. With a proper choice of a threshold function on the profile likelihood ratio, the upper endpoint of this range can be shown to be a consistent estimator of the null proportion. Numerical studies show that the proposed method has an apparently convergent trend in all cases studied and performs favourably compared with existing methods in the literature.

14.
We consider a finite mixture model with k components and a kernel distribution from a general one-parameter family. The problem of testing the hypothesis k = 2 versus k ≥ 3 is studied. There has been no general statistical testing procedure for this problem. We propose a modified likelihood ratio statistic in which, under both the null and the alternative hypotheses, the parameter estimates are obtained from a modified likelihood function. It is shown that the estimators of the support points are consistent. The asymptotic null distribution of the proposed modified likelihood ratio test is derived and found to be relatively simple and easily applied. Simulation studies for the asymptotic modified likelihood ratio test based on finite mixture models with normal, binomial and Poisson kernels suggest that the proposed test performs well. Simulation studies are also conducted for a bootstrap method with normal kernels. An example involving foetal movement data from a medical study illustrates the testing procedure.

15.
In survival data analysis, a significant amount of right censoring frequently occurs, indicating that there may be a proportion of individuals in the study for whom the event of interest will never happen. This is not accounted for by ordinary survival theory, and consequently survival models with a cure fraction have received a lot of attention in recent years. In this article, we consider the standard mixture cure rate model, in which a fraction p0 of the population consists of cured or immune individuals and the remaining 1 − p0 are not cured. We assume an exponential distribution for the survival time and a uniform-exponential distribution for the censoring time. In a simulation study, the impact of the informative uniform-exponential censoring on the coverage probabilities and lengths of asymptotic confidence intervals is analyzed using the Fisher information and observed information matrices.

16.
Estimating the proportion of true null hypotheses, π0, has attracted much attention in the recent statistical literature. Besides its apparent relevance for a set of specific scientific hypotheses, an accurate estimate of this parameter is key for many multiple testing procedures. Most existing methods for estimating π0 are motivated by the assumption of independence among test statistics, which is often not true in reality. Simulations indicate that, in the presence of dependence among test statistics, most existing estimators can perform poorly, mainly because of the increased variation of these estimators. In this paper, we propose several data-driven methods for estimating π0 that incorporate the distribution pattern of the observed p-values as a practical way to address potential dependence among test statistics. Specifically, we use a linear fit to give a data-driven estimate of the proportion of true-null p-values in (λ, 1] over the whole range [0, 1], instead of using the expected proportion at a single 1 − λ. We find that the proposed estimators may substantially decrease the variance of the estimated true null proportion and thus improve the overall performance.
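One reading of the linear-fit idea, sketched below as an assumption rather than the authors' exact estimator, is to regress the tail proportion of p-values exceeding λ on 1 − λ over a grid of λ values, since for true nulls Pr(P > λ) = π0(1 − λ), and take the fitted slope as the estimate. The λ grid and the simulated p-values are placeholders.

```python
import numpy as np

def pi0_linear_fit(pvalues, lambdas=None):
    """Estimate pi0 by a no-intercept least-squares line through the points
    (1 - lambda, proportion of p-values > lambda).

    For true nulls, Pr(P > lambda) = pi0 * (1 - lambda), so the slope
    estimates pi0 while borrowing strength across the whole lambda grid.
    """
    pvalues = np.asarray(pvalues)
    if lambdas is None:
        lambdas = np.arange(0.05, 0.96, 0.05)
    x = 1.0 - lambdas
    y = np.array([(pvalues > lam).mean() for lam in lambdas])
    slope = (x @ y) / (x @ x)          # least squares through the origin
    return min(1.0, slope)

rng = np.random.default_rng(3)
p = np.concatenate([rng.uniform(size=700), rng.beta(0.2, 8.0, size=300)])
print(pi0_linear_fit(p))   # roughly 0.7, with a small upward bias from the signals
```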

17.
A meta-analytic estimator of the proportion of positives in a sequence of screening experiments is proposed. The distribution-free estimator is based on the empirical distribution of P-values from the individual experiments, which is uniform under the global null hypothesis of no positives in the sequence of experiments performed. Under certain regularity conditions, the proportion of positives corresponds to the derivative of this distribution under the alternative hypothesis that some positives exist. The statistical properties of the estimator are established, including its bias, variance, and rate of convergence to normality. Optimal estimators with minimum mean squared error are also developed under specific alternative hypotheses. The application of the proposed methods is illustrated using data from a sequence of screening experiments with chemicals to determine their carcinogenic potential.

18.
In this article, we focus on general k-step step-stress accelerated life tests with Type-I censoring for two-parameter Weibull distributions based on the tampered failure rate (TFR) model. We obtain the optimum design for the tests under the criterion of minimizing the asymptotic variance of the maximum likelihood estimate of the pth percentile of the lifetime under normal operating conditions. Optimum test plans for simple step-stress accelerated life tests under Type-I censoring are developed for the Weibull distribution, and for the exponential distribution in particular. Finally, an example is provided to illustrate the proposed design, and a sensitivity analysis is conducted to investigate its robustness.

19.
The problem of estimating the total number of trials n in a binomial distribution is reconsidered in this article from the Bayesian viewpoint, for both known and unknown probability of success p. Bayes and empirical Bayes point estimates for n are proposed under the assumption of a left-truncated prior distribution for n and a beta prior distribution for p. Simulation studies are provided to compare the proposed estimates with the most familiar estimators of n.
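A hedged illustration of the Bayesian setup is sketched below: with a common p integrated out against a Beta(a, b) prior, the marginal likelihood of i.i.d. binomial counts is available in closed form, so a posterior over n can be computed by enumeration on a grid left-truncated at the largest observed count. The flat prior on the grid, the Beta(1, 1) prior, and the counts are placeholders, not the article's choices.

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_marginal_likelihood(n, xs, a=1.0, b=1.0):
    """Log marginal likelihood of counts xs ~ Binomial(n, p) i.i.d.,
    with the common success probability p integrated out against Beta(a, b)."""
    xs = np.asarray(xs)
    k, s = len(xs), xs.sum()
    log_binom_coeffs = np.sum(gammaln(n + 1) - gammaln(xs + 1) - gammaln(n - xs + 1))
    return log_binom_coeffs + betaln(a + s, b + k * n - s) - betaln(a, b)

def posterior_n(xs, n_max=200, a=1.0, b=1.0):
    """Posterior over n on a grid left-truncated at max(xs), flat prior on the grid."""
    n_grid = np.arange(max(xs), n_max + 1)
    log_post = np.array([log_marginal_likelihood(n, xs, a, b) for n in n_grid])
    post = np.exp(log_post - log_post.max())
    return n_grid, post / post.sum()

xs = [16, 18, 22, 25, 27]          # hypothetical observed binomial counts
n_grid, post = posterior_n(xs)
print(n_grid[np.argmax(post)])     # posterior mode for n
```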

20.
We consider the problem of estimating the proportion θ of true null hypotheses in a multiple testing context. The setup is classically modelled through a semiparametric mixture with two components: a uniform distribution on the interval [0,1] with prior probability θ, and a non-parametric density f. We discuss asymptotic efficiency results and establish that two different cases occur according to whether f vanishes on a non-empty interval or not. In the first case, we exhibit estimators converging at a parametric rate, compute the optimal asymptotic variance, and conjecture that no estimator is asymptotically efficient (i.e. attains the optimal asymptotic variance). In the second case, we prove that the quadratic risk of any estimator does not converge at a parametric rate. We illustrate those results on simulated data.

