Similar Articles
20 similar articles found (search time: 15 ms)
1.
孔圣元 《统计研究》1997,14(3):58-64
A Study on Questionnaire Survey Models for Sensitive Questions: the "Sum of Random Variables" Model. ABSTRACT: Based on the theory of the "sum of random variables" distribution, the author has put forward a new idea ("sum o...

2.
The paper introduces a χ²-approximation to multivariate kurtosis b2,p under normality. It requires calculating the third moment of b2,p, which is obtained. We compare the approximation with simulated percentage points and the normal approximation, and find it to be adequate for p = 1 and 2. For p = 3, the simple average of this estimate and the normal approximation is found to be generally superior to either approximation on its own. For p = 4, the normal approximation is best for non-extreme values of α.
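For reference, the statistic b2,p above is Mardia's multivariate kurtosis. A minimal numpy sketch of the statistic itself (not the paper's χ²-approximation) follows; the biased 1/n covariance is an assumption of this sketch:

```python
import numpy as np

def mardia_kurtosis(X):
    """Mardia's b_{2,p}: mean of squared Mahalanobis distances
    of the observations from the sample mean.
    Uses the biased (1/n) sample covariance."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    centered = X - X.mean(axis=0)
    S = centered.T @ centered / n
    Sinv = np.linalg.inv(S)
    # d2[i] = (x_i - xbar)' S^{-1} (x_i - xbar)
    d2 = np.einsum('ij,jk,ik->i', centered, Sinv, centered)
    return float(np.mean(d2 ** 2))
```

Under normality, b2,p is close to p(p+2) for large n, which is what the approximations in the abstract refine.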

3.
A Dialectical Analysis of Setting Up the Null Hypothesis
王静  史济洲 《统计研究》2010,27(6):95-99
From the perspective of statistical decision theory, this paper raises the question of how to choose the null hypothesis in a one-sided hypothesis test, surveys the various explanations and solutions offered in the literature for this question, applies the principles of materialist dialectics to understand and resolve the contradictions involved, and finally gives a brief account of the dialectical relationship among hypothesis testing, confidence intervals and statistical decision-making.

4.
King’s Point Optimal (PO) test of a simple null hypothesis is useful in a number of ways; for example, it can be used to trace the power envelope against which existing tests can be compared. However, this test cannot always be constructed when testing a composite null hypothesis. It is suggested in the literature that approximate PO (APO) tests can overcome this problem, but they also have some drawbacks. This paper investigates whether King’s PO test can be used for testing a composite null in the presence of nuisance parameters via a maximized Monte Carlo (MMC) approach, with encouraging results.

5.
The problem of testing a point null hypothesis involving an exponential mean is considered. The usual interpretation of P-values as evidence against precise hypotheses is faulty. As in Berger and Delampady (1986) and Berger and Sellke (1987), lower bounds on Bayesian measures of evidence over wide classes of priors are found, emphasizing the conflict between posterior probabilities and P-values. A hierarchical Bayes approach is also considered as an alternative to computing lower bounds and “automatic” Bayesian significance tests, which further illustrates the point that P-values are highly misleading measures of evidence for tests of point null hypotheses.
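The conflict between P-values and posterior probabilities described above can be made concrete with the well-known −e·p·log p calibration, a lower bound on the Bayes factor over wide classes of priors. This is a hedged sketch of that generic calibration, not the exponential-mean bounds of this particular paper:

```python
import math

def bayes_factor_lower_bound(p):
    """Sellke-Bayarri-Berger calibration: a lower bound on the
    Bayes factor in favor of H0, valid for 0 < p < 1/e."""
    if not (0 < p < 1 / math.e):
        raise ValueError("calibration valid only for 0 < p < 1/e")
    return -math.e * p * math.log(p)

def posterior_prob_lower_bound(p, pi0=0.5):
    """Corresponding lower bound on P(H0 | data), assuming
    prior probability pi0 on the null."""
    b = bayes_factor_lower_bound(p)
    return 1.0 / (1.0 + (1 - pi0) / pi0 / b)
```

For p = 0.05 the bound on P(H0 | data) is about 0.29, far above 0.05, which is exactly the conflict the abstract emphasizes.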

6.
The multiple hypothesis testing literature has recently seen rapid development, with particular attention to controlling the false discovery rate (FDR) based on p-values. While these are not the only methods for dealing with multiplicity, inference with small samples and large sets of hypotheses depends on the specific choice of the p-value used to control the FDR in the presence of nuisance parameters. In this paper we propose to use the partial posterior predictive p-value [Bayarri, M.J., Berger, J.O., 2000. p-values for composite null models. J. Amer. Statist. Assoc. 95, 1127–1142], which overcomes this difficulty. This choice is motivated by theoretical considerations and examples. Finally, an application to a controlled microarray experiment is presented.
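The p-value-based FDR control the abstract refers to is typically the Benjamini-Hochberg step-up procedure; whichever p-values are plugged in (including the partial posterior predictive ones proposed here), the control step can be sketched as:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.
    Returns the sorted indices of hypotheses rejected at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    # find the largest rank whose p-value clears its step-up threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank
    return sorted(order[:k])
```

The procedure controls the FDR at level q when the p-values are valid, which is precisely why the choice of p-value under composite nulls matters.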

7.
In this note we suggest a class of two-sample test statistics which have, as their null distribution, the Mann-Whitney-Wilcoxon null distribution. An interesting property of these statistics is that many are not rank statistics; that is, they cannot be computed from the ranks of the original observations. However, they are still distribution-free when the two populations are identical. This class contains the Mann-Whitney-Wilcoxon test for the equality of location parameters of two distributions and a two-sample test for equality of spreads of two distributions recently investigated by Fligner and Killeen (1976).
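The reference null distribution here is that of the Mann-Whitney U statistic, which can be computed directly from pairwise comparisons; a minimal sketch (the paper's non-rank statistics are not reproduced):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U: the number of pairs (x_i, y_j) with x_i > y_j,
    with ties counted as 1/2."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u
```

Note the identity U(x, y) + U(y, x) = len(x) * len(y), which follows because every pair contributes exactly 1 in total.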

8.
Severe departures from normality occur frequently for null distributions of statistics associated with applications of multi-response permutation procedures (MRPP) for either small or large finite populations. This paper describes the commonly encountered situation associated with asymptotic non-normality for null distributions of MRPP statistics which does not depend on the underlying multivariate distribution. In addition, this paper establishes the existence of a non-degenerate underlying distribution for which the null distributions of MRPP statistics are asymptotically non-normal for essentially all size structure configurations. It is known that MRPP statistics are symmetric versions of a broader class of statistics, most of which are asymmetric. Because of the non-normality associated with null distributions of MRPP statistics, this paper includes necessary results for inferences based on the exact first three moments of any statistic in this broader class (analogous to existing results for MRPP statistics).

9.
We derive an asymptotic theory of nonparametric estimation for a time series regression model Zt=f(Xt)+Wt, where {Xt} and {Zt} are observed nonstationary processes, and {Wt} is an unobserved stationary process. The class of nonstationary processes allowed for {Xt} is a subclass of the class of null recurrent Markov chains. This subclass contains the random walk, unit root processes and nonlinear processes. The process {Wt} is assumed to be linear and stationary.
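A standard nonparametric estimator of the regression function f in a model of this form is the Nadaraya-Watson kernel smoother; this is an illustrative stand-in (the paper's specific estimator and its null-recurrent asymptotics are not reproduced here):

```python
import math

def nadaraya_watson(x0, xs, zs, h):
    """Gaussian-kernel Nadaraya-Watson estimate of f(x0)
    from observed pairs (X_t, Z_t) with bandwidth h."""
    weights = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    total = sum(weights)
    return sum(w * z for w, z in zip(weights, zs)) / total
```

The estimate is a weighted average of the Z_t, so when all responses are equal it returns that common value regardless of x0 or h.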

10.
A Bayesian test for the point null testing problem in the multivariate case is developed. A procedure to get the mixed distribution using the prior density is suggested. For comparisons between the Bayesian and classical approaches, lower bounds on posterior probabilities of the null hypothesis, over some reasonable classes of prior distributions, are computed and compared with the p-value of the classical test. With our procedure, a better approximation is obtained because the p-value is in the range of the Bayesian measures of evidence.

11.
This paper gives an exposition of the use of the posterior likelihood ratio for testing point null hypotheses in a fully Bayesian framework. Connections between the frequentist P-value and the posterior distribution of the likelihood ratio are used to interpret and calibrate P-values in a Bayesian context, and examples are given to show the use of simple posterior simulation methods to provide Bayesian tests of common hypotheses.

12.
In this paper, we study multi-class differential gene expression detection for microarray data. We propose a likelihood-based approach to estimating an empirical null distribution that incorporates gene interactions and provides more accurate false-positive control than the commonly used permutation or theoretical null distribution-based approaches. We propose to rank important genes by p-values or local false discovery rate based on the estimated empirical null distribution. Through simulations and an application to lung transplant microarray data, we illustrate the competitive performance of the proposed method.
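The estimator above is likelihood-based; as a simple illustration of the empirical-null idea (fitting the null N(delta, sigma²) to the central bulk of test statistics rather than assuming N(0, 1)), here is a robust median/MAD stand-in, not the paper's method:

```python
import statistics

def empirical_null_params(z):
    """Rough empirical-null fit N(delta, sigma^2) from z-values:
    median for the null center delta, MAD-based scale for sigma.
    The 0.6745 constant converts MAD to sigma under normality."""
    med = statistics.median(z)
    mad = statistics.median(abs(v - med) for v in z)
    return med, mad / 0.6745
```

Genes are then ranked by how far their z-values fall in the tails of this fitted null instead of the theoretical N(0, 1).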

13.
We present new techniques for computing exact distributions of ‘Friedman-type’ statistics. Representing the null distribution by a generating function allows for the use of general, not necessarily integer-valued rank scores. Moreover, we use symmetry properties of the multivariate generating function to accelerate computations. The methods also work for cases with ties and for permutation statistics. We discuss some applications: the classical Friedman rank test, the normal scores test, the Friedman permutation test, the Cochran–Cox test and the Kepner–Robinson test. Finally, we briefly discuss our own software for computing exact p-values.
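For orientation, the classical Friedman statistic whose exact null distribution is being computed can be sketched as follows (plain integer ranks, no ties; the generating-function machinery of the paper is not reproduced):

```python
def friedman_statistic(data):
    """Friedman chi-square statistic for n blocks x k treatments,
    assuming no ties within a block. data[i][j] is the observation
    for block i, treatment j."""
    n = len(data)
    k = len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        # within-block ranks: 1 = smallest observation
        order = sorted(range(k), key=lambda j: row[j])
        for r, j in enumerate(order, start=1):
            rank_sums[j] += r
    return (12.0 / (n * k * (k + 1))) * sum(R * R for R in rank_sums) - 3.0 * n * (k + 1)
```

When every block ranks the treatments identically, the statistic attains its maximum n(k−1).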

14.
The multivariate skew-t distribution (J Multivar Anal 79:93–113, 2001; J R Stat Soc, Ser B 65:367–389, 2003; Statistics 37:359–363, 2003) includes the Student t, skew-Cauchy and Cauchy distributions as special cases and the normal and skew-normal ones as limiting cases. In this paper, we explore the use of Markov Chain Monte Carlo (MCMC) methods to develop a Bayesian analysis of repeated measures, pretest/post-test data, under the multivariate null intercept measurement error model (J Biopharm Stat 13(4):763–771, 2003), where the random errors and the unobserved value of the covariate (latent variable) follow Student t and skew-t distributions, respectively. The results and methods are numerically illustrated with an example in the field of dentistry.

15.
The length of the gap is the key factor affecting its reliability. Based on the mechanism of the gap null gate, this paper treats the two endpoint thresholds of the gap length as bivariate random variables and establishes successful-response models. A score test statistic is presented to test the correlation coefficient, and the DIC criterion is provided to compare the models. With the experimental data of the gap null gate, we build Probit and Logit models as the successful-response models and show that the correlation coefficients in both models can be regarded as 0. By comparing DIC values, we find that the Probit model is more suitable for describing the distribution of the endpoint thresholds of the reliability window. Finally, both point and interval estimation results for the reliability window are given to illustrate the feasibility of the method shown in the paper.
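The two successful-response models compared above differ only in their link function; a minimal sketch of the response-probability curves, with hypothetical coefficients beta0 and beta1 standing in for the fitted values (which the abstract does not report):

```python
import math

def logit_response_prob(x, beta0, beta1):
    """Logit successful-response model: P(response | stimulus x)
    through the logistic link. beta0, beta1 are illustrative."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

def probit_response_prob(x, beta0, beta1):
    """Probit successful-response model: same linear predictor,
    but through the standard normal CDF."""
    t = beta0 + beta1 * x
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

Both links give probability 0.5 where the linear predictor is zero; the probit has lighter tails, which is one reason DIC can favor one model over the other on the same data.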

16.
The overall Type I error computed in the traditional way may be inflated if many hypotheses are compared simultaneously. The family-wise error rate (FWER) and false discovery rate (FDR) are among the commonly used error rates for measuring Type I error in the multiple hypothesis setting. Many FWER- and FDR-controlling procedures have been proposed and can control the desired FWER/FDR under certain scenarios. Nevertheless, these controlling procedures become too conservative when only some of the hypotheses are true nulls. Benjamini and Hochberg (J. Educ. Behav. Stat. 25:60–83, 2000) proposed an adaptive FDR-controlling procedure that uses information about the number of true null hypotheses (m0) to overcome this problem. Since m0 is unknown, estimators of m0 are needed. Benjamini and Hochberg (2000) suggested a graphical approach to constructing an estimator of m0, which is shown to overestimate m0 (see Hwang in J. Stat. Comput. Simul. 81:207–220, 2011). Following a similar construction, this paper proposes new estimators of m0. Monte Carlo simulations are used to evaluate the accuracy and precision of the new estimators, and the feasibility of the resulting adaptive procedures is evaluated under various simulation settings.
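As a simple analogue of the m0 estimators discussed above (the paper's own constructions differ), a Storey-type estimator counts p-values above a tuning threshold, where true nulls are roughly uniform:

```python
def estimate_m0(pvals, lam=0.5):
    """Storey-type estimate of the number of true nulls m0:
    p-values above lam come mostly from true nulls, which are
    uniform on (0, 1), so #{p > lam} is about m0 * (1 - lam).
    The estimate is capped at m."""
    m = len(pvals)
    count = sum(1 for p in pvals if p > lam)
    return min(m, count / (1.0 - lam))
```

Plugging an m0 estimate into the adaptive BH procedure (running BH at level q * m / m0_hat) recovers the power lost to conservativeness when many hypotheses are non-null.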

17.
18.
The main purpose of this paper is first to introduce a new family of empirical test statistics for testing a simple null hypothesis when the vector of parameters of interest is defined through a specific set of unbiased estimating functions. This family of test statistics is based on a distance between two probability vectors: the first obtained by maximizing the empirical likelihood (EL) on the vector of parameters, and the second defined from the fixed vector of parameters under the simple null hypothesis. The distance considered for this purpose is the phi-divergence measure. The asymptotic distribution is then derived for this family of test statistics. The proposed methodology is illustrated through the well-known data of Newcomb's measurements of the passage time of light. A simulation study compares its performance with that of the EL ratio test when confidence intervals are constructed based on the respective statistics for small sample sizes. The results suggest that the ‘empirical modified likelihood ratio test statistic’ provides a competitive alternative to the EL ratio test statistic, and is also more robust in the presence of contamination in the data. Finally, we propose empirical phi-divergence test statistics for testing a composite null hypothesis and present some asymptotic as well as simulation results evaluating the performance of these test procedures.

19.
We consider the problem of m simultaneous statistical test problems with composite null hypotheses. Usually, marginal p-values are computed under least favorable parameter configurations (LFCs), thus being over-conservative under non-LFCs. Our proposed randomized p-value leads to a tighter exhaustion of the marginal (local) significance level. In turn, it is stochastically larger than the LFC-based p-value under alternatives. While these distributional properties are typically nonsensical for m = 1, the exhaustion of the local significance level is extremely helpful for cases with m > 1 in connection with data-adaptive multiple tests, as we demonstrate by considering multiple one-sided tests for Gaussian means.

20.
A residual-based test of the null of cointegration in panel data
This paper proposes a residual-based Lagrange Multiplier (LM) test for the null of cointegration in panel data. The test is analogous to the locally best unbiased invariant (LBUI) for a moving average (MA) unit root. The asymptotic distribution of the test is derived under the null. Monte Carlo simulations are performed to study the size and power properties of the proposed test.

Overall, the empirical sizes of the LM-FM and LM-DOLS tests are close to the true size even in small samples. The power is quite good for panels where T ≥ 50, and decent for panels with fewer observations in T. In our fixed sample of N = 50 and T = 50, in the presence of a moving average component and correlation, the LM-DOLS test seems to be better at correcting these effects, although in some cases the LM-FM test is more powerful.

Although much of the non-stationary time series econometrics has been criticized for having more to do with the specific properties of the data set rather than underlying economic models, the recent development of the cointegration literature has allowed for a concrete bridge between economic long run theory and time series methods. Our test now allows for the testing of the null of cointegration in a panel setting and should be of considerable interest to economists in a wide variety of fields.
