Similar References
20 similar references found
1.
In this article, the problem of testing the equality of coefficients of variation in a multivariate normal population is considered, and an asymptotic approach and a generalized p-value approach based on the concept of a generalized test variable are proposed. Monte Carlo simulation studies show that the proposed generalized p-value test has good empirical sizes and is better than the asymptotic approach. In addition, the problems of hypothesis testing and confidence-interval construction for the common coefficient of variation of a multivariate normal population are considered, and a generalized p-value and a generalized confidence interval are proposed. Using Monte Carlo simulation, we find that the coverage probabilities and expected lengths of this generalized confidence interval are satisfactory, and that the empirical sizes of the generalized p-value test are close to the nominal level. We illustrate our approaches using a real data set.
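As a hedged illustration of the generalized-confidence-interval idea mentioned in this abstract (a minimal univariate sketch, not the paper's multivariate procedure), the snippet below builds the standard generalized pivotal quantity for the coefficient of variation of a single normal sample. The function name, simulation size, and toy data are assumptions made for the example.

```python
import numpy as np

def gci_cv(x, alpha=0.05, n_draws=100_000, rng=None):
    """Generalized confidence interval for the coefficient of variation sigma/mu
    of a normal sample, via standard generalized pivotal quantities
    (univariate sketch only, not the paper's multivariate method)."""
    rng = np.random.default_rng(rng)
    n = len(x)
    xbar, s = x.mean(), x.std(ddof=1)

    # GPQ for sigma: R_sigma = sqrt((n-1) s^2 / U), with U ~ chi^2_{n-1}
    u = rng.chisquare(n - 1, size=n_draws)
    r_sigma = np.sqrt((n - 1) * s**2 / u)

    # GPQ for mu: R_mu = xbar - Z * R_sigma / sqrt(n), with Z ~ N(0, 1)
    z = rng.standard_normal(n_draws)
    r_mu = xbar - z * r_sigma / np.sqrt(n)

    # GPQ for the coefficient of variation and its percentile interval
    r_cv = r_sigma / r_mu
    lo, hi = np.quantile(r_cv, [alpha / 2, 1 - alpha / 2])
    return lo, hi

x = np.random.default_rng(1).normal(loc=10.0, scale=2.0, size=25)  # true CV = 0.2
print(gci_cv(x))
```

The interval is read off from the Monte Carlo quantiles of the pivotal quantity; a generalized p-value for a hypothesis about the CV would be computed from the same draws.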

2.
In this article, we focus on one-sided hypothesis testing for univariate linear calibration, where a normally distributed response variable and an explanatory variable are involved. Observations of the response variable corresponding to known values of the explanatory variable are used to make inferences about a single unknown value of the explanatory variable. We apply generalized inference to the calibration problem and take the generalized p-value as the test statistic to develop a new p-value for one-sided hypothesis testing, which we refer to as the one-sided posterior predictive p-value. The behavior of the one-sided posterior predictive p-value is numerically compared with that of the generalized p-value, and simulations show that the proposed p-value is quite satisfactory in terms of frequentist performance.

3.
This paper proposes a variable selection method for detecting abnormal items based on the T2 test when observations on abnormal items are available. Based on unbiased estimates of the powers for all subsets of variables, the method selects the subset of variables that maximizes the power estimate. Since more than one subset of variables frequently maximizes the power estimate, the average p-value of the rejected items is used as a second criterion. Although the performance of the method depends on the sample size for the abnormal items and the true power values for all subsets of variables, numerical experiments show the effectiveness of the proposed method. Since normal and abnormal items are simulated using one-factor and two-factor models, basic properties of the power functions for these models are also investigated.

4.
This paper considers p-value based step-wise rejection procedures for testing multiple hypotheses. Existing procedures use constants as critical values at all steps. With the intention of incorporating the exact magnitude of the p-values at the earlier steps into the decisions at the later steps, this paper applies a different strategy in which the critical values at the later steps are determined as functions of the p-values from the earlier steps. As a result, we derive a new equality and, building on it, develop a two-step rejection procedure. The new procedure is a short-cut version of a step-up procedure and is very simple to apply. In terms of power, the proposed procedure is generally comparable to existing ones and markedly superior when the largest p-value is anticipated to be less than 0.5.

5.
In the framework of null hypothesis significance testing for functional data, we propose a procedure able to select the intervals of the domain responsible for the rejection of a null hypothesis. An unadjusted p-value function and an adjusted one are the outputs of the procedure, termed interval-wise testing. Depending on the type and level α of type-I error control, significant intervals can be selected by thresholding the two p-value functions at level α. We prove that the unadjusted (adjusted) p-value function point-wise (interval-wise) controls the probability of type-I error and is point-wise (interval-wise) consistent. To highlight the gain in interpretability of the phenomenon under study, we applied interval-wise testing to the analysis of a benchmark functional data set, the Canadian daily temperatures. The new procedure provides insights that current state-of-the-art procedures do not, supporting similar advantages in the analysis of functional data with less prior knowledge.

6.
In statistical hypothesis testing, a p-value is expected to be distributed as the uniform distribution on the interval (0, 1) under the null hypothesis. However, some p-values, such as the generalized p-value and the posterior predictive p-value, cannot be guaranteed to have this property. In this paper, we propose an adaptive p-value calibration approach and show that the calibrated p-value is asymptotically distributed as the uniform distribution. For the Behrens–Fisher problem and a goodness-of-fit test under a normal model, the calibrated p-values are constructed and their behavior is evaluated numerically. Simulations show that the calibrated p-values are superior to the original ones.
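The paper's adaptive calibration is not reproduced here; the sketch below only illustrates the generic idea behind calibrating a non-uniform p-value, namely estimating its null distribution by simulation under a plug-in null model and reporting the empirical CDF evaluated at the observed value. The function names, the toy conservative p-value, and the simulation sizes are assumptions made for the example.

```python
import numpy as np
from scipy import stats

def calibrate_pvalue(p_obs, simulate_null_data, compute_pvalue, n_sim=2000, rng=None):
    """Generic Monte Carlo calibration: p_cal = Pr_H0(p_rep <= p_obs),
    with H0 approximated by a plug-in null model (sketch of the idea,
    not the paper's adaptive procedure)."""
    rng = np.random.default_rng(rng)
    p_rep = np.array([compute_pvalue(simulate_null_data(rng)) for _ in range(n_sim)])
    return (1 + np.sum(p_rep <= p_obs)) / (1 + n_sim)

# Toy illustration: a deliberately conservative p-value (square root of an exact one).
def simulate_null_data(rng):
    return rng.standard_normal(30)

def conservative_pvalue(x):
    z = np.sqrt(len(x)) * x.mean()               # z-statistic for H0: mu = 0, sigma = 1 known
    return np.sqrt(2 * stats.norm.sf(abs(z)))    # artificially conservative p-value

x_obs = np.random.default_rng(7).standard_normal(30)
p_obs = conservative_pvalue(x_obs)
print(p_obs, calibrate_pvalue(p_obs, simulate_null_data, conservative_pvalue))
```

The calibrated value is (approximately) uniform under the fitted null even though the raw p-value is not, which is the property the abstract is concerned with.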

7.
In testing statistical hypotheses, as in other statistical problems, we may be confronted with fuzzy concepts. This paper deals with the problem of testing hypotheses when the hypotheses are fuzzy and the data are crisp. We first introduce the notion of a fuzzy p-value by applying the extension principle, and then present an approach for testing fuzzy hypotheses by comparing a fuzzy p-value with a fuzzy significance level, based on a comparison of two fuzzy sets. Numerical examples are also provided to illustrate the approach.

8.
While it is often argued that a p-value is a probability (see Wasserstein and Lazar), we argue that a p-value is not defined as a probability. A p-value is a bijection of the sufficient statistic for a given test that maps to the same scale as the Type I error probability. As such, the use of p-values in a test should be no more a source of controversy than the use of a sufficient statistic. It is demonstrated that there is, in fact, no ambiguity about what a p-value is, contrary to what has been claimed in recent public debates in the applied statistics community. We give a simple example to illustrate that rejecting the use of p-values in testing for a normal mean parameter is conceptually no different from rejecting the use of a sample mean. The p-value is innocent; the problem arises from its misuse and misinterpretation. The way that p-values have been informally defined and interpreted appears to have led to tremendous confusion and controversy regarding their place in statistical analysis.
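A small hedged illustration of the abstract's point for the normal-mean example: in a one-sided z-test the p-value is a strictly decreasing transform of the sample mean (the sufficient statistic), so thresholding one is exactly the same decision as thresholding the other. The data and parameter values below are illustrative.

```python
import numpy as np
from scipy import stats

# One-sided test of H0: mu <= 0 vs H1: mu > 0, with sigma known.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=40)
n, sigma = len(x), 1.0

xbar = x.mean()                                  # sufficient statistic for mu
z = np.sqrt(n) * xbar / sigma
p = stats.norm.sf(z)                             # p-value: strictly decreasing in xbar

# "Reject if p <= alpha" is the same decision as "reject if xbar >= critical value".
alpha = 0.05
crit = sigma * stats.norm.isf(alpha) / np.sqrt(n)
print(p <= alpha, xbar >= crit)                  # the two booleans always agree
```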

9.
Stepwise methods for variable selection are frequently used to determine the predictors of an outcome in generalized linear models. Although such methods are widely used within the scientific community, it is well known that tests on the explained deviance of the selected model are biased. This arises from the fact that the traditional test statistics upon which these methods are based were intended for testing pre-specified hypotheses, whereas the tested model is actually selected through a data-driven procedure. A multiplicity problem therefore arises. In this work, we define and discuss a nonparametric procedure to adjust the p-value of the model selected by any stepwise selection method. The unbiasedness and consistency of the method are also proved. A simulation study shows the validity of this procedure. Theoretical differences from previous works in the same field are also discussed.

10.
In this article, we consider the problem of selecting functional variables using L1 regularization in a functional linear regression model with a scalar response and functional predictors, in the presence of outliers. Since the LASSO is a special case of penalized least-squares regression with an L1 penalty, it suffers from heavy-tailed errors and/or outliers in the data. Recently, Least Absolute Deviation (LAD) and LASSO methods have been combined (the LAD-LASSO regression method) to carry out robust parameter estimation and variable selection simultaneously for a multiple linear regression model. However, selecting functional predictors with the LASSO fails, since a functional predictor is represented by multiple parameters. Therefore, the group LASSO is used for selecting functional predictors, since it selects grouped variables rather than individual variables. In this study, we propose a robust functional predictor selection method, the LAD-group LASSO, for a functional linear regression model with a scalar response and functional predictors. We illustrate the performance of the LAD-group LASSO on both simulated and real data.
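The LAD-group LASSO itself is not reproduced here; as a rough sketch of the grouping idea only, the snippet below represents each "functional" predictor by a block of basis coefficients and runs a plain squared-error group LASSO by proximal gradient with block soft-thresholding. The squared-error loss (instead of the paper's robust LAD loss), the penalty level, the step size, and the toy data are all assumptions made for the example.

```python
import numpy as np

def group_lasso(X_groups, y, lam=1.0, n_iter=500, step=None):
    """Squared-error group LASSO via proximal gradient (ISTA) with block
    soft-thresholding; one coefficient block per functional predictor.
    Sketch only -- the paper combines the group penalty with a LAD loss."""
    X = np.hstack(X_groups)
    sizes = [g.shape[1] for g in X_groups]
    idx = np.cumsum([0] + sizes)
    beta = np.zeros(X.shape[1])
    if step is None:                              # 1 / Lipschitz constant of the gradient
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        z = beta - step * grad
        for j in range(len(sizes)):               # block soft-thresholding (proximal step)
            block = z[idx[j]:idx[j + 1]]
            norm = np.linalg.norm(block)
            shrink = max(0.0, 1.0 - step * lam * np.sqrt(sizes[j]) / norm) if norm > 0 else 0.0
            beta[idx[j]:idx[j + 1]] = shrink * block
    return [beta[idx[j]:idx[j + 1]] for j in range(len(sizes))]

# Toy data: 3 "functional" predictors, each summarized by 5 basis coefficients;
# only the first predictor is truly related to the scalar response.
rng = np.random.default_rng(0)
X_groups = [rng.standard_normal((100, 5)) for _ in range(3)]
y = X_groups[0] @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) + 0.1 * rng.standard_normal(100)
blocks = group_lasso(X_groups, y, lam=20.0)
print([round(float(np.linalg.norm(b)), 2) for b in blocks])  # non-zero blocks = selected predictors
```

Because the penalty acts on whole blocks, an entire functional predictor is either kept or dropped, which is why the grouped penalty is needed here in place of the ordinary LASSO.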

11.
For the sign testing problem for normal variances, we develop a heuristic testing procedure based on the concepts of the generalized test variable and the generalized p-value. A detailed simulation study is conducted to empirically investigate the performance of the proposed method. The simulation study shows that, especially for small sample sizes, the proposed test not only adequately controls the empirical size at the nominal level but is also uniformly more powerful than the likelihood ratio test, Gutmann's test, Li and Sinha's test, and Liu and Chan's test, showing that the proposed method can be recommended in practice. The proposed method is illustrated with published data.
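The comparison in this abstract rests on simulated empirical size and power; a minimal generic sketch of how such quantities are estimated is below. The test used here (an ordinary one-sided chi-square test for a single variance) is purely a stand-in, not the paper's generalized procedure, and the sample sizes and replication counts are illustrative.

```python
import numpy as np
from scipy import stats

def chi2_var_pvalue(x, sigma0_sq=1.0):
    """One-sided chi-square test of H0: sigma^2 <= sigma0^2 (stand-in test)."""
    n = len(x)
    t = (n - 1) * x.var(ddof=1) / sigma0_sq
    return stats.chi2.sf(t, df=n - 1)

def empirical_rate(true_sigma, n=10, alpha=0.05, n_rep=20_000, rng=None):
    """Fraction of replications in which H0 is rejected: the empirical size when
    true_sigma sits on the null boundary, the empirical power otherwise."""
    rng = np.random.default_rng(rng)
    rejections = 0
    for _ in range(n_rep):
        x = rng.normal(0.0, true_sigma, size=n)
        rejections += chi2_var_pvalue(x) <= alpha
    return rejections / n_rep

print("empirical size :", empirical_rate(true_sigma=1.0))   # should be close to 0.05
print("empirical power:", empirical_rate(true_sigma=1.5))
```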

12.
This paper extends the classical methods of analysis of a two-way contingency table to the fuzzy environment for two cases: (1) when the available sample of observations is reported as imprecise data, and (2) when we prefer to categorize the variables using linguistic terms rather than crisp quantities. For this purpose, the α-cuts approach is used to extend the usual concepts of the test statistic and p-value to a fuzzy test statistic and a fuzzy p-value. In addition, some measures of association are extended to fuzzy versions in order to evaluate the dependence in such contingency tables. Some practical examples are provided to explain the applicability of the proposed methods in real-world problems.

13.
In this article, we introduce two goodness-of-fit tests for testing normality based on the concept of the posterior predictive p-value. The discrepancy variables selected are the Kolmogorov-Smirnov (KS) and Berk-Jones (BJ) statistics, and the prior chosen is Jeffreys' prior. The constructed posterior predictive p-values are shown to be distributed independently of the unknown parameters under the null hypothesis, so they can be used as test statistics. Simulations show that the new tests are more powerful than the corresponding classical tests against most of the alternatives considered.
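A rough sketch of the generic posterior predictive p-value construction with a KS discrepancy under Jeffreys' prior follows; the paper's exact discrepancy definitions and implementation may differ, and the function name, simulation size, and toy data are assumptions made for the example.

```python
import numpy as np
from scipy import stats

def ppp_normality_ks(y, n_sim=2000, rng=None):
    """Posterior predictive p-value for normality using the KS statistic as the
    discrepancy and Jeffreys' prior pi(mu, sigma^2) proportional to 1/sigma^2."""
    rng = np.random.default_rng(rng)
    y = np.asarray(y, float)
    n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)
    exceed = 0
    for _ in range(n_sim):
        # Draw (mu, sigma^2) from the posterior under Jeffreys' prior.
        sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)
        mu = rng.normal(ybar, np.sqrt(sigma2 / n))
        # Discrepancy: KS distance of the data (and of a replicate) from N(mu, sigma).
        d_obs = stats.kstest(y, "norm", args=(mu, np.sqrt(sigma2))).statistic
        y_rep = rng.normal(mu, np.sqrt(sigma2), size=n)
        d_rep = stats.kstest(y_rep, "norm", args=(mu, np.sqrt(sigma2))).statistic
        exceed += d_rep >= d_obs
    return exceed / n_sim

y = np.random.default_rng(3).exponential(size=40)  # clearly non-normal data
print(ppp_normality_ks(y))                         # a small p-value is expected
```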

14.
We discuss the problems that the null hypothesis significance testing (NHST) paradigm poses for replication and, more broadly, in the biomedical and social sciences, as well as how these problems remain unresolved by proposals involving modified p-value thresholds, confidence intervals, and Bayes factors. We then discuss our own proposal, which is to abandon statistical significance. We recommend dropping the NHST paradigm—and the p-value thresholds intrinsic to it—as the default statistical paradigm for research, publication, and discovery in the biomedical and social sciences. Specifically, we propose that the p-value be demoted from its threshold screening role and instead, treated continuously, be considered along with currently subordinate factors (e.g., related prior evidence, plausibility of mechanism, study design and data quality, real-world costs and benefits, novelty of finding, and other factors that vary by research domain) as just one among many pieces of evidence. We have no desire to "ban" p-values or other purely statistical measures. Rather, we believe that such measures should not be thresholded and that, thresholded or not, they should not take priority over the currently subordinate factors. We also argue that it seldom makes sense to calibrate evidence as a function of p-values or other purely statistical measures. We offer recommendations for how our proposal can be implemented in the scientific publication process as well as in statistical decision making more broadly.

15.
This paper deals with the problem of testing statistical hypotheses when both the hypotheses and the data are fuzzy. To this end, we first introduce the concept of a fuzzy p-value and then develop an approach for testing fuzzy hypotheses by comparing a fuzzy p-value with a fuzzy significance level. Numerical examples are provided to illustrate the approach for different cases.

16.
We consider the problem of m simultaneous statistical tests with composite null hypotheses. Usually, marginal p-values are computed under least favorable parameter configurations (LFCs), and they are therefore over-conservative under non-LFCs. Our proposed randomized p-value leads to a tighter exhaustion of the marginal (local) significance level; in turn, it is stochastically larger than the LFC-based p-value under alternatives. While these distributional properties are typically nonsensical for m = 1, the exhaustion of the local significance level is extremely helpful for m > 1 in connection with data-adaptive multiple tests, as we demonstrate by considering multiple one-sided tests for Gaussian means.

17.
We consider seven exact unconditional testing procedures for comparing adjusted incidence rates between two groups from a Poisson process. Exact tests are always preferable because they guarantee the test size in small- to medium-sample settings. Han [Comparing two independent incidence rates using conditional and unconditional exact tests. Pharm Stat. 2008;7(3):195–201] compared the performance of partial-maximization p-values based on the Wald test statistic, the likelihood ratio test statistic, and the score test statistic, and of the conditional p-value. These four testing procedures do not perform consistently, as the results depend on the choice of test statistic for general alternatives. We consider the approach based on estimation and partial maximization and compare it to the procedures studied by Han (2008) for testing superiority. The procedures are compared with regard to actual type I error rate and power under various conditions. An example from a biomedical research study is provided to illustrate the testing procedures. The approach based on partial maximization using the score test is recommended due to its comparable performance and computational advantage in large-sample settings. Additionally, the approach based on estimation and partial maximization performs consistently for all three test statistics and is also recommended for use in practice.
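A rough sketch of a partial-maximization (unconditional exact) p-value for comparing two Poisson rates with the score statistic is shown below, for the one-sided alternative lambda1 > lambda2. The nuisance-parameter grid, the truncation rule for the Poisson supports, and the exposure times in the example are illustrative assumptions; the seven procedures compared in the paper involve further refinements not reproduced here.

```python
import numpy as np
from scipy import stats

def score_stat(y1, y2, t1, t2):
    """Score statistic for H0: lambda1 = lambda2, standardized with the pooled rate."""
    pooled = (y1 + y2) / (t1 + t2)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (y1 / t1 - y2 / t2) / np.sqrt(pooled * (1 / t1 + 1 / t2))
    return np.where(pooled > 0, z, 0.0)

def pmax_pvalue(x1, x2, t1, t2, lam_grid=np.linspace(0.01, 1.0, 100)):
    """Partial-maximization exact p-value: maximize, over the common rate lambda
    (illustrative grid), the null probability of outcomes at least as extreme."""
    t_obs = score_stat(x1, x2, t1, t2)
    p = 0.0
    for lam in lam_grid:
        # Truncate each Poisson support well beyond its mean.
        m1 = int(lam * t1 + 10 * np.sqrt(lam * t1) + 20)
        m2 = int(lam * t2 + 10 * np.sqrt(lam * t2) + 20)
        y1, y2 = np.meshgrid(np.arange(m1 + 1), np.arange(m2 + 1), indexing="ij")
        extreme = score_stat(y1, y2, t1, t2) >= t_obs          # one-sided: lambda1 > lambda2
        prob = np.outer(stats.poisson.pmf(np.arange(m1 + 1), lam * t1),
                        stats.poisson.pmf(np.arange(m2 + 1), lam * t2))
        p = max(p, float(prob[extreme].sum()))
    return p

# Example: 12 events in 50 person-years vs 4 events in 60 person-years.
print(pmax_pvalue(12, 4, 50.0, 60.0))
```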

18.
A geometrical interpretation of the classical tests of the relation between two sets of variables is presented. One of the variable sets may be considered fixed, in which case we have a multivariate regression model. When the Wilks' lambda distribution is viewed geometrically, it is obvious that the two special cases, the F distribution and the Hotelling T2 distribution, are equivalent. From the geometrical perspective it is also obvious that the test statistic and the p-value are unchanged if the responses and the predictors are interchanged.

19.
Consider using values of variables X1, X2, …, Xp to classify entities into one of two classes. Kernel-based procedures such as support vector machines (SVMs) are well suited for this task. In general, the classification accuracy of SVMs can be substantially improved if, instead of all p candidate variables, a smaller subset of (say) m variables is used. A new two-step approach to variable selection for SVMs is therefore proposed: best variable subsets of size k = 1, 2, …, p are first identified, and then a new data-dependent criterion is used to determine a value for m. The new approach is evaluated in a Monte Carlo simulation study and on a sample of data sets.
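The abstract does not spell out the data-dependent criterion for choosing m, so the sketch below uses cross-validated accuracy both to pick the best subset of each size and, purely as an illustrative stand-in for that criterion, to pick m itself. It uses scikit-learn; the synthetic data and model settings are assumptions made for the example.

```python
from itertools import combinations

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=6, n_informative=3,
                           n_redundant=0, random_state=0)
p = X.shape[1]
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Step 1: best variable subset of each size k = 1, ..., p by cross-validated accuracy.
best = {}
for k in range(1, p + 1):
    scored = [(cross_val_score(svm, X[:, list(s)], y, cv=5).mean(), s)
              for s in combinations(range(p), k)]
    best[k] = max(scored)                       # (score, subset) with the highest CV accuracy

# Step 2: choose m (here simply the size with the best CV score, a stand-in
# for the paper's data-dependent criterion).
m = max(best, key=lambda k: best[k][0])
print("chosen size m =", m, "variables =", best[m][1], "cv accuracy =", round(best[m][0], 3))
```

Exhaustive search over all subsets is only feasible for small p, which is consistent with the step-wise "best subset of each size" framing in the abstract.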

20.
A Bayesian test for the point null testing problem in the multivariate case is developed. A procedure for obtaining the mixed distribution from the prior density is suggested. For comparisons between the Bayesian and classical approaches, lower bounds on posterior probabilities of the null hypothesis, over some reasonable classes of prior distributions, are computed and compared with the p-value of the classical test. With our procedure, a better approximation is obtained because the p-value is in the range of the Bayesian measures of evidence.
