Similar Documents
20 similar documents found.
1.
A FORTRAN-77 subroutine for a general version of multi-response permutation procedures (MRPP) is described. The exact first four moments are employed in conjunction with the Pearson type I, type III, and type VI distributions to calculate the associated P-values.
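The moment-based approximation above exists because fully enumerating a permutation distribution grows combinatorially with sample size. As a minimal sketch of the exact computation such approximations replace (using a simple two-sample difference of means rather than the MRPP statistic itself), an exact permutation P-value can be found by brute force for tiny samples:

```python
from itertools import combinations

def exact_perm_pvalue(x, y):
    """Exact two-sided permutation P-value for the difference of group
    means, computed by enumerating every split of the pooled sample.
    This is the exact distribution that moment-matched approximations
    (Pearson curves, as in MRPP) avoid computing for larger samples."""
    pooled = x + y
    n, m = len(x), len(y)
    observed = abs(sum(x) / n - sum(y) / m)
    count = total = 0
    for idx in combinations(range(n + m), n):
        idx_set = set(idx)
        g1 = [pooled[i] for i in idx]
        g2 = [pooled[i] for i in range(n + m) if i not in idx_set]
        stat = abs(sum(g1) / n - sum(g2) / m)
        total += 1
        if stat >= observed - 1e-12:  # count splits at least as extreme
            count += 1
    return count / total

p = exact_perm_pvalue([1.1, 2.3, 1.9], [4.0, 5.2, 4.8])
```

With these well-separated toy groups only the observed split and its mirror attain the maximal mean difference, so the exact P-value is 2/20 = 0.1.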

2.
In this paper we consider the problem of constructing exact confidence intervals for a common location of several truncated exponentials with unknown and unequal scale parameters. These intervals are based on combining suitable pivots as well as the so-called P-values.

3.
Pre-estimation is a technique for adjusting a standard approximate P-value to be close to exact. While conceptually simple, it can become computationally intensive. Second order pivotals [N. Reid, Asymptotics and the theory of inference, Ann. Statist. 31 (2003), pp. 1695–1731] are constructed to be closer to exact than standard approximate pivotals. The theory behind these pivotals is complex, and their properties are unclear for discrete models. However, since they are typically given in closed form they are easy to compute. For the special case of non-inferiority trials, we investigate Wald, score, likelihood ratio and second order pivotals. Each of the basic pivotals is used to generate an exact test by maximising with respect to the nuisance parameter. We also study the effect of pre-estimating the nuisance parameter, as described in Lloyd [C.J. Lloyd, Exact P-values for discrete models obtained by estimation and maximisation, Aust. N. Z. J. Statist. 50 (2008), pp. 329–346]. It appears that second order methods are not as close to exact as might have been hoped. On the other hand, P-values based on pre-estimation are very close to exact, are more powerful than competitors, and are hardly affected by the basic generating statistic chosen.

4.
The problem of testing a point null hypothesis involving an exponential mean is considered. It is argued that the usual interpretation of P-values as evidence against precise hypotheses is faulty. As in Berger and Delampady (1986) and Berger and Sellke (1987), lower bounds on Bayesian measures of evidence over wide classes of priors are found, emphasizing the conflict between posterior probabilities and P-values. A hierarchical Bayes approach is also considered as an alternative to computing lower bounds and "automatic" Bayesian significance tests, which further illustrates the point that P-values are highly misleading measures of evidence for tests of point null hypotheses.

5.
The most common asymptotic procedure for analyzing a 2 × 2 table (under the conditioning principle) is the chi-squared test with correction for continuity (c.f.c.). According to the way this is applied, up to the present four methods have been obtained: one for one-tailed tests (Yates') and three for two-tailed tests (those of Mantel, Conover and Haber). In this paper two further methods are defined (one for each case), the 6 resulting methods are grouped in families, their individual behaviour is studied and the optimal one is selected. The conclusions are established on the assumption that the method studied is applied indiscriminately (without being subjected to validity conditions), taking a basis of 400,000 tables (with values of the sample size n between 20 and 300 and exact P-values between 1% and 10%) and a criterion of evaluation based on the percentage of times in which the approximate P-value differs from the exact one (Fisher's exact test) by an excessive amount. The optimal c.f.c. depends on n, on E (the minimum quantity expected) and on the error α to be used, but the rule of selection is not complicated and the new methods proposed are frequently selected. We also study what occurs when E ≥ 5, as well as whether the chi-squared statistic should be multiplied by the factor (n − 1)/n.
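The comparison described above, between a continuity-corrected chi-squared P-value and Fisher's exact P-value on the same 2 × 2 table, can be sketched directly. The following minimal implementation uses Yates' correction (one of the four methods named in the abstract) and the standard two-sided Fisher exact test; the example table is an arbitrary illustration, not data from the paper:

```python
from math import comb, erfc, sqrt

def yates_chi2_pvalue(a, b, c, d):
    """P-value of the chi-squared test with Yates' continuity correction
    for the 2x2 table [[a, b], [c, d]] (1 degree of freedom)."""
    n = a + b + c + d
    x2 = n * (abs(a * d - b * c) - n / 2) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d))
    # survival function of a chi-squared(1) variate: P(Z^2 > x2)
    return erfc(sqrt(x2 / 2))

def fisher_exact_pvalue(a, b, c, d):
    """Two-sided Fisher exact P-value: sum of hypergeometric table
    probabilities no larger than that of the observed table, with the
    row and column margins held fixed."""
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    def pmf(k):
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)
    p_obs = pmf(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(pmf(k) for k in range(lo, hi + 1)
               if pmf(k) <= p_obs * (1 + 1e-9))

p_approx = yates_chi2_pvalue(8, 2, 1, 5)   # corrected approximation
p_exact = fisher_exact_pvalue(8, 2, 1, 5)  # exact reference value
```

The paper's evaluation criterion is essentially how often `p_approx` strays too far from `p_exact` over a large collection of such tables.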

6.
P-values are useful statistical measures of evidence against a null hypothesis. In contrast to other statistical estimates, however, their sample-to-sample variability is usually not considered or estimated, and therefore not fully appreciated. Via a systematic study of log-scale p-value standard errors, bootstrap prediction bounds, and reproducibility probabilities for future replicate p-values, we show that p-values exhibit surprisingly large variability in typical data situations. In addition to providing context to discussions about the failure of statistical results to replicate, our findings shed light on the relative value of exact p-values vis-a-vis approximate p-values, and indicate that the use of *, **, and *** to denote levels .05, .01, and .001 of statistical significance in subject-matter journals is about the right level of precision for reporting p-values when judged by widely accepted rules for rounding statistical estimates.
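The log-scale bootstrap standard error described above is easy to sketch. The following toy example (a one-sample z-test with known variance on simulated data, an assumption made here purely for illustration, not the tests studied in the paper) resamples the data and records the spread of the resulting log10 p-values:

```python
import random
from math import erfc, log10, sqrt
from statistics import mean, stdev

def z_pvalue(sample, mu0=0.0, sigma=1.0):
    """Two-sided P-value of a z-test for the mean with known sigma."""
    z = (mean(sample) - mu0) / (sigma / sqrt(len(sample)))
    return erfc(abs(z) / sqrt(2))  # 2 * P(Z > |z|) for standard normal Z

rng = random.Random(1)
data = [rng.gauss(0.5, 1.0) for _ in range(30)]  # true mean 0.5, so H0 false

# Bootstrap the log10 p-value: resample with replacement, recompute the
# p-value each time; the standard deviation of the replicates estimates
# the sample-to-sample variability the abstract refers to.
boot = []
for _ in range(2000):
    resample = [rng.choice(data) for _ in range(len(data))]
    boot.append(log10(z_pvalue(resample)))

se_log10_p = stdev(boot)
```

In runs like this the bootstrap spread of the log10 p-value is typically on the order of a whole decade, illustrating why a single reported p-value carries far less precision than its digits suggest.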

7.
Bayesian approach to inference for Markov chains (MC) has many advantages over classical approach. This paper discusses how tests for one-sided and two-sided hypotheses involving two or more parameters of finite Markov chains can be carried out. The posterior probabilities (P-values), Bayes factors, highest density regions (HDR) and central credible sets (CCS) and other measures are calculated for uniform and umbrella pattern prior distributions and for several functions of the two parameters in a two-state Markov chain. A numerical example is also worked out.

8.
Serial testing is introduced as a general method for carrying out a large number of hypothesis tests, while maintaining a reasonable bound on the simultaneous Type I error probabilities. The power properties of serial testing differ from those of other simultaneous test procedures, and serial P-values are associated with the models under consideration.

9.
To analyse the risk factors of coronary heart disease (CHD), we apply the Bayesian model averaging approach that formalizes the model selection process and deals with model uncertainty in a discrete-time survival model to the data from the Framingham Heart Study. We also use the Alternating Conditional Expectation algorithm to transform the risk factors, such that their relationships with CHD are best described, overcoming the problem of coding such variables subjectively. For the Framingham Study, the Bayesian model averaging approach, which makes inferences about the effects of covariates on CHD based on an average of the posterior distributions of the set of identified models, outperforms the stepwise method in predictive performance. We also show that age, cholesterol, and smoking are nonlinearly associated with the occurrence of CHD and that P-values from models selected from stepwise methods tend to overestimate the evidence for the predictive value of a risk factor and ignore model uncertainty.

10.
The problem of testing homogeneity of several group means is considered against some patterned alternatives for the one-way classified data. The patterns of interest include the simple-tree and the trend alternatives. The approach is to begin with some suitably defined one-sample confidence intervals for the groups in a graphical display. Depending on the pattern of interest, orientation features of the display are examined, more formally, using proposed overall tests or rules. In the classical setup under normality, the case of known common variance is treated in detail; extensions to the case of unknown variance are indicated. When normality is in doubt, a nonparametric procedure based on the sign test is proposed. The necessary critical values are percentiles of either a multivariate normal distribution or a multivariate t-distribution. Although some existing tables can be used for the critical values (or the P-values) in some special cases, in general, the use of simulations is recommended and the steps are detailed in the appendix. An illustrative numerical example is provided.

11.
The posterior distribution of the likelihood is used to interpret the evidential meaning of P-values, posterior Bayes factors and Akaike's information criterion when comparing point null hypotheses with composite alternatives. Asymptotic arguments lead to simple re-calibrations of these criteria in terms of posterior tail probabilities of the likelihood ratio. (Prior) Bayes factors cannot be calibrated in this way as they are model-specific.

12.
A right-censored ranking is what results when a judge ranks only the "top K" of M objects. Complete uncensored rankings constitute a special case. We present two measures of concordance among the rankings of N ≥ 2 such judges, both based on Spearman's footrule. One measure is unweighted, while the other gives greatest weight to the first rank, less to the second, and so on. We consider methods for calculating or estimating the P-values of the corresponding tests of the hypothesis of random ranking.
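For the complete-ranking special case mentioned above (two judges, no censoring), Spearman's footrule and an exact permutation P-value against the hypothesis of random ranking can be sketched directly; full enumeration is only feasible for a handful of objects:

```python
from itertools import permutations

def footrule(r1, r2):
    """Spearman's footrule: sum of absolute rank differences."""
    return sum(abs(a - b) for a, b in zip(r1, r2))

def footrule_pvalue(r1, r2):
    """Exact one-sided P-value for concordance of two complete rankings:
    the proportion of all rankings whose footrule distance to r1 is at
    most that of r2.  Under random ranking every permutation is equally
    likely, so small P-values indicate unusual agreement."""
    observed = footrule(r1, r2)
    perms = list(permutations(sorted(r1)))
    count = sum(1 for perm in perms if footrule(r1, perm) <= observed)
    return count / len(perms)

# Two judges agreeing except for a swap of the last two objects.
p = footrule_pvalue((1, 2, 3, 4, 5), (1, 2, 3, 5, 4))
```

Here the observed footrule is 2, and exactly 5 of the 120 permutations are at least that close to the first ranking, giving p = 5/120.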

13.
A model is presented to generate a distribution for the probability of an ACR response at six months for a new treatment for rheumatoid arthritis given evidence from a one- or three-month clinical trial. The model is based on published evidence from 11 randomized controlled trials on existing treatments. A hierarchical logistic regression model is used to find the relationship between the proportion of patients achieving ACR20 and ACR50 at one and three months and the proportion at six months. The model is assessed by Bayesian predictive P-values that demonstrate that the model fits the data well. The model can be used to predict the number of patients with an ACR response for proposed six-month clinical trials given data from clinical trials of one or three months duration.

14.
Many methods are available for computing a confidence interval for the binomial parameter, and these methods differ in their operating characteristics. It has been suggested in the literature that the use of the exact likelihood ratio (LR) confidence interval for the binomial proportion should be considered. This paper provides an evaluation of the operating characteristics of the two‐sided exact LR and exact score confidence intervals for the binomial proportion and compares these results to those for three other methods that also strictly maintain nominal coverage: Clopper‐Pearson, Blaker, and Casella. In addition, the operating characteristics of the two‐sided exact LR method and exact score method are compared with those of the corresponding asymptotic methods to investigate the adequacy of the asymptotic approximation. Copyright © 2013 John Wiley & Sons, Ltd.
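Of the strictly-coverage-maintaining methods named above, Clopper-Pearson is the simplest to sketch: it inverts the two binomial tail probabilities. The bisection below is an illustrative stdlib-only implementation (the interval limits are usually expressed via beta quantiles instead), not the evaluation machinery of the paper:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for a
    binomial proportion with x successes in n trials, found by bisection
    on the binomial tail probabilities."""
    def solve(f):
        lo, hi = 0.0, 1.0
        for _ in range(60):  # bisection: interval width halves each step
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower limit: p with P(X >= x | p) = alpha/2  (tail increases in p)
    lower = 0.0 if x == 0 else solve(
        lambda p: alpha / 2 - (1 - binom_cdf(x - 1, n, p)))
    # upper limit: p with P(X <= x | p) = alpha/2  (tail decreases in p)
    upper = 1.0 if x == n else solve(
        lambda p: binom_cdf(x, n, p) - alpha / 2)
    return lower, upper

ci = clopper_pearson(5, 20)  # 95% interval for 5 successes in 20 trials
```

By construction each tail probability equals alpha/2 at the returned limits, which is exactly why the interval never under-covers (and is typically conservative).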

15.
We develop exact inference for the location and scale parameters of the Laplace (double exponential) distribution based on their maximum likelihood estimators from a Type-II censored sample. Based on some pivotal quantities, exact confidence intervals and tests of hypotheses are constructed. Upon conditioning first on the number of observations that are below the population median, exact distributions of the pivotal quantities are expressed as mixtures of linear combinations and of ratios of linear combinations of standard exponential random variables, which facilitates the computation of quantiles of these pivotal quantities. Tables of quantiles are presented for the complete sample case.

16.
Whereas large-sample properties of the estimators of survival distributions using censored data have been studied by many authors, exact results for small samples have been difficult to obtain. In this paper we obtain the exact expression for the a-th moment (a > 0) of the Bayes estimator of the survival distribution using censored data under the proportional hazards model. Using the exact expression we compute the exact mean, variance and MSE of the Bayes estimator. Two estimators of the mean survival time, based on the Kaplan-Meier estimator and the Bayes estimator, are also compared for small samples under proportional hazards.

17.
The classical adjustments for the inadequacy of the asymptotic distribution of Pearson's X2 statistic, when some cells are sparse or the cell expectations are small, use continuity corrections and exact moments; the recent approach is to use computer based ‘exact inference’. In this paper we observe that the original exact test due to Freeman and Halton (Biometrika 38 (1951), 141–149) and its computer implementation are theoretically unsound. Furthermore, the corrected algorithmic version for the exact p-value in StatXact is practically useful in very few cases, and the results of its present version which includes Monte Carlo estimates can be highly variable. We then derive asymptotic expansions for the moments of the null distribution of Pearson's X2, introduce a new method of correcting for discreteness and finite range of Pearson's X2 as an alternative to the classical continuity correction, and use them to construct new and improved approximations for the null distribution. We also offer diagnostic criteria applicable to the tables for selecting an appropriate approximation. The exact methods and the competing approximations are studied and compared using thirteen test cases from the literature. It is concluded that the accuracy of the appropriate approximation is comparable with the truly exact method whenever it is available. The use of approximations is therefore preferable if the truly exact computer intensive solutions are unavailable or infeasible.

18.
Robust Bayesian analysis deals simultaneously with a class of possible prior distributions, instead of a single distribution. This paper concentrates on the surprising results that can be obtained when applying the theory to problems of testing precise hypotheses when the “objective” class of prior distributions is assumed. First, an example is given demonstrating the serious inadequacy of P-values for this problem. Next, it is shown how the approach can provide statistical quantification of Occam's Razor, the famous principle of science that advocates choice of the simpler of two hypothetical explanations of data. Finally, the theory is applied to multinomial testing. Research supported by the National Science Foundation, Grant DMS-8923071, and by NASA Contract NAS5-29285 for the Hubble Space Telescope.

19.
Recently, exact inference under hybrid censoring schemes has attracted extensive attention in the field of reliability analysis. However, most authors neglect the possibility of a competing risks model. This paper discusses exact likelihood inference for the analysis of generalized Type-I hybrid censored data with an exponential competing failure model. Based on the maximum likelihood estimates of the unknown parameters, we establish the exact conditional distributions of the parameters via the conditional moment generating function, and then obtain moment properties as well as exact confidence intervals (CIs) for the parameters. Approximate CIs are also constructed by the asymptotic distribution and the bootstrap method, and their performance is compared with that of the exact method through Monte Carlo simulations. Finally, a real data set is analysed to illustrate the validity of all the methods developed here.

20.
MRBP tests were proposed by Mielke and Iyer (1982) to analyze multivariate data for the randomized block design, based on permutation procedures. They obtained the first three exact moments of the MRBP test statistic to approximate its permutation distribution. Tracy and Khan (1991) derived its fourth exact moment, to obtain a better approximating distribution, when there are four or more treatments. In this paper we obtain the fourth exact moment when the number of treatments is less than four.
