Similar Documents

Found 20 similar documents.
1.
ABSTRACT

The display of data by means of contingency tables is used in different approaches to statistical inference, for example, to address the test of homogeneity of independent multinomial distributions. We develop a Bayesian procedure to test simple null hypotheses against bilateral alternatives in contingency tables. Given independent samples from two binomial distributions and taking a mixed prior distribution, we calculate the posterior probability that the proportion of successes in the first population equals that in the second. This posterior probability is compared with the p-value of the classical method, obtaining a reconciliation between the classical and Bayesian results. The results are generalized to r × s tables.
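The mixed-prior calculation described above can be sketched numerically. This is a minimal illustration, not the paper's exact procedure: the point-mass weight pi0 and the Beta(a, b) components are assumed hyperparameters.

```python
import numpy as np
from scipy.special import betaln

def posterior_prob_H0(x1, n1, x2, n2, pi0=0.5, a=1.0, b=1.0):
    """Posterior probability of H0: p1 = p2 under a mixed prior that puts
    mass pi0 on H0 with a common p ~ Beta(a, b), and independent Beta(a, b)
    priors on (p1, p2) otherwise.  Binomial coefficients cancel in the
    Bayes-factor ratio, so only Beta functions are needed."""
    log_m0 = betaln(a + x1 + x2, b + (n1 - x1) + (n2 - x2)) - betaln(a, b)
    log_m1 = (betaln(a + x1, b + n1 - x1) - betaln(a, b)
              + betaln(a + x2, b + n2 - x2) - betaln(a, b))
    bf01 = np.exp(log_m0 - log_m1)          # Bayes factor in favor of H0
    return pi0 * bf01 / (pi0 * bf01 + (1 - pi0))

p_eq = posterior_prob_H0(5, 10, 5, 10)      # identical samples: H0 favored
p_far = posterior_prob_H0(1, 10, 9, 10)     # very different samples: H0 disfavored
```

Comparing `p_far` with the classical p-value for the same 2 × 2 table is the kind of reconciliation exercise the abstract describes.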

2.
Abstract

This article is concerned with the comparison of Bayesian and classical testing of a point null hypothesis for the Pareto distribution when there is a nuisance parameter. In the first stage, using a fixed prior distribution, the posterior probability is obtained and compared with the P-value. In the second stage, lower bounds on the posterior probability of H0, under a reasonable class of prior distributions, are compared with the P-value. It is shown that, even in the presence of a nuisance parameter in the model, these two approaches can lead to different results in statistical inference.

3.
This article is concerned with the comparison of the P-value and the Bayesian measure of evidence for a point null hypothesis about the variance of a normal distribution with unknown mean. First, using a fixed prior for the test parameter and an appropriate prior for the mean parameter, the posterior probability is obtained and compared with the P-value. Second, lower bounds on the posterior probability of H0 under a reasonable class of priors are compared with the P-value. It is shown that even in the presence of nuisance parameters these two approaches can lead to different results in statistical inference.

4.
Abstract

Let the data from the ith treatment/population follow a distribution with cumulative distribution function (cdf) F_i(x) = F[(x − μ_i)/θ_i], i = 1,…, k (k ≥ 2). Here μ_i (−∞ < μ_i < ∞) is the location parameter, θ_i (θ_i > 0) is the scale parameter, and F(·) is any absolutely continuous cdf; i.e., F_i(·) is a member of a location-scale family, i = 1,…, k. In this paper, we propose a class of tests of the null hypothesis H0: θ1 = ··· = θk against the simple ordered alternative H_A: θ1 ≤ ··· ≤ θk with at least one strict inequality. In the literature, use of the sample quasi range as a measure of dispersion has been advocated for small sample sizes or samples contaminated by outliers [see David, H. A. (1981). Order Statistics. 2nd ed. New York: John Wiley, Sec. 7.4]. Let X_{i1},…, X_{in} be a random sample of size n from the population π_i and R_{ir} = X_{i:n−r} − X_{i:r+1}, r = 0, 1,…, [n/2] − 1, be the sample quasi range corresponding to this random sample, where X_{i:j} represents the jth order statistic in the ith sample, j = 1,…, n; i = 1,…, k, and [x] is the greatest integer less than or equal to x. The proposed class of tests, for the general location-scale setup, is based on the statistic W_r = max_{1≤i<j≤k} (R_{jr}/R_{ir}); the test rejects H0 for large values of W_r. The construction of a three-decision procedure and of simultaneous one-sided lower confidence bounds for the ratios θ_j/θ_i, 1 ≤ i < j ≤ k, is also discussed with the help of the critical constants of the test statistic W_r. Applications of the proposed class of tests to the two-parameter exponential and uniform probability models are discussed separately with the necessary tables. Comparisons of some members of our class with the tests of Gill and Dhawan [Gill, A. N., Dhawan, A. K. (1999). A one-sided test for testing homogeneity of scale parameters against ordered alternatives. Commun. Stat. – Theory and Methods 28(10):2417–2439] and Kochar and Gupta [Kochar, S. C., Gupta, R. P. (1985). A class of distribution-free tests for testing homogeneity of variances against ordered alternatives. In: Dykstra, R. et al., eds. Proceedings of the Conference on Advances in Order Restricted Statistical Inference at Iowa City. Springer-Verlag, pp. 169–183], in terms of simulated power, are also presented.
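The quasi-range statistic defined above is easy to compute directly; the following sketch implements R_{ir} and W_r as stated in the abstract (critical constants for the actual test would come from the paper's tables, which are not reproduced here).

```python
import numpy as np

def quasi_range(sample, r):
    """Sample quasi range R_r = X_{(n-r)} - X_{(r+1)} (1-indexed order
    statistics); r = 0 gives the ordinary sample range.  Valid for
    r = 0, 1, ..., [n/2] - 1."""
    xs = np.sort(np.asarray(sample))
    n = len(xs)
    return xs[n - r - 1] - xs[r]

def W_statistic(samples, r=0):
    """W_r = max over 1 <= i < j <= k of R_{j,r} / R_{i,r}; large values
    support the ordered alternative theta_1 <= ... <= theta_k."""
    R = [quasi_range(s, r) for s in samples]
    return max(R[j] / R[i]
               for i in range(len(R)) for j in range(i + 1, len(R)))
```

In practice H0 is rejected when W_r exceeds a critical constant obtained by simulation or from the paper's tables.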

5.
This paper aims to connect Bayesian analysis and frequentist theory in the context of multiple comparisons. The authors show that when testing the equality of two sample means, the posterior probability of the one-sided alternative hypothesis, defined as a half-space, shares with the frequentist P-value the property of uniformity under the null hypothesis. Ultimately, the posterior probability may thus be used in the same spirit as a P-value in the Benjamini–Hochberg procedure, or in any of its extensions.

6.
Suppose that the length of time in years for which a business operates until failure has a Pareto distribution. Let t_1 ≤ t_2 ≤ … ≤ t_r denote the survival lifetimes of the first r of a random sample of n businesses. Bayesian predictions are made for the ordered failure times of the remaining (n − r) businesses, using the conditional probability function. Numerical examples are given to illustrate our results.

7.
For estimating an unknown parameter θ, we introduce and motivate the use of balanced loss functions of the form L_{ρ,ω,δ0}(θ, δ) = ω ρ(δ0, δ) + (1 − ω) ρ(θ, δ), as well as the weighted version q(θ) L_{ρ,ω,δ0}(θ, δ), where ρ(θ, δ) is an arbitrary loss function, δ0 is a chosen a priori “target” estimator of θ, ω ∈ [0, 1), and q(·) is a positive weight function. We develop Bayesian estimators under L_{ρ,ω,δ0} with ω > 0 by relating them to Bayesian solutions under L_{ρ,ω,δ0} with ω = 0. Illustrations are given for various choices of ρ, such as absolute value, entropy, linex, and squared error type losses. Finally, under various robust Bayesian analysis criteria, including posterior regret gamma-minimaxity, conditional gamma-minimaxity, and most stable, we establish explicit connections between optimal actions derived under balanced and unbalanced losses.
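For the squared-error choice of ρ, the Bayes estimate under the balanced loss is known to be the convex combination ω δ0 + (1 − ω) × (posterior mean), i.e., the ω > 0 solution is obtained from the ω = 0 solution as the abstract indicates. The normal–normal model below is an assumed illustration, not taken from the paper.

```python
def normal_posterior_mean(xbar, n, sigma2, mu0, tau2):
    """Posterior mean of mu for X_i ~ N(mu, sigma2) with prior mu ~ N(mu0, tau2);
    this is the ordinary Bayes estimate (the omega = 0 case)."""
    w_data = (n / sigma2) / (n / sigma2 + 1 / tau2)
    return w_data * xbar + (1 - w_data) * mu0

def balanced_bayes(xbar, n, sigma2, mu0, tau2, delta0, omega):
    """Bayes estimate under balanced squared-error loss: a convex combination
    of the target estimator delta0 and the posterior mean."""
    return omega * delta0 + (1 - omega) * normal_posterior_mean(
        xbar, n, sigma2, mu0, tau2)

pm = normal_posterior_mean(1.0, 4, 1.0, 0.0, 1.0)          # shrinks xbar toward mu0
est = balanced_bayes(1.0, 4, 1.0, 0.0, 1.0, delta0=0.0, omega=0.5)
```

Setting ω = 0 recovers the usual posterior mean, while ω → 1 pulls the estimate toward the target δ0.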

8.
Stochastic Models, 2013, 29(1):31–42
Abstract

We give a sufficient condition for the exponential decay of the tail of a discrete probability distribution π = (π_n)_{n≥0}, in the sense that lim_{n→∞} (1/n) log ∑_{i>n} π_i = −θ with 0 < θ < ∞. We focus on analytic properties of the probability generating function of a discrete probability distribution, especially the radius of convergence and the number of poles on the circle of convergence. Furthermore, we give an example of an M/G/1 type Markov chain such that the tail of its stationary distribution does not decay exponentially.
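The decay rate θ can be checked numerically for a concrete distribution. The geometric example below is an assumed illustration: its tail sum has a closed form, so the numerical estimate can be compared with the exact rate.

```python
import numpy as np

def tail_decay_rate(pi, n):
    """Numerical estimate of theta, where (1/n) log sum_{i>n} pi_i -> -theta,
    from a (truncated) pmf array pi indexed from 0."""
    tail = pi[n + 1:].sum()
    return -np.log(tail) / n

# Geometric pmf pi_i = (1 - p) p^i has tail sum_{i>n} pi_i = p^{n+1},
# so the exact decay rate is theta = -log p.
p = 0.4
i = np.arange(5000)
pi = (1 - p) * p ** i
theta_hat = tail_decay_rate(pi, 200)       # should be close to -log(0.4)
```

The estimate carries an O(1/n) bias (here a factor (n+1)/n), which vanishes as n grows.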

9.
Suppose that measurements X_i, i = 1,…, k, are consecutively taken on an individual at the prescribed costs C_i, i = 1,…, k. The individual comes from one of two populations, H1 and H2, and it is desired to detect which population the individual belongs to. Given the loss incurred in selecting population H_r when in fact the individual belongs to H_s, the prior probability p_r of H_r (r = 1, 2), and assuming that H_r has the normal distribution N(μ_r, V), r = 1, 2, we derive the sequential Bayesian solution of the discrimination problem when μ1, μ2, and V are known. When μ_r and V are unknown and must be estimated, we propose a solution that is asymptotically Bayesian with an exponential convergence rate.

10.
Statistics R_a based on the power divergence can be used for testing the homogeneity of a product multinomial model. All R_a have the same chi-square limiting distribution under the null hypothesis of homogeneity; R_0 is the log likelihood ratio statistic and R_1 is Pearson's X² statistic. In this article, we consider improving the approximation of the distribution of R_a under the homogeneity hypothesis. The asymptotic expansion of the distribution of R_a under the homogeneity hypothesis is investigated; the expression consists of continuous and discontinuous terms. Using the continuous term of the expression, a new approximation of the distribution of R_a is proposed. A moment-corrected type of chi-square approximation is also derived. By numerical comparison, we show that both approximations perform much better than the usual chi-square approximation for the statistics R_a with a ≤ 0, which include the log likelihood ratio statistic.
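The R_a family referred to above is, in its standard Cressie–Read form (an assumption that this matches the paper's notation), straightforward to compute for a contingency table whose rows are the independent multinomial samples:

```python
import numpy as np

def power_divergence_stat(table, a):
    """Power-divergence statistic R_a for homogeneity: a = 1 gives Pearson's
    X^2, and the a -> 0 limit gives the log likelihood ratio statistic G^2."""
    O = np.asarray(table, dtype=float)
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()   # fitted homogeneous counts
    if a == 0:   # limiting case R_0 = G^2 (terms with O = 0 contribute zero)
        safe_O = np.where(O > 0, O, 1.0)
        return 2.0 * np.sum(np.where(O > 0, O * np.log(safe_O / E), 0.0))
    return 2.0 / (a * (a + 1)) * np.sum(O * ((O / E) ** a - 1.0))

t = [[10, 20], [30, 20]]
r1 = power_divergence_stat(t, 1.0)   # Pearson's X^2
r0 = power_divergence_stat(t, 0)     # log likelihood ratio statistic
```

All members are referred to the same chi-square limit; the paper's contribution is a sharper approximation than that limit, which this sketch does not implement.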

11.
The Fisher exact test has been unjustly dismissed by some as ‘only conditional,’ whereas it is unconditionally the uniformly most powerful test among all unbiased tests of size α, with power no smaller than its nominal level of significance α. The problem with this truly optimal test is that it requires randomization at the critical value(s) to be of size α. Obviously, in practice, one does not want to conclude that ‘with probability x we have a statistically significant result.’ Usually, the hypothesis is rejected only if the test statistic's outcome is more extreme than the critical value, reducing the actual size considerably.

The randomized unconditional Fisher exact test is constructed (using Neyman-structure arguments) by deriving a conditional randomized test that randomizes at critical values c(t) with probabilities γ(t), both of which depend on the total number of successes T (the complete sufficient statistic for the nuisance parameter, the common success probability) conditioned upon.

In this paper, the Fisher exact test is approximated by deriving nonrandomized conditional tests whose critical region includes the critical value only if γ(t) > γ0, for a fixed threshold value γ0, such that the size of the unconditional modified test is, for all values of the nuisance parameter (the common success probability), smaller than but as close as possible to α. It will be seen that this greatly improves the size of the test as compared with the conservative nonrandomized Fisher exact test.

Size, power, and p-value comparisons with the (virtual) randomized Fisher exact test, the conservative nonrandomized Fisher exact test, Pearson's chi-square test, the more competitive mid-p value, McDonald's modification, and Boschloo's modification are performed under the assumption of two binomial samples.
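Two of the quantities compared above, the conservative conditional p-value and the mid-p value, can be sketched for the one-sided two-binomial setting; the conditional null law given the success total T = t is hypergeometric. This is a generic illustration, not the paper's modified test.

```python
from scipy.stats import hypergeom, fisher_exact

def conditional_pvalues(x, n1, n2, t):
    """One-sided conditional p-value and mid-p for two binomial samples,
    conditioning on the success total t; x is the number of successes
    observed in the first sample (of size n1)."""
    rv = hypergeom(n1 + n2, t, n1)         # conditional null distribution of x
    p = rv.sf(x - 1)                       # P(X >= x): nonrandomized Fisher p-value
    mid_p = rv.sf(x) + 0.5 * rv.pmf(x)     # mid-p: P(X > x) + (1/2) P(X = x)
    return p, mid_p

p, mid_p = conditional_pvalues(7, 10, 10, 10)
p_ref = fisher_exact([[7, 3], [3, 7]], alternative='greater')[1]
```

The mid-p is always smaller than the conservative p-value, which is one reason it is the "more competitive" comparator mentioned in the abstract.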

12.
A Bayesian test for the point null testing problem in the multivariate case is developed. A procedure to get the mixed distribution using the prior density is suggested. For comparisons between the Bayesian and classical approaches, lower bounds on posterior probabilities of the null hypothesis, over some reasonable classes of prior distributions, are computed and compared with the p-value of the classical test. With our procedure, a better approximation is obtained because the p-value is in the range of the Bayesian measures of evidence.

13.
Let {X_t, t ∈ ℤ} be a sequence of iid random variables with an absolutely continuous distribution. Let a > 0 and c ∈ ℝ be constants. We consider a sequence of 0–1 valued variables {ξ_t, t ∈ ℤ} obtained by clipping the MA(1) process X_t − aX_{t−1} at the level c, i.e., ξ_t = I[X_t − aX_{t−1} < c] for all t ∈ ℤ. We deal with the estimation problem in this model. Properties of the estimators of the parameters a and c, the success probability p, and the 1-lag autocorrelation r_1 are investigated. A numerical study is provided as an illustration of the theoretical results.
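Simulating the clipped process makes the quantities p and r_1 concrete. Standard normal innovations are an assumption for illustration; the paper only requires an absolutely continuous distribution.

```python
import numpy as np

def clipped_ma1(a, c, T, rng):
    """Simulate xi_t = I[X_t - a*X_{t-1} < c] for iid standard normal X_t
    (the normal innovation law is an assumed example)."""
    x = rng.standard_normal(T + 1)
    y = x[1:] - a * x[:-1]                 # the underlying MA(1) values
    return (y < c).astype(int)             # the observed 0-1 clipped series

rng = np.random.default_rng(0)
xi = clipped_ma1(a=0.5, c=0.0, T=200_000, rng=rng)
p_hat = xi.mean()                          # estimates p = P(X_t - a X_{t-1} < c)
r1_hat = np.corrcoef(xi[:-1], xi[1:])[0, 1]   # empirical 1-lag autocorrelation
```

With a = 0.5 and c = 0, symmetry gives p = 1/2, and consecutive clipped values are negatively correlated because adjacent MA(1) terms share an innovation with opposite sign.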

14.
We investigate the asymptotic behavior of the probability density function (pdf) and the cumulative distribution function (cdf) of Student's t-distribution with ν > 0 degrees of freedom (t_ν for short) as ν tends to infinity, when the argument x = x_ν of the pdf (cdf) depends on ν and tends to ±∞ (−∞). To this end, we consider the ratio of the pdfs (cdfs) of the t_ν and the standard normal distribution. Depending on the choice of the argument x_ν, the pdf ratio (cdf ratio) tends to 1, to a fixed value greater than 1, or to ∞. As a byproduct, we obtain a result for Mills' ratio when x_ν → −∞.
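The pdf ratio studied above is easy to probe numerically; the particular sequences x_ν below are assumed examples, chosen only to show the contrast between a fixed argument (ratio near 1) and a growing argument (ratio far above 1).

```python
import numpy as np
from scipy.stats import t, norm

def pdf_ratio(x, nu):
    """Ratio of the Student t_nu density to the standard normal density at x."""
    return t.pdf(x, nu) / norm.pdf(x)

ratio_fixed = pdf_ratio(1.0, 1e6)          # fixed x, huge nu: close to 1
r10 = pdf_ratio(np.sqrt(10), 10)           # x_nu growing with nu: ratio exceeds 1
r100 = pdf_ratio(np.sqrt(100), 100)        # and grows without bound along this path
```

The heavier tail of t_ν makes the ratio explode when x_ν moves into the tails fast enough, which is the regime the abstract classifies.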

15.
Consider a sequence of independent and identically distributed random variables with an absolutely continuous distribution function. The probability functions of the numbers K_{n,r} and N_{n,r} of r-records up to time n of the first and second type, respectively, are obtained in terms of the noncentral and central signless Stirling numbers of the first kind. Also, the binomial moments of K_{n,r} and N_{n,r} are expressed in terms of the noncentral signless Stirling numbers of the first kind. The probability functions of the times L_{k,r} and T_{k,r} of the kth r-record of the first and second type, respectively, are deduced from those of K_{n,r} and N_{n,r}. A simple expression for the binomial moments of T_{k,r} is derived. Finally, the probability functions and binomial moments of the kth inter-r-record times U_{k,r} = L_{k,r} − L_{k−1,r} and W_{k,r} = T_{k,r} − T_{k−1,r} are obtained as sums of a finite number of terms.

16.
Abstract

In categorical repeated audit controls, fallible auditors classify sample elements in order to estimate the population fraction of elements in certain categories. To take possible misclassifications into account, subsequent checks are performed with a decreasing number of observations. In this paper a model is presented for a general repeated audit control system in which k subsequent auditors classify elements into r categories. Two different subsampling procedures are discussed, called “stratified” and “random” sampling. Although these two sampling methods lead to different probability distributions, it is shown that the likelihood inferences are identical. The MLEs are derived and the situations with undefined MLEs are examined in detail; it is shown that an unbiased MLE can be obtained by stratified sampling. Three different methods for constructing upper confidence limits are discussed; the Bayesian upper limit appears to be the most satisfactory. Our theoretical results are applied to two cases with r = 2 and k = 2 or 3, respectively.

17.
It is well known that a Bayesian credible interval for a parameter of interest is derived from a prior distribution that appropriately describes the prior information. However, it is less well known that there exists a frequentist approach, developed by Pratt (1961. Length of confidence intervals. J. Amer. Statist. Assoc. 56:549–567), that also utilizes prior information in the construction of frequentist confidence intervals. This frequentist approach produces confidence intervals that have minimum weighted average expected length, averaged according to some weight function that appropriately describes the prior information. We begin with a simple model as a starting point for comparing these two distinct procedures for interval estimation. Consider X_1,…, X_n that are independent and identically N(μ, σ²) distributed random variables, where σ² is known and the parameter of interest is μ. Suppose also that previous experience with similar data sets and/or specific background and expert opinion suggest that μ = 0. Our aim is to: (a) develop two types of Bayesian 1 − α credible intervals for μ, derived from an appropriate prior cumulative distribution function F(μ); and, more importantly, (b) compare these Bayesian 1 − α credible intervals for μ to the frequentist 1 − α confidence interval for μ derived from Pratt's frequentist approach, in which the weight function corresponds to the prior cumulative distribution function F(μ). We show that the endpoints of the Bayesian 1 − α credible intervals for μ are very different from the endpoints of the frequentist 1 − α confidence interval for μ when the prior information strongly suggests that μ = 0 and the data support the uncertain prior information about μ. In addition, we assess the performance of these intervals by analyzing their coverage probability properties and expected lengths.

18.
Under proper conditions, two independent tests of the null hypothesis of homogeneity of means are provided by a set of sample averages. One test, with tail probability P_1, relates to the variation between the sample averages, while the other, with tail probability P_2, relates to the concordance of the rankings of the sample averages with the anticipated rankings under an alternative hypothesis. The quantity G = P_1 P_2 is considered as the combined test statistic and, except for the discreteness in the null distribution of P_2, would correspond to Fisher's statistic for combining probabilities. Illustration is made, for the case of four means, of how to obtain critical values of G, or critical values of P_1 for each possible value of P_2, taking discreteness into account. Alternative measures of concordance considered are Spearman's ρ and Kendall's τ. In the case of two averages, the approach amounts to assigning two-thirds of the test size to the concordant tail and one-third to the discordant tail.

19.
ABSTRACT

Confidence intervals for the intraclass correlation coefficient (ρ) are used to determine the optimal allocation of experimental material in one-way random effects models. Designs that produce narrow intervals are preferred since they provide greater precision to estimate ρ. Assuming the total cost and the relative cost of the two stages of sampling are fixed, the authors investigate the number of classes and the number of individuals per class required to minimize the expected length of confidence intervals. We obtain results using asymptotic theory and compare these results to those obtained using exact calculations. The best design depends on the unknown value of ρ; minimizing the maximum expected length of confidence intervals guards against worst-case scenarios. A good overall recommendation based on asymptotic results is to choose a design having classes of size 2 + √(4 + 3r), where r is the relative cost of sampling at the class level compared with the individual level. If r = 0, the overall cost is the sample size and the recommendation reduces to a design having classes of size 4.
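The closing recommendation is a one-line formula; a small helper makes the r = 0 special case explicit (in practice one would round to a feasible integer class size, a detail the abstract does not address).

```python
import math

def recommended_class_size(r):
    """Asymptotic class-size recommendation quoted in the abstract:
    2 + sqrt(4 + 3r), with r the class-level to individual-level
    relative sampling cost."""
    return 2 + math.sqrt(4 + 3 * r)
```

For example, r = 0 gives classes of size 4, matching the abstract, and r = 4 gives size 6.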

20.
A sample of n subjects is observed in each of two states, S1 and S2. In each state, a subject is in one of two conditions, X or Y. Thus, a subject is recorded as showing a change if its condition in the two states is ‘Y,X’ or ‘X,Y’; otherwise, the condition is unchanged. We consider a Bayesian test of the null hypothesis that the probability of an ‘X,Y’ change exceeds that of a ‘Y,X’ change by an amount k_0. That is, we develop the posterior distribution of k, the difference between the two probabilities, and reject the null hypothesis if k_0 lies outside the appropriate posterior probability interval. The performance of the method is assessed by Monte Carlo and other numerical studies, and brief tables of exact critical values are presented.
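A posterior interval for the difference k can be sketched by placing a Dirichlet prior on the three cell probabilities (‘X,Y’ change, ‘Y,X’ change, no change) and sampling. The Dirichlet(1, 1, 1) prior and the Monte Carlo route are assumptions for illustration, not necessarily the paper's construction.

```python
import numpy as np

def posterior_interval_for_k(n_xy, n_yx, n_same, alpha=0.05,
                             draws=200_000, seed=1):
    """Monte Carlo equal-tailed posterior interval for
    k = P('X,Y' change) - P('Y,X' change), under a Dirichlet(1, 1, 1)
    prior on the three cell probabilities (assumed prior)."""
    rng = np.random.default_rng(seed)
    theta = rng.dirichlet([1 + n_xy, 1 + n_yx, 1 + n_same], size=draws)
    k = theta[:, 0] - theta[:, 1]
    return np.quantile(k, [alpha / 2, 1 - alpha / 2])

# Reject H0: k = k0 when k0 falls outside the posterior interval.
lo, hi = posterior_interval_for_k(30, 10, 60)
lo2, hi2 = posterior_interval_for_k(10, 10, 80)
```

With 30 ‘X,Y’ changes against 10 ‘Y,X’ changes the interval excludes 0, while the balanced 10-vs-10 case straddles 0, mirroring the reject/accept logic described above.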
