Similar Literature
20 similar documents found (search time: 15 ms)
1.
This paper compares the Bayesian and frequentist approaches to testing a one-sided hypothesis about a multivariate mean. First, it proposes a simple way to assign a Bayesian posterior probability to one-sided hypotheses about a multivariate mean. The approach is to use (almost) the exact posterior probability under the assumption that the data have a multivariate normal distribution, either under a conjugate prior in large samples or under a vague Jeffreys prior. This is also approximately the Bayesian posterior probability of the hypothesis based on a suitably flat Dirichlet process prior over an unknown distribution generating the data. The Bayesian approach and a frequentist approach to testing the one-sided hypothesis are then compared, with results showing a major difference between Bayesian and frequentist reasoning: the Bayesian posterior probability can be substantially smaller than the frequentist p-value. A class of examples is given in which the Bayesian posterior probability is essentially 0 while the frequentist p-value is essentially 1; the Bayesian posterior probability in these examples seems more reasonable. Other drawbacks of the frequentist p-value as a measure of whether the one-sided hypothesis is true are also discussed.
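As a rough illustration of the Bayesian side of this comparison, the posterior probability of a componentwise one-sided hypothesis can be approximated by Monte Carlo once the posterior of the mean is (approximately) multivariate normal. The sketch below assumes a flat prior, so the posterior of μ is roughly N(x̄, S/n); the data, dimension, and hypothesis H0: μ ≤ 0 are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: n draws from a bivariate normal whose true mean is positive
n = 100
data = rng.multivariate_normal([0.3, 0.5], np.eye(2), size=n)

xbar = data.mean(axis=0)
S = np.cov(data.T) / n  # approximate posterior covariance of mu under a flat prior

# Monte Carlo approximation of the posterior probability of H0: mu <= 0 componentwise
draws = rng.multivariate_normal(xbar, S, size=100_000)
post_prob = np.mean(np.all(draws <= 0, axis=1))
print(post_prob)  # near 0 here, since the true mean lies well inside mu > 0
```

The same Monte Carlo step applies to any polyhedral one-sided hypothesis; only the indicator inside `np.all` changes.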

2.
The posterior predictive p value (ppp) was invented as a Bayesian counterpart to classical p values. The methodology can be applied to discrepancy measures involving both data and parameters and can hence be targeted to check various modeling assumptions. The interpretation can, however, be difficult, since the distribution of the ppp value under the modeling assumptions varies substantially between cases. A calibration procedure has been suggested, treating the ppp value as a test statistic in a prior predictive test. In this paper, we suggest that a prior predictive test may instead be based on the expected posterior discrepancy, which is somewhat simpler both conceptually and computationally. Since both of these methods require simulating a large posterior parameter sample for each member of an equally large prior predictive data sample, we further suggest looking for ways to match the given discrepancy with a computation-saving conflict measure. This approach is also based on simulations but only requires sampling from two distributions representing two contrasting sources of information about a model parameter. The conflict measure methodology is also more flexible in that it handles non-informative priors without difficulty. We compare the different approaches theoretically in some simple models and in a more complex applied example.
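For a concrete sense of what a ppp value is, the sketch below computes one by simulation in a toy conjugate normal model (unit observation variance, N(0, τ²) prior on the mean) with the data-only discrepancy T(y) = max(y); the model, prior, and discrepancy are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: mu ~ N(0, tau2), y_i | mu ~ N(mu, 1); observed data generated in-model
y_obs = rng.normal(0.0, 1.0, size=30)
n, tau2 = len(y_obs), 10.0

# Conjugate posterior of mu is N(m, v) with known unit observation variance
v = 1.0 / (n + 1.0 / tau2)
m = v * n * y_obs.mean()

# ppp value: probability that replicated data are at least as extreme as observed,
# under the posterior predictive distribution, for the discrepancy T(y) = max(y)
reps = 20_000
mu_draws = rng.normal(m, np.sqrt(v), size=reps)
y_rep = rng.normal(mu_draws[:, None], 1.0, size=(reps, n))
ppp = np.mean(y_rep.max(axis=1) >= y_obs.max())
print(ppp)
```

The calibration issue the abstract mentions is visible here: repeating this over fresh prior predictive data sets gives a ppp distribution that is typically far from uniform.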

3.
Conventional Phase II statistical process control (SPC) charts are designed using control limits; a chart signals a process distributional shift when its charting statistic exceeds a properly chosen control limit. With this design, we only know whether the chart is out-of-control at a given time; it is not informative enough about the likelihood of a potential distributional shift. In this paper, we suggest designing SPC charts using p values. With this approach, at each time point of Phase II process monitoring, the p value of the observed charting statistic is computed under the assumption that the process is in-control. If the p value is less than a pre-specified significance level, a signal of distributional shift is delivered. This p value approach has several benefits over the conventional design using control limits. First, after a signal of distributional shift is delivered, we know how strong the signal is. Second, even when the p value at a given time point is larger than the significance level, it still provides useful information about how stably the process is performing at that time point. The second benefit is especially useful when we adopt a variable sampling scheme, in which the sampling interval can be longer when a larger p value gives more evidence that the process is running stably. To demonstrate the p value approach, we consider univariate process monitoring by cumulative sum control charts in various cases.
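To make the p value design concrete for a CUSUM chart, one can estimate the in-control distribution of the charting statistic at each monitoring time by simulation and report the observed statistic's tail probability. The sketch below uses a standard-normal in-control model, reference value k = 0.5, and an artificial mean shift; all of these settings are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

def cusum_path(x, k=0.5):
    """Upper CUSUM statistics C_t = max(0, C_{t-1} + x_t - k), starting at C_0 = 0."""
    c, out = 0.0, []
    for xi in x:
        c = max(0.0, c + xi - k)
        out.append(c)
    return np.array(out)

# Null reference distribution of C_t at each t, estimated from in-control N(0,1) runs
T, reps = 30, 5_000
null_paths = np.array([cusum_path(rng.normal(size=T)) for _ in range(reps)])

# Observed sequence with an upward mean shift (to mean 2.0) after t = 20
x = np.concatenate([rng.normal(size=20), rng.normal(2.0, 1.0, size=10)])
obs = cusum_path(x)

# p value at each monitoring time: how extreme the observed statistic is in-control
pvals = np.mean(null_paths >= obs, axis=0)
print(pvals[-1])  # should be small after the shift
```

A variable sampling scheme would then map each `pvals[t]` to a sampling interval, with larger p values permitting longer waits.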

4.
For series systems with k components, it is assumed that the cause of failure is known to belong to one of the 2^k − 1 possible subsets of the failure modes. The theoretical times to failure due to the k causes are assumed to have independent Weibull distributions with equal shape parameters. After finding the MLEs and the observed information matrix of (λ_1, …, λ_k, β), a prior distribution is proposed for (λ_1, …, λ_k), which is shown to yield a scale-invariant noninformative prior as well. No particular structure is imposed on the prior of β. Methods to obtain the marginal posterior distributions of the parameters and other parametric functions of interest, and their Bayesian point and interval estimates, are discussed. The developed techniques are illustrated with a numerical example.

5.
In this paper, we consider experimental situations in which a regular fractional factorial design is to be used to study the effects of m two-level factors using n = 2^(m−k) experimental units arranged in 2^p blocks of size 2^(m−k−p). In such situations, two-factor interactions are often confounded with blocks, and complete information is lost on these two-factor interactions. Here we consider the use of the foldover technique, in conjunction with combining designs having different blocking schemes, to produce alternative partially confounded blocked fractional factorial designs that have more estimable two-factor interactions, a higher estimation capacity, or both, compared with their traditional counterparts.

6.
We prove that if p^r and p^r − 1 are both prime powers, then there is a generalized Hadamard matrix of order p^r(p^r − 1) with elements from the elementary abelian group Z_p × ⋯ × Z_p. This result was motivated by results of Rajkundia on BIBDs. The result is then used to produce p^r − 1 mutually orthogonal F-squares F(p^r(p^r − 1); p^r − 1).

7.
We provide general conditions that ensure valid Laplace approximations to marginal likelihoods under model misspecification, and derive Bayesian information criteria including all terms of order O_p(1). Under the conditions in theorem 1 of Lv and Liu [J. R. Statist. Soc. B, 76, (2014), 141–167] and a continuity condition on the prior densities, asymptotic expansions with error terms of order o_p(1) are derived for the log-marginal likelihoods of possibly misspecified generalized linear models. We present numerical examples to illustrate the finite-sample performance of the proposed information criteria in misspecified models.

8.
A new statistic, (p), is developed for variable selection in a system-of-equations model. The standardized total mean square error in the (p) statistic is weighted by the covariance matrix of the dependent variables instead of the error covariance matrix of the true model, as in the original definition. The new statistic can also be used for model selection among non-nested models. The estimate of (p), SC(p), is derived and shown to reduce to SCε(p), analogous in form to Cp in a single-equation model, when the covariance matrix of the sampled dependent variables is replaced by the error covariance matrix under the full model.

9.
Likelihood is widely used in statistical applications, both for the full parameter, by obvious direct calculation, and for component interest parameters, by recent asymptotic theory. Often, however, we want more detailed information concerning an inference procedure, such as the distribution function of a measure of departure, which would then permit power calculations or a detailed display of p-values over a range of parameter values. We investigate how such distribution function approximations can be obtained from minimal information concerning the likelihood function, a minimum that is often available in applications. The resulting expressions clearly indicate how the various ingredients derive from the likelihood, and they provide a basis for understanding how nonnormality of the likelihood function affects related p-values. Moreover, they provide the basis for removing a computational singularity that arises near the maximum likelihood value when recently developed significance function formulas are used.

10.
An explicit expression for the characteristic polynomial of the information matrix of a balanced fractional s^m factorial design of resolution V_{p,q} (in particular, of resolution V when p = q = s − 1) is obtained by utilizing the decomposition of a multidimensional relationship algebra into its four two-sided ideals. Furthermore, using the algebraic structure of the underlying multidimensional relationship, the trace and the determinant of the covariance matrix of the estimates of the effects of interest are derived.

11.
Consider two or more treatments with dichotomous responses. A total of N experimental units are to be allocated in a fixed number r of stages, and the problem is to decide how many units to assign to each treatment in each stage. Responses from selections in previous stages are available and can be taken into account, but responses in the current stage are not available until the next group of selections is made. Information is updated via Bayes' theorem after each stage. The goal is to maximize the overall expected number of successes among the N units. Two forms of prior information are considered: (i) all arms have beta priors, and (ii) the prior distributions have continuous densities. Various characteristics of optimal decisions are presented. For example, in most cases of (i) and (ii), the optimal size of the first stage cannot grow at a rate greater than √N when r = 2.
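A minimal two-armed, two-stage (r = 2) version of case (i) can be worked out exactly when both arms have Beta(1,1) priors: the marginal number of first-stage successes on each arm is then uniform, and a simple second stage spends all remaining units on the arm with the higher posterior mean. The greedy second stage and the equal split of the first stage are simplifying assumptions for illustration, not the paper's optimal policy.

```python
def expected_successes(n1a, n1b, n2):
    """Expected total successes over two stages with independent Beta(1,1)
    priors on both arms and a greedy second stage of size n2."""
    total = 0.0
    for sa in range(n1a + 1):          # first-stage successes on arm A
        for sb in range(n1b + 1):      # first-stage successes on arm B
            p_out = 1.0 / ((n1a + 1) * (n1b + 1))  # Beta(1,1)-binomial is uniform
            ma = (sa + 1) / (n1a + 2)  # posterior mean of arm A
            mb = (sb + 1) / (n1b + 2)  # posterior mean of arm B
            total += p_out * (sa + sb + n2 * max(ma, mb))
    return total

# Search over (even) first-stage sizes, split equally between the two arms
N = 20
best = max(range(0, N + 1, 2),
           key=lambda n1: expected_successes(n1 // 2, n1 // 2, N - n1))
print(best, round(expected_successes(best // 2, best // 2, N - best), 3))
```

Even this crude policy beats no exploration (which yields N/2 expected successes), and the preferred first stage stays small relative to N, in the spirit of the √N rate mentioned above.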

12.
We study the asymptotics of Lp-estimators, p > 0, as estimators of a location parameter for data coming from a symmetric density with an infinite cusp at the center of symmetry of the distribution. In this situation, the data are more concentrated around the location parameter than in the usual cases, and the maximum-likelihood estimator is not defined. The rates of convergence of the Lp-estimators in this situation depend on p and on the shape of the density; for some densities and small values of p, the Lp-estimator converges at a fast rate.
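An Lp-estimator of location is simply the minimizer of Σ|x_i − t|^p. The sketch below computes it numerically for a few values of p on Laplace-type data, whose density is peaked (though with a finite cusp) at the center; the density and the use of SciPy's bounded scalar minimizer are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

def lp_estimate(x, p):
    """L_p location estimate: argmin over t of sum_i |x_i - t|**p, p > 0."""
    res = minimize_scalar(lambda t: np.sum(np.abs(x - t) ** p),
                          bounds=(x.min(), x.max()), method="bounded")
    return res.x

# Symmetric data peaked at the true location 2.0
x = 2.0 + rng.laplace(scale=0.5, size=500)
for p in (0.5, 1.0, 2.0):
    print(p, round(lp_estimate(x, p), 3))
```

Note that for p < 1 the objective is non-convex with local minima at the data points, so a bounded scalar minimizer may only find a local minimum; a grid search over t is a more robust (if slower) alternative there.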

13.
Spline smoothing is a popular technique for curve fitting, in which selection of the smoothing parameter is crucial. Many methods such as Mallows’ Cp, generalized maximum likelihood (GML), and the extended exponential (EE) criterion have been proposed to select this parameter. Although Cp is shown to be asymptotically optimal, it is usually outperformed by other selection criteria for small to moderate sample sizes due to its high variability. On the other hand, GML and EE are more stable than Cp, but they do not possess the same asymptotic optimality as Cp. Instead of selecting this smoothing parameter directly using Cp, we propose to select among a small class of selection criteria based on Stein's unbiased risk estimate (SURE). Due to the selection effect, the spline estimate obtained from a criterion in this class is nonlinear. Thus, the effective degrees of freedom in SURE contains an adjustment term in addition to the trace of the smoothing matrix, which cannot be ignored in small to moderate sample sizes. The resulting criterion, which we call adaptive Cp, is shown to have an analytic expression, and hence can be efficiently computed. Moreover, adaptive Cp is not only demonstrated to be superior and more stable than commonly used selection criteria in a simulation study, but also shown to possess the same asymptotic optimality as Cp.
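The role of Mallows' Cp here is to estimate prediction risk as RSS/n + 2σ²·tr(S_λ)/n for a linear smoother S_λ, then pick the λ minimizing that estimate. The sketch below applies this criterion to a ridge-penalized polynomial basis as a stand-in for a smoothing spline, with σ² assumed known; the basis, penalty, and λ grid are illustrative, and no SURE adjustment term is included.

```python
import numpy as np

rng = np.random.default_rng(4)

# Noisy samples of a smooth curve; sigma2 is taken as known for Mallows' Cp
n, sigma2 = 100, 0.25
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, np.sqrt(sigma2), n)

# Linear smoother: ridge fit on a polynomial basis (a stand-in for a spline basis)
B = np.vander(x, 10, increasing=True)

def cp(lam):
    """Mallows' Cp risk estimate RSS/n + 2*sigma2*tr(S)/n for the smoother S."""
    S = B @ np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T)
    rss = np.sum((y - S @ y) ** 2)
    return rss / n + 2.0 * sigma2 * np.trace(S) / n

lams = 10.0 ** np.arange(-8.0, 2.0)
best_lam = min(lams, key=cp)
print(best_lam, round(cp(best_lam), 4))
```

The adaptive Cp of the abstract differs precisely in the tr(S) term: once λ is itself chosen by a data-driven rule, the effective degrees of freedom pick up an extra adjustment beyond the trace of the smoothing matrix.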

14.
The problem of testing linear AR(p_1) dependence against diagonal bilinear BL(p_1, 0; p_2, p_2) dependence is considered. Emphasis is put on local asymptotic optimality and on leaving the innovation densities unspecified. The tests we derive are asymptotically valid under a large class of densities and locally asymptotically most stringent at some selected density f. They rely on generalized versions of the residual autocorrelations (the spectrum) and generalized versions of the so-called cubic autocorrelations (the bispectrum). Local powers are provided explicitly. The local power of the Gaussian Lagrange multiplier method follows as a particular case.

15.
A series of weakly resolvable search designs for the p^n factorial experiment is given for which the mean and all main effects are estimable in the presence of any number of two-factor interactions, and for which any combination of three or fewer pairs of interacting factors may be detected. The designs have N = p(p−1)n + p runs, except in one case where additional runs are required for detection and one case where (p−1)^2 additional runs are needed to estimate all (p−1)^2 degrees of freedom for each pair of detected interactions. The detection procedure is simple enough that the computations can be carried out by hand.

16.
In this paper, we give a new construction for transitive Kirkman triple systems. As a consequence, it is shown that, to prove the existence of a TKTS(ν) for each admissible ν, it is sufficient to prove the existence of a TKTS(3p) where p is a prime, p ≡ 5 (mod 6).

17.
Just as frequentist hypothesis tests have been developed to check model assumptions, prior predictive p-values and other Bayesian p-values check prior distributions as well as other model assumptions. These model checks not only suffer from the usual threshold dependence of p-values, but also from the suppression of model uncertainty in subsequent inference. One solution is to transform Bayesian and frequentist p-values for model assessment into a fiducial distribution across the models. Averaging the Bayesian or frequentist posterior distributions with respect to the fiducial distribution can reproduce results from Bayesian model averaging or classical fiducial inference.

18.
Two fractional factorial designs are considered isomorphic if one can be obtained from the other by relabeling the factors, reordering the runs, and/or switching the levels of factors. Identifying whether two designs are isomorphic is known to be an NP-hard problem. In this paper, we propose a three-dimensional matrix, named the letter interaction pattern matrix (LIPM), to characterize the information contained in the defining contrast subgroup of a regular two-level design. We first show that an LIPM uniquely determines a design up to isomorphism, and then propose a set of principles for rearranging an LIPM into a standard form. In this way, we can significantly reduce the computational complexity of the isomorphism check, which takes only O(2^p) + O(3k^3) + O(2k) operations to check two 2^(k−p) designs in the worst case. We also find a sufficient condition for two designs to be isomorphic to each other, which is very simple and easy to use. Finally, we list some designs with the maximum numbers of clear or strongly clear two-factor interactions that were not found before.

19.
In many estimation problems the parameter of interest is known, a priori, to belong to a proper subspace of the natural parameter space. Although useful in practice, this type of additional information can lead to surprising theoretical difficulties. In this paper, the problem of minimax estimation of a Bernoulli p is considered when p is restricted to a symmetric subinterval of the natural parameter space. For the sample sizes n = 1, 2, 3, and 4, least favorable priors with finite support are provided and the corresponding Bayes estimators are shown to be minimax. For n = 5 and 6, the usual constant-risk minimax estimator is shown to be the Bayes minimax estimator corresponding to a least favorable prior with finite support, provided the restriction on the parameter space is not too tight.

20.
In this work, the problem of transformation and simultaneous variable selection is treated thoroughly via objective Bayesian approaches, using default Bayes factor variants. Four uniparametric families of transformations (Box–Cox, Modulus, Yeo–Johnson and Dual), denoted by T, are evaluated and compared. Subjective prior elicitation for the transformation parameter λ_T, for each T, is not a straightforward task. Additionally, little prior information about λ_T is expected to be available, and therefore an objective method is required. The intrinsic Bayes factors and the fractional Bayes factors allow us to incorporate default improper priors for λ_T. We study the behaviour of each approach using a simulated reference example as well as two real-life examples.
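Of the four families, the Box–Cox transformation is the most familiar: y ↦ (y^λ − 1)/λ for λ ≠ 0, and log y at λ = 0. A minimal sketch of the transform itself (for positive data only; the example values are arbitrary):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform of positive data: (y**lam - 1)/lam, or log(y) when lam == 0."""
    y = np.asarray(y, dtype=float)
    if lam == 0.0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

y = np.array([0.5, 1.0, 2.0, 4.0])
print(box_cox(y, 0.0))  # the log transform
print(box_cox(y, 1.0))  # y - 1: a shifted identity
```

The shift by −1 and division by λ make the family continuous in λ at 0, which is what allows a single parameter λ_T to be treated as an unknown in the Bayes factor comparisons above.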


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号