Similar Articles
20 similar articles found (search time: 578 ms)
1.
Independent random samples are selected from each of a set of N independent populations, P1,…,PN. Interest centers on comparing N (unknown) scalar parameters θ1,…,θN associated respectively with the N populations P1,…,PN. Procedures are constructed for estimating the magnitude of each of the differences δi,j = θi − θj (1 ≤ i,j ≤ N) between pairs of populations. A loss function which adopts appropriate penalties for magnitude errors in estimation of differences is constructed. Magnitude estimators of differences are called transitive if they give rise to a transitive (i.e., consistent) relationship between pairwise differences of parameters. We show how to construct optimal efficient transitive magnitude-estimation procedures and demonstrate their usefulness through an example involving estimating the magnitude of the differences between disease incidence in paired towns for different pairs. Optimal transitive pairwise-comparison procedures are optimum (i.e., have the smallest posterior Bayes risks) in the class of all transitive pairwise-comparison procedures; as such they replace classical Bayes procedures, which are usually not transitive when the number N of parameters compared is large. The posterior Bayes risk of optimal transitive pairwise-comparison procedures is compared with that for alternative ‘adapted’ procedures, constructed from optimal simultaneous estimators and adapted for the purpose of pairwise comparisons. It is shown that the optimal transitive pairwise-comparison procedures dominate the adapted procedures (in posterior Bayes risk) and typically represent only a small increase in posterior risk over the classical Bayes procedures, which generally fail to be consistent. Optimal Bayes procedures are shown, for large numbers of parameters, to be reasonably easy to construct using the algorithms outlined in this paper.
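As a minimal illustration of the transitivity constraint itself (not the authors' Bayesian procedure), the sketch below projects an arbitrary skew-symmetric matrix of pairwise difference estimates onto the transitive set {δi,j = mi − mj} by least squares; for skew-symmetric input the projection reduces to taking row means.

```python
import numpy as np

def project_transitive(delta_hat):
    """Least-squares projection of skew-symmetric pairwise estimates
    delta_hat[i, j] ~ theta_i - theta_j onto the transitive set
    {delta[i, j] = m[i] - m[j]}; the solution sets m to the row means."""
    m = delta_hat.mean(axis=1)
    return np.subtract.outer(m, m)

rng = np.random.default_rng(0)
theta = rng.normal(size=4)
e = rng.normal(scale=0.1, size=(4, 4))
raw = np.subtract.outer(theta, theta) + e - e.T   # noisy, generally not transitive
t = project_transitive(raw)
print(np.allclose(t[0, 1] + t[1, 2], t[0, 2]))    # True: transitivity holds
print(abs(raw[0, 1] + raw[1, 2] - raw[0, 2]))     # raw estimates violate it
```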

2.
We consider the problem of comparing step-down and step-up multiple test procedures for testing n hypotheses when independent p-values or independent test statistics are available. The defining critical values of these procedures for independent test statistics are asymptotically equal, which yields a theoretical argument for the numerical observation that the step-up procedure is usually more powerful than the step-down procedure. The main aim of this paper is to quantify the differences between the critical values more precisely. As a by-product, we also obtain more information about the gain when we consider two subsequent steps of these procedures. Moreover, we investigate how liberal the step-up procedure becomes when the step-up critical values are replaced by their step-down counterparts or by more refined approximate values. The results for independent p-values are the basis for obtaining corresponding results when independent real-valued test statistics are at hand. It turns out that the differences between step-down and step-up critical values, as well as the differences between subsequent steps, tend to zero for many distributions, except for heavy-tailed distributions. The Cauchy distribution yields an example where the critical values of both procedures are nearly linearly increasing in n.
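For a concrete reference point (these are the classical special cases, not the paper's general construction), the Holm step-down and Hochberg step-up procedures use the same critical constants α/(n − i + 1) but walk through them in opposite directions, which is exactly the sense in which step-up rejects at least as much:

```python
import numpy as np

def holm_stepdown(p, alpha=0.05):
    """Holm step-down: walk from the smallest p-value; stop at the first failure."""
    n = len(p)
    order = np.argsort(p)
    crit = alpha / (n - np.arange(n))        # alpha/n, alpha/(n-1), ..., alpha
    reject = np.zeros(n, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= crit[rank]:
            reject[idx] = True
        else:
            break
    return reject

def hochberg_stepup(p, alpha=0.05):
    """Hochberg step-up: walk from the largest p-value; the first success rejects
    it and everything smaller. Same critical constants, opposite direction."""
    n = len(p)
    order = np.argsort(p)
    crit = alpha / (n - np.arange(n))
    reject = np.zeros(n, dtype=bool)
    for rank in range(n - 1, -1, -1):
        if p[order[rank]] <= crit[rank]:
            reject[order[: rank + 1]] = True
            break
    return reject

p = np.array([0.011, 0.02, 0.03, 0.04])
print(holm_stepdown(p), hochberg_stepup(p))
```

With p = (0.011, 0.02, 0.03, 0.04) and α = 0.05, Holm rejects only the first hypothesis while Hochberg rejects all four.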

3.
Suppose that the k treatments under comparison are ordered in a certain way. For example, there may be a sequence of increasing dose levels of a drug. It is interesting to look directly at the successive differences between the treatment effects μi, namely the set of differences μ2−μ1, μ3−μ2, …, μk−μk−1. In particular, directional inferences on whether μi<μi+1 or μi>μi+1 for i=1,…,k−1 are useful. Lee and Spurrier (J. Statist. Plann. Inference 43 (1995) 323) present one- and two-sided confidence interval procedures for making successive comparisons between treatments. In this paper, we develop a new procedure which is sharper than both the one- and two-sided procedures of Lee and Spurrier in terms of directional inferences. This new procedure is able to make more directional inferences than the two-sided procedure and maintains the inferential sensitivity of the one-sided procedure. Note, however, that this new procedure controls only type III error, not type I error. The critical point of the new procedure is the same as that of Lee and Spurrier's one-sided procedure. We also propose a power function for the new procedure and determine the sample size necessary for a guaranteed power level. The application of the procedure is illustrated with an example.
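A rough sketch of how directional calls follow from simultaneous bounds on the successive differences. The critical point d must come from Lee and Spurrier's tables (here it is simply a user-supplied input, and the balanced-design standard error s·√(2/n) is an assumed simplification):

```python
import numpy as np

def successive_directions(means, s, n, d):
    """Directional calls for mu_{i+1} - mu_i from simultaneous one-sided bounds.
    `d` is the critical point from Lee and Spurrier's one-sided tables (supplied
    by the user, not computed here); `s` is the pooled standard deviation and
    `n` the common per-treatment sample size."""
    half = d * s * np.sqrt(2.0 / n)
    calls = []
    for delta in np.diff(means):
        if delta - half > 0:
            calls.append("mu_i < mu_{i+1}")
        elif delta + half < 0:
            calls.append("mu_i > mu_{i+1}")
        else:
            calls.append("no direction")
    return calls

print(successive_directions(np.array([1.0, 1.8, 1.7, 2.9]), s=0.8, n=10, d=2.2))
```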

4.
Consider k independent random samples such that the ith sample is drawn from a two-parameter exponential population with location parameter μi and scale parameter θi, i = 1, …, k. For simultaneously testing differences between location parameters of successive exponential populations, closed testing procedures are proposed separately for the following cases: (i) when scale parameters are unknown and equal, and (ii) when scale parameters are unknown and unequal. Critical constants required for the proposed procedures are obtained numerically, and selected values of the critical constants are tabulated. A simulation study revealed that the proposed procedures have a better ability to detect significant differences and more power than existing procedures. The proposed procedures are illustrated using real data.

5.
For series systems with k components it is assumed that the cause of failure is known to belong to one of the 2^k − 1 possible subsets of the failure modes. The theoretical times to failure due to the k causes are assumed to have independent Weibull distributions with equal shape parameters. After finding the MLEs and the observed information matrix of (λ1, …, λk, β), a prior distribution is proposed for (λ1, …, λk), which is shown to yield a scale-invariant noninformative prior as well. No particular structure is imposed on the prior of β. Methods to obtain the marginal posterior distributions of the parameters and other parametric functions of interest, together with their Bayesian point and interval estimates, are discussed. The developed techniques are illustrated using a numerical example.
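A sketch of just the MLE step under one common parametrization (cause-k hazard λk·β·t^(β−1), so that given β the rate MLEs are λ̂k = dk/Σ ti^β and β is profiled numerically); this parametrization is an assumption, not necessarily the authors':

```python
import numpy as np
from scipy.optimize import minimize_scalar

def weibull_competing_mle(t, cause, k):
    """MLE for k competing Weibull risks with common shape beta and cause-specific
    rates lambda_k, assuming hazard lambda_k * beta * t**(beta - 1). Given beta,
    lambda_k_hat = d_k / sum(t**beta); beta is profiled numerically."""
    t = np.asarray(t, dtype=float)
    n = len(t)
    d = np.array([(cause == j).sum() for j in range(k)])

    def neg_profile_loglik(beta):
        s = np.sum(t ** beta)
        lam = d / s
        ll = (np.sum(d[d > 0] * np.log(lam[d > 0])) + n * np.log(beta)
              + (beta - 1) * np.sum(np.log(t)) - np.sum(lam) * s)
        return -ll

    beta = minimize_scalar(neg_profile_loglik, bounds=(1e-3, 20), method="bounded").x
    return d / np.sum(t ** beta), beta

# simulate: latent times with survival exp(-lam_j * t**beta); the system fails
# at the minimum, and the minimizing cause is observed
rng = np.random.default_rng(1)
lam_true, beta_true = np.array([0.5, 1.0, 1.5]), 1.3
u = rng.uniform(size=(200, 3))
latent = (-np.log(u) / lam_true) ** (1 / beta_true)
t, cause = latent.min(axis=1), latent.argmin(axis=1)
print(weibull_competing_mle(t, cause, k=3))
```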

6.
The inverse Gaussian family IG(μ,λ) is a versatile family for modelling nonnegative right-skewed data. In this paper, we propose robust methods for testing homogeneity of the scale-like parameters λi from k independent IG populations subject to order restrictions. Robustness of the procedures is examined for a variety of IG-symmetric alternatives, including lognormal and the recently introduced contaminated inverse Gaussian populations. Our study shows that these inference procedures for the inverse Gaussian scale-like parameters and their properties exhibit striking similarities to those of the scale parameters of the normal distribution.

7.
In many clinical applications, understanding when measurement of new markers is necessary to provide added accuracy to existing prediction tools could lead to more cost-effective disease management. Many statistical tools for evaluating the incremental value (IncV) of novel markers over routine clinical risk factors have been developed in recent years. However, most existing literature focuses primarily on global assessment. Since the IncVs of new markers often vary across subgroups, it would be of great interest to identify subgroups for which the new markers are most/least useful in improving risk prediction. In this paper we provide novel statistical procedures for systematically identifying potential traditional-marker-based subgroups in whom it might be beneficial to apply a new model with measurements of both the novel and traditional markers. We consider various conditional time-dependent accuracy parameters for censored failure time outcomes to assess the subgroup-specific IncVs. We provide non-parametric kernel-based estimation procedures to calculate the proposed parameters. Simultaneous interval estimation procedures are provided to account for sampling variation and adjust for multiple testing. Simulation studies suggest that our proposed procedures work well in finite samples. The proposed procedures are applied to the Framingham Offspring Study to examine the added value of an inflammation marker, C-reactive protein, on top of the traditional Framingham risk score for predicting 10-year risk of cardiovascular disease.

8.
This paper considers p-value based stepwise rejection procedures for testing multiple hypotheses. Existing procedures use constants as critical values at all steps. With the intention of incorporating the exact magnitudes of the p-values at earlier steps into the decisions at later steps, this paper applies a different strategy in which the critical values at later steps are determined as functions of the p-values from the earlier steps. As a result, we derive a new equality and develop a two-step rejection procedure from it. The new procedure is a short-cut of a step-up procedure and is notably simple. In terms of power, the proposed procedure is generally comparable to existing ones and markedly superior when the largest p-value is anticipated to be less than 0.5.

9.
We introduce scaled density models for binary response data, which can be much more reasonable than the traditional binary response models for particular types of binary response data. We derive the maximum-likelihood estimates for the new models, and the models appear to work well on several data sets. We also consider optimum designs for parameter estimation for these models and find that the D- and Ds-optimum designs are independent of the parameters corresponding to the linear function of dose level; the optimum designs are simple functions of a scale parameter only.

10.
Consider a linear regression model with regression parameter β=(β1,…,βp) and independent normal errors. Suppose the parameter of interest is θ=a^Tβ, where a is specified. Define the s-dimensional parameter vector τ=C^Tβ−t, where C and t are specified. Suppose that we carry out a preliminary F test of the null hypothesis H0:τ=0 against the alternative hypothesis H1:τ≠0. It is common statistical practice to then construct a confidence interval for θ with nominal coverage 1−α, using the same data, based on the assumption that the selected model had been given to us a priori (as the true model). We call this the naive 1−α confidence interval for θ. This assumption is false, and it may lead to this confidence interval having minimum coverage probability far below 1−α, making it completely inadequate. We provide a new elegant method for computing the minimum coverage probability of this naive confidence interval that works well irrespective of how large s is. A very important practical application of this method is to the analysis of covariance. In this context, τ can be defined so that H0 expresses the hypothesis of “parallelism”. Applied statisticians commonly recommend carrying out a preliminary F test of this hypothesis. We illustrate the application of our method with a real-life analysis of covariance data set and a preliminary F test for “parallelism”. We show that the naive 0.95 confidence interval has minimum coverage probability 0.0846, showing that it is completely inadequate.
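The paper computes the minimum coverage exactly; the toy simulation below merely illustrates the phenomenon in the simplest case (a preliminary t test of H0: β1 = 0 in straight-line regression, followed by a naive interval for θ = β0 + β1·x0; the design, x0 and the grid of slopes are arbitrary choices):

```python
import numpy as np
from scipy import stats

def naive_ci_coverage(beta1, n=20, x0=1.5, alpha=0.05, reps=20000, seed=0):
    """Monte Carlo coverage of the 'naive' CI for theta = beta0 + beta1*x0 built
    after a preliminary test of H0: beta1 = 0 (a toy version of the paper's
    setting; the paper computes the exact minimum coverage instead)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-1, 1, n)                 # centered design: mean(x) = 0
    theta = beta1 * x0                        # true beta0 = 0
    cover = 0
    for _ in range(reps):
        y = beta1 * x + rng.standard_normal(n)
        b1 = np.sum(x * y) / np.sum(x * x)
        b0 = y.mean()
        s2 = np.sum((y - b0 - b1 * x) ** 2) / (n - 2)
        se_b1 = np.sqrt(s2 / np.sum(x * x))
        if abs(b1 / se_b1) > stats.t.ppf(1 - alpha / 2, n - 2):
            est = b0 + b1 * x0                                 # keep full model
            se = np.sqrt(s2 * (1 / n + x0**2 / np.sum(x * x)))
            q = stats.t.ppf(1 - alpha / 2, n - 2)
        else:
            est = y.mean()                                     # "selected" null model
            se = y.std(ddof=1) / np.sqrt(n)
            q = stats.t.ppf(1 - alpha / 2, n - 1)
        cover += (est - q * se <= theta <= est + q * se)
    return cover / reps

for b1 in [0.0, 0.3, 0.6, 1.0]:
    print(b1, naive_ci_coverage(b1))
```

Coverage is close to 0.95 at β1 = 0 and for large |β1|, but dips well below the nominal level at intermediate slopes; that dip is what the paper's exact method quantifies.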

11.
The overall Type I error computed in the traditional way may be inflated if many hypotheses are compared simultaneously. The family-wise error rate (FWER) and false discovery rate (FDR) are among the commonly used error rates for measuring Type I error in the multiple hypothesis setting. Many FWER- and FDR-controlling procedures have been proposed and are able to control the desired FWER/FDR under certain scenarios. Nevertheless, these controlling procedures become too conservative when only some of the hypotheses come from the null. Benjamini and Hochberg (J. Educ. Behav. Stat. 25:60–83, 2000) proposed an adaptive FDR-controlling procedure that incorporates information about the number of true null hypotheses (m0) to overcome this problem. Since m0 is unknown, estimators of m0 are needed. Benjamini and Hochberg (J. Educ. Behav. Stat. 25:60–83, 2000) suggested a graphical approach to constructing an estimator of m0, which is shown to overestimate m0 (see Hwang in J. Stat. Comput. Simul. 81:207–220, 2011). Following a similar construction, this paper proposes new estimators of m0. Monte Carlo simulations are used to evaluate the accuracy and precision of the new estimators, and the feasibility of these new adaptive procedures is evaluated under various simulation settings.
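The adaptive scheme plugs an estimate of m0 into the Benjamini–Hochberg step-up rule, effectively running BH at level q·m/m̂0. The sketch below uses a simple Storey-type estimator as a stand-in for the graphical estimators discussed in the paper (both the stand-in and the simulated p-value mixture are assumptions):

```python
import numpy as np

def bh_reject(p, q, m0=None):
    """Benjamini-Hochberg step-up at level q; if an estimate m0 of the number of
    true nulls is supplied, run the adaptive version at level q * m / m0."""
    m = len(p)
    q_eff = q * m / m0 if m0 else q
    order = np.argsort(p)
    below = np.nonzero(np.sort(p) <= q_eff * np.arange(1, m + 1) / m)[0]
    k = below[-1] + 1 if below.size else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

def storey_m0(p, lam=0.5):
    """A simple Storey-type m0 estimator, used here only as a stand-in for the
    paper's graphical estimators: #{p > lam} / (1 - lam)."""
    return min(len(p), np.sum(p > lam) / (1 - lam))

rng = np.random.default_rng(2)
p = np.concatenate([rng.uniform(size=50),          # 50 true nulls
                    rng.beta(0.2, 5.0, size=50)])  # 50 signals
print("BH rejections:         ", bh_reject(p, 0.05).sum())
print("adaptive BH rejections:", bh_reject(p, 0.05, m0=storey_m0(p)).sum())
```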

12.
The general beta distribution with four parameters, denoted gbeta(α, β, c, d) and defined on the interval (c, d), is one of the most useful distributions in statistical applications and in related fields such as Operations Research and Management Science. We give here the exact expressions for the densities of X1X2 and of X1/X2, where X1 and X2 are independent general beta variables, and discuss the relationships between the new results and those already obtained. Applications in Reliability and Bayesian Quality Control are given. A computer program, developed for hypergeometric functions in general, can be used to carry out the precise computation of these densities.
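The exact densities involve hypergeometric functions; a quick Monte Carlo check of the product and ratio is easy to set up (the parameter values below are arbitrary):

```python
import numpy as np

def rgbeta(size, a, b, c, d, rng):
    """Sample gbeta(a, b, c, d) on (c, d): c + (d - c) * Beta(a, b)."""
    return c + (d - c) * rng.beta(a, b, size=size)

rng = np.random.default_rng(3)
x1 = rgbeta(10**6, 2.0, 3.0, 1.0, 4.0, rng)
x2 = rgbeta(10**6, 1.5, 2.5, 0.5, 2.0, rng)

for name, z in [("X1*X2", x1 * x2), ("X1/X2", x1 / x2)]:
    hist, edges = np.histogram(z, bins=50, density=True)
    print(f"{name}: support ~ [{z.min():.3f}, {z.max():.3f}], "
          f"density peak near {edges[np.argmax(hist)]:.3f}")
```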

13.
In this paper we consider the long-run availability of a parallel system having several independent renewable components with exponentially distributed failure and repair times. We are interested in testing availability of the system or constructing a lower confidence bound for the availability by using component test data. For this problem, no exact test or confidence bound is available; only approximate methods exist in the literature. Using the generalized p-value approach, an exact test and a generalized confidence interval are given. An example is given to illustrate the proposed procedures. A simulation study is given to demonstrate their advantages over the other available approximate procedures. Based on type I and type II error rates, the simulation study shows that the generalized procedures outperform the other available methods.
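One way such a generalized bound can be built (a sketch in the spirit of the generalized-pivot approach; the paper's exact construction may differ): each component's long-run availability is μ/(λ+μ), a parallel system is down only when all components are down, and for exponential data 2·rate·T ∼ χ²(2n) supplies a generalized pivot for each rate.

```python
import numpy as np

def availability_lower_bound(fail_samples, repair_samples, conf=0.95, B=100_000, seed=0):
    """Generalized lower confidence bound for the long-run availability of a
    parallel system (a sketch; the paper's exact construction may differ).
    For exponential data with n observations totalling T, 2*rate*T ~ chi2(2n),
    so a generalized pivot for the rate is chi2(2n)/(2T). Component availability
    A = mu/(lambda + mu); parallel system: A_sys = 1 - prod(1 - A_i)."""
    rng = np.random.default_rng(seed)
    down = np.ones(B)
    for tf, tr in zip(fail_samples, repair_samples):
        lam = rng.chisquare(2 * len(tf), B) / (2 * np.sum(tf))  # failure-rate pivot
        mu = rng.chisquare(2 * len(tr), B) / (2 * np.sum(tr))   # repair-rate pivot
        down *= lam / (lam + mu)                                # 1 - A_i
    return np.quantile(1 - down, 1 - conf)

rng = np.random.default_rng(4)
fails = [rng.exponential(100.0, size=10) for _ in range(3)]   # 3 components
repairs = [rng.exponential(5.0, size=10) for _ in range(3)]
print(availability_lower_bound(fails, repairs))
```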

14.
The existence of difference matrices over small cyclic groups is investigated in this computer-aided work. The maximum values of the parameters for which difference matrices exist, as well as the number of inequivalent difference matrices in each case, are determined up to the computational limit. Several new difference matrices have been found in this manner. The maximum number of rows is 9 for an r×15 difference matrix over Z3, 8 for an r×15 difference matrix over Z5, and 6 for an r×12 difference matrix over Z6; the number of inequivalent matrices with these parameters is 5, 2, and 7, respectively.
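The defining property is easy to verify by machine: in an r×n difference matrix over Zg, the entrywise differences of any two distinct rows hit every element of Zg equally often. A small checker (the 3×3 example is the multiplication table of Z3, a classical difference matrix with λ = 1):

```python
import numpy as np
from itertools import combinations

def is_difference_matrix(D, g):
    """Check that for each pair of distinct rows of D (entries in Z_g), the
    differences d1 - d2 (mod g) contain every element of Z_g equally often."""
    r, n = D.shape
    if n % g != 0:
        return False
    lam = n // g
    for i, j in combinations(range(r), 2):
        diff = (D[i] - D[j]) % g
        if np.any(np.bincount(diff, minlength=g) != lam):
            return False
    return True

D = np.multiply.outer(np.arange(3), np.arange(3)) % 3   # multiplication table of Z_3
print(is_difference_matrix(D, 3))                       # True
```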

15.
Efficient stochastic algorithms are presented for simulating allele configurations distributed according to a family πA, 0<A<∞, of exchangeable sampling distributions arising in population genetics. Each distribution πA has two parameters n and k, the sample size and the number of alleles, respectively. For A→0, the distribution πA is induced from neutral sampling, whereas for A→∞, it is induced from Maxwell–Boltzmann sampling. Three different Monte Carlo methods (independent sampling procedures) are provided, based on conditioning, sequential methods and a generalization of Pitman's ‘Chinese restaurant process’. Moreover, an efficient Markov chain Monte Carlo method is provided. The algorithms are applied to the homozygosity test and to the Ewens–Watterson–Slatkin test in order to test the hypothesis of selective neutrality.
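For orientation, a plain (unconditioned) Chinese restaurant process sampler together with the homozygosity statistic used by the test; the paper's algorithms additionally condition on the number of alleles k, which this sketch does not:

```python
import numpy as np

def crp_sample(n, A, rng):
    """Chinese restaurant process with parameter A: returns allele (table) counts.
    Customer m+1 joins an existing class with prob n_i/(m+A) and starts a new
    class with prob A/(m+A)."""
    counts = []
    for m in range(n):
        probs = np.array(counts + [A], dtype=float) / (m + A)
        choice = rng.choice(len(probs), p=probs)
        if choice == len(counts):
            counts.append(1)
        else:
            counts[choice] += 1
    return np.array(counts)

def homozygosity(counts):
    """Homozygosity test statistic: F = sum of squared allele frequencies."""
    freq = counts / counts.sum()
    return np.sum(freq**2)

rng = np.random.default_rng(5)
F = np.array([homozygosity(crp_sample(50, A=2.0, rng=rng)) for _ in range(1000)])
print(f"mean homozygosity at A=2.0: {F.mean():.3f}")
```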

16.
A sample of n subjects is observed in each of two states, S1 and S2. In each state, a subject is in one of two conditions, X or Y. Thus, a subject may be recorded as showing a change if its condition in the two states is ‘Y,X’ or ‘X,Y’; otherwise, the condition is unchanged. We consider a Bayesian test of the null hypothesis that the probability of an ‘X,Y’ change exceeds that of a ‘Y,X’ change by amount k0. That is, we develop the posterior distribution of k, the difference between the two probabilities, and reject the null hypothesis if k0 lies outside the appropriate posterior probability interval. The performance of the method is assessed by Monte Carlo and other numerical studies, and brief tables of exact critical values are presented.
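A minimal sketch of this kind of test: treat the counts (‘X,Y’ changes, ‘Y,X’ changes, no change) as multinomial, put a Dirichlet prior on the three cell probabilities (the flat Dirichlet(1,1,1) here is an assumed choice, not necessarily the paper's), and read off the posterior of k = pXY − pYX:

```python
import numpy as np

def change_test(n_xy, n_yx, n_same, k0, cred=0.95, B=100_000, seed=0):
    """Bayesian test of H0: p_XY - p_YX = k0 via the posterior of the difference.
    Counts are multinomial; a Dirichlet(1, 1, 1) prior (an assumed choice) gives
    a Dirichlet posterior. Reject H0 if k0 falls outside the central interval."""
    rng = np.random.default_rng(seed)
    post = rng.dirichlet([n_xy + 1, n_yx + 1, n_same + 1], size=B)
    k = post[:, 0] - post[:, 1]
    lo, hi = np.quantile(k, [(1 - cred) / 2, (1 + cred) / 2])
    return not (lo <= k0 <= hi), (lo, hi)

print(change_test(n_xy=30, n_yx=12, n_same=58, k0=0.0))
```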

17.
18.
We consider a 2^r factorial experiment with at least two replicates. Our aim is to find a confidence interval for θ, a specified linear combination of the regression parameters (for the model written as a regression, with factor levels coded as −1 and 1). We suppose that preliminary hypothesis tests are carried out sequentially, beginning with the rth-order interaction. After these preliminary hypothesis tests, a confidence interval for θ with nominal coverage 1−α is constructed under the assumption that the selected model had been given to us a priori. We describe a new efficient Monte Carlo method, which employs conditioning for variance reduction, for estimating the minimum coverage probability of the resulting confidence interval. The application of this method is demonstrated in the context of a 2^3 factorial experiment with two replicates and a particular contrast θ of interest. The preliminary hypothesis tests consist of the following two-step procedure. We first test the null hypothesis that the third-order interaction is zero against the alternative hypothesis that it is non-zero. If this null hypothesis is accepted, we assume that this interaction is zero and proceed to the second step; otherwise, we stop. In the second step, for each of the second-order interactions we test the null hypothesis that the interaction is zero against the alternative hypothesis that it is non-zero. If this null hypothesis is accepted, we assume that this interaction is zero. The resulting confidence interval, with nominal coverage probability 0.95, has a minimum coverage probability that is, to a good approximation, 0.464. This shows that this confidence interval is completely inadequate.

19.
Let π0, π1,…,πk be k+1 independent populations. For i=0,1,…,k, πi has the density f(x;θi), where the (unknown) parameter θi belongs to an interval of the real line. Our goal is to select from π1,…,πk (experimental treatments) those populations, if any, that are better (suitably defined) than π0, which is the control population. A locally optimal rule is derived in the class of rules for which Pr(πi is selected) ≤ γi, i=1,…,k, when θ0=θ1=⋯=θk. The criterion used for local optimality amounts to maximizing the efficiency, in a certain sense, of the rule in picking out the superior populations for specific configurations of θ=(θ0,…,θk) in a neighborhood of an equiparameter configuration. The general result is then applied to the following special cases: (a) normal means comparison — common known variance, (b) normal means comparison — common unknown variance, (c) gamma scale parameters comparison — known (unequal) shape parameters, and (d) comparison of regression slopes. In all these cases, the rule is obtained based on samples of unequal sizes.

20.
We propose a new adaptive L1 penalized quantile regression estimator for high-dimensional sparse regression models with heterogeneous error sequences. We show that, under weaker conditions than alternative procedures require, the adaptive L1 quantile regression selects the true underlying model with probability converging to one, and the unique estimates of non-zero coefficients it provides have the same asymptotic normal distribution as the quantile estimator which uses only the covariates with non-zero impact on the response. Thus, the adaptive L1 quantile regression enjoys oracle properties. We propose a completely data-driven choice of the penalty level λn, which ensures good performance of the adaptive L1 quantile regression. Extensive Monte Carlo simulation studies have been conducted to demonstrate the finite sample performance of the proposed method.
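A sketch of one standard way to implement an adaptive L1 quantile fit, via pilot estimates and column rescaling (sklearn's QuantileRegressor is used for the L1-penalized fits; the fixed lam below stands in for the paper's data-driven λn):

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

def adaptive_l1_quantile(X, y, tau=0.5, lam=0.05):
    """Adaptive L1-penalized quantile regression via column rescaling:
    (1) a pilot fit with a tiny penalty gives beta_pilot;
    (2) rescaling column j by |beta_pilot_j| makes a plain L1 penalty on the
        rescaled problem equal the adaptive penalty sum |beta_j|/|beta_pilot_j|;
    (3) map the coefficients back."""
    pilot = QuantileRegressor(quantile=tau, alpha=1e-6, solver="highs").fit(X, y)
    w = np.abs(pilot.coef_) + 1e-10          # adaptive weights; avoid divide-by-zero
    fit = QuantileRegressor(quantile=tau, alpha=lam, solver="highs").fit(X * w, y)
    return fit.coef_ * w, fit.intercept_

rng = np.random.default_rng(6)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [1.5, -1.0, 0.5]                                  # sparse truth
y = X @ beta + rng.standard_normal(n) * (1 + 0.5 * np.abs(X[:, 0]))  # heterogeneous errors
coef, b0 = adaptive_l1_quantile(X, y)
print(np.round(coef, 2))   # non-zero coefficients kept, noise coefficients shrunk to 0
```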

