Similar articles (20 results)
1.
With linear dispersion effects, the standard factorial designs are not optimal for estimation of a mean model. A sequential two-stage experimental design procedure has been proposed that first estimates the variance structure, and then uses the variance estimates and the variance optimality criterion to develop a second-stage design that efficiently estimates the mean model. This procedure has been compared to an equal replicate design analyzed by ordinary least squares, and found to be a superior procedure in many situations.

However, with small first-stage sample sizes the variance estimates are not reliable, and hence an alternative procedure could be more beneficial. For this reason a Bayesian modification to the two-stage procedure is proposed, which combines the first-stage variance estimates with prior variance information to produce a more efficient procedure. This Bayesian procedure will be compared to the non-Bayesian two-stage procedure and to the two one-stage alternative procedures listed above. Finally, a recommendation will be made as to which procedure is preferred in certain situations.

2.
For testing the non-inferiority (or equivalence) of an experimental treatment to a standard treatment, the odds ratio (OR) of patient response rates has been recommended to measure the relative treatment efficacy. On the basis of an exact test procedure proposed elsewhere for a simple crossover design, we develop an exact sample-size calculation procedure with respect to the OR of patient response rates for a desired power of detecting non-inferiority at a given nominal type I error. We note that the sample size calculated for a desired power based on an asymptotic test procedure can be much smaller than that based on the exact test procedure under a given situation. We further discuss the advantages and disadvantages of sample-size calculation using the exact test and the asymptotic test procedures. We employ an example studying two inhalation devices for asthmatics to illustrate the use of the sample-size calculation procedure developed here.

3.
A Bayesian discovery procedure
Summary. We discuss a Bayesian discovery procedure for multiple-comparison problems. We show that, under a coherent decision theoretic framework, a loss function combining true positive and false positive counts leads to a decision rule that is based on a threshold of the posterior probability of the alternative. Under a semiparametric model for the data, we show that the Bayes rule can be approximated by the optimal discovery procedure, which was recently introduced by Storey. Improving the approximation leads us to a Bayesian discovery procedure, which exploits the multiple shrinkage in clusters that are implied by the assumed non-parametric model. We compare the Bayesian discovery procedure and the optimal discovery procedure estimates in a simple simulation study and in an assessment of differential gene expression based on microarray data from tumour samples. We extend the setting of the optimal discovery procedure by discussing modifications of the loss function that lead to different single-thresholding statistics. Finally, we provide an application of the previous arguments to dependent (spatial) data.
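
To make the posterior-probability thresholding idea concrete, the following Python sketch applies such a threshold under a simple two-group normal mixture; the mixture parameters, threshold and simulated z-statistics are illustrative assumptions and not the semiparametric model of the paper.

    import numpy as np
    from scipy.stats import norm

    def posterior_alternative_prob(z, pi1=0.1, mu1=2.0, sigma1=1.0):
        """Posterior probability that a z-statistic comes from the alternative
        component of a simple two-group normal mixture:
        z ~ (1 - pi1) * N(0, 1) + pi1 * N(mu1, sigma1^2)."""
        f0 = norm.pdf(z, 0.0, 1.0)
        f1 = norm.pdf(z, mu1, sigma1)
        return pi1 * f1 / ((1 - pi1) * f0 + pi1 * f1)

    def threshold_rule(z_values, threshold=0.8, **mixture):
        """Flag the hypotheses whose posterior probability of being non-null
        exceeds the chosen threshold."""
        post = posterior_alternative_prob(np.asarray(z_values), **mixture)
        return post >= threshold

    # toy data: 900 null and 100 alternative z-statistics
    rng = np.random.default_rng(4)
    z = np.concatenate([rng.normal(size=900), rng.normal(2.0, 1.0, size=100)])
    print("flagged hypotheses:", threshold_rule(z, threshold=0.8).sum())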

4.
Chakraborti and Desu (1988) presented a distribution-free procedure for testing that k (≥1) distributions are equal to a control distribution. They compared their procedure, a generalization of the test proposed by Mathisen (1943), to the procedure proposed by Slivka (1970). They asserted that their procedure has shorter expected duration than Slivka's procedure in life-testing experiments where observations become available in an ordered manner. Here it is proven that, in fact, Slivka's procedure has shorter duration in such circumstances. Normal approximations are presented which indicate that their procedure requires a smaller sample size to guarantee a specified power for Lehmann alternatives and proportional hazard alternatives when all observations are to be observed.

5.
Responses in a one-factor experiment with k ordered treatments follow an umbrella ordering if they consist of two piecewise monotone segments, i.e. increasing and then decreasing, or the converse. This paper proposes a nonparametric, distribution-free confidence procedure for umbrella orderings, the aim being to identify the treatments that correspond to the optimal effects. It uses a method that joins the seemingly unrelated theories of U-statistics and isotonic regression. A random confidence subset of the ordered treatments is constructed, such that it contains all the unknown peaks (optimal treatments) of an umbrella ordering with any prespecified confidence level. The paper demonstrates that the proposed confidence procedure is nonparametric and distribution-free and, further, that the proposed procedure naturally implies a test for umbrella alternatives. Since the proposed confidence procedure is always more informative than tests for umbrella alternatives, it should be used in their place in practice. An example illustrates the proposed procedure.

6.
Assuming that the frequency of occurrence follows the Poisson distribution, we develop sample size calculation procedures for testing equality based on an exact test procedure and an asymptotic test procedure under an AB/BA crossover design. We employ Monte Carlo simulation to demonstrate the use of these sample size formulae and evaluate the accuracy of the sample size calculation formula derived from the asymptotic test procedure with respect to power in a variety of situations. We note that when both the relative treatment effect of interest and the underlying intraclass correlation between frequencies within patients are large, the sample size calculation based on the asymptotic test procedure can lose accuracy. In this case, the sample size calculation procedure based on the exact test is recommended. On the other hand, if the relative treatment effect of interest is small, the minimum required number of patients per group will be large, and the asymptotic test procedure will be valid for use. In this case, we may consider use of the sample size calculation formula derived from the asymptotic test procedure to reduce the number of patients needed for the exact test procedure. We include an example regarding a double-blind randomized crossover trial comparing salmeterol with a placebo in exacerbations of asthma to illustrate the practical use of these sample size formulae. Copyright © 2013 John Wiley & Sons, Ltd.

7.
This paper deals with an asymptotic distribution-free subset selection procedure for a two-way layout problem. The treatment effect with the largest unknown value is of interest to us. The block effect is a nuisance parameter in this problem. The proposed procedure is based on the Hodges-Lehmann estimators of location parameters. The asymptotic relative efficiency of the proposed procedure with respect to the normal means procedure is evaluated. It is shown that the proposed procedure has a high efficiency.
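
For readers unfamiliar with the building block, here is a minimal Python sketch of the one-sample Hodges-Lehmann location estimator (the median of Walsh averages); the selection rule itself and the two-way layout adjustments described in the abstract are not reproduced, and the sample data are illustrative.

    import numpy as np
    from itertools import combinations_with_replacement

    def hodges_lehmann(x):
        """One-sample Hodges-Lehmann location estimate: the median of all
        pairwise Walsh averages (x_i + x_j) / 2 with i <= j."""
        x = np.asarray(x, float)
        walsh = [(x[i] + x[j]) / 2.0
                 for i, j in combinations_with_replacement(range(x.size), 2)]
        return float(np.median(walsh))

    # heavy-tailed toy sample centred at 5
    rng = np.random.default_rng(6)
    data = rng.standard_t(df=3, size=30) + 5.0
    print("Hodges-Lehmann estimate:", round(hodges_lehmann(data), 3))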

8.
Traditional multiple hypothesis testing procedures fix an error rate and determine the corresponding rejection region. In 2002 Storey proposed a fixed rejection region procedure and showed numerically that it can gain more power than the fixed error rate procedure of Benjamini and Hochberg while controlling the same false discovery rate (FDR). In this paper it is proved that when the number of alternatives is small compared to the total number of hypotheses, Storey's method can be less powerful than that of Benjamini and Hochberg. Moreover, the two procedures are compared by setting them to produce the same FDR. The difference in power between Storey's procedure and that of Benjamini and Hochberg is near zero when the distance between the null and alternative distributions is large, but Benjamini and Hochberg's procedure becomes more powerful as the distance decreases. It is shown that modifying the Benjamini and Hochberg procedure to incorporate an estimate of the proportion of true null hypotheses as proposed by Black gives a procedure with superior power.
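
As a point of reference for the two approaches being compared, the following Python sketch implements the standard Benjamini-Hochberg step-up rule (fixed error rate) and a Storey-style FDR estimate for a fixed rejection region; the level, rejection threshold, tuning parameter lambda and simulated p-values are illustrative assumptions rather than the settings studied in the paper.

    import numpy as np

    def benjamini_hochberg(pvals, alpha=0.05):
        """Fixed error rate: step-up rule rejecting the largest set of
        ordered p-values with p_(i) <= (i / m) * alpha."""
        p = np.asarray(pvals)
        m = len(p)
        order = np.argsort(p)
        below = p[order] <= alpha * np.arange(1, m + 1) / m
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.where(below)[0])      # largest index satisfying the bound
            reject[order[:k + 1]] = True
        return reject

    def storey_fdr(pvals, t=0.05, lam=0.5):
        """Fixed rejection region [0, t]: estimate pi0 from p-values above lam,
        then estimate the FDR of rejecting all p <= t."""
        p = np.asarray(pvals)
        m = len(p)
        pi0_hat = np.sum(p > lam) / ((1 - lam) * m)
        r = max(np.sum(p <= t), 1)
        return min(pi0_hat * t * m / r, 1.0)

    # toy example: 950 null and 50 alternative p-values
    rng = np.random.default_rng(0)
    pvals = np.concatenate([rng.uniform(size=950),
                            rng.beta(0.2, 5, size=50)])   # alternatives near 0
    print(benjamini_hochberg(pvals, alpha=0.05).sum(), "rejections (BH)")
    print("estimated FDR for rejecting p <= 0.05:", round(storey_fdr(pvals, t=0.05), 3))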

9.
A subset selection procedure is developed for selecting a subset containing the multinomial population that has the highest value of a certain linear combination of the multinomial cell probabilities; such a population is called the ‘best’. The multivariate normal large sample approximation to the multinomial distribution is used to derive expressions for the probability of a correct selection, and for the threshold constant involved in the procedure. The procedure guarantees that the probability of a correct selection is at least at a pre-assigned level. The proposed procedure is an extension of Gupta and Sobel's [14] selection procedure for binomials and of Bakir's [2] restrictive selection procedure for multinomials. One illustration of the procedure concerns population income mobility in four countries: Peru, Russia, South Africa and the USA. Analysis indicates that Russia and Peru fall in the selected subset containing the best population with respect to income mobility from poverty to a higher-income status. The procedure is also applied to data concerning grade distribution for students in a certain freshman class.

10.
This article deals with the Granger non-causality test in cointegrated vector autoregressive processes. We propose a new testing procedure that yields an asymptotically standard distribution and performs well in small samples by combining the standard Wald test and the generalized inverse procedure. We also propose a few simple modifications to the test statistics in order to help our procedure perform better in finite samples. Monte Carlo simulations show that our procedure works better than the conventional approach.

11.
A procedure for selecting a subset of predictor variables in regression analysis is suggested. The procedure is so designed that it leads to the selection of a subset of variables having an adequate degree of informativeness with a directly specified confidence coefficient. Some examples are considered to illustrate the application of the procedure.

12.
In many scientific fields, it is interesting and important to determine whether an observed data stream comes from a prespecified model or not, particularly when the number of data streams is of large scale, where multiple hypothesis testing is necessary. In this article, we consider large-scale model checking under certain dependence among different data streams observed at the same time. We propose a false discovery rate (FDR) control procedure to check those unusual data streams. Specifically, we derive an approximation of false discovery and construct a point estimate of the FDR. Theoretical results show that, under some mild assumptions, our proposed estimate of the FDR is simultaneously conservatively consistent with the true FDR, and hence it is an asymptotically strong control procedure. Simulation comparisons with some competing procedures show that our proposed FDR procedure behaves better in general settings. Application of our proposed FDR procedure is illustrated by the StarPlus fMRI data.

13.
Gu MG, Sun L, Zuo G. Lifetime Data Analysis 2005, 11(4): 473-488
An important property of the Cox regression model is that the estimation of the regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures so far available involve estimation of the infinite dimensional baseline function. A detailed computational algorithm using Markov chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean zero martingale is provided.

14.
We consider the classification of high-dimensional data under the strongly spiked eigenvalue (SSE) model. We create a new classification procedure on the basis of the high-dimensional eigenstructure in the high-dimension, low-sample-size context. We propose a distance-based classification procedure that uses a data transformation. We also prove that our proposed classification procedure has the consistency property for misclassification rates. We discuss the performance of our classification procedure in simulations and in real data analyses using microarray data sets.

15.
In classification analysis, the target variable is often in practice defined by an underlying multivariate interval screening scheme. This engenders the problem of properly characterizing the screened populations as well as that of obtaining a classification procedure. Such problems paved the way for the development of yet another linear classification procedure and the incorporation of a class of skew-elliptical distributions for describing evolutions in the populations. To render the linear procedure effective, this article considers the derivation and properties of the classification procedure as well as efficient estimation. The procedure is illustrated in applications to real and simulated data.

16.
The purpose of toxicological studies is a safety assessment of compounds (e.g. pesticides, pharmaceuticals, industrial chemicals and food additives) at various dose levels. Because a mistaken declaration that a truly non-equivalent dose is equivalent could have dangerous consequences, it is important to adopt reliable statistical methods that can properly control the family-wise error rate. We propose a new stepwise confidence interval procedure for toxicological evaluation based on an asymmetric loss function. The new procedure is shown to be reliable in the sense that the corresponding family-wise error rate is well controlled at or below the pre-specified nominal level. Our simulation results show that the new procedure is to be preferred over the classical confidence interval procedure and the stepwise procedure based on Welch's approximation in terms of practical equivalence/safety. The implementation and significance of the new procedure are illustrated with two real data sets: one from a reproductive toxicological study on Nitrofurazone in Swiss CD-1 mice, and the other from a toxicological study on Aconiazide.

17.
Tmax and Cmax are important pharmacokinetic parameters in drug development. Often a nonparametric procedure is needed to estimate them when model independence is required. This paper proposes a simulation-based optimal design procedure for finding optimal sampling times for nonparametric estimates of Tmax and Cmax for each subject, assuming that the drug concentration follows a non-linear mixed model. The main difficulty of using standard optimal design procedures is that the properties of the nonparametric estimates are very complicated. The proposed procedure uses a sample-reuse simulation to calculate the design criterion, which is a multi-dimensional integral, so that effective optimization procedures such as Newton-type procedures can be used directly to find optimal designs. The procedure is used to construct optimal designs for an open one-compartment model. An approximation based on the Taylor expansion is also derived and gave results that were consistent with those based on the sample-reuse simulation.
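
The per-subject nonparametric estimates that the design targets are simply the observed maximum of the sampled concentrations; a minimal Python sketch follows, in which the sampling times, one-compartment parameters and noise level are illustrative assumptions rather than values from the paper.

    import numpy as np

    def nonparametric_tmax_cmax(times, concentrations):
        """Model-independent estimates for one subject: Cmax is the largest
        observed concentration and Tmax is the sampling time at which it
        was observed."""
        t = np.asarray(times, float)
        c = np.asarray(concentrations, float)
        k = int(np.argmax(c))
        return t[k], c[k]

    # illustrative sampling times (h) and a noisy one-compartment profile
    rng = np.random.default_rng(3)
    times = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12])
    ka, ke, scale = 1.2, 0.15, 10.0
    conc = scale * (np.exp(-ke * times) - np.exp(-ka * times))
    conc += rng.normal(scale=0.2, size=times.size)
    tmax, cmax = nonparametric_tmax_cmax(times, conc)
    print(f"Tmax = {tmax} h, Cmax = {cmax:.2f}")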

18.
This article proposes a new procedure for obtaining one-sided tolerance limits in unbalanced random effects models. The procedure is a generalization of that proposed by Mee and Owen for the balanced situation, and can be easily implemented because it only needs a noncentral-t table. Two simulation studies are carried out to assess the performance of the new procedure and to compare it with one of the other procedures laid out in previous statistical literature. The article's findings show that the new procedure is much simpler to compute and performs better than the previous ones, having smaller values of the gamma bias in a wide range of situations representative of many actual industrial applications, and also behaving reasonably well in more extreme sampling situations. The use of the new limits is illustrated by an application to an actual example from the steel industry.
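
To show the kind of noncentral-t computation such procedures rely on, here is a minimal Python sketch of the classical one-sided lower tolerance limit for a single i.i.d. normal sample, the building block that Mee and Owen's method and its random effects generalizations extend; the coverage, confidence level and simulated data are illustrative assumptions, and the sketch is not the article's unbalanced procedure.

    import numpy as np
    from scipy.stats import nct, norm

    def lower_tolerance_limit(x, coverage=0.90, confidence=0.95):
        """Classical one-sided lower tolerance limit for an i.i.d. normal sample:
        xbar - k * s with k = t'_{confidence; n-1; delta} / sqrt(n), where t' is
        a noncentral-t quantile and delta = z_coverage * sqrt(n)."""
        x = np.asarray(x, float)
        n = x.size
        delta = norm.ppf(coverage) * np.sqrt(n)
        k = nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)
        return x.mean() - k * x.std(ddof=1)

    rng = np.random.default_rng(5)
    sample = rng.normal(loc=100.0, scale=5.0, size=25)
    print("lower (90%, 95%) tolerance limit:", round(lower_tolerance_limit(sample), 2))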

19.
Zhang C, Fan J, Yu T. Annals of Statistics 2011, 39(1): 613-642
The multiple testing procedure plays an important role in detecting the presence of spatial signals in large-scale imaging data. Typically, the spatial signals are sparse but clustered. This paper provides empirical evidence that, for a range of commonly used control levels, the conventional FDR procedure can lack the ability to detect statistical significance, even if the p-values under the true null hypotheses are independent and uniformly distributed; more generally, ignoring the neighboring information of spatially structured data will tend to diminish the detection effectiveness of the FDR procedure. This paper first introduces a scalar quantity to characterize the extent to which the "lack of identification phenomenon" (LIP) of the FDR procedure occurs. Second, we propose a new multiple comparison procedure, called FDR(L), to accommodate the spatial information of neighboring p-values via a local aggregation of p-values. Theoretical properties of the FDR(L) procedure are investigated under weak dependence of p-values. It is shown that the FDR(L) procedure alleviates the LIP of the FDR procedure, thus substantially facilitating the selection of more stringent control levels. Simulation evaluations indicate that the FDR(L) procedure improves the detection sensitivity of the FDR procedure with little loss in detection specificity. The computational simplicity and detection effectiveness of the FDR(L) procedure are illustrated through a real brain fMRI dataset.
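
To illustrate the local-aggregation idea in the simplest possible terms, the following Python sketch replaces each p-value in an image by the median of its neighbours before applying an ordinary Benjamini-Hochberg step-up rule; this is not the FDR(L) threshold calibration of the paper, and the neighbourhood radius, signal block and simulated p-values are illustrative assumptions.

    import numpy as np

    def local_median_pvalues(p_image, radius=1):
        """Replace each p-value by the median of the p-values in a
        (2*radius+1) x (2*radius+1) neighbourhood (edges handled by clipping)."""
        n1, n2 = p_image.shape
        out = np.empty_like(p_image)
        for i in range(n1):
            for j in range(n2):
                block = p_image[max(i - radius, 0):i + radius + 1,
                                max(j - radius, 0):j + radius + 1]
                out[i, j] = np.median(block)
        return out

    def bh_reject(pvals, alpha=0.05):
        """Ordinary Benjamini-Hochberg step-up rule applied to a flattened array."""
        p = np.ravel(pvals)
        m = p.size
        order = np.argsort(p)
        below = p[order] <= alpha * np.arange(1, m + 1) / m
        reject = np.zeros(m, dtype=bool)
        if below.any():
            reject[order[:np.max(np.nonzero(below)[0]) + 1]] = True
        return reject.reshape(np.shape(pvals))

    # toy image: a 40x40 field of null p-values with a clustered 6x6 signal block
    rng = np.random.default_rng(1)
    p_img = rng.uniform(size=(40, 40))
    p_img[10:16, 10:16] = rng.beta(0.2, 8, size=(6, 6))
    smoothed = local_median_pvalues(p_img, radius=1)
    print("rejections without aggregation:", bh_reject(p_img).sum())
    print("rejections with local aggregation:", bh_reject(smoothed).sum())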

20.
An algorithm is presented for computing an exact nonparametric interval estimate of the slope parameter in a simple linear regression model. The confidence interval is obtained by inverting the hypothesis test for slope that uses Spearman's rho. This method is compared to an exact procedure based on Kendall's tau. The Spearman rho procedure will generally give exact levels of confidence closer to desired levels, especially in small samples. Monte Carlo results comparing these two methods with the parametric procedure are given.
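
A rough Python sketch of the test-inversion idea is given below: a candidate slope b is retained in the interval when the Spearman correlation between x and the residuals y - b*x is not significant. It uses scipy's large-sample p-value rather than the exact null distribution of the algorithm described here, and the candidate grid, data and level are illustrative assumptions.

    import numpy as np
    from itertools import combinations
    from scipy.stats import spearmanr

    def spearman_slope_ci(x, y, alpha=0.05):
        """Approximate confidence interval for the slope of a simple linear
        regression obtained by inverting the Spearman rank-correlation test:
        keep a candidate slope b if spearmanr(x, y - b*x) is not significant
        at level alpha.  Candidates are the pairwise slopes
        (y_j - y_i) / (x_j - x_i), where the statistic can change."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        cand = sorted((y[j] - y[i]) / (x[j] - x[i])
                      for i, j in combinations(range(len(x)), 2)
                      if x[j] != x[i])
        kept = [b for b in cand if spearmanr(x, y - b * x).pvalue > alpha]
        return (min(kept), max(kept)) if kept else (np.nan, np.nan)

    # toy data: y = 2x + noise
    rng = np.random.default_rng(2)
    x = np.arange(1.0, 16.0)
    y = 2.0 * x + rng.normal(scale=2.0, size=x.size)
    print("approx. 95% CI for slope:", spearman_slope_ci(x, y))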
