Similar articles
 20 similar articles retrieved.
1.
In this paper, a formula for the expected number of incorrect decisions is obtained under the assumption that the factors are defective with different a-priori probabilities. Group-screening designs are described which minimise (i) the expected number of runs for a fixed value of the expected number of incorrect decisions, and (ii) the expected total cost.

2.
The performance of step-wise group screening, in terms of the expected number of runs and the expected number of incorrect decisions, is considered. A method for obtaining optimal step-wise designs is presented for the case in which the direction of each defective factor is assumed to be known a-priori and the observations are subject to error.

3.
This paper examines the performance of three-stage group screening in terms of the expected number of runs and the expected number of incorrect decisions. Tables are given at the end to provide the reader with a guideline for choosing a suitable screening strategy.

4.
Step-wise group screening experiments for classifying members of a population as either good or defective are generalised to more than two stages. An expression for the expected number of tests is obtained, and optimum 2-, 3- and 4-stage designs are tabulated. By assuming the defective probability p to be small, an approximation to the expected number of tests is derived and minimised with respect to the number of groups in each stage. Finally, a new bifurcation technique, seen to be a special case of multi-stage step-wise group screening, is discussed.
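As an illustrative baseline only (the classical two-stage Dorfman-type calculation, not the multi-stage expression derived in the paper), the following Python sketch shows how an expected-tests-per-item formula can be minimised over the group size, assuming independent defectives with a common small probability p:

```python
# Illustrative baseline: expected tests per item in classical two-stage
# (Dorfman-type) group testing with independent defectives of probability p.
# This is NOT the paper's multi-stage expression; it only shows the kind of
# optimisation involved.

def expected_tests_per_item(p: float, k: int) -> float:
    """One group test per k items, plus k individual tests if the group fails."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def optimal_group_size(p: float, k_max: int = 200) -> int:
    """Brute-force search for the group size minimising expected tests per item."""
    return min(range(2, k_max + 1), key=lambda k: expected_tests_per_item(p, k))

if __name__ == "__main__":
    for p in (0.01, 0.05, 0.10):
        k = optimal_group_size(p)
        print(f"p={p:.2f}: optimal group size k={k}, "
              f"expected tests per item={expected_tests_per_item(p, k):.3f}")
```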

5.
In this article we discuss multistage group screening in which the group-factors contain differing numbers of factors. We describe a procedure for grouping the factors in the absence of concrete prior information, so that the relative testing cost is minimal. It is shown that under quite general conditions these designs require fewer runs than the equivalent designs in which the group-factors contain the same number of factors.

6.
Group testing problems are considered as examples of discrete search problems. Existence theorems for optimal nonsequential designs, developed for general discrete search problems in O'Geran et al. (Acta Appl. Math. 25 (1991) 241–276), are applied to construct upper bounds for the length of optimal group testing strategies under the additive model. The key step in the study is the derivation of analytic expressions for the so-called Rényi coefficients. In addition, some asymptotic results are obtained and an asymptotic design problem is considered. The results imply, in particular, that if the number of significant factors is relatively small compared to the total number of factors, then choosing test collections that each contain half of the total number of factors is asymptotically optimal in an appropriate sense.
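For orientation only, here is the elementary counting benchmark that any group testing strategy must respect (this is the generic information-theoretic bound, not the Rényi-coefficient bounds derived in the paper). If each test outcome can take at most $v$ distinct values ($v=2$ for a binary yes/no test; at most one more than the pool size in the additive model), then distinguishing the $\binom{n}{d}$ possible sets of $d$ significant factors among $n$ requires a number of tests $T$ satisfying

```latex
T \;\ge\; \left\lceil \log_{v} \binom{n}{d} \right\rceil
  \;\sim\; \frac{d \log_2 (n/d)}{\log_2 v}
  \qquad (d \text{ fixed},\ n \to \infty).
```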

7.
In this work a method is developed for determining the expected sample size, 2nN, required by a group sequential test using a Bayesian approach. This method is proved to be superior to some recently developed methods. It gives a specific technique for determining the maximum number of groups, N, as well as the group size, 2n. The proposed method allows very early termination of the experiment when the alternative hypothesis is true, i.e. when there is a real difference between the treatments under consideration.

8.
This paper examines the performance of three-stage group screening in terms of the mean number of tests needed and the proportion of active factors correctly detected by the screening plan. A linear cost function is also proposed. To evaluate performance, random grouping and a constant signal-to-noise ratio for all active factors are assumed.

9.
We consider a wide range of combinatorial group testing problems with lies, including binary, additive and multiaccess-channel group testing problems. We derive upper bounds for the number of tests in optimal nonadaptive algorithms. The derivation is probabilistic and therefore non-constructive; it does not provide a way of constructing optimal algorithms. In the asymptotic setting, we show that the leading term for the number of tests does not depend on the number of lies and is thus the same as in the zero-lie case. However, the other terms in the asymptotic upper bounds do depend on the number of lies and substantially influence the upper bounds in the non-asymptotic situation.

10.
This article deals with multistage group screening in which the group-factors contain the same number of factors. A usual assumption of this procedure is that the directions of possible effects are known. In practice, however, this assumption is often unreasonable. This paper examines, in the case of no errors in observations, the performance of multistage group screening when this assumption is false. This entails consideration of cancellation effects within group-factors.

11.
The Benjamini–Hochberg procedure is widely used in multiple comparisons. Previous power results for this procedure have been based on simulations. This article derives theoretical expressions for the expected power. To derive them, we make assumptions about the number of hypotheses being tested, which null hypotheses are true, which are false, and the distributions of the test statistics under each null and alternative. We use these assumptions to derive bounds for multidimensional rejection regions. With these bounds and a permanent-based representation of the joint density function of the largest p-values, we use the law of total probability to derive the distribution of the total number of rejections. We also derive the joint distribution of the total number of rejections and the number of rejections made when the null hypothesis is true. Finally, we give an analytic expression for the expected power of a false discovery rate procedure that assumes the hypotheses are independent.
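As a point of reference (this is the Benjamini–Hochberg step-up rule itself, not the power calculation developed in the article), a minimal Python sketch of the procedure, which controls the false discovery rate at level q for independent or positively dependent p-values:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean array marking which hypotheses are rejected
    at false discovery rate level q.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                       # indices of ascending p-values
    thresholds = q * np.arange(1, m + 1) / m    # BH critical values i*q/m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])        # largest i with p_(i) <= i*q/m
        reject[order[:k + 1]] = True            # reject the k smallest p-values
    return reject

# Example with hypothetical p-values:
# print(benjamini_hochberg([0.001, 0.008, 0.039, 0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]))
```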

13.
A new procedure is introduced for conducting screening experiments to find a small number of influential factors from among a large number of factors with negligible effects. It is intended for experiments in which the factors are easily controlled, as in simulation models. After a small initial experiment, observations are added sequentially. The performance of the procedure is investigated by simulation, and evidence is presented that this and other procedures scale as the logarithm of the total number of factors when the number of influential factors is fixed. An investigation of the new procedure for 1–3 active factors shows that it compares favorably with competing methods, particularly when the size of the nonzero effects is 1–2 times the standard deviation. A limited look at the procedure with up to 6 active factors is also presented.

14.
It is generally assumed that the defective factors of a population have the same a-priori probability of being defective. With some knowledge of the population, however, we can relax this assumption and allow the population to consist of factors with unequal a-priori probabilities of being defective. A step-wise screening procedure is developed to detect these factors with the minimum expected number of runs, assuming that there are no errors in the observations. A comparison is made with an equivalent two-stage group screening experiment.

15.
Non-parametric group sequential designs in randomized clinical trials
This paper examines some non-parametric group sequential designs applicable to randomized clinical trials for comparing two continuous treatment effects, taking the observations in matched pairs, or applicable to event-based analysis. Two inverse binomial sampling schemes are considered, of which the second is an adaptive, data-dependent design. These designs are compared with some fixed-sample-size competitors. Power and expected sample sizes are calculated for the proposed procedures.

16.
Variance dispersion graphs have become a popular tool in aiding the choice of a response surface design. Often differences in response from some particular point, such as the expected position of the optimum or standard operating conditions, are more important than the response itself. We describe two examples from food technology. In the first, an experiment was conducted to find the levels of three factors which optimized the yield of valuable products enzymatically synthesized from sugars and to discover how the yield changed as the levels of the factors were changed from the optimum. In the second example, an experiment was conducted on a mixing process for pastry dough to discover how three factors affected a number of properties of the pastry, with a view to using these factors to control the process. We introduce the difference variance dispersion graph (DVDG) to help in the choice of a design in these circumstances. The DVDG for blocked designs is developed and the examples are used to show how the DVDG can be used in practice. In both examples a design was chosen by using the DVDG, as well as other properties, and the experiments were conducted and produced results that were useful to the experimenters. In both cases the conclusions were drawn partly by comparing responses at different points on the response surface.
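For orientation, a hedged sketch of the quantity such a graph summarizes (the authors' exact scaling may differ): for a linear response surface model with model matrix $X$ and expanded model vector $f(x)$, a difference variance dispersion graph plots summaries (maximum, minimum, average over spheres of radius $r$) of the variance of the predicted difference from a reference point $x_0$, which under ordinary least squares is

```latex
\operatorname{Var}\!\left[\hat{y}(x)-\hat{y}(x_0)\right]
  \;=\; \sigma^2\,\bigl(f(x)-f(x_0)\bigr)'\,(X'X)^{-1}\,\bigl(f(x)-f(x_0)\bigr).
```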

17.
We consider the problem of selecting variables in factor analysis models. The $L_1$ regularization procedure is introduced to perform automatic variable selection. In the factor analysis model, each variable is controlled by multiple factors when there is more than one underlying factor. We treat the parameters corresponding to the multiple factors as grouped parameters and then apply the group lasso. Furthermore, the weight of the group lasso penalty is modified to obtain appropriate estimates and improve the performance of variable selection. Crucial issues in this modeling procedure include the selection of the number of factors and a regularization parameter. Choosing these parameters can be viewed as a model selection and evaluation problem. We derive a model selection criterion for evaluating the factor analysis model via the weighted group lasso. Monte Carlo simulations are conducted to investigate the effectiveness of the proposed procedure. A real data example is also given to illustrate our procedure. The Canadian Journal of Statistics 40: 345–361; 2012 © 2012 Statistical Society of Canada
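A hedged sketch of the kind of penalized criterion involved (the authors' exact likelihood and weighting scheme may differ): writing the loading matrix as $\Lambda = (\boldsymbol{\lambda}_1,\dots,\boldsymbol{\lambda}_p)'$ with $\boldsymbol{\lambda}_i$ the loadings of observed variable $i$ on the factors, $\Psi$ the unique variances, and treating each $\boldsymbol{\lambda}_i$ as one group,

```latex
\min_{\Lambda,\,\Psi}\;\Bigl\{\, -\ell(\Lambda,\Psi)
  \;+\; \rho \sum_{i=1}^{p} w_i \,\lVert \boldsymbol{\lambda}_i \rVert_2 \Bigr\},
```

where $\ell$ is the factor-analysis log-likelihood, $\rho>0$ is the regularization parameter, and the weights $w_i$ give the weighted group lasso; an entire row $\boldsymbol{\lambda}_i$ shrinking to zero removes variable $i$ from the model.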

18.
An approach to the analysis of time-dependent ordinal quality score data from robust design experiments is developed and applied to an experiment from commercial horticultural research, using concepts of product robustness and longevity that are familiar to analysts in engineering research. A two-stage analysis is used to develop models describing the effects of a number of experimental treatments on the rate of post-sales product quality decline. The first stage uses a polynomial function on a transformed scale to approximate the quality decline for an individual experimental unit using derived coefficients and the second stage uses a joint mean and dispersion model to investigate the effects of the experimental treatments on these derived coefficients. The approach, developed specifically for an application in horticulture, is exemplified with data from a trial testing ornamental plants that are subjected to a range of treatments during production and home-life. The results of the analysis show how a number of control and noise factors affect the rate of post-production quality decline. Although the model is used to analyse quality data from a trial on ornamental plants, the approach developed is expected to be more generally applicable to a wide range of other complex production systems.

19.
The problem of selecting the normal population with the largest population mean when the populations have a common known variance is considered. A two-stage procedure is proposed which guarantees the same probability requirement using the indifference-zone approach as does the single-stage procedure of Bechhofer (1954). The two-stage procedure has the highly desirable property that the expected total number of observations required by the procedure is always less than the total number of observations required by the corresponding single-stage procedure, regardless of the configuration of the population means. The saving in expected total number of observations can be substantial, particularly when the configuration of the population means is favorable to the experimenter. The saving is accomplished by screening out “non-contending” populations in the first stage, and concentrating sampling only on “contending” populations in the second stage.

The two-stage procedure can be regarded as a composite one which uses a screening subset-type approach (Gupta (1956), (1965)) in the first stage, and an indifference-zone approach (Bechhofer (1954)) applied to all populations retained in the selected subset in the second stage. Constants to implement the procedure for various k and P* are provided, as are calculations giving the saving in expected total sample size when the two-stage procedure is used in place of the corresponding single-stage procedure.
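For intuition about the single-stage benchmark being improved upon (not the two-stage procedure itself, whose constants are those tabulated in the paper), here is a small Monte Carlo sketch under stated assumptions: common known variance, the rule "select the population with the largest sample mean", and the least-favourable slippage configuration with one mean exactly delta* above the rest.

```python
import numpy as np

def prob_correct_selection(n, k, delta_star, sigma=1.0, reps=200_000, seed=0):
    """Monte Carlo estimate of P(correct selection) for the single-stage rule
    'pick the population with the largest sample mean of n observations',
    under the slippage configuration: k-1 means at 0, best mean at delta_star."""
    rng = np.random.default_rng(seed)
    means = np.zeros(k)
    means[-1] = delta_star
    # Sample means from each of the k populations, repeated `reps` times
    xbar = rng.normal(loc=means, scale=sigma / np.sqrt(n), size=(reps, k))
    return float(np.mean(xbar.argmax(axis=1) == k - 1))

# Example: how P(CS) grows with n for k = 4 populations and delta* = 0.5*sigma
# for n in (10, 20, 40, 80):
#     print(n, prob_correct_selection(n, k=4, delta_star=0.5))
```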

20.
We propose a two-stage design for a single-arm clinical trial with an early stopping rule for futility. This design employs different endpoints to assess early stopping and efficacy. The early stopping rule is based on a criterion that can be determined more quickly than that for efficacy. The two criteria are also nested, in the sense that efficacy is a special case of, but usually not identical to, the early stopping endpoint. The design readily allows for planning in terms of statistical significance, power, expected sample size, and expected duration. The method is illustrated with a phase II design comparing rates of disease progression in elderly patients treated for lung cancer with rates found using a historical control. In this example, the early stopping rule is based on the number of patients who exhibit progression-free survival (PFS) at 2 months post-treatment follow-up. Efficacy is judged by the number of patients who have PFS at 6 months. We demonstrate that our design has expected sample size and power comparable with the Simon two-stage design but exhibits shorter expected duration under a range of useful parameter values.
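For comparison with the Simon two-stage benchmark mentioned above, a minimal sketch of its operating characteristics under a single binary endpoint (the proposed design uses different early-stopping and efficacy endpoints, so this is only the standard reference calculation; the design parameters in the example are hypothetical):

```python
from scipy.stats import binom

def simon_two_stage(p, n1, r1, n, r):
    """Operating characteristics of a standard Simon two-stage design.

    Stage 1: n1 patients; stop for futility if responses <= r1.
    Stage 2: continue to n patients; declare promising if total responses > r.
    Returns (P(early termination), P(declare promising), expected sample size).
    """
    pet = binom.cdf(r1, n1, p)                  # early-termination probability
    promising = sum(
        binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
        for x1 in range(r1 + 1, n1 + 1)
    )
    en = n1 + (1.0 - pet) * (n - n1)            # expected sample size
    return pet, promising, en

# Hypothetical example (n1=19, r1=4, n=54, r=15):
# type I error ~ simon_two_stage(0.25, 19, 4, 54, 15)[1]  (response rate under H0)
# power        ~ simon_two_stage(0.40, 19, 4, 54, 15)[1]  (response rate under H1)
```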
