Similar Documents (20 results)
1.
An approach to the analysis of time-dependent ordinal quality score data from robust design experiments is developed and applied to an experiment from commercial horticultural research, using concepts of product robustness and longevity that are familiar to analysts in engineering research. A two-stage analysis is used to develop models describing the effects of a number of experimental treatments on the rate of post-sales product quality decline. The first stage uses a polynomial function on a transformed scale to approximate the quality decline for each experimental unit, yielding derived coefficients; the second stage uses a joint mean and dispersion model to investigate the effects of the experimental treatments on these derived coefficients. The approach, developed specifically for an application in horticulture, is exemplified with data from a trial testing ornamental plants that are subjected to a range of treatments during production and home-life. The results of the analysis show how a number of control and noise factors affect the rate of post-production quality decline. Although the model is used to analyse quality data from a trial on ornamental plants, the approach developed is expected to be more generally applicable to a wide range of other complex production systems.
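A minimal sketch of the two-stage idea in this abstract, under invented data: stage one fits a per-unit polynomial to ordinal scores mapped onto a transformed scale, and stage two regresses the derived decline coefficients on a treatment factor (a plain least-squares stand-in for the joint mean and dispersion model; all names and the 1-9 score scale are illustrative assumptions).

```python
# Stage 1: per-unit polynomial fit of transformed quality scores over time.
# Stage 2: regress the derived decline coefficients on the treatment factor.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_times = 40, 8
times = np.arange(n_times, dtype=float)
treatment = rng.integers(0, 2, size=n_units)          # one binary control factor
# Simulated ordinal scores on a 1..9 scale, declining faster under treatment 1.
latent = 8.0 - (0.4 + 0.3 * treatment)[:, None] * times \
         + rng.normal(0, 0.5, (n_units, n_times))
scores = np.clip(np.round(latent), 1, 9)

# Stage 1: per-unit quadratic fit on a logit-type transformed scale.
transformed = np.log(scores / (10.0 - scores))        # maps 1..9 into the real line
coefs = np.array([np.polyfit(times, transformed[i], deg=2) for i in range(n_units)])
slopes = coefs[:, 1]                                   # linear decline coefficient

# Stage 2: model the derived slopes as a function of the treatment factor.
X = np.column_stack([np.ones(n_units), treatment])
beta, *_ = np.linalg.lstsq(X, slopes, rcond=None)
print("estimated treatment effect on decline rate:", beta[1])
```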

2.
Many split-plot×split-block (SPSB) type experiments used in agriculture, biochemistry or plant protection are designed to study new crop plant cultivars or chemical agents. In these experiments it is usually very important to compare test treatments with so-called control treatments. However, experimental material is often limited and does not allow use of a complete (orthogonal) SPSB design. In this paper we propose a non-orthogonal SPSB design. Two cases of the design are presented: one in which its incompleteness is connected with the crossed treatment structure only, and one in which it is connected with the nested treatment structure only. It is assumed that the factor levels connected with the incompleteness of the design are split into two groups: a set of test treatments and a set of control treatments. The method of construction involves applying augmented block designs to some of the factor levels. In modelling data obtained from such experiments, the structure of the experimental material and the appropriate randomization scheme for the different kinds of units before they enter the experiment are taken into account. For the analysis of the resulting randomization model, the approach typical of multistratum experiments with orthogonal block structure is adapted. The proposed statistical analysis of the resulting linear model includes estimation of parameters and testing of general and particular hypotheses defined by the (basic) treatment contrasts, with special reference to the notion of general balance.

3.
In a two-treatment trial, a two-sided test is often used to reach a conclusion. Usually we are interested in a two-sided test because there is no prior preference between the two treatments and we want a three-decision framework. When the standard control is just as good as the new experimental treatment (which has the same toxicity and cost), we accept both treatments. Only when the standard control is clearly worse or better than the new experimental treatment do we choose a single treatment. In this paper, we extend the concept of a two-sided test to the multiple-treatment trial where three or more treatments are involved. The procedure turns out to be a subset selection procedure; however, the theoretical framework and performance requirement differ from those of existing subset selection procedures. Two procedures (exclusion and inclusion) are developed here for the case of normal data with equal known variance. If the sample size is large, they can be applied with unknown variance, and to binomial data or survival data with random censoring.
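The paper's exclusion and inclusion procedures are not reproduced here, but the flavour of a subset selection rule for normal means with known common variance can be sketched as follows; the retention constant d is a hypothetical tuning parameter, not a value from the paper.

```python
# Gupta-style subset selection sketch: a treatment is retained if its sample
# mean is within d * sigma * sqrt(2/n) of the best observed mean.
import numpy as np

def select_subset(means, sigma, n, d=2.0):
    """Return indices of treatments not clearly worse than the observed best."""
    means = np.asarray(means, dtype=float)
    threshold = means.max() - d * sigma * np.sqrt(2.0 / n)
    return np.flatnonzero(means >= threshold)

# Example: three treatments, n = 25 observations each, sigma = 1 known.
print(select_subset([0.10, 0.15, 0.60], sigma=1.0, n=25))
```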

4.
In this article, we take up the experimental situation of a heteroscedastic one-way layout model in the presence of a set of controllable covariates. For the joint estimation of the elementary contrasts of a set of test treatments with a control and the effects of the covariates, sufficient conditions for the existence of an A-optimal design are identified. When these sufficient conditions are not met, we propose highly A-efficient designs. Methods of construction of A-optimal and highly A-efficient designs are discussed. For different values of the design parameters, the A-efficiencies of the proposed designs are tabulated for a comparative study.

5.
This paper considers the problem of estimation when one of a number of populations, assumed normal with known common variance, is selected on the basis of its having the largest observed mean. Conditional on selection of the population, the observed mean is a biased estimate of the true mean. This problem arises in the analysis of clinical trials in which selection is made between a number of experimental treatments that are compared with each other, either with or without an additional control treatment. Attempts to obtain approximately unbiased estimates in this setting have been proposed by Shen [2001. An improved method of evaluating drug effect in a multiple dose clinical trial. Statist. Medicine 20, 1913–1929] and Stallard and Todd [2005. Point estimates and confidence regions for sequential trials involving selection. J. Statist. Plann. Inference 135, 402–419]. This paper explores the problem in the simple setting in which two experimental treatments are compared in a single analysis. It is shown that in this case the estimate of Stallard and Todd is the maximum-likelihood estimate (m.l.e.), and this is compared with the estimate proposed by Shen. In particular, it is shown that the m.l.e. has infinite expectation whatever the true value of the mean being estimated may be. We show that there is no conditionally unbiased estimator, and propose a new family of approximately conditionally unbiased estimators, comparing these with the estimators suggested by Shen.
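A quick Monte Carlo makes the conditional bias concrete: when two arms have equal true means, the observed mean of whichever arm happens to look best overestimates the truth. The sample sizes below are illustrative.

```python
# Selection bias demo: the mean of the arm selected for having the larger
# sample mean overestimates the common true mean.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 0.0, 1.0, 30, 200_000
xbar = rng.normal(mu, sigma / np.sqrt(n), size=(reps, 2))  # sample means of two arms
selected = xbar.max(axis=1)                                # mean of the selected arm
print("true mean:", mu, " mean of selected arm:", selected.mean())
# With equal means, the upward bias is sigma / sqrt(pi * n), about 0.10 here.
```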

6.
In this paper a new class of designs involving sequences of treatments balanced for first residual effects is introduced. These designs require only t experimental units for 2t periods, t being the number of treatments to be tested. A unified method of constructing these designs for all values of t (≥2), along with an appropriate method of analysis, is presented. In addition, their efficiency relative to some well-known designs is investigated.
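Balance for first residual effects can be checked mechanically: every ordered pair of distinct treatments should occupy successive periods on the same unit equally often. The sketch below verifies this for a small illustrative design (not the paper's construction).

```python
# Count how often each ordered pair of treatments occurs in successive
# periods on the same unit, and check the balance property.
from collections import Counter

def carryover_counts(design):
    """design: list of treatment sequences, one per experimental unit."""
    pairs = Counter()
    for seq in design:
        for prev, cur in zip(seq, seq[1:]):
            pairs[(prev, cur)] += 1
    return pairs

design = ["ABBA", "BAAB"]          # t = 2 treatments, t units, 2t = 4 periods
counts = carryover_counts(design)
distinct = {p: c for p, c in counts.items() if p[0] != p[1]}
print(counts)
print("balanced for first residual effects:", len(set(distinct.values())) == 1)
```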

7.
A number of authors have proposed clinical trial designs involving the comparison of several experimental treatments with a control treatment in two or more stages. At the end of the first stage, the most promising experimental treatment is selected, and all other experimental treatments are dropped from the trial. Provided it is good enough, the selected experimental treatment is then compared with the control treatment in one or more subsequent stages. The analysis of data from such a trial is problematic because of the treatment selection and the possibility of stopping at interim analyses. These aspects lead to bias in the maximum-likelihood estimate of the advantage of the selected experimental treatment over the control and to inaccurate coverage for the associated confidence interval. In this paper, we evaluate the bias of the maximum-likelihood estimate and propose a bias-adjusted estimate. We also propose an approach to the construction of a confidence region for the vector of advantages of the experimental treatments over the control based on an ordering of the sample space. These regions are shown to have accurate coverage, although they are also shown to be necessarily unbounded. Confidence intervals for the advantage of the selected treatment are obtained from the confidence regions and are shown to have more accurate coverage than the standard confidence interval based upon the maximum-likelihood estimate and its asymptotic standard error.
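A small simulation in the spirit of this abstract, for a hypothetical drop-the-loser design with normal outcomes: the maximum-likelihood estimate of the selected arm's advantage pools both stages, and its bias shows up even when all arms are truly equal. The sample sizes, the number of arms, and the absence of early stopping are illustrative simplifications.

```python
# Two-stage select-then-continue design: stage 1 runs k arms vs control,
# the apparently best arm continues, and the MLE pools both stages.
import numpy as np

rng = np.random.default_rng(2)
k, n1, n2, reps = 3, 50, 100, 50_000
est = np.empty(reps)
for r in range(reps):
    ctrl1 = rng.normal(0, 1, n1).mean()
    exp1 = np.array([rng.normal(0, 1, n1).mean() for _ in range(k)])  # all arms equal
    best = int(np.argmax(exp1 - ctrl1))      # selection at the interim analysis
    ctrl2 = rng.normal(0, 1, n2).mean()
    exp2 = rng.normal(0, 1, n2).mean()
    # MLE of the selected arm's advantage pools both stages.
    est[r] = ((n1 * exp1[best] + n2 * exp2) - (n1 * ctrl1 + n2 * ctrl2)) / (n1 + n2)
print("true advantage: 0, mean MLE of selected arm:", est.mean())
```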

8.
In biological experiments, multiple comparison test procedures may lead to a statistically significant difference in means. Sometimes, however, the difference is not worthy of attention given the inherent variation in the characteristic: the magnitude of the change in the characteristic under study after receiving the treatment is small, less than the natural biological variation. It then becomes the job of the statistician to design a test that removes this paradox, so that statistical significance coincides with biological significance. The present paper develops a multiple comparison test for comparing two treatments with a control by incorporating within-person variation into interval hypotheses. Assuming a common (unknown) variance for the three groups (control and two treatments) and taking the width of the interval to be the (known) intra-individual variation, the distribution of the test statistic is obtained as a bivariate non-central t. A level-α test procedure is designed, and a table of critical values for carrying out the test is constructed for α = 0.05. Exact powers are computed for various small sample sizes and parameter values; the test is powerful for all values of the parameters. The test was used to detect differences in zinc absorption for two cereal diets compared with a control diet. After application of our test, we arrived at the conclusion of homogeneity of the diets with the control diet; Dunnett's procedure, when applied to the same data, concluded otherwise. The new test can also be applied to other data situations in biology, medicine and agriculture.
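The bivariate non-central t critical values tabulated in the paper are not reproduced here; the following sketch conveys the interval-hypothesis idea with ordinary per-comparison t tests against a known margin delta and a Bonferroni correction, which is a cruder stand-in for the exact procedure.

```python
# Interval-hypothesis sketch: declare a treatment biologically different from
# control only if |mean difference| credibly exceeds the known margin delta.
import numpy as np
from scipy import stats

def interval_test(control, treatments, delta, alpha=0.05):
    results = []
    m = len(treatments)
    for y in treatments:
        n0, n1 = len(control), len(y)
        sp2 = (((n0 - 1) * control.var(ddof=1) + (n1 - 1) * y.var(ddof=1))
               / (n0 + n1 - 2))
        se = np.sqrt(sp2 * (1 / n0 + 1 / n1))
        diff = abs(y.mean() - control.mean())
        # Test H0: |mu_i - mu_0| <= delta against H1: |mu_i - mu_0| > delta.
        t = (diff - delta) / se
        p = stats.t.sf(t, df=n0 + n1 - 2)
        results.append(p < alpha / m)        # Bonferroni over m comparisons
    return results

rng = np.random.default_rng(3)
ctrl = rng.normal(10.0, 1.0, 12)
trts = [rng.normal(10.3, 1.0, 12), rng.normal(12.5, 1.0, 12)]
print(interval_test(ctrl, trts, delta=1.0))  # expect [False, True] in most runs
```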

9.
In this paper, a one-stage multiple comparison procedure with the average for exponential location parameters, based on doubly censored samples under heteroscedasticity, is proposed. The resulting intervals can be used to identify a subset which includes all no-worse-than-the-average treatments in an experimental design, and to identify better-than-the-average, worse-than-the-average and not-much-different-from-the-average products in agriculture, emerging markets, and the pharmaceutical industry. The critical values are tabulated for practical use, and a simulation study of confidence length and coverage probability is carried out. Finally, an example comparing four drugs in the treatment of leukemia is given to demonstrate the proposed procedure.

10.
In this article, a multiple three-decision procedure is proposed to classify p (≥2) treatments as better or worse than the best of q (≥2) control treatments in a one-way layout. Critical constants required for the implementation of the proposed procedure are tabulated for some pre-specified values of the probability of no misclassification. The power function of the proposed procedure is defined, and the common sample size necessary to guarantee various pre-specified power levels is tabulated under two optimal allocation schemes. Finally, the implementation of the proposed methodology is demonstrated through numerical examples based on real-life data.

11.
Experimental designs in which treatments are applied to the experimental units, one at a time, in sequences over a number of periods have been used in several scientific investigations and are known as repeated measurements designs. Besides direct effects, these designs allow estimation of, and adjustment for, residual effects of treatments. Assuming the existence of first-order residual effects of treatments, Hedayat & Afsarinejad (1975) gave a method of constructing minimal balanced repeated measurements [RM(v,n,p)] designs for v treatments using n=2v experimental units over p [=(v+1)/2] periods when v is a prime or prime power. Here, a general method of construction of these designs for all odd v is given, along with an outline of their analysis. In terms of the variances of estimated elementary contrasts between treatment effects (direct and residual), these designs are seen to be partially variance balanced based on the circular association scheme.

12.
Seamless phase II/III clinical trials are conducted in two stages with treatment selection at the first stage. In the first stage, patients are randomized to a control or one of k > 1 experimental treatments. At the end of this stage, interim data are analysed, and a decision is made concerning which experimental treatment should continue to the second stage. If the primary endpoint is observable only after some period of follow-up, at the interim analysis data may be available on some early outcome for a larger number of patients than those for whom the primary endpoint is available. These early endpoint data can thus be used for treatment selection. For two previously proposed approaches, the power has been shown to be greater for one or the other method depending on the true treatment effects and correlations. We propose a new approach that builds on the previously proposed approaches and uses data available at the interim analysis to estimate these parameters and then, on the basis of these estimates, chooses the treatment selection method with the highest probability of correctly selecting the most effective treatment. This method is shown to perform well compared with the two previously described methods for a wide range of true parameter values; its performance is in most cases similar to, and in some cases better than, that of either of the two previously proposed methods.

13.
Umbrella trials are an innovative trial design where different treatments are matched with subtypes of a disease, with the matching typically based on a set of biomarkers. Consequently, when patients can be positive for more than one biomarker, they may be eligible for multiple treatment arms. In practice, different approaches could be applied to allocate patients who are positive for multiple biomarkers to treatments. However, to date there has been little exploration of how these approaches compare statistically. We conduct a simulation study to compare five approaches to handling treatment allocation in the presence of multiple biomarkers – equal randomisation; randomisation with fixed probability of allocation to control; Bayesian adaptive randomisation (BAR); constrained randomisation; and hierarchy of biomarkers. We evaluate these approaches under different scenarios in the context of a hypothetical phase II biomarker-guided umbrella trial. We define the pairings representing the pre-trial expectations on efficacy as linked pairs, and the other biomarker-treatment pairings as unlinked. The hierarchy and BAR approaches have the highest power to detect a treatment-biomarker linked interaction. However, the hierarchy procedure performs poorly if the pre-specified treatment-biomarker pairings are incorrect. The BAR method allocates a higher proportion of patients who are positive for multiple biomarkers to promising treatments when an unlinked interaction is present. In most scenarios, the constrained randomisation approach best balances allocation to all treatment arms. Pre-specification of an approach to deal with treatment allocation in the presence of multiple biomarkers is important, especially when overlapping subgroups are likely.
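One of the five approaches, Bayesian adaptive randomisation, is easy to sketch for a patient who is positive for several biomarkers: allocation probabilities over the eligible arms are made proportional to the posterior probability that each arm beats a reference response rate. The Beta(1,1) priors, the reference rate, and the interim counts below are illustrative assumptions, not the trial's specification.

```python
# BAR sketch: allocate among eligible arms with probability proportional to
# the posterior probability that each arm's response rate exceeds p_ref.
import numpy as np

rng = np.random.default_rng(4)

def bar_probs(successes, failures, p_ref=0.2, draws=10_000):
    """Posterior Pr(response rate > p_ref) per eligible arm, normalised."""
    post = np.array([
        (rng.beta(1 + s, 1 + f, draws) > p_ref).mean()
        for s, f in zip(successes, failures)
    ])
    return post / post.sum()

# Patient eligible for three arms, with interim (successes, failures) counts:
probs = bar_probs(successes=[8, 3, 5], failures=[12, 17, 15])
arm = rng.choice(len(probs), p=probs)
print("allocation probabilities:", np.round(probs, 3), "-> assigned arm", arm)
```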

14.
In this article, an extensive Monte Carlo simulation study is conducted to evaluate and compare nonparametric multiple comparison tests under violations of the classical analysis of variance assumptions. The simulation space of the Monte Carlo study comprises 288 different combinations of balanced and unbalanced sample sizes, number of groups, treatment effects, levels of heterogeneity of variances, dependence between subgroup levels, and skewed error distributions under a single-factor experimental design. Over this large simulation space, we present a detailed analysis of the effects of assumption violations on the performance of nonparametric multiple comparison tests in terms of three error measures and four power measures. The observations of this study help in choosing the optimal nonparametric test for the requirements and conditions of the experiment at hand. When some of the assumptions of analysis of variance are violated and the number of groups is small, the stepwise Steel-Dwass procedure with Holm's approach is appropriate for controlling Type I error at the desired level; Dunn's method should be employed for a greater number of groups. When subgroups are unbalanced and the number of groups is small, Nemenyi's procedure with Duncan's approach produces high power values. Conover's procedure provides high power values with a small number of unbalanced groups or with a greater number of balanced or unbalanced groups, but it is unable to control Type I error rates.
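One of the compared procedures, Dunn's pairwise rank test with Holm's step-down adjustment, can be sketched directly from its textbook definition (no tie correction is applied here, and the data are invented):

```python
# Dunn's pairwise rank test on jointly ranked data, with Holm's step-down
# multiplicity adjustment over all pairwise comparisons.
import numpy as np
from scipy import stats

def dunn_holm(groups, alpha=0.05):
    all_x = np.concatenate(groups)
    ranks = stats.rankdata(all_x)
    N = len(all_x)
    idx, mean_ranks, ns = 0, [], []
    for g in groups:                          # mean rank per group
        mean_ranks.append(ranks[idx:idx + len(g)].mean())
        ns.append(len(g))
        idx += len(g)
    pvals, pairs = [], []
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            se = np.sqrt(N * (N + 1) / 12.0 * (1.0 / ns[i] + 1.0 / ns[j]))
            z = (mean_ranks[i] - mean_ranks[j]) / se
            pvals.append(2 * stats.norm.sf(abs(z)))
            pairs.append((i, j))
    order = np.argsort(pvals)                 # Holm: smallest p first
    m, reject = len(pvals), set()
    for step, k in enumerate(order):
        if pvals[k] <= alpha / (m - step):
            reject.add(pairs[k])
        else:
            break                             # retain this and all larger p-values
    return [(pairs[k], pvals[k], pairs[k] in reject) for k in order]

rng = np.random.default_rng(5)
g = [rng.normal(0, 1, 15), rng.normal(0, 1, 15), rng.normal(1.5, 1, 15)]
for pair, p, rej in dunn_holm(g):
    print(pair, round(p, 4), "reject" if rej else "retain")
```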

15.
When testing treatment effects in multi-arm clinical trials, the Bonferroni method or the method of Simes (1986) is used to adjust for the multiple comparisons. When control of the family-wise error rate is required, these methods are combined with the closed testing principle of Marcus et al. (1976). Under weak assumptions, the resulting p-values all give rise to valid tests provided that the basic test used for each treatment is valid. However, standard tests can be far from valid, especially when the endpoint is binary and when sample sizes are unbalanced, as is common in multi-arm clinical trials. This paper looks at the relationship between size deviations of the component test and size deviations of the multiple comparison test. The conclusion is that multiple comparison tests are as imperfect as the basic tests at nominal size α/m, where m is the number of treatments. This admittedly not unexpected conclusion implies that these methods should only be used when the component test is very accurate at small nominal sizes. For binary endpoints, this suggests use of the parametric bootstrap test. All these conclusions are supported by a detailed numerical study.
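A parametric bootstrap test for a binary endpoint, of the kind suggested for the component test, can be sketched by simulating the null distribution of the difference in proportions at the pooled response rate; the counts below are illustrative.

```python
# Parametric bootstrap two-sample test for binary outcomes: simulate the
# null distribution of the difference in proportions at the pooled rate.
import numpy as np

rng = np.random.default_rng(6)

def bootstrap_p(x1, n1, x0, n0, reps=100_000):
    p_pool = (x1 + x0) / (n1 + n0)
    obs = x1 / n1 - x0 / n0
    sim1 = rng.binomial(n1, p_pool, reps) / n1
    sim0 = rng.binomial(n0, p_pool, reps) / n0
    return (np.abs(sim1 - sim0) >= abs(obs)).mean()

# Heavily unbalanced arms, as is common in multi-arm trials:
print(bootstrap_p(x1=9, n1=30, x0=40, n0=300))
```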

16.
The experimental design literature has produced a wide range of algorithms optimizing estimator variance for linear models where the design space is finite or a convex polytope, but these methods have difficulty handling nonlinear constraints or constraints over multiple treatments. This paper presents Newton-type algorithms to compute exact optimal designs in models with continuous and/or discrete regressors, where the set of feasible treatments is defined by nonlinear constraints. We carry out numerical comparisons with other state-of-the-art methods to show the performance of this approach.
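The authors' Newton-type algorithms are not reproduced here, but the task they address can be illustrated with a generic nonlinear-programming stand-in: choose the points of an exact design to minimise the A-criterion subject to a nonlinear feasibility constraint on each treatment.

```python
# Exact A-optimal design sketch via SciPy's SLSQP: pick n design points
# (x1, x2) minimising trace((X'X)^-1) for a first-order linear model, with
# every treatment constrained to the unit disc x1^2 + x2^2 <= 1.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

n = 6  # number of runs in the exact design

def a_criterion(flat):
    pts = flat.reshape(n, 2)
    X = np.column_stack([np.ones(n), pts])       # intercept + two regressors
    M = X.T @ X
    if np.linalg.det(M) < 1e-10:
        return 1e10                              # penalise singular designs
    return np.trace(np.linalg.inv(M))

con = NonlinearConstraint(lambda f: (f.reshape(n, 2) ** 2).sum(axis=1),
                          -np.inf, 1.0)          # each point inside the unit disc
rng = np.random.default_rng(7)
res = minimize(a_criterion, 0.5 * rng.normal(size=2 * n),
               method="SLSQP", constraints=[con])
print("A-criterion value:", res.fun)
print("design points:\n", np.round(res.x.reshape(n, 2), 3))
```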

17.
In this article, two-stage procedures for multiple comparisons with the average for location parameters of two-parameter exponential distributions under heteroscedasticity, including one- and two-sided confidence intervals, are proposed. These intervals can be used to identify a subset which includes all no-worse-than-the-average treatments in an experimental design, and to identify better-than-the-average, worse-than-the-average and not-much-different-from-the-average products in agriculture, the stock market, medical research, and auto models. An upper bound on the critical values is obtained using the techniques given in Lam (Proceedings of the Second International Advanced Seminar/Workshop on Inference Procedures Associated with Statistical Ranking and Selection, Sydney, Australia, August 1987; Comm. Statist. Simulation Comput. B17(3) (1988) 55). These approximate critical values are shown to give better results than the approximate critical values based on the Bonferroni inequality that are also developed in this paper. An example of comparing four drugs in the treatment of leukemia is given to demonstrate the proposed methodology.

18.
The analysis of clinical trials aiming to show symptomatic benefits is often complicated by the ethical requirement for rescue medication when the disease state of patients worsens. In type 2 diabetes trials, patients receive glucose-lowering rescue medications continuously for the remaining trial duration if one of several markers of glycemic control exceeds pre-specified thresholds. This may mask differences in glycemic values between treatment groups, because rescue will occur more frequently in less effective treatment groups. Traditionally, the last pre-rescue-medication value was carried forward and analyzed as the end-of-trial value. The deficits of such simplistic single imputation approaches are increasingly recognized by regulatory authorities and trialists. We discuss alternative approaches and evaluate them through a simulation study. When the estimand of interest is the effect attributable to the treatments initially assigned at randomization, our recommendation for estimation and hypothesis testing is to treat data after meeting rescue criteria as deterministically 'missing' at random, because initiation of rescue medication is determined by observed in-trial values. An appropriate imputation of values after meeting rescue criteria is then possible either directly through multiple imputation or implicitly with a repeated measures model. Crucially, one needs to jointly impute or model all markers of glycemic control that can lead to the initiation of rescue medication. An alternative for hypothesis testing only is the use of rank tests in which outcomes from patients requiring rescue medication are ranked worst and non-rescued patients are ranked according to final visit values; however, an appropriate ranking of unobserved values may be controversial.
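The rank-test alternative mentioned at the end of the abstract is straightforward to sketch: rescued patients receive a worst possible score, non-rescued patients keep their final visit value, and the composite scores feed a rank-sum test. The data and the convention that lower values are better are illustrative assumptions.

```python
# Composite-rank analysis: rescued patients are ranked worst, the rest by
# their final visit value; the scores feed a Wilcoxon rank-sum test.
import numpy as np
from scipy import stats

def composite_scores(final_values, rescued, worst=np.inf):
    """Replace the outcomes of rescued patients with a worst possible score."""
    scores = np.asarray(final_values, dtype=float).copy()
    scores[np.asarray(rescued, dtype=bool)] = worst
    return scores

rng = np.random.default_rng(8)
trt = composite_scores(rng.normal(7.0, 1.0, 40), rng.random(40) < 0.10)
ctl = composite_scores(rng.normal(7.8, 1.0, 40), rng.random(40) < 0.30)
print(stats.mannwhitneyu(trt, ctl, alternative="two-sided"))
```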

19.
Randomized complete block designs (RCBDs) are among the most popular block designs for comparing a set of experimental treatments. The question of this design's effectiveness when one of the treatments is a control is examined here. Optimality ranges are established for the RCBD in terms of the strength of interest in control comparisons. It is found that if the control treatment is of secondary interest, the RCBD, when not best, is typically near best. This is not so when comparisons with the control are of greater interest than those among the other treatments.

20.
To increase the efficiency of comparisons between treatments in clinical trials, we may consider the use of a multiple matching design, in which each patient receiving the experimental treatment is matched with more than one patient receiving the standard treatment. To assess the efficacy of the experimental treatment, the risk ratio (RR) of patient responses between the two treatments is one of the most commonly used measures. Because the probability of patient response in a clinical trial is often not small, the odds ratio (OR), whose practical interpretation is not easily understood, cannot approximate the RR well. Thus, the sample size formulae in terms of the OR for case-control studies with multiple matched controls per case are of limited use here. In this paper, we develop three sample size formulae based on the RR for randomized trials with multiple matching, and propose a test statistic for testing the equality of RRs under multiple matching. On the basis of Monte Carlo simulation, we evaluate the performance of the proposed test statistic with respect to Type I error. To evaluate the accuracy and usefulness of the three sample size formulae, we further calculate their simulated powers and compare them with those of a sample size formula that ignores matching and a sample size formula based on the OR for multiple matching published elsewhere. Finally, we include an example employing the multiple matching design, on the use of supplemental ascorbate in the supportive treatment of terminal cancer patients, to illustrate the use of these formulae.
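The paper's three formulae for multiple matching are not reproduced here, but the baseline they are compared against, a standard sample size calculation on the log-RR scale that ignores matching, can be sketched as follows.

```python
# Standard sample size per arm for detecting a risk ratio between two
# independent arms, using the normal approximation on the log-RR scale.
import numpy as np
from scipy import stats

def n_per_arm_for_rr(p0, rr, alpha=0.05, power=0.80):
    p1 = rr * p0
    za = stats.norm.ppf(1 - alpha / 2)
    zb = stats.norm.ppf(power)
    var_log_rr = (1 - p1) / p1 + (1 - p0) / p0   # per-subject variance terms
    return int(np.ceil((za + zb) ** 2 * var_log_rr / np.log(rr) ** 2))

# Example: control response rate 0.4, target RR = 1.25, 80% power.
print(n_per_arm_for_rr(p0=0.40, rr=1.25))
```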
