Similar Documents
20 similar documents found (search time: 46 ms)
1.
A new series of multi-factor balanced block designs is introduced. Each of these designs has the following properties: (i) each of its k − 1 treatment factors is disposed in a cyclic or multi-cyclic balanced incomplete block design with parameters (v, b, r, k, λ) = (a(k − 1) + 1, a²(k − 1) + a, ak, k, k) (a > 1); (ii) the incidence of any one of the treatment factors on any other is balanced; and (iii) after adjustment for blocks only, the relationship between any two of the treatment factors is that of adjusted orthogonality. The treatment factors are thus orthogonal to one another in the within-blocks stratum of the analysis of variance. The designs provide a benchmark with which other designs may be compared.
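As a quick sanity check on the parameter family above, the standard necessary BIBD identities bk = vr and λ(v − 1) = r(k − 1) can be verified numerically; the helper names below are our own, not from the paper:

```python
def bibd_params(a, k):
    """Return (v, b, r, k, lam) for the family (a(k-1)+1, a^2(k-1)+a, ak, k, k), a > 1."""
    v = a * (k - 1) + 1
    b = a * a * (k - 1) + a
    r = a * k
    lam = k
    return v, b, r, k, lam

def is_consistent(v, b, r, k, lam):
    # standard necessary conditions for a BIBD
    return b * k == v * r and lam * (v - 1) == r * (k - 1)

# the identities hold across a range of (a, k) values
for a in range(2, 6):
    for k in range(3, 8):
        assert is_consistent(*bibd_params(a, k))
```

For example, a = 2 and k = 3 gives the design (v, b, r, k, λ) = (5, 10, 6, 3, 3).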

2.
Abstract

Personalized medicine asks if a new treatment will help a particular patient, rather than if it improves the average response in a population. Without a causal model to distinguish these questions, interpretational mistakes arise. These mistakes are seen in an article by Demidenko that recommends the "D-value," which is the probability that a randomly chosen person from the new-treatment group has a higher value for the outcome than a randomly chosen person from the control-treatment group. The abstract states "The D-value has a clear interpretation as the proportion of patients who get worse after the treatment," with similar assertions appearing later. We show these statements are incorrect because they require assumptions about the potential outcomes which are neither testable in randomized experiments nor plausible in general. The D-value will not equal the proportion of patients who get worse after treatment if (as expected) those outcomes are correlated. Independence of potential outcomes is unrealistic and eliminates any personalized treatment effects; with dependence, the D-value can indicate that treatment is better than control even though most patients are harmed by the treatment. Thus, D-values are misleading for personalized medicine. To prevent misunderstandings, we advise incorporating causal models into basic statistics education.
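The gap between the D-value and the proportion harmed can be seen in a small simulation (ours, not from the article): give every patient the same small improvement, so nobody gets worse, yet the between-arm D-value stays near one half:

```python
import random
random.seed(0)

n = 50_000
y0 = [random.gauss(0.0, 1.0) for _ in range(n)]   # control potential outcomes
y1 = [y + 0.1 for y in y0]                        # every patient improves by 0.1
# within-person: proportion who actually get worse under treatment
prop_harmed = sum(a < b for a, b in zip(y1, y0)) / n
# D-value style: compare draws from the two arms taken from different subjects
d_value = sum(y1[i] > y0[n - 1 - i] for i in range(n)) / n
```

Here prop_harmed is exactly 0, while the D-value is about Φ(0.1/√2) ≈ 0.53, so reading 1 − D as "the proportion who get worse" would wrongly suggest roughly 47% of patients are harmed.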

3.
When examining the effect of treatment A versus B, there may be a choice between a parallel group design, an AA/BB design, an AB/BA cross-over and Balaam's design. In the case of a linear mixed effects regression, we examine, starting from a flexible function of the costs involved and allowing for subject dropout, which design is most efficient in estimating this effect. For no carry-over, the AB/BA cross-over design is most efficient as long as the dropout rate at the second measurement does not exceed /(1 + ρ), ρ being the intraclass correlation. For steady-state carry-over, depending on the costs involved, the dropout rate and ρ, either a parallel design or an AA/BB design is most efficient. For types of carry-over that allow for self carry-over, interest is in the direct treatment effect plus the self carry-over effect, with either an AA/BB or Balaam's design being most efficient. In case of insufficient knowledge of the dropout rate or ρ, a maximin strategy is devised: choose the design that minimizes the maximum variance of the treatment estimator. Such maximin designs are derived for each type of carry-over. Copyright © 2012 John Wiley & Sons, Ltd.

4.
We consider multiple comparison test procedures among treatment effects in a randomized block design. We propose closed testing procedures based on maximum values of some two-sample t test statistics and on F test statistics. It is shown that the proposed procedures are more powerful than single-step procedures and the REGW (Ryan/Einot–Gabriel/Welsch)-type tests. Next, we consider the randomized block design under simple ordered restrictions of treatment effects. We propose closed testing procedures based on maximum values of two-sample one-sided t test statistics and on Bartholomew's statistics for all pairwise comparisons of treatment effects. Although single-step multiple comparison procedures are commonly utilized, their power is low for a large number of groups. The closed testing procedures stated in the present article are more powerful than the single-step procedures. Simulation studies are performed under the null hypothesis and some alternative hypotheses. In these studies, the proposed procedures show good performance.

5.
The Dirichlet process prior allows flexible nonparametric mixture modeling. The number of mixture components is not specified in advance and can grow as new data arrive. However, analyses based on the Dirichlet process prior are sensitive to the choice of the parameters, including an infinite-dimensional distributional parameter G0. Most previous applications have either fixed G0 as a member of a parametric family or treated G0 in a Bayesian fashion, using parametric prior specifications. In contrast, we have developed an adaptive nonparametric method for constructing smooth estimates of G0. We combine this method with a technique for estimating α, the other Dirichlet process parameter, that is inspired by an existing characterization of its maximum-likelihood estimator. Together, these estimation procedures yield a flexible empirical Bayes treatment of Dirichlet process mixtures. Such a treatment is useful in situations where smooth point estimates of G0 are of intrinsic interest, or where the structure of G0 cannot be conveniently modeled with the usual parametric prior families. Analysis of simulated and real-world datasets illustrates the robustness of this approach.
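For intuition about why estimating the concentration parameter α matters alongside G0, here is a minimal Chinese-restaurant-process sketch (an illustration of the Dirichlet process prior, not the authors' estimator): the number of mixture components that appear grows with α, so a poor value of α distorts the inferred clustering:

```python
import random

def crp_partition(n, alpha, rng):
    """Sample a partition of n items from CRP(alpha); return the cluster sizes."""
    sizes = []
    for i in range(n):
        # open a new cluster with probability alpha / (i + alpha),
        # otherwise join an existing cluster proportionally to its size
        u = rng.random() * (i + alpha)
        if u < alpha or not sizes:
            sizes.append(1)
        else:
            u -= alpha
            k = 0
            while u >= sizes[k]:
                u -= sizes[k]
                k += 1
            sizes[k] += 1
    return sizes

rng = random.Random(42)
small = crp_partition(1000, 0.5, rng)    # few clusters expected
large = crp_partition(1000, 10.0, rng)   # many clusters expected
```

The expected number of clusters is roughly α·log(1 + n/α), so the second draw typically has an order of magnitude more clusters than the first.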

6.
Block designs are widely used in experimental situations where the experimental units are heterogeneous. The blocked general minimum lower order confounding (B-GMC) criterion is suitable for selecting optimal block designs when the experimenters have prior information on the importance ordering of the treatment factors. This paper constructs B-GMC 2^(n−m):2^r designs with 5 × 2^l/16 + 1 ≤ n − (N − 2^l) < 2^(l−1) for some l (r + 1 ≤ l ≤ n − m − 1), where 2^(n−m):2^r denotes a two-level regular block design with N = 2^(n−m) runs, n treatment factors, and 2^r blocks. With a suitable choice of the blocking factors, each B-GMC block design has a common specific structure. Some examples illustrate the simple and effective construction method.

7.
A generalization of step-up and step-down multiple test procedures is proposed. This step-up-down procedure is useful when the objective is to reject a specified minimum number, q, out of a family of k hypotheses. If this basic objective is met at the first step, then it proceeds in a step-down manner to see if more than q hypotheses can be rejected. Otherwise it proceeds in a step-up manner to see if some number less than q hypotheses can be rejected. The usual step-down procedure is the special case where q = 1, and the usual step-up procedure is the special case where q = k. Analytical and numerical comparisons between the powers of the step-up-down procedures with different choices of q are made to see how these powers depend on the actual number of false hypotheses. Examples of application include comparing the efficacy of a treatment to a control for multiple endpoints and testing the sensitivity of a clinical trial for comparing the efficacy of a new treatment with a set of standard treatments.
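The verbal description above can be sketched directly in code. The critical constants crit are taken as given (in practice they depend on q and on the joint distribution of the test statistics); this is an illustration of the stepping logic only:

```python
def step_up_down(pvals, crit, q):
    """Generalized step-up-down procedure (sketch).

    pvals: p-values of the k hypotheses; crit: nondecreasing critical
    values c[0] <= ... <= c[k-1], assumed supplied; q: target minimum
    number of rejections (1 <= q <= k). Returns how many of the ordered
    hypotheses are rejected.
    """
    p = sorted(pvals)
    k = len(p)
    if p[q - 1] <= crit[q - 1]:
        # objective met: step down through larger p-values to reject more
        r = q
        while r < k and p[r] <= crit[r]:
            r += 1
        return r
    # objective not met: step up through smaller p-values
    for i in range(q - 2, -1, -1):
        if p[i] <= crit[i]:
            return i + 1
    return 0
```

With q = 1 this reduces to the usual step-down scan, and with q = k to the usual step-up scan.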

8.
ABSTRACT

Neighbor designs are recommended for cases where the performance of a treatment is affected by the neighboring treatments, as in biometrics and agriculture. In this paper we construct two new series of non-binary partially neighbor-balanced designs for v = 2n and v = 2n + 1 treatments, respectively. The blocks in the designs are non-binary and circular, but no treatment is ever a neighbor to itself. The designs proposed here are partially balanced in terms of nearest neighbors. No such series are known in the literature.
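The defining property, that blocks are circular and non-binary yet no treatment is ever its own neighbour, is easy to check mechanically. The toy blocks below are our own, not the paper's series:

```python
from collections import Counter

def neighbour_pairs(block):
    """Unordered nearest-neighbour pairs of a circular block."""
    k = len(block)
    return [frozenset({block[i], block[(i + 1) % k]}) for i in range(k)]

def no_self_neighbours(blocks):
    # a self-neighbour pair collapses to a singleton frozenset
    return all(len(pair) == 2 for b in blocks for pair in neighbour_pairs(b))

# toy circular blocks on 5 treatments; the last block is non-binary
# (treatment 0 appears twice) yet 0 never neighbours itself
blocks = [(0, 1, 2, 3, 4), (0, 2, 4, 1, 3), (0, 1, 0, 2, 3)]
counts = Counter(p for b in blocks for p in neighbour_pairs(b))
```

The Counter tallies how often each unordered pair occurs as neighbours, which is what "partially balanced in terms of nearest neighbors" constrains.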

9.
Suppose there are k1 (k1 ≥ 1) test treatments that we wish to compare with k2 (k2 ≥ 1) control treatments. Assume that the observations from the ith test treatment and the jth control treatment follow two-parameter exponential distributions E(μi, θ) and E(νj, θ), where θ is a common scale parameter and μi and νj are the location parameters of the ith test and the jth control treatment, respectively, i = 1, …, k1; j = 1, …, k2. In this paper, simultaneous one-sided and two-sided confidence intervals are proposed for all k1k2 differences μi − νj between the test treatment and control treatment location parameters, and the required critical points are provided. Discussions of multiple comparisons of all test treatments with the best control treatment and an optimal sample size allocation are given. Finally, it is shown that the critical points obtained can be used to construct simultaneous confidence intervals for Pareto distribution location parameters.

10.
The indirect mechanism of action of immunotherapy causes a delayed treatment effect, producing delayed separation of the survival curves between the treatment groups, and violates the proportional hazards assumption. Therefore, using the log-rank test in immunotherapy trial design could result in a severe loss of efficiency. Although few statistical methods are available for immunotherapy trial designs that incorporate a delayed treatment effect, recently Ye and Yu proposed the use of a maximin efficiency robust test (MERT) for the trial design. The MERT is a weighted log-rank test that puts less weight on early events and full weight after the delayed period. However, the weight function of the MERT involves an unknown function that has to be estimated from historical data. Here, for simplicity, we propose the use of an approximated maximin test, the V0 test, which is the sum of the log-rank test for the full data set and the log-rank test for the data beyond the lag time point. The V0 test fully uses the trial data and is more efficient than the log-rank test when a lag exists, with relatively little efficiency loss when no lag exists. The sample size formula for the V0 test is derived. Simulations are conducted to compare the performance of the V0 test with existing tests. A real trial is used to illustrate cancer immunotherapy trial design with a delayed treatment effect.
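A rough sketch of the V0 idea, a log-rank statistic on the full data plus one on the data beyond the lag, is given below; the standardisation and risk-set handling are our simplifications, not the authors' exact formulation:

```python
import math

def logrank_z(times, events, groups):
    """Standardised two-sample log-rank statistic (groups coded 0/1)."""
    u, var = 0.0, 0.0
    for t in sorted({ti for ti, e in zip(times, events) if e}):
        at_risk = [g for ti, g in zip(times, groups) if ti >= t]
        n = len(at_risk)
        n1 = sum(at_risk)
        d = sum(1 for ti, e in zip(times, events) if e and ti == t)
        d1 = sum(1 for ti, e, g in zip(times, events, groups)
                 if e and ti == t and g == 1)
        u += d1 - d * n1 / n                       # observed minus expected
        if n > 1:
            var += d * (n - d) / (n - 1) * n1 * (n - n1) / n ** 2
    return u / math.sqrt(var) if var > 0 else 0.0

def v0_stat(times, events, groups, lag):
    """Sum of the full-data log-rank statistic and the post-lag one."""
    late = [(t, e, g) for t, e, g in zip(times, events, groups) if t > lag]
    z_full = logrank_z(times, events, groups)
    z_late = logrank_z(*map(list, zip(*late))) if late else 0.0
    return z_full + z_late
```

With lag = 0 the two components coincide, so the statistic is simply twice the ordinary log-rank statistic, which is a convenient check.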

11.
A method of calculating simultaneous one-sided confidence intervals for all ordered pairwise differences of the treatment effects θj − θi, 1 ≤ i < j ≤ k, in a one-way model without any distributional assumptions is discussed. When it is known a priori that the treatment effects satisfy the simple ordering θ1 ≤ … ≤ θk, these simultaneous confidence intervals offer the experimenter a simple way of determining which treatment effects may be declared unequal, and are more powerful than the usual two-sided Steel–Dwass procedure. Some exact critical points required by the confidence intervals are presented for k = 3 and small sample sizes, and other methods of critical point determination, such as asymptotic approximation and simulation, are discussed.

12.
This note discusses the effect of unmodelled autocorrelated disturbances on the statistics used in drawing inferences in the multiple linear regression model. It derives biases for the F and R² statistics and evaluates them numerically for an example. The note concludes with a few brief reflections for empirical research on the causes, detection and treatment of autocorrelation.
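The kind of bias at issue can be illustrated with a small Monte Carlo experiment (the AR(1) setup and all parameter values here are our own choices, not the note's example): with positively autocorrelated disturbances and a trending regressor, OLS R² is inflated relative to the i.i.d.-error case even when the true slope is zero:

```python
import random

def mean_r_squared_ar1(phi, n=200, sims=200, seed=0):
    """Average OLS R^2 when y is pure AR(1) noise regressed on a trend."""
    rng = random.Random(seed)
    x = [i / n for i in range(n)]                  # trending regressor
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    total = 0.0
    for _ in range(sims):
        e, u = 0.0, []
        for _ in range(n):
            e = phi * e + rng.gauss(0.0, 1.0)      # AR(1) disturbance
            u.append(e)                            # true slope is zero
        ybar = sum(u) / n
        beta = sum((xi - xbar) * yi for xi, yi in zip(x, u)) / sxx
        sse = sum((yi - ybar - beta * (xi - xbar)) ** 2
                  for xi, yi in zip(x, u))
        sst = sum((yi - ybar) ** 2 for yi in u)
        total += 1.0 - sse / sst
    return total / sims

baseline = mean_r_squared_ar1(0.0)    # i.i.d. errors: R^2 near 1/(n-1)
inflated = mean_r_squared_ar1(0.9)    # strong autocorrelation: much larger
```

The slow wandering of the AR(1) errors mimics a trend, so the regression spuriously "explains" variance it should not.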

13.
The paper compares several methods for computing robust 1 − α confidence intervals for σ1² − σ2² or σ1²/σ2², where σ1² and σ2² are the population variances corresponding to two independent treatment groups. The emphasis is on a Box–Scheffé approach when distributions have different shapes, and so the results reported here have implications for comparing means. The main result is that for unequal sample sizes, a Box–Scheffé approach can be considerably less robust than indicated by past investigations. Several other procedures for comparing variances, not based on a Box–Scheffé approach, were also examined and found to be highly unsatisfactory, although previously published papers found them to be robust when the distributions have identical shapes. Included is a new result on why the procedures examined here are not robust, and an illustration that increasing σ1² − σ2² can reduce power in certain situations. Constants needed to apply Dunnett's robust comparison of means are included.

14.
Abstract

We study optimal block designs for comparing a set of test treatments with a control treatment. We provide the class of all E-optimal approximate block designs, which is characterized by simple linear constraints. Based on this characterization, we obtain a class of E-optimal exact designs for unequal block sizes. In the studied model, we provide a statistical interpretation for wide classes of E-optimal designs. Moreover, we show that all approximate A-optimal designs and a large class of A-optimal exact designs for treatment-control comparisons are also R-optimal. This reinforces the observation that A-optimal designs perform well even for rectangular confidence regions.

15.
In rare diseases, typically only a small number of patients are available for a randomized clinical trial. Nevertheless, it is not uncommon that more than one study is performed to evaluate a (new) treatment. Scarcity of available evidence makes it particularly valuable to pool the data in a meta-analysis. When the primary outcome is binary, the small sample sizes increase the chance of observing zero events. The frequentist random-effects model is known to induce bias and to result in improper interval estimation of the overall treatment effect in a meta-analysis with zero events. Bayesian hierarchical modeling could be a promising alternative. Bayesian models are known for being sensitive to the choice of prior distributions for the between-study variance (heterogeneity) in sparse settings. In a rare disease setting, only limited data will be available to base the prior on; therefore, robustness of estimation is desirable. We performed an extensive and diverse simulation study, aiming to provide practitioners with advice on the choice of a sufficiently robust prior distribution shape for the heterogeneity parameter. Our results show that priors that place some concentrated mass on small τ values but do not restrict the density, for example the Uniform(−10, 10) heterogeneity prior on the log(τ²) scale, show robust 95% coverage combined with less overestimation of the overall treatment effect, across varying degrees of heterogeneity. We illustrate the results with meta-analyses of a few small trials.

16.
Repeated measurement designs with two treatments, n (experimental) units and p periods are examined; the two treatments are denoted A and B. The model with independent observations within and between treatment sequences is used. Optimal designs are derived for: (i) the difference of direct treatment effects and the difference of residual effects, (ii) the difference of direct treatment effects, and (iii) the difference of residual effects. We prove that for three periods, when n is odd, the optimal design in the three cases (i), (ii), and (iii) is determined by taking the sequences BAA and ABB in numbers differing by one. If n is even, the optimal design in cases (i), (ii), and (iii) is again the same, obtained by taking the sequences ABB and BAA in equal numbers. In case (i), for n even or odd, in the optimal design there is no correlation between the two estimated parameters. For n even, case (i) was solved by Cheng and Wu in 1980. The above implies that, with two treatments, it is preferable in practice to use three periods instead of two.

17.
The generalized doubly robust estimator is proposed for estimating the average treatment effect (ATE) of multiple treatments based on the generalized propensity score (GPS). In medical research based on observational studies, estimates of ATEs are usually biased, since the covariate distributions may be unbalanced among treatments. To overcome this problem, Imbens [The role of the propensity score in estimating dose-response functions, Biometrika 87 (2000), pp. 706–710] and Feng et al. [Generalized propensity score for estimating the average treatment effect of multiple treatments, Stat. Med. (2011), in press. Available at: http://onlinelibrary.wiley.com/doi/10.1002/sim.4168/abstract] proposed weighted estimators that are extensions of a ratio estimator based on the GPS to estimate ATEs with multiple treatments. However, the ratio estimator always produces a larger empirical sample variance than the doubly robust estimator, which estimates an ATE between two treatments based on the estimated propensity score (PS). We conduct a simulation study to compare the performance of our proposed estimator with Imbens' and Feng et al.'s estimators, and the simulation results show that our proposed estimator outperforms theirs in terms of bias, empirical sample variance, and mean-squared error of the estimated ATEs.
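The weighting idea behind GPS-based estimators can be sketched as follows; here the true assignment probabilities are known by construction, which is purely for illustration, whereas in practice the GPS must itself be estimated:

```python
import random
rng = random.Random(7)

T, N = 3, 30_000
# known assignment probabilities (the GPS), depending on a binary covariate x;
# these values are arbitrary illustrative choices
gps = {0: [0.6, 0.3, 0.1], 1: [0.2, 0.3, 0.5]}

sums = [0.0] * T
wts = [0.0] * T
for _ in range(N):
    x = rng.randint(0, 1)
    t = rng.choices(range(T), weights=gps[x])[0]
    y = 1.0 * t + 0.5 * x + rng.gauss(0.0, 1.0)   # true effect of treatment t is t
    w = 1.0 / gps[x][t]                           # inverse-probability weight
    sums[t] += w * y
    wts[t] += w

means = [s / w for s, w in zip(sums, wts)]        # weighted potential-outcome means
ate_1_0 = means[1] - means[0]                     # should recover roughly 1.0
```

Despite the confounded assignment (covariate x shifts both treatment choice and outcome), the reweighted means recover the true dose-like effects t + 0.5·E[x].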

18.
E. Csáki & I. Vincze, Statistics, 2013, 47(4): 531–548
Two test statistics analogous to Pearson's chi-square test function, given in (1.6) and (1.7), are investigated. These statistics utilize, apart from the number of sample elements lying in the respective intervals of the partition, their positions within the intervals too. It is shown that the test statistics are asymptotically distributed, as the sample size N tends to infinity, according to the χ² distribution with parameter r, i.e. the number of intervals chosen. The limiting distribution of the test statistics under the null hypothesis when N tends to infinity and r = O(N^α) (0 < α < 1), and further the consistency of the tests based on these statistics, is considered. Some remarks are made concerning the efficiency of the corresponding goodness-of-fit tests as well; the authors intend to return to a more detailed treatment of the efficiency later.

19.
The authors propose nonparametric tests for the hypothesis of no direct treatment effects, as well as for the hypothesis of no carryover effects, for balanced crossover designs in which the number of treatments equals the number of periods p, where p ≥ 3. They suppose that the design consists of n replications of balanced crossover designs, each formed by m Latin squares of order p. Their tests are permutation tests which are based on the n vectors of least squares estimators of the parameters of interest obtained from the n replications of the experiment. They obtain both the exact and limiting distribution of the test statistics, and they show that the tests have, asymptotically, the same power as the F-ratio test.

20.
S. Mejza, Statistics, 2013, 47(3): 335–341
In this paper the problem of combining the estimates is reexamined by making use of the theory of basic contrasts. For some basic contrasts, called partially confounded, a general method of finding uniformly better combined estimators of treatment contrasts is derived. The method is applicable to all proper block designs, not necessarily connected, with equal or different treatment replications, for which there are multiple efficiency factors ε of multiplicity q > 2 and for which νe > 2, where νe is the number of error degrees of freedom in the intra-block analysis.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司). ICP license: 京ICP备09084417号