Similar Literature
20 similar documents found.
1.
The paper discusses the relationship between the gamma-distribution order statistics, the maximal cell frequency in multinomial trials and the distribution arising in a commonly used balanced randomization scheme with several treatments. The latter is used in clinical trials, load balancing in computer file storage, parapsychological experiments, etc., where a randomization design starts with the uniform probability assignment of subjects to treatments. The limiting distributions of the waiting time until a treatment receives the given number of subjects are described. The relationship to classical occupancy problems, in particular to the Banach match-box problem and to the birthday problem, is discussed.
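For concreteness, the waiting time described in this abstract can be explored by simulation. The sketch below is a minimal Monte Carlo illustration under uniform assignment of subjects to k treatments; the values of k and m and all function names are illustrative, not taken from the paper.

```python
import random
from collections import Counter

def waiting_time(k, m, rng):
    """Number of uniform random assignments to k treatments until the first
    treatment has accumulated m subjects."""
    counts = Counter()
    t = 0
    while not counts or max(counts.values()) < m:
        counts[rng.randrange(k)] += 1
        t += 1
    return t

rng = random.Random(0)
samples = [waiting_time(k=5, m=20, rng=rng) for _ in range(5000)]
print("estimated mean waiting time:", sum(samples) / len(samples))
```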

2.
In comparing two treatments, suppose the suitable subjects arrive sequentially and must be treated at once. Known or unknown to the experimenter, there may be nuisance factors systematically affecting the subjects. Accidental bias is a measure of the influence of these factors in the analysis of data. We show in this paper that the random allocation design minimizes the accidental bias among all designs that allocate n, out of 2n, subjects to each treatment and do not prefer either treatment in the assignment. When the final imbalance is allowed to be nonzero, optimal and efficient designs are given. In particular the random allocation design is shown to be very efficient in this broader setup.
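As a point of reference for the random allocation design discussed above, the sketch below generates one such allocation by uniformly permuting a balanced assignment vector of 2n labels; this is a generic illustration, not code from the paper.

```python
import random

def random_allocation(n, rng):
    """Random allocation design: exactly n of the 2n sequentially arriving subjects
    receive treatment 'A' and the other n receive 'B', chosen uniformly at random
    over all balanced assignment sequences."""
    seq = ['A'] * n + ['B'] * n
    rng.shuffle(seq)
    return seq

print(random_allocation(5, random.Random(1)))
```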

3.
A method is presented for the sequential analysis of experiments involving two treatments to which response is dichotomous. Composite hypotheses about the difference in success probabilities are tested, and covariate information is utilized in the analysis. The method is based upon a generalization of Bartlett’s (1946) procedure for using the maximum likelihood estimate of a nuisance parameter in a Sequential Probability Ratio Test (SPRT). Treatment assignment rules studied include pure randomization, randomized blocks, and an adaptive rule which tends to assign the superior treatment to the majority of subjects. It is shown that the use of covariate information can result in important reductions in the expected sample size for specified error probabilities, and that the use of covariate information is essential for the elimination of bias when adaptive assignment rules are employed. Designs of the type presented are easily generated, as the termination criterion is the same as for a Wald SPRT of simple hypotheses.

4.
We present schemes for the allocation of subjects to treatment groups, in the presence of prognostic factors. The allocations are robust against incorrectly specified regression responses, and against possible heteroscedasticity. Assignment probabilities which minimize the asymptotic variance are obtained. Under certain conditions these are shown to be minimax (with respect to asymptotic mean squared error) as well. We propose a method of sequentially modifying the associated assignment rule, so as to address both variance and bias in finite samples. The resulting scheme is assessed in a simulation study. We find that, relative to common competitors, the robust allocation schemes can result in significant decreases in the mean squared error when the fitted models are biased, at a minimal cost in efficiency when in fact the fitted models are correct.

5.
This paper presents a new class of designs (Big Stick Designs) for sequentially assigning experimental units to treatments, when only the time covariate is considered. By prescribing the degree of imbalance which the experimenters can tolerate, complete randomization is used as long as the imbalance of the treatment allocation does not exceed the prescribed value. Once it reaches the value, a deterministic assignment is made to lower the imbalance. Such designs can be easily implemented with no programming and little personnel support. They compare favorably with the Biased Coin Designs, the Permuted Block Designs, and the Urn Designs, as far as the accidental bias and selection bias are concerned. Generalizations of these designs are considered to achieve various purposes, e.g., avoidance of deterministic assignments, early balance, etc.
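My reading of the allocation rule described in this abstract, as a minimal sketch: use a fair coin while the absolute imbalance between the two arms is below the tolerated bound, and assign deterministically to the lagging arm once the bound is reached. The bound value and all names are illustrative.

```python
import random

def big_stick_assignments(n_subjects, bound, rng):
    """Sequential two-arm assignment: complete randomization while |#A - #B| < bound;
    once the imbalance reaches `bound`, the under-represented arm is assigned."""
    assignments, imbalance = [], 0           # imbalance = (#A assigned) - (#B assigned)
    for _ in range(n_subjects):
        if imbalance >= bound:               # too many A's: force B
            arm = 'B'
        elif imbalance <= -bound:            # too many B's: force A
            arm = 'A'
        else:                                # within tolerance: fair coin
            arm = rng.choice('AB')
        assignments.append(arm)
        imbalance += 1 if arm == 'A' else -1
    return assignments

print(big_stick_assignments(20, bound=3, rng=random.Random(42)))
```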

6.
We consider a problem of estimating the minimum effective and peak doses in the presence of covariates. We propose a sequential strategy for subject assignment that includes an adaptive randomization component to balance the allocation to placebo and active doses with respect to covariates. We conclude that either adjusting for covariates in the model or balancing allocation with respect to covariates is required to avoid bias in the target dose estimation. We also compute optimal allocation to estimate the minimum effective and peak doses in discrete dose space using isotonic regression.
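The isotonic-regression step mentioned above can be illustrated with a small pool-adjacent-violators (PAVA) sketch. The dose-level means, the margin `delta`, and the rule used to read off a minimum effective dose below are illustrative assumptions, not the paper's specification.

```python
def pava(y, w=None):
    """Pool-adjacent-violators: weighted non-decreasing (isotonic) fit to the sequence y."""
    w = [1.0] * len(y) if w is None else list(w)
    vals, wts, sizes = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); sizes.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:   # merge blocks violating monotonicity
            merged_w = wts[-2] + wts[-1]
            merged_v = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / merged_w
            merged_s = sizes[-2] + sizes[-1]
            vals[-2:], wts[-2:], sizes[-2:] = [merged_v], [merged_w], [merged_s]
    out = []
    for v, s in zip(vals, sizes):                      # expand blocks back to original length
        out.extend([v] * s)
    return out

doses = [0, 1, 2, 3, 4]                    # placebo plus four active doses (illustrative)
means = [0.10, 0.30, 0.25, 0.55, 0.50]     # observed mean responses per dose
iso = pava(means)
delta = 0.15                               # clinically relevant margin over placebo
med = next(d for d, m in zip(doses, iso) if d > 0 and m >= iso[0] + delta)
print("isotonic fit:", iso, "estimated minimum effective dose:", med)
```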

7.
In an experiment to compare K (≥ 2) treatments, suppose that eligible subjects arrive at an experimental site sequentially and must be treated immediately. In this paper, we assume that the size of the experiment cannot be predetermined and propose and analyze a class of treatment assignment rules which offer compromises between the complete randomization and the perfect balance schemes. A special case of these assignment rules is thoroughly investigated and is featured in the numerical computations. For practical use, a method of implementation of this special rule is provided.

8.
A. Galbete & J.A. Moler, Statistics, 2016, 50(2): 418–434
In a randomized clinical trial, response-adaptive randomization procedures use the information gathered, including the previous patients' responses, to allocate the next patient. In this setting, we consider randomization-based inference. We provide an algorithm to obtain exact p-values for statistical tests that compare two treatments with dichotomous responses. This algorithm can be applied to a family of response-adaptive randomization procedures which share the following property: the distribution of the allocation rule depends only on the imbalance between treatments and on the imbalance between successes for treatments 1 and 2 in the previous step. This family includes some prominent response-adaptive randomization procedures. We study a randomization test to contrast the null hypothesis of equivalence of treatments and we show that this test has a similar performance to that of its parametric counterpart. In addition, we study the effect of a covariate in the inferential process. First, we obtain a parametric test, constructed assuming a logit model which relates responses to treatments and covariate levels, and we give conditions that guarantee its asymptotic normality. Finally, we show that the randomization test, which is free of model specification, performs as well as the parametric test that takes the covariate into account.
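The paper's algorithm gives exact p-values for the stated family of procedures; the sketch below illustrates the same idea only approximately, with a Monte Carlo re-randomization test under a simple response-adaptive rule whose allocation probability depends only on the assignment and success imbalances. The rule, data, and constants are all illustrative.

```python
import random

def allocate(responses, rng):
    """One run of an illustrative response-adaptive rule: the probability of assigning
    treatment 1 depends only on the current assignment and success imbalances."""
    assign, n1, n2, s1, s2 = [], 0, 0, 0, 0
    for y in responses:
        p1 = 0.5 + 0.15 * ((s1 - s2) - 0.5 * (n1 - n2)) / (1 + n1 + n2)
        p1 = min(max(p1, 0.1), 0.9)
        t = 1 if rng.random() < p1 else 2
        assign.append(t)
        if t == 1:
            n1, s1 = n1 + 1, s1 + y
        else:
            n2, s2 = n2 + 1, s2 + y
    return assign

def success_diff(assign, responses):
    succ, size = {1: 0, 2: 0}, {1: 0, 2: 0}
    for t, y in zip(assign, responses):
        succ[t] += y
        size[t] += 1
    rate = lambda t: succ[t] / size[t] if size[t] else 0.0
    return rate(1) - rate(2)

rng = random.Random(7)
# Dichotomous responses per patient; under the strong null they do not depend on treatment.
responses = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0]
observed_assign = allocate(responses, rng)      # stands in for the trial's actual sequence
obs_stat = abs(success_diff(observed_assign, responses))

# Null distribution: re-run the allocation rule on the fixed responses many times.
reps = 2000
extreme = sum(abs(success_diff(allocate(responses, rng), responses)) >= obs_stat
              for _ in range(reps))
print("Monte Carlo randomization p-value:", (extreme + 1) / (reps + 1))
```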

9.
This paper develops clinical trial designs that compare two treatments with a binary outcome. The imprecise beta class (IBC), a class of beta probability distributions, is used in a robust Bayesian framework to calculate posterior upper and lower expectations for treatment success rates using accumulating data. The posterior expectation for the difference in success rates can be used to decide when there is sufficient evidence for randomized treatment allocation to cease. This design is formally related to the randomized play‐the‐winner (RPW) design, an adaptive allocation scheme where randomization probabilities are updated sequentially to favour the treatment with the higher observed success rate. A connection is also made between the IBC and the sequential clinical trial design based on the triangular test. Theoretical and simulation results are presented to show that the expected sample sizes on the truly inferior arm are lower using the IBC compared with either the triangular test or the RPW design, and that the IBC performs well against established criteria involving error rates and the expected number of treatment failures.
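For context, the randomized play-the-winner rule referenced above is commonly described as an urn scheme; the sketch below is a generic illustration of that urn (with made-up success probabilities), not the paper's IBC design.

```python
import random

def rpw_trial(p_a, p_b, n, seed=3):
    """Randomized play-the-winner urn: start with one ball per arm; a success on an arm
    adds a ball for that arm, a failure adds a ball for the opposite arm."""
    rng = random.Random(seed)
    urn = ['A', 'B']
    counts = {'A': 0, 'B': 0}
    p = {'A': p_a, 'B': p_b}
    for _ in range(n):
        arm = rng.choice(urn)
        counts[arm] += 1
        success = rng.random() < p[arm]
        urn.append(arm if success else ('B' if arm == 'A' else 'A'))
    return counts

# Allocation drifts toward the arm with the higher true success rate.
print(rpw_trial(p_a=0.7, p_b=0.4, n=100))
```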

10.
Motivated by problems in linguistics we consider a multinomial random vector for which the number of cells N is not much smaller than the sum of the cell frequencies, i.e. the sample size n. The distribution function of the uniform distribution on the set of all cell probabilities multiplied by N is called the structural distribution function of the cell probabilities. Conditions are given that guarantee that the structural distribution function can be estimated consistently as n increases indefinitely although n/N does not. The natural estimator is inconsistent and we prove consistency of essentially two alternative estimators.

11.
This paper deals with the analysis of randomization effects in multi‐centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre‐stratified block‐permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson‐gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed‐form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre‐stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
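A small simulation sketch of the Poisson-gamma recruitment model described above; the shape and rate values, the recruitment horizon, and the number of centres are illustrative, and the closed-form predictive distributions derived in the paper are not reproduced here.

```python
import random

def simulate_recruitment(n_centres, shape, rate, horizon, seed=11):
    """Poisson-gamma model: centre i recruits at rate lambda_i ~ Gamma(shape, rate);
    arrivals at that centre follow a Poisson process, simulated via exponential gaps."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_centres):
        lam = rng.gammavariate(shape, 1.0 / rate)   # this centre's recruitment rate
        t, count = 0.0, 0
        while True:
            t += rng.expovariate(lam)               # next inter-arrival time
            if t > horizon:
                break
            count += 1
        totals.append(count)
    return totals

counts = simulate_recruitment(n_centres=50, shape=2.0, rate=1.0, horizon=12.0)
print(sum(counts), "patients recruited across", len(counts), "centres")
```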

12.
This continuing education course for professionals involved in all areas of clinical trials integrates concepts related to the role of randomization in the scientific process. The course includes two interactive lecture and discussion sections and a workshop practicum. The first interactive lecture introduces basic clinical trial issues and statistical principles such as bias, blinding, randomization, control groups, and the importance of formulating clear and discriminating clinical and statistical hypotheses. It then focuses on the most commonly used clinical study designs and the corresponding patient randomization schemes. The second interactive lecture focuses on the implementation of randomization of patients and drug supply through allocation and component ID schedules. The workshop practicum, conducted in small groups, enables students to apply the lecture concepts to real clinical studies. Flexibility was built into the workshop practicum materials to allow the course content to be customized to specific audiences, and the interactive lecture sessions can be stretched to cover more advanced topics according to class interest and time availability.

13.
Efficient numerical algorithms are developed to evaluate several probabilities related to multinomial trials. In the first part of the paper, the probability distribution of the number of trials until the alternatives j, j = 1, …, m, have occurred at least i_j times is computed. The multinomial trials involve the m alternatives 1, …, m, with positive probabilities p_1, …, p_m of occurrence. In the second part, several aspects of a multinomial subset selection problem, discussed by S. S. Gupta and K. Nagel, are investigated.
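The paper computes these probabilities with exact algorithms; as a quick illustration of the quantity in question, the Monte Carlo sketch below estimates the distribution of the number of multinomial trials needed until every alternative j has occurred at least i_j times. The cell probabilities and quotas are illustrative.

```python
import random
from collections import Counter

def trials_until_quotas(p, quotas, rng):
    """Number of multinomial trials until alternative j has occurred at least quotas[j] times."""
    counts = [0] * len(p)
    t = 0
    while any(c < q for c, q in zip(counts, quotas)):
        counts[rng.choices(range(len(p)), weights=p)[0]] += 1
        t += 1
    return t

rng = random.Random(5)
p = [0.5, 0.3, 0.2]      # cell probabilities p_1, ..., p_m
quotas = [2, 2, 1]       # required counts i_1, ..., i_m
samples = [trials_until_quotas(p, quotas, rng) for _ in range(20000)]
dist = Counter(samples)
print({t: round(dist[t] / len(samples), 4) for t in sorted(dist)[:8]})  # empirical P(T = t)
```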

14.
We compare posterior and predictive estimators and probabilities in response-adaptive randomization designs for two- and three-group clinical trials with binary outcomes. Adaptation based upon posterior estimates is discussed, as are two predictive probability algorithms: one using the traditional definition, the other using a skeptical distribution. Optimal and natural lead-in designs are covered. Simulation studies show that efficacy comparisons lead to more adaptation than center comparisons, though at some power loss; skeptically predictive efficacy comparisons and natural lead-in approaches lead to less adaptation but offer reduced allocation variability. Though nuanced, these results help clarify the power-adaptation trade-off in adaptive randomization.
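As a generic illustration of posterior-based adaptation of the kind compared above (not the paper's specific estimators or lead-in designs), the sketch below places Beta posteriors on two binary-outcome arms and derives the next allocation probability from the posterior probability that arm A is superior, estimated by simulation.

```python
import random

rng = random.Random(9)

def prob_a_better(succ_a, fail_a, succ_b, fail_b, draws=4000):
    """Monte Carlo estimate of P(p_A > p_B) under independent Beta(1 + s, 1 + f) posteriors."""
    wins = sum(rng.betavariate(1 + succ_a, 1 + fail_a) > rng.betavariate(1 + succ_b, 1 + fail_b)
               for _ in range(draws))
    return wins / draws

# Accumulated data so far (illustrative): arm A 12/20 successes, arm B 7/20 successes.
post = prob_a_better(12, 8, 7, 13)
# Tempering the posterior probability is a common way to limit allocation variability.
alloc_a = post ** 0.5 / (post ** 0.5 + (1 - post) ** 0.5)
print(f"P(A better) ~ {post:.3f}; next-patient allocation probability to A ~ {alloc_a:.3f}")
```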

15.
In drug development, treatments are most often selected at Phase 2 for further development when an initial trial of a new treatment produces a result that is considered positive. This selection due to a positive result means, however, that an estimator of the treatment effect which does not take account of the selection is likely to over‐estimate the true treatment effect (i.e., will be biased). This bias can be large and researchers may face a disappointingly lower estimated treatment effect in further trials. In this paper, we review a number of methods that have been proposed to correct for this bias and introduce three new methods. We present results from applying the various methods to two examples and consider extensions of the examples. We assess the methods with respect to bias of estimation of the treatment effect and compare the probabilities that a bias‐corrected treatment effect estimate will exceed a decision threshold. Following previous work, we also compare average power for the situation where a Phase 3 trial is launched given that the bias‐corrected observed Phase 2 treatment effect exceeds a launch threshold. Finally, we discuss our findings and potential application of the bias correction methods.

16.
In a cluster randomized controlled trial (RCT), the number of randomized units is typically considerably smaller than in trials where the unit of randomization is the patient. If the number of randomized clusters is small, there is a reasonable chance of baseline imbalance between the experimental and control groups. This imbalance threatens the validity of inferences regarding post‐treatment intervention effects unless an appropriate statistical adjustment is used. Here, we consider application of the propensity score adjustment for cluster RCTs. For the purpose of illustration, we apply the propensity adjustment to a cluster RCT that evaluated an intervention to reduce suicidal ideation and depression. This approach to adjusting imbalance had considerable bearing on the interpretation of results. A simulation study demonstrates that the propensity adjustment reduced well over 90% of the bias seen in unadjusted models for the specifications examined. Copyright © 2013 John Wiley & Sons, Ltd.

17.
Minimization is an alternative method to stratified permuted block randomization, which may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily due to the difficulty in interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before any balance advantage is no longer retained. We also illustrate by example how the performance of the traditional model-based analysis can be assessed, by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations.
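A minimal sketch of a basic minimization rule of the kind discussed above (a Pocock-Simon-style marginal-imbalance rule; the factors, the assignment probability `p_best`, and all names are illustrative rather than the scheme simulated in the paper): each new patient is assigned, with probability `p_best`, to the arm that would minimize the summed imbalance across the stratification factors.

```python
import random

def minimization_assign(patients, factors, p_best=0.8, seed=21):
    """Sequentially assign patients to arms 'A'/'B', preferring (with probability p_best)
    the arm that minimizes the summed marginal imbalance over the given factors."""
    rng = random.Random(seed)
    counts = {f: {} for f in factors}        # counts[factor][level][arm] = patients so far
    assignments = []
    for patient in patients:
        imbalance = {}
        for arm in 'AB':
            total = 0
            for f in factors:
                c = counts[f].setdefault(patient[f], {'A': 0, 'B': 0})
                after = {a: c[a] + (1 if a == arm else 0) for a in 'AB'}
                total += abs(after['A'] - after['B'])
            imbalance[arm] = total
        if imbalance['A'] == imbalance['B']:
            arm = rng.choice('AB')                      # tie: pure randomization
        else:
            best = min(imbalance, key=imbalance.get)
            arm = best if rng.random() < p_best else ('B' if best == 'A' else 'A')
        assignments.append(arm)
        for f in factors:
            counts[f][patient[f]][arm] += 1
    return assignments

demo_rng = random.Random(0)
patients = [{'sex': demo_rng.choice(['M', 'F']), 'age': demo_rng.choice(['<65', '65+'])}
            for _ in range(30)]
print(minimization_assign(patients, factors=('sex', 'age')))
```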

18.
The re-randomization test has been considered as a robust alternative to the traditional population model-based methods for analyzing randomized clinical trials. This is especially so when the clinical trials are randomized according to minimization, which is a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among various re-randomization tests, the fixed-entry-order re-randomization test is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed-entry-order re-randomization test is biased and thus compromised in power. We find that the bias is due to non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated including the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.

19.
Inverse probability weighting (IPW) can deal with confounding in non-randomized studies. The inverse weights are probabilities of treatment assignment (propensity scores), estimated by regressing assignment on predictors. Problems arise if predictors can be missing. Solutions previously proposed include assuming assignment depends only on observed predictors and multiple imputation (MI) of missing predictors. For the MI approach, it was recommended that missingness indicators be used with the other predictors. We determine when the two MI approaches (with/without missingness indicators) yield consistent estimators and compare their efficiencies. We find that, although including indicators can reduce bias when predictors are missing not at random, it can induce bias when they are missing at random. We propose a consistent variance estimator and investigate the performance of the simpler Rubin’s Rules variance estimator. In simulations we find both estimators perform well. IPW is also used to correct bias when an analysis model is fitted to incomplete data by restricting to complete cases. Here, weights are inverse probabilities of being a complete case. We explain how the same MI methods can be used in this situation to deal with missing predictors in the weight model, and illustrate this approach using data from the National Child Development Survey.
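A bare-bones sketch of the core IPW step described above, on simulated data with fully observed predictors (the paper's subject, multiple imputation of missing predictors in the weight model, is not implemented here). It assumes numpy and scikit-learn are available; all variable names and effect sizes are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))                                  # baseline predictors
treat = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x[:, 0])))    # assignment depends on x -> confounding
y = 1.0 * treat + 0.5 * x[:, 0] + rng.normal(size=n)         # outcome with true effect 1.0

# Propensity scores P(treat = 1 | x) from a logistic regression of assignment on predictors.
ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))               # inverse-probability weights

ipw = (np.average(y[treat == 1], weights=w[treat == 1])
       - np.average(y[treat == 0], weights=w[treat == 0]))
naive = y[treat == 1].mean() - y[treat == 0].mean()
print(f"naive difference: {naive:.2f}, IPW-adjusted: {ipw:.2f} (true effect = 1.0)")
```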

20.
In this paper, the method of Hocking and Oxspring (1971) to estimate multinomial probabilities when full and partial data are available for some cells is extended to estimate the cell probabilities of a contingency table with structural zeros. The estimates are maximum likelihood, and the process is sequential. The gain in precision is due to the use of partial data, and the bias of the estimates is also investigated.
