Similar Articles (20 results)
1.
Phase I clinical trials are designed to study several doses of the same drug in a small group of patients to determine the maximum tolerated dose (MTD), defined as the dose associated with dose-limiting toxicity (DLT) in a desired fraction Γ of patients. Durham and Flournoy [Durham, S. and Flournoy, N. (1995). Up-and-down designs I: Stationary treatment distributions. IMS Lecture Notes – Monogr. Ser. 25, 139–157] proposed the biased coin design (BCD), an up-and-down design that assigns a new patient to a dose depending upon whether or not the current patient experienced a DLT. However, the BCD in its standard form requires complete follow-up of the current patient before the new patient can be assigned a dose. In situations where patients' follow-up times are long relative to patient inter-arrival times, the BCD results in an impractically long trial and causes patients either to enter the trial late or to be refused entry altogether. We propose an adaptive accelerated BCD (aaBCD) that generalizes the traditional BCD algorithm by incorporating an adaptive weight function based upon the amount of follow-up of each enrolled patient. The dose assignment for each eligible patient can then be determined immediately, with no delay, leading to a shorter trial overall. We show, via simulation, that both the frequency of correctly identifying the MTD at the end of the study with the aaBCD and the number of patients assigned to the MTD are comparable to those of the traditional BCD. We also compare the performance of the aaBCD with the accelerated BCD (ABCD) of Stylianou and Follmann [Stylianou, M. and Follmann, D. A. (2004). The accelerated biased coin up-and-down design in phase I trials. J. Biopharmaceutical Statist. 14, 249–260], as well as the time-to-event continual reassessment method (TITE-CRM) of Cheung and Chappell [Cheung, Y. K. and Chappell, R. (2000). Sequential designs for phase I clinical trials with late-onset toxicities. Biometrics 56, 1177–1182].
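The up-and-down BCD rule described in this abstract has a simple form when the target toxicity rate satisfies Γ ≤ 0.5: after a DLT the next patient moves one dose down; after no DLT the next patient moves up with probability b = Γ/(1 − Γ), and otherwise stays. A minimal sketch follows; the function and parameter names are illustrative, not taken from the paper.

```python
import random

def bcd_next_dose(current_level, had_dlt, n_levels, gamma=1/3, rng=random):
    """Durham-Flournoy biased coin up-and-down rule (sketch, for gamma <= 0.5).

    After a dose-limiting toxicity (DLT) the next patient moves one dose down;
    after no DLT the next patient moves up with probability b = gamma/(1-gamma),
    and otherwise stays. Dose levels are indexed 0 .. n_levels-1.
    """
    if had_dlt:
        return max(current_level - 1, 0)
    b = gamma / (1.0 - gamma)
    if rng.random() < b:
        return min(current_level + 1, n_levels - 1)
    return current_level
```

Under this rule the dose walk concentrates around the level whose DLT probability is closest to Γ, which is what the stationary-distribution results of Durham and Flournoy formalize.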

2.
Building on recently obtained results of Hu and Rosenberger [2003. Optimality, variability, power: evaluating response-adaptive randomization procedures for treatment comparisons. J. Amer. Statist. Assoc. 98, 671–678] and Chen [2006. The power of Efron's biased coin design. J. Statist. Plann. Inference 136, 1824–1835] on the relationship between sequential randomized designs and the power of the usual statistical procedures for testing the equivalence of two competing treatments, this paper provides theoretical proofs of Chen's numerical results. Furthermore, we prove that the Adjustable Biased Coin Design [Baldi Antognini, A. and Giovagnoli, A. (2004). A new "biased coin design" for the sequential allocation of two treatments. J. Roy. Statist. Soc. Ser. C 53, 651–664] is uniformly more powerful than the other "coin" designs proposed in the literature, for any sample size.
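Efron's biased coin, the rule both papers analyze, assigns the next patient to the under-represented treatment with a fixed probability p > 1/2 (Efron originally suggested p = 2/3) and tosses a fair coin when the groups are balanced. A minimal sketch, with illustrative names:

```python
import random

def efron_assign(n_a, n_b, p=2/3, rng=random):
    """Efron's biased coin design (sketch). Given current counts n_a and n_b,
    return 'A' or 'B' for the next patient. With imbalance D = n_a - n_b,
    treatment A is chosen with probability 1/2 when D == 0, p when D < 0
    (A under-represented), and 1 - p when D > 0."""
    d = n_a - n_b
    if d == 0:
        prob_a = 0.5
    elif d < 0:
        prob_a = p
    else:
        prob_a = 1.0 - p
    return 'A' if rng.random() < prob_a else 'B'
```

Because the coin always leans toward the lagging arm, the absolute imbalance |n_a − n_b| stays stochastically bounded for every sample size, which is the balance property the power comparisons in these papers rest on.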

3.
For comparing treatments in clinical trials, Atkinson (1982) introduced optimal biased coins that balance patients across treatment assignments by using D-optimality, under the assumption of homoscedastic responses across treatments. However, this assumption can be violated in many real applications. In this paper, we relax the homoscedasticity assumption in the setting of k treatments with k > 2. A general family of optimal response-adaptive biased coin designs is proposed following Atkinson's procedure. Asymptotic properties of the proposed designs are obtained, and some of their advantages are discussed.

4.
Efron's biased coin design (BCD) is a well-known randomization technique that helps neutralize selection bias while keeping the experiment fairly balanced for every sample size. Several extensions of this rule have been proposed, and their properties have been analyzed from an asymptotic viewpoint and compared via simulation in a finite setup. The aim of this paper is to push these comparisons forward by also taking into account the adjustable BCD, which had not previously been considered. First, we show that the adjustable BCD performs better than Efron's coin with respect to both loss of precision and randomness. Moreover, the adjustable BCD is always more balanced than the other coins and, only for some sample sizes, slightly more predictable. We therefore suggest the dominant BCD, a new and flexible class of procedures that can change the allocation rule step by step in order to ensure very good performance in terms of both balance and selection bias for any sample size. Our simulations demonstrate that the dominant BCD is more balanced and, at the same time, less predictable than (or equally predictable as) Atkinson's optimum BCD.

5.
In clinical studies, patients usually accrue sequentially. Response-adaptive designs are then useful tools for assigning treatments to incoming patients as a function of the treatment responses observed thus far. In this regard, doubly adaptive biased coin designs have advantageous properties under the assumption that responses can be obtained immediately after treatment. However, it is common for responses to be observed only after a certain period of time. The authors examine the effect of delayed responses on doubly adaptive biased coin designs and derive some of their asymptotic properties. It turns out that these designs are relatively insensitive to delayed responses under widely satisfied conditions, as illustrated with a simulation study.

6.
The doubly adaptive biased coin design (DBCD) is an important family of response-adaptive randomization procedures for clinical trials. It uses sequentially updated estimates to skew the allocation probability in favor of the treatment that has performed better thus far. An important assumption for the DBCD is homogeneity of the patient responses; however, this assumption may be violated in many sequential experiments. Here we prove the robustness of the DBCD against certain time trends in patient responses: strong consistency and asymptotic normality of the design are obtained under some widely satisfied conditions. We also propose a general weighted likelihood method to reduce the bias that heterogeneity causes in post-trial inference, and present numerical studies illustrating the finite-sample properties of the DBCD.
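A common concrete choice of DBCD allocation rule is the Hu–Zhang allocation function, which maps the current allocation proportion x and the estimated target proportion ρ to the probability of assigning the next patient to treatment 1. A minimal sketch under that assumption (the abstract does not pin down a specific allocation function):

```python
def dbcd_allocation(x, rho, gamma=2.0):
    """Hu-Zhang allocation function commonly used in doubly adaptive biased
    coin designs (sketch). x is the current proportion of patients on
    treatment 1, rho the estimated target proportion; returns the probability
    that the next patient receives treatment 1. gamma >= 0 tunes how
    aggressively the design corrects deviations from the target."""
    if x <= 0.0:
        return 1.0
    if x >= 1.0:
        return 0.0
    num = rho * (rho / x) ** gamma
    den = num + (1.0 - rho) * ((1.0 - rho) / (1.0 - x)) ** gamma
    return num / den
```

With gamma = 0 the rule ignores the current imbalance and simply randomizes at the target ρ; larger gamma pushes the allocation back toward ρ more forcefully whenever x drifts away from it.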

7.
We propose a method for assigning treatment in clinical trials, called the 'biased coin adaptive within-subject' (BCAWS) design: during the course of follow-up, the subject's response to a treatment is used to influence future treatment through a 'biased coin' algorithm. This design results in treatment patterns that are closer to actual clinical practice and may be more acceptable to patients with chronic disease than the usual fixed trial regimens, which often suffer from drop-out and non-adherence. In this work, we show how to use the BCAWS design to compare treatment strategies, and we provide a simple example to illustrate the method.

8.
When testing a hypothesis with a nuisance parameter present only under the alternative, taking the maximum of a test statistic over the nuisance parameter space has been proposed. Different upper bounds for the one-sided tail probabilities of such maximum tests have been provided: Davies (1977, Biometrika 64, 247–254) studied the problem when the parameter space is an interval, while Efron (1997, Biometrika 84, 143–157) considered finitely many points of the parameter space and obtained a W-formula. We study the limiting bound of Efron's W-formula as the number of points in the parameter space goes to infinity, and give conditions under which this limiting bound is identical to that of Davies. The results are also extended to two-sided tests. Examples, including case-control genetic association studies, are used to illustrate the conditions. Efficient calculations of upper bounds for the tail probability with finitely many points in the parameter space are described.

9.
10.
Statistical process monitoring (SPM) has been used extensively in recent years to assure the quality of the output of industrial processes, and its techniques have been applied efficiently during the last two decades to non-industrial processes as well. A field of application of great interest is public health monitoring, where one pitfall we must deal with is that the available samples are not always random. In the majority of cases, we monitor measurements derived from patient admissions to a hospital against control limits that were calculated using a sample of data taken from an epidemiological survey. In this work, we bridge the gap created by a change in the sampling scheme from Phase I to Phase II, studying the case where the sampling during Phase II is biased. We present the appropriate methodology and then run extensive numerical simulations to explore its performance for measurements following various asymmetrical distributions. As the simulations show, the proposed methodology performs significantly better than the standard procedure.

11.
The Bayesian paradigm provides an ideal platform for updating uncertainties and carrying them forward as data accumulate. Bayesian predictive power (BPP) reflects our belief in the eventual success of a clinical trial in meeting its goals. In this paper we derive mathematical expressions for the most common types of outcomes, in order to make the BPP accessible to practitioners, to facilitate fast computation in adaptive trial design simulations that use interim futility monitoring, and to propose an organized BPP-based phase II-to-phase III design framework.

12.
13.
Much of the literature on matching problems in statistics has focused on single items chosen from independent but fully overlapping sets. This paper considers the more general problem of sampling without replacement from partially overlapping sets, and presents new theory on probability problems involving two urns and the selection of balls from those urns according to a biased without-replacement sampling scheme. The strength of the sampling bias is first treated as known, and then as unknown, with a discussion of how that strength may be assessed from observed data. Each scenario is illustrated with a numerical example, and the theory is applied to a gender-bias problem in committee selection and to a problem in which competing retailers select products to place on promotion.
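One simple way to make a biased without-replacement scheme concrete is weight-proportional sequential draws: at each step, each remaining ball is selected with probability proportional to its weight, and a weight above 1 models the selection bias. This sketch is a generic single-urn version, not the paper's two-urn formulation; all names are illustrative.

```python
import random

def biased_draws_without_replacement(balls, weights, n_draws, rng=random):
    """Biased without-replacement sampling (sketch): at each draw, remaining
    ball i is selected with probability proportional to weights[i], then
    removed from the urn. Returns the drawn balls in order."""
    balls = list(balls)
    weights = list(weights)
    drawn = []
    for _ in range(min(n_draws, len(balls))):
        total = sum(weights)
        u = rng.random() * total
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if u < acc:
                break
        drawn.append(balls.pop(i))
        weights.pop(i)
    return drawn
```

Setting all weights equal recovers ordinary simple random sampling without replacement, which gives a natural null model against which the bias strength can be assessed.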

14.
This study examined the influence of heterogeneity of variance on Type I error rates and power of the independent-samples Student's t-test of equality of means, applied to samples of scores from a normal distribution and from 10 non-normal distributions. The same test of equality of means was performed on the corresponding rank-transformed scores. For many non-normal distributions, both versions produced anomalous power functions, resulting partly from the fact that the hypothesis test was biased, so that under some conditions the probability of rejecting H0 decreased as the difference between means increased. In all cases where bias occurred, the t-test on ranks exhibited substantially greater bias than the t-test on scores. This anomalous result was independent of the more familiar changes in Type I error rates and power attributable to unequal sample sizes combined with unequal variances.
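The kind of Monte Carlo comparison this abstract describes is easy to reproduce in outline: draw two normal samples with equal means but unequal variances and sample sizes, apply the pooled-variance t-test to the scores (or to the ranks of the combined sample), and count rejections. A minimal sketch under stated assumptions: the critical value is a fixed approximation for the relevant degrees of freedom, and all names are illustrative.

```python
import math
import random

def pooled_t(x, y):
    """Student's pooled-variance t statistic for two independent samples."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    ssx = sum((v - mx) ** 2 for v in x)
    ssy = sum((v - my) ** 2 for v in y)
    sp2 = (ssx + ssy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

def ranks(values):
    """Ranks (1-based) of a combined sample; continuous data assumed,
    so ties are negligible and broken by sort order."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0.0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = float(rank)
    return r

def rejection_rate(n1, sd1, n2, sd2, reps=2000, crit=2.01, seed=0, on_ranks=False):
    """Monte Carlo rejection rate of the two-sided pooled t-test with equal
    means (so any excess over the nominal level reflects Type I error
    inflation from heteroscedasticity). crit approximates the t critical
    value at the 5% level for the relevant degrees of freedom."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = [rng.gauss(0.0, sd1) for _ in range(n1)]
        y = [rng.gauss(0.0, sd2) for _ in range(n2)]
        if on_ranks:
            r = ranks(x + y)
            x, y = r[:n1], r[n1:]
        if abs(pooled_t(x, y)) > crit:
            hits += 1
    return hits / reps
```

The familiar pattern the abstract mentions shows up directly: with equal variances the rejection rate sits near the nominal 5%, while pairing the larger variance with the smaller sample inflates it well above the nominal level.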

15.
This paper explores testing procedures for response-related incomplete data, with particular attention to pseudolikelihood ratio tests. We construct pseudolikelihood functions in which the biased observations are supplemented by auxiliary information, without specifying the association between the primary variables and the auxiliary variables. The asymptotic distributions of the test statistics under the null hypothesis are derived, and finite-sample properties of the testing procedures are examined via simulation. The methodology is illustrated with an example involving the evaluation of kindergarten readiness skills in children with sickle cell disease.

16.
17.
One of the main problems in geostatistics is fitting a valid variogram or covariogram model to describe the underlying dependence structure in the data. The dependence between observations can also be modeled in the spectral domain, but traditional methods based on the periodogram as an estimator of the spectral density may present problems in the spatial case. In this work, we propose an estimation method for the covariogram parameters based on the fast Fourier transform (FFT) of biased covariances. The performance of this estimator in finite samples is compared, through a simulation study, with classical spatial-domain methods such as weighted least squares and maximum likelihood, as well as with other spectral estimators. An application to real data is also given.

18.
Several authors have conjectured, on the basis of their numerical work, that the maximum likelihood estimators of the shape and scale parameters of the Gamma distribution are positively biased. We prove that this conjecture is always true.

19.
In two-sample comparison problems it is often of interest to examine whether one distribution function majorises the other, that is, for the presence of stochastic ordering. This paper develops a nonparametric test for stochastic ordering from size-biased data, allowing the pattern of the size bias to differ between the two samples. The test is formulated in terms of a maximally selected local empirical likelihood statistic. A Gaussian multiplier bootstrap is devised to calibrate the test. Simulation results show that the proposed test outperforms an analogous Wald-type test, and that it provides substantially greater power over ignoring the size bias. The approach is illustrated using data on blood alcohol concentration of drivers involved in car accidents, where the size bias is due to drunker drivers being more likely to be involved in accidents. Further, younger drivers tend to be more affected by alcohol, so in making comparisons with older drivers the analysis is adjusted for differences in the patterns of size bias.
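The basic adjustment underlying size-biased inference can be illustrated with an inverse-weighted empirical distribution function: if observation x was sampled with probability proportional to a known biasing function w(x), down-weighting each point by 1/w(x) recovers an estimate of the underlying distribution. This is a generic Horvitz-Thompson-style sketch, not the paper's empirical likelihood test; all names are illustrative.

```python
def sb_ecdf(sample, w):
    """ECDF estimate from size-biased data (sketch). Each observation x is
    assumed to have been sampled with probability proportional to w(x), so
    it is down-weighted by 1/w(x) and the weights are renormalized."""
    inv = [1.0 / w(x) for x in sample]
    total = sum(inv)
    def cdf(t):
        return sum(v for x, v in zip(sample, inv) if x <= t) / total
    return cdf
```

With w constant this reduces to the ordinary ECDF; letting w differ between the two samples is the analogue of the paper's allowance for sample-specific bias patterns when testing for stochastic ordering.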

20.
We examine the rationale of prospective logistic regression analysis for pair-matched case-control data using explicit, parametric terms for the matching variables in the model. We show that this approach can yield inconsistent estimates of the disease-exposure odds ratio, even in large samples. Some special conditions are given under which the bias in the disease-exposure odds ratio is small. Because these conditions are not too uncommon, this flawed analytic method can appear (unreasonably) effective.

