Similar Literature (20 records)
1.
Urn models are popular for response adaptive designs in clinical studies. Among different urn models, Ivanova's drop-the-loser rule is capable of producing superior adaptive treatment allocation schemes. Ivanova [2003. A play-the-winner-type urn model with reduced variability. Metrika 58, 1–13] established asymptotic normality only for two treatments. Recently, Zhang et al. [2007. Generalized drop-the-loser urn for clinical trials with delayed responses. Statist. Sinica, in press] extended the drop-the-loser rule to tackle more general circumstances. However, their discussion is also limited to only two treatments. In this paper, the drop-the-loser rule is generalized to multi-treatment clinical trials, and delayed responses are allowed. Moreover, the rule can be used to target any desired pre-specified allocation proportion. Asymptotic properties, including strong consistency and asymptotic normality, are also established for general multi-treatment cases.
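As a concrete illustration of the mechanism described above, the following is a minimal simulation sketch of the drop-the-loser urn in the two-treatment case (the multi-treatment generalization follows the same pattern). The success probabilities, the unit immigration rate, and the initial urn composition are illustrative assumptions, not values from the paper.

```python
import random

def drop_the_loser(n_patients, success_probs, seed=0):
    """Simulate the drop-the-loser urn for K treatments.

    The urn holds treatment balls (one type per arm) plus one
    immigration ball.  Drawing a treatment ball assigns the next
    patient to that arm; the ball is replaced on success and dropped
    on failure.  Drawing the immigration ball adds one ball of every
    treatment type (no patient is treated on that draw).
    """
    rng = random.Random(seed)
    K = len(success_probs)
    urn = [1] * K + [1]          # K treatment ball counts + 1 immigration ball
    assigned = [0] * K
    while sum(assigned) < n_patients:
        draw = rng.randrange(sum(urn))
        acc = 0
        for t, count in enumerate(urn):   # locate the drawn ball type
            acc += count
            if draw < acc:
                break
        if t == K:                # immigration ball: replenish every arm
            for i in range(K):
                urn[i] += 1
        else:                     # treatment ball: treat one patient
            assigned[t] += 1
            if rng.random() >= success_probs[t]:
                urn[t] -= 1       # failure: the ball is dropped
    return assigned

alloc = drop_the_loser(2000, [0.7, 0.4])
```

With these hypothetical success rates, the better arm accumulates the larger share of the 2000 patients, matching the rule's intended ethical behavior.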

2.
Ranked set sample sign test for quantiles
A ranked set sample version of the sign test is proposed for testing hypotheses concerning the quantiles of a population characteristic. Both equal and unequal allocations are considered, and the relative performance of different allocations is assessed in terms of Pitman's asymptotic relative efficiency. In particular, for each quantile, the allocation that maximizes the efficacy is identified and shown not to depend on the population distribution.
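To make the sampling scheme concrete, here is a minimal sketch of an equal-allocation ranked set sample and the sign statistic computed from it. Perfect judgment ranking, a standard normal population, and a test of the median at q0 = 0 are all illustrative assumptions; the paper's efficacy comparisons are not reproduced here.

```python
import random

def ranked_set_sample(pop_draw, set_size, cycles, rng):
    """Equal-allocation ranked set sample: in each cycle, for each rank r,
    draw `set_size` units, sort them (perfect ranking assumed), and keep
    only the r-th order statistic."""
    sample = []
    for _ in range(cycles):
        for r in range(set_size):
            units = sorted(pop_draw(rng) for _ in range(set_size))
            sample.append(units[r])
    return sample

def rss_sign_statistic(sample, q0):
    """Sign statistic: the number of observations exceeding the
    hypothesized quantile q0."""
    return sum(1 for x in sample if x > q0)

rng = random.Random(1)
sample = ranked_set_sample(lambda r: r.gauss(0.0, 1.0),
                           set_size=3, cycles=100, rng=rng)
n = len(sample)                            # 3 ranks x 100 cycles = 300
stat = rss_sign_statistic(sample, q0=0.0)  # true median is 0 here
```

Under the null, the statistic concentrates near n/2; the RSS version has smaller variance than its simple-random-sample counterpart, which is the source of the efficiency gains discussed in the abstract.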

3.
In the present work, we formulate a two-treatment, single-period, two-stage adaptive allocation design for achieving a larger allocation proportion to the better treatment arm in the course of the trial, with increased precision of the parameter estimator. We examine some properties of the proposed rule, compare it with some existing allocation rules, and report a substantial gain in efficiency together with a considerably larger number of allocations to the better treatment, even for moderate sample sizes.

4.
In this paper, we study the bioequivalence (BE) inference problem motivated by pharmacokinetic data that were collected using the serial sampling technique. In serial sampling designs, subjects are independently assigned to one of two drugs; each subject can be sampled only once, and data are collected at K distinct timepoints from multiple subjects. We consider design and hypothesis testing for the parameter of interest: the area under the concentration–time curve (AUC). Decision rules for demonstrating BE were established using an equivalence test for either the ratio or the logarithmic difference of two AUCs. The proposed t-test can deal with cases where the two AUCs have unequal variances. To control the type I error rate, the degrees of freedom involved were adjusted using Satterthwaite's approximation. A power formula was derived to allow the determination of necessary sample sizes. Simulation results show that, when the two AUCs have unequal variances, the type I error rate is better controlled by the proposed method than by a method that handles only equal variances. We also propose an unequal subject allocation method that improves the power relative to equal and symmetric allocation. The methods are illustrated using practical examples.
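The core computation behind such a serial-sampling AUC analysis can be sketched as follows: a trapezoidal-weight point estimate, its variance from independent per-timepoint samples, and a Satterthwaite degrees-of-freedom adjustment. All numbers are illustrative, and the equivalence decision rule itself (ratio or log-difference TOST) is not reproduced; this only shows the variance and df machinery.

```python
def auc_trapezoid_weights(times):
    """Trapezoidal weights so that AUC ~ sum_k w_k * mean_concentration_k."""
    K = len(times)
    w = [0.0] * K
    for k in range(K):
        lo = times[k - 1] if k > 0 else times[k]
        hi = times[k + 1] if k < K - 1 else times[k]
        w[k] = (hi - lo) / 2.0
    return w

def serial_auc(times, means, sds, ns):
    """Point estimate, variance, and Satterthwaite df of the AUC from
    serial-sampling summary data (independent subjects per timepoint)."""
    w = auc_trapezoid_weights(times)
    auc = sum(wk * m for wk, m in zip(w, means))
    comps = [wk ** 2 * s ** 2 / n for wk, s, n in zip(w, sds, ns)]
    var = sum(comps)
    # Satterthwaite: df of a weighted sum of independent sample variances
    df = var ** 2 / sum(c ** 2 / (n - 1) for c, n in zip(comps, ns))
    return auc, var, df

# hypothetical concentration summaries at 5 timepoints, 6 subjects each
times = [0.5, 1, 2, 4, 8]
auc, var, df = serial_auc(times,
                          means=[3.1, 5.2, 4.0, 2.2, 0.8],
                          sds=[0.9, 1.1, 1.0, 0.7, 0.4],
                          ns=[6, 6, 6, 6, 6])
```

The same construction applied to each drug gives two AUC estimates whose difference can be tested with a Welch-type t statistic on the combined Satterthwaite df.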

5.
For time‐to‐event data, the power of the two sample logrank test for the comparison of two treatment groups can be greatly influenced by the ratio of the number of patients in each of the treatment groups. Despite the possible loss of power, unequal allocations may be of interest due to a need to collect more data on one of the groups or to considerations related to the acceptability of the treatments to patients. Investigators pursuing such designs may be interested in the cost of the unbalanced design relative to a balanced design with respect to the total number of patients required for the study. We present graphical displays to illustrate the sample size adjustment factor, or ratio of the sample size required by an unequal allocation to the sample size required by a balanced allocation, for various survival rates, treatment hazard ratios, and sample size allocation ratios. These graphical displays conveniently summarize information in the literature and provide a useful tool for planning sample sizes for the two sample logrank test. Copyright © 2010 John Wiley & Sons, Ltd.
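The leading-order version of the adjustment factor described above follows from Schoenfeld's events formula, in which the required number of events scales as 1/(π(1−π)) for allocation proportion π. The sketch below computes this factor for an r:1 allocation; it ignores the dependence on survival rates that the paper's graphical displays also capture.

```python
def allocation_adjustment_factor(ratio):
    """Approximate sample-size inflation of an r:1 allocation relative to
    1:1 for the two-sample logrank test, from Schoenfeld's formula:
    required events scale as 1/(pi*(1-pi)) with pi = r/(1+r), so the
    factor relative to the balanced design is 0.25/(pi*(1-pi))."""
    pi = ratio / (1.0 + ratio)
    return 0.25 / (pi * (1.0 - pi))

factor_2_to_1 = allocation_adjustment_factor(2.0)  # (1+2)^2/(4*2) = 1.125
factor_3_to_1 = allocation_adjustment_factor(3.0)  # (1+3)^2/(4*3) = 4/3
```

So a 2:1 allocation needs roughly 12.5% more patients than a balanced design, and a 3:1 allocation roughly a third more, which is the kind of planning trade-off the graphical displays summarize.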

6.
Using 1998 and 1999 singleton birth data from the State of Florida, we study the stability of classification trees. Tree stability depends on both the learning algorithm and the specific data set. In this study, test samples are used in statistical learning to evaluate both stability and predictive performance. We also use the bootstrap resampling technique, which can be regarded as self-perturbation of the data, to evaluate the sensitivity of the modeling algorithm to the specific data set. We demonstrate that the choice of cost function plays an important role in stability. In particular, classifiers with equal misclassification costs and equal priors are less stable than those with unequal misclassification costs and equal priors.

7.
A multi-arm response-adaptive allocation design is developed for circular treatment outcomes. Several exact and asymptotic properties of the design are studied. Stage-wise treatment selection procedures based on the proposed response-adaptive design are also suggested to exclude the worse-performing treatment(s) at earlier stages. A detailed simulation study is carried out to evaluate the proposed selection procedures. The applicability of the proposed methodologies is illustrated with real data from a clinical trial on cataract surgery.

8.
In recent years, adaptive designs have become popular in the context of clinical trials. The purpose of the present work is to provide a sequential two-treatment allocation rule for use when the response variables are continuous. The rule is ethical and, in some cases, optimal, depending upon the nature of the distribution of the study variables. We examine the various properties of the rule.

9.

The power of Pearson's chi-square test for uniformity depends heavily on the choice of the partition of the unit interval used in the test statistic. We propose a selection rule that chooses a proper partition based on the data. This selection rule usually leads to essentially unequal cells, well suited to the observed distribution. We investigate the corresponding data-driven chi-square test and present a Monte Carlo simulation study. The conclusion is that this test achieves high and very stable power for a large class of alternatives, and is much more stable than any of the other tests in the comparison.

10.
Stochastic Models, 2013, 29(1): 109–137
The well-known Move-to-Front rule for self-organizing lists is modified to a so-called Move-to-Partner rule for dynamic task allocation on a chain of processors. We enable an analysis of the sequence of expected communication costs by reducing their computation to that of three-dimensional arrays of certain probabilities, for which a recursion formula and an asymptotic expansion can be given. The costs depend on the underlying communication profile; two types of such profiles are investigated in more detail, namely a two-dimensional version of Zipf's law, and profiles based on a class structure of tasks. In the considered cases, the Move-to-Partner rule effects fast convergence to a stationary state, but comparably high expected stationary costs.
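For readers unfamiliar with the baseline rule being modified, here is a minimal sketch of the classical Move-to-Front list with per-request access costs. The Move-to-Partner variant on a processor chain is the paper's actual subject and is not implemented here; the item names and request sequence are illustrative.

```python
def move_to_front(items, requests):
    """Serve each request from a self-organizing list: the cost of a
    request is the (1-based) position of the requested item, which is
    then moved to the front of the list."""
    lst = list(items)
    costs = []
    for r in requests:
        pos = lst.index(r)           # 0-based search cost
        costs.append(pos + 1)
        lst.insert(0, lst.pop(pos))  # move the accessed item to the front
    return costs, lst

costs, final = move_to_front(["a", "b", "c", "d"],
                             ["d", "d", "b", "a", "d"])
# costs == [4, 1, 3, 3, 3]; frequently requested items migrate forward
```

The self-organizing behavior, in which frequently requested items drift toward cheap positions, is what the Move-to-Partner rule transfers to task placement on a chain of processors.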

11.
A technique of systematically allocating a sample to the strata formed by double stratification is presented. The method can proportionally allocate the sample along each variable of stratification. If there are R strata and C strata for the first and second variables of stratification, respectively, the technique requires that the total sample size be at least as large as max(R, C). An unbiased estimator of the population mean is given and its variance is obtained. The technique is compared with a random allocation procedure given by Bryant, Hartley, and Jessen (1960). Numerical examples are given suggesting when one technique is superior to the other.

12.
To reduce the loss of efficiency that arises in the Neyman allocation when estimators are used in place of the unknown population strata standard deviations, we suggest a compromise allocation in which the Neyman allocation, computed from an estimator of the pooled standard deviation of the combined strata, is used together with the proportional allocation. It is shown that the compromise allocation yields a more efficient estimator than either the proportional allocation or the Neyman allocation based on the estimated strata standard deviations. A simulation study is carried out for numerical comparison, and the results are reported.
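The two ingredients being combined can be sketched as follows. Note the hedges: the paper's compromise uses a pooled standard deviation across combined strata, whereas this sketch mixes a Neyman-type allocation (from per-stratum estimated SDs) with the proportional allocation via a hypothetical mixing parameter `weight`; all numbers are illustrative.

```python
def proportional_allocation(n, N_h):
    """n_h proportional to stratum size N_h."""
    total = sum(N_h)
    return [n * N / total for N in N_h]

def neyman_allocation(n, N_h, S_h):
    """n_h proportional to N_h * S_h (stratum size times SD)."""
    denom = sum(N * S for N, S in zip(N_h, S_h))
    return [n * N * S / denom for N, S in zip(N_h, S_h)]

def compromise_allocation(n, N_h, s_h, weight=0.5):
    """Illustrative compromise: a convex combination of the proportional
    allocation and a Neyman-type allocation computed from estimated
    stratum SDs s_h.  `weight` is a hypothetical mixing parameter, not
    a quantity taken from the paper."""
    prop = proportional_allocation(n, N_h)
    ney = neyman_allocation(n, N_h, s_h)
    return [weight * p + (1 - weight) * q for p, q in zip(prop, ney)]

alloc = compromise_allocation(100, N_h=[500, 300, 200], s_h=[2.0, 6.0, 10.0])
```

The compromise damps the sensitivity of the pure Neyman allocation to noisy SD estimates while retaining some of its variance reduction over the purely proportional scheme.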

13.
We study the design of multi-armed parallel group clinical trials to estimate personalized treatment rules that identify the best treatment for a given patient with given covariates. Assuming that the outcomes in each treatment arm are given by a homoscedastic linear model, with possibly different variances between treatment arms, and that the trial subjects form a random sample from an unselected overall population, we optimize the (possibly randomized) treatment allocation allowing the allocation rates to depend on the covariates. We find that, for the case of two treatments, the approximately optimal allocation rule does not depend on the value of the covariates but only on the variances of the responses. In contrast, for the case of three treatments or more, the optimal treatment allocation does depend on the values of the covariates as well as the true regression coefficients. The methods are illustrated with a recently published dietary clinical trial.

14.
In stratified sample surveys, the problem of determining the optimum allocation is well known due to articles published in 1923 by Tschuprow and in 1934 by Neyman. The articles suggest the optimum sample sizes to be selected from each stratum, for which the sampling variance of the estimator is minimum for a fixed total cost of the survey, or the cost is minimum for a fixed precision of the estimator. If in a sample survey more than one characteristic is to be measured on each selected unit of the sample, that is, the survey is a multi-response survey, then the problem of determining the optimum sample sizes for the various strata becomes more complex because of the non-availability of a single optimality criterion that suits all the characteristics. Many authors have discussed compromise criteria that provide a compromise allocation, which is optimum for all characteristics, at least in some sense. Almost all of these authors worked out the compromise allocation by minimizing some function of the sampling variances of the estimators under a single cost constraint. A serious objection to this approach is that the variances are not unit free, so that minimizing any function of variances may not be an appropriate objective for obtaining a compromise allocation. This fact suggests the use of coefficients of variation instead of variances. In the present article, the problem of compromise allocation is formulated as a multi-objective non-linear programming problem. By linearizing the non-linear objective functions at their individual optima, the problem is approximated by an integer linear programming problem. The goal programming technique is then used to obtain a solution to the approximated problem.
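The classical single-characteristic optimum referred to above can be written compactly: with per-stratum unit costs c_h, the Tschuprow–Neyman allocation takes n_h proportional to N_h·S_h/√c_h. A minimal sketch, with illustrative strata:

```python
import math

def optimum_allocation(n, N_h, S_h, c_h=None):
    """Tschuprow-Neyman optimum allocation: n_h proportional to
    N_h * S_h / sqrt(c_h).  With equal unit costs this reduces to the
    classical Neyman allocation n_h = n * N_h * S_h / sum(N_k * S_k)."""
    if c_h is None:
        c_h = [1.0] * len(N_h)
    scores = [N * S / math.sqrt(c) for N, S, c in zip(N_h, S_h, c_h)]
    total = sum(scores)
    return [n * s / total for s in scores]

# equal costs: the stratum with the largest N_h * S_h gets the most units
n_h = optimum_allocation(200, N_h=[400, 400, 200], S_h=[1.0, 3.0, 8.0])
```

The multi-response difficulty discussed in the abstract arises exactly because each characteristic produces its own S_h vector, hence its own optimum n_h, and no single allocation is optimal for all of them simultaneously.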

15.
Let π0, π1, …, πk be k+1 independent populations. For i = 0, 1, …, k, πi has the density f(x, θi), where the (unknown) parameter θi belongs to an interval of the real line. Our goal is to select from π1, …, πk (experimental treatments) those populations, if any, that are better (suitably defined) than π0, the control population. A locally optimal rule is derived in the class of rules for which Pr(πi is selected) ≥ γi, i = 1, …, k, when θ0 = θ1 = ⋯ = θk. The criterion used for local optimality amounts to maximizing the efficiency, in a certain sense, of the rule in picking out the superior populations for specific configurations of θ = (θ0, …, θk) in a neighborhood of an equiparameter configuration. The general result is then applied to the following special cases: (a) comparison of normal means with a common known variance; (b) comparison of normal means with a common unknown variance; (c) comparison of gamma scale parameters with known (unequal) shape parameters; and (d) comparison of regression slopes. In all these cases, the rule is obtained based on samples of unequal sizes.

16.
The variance of the Horvitz–Thompson estimator for a fixed size Conditional Poisson sampling scheme without replacement and with unequal inclusion probabilities is compared to the variance of the Hansen–Hurwitz estimator for a sampling scheme with replacement. We show, using a theorem by Gabler, that the sampling design without replacement is more efficient than the sampling design with replacement.
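For reference, the two estimators being compared can be sketched as follows; Gabler's comparison theorem itself is not implemented, and the numbers are illustrative.

```python
def horvitz_thompson(sample_ys, sample_pis):
    """Horvitz-Thompson estimator of a population total for
    without-replacement sampling: weight each distinct sampled value
    by the inverse of its inclusion probability pi_i."""
    return sum(y / pi for y, pi in zip(sample_ys, sample_pis))

def hansen_hurwitz(draw_ys, draw_ps, n_draws):
    """Hansen-Hurwitz estimator of a population total for
    with-replacement sampling: average the draw-level expansions
    y_i / p_i over the n draws (p_i = single-draw selection prob.)."""
    return sum(y / p for y, p in zip(draw_ys, draw_ps)) / n_draws

# illustrative check: both are design-unbiased expansions of a total
total_ht = horvitz_thompson([10.0, 20.0, 30.0], [0.5, 0.5, 0.5])     # 120.0
total_hh = hansen_hurwitz([10.0, 20.0, 30.0], [1/3, 1/3, 1/3], 3)    # 60.0
```

The paper's result says that, for matched inclusion probabilities, the without-replacement (Conditional Poisson) design makes the first estimator's variance no larger than the second's.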

17.
Let (X1,…,Xk) be a multinomial vector with unknown cell probabilities (p1,…,pk). A subset of the cells is to be selected in such a way that the cell associated with the smallest cell probability is included in the selected subset with a preassigned probability, P1. Suppose the loss is measured by the size of the selected subset, S. Using linear programming techniques, selection rules can be constructed which are minimax with respect to S in the class of rules which satisfy the P1-condition. In some situations, the rule constructed by this method is the rule proposed by Nagel (1970). Similar techniques also work for selection in terms of the largest cell probability.

18.
Sampford sampling is a method for unequal probability sampling. There exist several implementations of the Sampford sampling design, all of which are rejective methods, i.e. the sample is not always accepted. Thus the existing methods can be time-consuming or even infeasible in some situations. In this paper, a fast and non-rejective list-sequential method, which works in all situations, is presented. The method is a modification of a previously existing rejective list-sequential method. Another list-sequential implementation of Sampford sampling is also presented.
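To see why the existing implementations can be slow, here is a sketch of the classical rejective procedure (Sampford, 1967), which the paper's non-rejective list-sequential method avoids: candidate samples are drawn with replacement and discarded until all units are distinct. The inclusion probabilities below are illustrative and must sum to the sample size n, with every pi_i < 1.

```python
import random

def weighted_draw(probs, rng):
    """Draw an index with the given probabilities (inverse-CDF)."""
    u, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if u < acc:
            return i
    return len(probs) - 1

def sampford_sample(pi, n, rng):
    """Rejective Sampford design: draw the first unit with probabilities
    pi_i / n and the remaining n-1 units with probabilities proportional
    to pi_i / (1 - pi_i), all with replacement; accept the candidate
    sample only if the n drawn units are distinct."""
    assert abs(sum(pi) - n) < 1e-9, "inclusion probabilities must sum to n"
    q = [p / (1.0 - p) for p in pi]
    qsum = sum(q)
    while True:                        # rejection loop: the slow part
        draws = [weighted_draw([p / n for p in pi], rng)]
        for _ in range(n - 1):
            draws.append(weighted_draw([x / qsum for x in q], rng))
        if len(set(draws)) == n:
            return sorted(draws)

rng = random.Random(7)
pi = [0.2, 0.4, 0.4, 0.6, 0.4]         # sums to n = 2
s = sampford_sample(pi, 2, rng)
```

The acceptance probability of this loop can become very small for some inclusion-probability vectors, which is exactly the infeasibility the abstract refers to and the motivation for a non-rejective method.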

19.
Consider a longitudinal experiment where subjects are allocated to one of two treatment arms and are subjected to repeated measurements over time. Two non-parametric group sequential procedures, based on the Wilcoxon rank sum test and fitted with asymptotically efficient allocation rules, are derived to test the equality of the rates of change over time of the two treatments when the distribution of responses is unknown. The procedures are designed to allow for early stopping to reject the null hypothesis while allocating fewer subjects to the inferior treatment. Simulations – based on the normal, the logistic and the exponential distributions – showed that the proposed allocation rules substantially reduce allocations to the inferior treatment, but at the expense of a relatively small increase in the total sample size and a moderate decrease in power as compared to the pairwise allocation rule.

20.
A supersaturated design (SSD) is a design whose run size is not sufficient for estimating all the main effects. The goal in conducting such a design is to identify, presumably only a few, relatively dominant active effects at as low a cost as possible. However, data analysis of such designs remains primitive: traditional approaches are not appropriate in this situation, and several methods proposed in the literature in recent years are effective only for analyzing two-level SSDs. In this paper, we introduce a variable selection procedure, called the PLSVS method, to screen active effects in mixed-level SSDs based on the variable importance in projection, an important concept in partial least-squares regression. Simulation studies show that this procedure is effective.
