Similar Literature (20 related references found)
1.
In the present work, where the response variables are binary, we frame an adaptive allocation rule for a two-treatment, two-period crossover design in the presence of possible carry-over effects. The proposed rule is a combination of the play-the-winner and randomized play-the-winner rules. We study various properties of the proposed rule through asymptotics and simulations. Some related inferential problems are also considered. The proposed procedure is compared with some possible competitors.
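For context, a minimal sketch of the classical randomized play-the-winner urn for binary responses (not the combined rule proposed above; the initial ball count, reinforcement size, and success probabilities are purely illustrative):

```python
import random

def rpw_allocate(response_a, response_b, n_patients, alpha=1, beta=1, seed=0):
    """Classical RPW(alpha, beta) urn: draw a ball to choose the treatment for the
    next patient; a success adds beta balls of the same type, a failure adds beta
    balls of the other type. response_a/response_b return 1 (success) or 0 (failure)."""
    rng = random.Random(seed)
    urn = {"A": alpha, "B": alpha}                 # initial ball counts
    assignments = []
    for _ in range(n_patients):
        arm = "A" if rng.random() < urn["A"] / (urn["A"] + urn["B"]) else "B"
        outcome = response_a() if arm == "A" else response_b()
        other = "B" if arm == "A" else "A"
        urn[arm if outcome else other] += beta     # reinforce the arm that just worked
        assignments.append(arm)
    return assignments

# toy usage: treatment A succeeds 70% of the time, B 40%; A should be assigned more often
alloc = rpw_allocate(lambda: random.random() < 0.7,
                     lambda: random.random() < 0.4, n_patients=200)
print(alloc.count("A"), alloc.count("B"))
```

Over many patients the urn drifts toward the treatment with the higher success probability, which is also the behaviour the combined rule above aims for.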

2.
Adaptive designs for clinical trials that are based on a generalization of the “play-the-winner” rule are considered as an alternative to previously developed models. Theoretical and numerical results show that these designs perform better under the usual criteria. Bayesian methods are proposed for the statistical analysis of these designs.

3.
For comparing two treatment effects in a clinical trial under a univariate set-up, a sampling design called the randomized play-the-winner (RPW) rule was used by different authors (see, e.g., Wei, 1979, 1988; Wei and Durham, 1978). The objective of using such a rule was to allocate more patients to the better treatment. The present work suggests a bivariate version of the RPW rule. The rule is used to propose some sequential-type nonparametric tests for the equivalence of two bivariate treatment effects against a class of restricted alternatives. Some exact and asymptotic results associated with the tests are studied and examined. The limiting proportions of allocations to the two treatments are also obtained.

4.
In a response-adaptive design, we review and update the trial on the basis of outcomes in order to achieve a specific goal. Response-adaptive designs for clinical trials are usually constructed to achieve a single objective. In this paper, we develop a new adaptive allocation rule to improve current strategies for building response-adaptive designs to construct multiple-objective repeated measurement designs. This new rule is designed to increase estimation precision and treatment benefit by assigning more patients to a better treatment sequence. We demonstrate that designs constructed under the new proposed allocation rule can be nearly as efficient as fixed optimal designs in terms of the mean squared error, while leading to improved patient care.

5.
Dynamic treatment strategies are designed to change treatments over time in response to intermediate outcomes. They can be deployed for primary treatment as well as for the introduction of adjuvant treatment or other treatment-enhancing interventions. When treatment interventions are delayed until needed, more cost-efficient strategies will result. Sequential multiple assignment randomized (SMAR) trials allow for unbiased estimation of the marginal effects of different sequences of history-dependent treatment decisions. Because a single SMAR trial enables evaluation of many different dynamic regimes at once, it is naturally thought to require larger sample sizes than the parallel randomized trial. In this paper, we compare power between SMAR trials studying a regime, where treatment boosting enters when triggered by an observed event, versus the parallel design, where a treatment boost is consistently prescribed over the entire study period. In some settings, we found that the dynamic design yields the more efficient trial for the detection of treatment activity. We develop one particular trial to compare a dynamic nursing intervention with telemonitoring for the enhancement of medication adherence in epilepsy patients. To this end, we derive from the SMAR trial data either an average of conditional treatment effects (‘conditional estimator’) or the population-averaged (‘marginal’) estimator of the dynamic regimes. Analytical sample size calculations for the parallel design and the conditional estimator are compared with simulated results for the population-averaged estimator. We conclude that in specific settings, well-chosen SMAR designs may require fewer data for the development of more cost-efficient treatment strategies than parallel designs.

6.
If nonresponse and/or untruthful answering mechanisms occur, analyzing only the available cases may substantially weaken the validity of sample results. The paper starts by relating the strategies empirical social researchers use to secure respondent cooperation in surveys to the statistical techniques of randomized response, embedding the latter in this framework. Further, multi-stage randomized response techniques are incorporated into the standardized randomized response technique for estimating proportions. In addition to already existing questioning designs of this family of methods, this generalization also includes several (in particular, two-stage) techniques that have not been published before. The statistical properties of this generalized design are discussed for all probability sampling designs. Further, the efficiency of the model is presented as a function of privacy protection. Hence, it can be shown that no multi-stage design of this family can, at the same level of privacy protection, theoretically be more efficient than its one-stage basic version.
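The multi-stage generalization itself is not spelled out in this abstract; as a baseline, a minimal sketch of the one-stage Warner randomized response estimator for a proportion under simple random sampling (the design probability p and the simulated trait prevalence below are illustrative):

```python
import random

def warner_estimate(answers, p):
    """One-stage Warner randomized response: each respondent answers the sensitive
    question with probability p and its negation with probability 1 - p.
    `answers` is the list of observed yes(1)/no(0) responses; returns the estimated
    sensitive proportion and its estimated variance."""
    n = len(answers)
    lam_hat = sum(answers) / n                      # observed proportion of "yes"
    pi_hat = (lam_hat - (1 - p)) / (2 * p - 1)      # unbiased estimator of the sensitive proportion
    var_hat = lam_hat * (1 - lam_hat) / (n * (2 * p - 1) ** 2)
    return pi_hat, var_hat

# toy usage: true sensitive proportion 0.3, design probability p = 0.7
random.seed(1)
true_pi, p = 0.3, 0.7
answers = []
for _ in range(5000):
    carries_trait = random.random() < true_pi
    asks_sensitive = random.random() < p            # the randomizing device
    answers.append(int(carries_trait if asks_sensitive else not carries_trait))
print(warner_estimate(answers, p))
```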

7.
The efficient design of experiments for comparing a control with v new treatments when the data are dependent is investigated. We concentrate on generalized least-squares estimation for a known covariance structure. We consider block sizes k equal to 3 or 4 and approximate designs. This method may lead to exact optimal designs for some v, b, k, but usually will only indicate the structure of an efficient design for any particular v, b, k and yield an efficiency bound that is usually unattainable. The bound and the structure can then be used to investigate efficient finite designs.
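As a reminder of the estimator the design criteria act on, a minimal generalized least-squares sketch for a known covariance structure (the block layout, AR(1)-type correlation, and numbers below are invented for illustration):

```python
import numpy as np

def gls(X, y, V):
    """Generalized least squares with known covariance V:
    beta_hat = (X' V^{-1} X)^{-1} X' V^{-1} y, returned with its covariance matrix."""
    Vinv = np.linalg.inv(V)
    info = X.T @ Vinv @ X                          # information matrix
    beta_hat = np.linalg.solve(info, X.T @ Vinv @ y)
    return beta_hat, np.linalg.inv(info)

# toy usage: a single block of size 3 (control plus two test treatments)
# with an AR(1)-type within-block correlation of 0.5
X = np.array([[1.0, 0.0, 0.0],      # control
              [1.0, 1.0, 0.0],      # test treatment 1 contrast
              [1.0, 0.0, 1.0]])     # test treatment 2 contrast
rho = 0.5
V = np.array([[1, rho, rho**2], [rho, 1, rho], [rho**2, rho, 1]])
y = np.array([0.5, 1.2, 0.8])
beta_hat, cov = gls(X, y, V)
print(beta_hat)
print(cov)
```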

8.
Conjoint choice experiments have become a powerful tool to explore individual preferences. The consistency of respondents' choices depends on the choice complexity: for example, it is easier to choose between two alternatives with few attributes than between five alternatives with several attributes. In the latter case it is much harder to identify the preferred alternative, which is reflected in a higher response error. Several authors have dealt with this choice complexity in the estimation stage, but very little attention has been paid to setting up designs that take this complexity into account. The core issue of this paper is to find out whether it is worthwhile to take this complexity into account at the design stage. We construct efficient semi-Bayesian D-optimal designs for the heteroscedastic conditional logit model, which is used to model the across-respondent variability that arises from choice complexity. The degree of complexity is measured by the entropy, as suggested by Swait and Adamowicz (2001). The proposed designs are compared with a semi-Bayesian D-optimal design constructed without taking the complexity into account. The simulation study shows that it is much better to take the choice complexity into account when constructing conjoint choice experiments.
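The entropy measure referred to above can be read, for a multinomial logit model, as the entropy of the predicted choice probabilities of a choice set; a minimal sketch under that reading (the utilities are invented):

```python
import math

def choice_set_entropy(utilities):
    """Entropy of a choice set under a multinomial logit model: alternatives with
    nearly equal utilities give high entropy, read here as higher choice complexity."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0)

# toy usage: a clear-cut choice set versus one with near-identical alternatives
print(choice_set_entropy([2.0, -1.0, -1.5]))   # low entropy, easy choice
print(choice_set_entropy([0.1, 0.0, 0.05]))    # close to log(3), hard choice
```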

9.
Customer slowdown describes the phenomenon that a customer’s service requirement increases with experienced delay. In healthcare settings, there is substantial empirical evidence for slowdown, particularly when a patient’s delay exceeds a certain threshold. For such threshold slowdown situations, we design and analyze a many-server system that leads to a two-dimensional Markov process. Analysis of this system leads to insights into the potentially detrimental effects of slowdown, especially in heavy-traffic conditions. We quantify the consequences of underprovisioning due to neglecting slowdown, demonstrate the presence of a subtle bistable system behavior, and discuss in detail the snowball effect: a delayed customer has an increased service requirement, causing longer delays for other customers, who in turn, because of slowdown, might require longer service times.
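This is not the authors' two-dimensional Markov model, but the snowball effect can be illustrated with a crude single-run simulation of a first-come-first-served many-server queue in which service slows once a customer's wait exceeds a threshold (all rates, the threshold, and the server count below are made up):

```python
import heapq
import random

def simulate_slowdown(n_servers, lam, mu_fast, mu_slow, threshold, n_customers, seed=0):
    """M/M/s-style FCFS simulation with threshold slowdown: a customer whose waiting
    time exceeds `threshold` draws its service time at the slower rate mu_slow.
    Returns the average waiting time over the run."""
    rng = random.Random(seed)
    free_at = [0.0] * n_servers          # time at which each server next becomes free
    heapq.heapify(free_at)
    t, waits = 0.0, []
    for _ in range(n_customers):
        t += rng.expovariate(lam)                        # next arrival time
        start = max(t, free_at[0])                       # earliest server becomes free
        wait = start - t
        rate = mu_slow if wait > threshold else mu_fast  # slowdown after the threshold
        service = rng.expovariate(rate)
        heapq.heapreplace(free_at, start + service)
        waits.append(wait)
    return sum(waits) / len(waits)

# toy usage: 10 servers near heavy traffic, with and without slowdown
print(simulate_slowdown(10, lam=9.0, mu_fast=1.0, mu_slow=0.8, threshold=0.5, n_customers=20000))
print(simulate_slowdown(10, lam=9.0, mu_fast=1.0, mu_slow=1.0, threshold=0.5, n_customers=20000))
```

Comparing the two runs typically shows a disproportionately larger mean wait once the slowdown rate is switched on near heavy traffic, the snowball effect discussed above.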

10.
Several researchers have proposed solutions to control the type I error rate in sequential designs. The use of Bayesian sequential designs is becoming more common; however, these designs are subject to inflation of the type I error rate. We propose a Bayesian sequential design for a binary outcome using an alpha-spending function to control the overall type I error rate. Algorithms are presented for calculating critical values and power for the proposed designs. We also propose a new stopping rule for futility. Sensitivity analysis is implemented for assessing the effects of varying the parameters of the prior distribution and the maximum total sample size on critical values. Alpha-spending functions are compared using power and actual sample size through simulations. Further simulations show that, when the total sample size is fixed, the proposed design has greater power than the traditional Bayesian sequential design, which sets equal stopping bounds at all interim analyses. We also find that the proposed design with the new stopping rule for futility results in greater power and can stop earlier with a smaller actual sample size, compared with the traditional stopping rule for futility when all other conditions are held constant. Finally, we apply the proposed method to a real data set and compare the results with traditional designs.
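The abstract does not state which spending functions are compared; as background, a minimal sketch of two standard Lan-DeMets-type spending functions and the incremental alpha they release at equally spaced interim looks (the number of looks and the overall alpha are illustrative):

```python
from math import erf, exp, log, sqrt
from statistics import NormalDist

def normal_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def obrien_fleming_spend(t, alpha=0.05):
    """O'Brien-Fleming-type spending: cumulative two-sided alpha spent at information fraction t."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return 2 * (1 - normal_cdf(z / sqrt(t)))

def pocock_spend(t, alpha=0.05):
    """Pocock-type spending function."""
    return alpha * log(1 + (exp(1) - 1) * t)

# incremental alpha released at four equally spaced looks
for spend in (obrien_fleming_spend, pocock_spend):
    prev, increments = 0.0, []
    for t in (0.25, 0.5, 0.75, 1.0):
        increments.append(spend(t) - prev)
        prev = spend(t)
    print(spend.__name__, [round(a, 5) for a in increments])
```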

11.
We find optimal designs for linear models using a novel algorithm that iteratively combines a semidefinite programming (SDP) approach with adaptive grid techniques. The proposed algorithm is also adapted to find locally optimal designs for nonlinear models. The search space is first discretized, and SDP is applied to find the optimal design based on the initial grid. The points in the next grid set are points that maximize the dispersion function of the SDP-generated optimal design, found using nonlinear programming. The procedure is repeated until a user-specified stopping rule is reached. The proposed algorithm is broadly applicable, and we demonstrate its flexibility using (i) models with one or more variables and (ii) differentiable design criteria, such as A- and D-optimality, and non-differentiable criteria, such as E-optimality, including the mathematically more challenging case in which the minimum eigenvalue of the information matrix of the optimal design has geometric multiplicity larger than 1. Our algorithm is computationally efficient because it is based on mathematical programming tools, so optimality is assured at each stage; it also exploits the convexity of the problems whenever possible. Using several linear and nonlinear models with one or more factors, we show that the proposed algorithm can efficiently find optimal designs.
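The SDP-plus-adaptive-grid algorithm is the paper's contribution; purely to illustrate the fixed-grid D-optimal step that the grid refinement wraps around, here is a standard multiplicative weight-update sketch (a common substitute for the SDP formulation, with an invented quadratic-regression example):

```python
import numpy as np

def d_optimal_weights(F, n_iter=5000, tol=1e-10):
    """Multiplicative algorithm for the D-optimal approximate design on a fixed grid.
    F is an (n_points x p) matrix whose rows are the regression vectors f(x_i).
    Not the paper's SDP formulation; a standard substitute used only for illustration."""
    n, p = F.shape
    w = np.full(n, 1.0 / n)                        # start from the uniform design
    for _ in range(n_iter):
        M = F.T @ (w[:, None] * F)                 # information matrix sum_i w_i f_i f_i^T
        d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)   # dispersion f_i^T M^{-1} f_i
        w_new = w * d / p                          # multiplicative update
        if np.max(np.abs(w_new - w)) < tol:
            break
        w = w_new
    return w

# toy usage: quadratic regression f(x) = (1, x, x^2) on a grid over [-1, 1]
grid = np.linspace(-1, 1, 41)
F = np.column_stack([np.ones_like(grid), grid, grid**2])
w = d_optimal_weights(F)
print(grid[w > 0.01], np.round(w[w > 0.01], 3))    # most mass should sit near x = -1, 0, 1
```

The dispersion values d computed inside the loop correspond to the dispersion function mentioned above, evaluated at the current grid points.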

12.
Urn models are popular for response-adaptive designs in clinical studies. Among different urn models, Ivanova's drop-the-loser rule is capable of producing superior adaptive treatment allocation schemes. Ivanova [2003. A play-the-winner-type urn model with reduced variability. Metrika 58, 1–13] obtained asymptotic normality only for two treatments. Recently, Zhang et al. [2007. Generalized drop-the-loser urn for clinical trials with delayed responses. Statist. Sinica, in press] extended the drop-the-loser rule to tackle more general circumstances; however, their discussion is also limited to two treatments. In this paper, the drop-the-loser rule is generalized to multi-treatment clinical trials, and delayed responses are allowed. Moreover, the rule can be used to target any desired pre-specified allocation proportion. Asymptotic properties, including strong consistency and asymptotic normality, are also established for general multi-treatment cases.
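For orientation, a minimal sketch of the original two-treatment drop-the-loser urn with immediate binary responses (not the delayed-response, multi-treatment generalization described above; the success probabilities are invented):

```python
import random

def drop_the_loser(success_prob, n_patients, seed=0):
    """Two-treatment drop-the-loser urn with immediate responses. The urn holds
    treatment balls 'A', 'B' and an immigration ball '0'. A failure removes the drawn
    treatment ball; a success returns it; drawing the immigration ball adds one ball
    of each treatment type."""
    rng = random.Random(seed)
    urn = ["A", "B", "0"]
    assignments = []
    while len(assignments) < n_patients:
        ball = rng.choice(urn)
        if ball == "0":
            urn.extend(["A", "B"])      # immigration: replenish both treatment types
            continue
        assignments.append(ball)
        if rng.random() >= success_prob[ball]:
            urn.remove(ball)            # drop the loser: a failure removes the ball
        # on success the drawn ball is simply returned (urn unchanged)
    return assignments

# toy usage: treatment A succeeds 70% of the time, B 40%
alloc = drop_the_loser({"A": 0.7, "B": 0.4}, n_patients=200)
print(alloc.count("A"), alloc.count("B"))   # the better arm A should be used more often
```

In the long run the allocation proportions are driven by the failure probabilities, so the better treatment receives more patients.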

13.
We consider the problem of constructing good two-level nonregular fractional factorial designs. The criteria of minimum G and G2 aberration are used to rank designs. A general design structure is utilized to provide a solution to this practical, yet challenging, problem. With the help of this design structure, we develop an efficient algorithm for obtaining a collection of good designs based on the aforementioned two criteria. Finally, we present some results for designs of 32 and 40 runs obtained from applying this algorithmic approach.

14.
Augmented Box–Behnken designs are used in situations in which Box–Behnken designs (BBDs) cannot estimate the response surface model because of the presence of third-order terms; however, these augmented designs are too large for experimental use. Experimenters usually prefer small response surface designs in order to save time, cost, and resources. Therefore, using combinations of fractional BBD points, factorial design points, axial design points, and complementary design points, we augment these designs and develop new third-order response surface designs known as augmented fractional BBDs (AFBBDs). These AFBBDs have fewer design points and are more efficient than augmented BBDs.

15.
The problem of comparing v test treatments simultaneously with a control treatment when k, v ⩾ 3 is considered. Following the work of Majumdar (1992), we use exact design theory to derive Bayes A-optimal block designs and optimal Γ-minimax designs under a more general prior assumption for the one-way elimination of heterogeneity model. Examples of robust optimal designs and highly efficient designs are provided, together with comparisons of the approximate optimal designs derived by our methods and by some other existing rounding-off schemes used with Owen's procedure.

16.
Crossover designs, or repeated measurements designs, are used for experiments in which t treatments are applied to each of n experimental units successively over p time periods. Such experiments are widely used in areas such as clinical trials, experimental psychology and agricultural field trials. In addition to the direct effect of the treatment on the response in the period of application, there is also the possible presence of a residual, or carry-over, effect of a treatment from one or more previous periods. We use a model in which the residual effect from a treatment depends upon the treatment applied in the succeeding period; that is, a model which includes interactions between the direct and residual treatment effects. We assume that residual effects do not persist further than one succeeding period. A particular class of strongly balanced repeated measurements designs with n = t² units and which are uniform on the periods is examined. A lower bound for the A-efficiency of the designs for estimating the direct effects is derived, and it is shown that such designs are highly efficient for any number of periods p = 2,…,2t.

17.
Most growth curves can only be used to model tumor growth in the absence of intervention. To model the growth curves for a treated tumor, both the growth delay due to the treatment and the regrowth of the tumor after the treatment need to be taken into account. In this paper, we consider two tumor regrowth models and determine the locally D- and c-optimal designs for these models. We then show that the locally D- and c-optimal designs are minimally supported. We also consider two equally spaced designs as alternative designs and evaluate their efficiencies.

18.
We consider a bandit process with delayed responses which are exponentially distributed survival times. The objective is to maximize the expected value of the total response from all selections. We formulate the problem and show that the optimal strategy is characterized by a sequence of break-even values. A monotonicity property of this sequence is derived, which implies the non-optimality of the myopic strategy and a special optimal stopping solution. An example is included to illustrate a possible application of the main results.

19.
We introduce a new design for dose-finding in the context of toxicity studies for which it is assumed that toxicity increases with dose. The goal is to identify the maximum tolerated dose, which is taken to be the dose associated with a prespecified “target” toxicity rate. The decision to decrease, increase or repeat a dose for the next subject depends on how far an estimated toxicity rate at the current dose is from the target. The size of the window within which the current dose will be repeated is obtained based on the theory of Markov chains as applied to group up-and-down designs. But whereas the treatment allocation rule in Markovian group up-and-down designs is only based on information from the current cohort of subjects, the treatment allocation rule for the proposed design is based on the cumulative information at the current dose. We then consider an extension of this new design for clinical trials in which the subject's outcome is not known immediately. The new design is compared to the continual reassessment method.
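A minimal sketch of the kind of cumulative-information decision rule described above (the target rate, the window half-width delta, and the number of dose levels are placeholders; in the paper the window size is derived from the Markov-chain theory of group up-and-down designs rather than fixed by hand):

```python
def next_dose(current, toxicities_at_dose, target=0.25, delta=0.10, n_doses=6):
    """Decide the dose for the next subject from the cumulative toxicity data at the
    current dose: step down if the estimated rate is well above the target, step up
    if it is well below, and repeat the dose while it stays inside the window."""
    n = len(toxicities_at_dose)
    rate = sum(toxicities_at_dose) / n if n else target   # no data yet: repeat the dose
    if rate > target + delta:
        return max(current - 1, 0)                        # too toxic: decrease
    if rate < target - delta:
        return min(current + 1, n_doses - 1)              # well below target: increase
    return current                                        # inside the window: repeat

# toy usage: three subjects treated at dose level 2, one toxicity observed;
# the estimated rate 1/3 lies inside [0.15, 0.35], so the dose is repeated
print(next_dose(current=2, toxicities_at_dose=[0, 1, 0]))   # prints 2
```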

20.
In comparing two treatments, suppose that suitable subjects arrive sequentially and must be treated at once. Known or unknown to the experimenter, there may be nuisance factors systematically affecting the subjects. Accidental bias is a measure of the influence of these factors on the analysis of the data. We show in this paper that the random allocation design minimizes the accidental bias among all designs that allocate n, out of 2n, subjects to each treatment and do not prefer either treatment in the assignment. When the final imbalance is allowed to be nonzero, optimal and efficient designs are given. In particular, the random allocation design is shown to be very efficient in this broader setup.
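For completeness, a minimal sketch of the random allocation design referred to above (the seed is only for reproducibility):

```python
import random

def random_allocation(n, seed=None):
    """Random allocation design: of 2n sequentially arriving subjects, a uniformly
    chosen subset of exactly n receives treatment 1 and the remaining n treatment 2."""
    rng = random.Random(seed)
    labels = [1] * n + [2] * n
    rng.shuffle(labels)          # every balanced assignment is equally likely
    return labels                # labels[i] is the treatment for the (i+1)-th arrival

print(random_allocation(5, seed=42))
```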
