Similar Documents
Found 20 similar documents (search time: 62 ms)
1.
The T-optimality criterion is used in optimal design to derive designs for model selection. To set up the method, it is required that one of the models is considered to be true. We term this local T-optimality. In this work, we propose a generalisation of T-optimality (termed robust T-optimality) that relaxes the requirement that one of the candidate models is set as true. We then show an application to a nonlinear mixed effects model with two candidate non-nested models and combine robust T-optimality with robust D-optimality. Optimal design under local T-optimality was found to provide adequate power when the a priori assumed true model was the true model but poor power if the a priori assumed true model was not the true model. The robust T-optimality method provided adequate power irrespective of which model was true. The robust T-optimality method appears to have useful properties for nonlinear models, where both the parameter values and model structure are required to be known a priori, and the most likely model that would be applied to any new experiment is not known with certainty. Copyright © 2012 John Wiley & Sons, Ltd.
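The local T-criterion described above can be sketched numerically: for a fixed "true" model, the criterion is the lack of fit of the best-fitting rival model, minimised over the rival's parameters. A minimal Python sketch, where both models (`eta1`, `eta2`) and the candidate designs are hypothetical choices for illustration, not the ones used in the paper:

```python
import math

# Hypothetical candidate models: eta1 is taken as "true" (local
# T-optimality); eta2 is a rival model with unknown parameter theta.
def eta1(x):
    return math.exp(-x)             # assumed true response

def eta2(x, theta):
    return 1.0 / (1.0 + theta * x)  # rival model

def t_criterion(points, weights, theta_grid):
    """Local T-criterion of a design: lack of fit of the rival model,
    minimised over the rival's parameter values (grid search here)."""
    def lack_of_fit(theta):
        return sum(w * (eta1(x) - eta2(x, theta)) ** 2
                   for x, w in zip(points, weights))
    return min(lack_of_fit(th) for th in theta_grid)

# Compare two equally weighted designs on [0, 2]; a larger T-criterion
# means better power to discriminate between the two models.
grid = [0.01 * k for k in range(1, 301)]
design_a = t_criterion([0.0, 1.0, 2.0], [1/3, 1/3, 1/3], grid)
design_b = t_criterion([0.9, 1.0, 1.1], [1/3, 1/3, 1/3], grid)
```

The spread-out design separates the models better than the clustered one, since the rival model can mimic the true response locally but not across the whole range.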

2.
In many experiments, not all explanatory variables can be controlled. When the units arise sequentially, different approaches may be used. The authors study a natural sequential procedure for “marginally restricted” D-optimal designs. They assume that one set of explanatory variables (x1) is observed sequentially, and that the experimenter responds by choosing an appropriate value of the explanatory variable x2. In order to solve the sequential problem a priori, the authors consider the problem of constructing optimal designs with a prior marginal distribution for x1. This eliminates the influence of units already observed on the next unit to be designed. They give explicit designs for various cases in which the mean response follows a linear regression model; they also consider a case study with a nonlinear logistic response. They find that the optimal strategy often consists of randomizing the assignment of the values of x2.
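The sequential idea above, choosing x2 after x1 is observed, can be illustrated with a greedy D-optimality sketch: pick the x2 that most increases the determinant of the information matrix. This is a simplified illustration under an assumed linear mean model, not the authors' marginally restricted construction:

```python
import numpy as np

def regressor(x1, x2):
    # Assumed linear mean model for illustration: E[y] = b0 + b1*x1 + b2*x2
    return np.array([1.0, x1, x2])

def best_x2(M, x1, x2_candidates):
    """Greedy sequential D-optimality: given the observed (uncontrollable)
    x1, pick the x2 that maximises det of the updated information matrix."""
    gains = [np.linalg.det(M + np.outer(regressor(x1, x2), regressor(x1, x2)))
             for x2 in x2_candidates]
    return x2_candidates[int(np.argmax(gains))]

# Start from a small regularised information matrix and process a stream
# of observed x1 values, choosing x2 from {-1, 0, 1} for each unit.
M = 1e-3 * np.eye(3)
candidates = [-1.0, 0.0, 1.0]
choices = []
for x1 in [0.2, -0.7, 0.9, 0.1]:
    x2 = best_x2(M, x1, candidates)
    choices.append(x2)
    f = regressor(x1, x2)
    M = M + np.outer(f, f)
```

For a linear model the D-criterion is convex in x2, so the greedy rule always pushes x2 to an extreme of its range, consistent with classical D-optimal designs placing mass at the boundary.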

3.
One of the primary purposes of an oncology dose-finding trial is to identify an optimal dose (OD) that is both tolerable and has an indication of therapeutic benefit for subjects in subsequent clinical trials. In addition, it is quite important to accelerate early stage trials to shorten the entire period of drug development. However, it is often challenging to make adaptive decisions of dose escalation and de-escalation in a timely manner because of fast accrual rates, differing outcome evaluation periods for efficacy and toxicity, and late-onset outcomes. To address these issues, we propose the time-to-event Bayesian optimal interval design to accelerate dose-finding based on cumulative and pending data on both efficacy and toxicity. The new design, named the “TITE-BOIN-ET” design, is a nonparametric, model-assisted design. Thus, it is robust, much simpler, and easier to implement in actual oncology dose-finding trials than model-based approaches. These characteristics are quite useful from a practical point of view. A simulation study shows that the TITE-BOIN-ET design has advantages compared with the model-based approaches in both the percentage of correct OD selection and the average number of patients allocated to the ODs across a variety of realistic settings. In addition, the TITE-BOIN-ET design significantly shortens the trial duration compared with the designs without sequential enrollment and therefore has the potential to accelerate early stage dose-finding trials.
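The interval decision rule underlying the BOIN family of designs has closed-form escalation/de-escalation boundaries. The sketch below shows only the toxicity side with the commonly used defaults; the full TITE-BOIN-ET design additionally tracks efficacy and pending (not yet evaluable) outcomes:

```python
import math

def boin_boundaries(phi, phi1=None, phi2=None):
    """Escalation/de-escalation boundaries of the BOIN family of designs.
    phi is the target DLT rate; phi1/phi2 default to the commonly used
    0.6*phi and 1.4*phi. Toxicity side only -- a simplification of the
    TITE-BOIN-ET design, which also uses efficacy and pending data."""
    phi1 = 0.6 * phi if phi1 is None else phi1
    phi2 = 1.4 * phi if phi2 is None else phi2
    lam_e = (math.log((1 - phi1) / (1 - phi))
             / math.log(phi * (1 - phi1) / (phi1 * (1 - phi))))
    lam_d = (math.log((1 - phi) / (1 - phi2))
             / math.log(phi2 * (1 - phi) / (phi * (1 - phi2))))
    return lam_e, lam_d

lam_e, lam_d = boin_boundaries(0.30)
# Escalate if the observed DLT rate <= lam_e; de-escalate if >= lam_d;
# otherwise stay at the current dose.
```

For a target DLT rate of 0.30 this gives the familiar boundaries of roughly 0.236 and 0.358.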

4.
The authors study the estimation of domain totals and means under survey-weighted regression imputation for missing items. They use two different approaches to inference: (i) design-based with uniform response within classes; (ii) model-assisted with ignorable response and an imputation model. They show that the imputed domain estimators are biased under (i) but approximately unbiased under (ii). They obtain a bias-adjusted estimator that is approximately unbiased under (i) or (ii). They also derive linearization variance estimators. They report the results of a simulation study on the bias ratio and efficiency of alternative estimators, including a complete case estimator that requires the knowledge of response indicators.
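The basic mechanics of survey-weighted regression imputation for a domain total can be sketched as follows. The data, weights, and domain are invented for illustration; this is the generic imputation step, not the authors' bias-adjusted estimator:

```python
import numpy as np

# Hypothetical survey data: design weights w, covariate x, item y with
# nonresponse (NaN), and a domain-membership indicator.
w = np.array([10., 20., 15., 10., 25., 20.])
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 4.0, np.nan, 8.2, np.nan, 12.1])
domain = np.array([1, 1, 1, 0, 0, 0], dtype=bool)

resp = ~np.isnan(y)
# Survey-weighted least-squares fit on respondents: y ~ a + b*x
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w[resp])
beta = np.linalg.solve(X[resp].T @ W @ X[resp], X[resp].T @ W @ y[resp])

y_imp = y.copy()
y_imp[~resp] = X[~resp] @ beta          # impute model predictions

# Survey-weighted domain total over observed and imputed values.
domain_total = np.sum(w[domain] * y_imp[domain])
```

Under approach (i) in the abstract this naive imputed estimator is biased for domain totals, which is what motivates the authors' bias adjustment.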

5.
This article is devoted to the construction and asymptotic study of adaptive, group-sequential, covariate-adjusted randomized clinical trials analysed through the prism of the semiparametric methodology of targeted maximum likelihood estimation. We show how to build, as the data accrue group-sequentially, a sampling design that targets a user-supplied optimal covariate-adjusted design. We also show how to carry out sound statistical inference based on such an adaptive sampling scheme (therefore extending some results known in the independent and identically distributed setting only so far), and how group-sequential testing applies on top of it. The procedure is robust (i.e. consistent even if the working model is mis-specified). A simulation study confirms the theoretical results and validates the conjecture that the procedure may also be efficient.

6.
We consider the problem of the sequential choice of design points in an approximately linear model. It is assumed that the fitted linear model is only approximately correct, in that the true response function contains a nonrandom, unknown term orthogonal to the fitted response. We also assume that the parameters are estimated by M-estimation. The goal is to choose the next design point in such a way as to minimize the resulting integrated squared bias of the estimated response, to order n^(-1). Explicit applications to analysis of variance and regression are given. In a simulation study the sequential designs compare favourably with some fixed-sample-size designs which are optimal for the true response to which the sequential designs must adapt.

7.
Linear mixed-effects models are a powerful tool for modelling longitudinal data and are widely used in practice. For a given set of covariates in a linear mixed-effects model, selecting the covariance structure of random effects is an important problem. In this paper, we develop a joint likelihood-based selection criterion. Our criterion is the approximately unbiased estimator of the expected Kullback–Leibler information. This criterion is also asymptotically optimal in the sense that for large samples, estimates based on the covariance matrix selected by the criterion minimize the approximate Kullback–Leibler information. Finite sample performance of the proposed method is assessed by simulation experiments. As an illustration, the criterion is applied to a data set from an AIDS clinical trial.

8.
Phase I clinical trials aim to identify a maximum tolerated dose (MTD), the highest possible dose that does not cause an unacceptable amount of toxicity in the patients. In trials of combination therapies, however, many different dose combinations may have a similar probability of causing a dose-limiting toxicity, and hence, a number of MTDs may exist. Furthermore, escalation strategies in combination trials are more complex, with possible escalation/de-escalation of either or both drugs. This paper investigates the properties of two existing proposed Bayesian adaptive models for combination therapy dose-escalation when a number of different escalation strategies are applied. We assess operating characteristics through a series of simulation studies and show that strategies that only allow ‘non-diagonal’ moves in the escalation process (that is, both drugs cannot increase simultaneously) are inefficient and identify fewer MTDs for Phase II comparisons. Such strategies tend to escalate a single agent first while keeping the other agent fixed, which can be a severe restriction when exploring dose surfaces using a limited sample size. Meanwhile, escalation designs based on Bayesian D-optimality allow more varied experimentation around the dose space and, consequently, are better at identifying more MTDs. We argue that for Phase I combination trials it is sensible to take forward a number of identified MTDs for Phase II experimentation so that their efficacy can be directly compared. Researchers, therefore, need to carefully consider the escalation strategy and model that best allows the identification of these MTDs. Copyright © 2012 John Wiley & Sons, Ltd.

9.
We propose a survey weighted quadratic inference function method for the analysis of data collected from longitudinal surveys, as an alternative to the survey weighted generalized estimating equation method. The procedure yields estimators of model parameters, which are shown to be consistent and have a limiting normal distribution. Furthermore, based on the inference function, a pseudolikelihood ratio type statistic for testing a composite hypothesis on model parameters and a statistic for testing the goodness of fit of the assumed model are proposed. We establish their asymptotic distributions as weighted sums of independent chi-squared random variables and obtain Rao–Scott corrections to those statistics leading to an approximate chi-squared distribution. We examine the performance of the proposed methods in a simulation study.

10.
11.
Classical regression analysis is usually performed in two steps. In the first step, an appropriate model is identified to describe the data-generating process, and in the second step, statistical inference is performed in the identified model. An intuitively appealing approach to designing experiments for these different purposes is the use of sequential strategies, which use part of the sample for model identification and adapt the design according to the outcome of the identification steps. In this article, we investigate the finite sample properties of two sequential design strategies, which were recently proposed in the literature. A detailed comparison of sequential designs for model discrimination in several regression models is given by means of a simulation study. Some non-sequential designs are also included in the study.

12.
The authors construct locally optimal designs for the proportional odds model for ordinal data. While they investigate the standard D-optimal design, they also investigate optimality criteria for the simultaneous estimation of multiple quantiles, namely D_A-optimality and the omnibus criterion. The design of experiments for the simultaneous estimation of multiple quantiles is important in both toxic and effective dose studies in medicine. As with c-optimality in the binary response problem, the authors find that there are distinct phase changes when exploring extreme quantiles that require additional design points. The authors also investigate relative efficiencies of the criteria.

13.
A model-based predictive estimator is proposed for the population proportions of a polychotomous response variable, based on a sample from the population and on auxiliary variables, whose values are known for the entire population. The responses for the non-sample units are predicted using a multinomial logit model, which is a parametric function of the auxiliary variables. A bootstrap estimator is proposed for the variance of the predictive estimator, its consistency is proved and its small sample performance is compared with that of an analytical estimator. The proposed predictive estimator is compared with other available estimators, including model-assisted ones, both in a simulation study involving different sampling designs and model mis-specification, and using real data from an opinion survey. The results indicate that the prediction approach appears to use auxiliary information more efficiently than the model-assisted approach.
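The structure of such a predictive estimator, observed responses for sampled units plus multinomial-logit predictions for the rest of the population, can be sketched as below. The coefficient matrix and data are illustrative placeholders (in practice the coefficients would be estimated from the sample):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Illustrative (not estimated) multinomial-logit coefficients for K=3
# response categories and one auxiliary variable x: rows are
# (intercept, slope), with category 0 as baseline.
B = np.array([[0.0, 0.0],
              [0.5, -1.0],
              [-0.3, 0.8]])

def predictive_proportions(y_sample, x_sample, x_nonsample):
    """Model-based predictive estimator of population category
    proportions: observed category counts for sampled units plus
    predicted probabilities summed over non-sampled units."""
    N = len(x_sample) + len(x_nonsample)
    K = B.shape[0]
    counts = np.bincount(y_sample, minlength=K).astype(float)
    Z = np.column_stack([np.ones(len(x_nonsample)), x_nonsample])
    probs = softmax(Z @ B.T)          # per-unit predicted probabilities
    return (counts + probs.sum(axis=0)) / N

p_hat = predictive_proportions(
    y_sample=np.array([0, 1, 1, 2]),
    x_sample=np.array([0.1, 0.4, 0.5, 0.9]),
    x_nonsample=np.array([0.2, 0.3, 0.6, 0.7, 0.8, 1.0]),
)
```

By construction the estimated proportions are positive and sum to one, since each non-sample unit contributes a full probability distribution over the categories.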

14.
Cell-based potency assays play an important role in the characterization of biopharmaceuticals but they can be challenging to develop in part because of greater inherent variability than other analytical methods. Our objective is to select concentrations on a dose–response curve that will enhance assay robustness. We apply the maximin D-optimal design concept to the four-parameter logistic (4PL) model and then derive and compute the maximin D-optimal design for a challenging bioassay using curves representative of assay variation. The selected concentration points from this ‘best worst case’ design adequately fit a variety of 4PL shapes and demonstrate improved robustness. Copyright © 2015 John Wiley & Sons, Ltd.
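The maximin idea above can be sketched numerically: score each candidate concentration set against several plausible 4PL parameter vectors (standing in for assay-to-assay variation) and keep the design whose worst case is best. All parameter values and candidate designs below are illustrative, and the exact D-criterion is computed with a numerical gradient rather than analytic derivatives:

```python
import numpy as np

def fourpl(x, a, b, c, d):
    """Four-parameter logistic: upper asymptote a, slope b, EC50 c,
    lower asymptote d."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def d_criterion(doses, theta, h=1e-6):
    """log-det of the normalised information matrix for the 4PL model
    at parameters theta, via central-difference gradients."""
    M = np.zeros((4, 4))
    for x in doses:
        g = np.zeros(4)
        for j in range(4):
            tp = list(theta); tm = list(theta)
            tp[j] += h; tm[j] -= h
            g[j] = (fourpl(x, *tp) - fourpl(x, *tm)) / (2 * h)
        M += np.outer(g, g)
    sign, logdet = np.linalg.slogdet(M / len(doses))
    return logdet if sign > 0 else -np.inf

# Illustrative parameter vectors representing assay variation, and two
# candidate concentration sets: one spread across the range, one clustered.
thetas = [(100., 1.0, 10., 0.), (100., 1.5, 8., 5.), (95., 0.8, 12., 2.)]
candidates = [
    np.array([1., 3., 10., 30., 100.]),
    np.array([8., 9., 10., 11., 12.]),
]
worst = [min(d_criterion(doses, th) for th in thetas) for doses in candidates]
best_design = candidates[int(np.argmax(worst))]
```

The clustered design cannot pin down the asymptotes, so its worst-case criterion collapses, while the spread design remains informative for every curve shape, which is the 'best worst case' property the abstract describes.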

15.
In 2008, this group published a paper on approaches for two-stage crossover bioequivalence (BE) studies that allowed for the reestimation of the second-stage sample size based on the variance estimated from the first-stage results. The sequential methods considered used an assumed geometric mean ratio (GMR) of 0.95 as part of the method for determining power and sample size. This note adds results for an assumed GMR = 0.90. Two of the methods recommended for GMR = 0.95 in the earlier paper have some unacceptable increases in Type I error rate when the GMR is changed to 0.90. If a sponsor wants to assume 0.90 for the GMR, Method D is recommended. Copyright © 2011 John Wiley & Sons, Ltd.

16.
Understanding the dose–response relationship is a key objective in Phase II clinical development. Yet, designing a dose-ranging trial is a challenging task, as it requires identifying the therapeutic window and the shape of the dose–response curve for a new drug on the basis of a limited number of doses. Adaptive designs have been proposed as a solution to improve both quality and efficiency of Phase II trials as they give the possibility to select the dose to be tested as the trial goes. In this article, we present a ‘shape-based’ two-stage adaptive trial design where the doses to be tested in the second stage are determined based on the correlation observed between efficacy of the doses tested in the first stage and a set of pre-specified candidate dose–response profiles. At the end of the trial, the data are analyzed using the generalized MCP-Mod approach in order to account for model uncertainty. A simulation study shows that this approach gives more precise estimates of a desired target dose (e.g. ED70) than a single-stage (fixed-dose) design and performs as well as a two-stage D-optimal design. We present the results of an adaptive model-based dose-ranging trial in multiple sclerosis that motivated this research and was conducted using the presented methodology. Copyright © 2015 John Wiley & Sons, Ltd.
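The 'shape-based' selection step, correlating observed stage-1 efficacy with pre-specified candidate profiles, can be sketched as follows. The stage-1 data and the three candidate shapes are invented for illustration; the paper's full algorithm then uses the selected shape to choose stage-2 doses:

```python
import numpy as np

# Stage-1 mean efficacy at the tested doses (illustrative numbers) and
# pre-specified candidate dose-response profiles at those same doses.
doses_stage1 = np.array([0., 1., 2., 4.])
efficacy = np.array([0.10, 0.35, 0.52, 0.61])

profiles = {
    "linear": doses_stage1 / doses_stage1.max(),
    "emax":   doses_stage1 / (doses_stage1 + 1.0),   # Emax with ED50 = 1
    "step":   np.array([0., 0., 1., 1.]),
}

def best_profile(efficacy, profiles):
    """Pick the candidate profile most correlated with observed stage-1
    efficacy -- a sketch of the 'shape-based' selection step only."""
    corr = {name: np.corrcoef(efficacy, shape)[0, 1]
            for name, shape in profiles.items()}
    return max(corr, key=corr.get), corr

name, corr = best_profile(efficacy, profiles)
```

Here the saturating efficacy pattern correlates best with the Emax-type profile, so stage-2 doses would be concentrated where that profile is most informative (e.g. around its target dose).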

17.
Patient heterogeneity may complicate dose-finding in phase 1 clinical trials if the dose-toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method on the basis of a hierarchical Bayesian dose-toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup-specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to 3 alternative approaches, on the basis of nonhierarchical models, that make different types of assumptions about within-subgroup dose-toxicity curves. The simulations show that the hierarchical model-based method is recommended in settings where the dose-toxicity curves are exchangeable between subgroups. We present practical guidelines for application and provide computer programs for trial simulation and conduct.

18.
We construct approximate optimal designs for minimising absolute covariances between least-squares estimators of the parameters (or linear functions of the parameters) of a linear model, thereby rendering relevant parameter estimators approximately uncorrelated with each other. In particular, we consider first the case of the covariance between two linear combinations. We also consider the case of two such covariances. For this we first set up a compound optimisation problem which we transform to one of maximising two functions of the design weights simultaneously. The approaches are formulated for a general regression model and are explored through some examples including one practical problem arising in chemistry.
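For an approximate design with information matrix M(ξ), the covariance between the least-squares estimators of c1'β and c2'β is proportional to c1' M(ξ)⁻¹ c2, so the design problem is to choose weights making this small in absolute value. A brute-force sketch for a quadratic model on three fixed support points (a deliberately simple stand-in for the authors' compound optimisation):

```python
import numpy as np

# Quadratic regression on three support points; f(x) = (1, x, x^2).
def f(x):
    return np.array([1.0, x, x * x])

points = [-1.0, 0.0, 1.0]

def covariance(c1, c2, weights):
    """Approximate-design covariance (up to sigma^2) between the LS
    estimators of c1'beta and c2'beta: c1' M(xi)^{-1} c2."""
    M = sum(w * np.outer(f(x), f(x)) for x, w in zip(points, weights))
    return c1 @ np.linalg.inv(M) @ c2

c1 = np.array([0.0, 1.0, 0.0])   # slope coefficient
c2 = np.array([0.0, 0.0, 1.0])   # curvature coefficient

# Grid-search design weights minimising the absolute covariance.
best = None
for w1 in np.arange(0.05, 0.95, 0.05):
    for w2 in np.arange(0.05, 1.0 - w1, 0.05):
        w = (w1, w2, 1.0 - w1 - w2)
        val = abs(covariance(c1, c2, w))
        if best is None or val < best[0]:
            best = (val, w)
```

For this symmetric support, any design with equal weight on ±1 makes the slope and curvature estimators uncorrelated, so the search drives the covariance to (numerically) zero.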

19.
Often, single-arm trials are used in phase II to gather the first evidence of an oncological drug's efficacy, with drug activity determined through tumour response using the RECIST criteria. Provided the null hypothesis of ‘insufficient drug activity’ is rejected, the next step could be a randomised two-arm trial. However, single-arm trials may provide a biased treatment effect because of patient selection, and thus, this development plan may not be an efficient use of resources. Therefore, we compare the performance of development plans consisting of single-arm trials followed by randomised two-arm trials with stand-alone single-stage or group sequential randomised two-arm trials. Through this, we are able to investigate the utility of single-arm trials and determine the most efficient drug development plans, setting our work in the context of a published single-arm non-small-cell lung cancer trial. Reference priors, reflecting the opinions of ‘sceptical’ and ‘enthusiastic’ investigators, are used to quantify and guide the suitability of single-arm trials in this setting. We observe that the explored development plans incorporating single-arm trials are often non-optimal. Moreover, even the most pessimistic reference priors have a considerable probability in favour of alternative plans. Analysis suggests expected sample size savings of up to 25% could have been made, and the issues associated with single-arm trials avoided, for the non-small-cell lung cancer treatment through direct progression to a group sequential randomised two-arm trial. Careful consideration should thus be given to the use of single-arm trials in oncological drug development when a randomised trial will follow. Copyright © 2015 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.

20.
Several researchers have proposed solutions to control the type I error rate in sequential designs. The use of Bayesian sequential designs is becoming more common; however, these designs are subject to inflation of the type I error rate. We propose a Bayesian sequential design for binary outcomes using an alpha-spending function to control the overall type I error rate. Algorithms are presented for calculating critical values and power for the proposed designs. We also propose a new stopping rule for futility. Sensitivity analysis is implemented for assessing the effects of varying the parameters of the prior distribution and maximum total sample size on critical values. Alpha-spending functions are compared using power and actual sample size through simulations. Further simulations show that, when total sample size is fixed, the proposed design has greater power than the traditional Bayesian sequential design, which sets equal stopping bounds at all interim analyses. We also find that the proposed design with the new stopping rule for futility results in greater power and can stop earlier with a smaller actual sample size, compared with the traditional stopping rule for futility when all other conditions are held constant. Finally, we apply the proposed method to a real data set and compare the results with traditional designs.
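An alpha-spending function allocates the overall type I error across interim analyses as a function of the information fraction t. A minimal sketch of two standard Lan–DeMets spending functions (the abstract's Bayesian design layers its critical-value calculations on top of such a spending schedule):

```python
import math
from statistics import NormalDist

def obf_spending(t, alpha=0.05):
    """O'Brien-Fleming-type alpha-spending function:
    alpha(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t))), t in (0, 1]."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    return 2.0 * (1.0 - nd.cdf(z / t ** 0.5))

def pocock_spending(t, alpha=0.05):
    """Pocock-type spending: alpha(t) = alpha * ln(1 + (e - 1) * t)."""
    return alpha * math.log(1.0 + (math.e - 1.0) * t)

# Cumulative alpha spent at four equally spaced interim analyses, and
# the incremental alpha available at each one.
fractions = [0.25, 0.5, 0.75, 1.0]
spent = [obf_spending(t) for t in fractions]
increments = [spent[0]] + [b - a for a, b in zip(spent, spent[1:])]
```

Both functions spend exactly alpha by t = 1; the O'Brien-Fleming shape spends very little early (conservative early looks), while the Pocock shape spends more evenly across the interim analyses.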


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号