Similar literature (20 results)
1.
In a response-adaptive design, we review and update the trial on the basis of accruing outcomes in order to achieve a specific goal. Response-adaptive designs for clinical trials are usually constructed to achieve a single objective. In this paper, we develop a new adaptive allocation rule that improves current strategies for building response-adaptive designs, in order to construct multiple-objective repeated measurement designs. The new rule is designed to increase estimation precision and treatment benefit by assigning more patients to the better treatment sequence. We demonstrate that designs constructed under the proposed allocation rule can be nearly as efficient as fixed optimal designs in terms of mean squared error, while leading to improved patient care.

2.
We consider response-adaptive design of clinical trials under a variance-penalized criterion in the presence of mismeasurement. An explicit expression for the variance-penalized criterion with misclassified dichotomous responses is derived for response-adaptive designs and some properties are discussed. A new target proportion of treatment allocation is proposed under the criterion and related simulation results are presented.

3.
We compare posterior and predictive estimators and probabilities in response-adaptive randomization designs for two- and three-group clinical trials with binary outcomes. Adaptation based upon posterior estimates is discussed, as are two predictive probability algorithms: one using the traditional definition, the other using a skeptical distribution. Optimal and natural lead-in designs are covered. Simulation studies show that efficacy comparisons lead to more adaptation than center comparisons, though at some power loss; skeptically predictive efficacy comparisons and natural lead-in approaches lead to less adaptation but offer reduced allocation variability. Though nuanced, these results help clarify the power-adaptation trade-off in adaptive randomization.
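The posterior-based adaptation this abstract refers to can be sketched for binary outcomes with conjugate Beta posteriors. This is a minimal illustration, not the paper's exact algorithms: the function names and the power-transform tuning parameter c are our assumptions.

```python
import random

def prob_A_better(succ_a, fail_a, succ_b, fail_b, draws=20000, seed=1):
    """Monte Carlo estimate of P(p_A > p_B | data) under independent
    Beta(1, 1) priors, so each arm's posterior is Beta(1 + successes,
    1 + failures)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        p_a = rng.betavariate(1 + succ_a, 1 + fail_a)
        p_b = rng.betavariate(1 + succ_b, 1 + fail_b)
        wins += p_a > p_b
    return wins / draws

def allocation_prob(post_prob, c=0.5):
    """Randomize the next patient to arm A with probability
    p^c / (p^c + (1 - p)^c); c in [0, 1] damps extreme allocations
    (c = 0 gives equal allocation, c = 1 uses the posterior
    probability itself)."""
    num = post_prob ** c
    return num / (num + (1.0 - post_prob) ** c)
```

Damping via c < 1 is a common way to trade adaptation against allocation variability, the tension the simulation studies above examine.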

4.
A mixed model analysis of variance for multi-environment variety trials
Of interest is the analysis of data resulting from a series of experiments repeated at several environments with the same set of plant varieties. Suppose that the experiments, multi-environment variety trials (as they are called), are all conducted in resolvable incomplete block designs. Adopting the randomization-derived mixed model obtained in Caliński et al. (Biometrics 61:448–455, 2005), a suitable analysis of variance methodology is considered and relevant test procedures are examined. The proposed methods are illustrated by the analysis of results of a series of trials with rye varieties.

5.
Adaptive designs for multi-armed clinical trials have become increasingly popular recently because of their potential to shorten development times and to increase patient response. However, developing response-adaptive designs that offer patient benefit while ensuring the resulting trial provides a statistically rigorous and unbiased comparison of the different treatments included is highly challenging. In this paper, the theory of Multi-Armed Bandit Problems is used to define near optimal adaptive designs in the context of a clinical trial with a normally distributed endpoint with known variance. We report the operating characteristics (type I error, power, bias) and patient benefit of these approaches and alternative designs using simulation studies based on an ongoing trial. These results are then compared to those recently published in the context of Bernoulli endpoints. Many limitations and advantages are similar in both cases but there are also important differences, especially with respect to type I error control. This paper proposes a simulation-based testing procedure to correct for the observed type I error inflation that bandit-based and adaptive rules can induce.
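A bandit-style allocation step for a normally distributed endpoint with known variance can be sketched with Thompson sampling, one common near-optimal heuristic; this is an illustrative stand-in under a flat prior on each arm's mean, not the specific bandit rules studied in the paper.

```python
import random
import statistics

def thompson_normal(data, sigma, rng):
    """One allocation step: with known variance sigma^2 and a flat
    prior, each arm's posterior for its mean is N(xbar, sigma^2 / n).
    Draw one sample from each posterior and assign the next patient
    to the arm with the largest draw.

    data: dict mapping arm label -> list of observed responses
          (each arm needs at least one observation)."""
    draws = {}
    for arm, obs in data.items():
        n = len(obs)
        draws[arm] = rng.gauss(statistics.fmean(obs), sigma / n ** 0.5)
    return max(draws, key=draws.get)
```

Because allocation follows the posterior probability of each arm being best, the rule concentrates patients on the apparently better arm, which is exactly the source of the type I error inflation the paper corrects for.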

6.
Estimation and inference for dependent trials are important issues in response-adaptive allocation designs; maximum likelihood estimation is one route of interest. We present three novel response-driven designs and derive their maximum likelihood estimators. We also provide convenient regularity conditions that ensure the maximum likelihood estimator from a multiparameter stochastic process exists and is asymptotically multivariate normal. While these conditions may not be the most general, they are easily verified for our applications.

7.
Optimal response-adaptive designs for phase III clinical trials are attracting growing interest. In the present article, an optimal response-adaptive design is introduced for settings with more than two treatments. We minimize an objective function subject to multiple inequality constraints. For this purpose, we propose an extensive computer search algorithm. The proposed procedure is illustrated with extensive numerical computation and simulations, and a real data set is used to illustrate the methodology.

8.
The concept of fractional cointegration (Cheung and Lai in J Bus Econ Stat 11:103–112, 1993) has been introduced to generalize traditional cointegration (Engle and Granger in Econometrica 55:251–276, 1987) to the long memory framework. In this work we propose a test for fractional cointegration with the sieve bootstrap and compare by simulation the performance of our proposal with other semiparametric methods in the literature: the three-step technique of Marinucci and Robinson (J Econom 105:225–247, 2001) and the procedure to determine the fractional cointegration rank of Robinson and Yajima (J Econom 106:217–241, 2002).

9.
Recurrent event data occur in many clinical and observational studies (Cook and Lawless, Analysis of recurrent event data, 2007), and in these situations there may exist a terminal event such as death that is related to the recurrent event of interest (Ghosh and Lin, Biometrics 56:554–562, 2000; Wang et al., J Am Stat Assoc 96:1057–1065, 2001; Huang and Wang, J Am Stat Assoc 99:1153–1165, 2004; Ye et al., Biometrics 63:78–87, 2007). In addition, there may sometimes exist more than one type of recurrent event; that is, one faces multivariate recurrent event data with a dependent terminal event (Chen and Cook, Biostatistics 5:129–143, 2004). It is apparent that the analysis of such data has to take into account the dependence both among the different types of recurrent events and between the recurrent and terminal events. In this paper, we propose a joint modeling approach for regression analysis of such data, and both finite-sample and asymptotic properties of the resulting estimates of the unknown parameters are established. The methodology is applied to a set of bivariate recurrent event data arising from a study of leukemia patients.

10.
In a clinical trial, response-adaptive randomization (RAR) uses accumulating data to weigh the randomization of remaining patients in favour of the better performing treatment. The aim is to reduce the number of failures within the trial. However, many well-known RAR designs, in particular, the randomized play-the-winner-rule (RPWR), have a highly myopic structure which has sometimes led to unfortunate randomization sequences when used in practice. This paper introduces random permuted blocks into two RAR designs, the RPWR and sequential maximum likelihood estimation, for trials with a binary endpoint. Allocation ratios within each block are restricted to be one of 1:1, 2:1 or 3:1, preventing unfortunate randomization sequences. Exact calculations are performed to determine error rates and expected number of failures across a range of trial scenarios. The results presented show that when compared with equal allocation, block RAR designs give similar reductions in the expected number of failures to their unmodified counterparts. The reductions are typically modest under the alternative hypothesis but become more impressive if the treatment effect exceeds the clinically relevant difference.
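For reference, the randomized play-the-winner rule RPW(u, β) mentioned above can be simulated directly. This minimal sketch (the function name and defaults are ours) shows the myopic urn mechanics, without the block modification the paper introduces.

```python
import random

def rpw_trial(n_patients, p_success, u=1, beta=1, seed=0):
    """Simulate the randomized play-the-winner rule RPW(u, beta) for a
    two-arm trial with binary responses.

    p_success: dict mapping arm ('A' or 'B') to its true success
    probability.  The urn starts with u balls per arm; each patient is
    assigned by drawing a ball.  A success on an arm adds beta balls of
    that arm; a failure adds beta balls of the other arm."""
    rng = random.Random(seed)
    urn = {'A': u, 'B': u}
    assignments, failures = [], 0
    for _ in range(n_patients):
        total = urn['A'] + urn['B']
        arm = 'A' if rng.random() < urn['A'] / total else 'B'
        assignments.append(arm)
        if rng.random() < p_success[arm]:   # success: reinforce this arm
            urn[arm] += beta
        else:                               # failure: reinforce the other arm
            failures += 1
            urn['A' if arm == 'B' else 'B'] += beta
    return assignments, failures
```

With success probabilities 0.7 versus 0.4, the better arm receives the majority of patients on average, but individual seeded runs can still show the unfortunate early sequences the block-restricted designs are meant to prevent.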

11.
This paper discusses the analysis of interval-censored failure time data, which has recently attracted a great amount of attention (Li and Pu, Lifetime Data Anal 9:57–70, 2003; Sun, The statistical analysis of interval-censored data, 2006; Tian and Cai, Biometrika 93(2):329–342, 2006; Zhang et al., Can J Stat 33:61–70, 2005). Interval-censored data mean that the survival time of interest is observed only to belong to an interval; such data occur in many fields including clinical trials, demographic studies, medical follow-up studies, public health studies and tumorigenicity experiments. A major difficulty with the analysis of interval-censored data is that one has to deal with a censoring mechanism that involves two related variables. For inference, we present a transformation approach that turns general interval-censored data into current status data, for which one only needs to deal with a single censoring variable and the inference is thus much easier. We apply this general idea to regression analysis of interval-censored data using the additive hazards model, and numerical studies indicate that the method performs well in practical situations. An illustrative example is provided.

12.
In this paper, we propose two new response-adaptive designs to use in a trial comparing treatments with continuous outcomes. Both designs assign more subjects to the better treatment on average. The new designs are compared with existing procedures and the equal allocation. The power of the treatment comparison is assessed.

13.
Group sequential tests have been effective tools in monitoring long-term clinical trials. Several popular discrete sequential boundaries have been proposed for modeling interim analyses of clinical trials under the assumption of Brownian motion for the stochastic processes generated from test statistics. In this paper, we study the five sequential boundaries in Lan and DeMets (Biometrika 70:659–663, 1983) under fractional Brownian motion, which includes classic Brownian motion as a special case. An example from a real data set is used to illustrate the application of the boundaries.
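Under classic Brownian motion, two of the Lan–DeMets alpha-spending functions have simple closed forms. The sketch below is for two-sided α = 0.05 only, with the normal quantile hardcoded so no inverse-CDF routine is needed; it does not cover the fractional-Brownian-motion extension studied in the paper.

```python
import math

def _phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Phi^{-1}(0.975), hardcoded: the sketch is specialized to alpha = 0.05.
_Z_975 = 1.9599639845400545

def obf_spend(t):
    """O'Brien-Fleming-type spending function of Lan and DeMets (1983)
    for two-sided alpha = 0.05: alpha(t) = 2 * (1 - Phi(z_.975 / sqrt(t))),
    where t in (0, 1] is the information fraction."""
    return 2.0 * (1.0 - _phi(_Z_975 / math.sqrt(t)))

def pocock_spend(t, alpha=0.05):
    """Pocock-type spending: alpha(t) = alpha * ln(1 + (e - 1) * t)."""
    return alpha * math.log(1.0 + (math.e - 1.0) * t)
```

Both functions spend the full α at t = 1; the O'Brien–Fleming type spends far less alpha early, giving more conservative early boundaries.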

14.
In estimating the proportion of people bearing a sensitive attribute A, say, in a given community, certain randomized response (RR) techniques following Warner's (J Am Stat Assoc 60:63–69, 1965) pioneering work are available for application. These are intended to ensure efficient and unbiased estimation while protecting a respondent's privacy when the attribute is socially stigmatizing, like rash driving, tax evasion, induced abortion or testing HIV positive. Lanke (Int Stat Rev 44:197–203, 1976), Leysieffer and Warner (J Am Stat Assoc 71:649–656, 1976), Anderson (Int Stat Rev 44:213–217, 1976; Scand J Stat 4:11–19, 1977) and Nayak (Commun Stat Theor Method 23:3303–3321, 1994), among others, have discussed how maintenance of efficiency is in conflict with protection of privacy. In these RR-related works, sample selection is traditionally by simple random sampling (SRS) with replacement (WR). In this paper, an extension of an essentially similar result to general unequal-probability sample selection, even without replacement, is reported. Large-scale surveys overwhelmingly employ complex designs other than SRSWR, so extension of RR techniques to complex designs is essential, and this paper principally refers to them. New jeopardy measures against the revelation of secrecy are presented here as modifications of those in the literature, which cover SRSWR alone. Observing that multiple responses are feasible in such a dichotomous situation, especially with Kuk's (Biometrika 77:436–438, 1990) and Christofides' (Metrika 57:195–200, 2003) RR devices, an average of the response-specific jeopardy measures is proposed. This measure, which is device dependent, could be regarded as a technical characteristic of the device and should be made known to the participants before they agree to use the randomization device. The views expressed are the authors', not of the organizations they work for.
Prof Chaudhuri’s research is partially supported by CSIR Grant No. 21(0539)/02/EMR-II.
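Warner's original SRSWR estimator, the starting point this abstract builds on, is short enough to state in code; the variance formula makes the efficiency-versus-privacy conflict concrete. The function names are ours.

```python
def warner_estimate(yes_prop, p):
    """Warner (1965) randomized response estimator of the sensitive
    proportion pi.  Each respondent answers the sensitive question
    with probability p and its complement with probability 1 - p, so
    P(yes) = p * pi + (1 - p) * (1 - pi).  Inverting gives the
    unbiased estimator below; p must differ from 1/2."""
    if abs(2 * p - 1) < 1e-12:
        raise ValueError("p = 1/2 makes pi unidentifiable")
    return (yes_prop - (1 - p)) / (2 * p - 1)

def warner_variance(pi, p, n):
    """Sampling variance under SRSWR: the usual binomial term plus the
    extra term p * (1 - p) / (n * (2p - 1)^2) paid for privacy."""
    return pi * (1 - pi) / n + p * (1 - p) / (n * (2 * p - 1) ** 2)
```

As p approaches 1/2 the respondent's answer reveals almost nothing (maximal privacy) while the variance blows up, which is the efficiency-privacy trade-off discussed by Lanke, Leysieffer and Warner, Anderson and Nayak.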

15.
Sargent et al. (J Clin Oncol 23:8664–8670, 2005) concluded that 3-year disease-free survival (DFS) can be considered a valid surrogate (replacement) endpoint for 5-year overall survival (OS) in clinical trials of adjuvant chemotherapy for colorectal cancer. We address the question of whether the conclusion holds for trials involving other classes of treatments than those considered by Sargent et al., and we assess whether the 3-year cutpoint is optimal. To this aim, we investigate whether the results reported by Sargent et al. could have been used to predict treatment effects in three centrally randomized adjuvant colorectal cancer trials performed by the Japanese Foundation for Multidisciplinary Treatment of Cancer (JFMTC) (Sakamoto et al. J Clin Oncol 22:484–492, 2004). Our analysis supports the conclusion of Sargent et al. and shows that using DFS at 2 or 3 years would be the best option for predicting OS at 5 years.

16.
Sampling designs that depend on sample moments of auxiliary variables are well known. Lahiri (Bull Int Stat Inst 33:133–140, 1951) considered a sampling design proportional to the sample mean of an auxiliary variable. Singh and Srivastava (Biometrika 67(1):205–209, 1980) proposed a sampling design proportional to a sample variance, while Wywiał (J Indian Stat Assoc 37:73–87, 1999) proposed one proportional to the sample generalized variance of auxiliary variables. Some other sampling designs dependent on moments of an auxiliary variable were considered, e.g., in Wywiał (Some contributions to multivariate methods in survey sampling. Katowice University of Economics, Katowice, 2003a; Stat Transit 4(5):779–798, 2000), where the accuracy of several sampling strategies was also compared. These sampling designs are not useful when some observations of the auxiliary variable are censored; moreover, they can be much too sensitive to outlying observations. In such cases a sampling design proportional to an order statistic of the auxiliary variable can be more useful, and such an unequal-probability sampling design is proposed here. Its particular cases, as well as its conditional version, are considered. A sampling scheme implementing this design is proposed, and the first- and second-order inclusion probabilities are evaluated. The well-known Horvitz–Thompson estimator is taken into account, and a ratio estimator dependent on an order statistic is constructed; it is similar to the well-known ratio estimator based on the population and sample means. Moreover, it is an unbiased estimator of the population mean when the sample is drawn according to the proposed sampling design dependent on the appropriate order statistic.
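The Horvitz–Thompson estimator mentioned above weights each sampled value by the inverse of its first-order inclusion probability; the design-specific probabilities of the order-statistic design are derived in the paper and are simply passed in here. A minimal sketch:

```python
def horvitz_thompson_total(values, incl_probs):
    """Horvitz-Thompson estimator of a population total.  Each sampled
    value y_i is weighted by 1 / pi_i, where pi_i is its first-order
    inclusion probability; this makes the estimator design-unbiased
    for any without-replacement design with all pi_i > 0."""
    if len(values) != len(incl_probs):
        raise ValueError("one inclusion probability per sampled value")
    return sum(y / pi for y, pi in zip(values, incl_probs))
```

Under equal-probability sampling of n = 2 from N = 10 (so each pi_i = 0.2), a sample with values 3 and 5 yields the total estimate (3 + 5) / 0.2 = 40.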

17.
On pseudo-values for regression analysis in competing risks models
For regression on state and transition probabilities in multi-state models, Andersen et al. (Biometrika 90:15–27, 2003) propose a technique based on jackknife pseudo-values. In this article we analyze the pseudo-values suggested for competing risks models and prove some conjectures regarding their asymptotics (Klein and Andersen, Biometrics 61:223–229, 2005). The key is a second-order von Mises expansion of the Aalen–Johansen estimator, which yields an appropriate representation of the pseudo-values. The method is illustrated with data from a clinical study on total joint replacement. In the application we also consider, for comparison, the estimates obtained with the Fine and Gray approach (J Am Stat Assoc 94:496–509, 1999) and time-dependent solutions of pseudo-value regression equations.
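The jackknife pseudo-values underlying the Andersen et al. technique can be computed generically for any estimator; this sketch uses a plain leave-one-out recomputation rather than the Aalen–Johansen estimator analyzed in the paper.

```python
def jackknife_pseudo_values(sample, estimator):
    """Jackknife pseudo-values: theta_i = n * theta_hat - (n - 1) *
    theta_hat_{-i}, where theta_hat_{-i} is the estimate with
    observation i left out.  In pseudo-value regression these values
    replace the (possibly censored) response in a standard regression
    model.

    sample: list of observations; estimator: callable list -> float."""
    n = len(sample)
    theta_full = estimator(sample)
    return [
        n * theta_full - (n - 1) * estimator(sample[:i] + sample[i + 1:])
        for i in range(n)
    ]
```

A useful sanity check: when the estimator is the sample mean, each pseudo-value reduces exactly to the corresponding observation.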

18.
Use of full Bayesian decision-theoretic approaches to obtain optimal stopping rules for clinical trial designs typically requires the use of Backward Induction. However, the implementation of Backward Induction, apart from simple trial designs, is generally impossible due to analytical and computational difficulties. In this paper we present a numerical approximation of Backward Induction in a multiple-arm clinical trial design comparing k experimental treatments with a standard treatment where patient response is binary. We propose a novel stopping rule, denoted by τ_p, as an approximation of the optimal stopping rule, using the optimal stopping rule of a single-arm clinical trial obtained by Backward Induction. We then present an example of a double-arm (k=2) clinical trial where we use a simulation-based algorithm together with τ_p to estimate the expected utility of continuing and compare our estimates with exact values obtained by an implementation of Backward Induction. For trials with more than two treatment arms, we evaluate τ_p by studying its operating characteristics in a three-arm trial example. Results from these examples show that our approximate trial design has attractive properties and hence offers a relevant solution to the problem posed by Backward Induction.
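Backward induction itself is easy to state for a toy single-arm problem, which also illustrates why the multi-arm state space blows up. This sketch is ours, not the paper's: we assume a known standard success rate P0, a utility equal to the expected number of successes among the remaining patients, and a Beta posterior on the experimental arm.

```python
from functools import lru_cache

P0 = 0.5  # assumed known success rate of the standard treatment

@lru_cache(maxsize=None)
def value(s, f, m):
    """Backward induction for a toy one-arm stopping problem.  State:
    Beta(s, f) posterior on the experimental success rate and m
    patients remaining.  At each stage either stop (switch everyone to
    the standard, worth m * P0 expected successes) or treat one more
    patient on the experimental arm and recurse on the updated
    posterior; the value is the max of the two."""
    if m == 0:
        return 0.0
    stop = m * P0
    p = s / (s + f)  # posterior mean of the experimental success rate
    go = (p * (1.0 + value(s + 1, f, m - 1))
          + (1.0 - p) * value(s, f + 1, m - 1))
    return max(stop, go)
```

Even here the state space is O(m^2) per horizon step and one arm; with k arms the state becomes a product of k posteriors, which is the combinatorial explosion the approximate rule τ_p is designed to avoid.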

19.
Variable selection is an important issue in all regression analysis, and in this paper we discuss it in the context of regression analysis of recurrent event data. Recurrent event data often occur in long-term studies in which individuals may experience the events of interest more than once, and their analysis has recently attracted a great deal of attention (Andersen et al., Statistical models based on counting processes, 1993; Cook and Lawless, Biometrics 52:1311–1323, 1996, The analysis of recurrent event data, 2007; Cook et al., Biometrics 52:557–571, 1996; Lawless and Nadeau, Technometrics 37:158–168, 1995; Lin et al., J R Stat Soc B 69:711–730, 2000). However, there seem to be no established approaches to variable selection for recurrent event data. For this problem, we adopt the idea behind the nonconcave penalized likelihood approach proposed in Fan and Li (J Am Stat Assoc 96:1348–1360, 2001) and develop a nonconcave penalized estimating function approach. The proposed approach selects variables and estimates regression coefficients simultaneously, and an algorithm is presented for this process. We show that the proposed approach performs as well as the oracle procedure in that it yields the estimates one would obtain if the correct submodel were known. Simulation studies conducted to assess the performance of the proposed approach suggest that it works well in practical situations. The proposed methodology is illustrated using data from a chronic granulomatous disease study.

20.
Scale mixtures of normal distributions form a class of symmetric thick-tailed distributions that includes the normal as a special case. In this paper we consider local influence analysis for measurement error models (MEM) in which the random error and the unobserved value of the covariates jointly follow scale mixtures of normal distributions, providing an appealing robust alternative to the usual Gaussian process in measurement error models. In order to avoid difficulties in estimating the parameter of the mixing variable, we fix it in advance, as recommended by Lange et al. (J Am Stat Assoc 84:881–896, 1989) and Berkane et al. (Comput Stat Data Anal 18:255–267, 1994). The local influence method is used to assess the robustness of the parameter estimates under some usual perturbation schemes. However, as the observed log-likelihood associated with this model involves some integrals, Cook's well-known approach may be hard to apply to obtain measures of local influence. Instead, we develop local influence measures following the approach of Zhu and Lee (J R Stat Soc Ser B 63:121–126, 2001), which is based on the EM algorithm. Results obtained from a real data set are reported, illustrating the usefulness of the proposed methodology, its relative simplicity, adaptability and practical usage.
