Similar Articles
20 similar articles found.
1.
A common problem in randomized controlled clinical trials is the optimal assignment of patients to treatment protocols. The traditional optimal design assumes a single criterion, although in reality a clinical trial usually has more than one objective. In this paper, optimal treatment allocation schemes are found for a dual-objective clinical trial with a binary response. A graphical method for finding the optimal strategy is proposed and illustrative examples are discussed.

2.
Studies on event occurrence may be conducted as experiments in which one or more treatment groups are compared to a control group. Most randomized trials are designed with equally sized groups, but this design is not always the best one. The statistical power of the study may be larger with unequal sample sizes, and researchers may want to place more participants in one group relative to the other because of resource constraints or costs. The optimal designs for discrete-time survival endpoints in two-group trials, where different proportions of subjects in the experimental group are taken into account, can be studied using the generalized linear model. Applying a cost function, the optimal combination of the number of subjects and periods in the study and the optimal allocation ratio can be found. It is observed that the ratio of the recruitment costs in the two groups, the ratio of the recruitment cost in the control group to the cost of obtaining a measurement, the size of the treatment effect, and the shape of the survival distribution have the greatest influence on the optimal design.
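The cost-driven allocation idea above can be illustrated with the classic square-root rule. This is a generic sketch, not the paper's derivation for survival endpoints: it assumes a two-arm comparison of means with known outcome standard deviations and known per-subject recruitment costs in each arm.

```python
import math

def optimal_allocation_ratio(sigma_e, sigma_c, cost_e, cost_c):
    """Ratio n_experimental / n_control that minimizes the variance of
    the estimated group difference for a fixed total budget.
    Classic square-root rule: allocate relatively more subjects to the
    cheaper and more variable arm."""
    return (sigma_e / sigma_c) * math.sqrt(cost_c / cost_e)

# Equal variances, experimental arm twice as expensive to recruit:
ratio = optimal_allocation_ratio(sigma_e=1.0, sigma_c=1.0, cost_e=2.0, cost_c=1.0)
print(round(ratio, 3))  # sqrt(1/2) = 0.707 -> fewer subjects in the costly arm
```

With equal costs and equal variances the rule reduces to the familiar 1:1 allocation.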

3.
Pragmatic trials offer practical means of obtaining real-world evidence to help improve decision-making in comparative effectiveness settings. Unfortunately, incomplete adherence is a common problem in pragmatic trials. The methods commonly used in randomized controlled trials often cannot handle the added complexity imposed by incomplete adherence, resulting in biased estimates. Several naive methods and advanced causal inference methods (e.g., inverse probability weighting and instrumental variable-based approaches) have been used in the literature to deal with incomplete adherence. Practitioners and applied researchers are often confused about which method to use in a given setting. This work reviews commonly used statistical methods for dealing with non-adherence, along with their key assumptions, advantages, and limitations, with a particular focus on pragmatic trials. We list the applicable settings for these methods and provide a summary of available software. All methods were applied to two hypothetical datasets to demonstrate how they perform in a given scenario, along with the R code. The key considerations include the type of intervention strategy (point treatment settings, where treatment is administered only once, versus sustained treatment settings, where treatment has to be continued over time) and the availability of data (e.g., the extent of measured or unmeasured covariates associated with adherence, dependent confounding affected by past treatment, and potential violations of assumptions). This study will guide practitioners and applied researchers in using the appropriate statistical method to address incomplete adherence in pragmatic trial settings for both point and sustained treatment strategies.
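As a toy illustration of the inverse probability weighting approach mentioned above, the following sketch reweights adherers by their fitted probability of adhering given a measured covariate. The data are synthetic and hypothetical, not the paper's example datasets, and the propensity model is an assumed simple logistic regression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)                       # measured covariate driving adherence
p_adhere = 1 / (1 + np.exp(-(0.5 + x)))      # adherence probability depends on x
a = rng.binomial(1, p_adhere)                # 1 = adhered to assigned treatment
y = 2.0 * a + 1.0 * x + rng.normal(size=n)   # outcome; true treatment effect is 2

# Naive per-protocol mean among adherers is confounded: adherers have higher x.
naive = y[a == 1].mean()

# IPW (Hajek form): reweight adherers by 1 / P(adherence | x),
# with the propensity fitted by logistic regression.
ps = LogisticRegression().fit(x.reshape(-1, 1), a).predict_proba(x.reshape(-1, 1))[:, 1]
ipw = np.sum(a * y / ps) / np.sum(a / ps)
print(round(naive, 2), round(ipw, 2))  # naive overshoots; IPW lands near 2.0
```

The IPW estimate recovers the mean outcome under full adherence (here 2.0) because adherence is driven only by the measured covariate, i.e., the no-unmeasured-confounding assumption holds by construction.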

4.
We present schemes for the allocation of subjects to treatment groups, in the presence of prognostic factors. The allocations are robust against incorrectly specified regression responses, and against possible heteroscedasticity. Assignment probabilities which minimize the asymptotic variance are obtained. Under certain conditions these are shown to be minimax (with respect to asymptotic mean squared error) as well. We propose a method of sequentially modifying the associated assignment rule, so as to address both variance and bias in finite samples. The resulting scheme is assessed in a simulation study. We find that, relative to common competitors, the robust allocation schemes can result in significant decreases in the mean squared error when the fitted models are biased, at a minimal cost in efficiency when in fact the fitted models are correct.

5.
Just as Bayes extensions of the frequentist optimal allocation design have been developed for the two-group case, we provide a Bayes extension of optimal allocation in the three-group case. We use the optimal allocations derived by Jeon and Hu [Optimal adaptive designs for binary response trials with three treatments. Statist Biopharm Res. 2010;2(3):310–318] and estimate the success probabilities for each treatment arm using a Bayes estimator. We also introduce a natural lead-in design that allows adaptation to begin as early in the trial as possible. Simulation studies show that the Bayesian adaptive designs simultaneously increase the power and the expected number of successfully treated patients compared to the balanced design. Compared to the standard adaptive design, the natural lead-in design introduced in this study produces a higher expected number of successes whilst preserving power.
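A Bayes estimator of a per-arm success probability can be sketched with a conjugate Beta-binomial update. The uniform Beta(1, 1) prior here is an illustrative assumption, not necessarily the prior used by the authors.

```python
def bayes_success_prob(successes, n, a_prior=1.0, b_prior=1.0):
    """Posterior mean of a binomial success probability under a
    Beta(a, b) prior (uniform by default): (s + a) / (n + a + b).
    Shrinks small-sample estimates toward the prior mean, which keeps
    early-trial adaptive allocation from over-reacting to noise."""
    return (successes + a_prior) / (n + a_prior + b_prior)

print(bayes_success_prob(1, 2))  # 0.5: agrees with the raw proportion
print(bayes_success_prob(2, 2))  # 0.75: raw proportion 1.0 shrunk toward 0.5
```

The shrinkage is what makes a lead-in design able to start adapting after very few patients without extreme allocation probabilities.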

6.
The randomized cluster design is typical of studies where the unit of randomization is a cluster of individuals rather than the individual. Evaluating intervention strategies across medical care providers at either an institutional level or a physician group practice level fits the randomized cluster model. Clearly, the analytical approach to such studies must take the unit of randomization and the accompanying intraclass correlation into consideration. We review alternatives to the typical Pearson chi-square analysis and illustrate them. We have written and tested a Fortran program that produces the statistics outlined in this paper. The program, in executable format, is available from the author on request.
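One standard alternative to the naive Pearson chi-square in this setting is to deflate it by the design effect. This is a minimal sketch of that generic variance-inflation correction, not a reimplementation of the authors' Fortran program.

```python
def adjusted_chisq(chisq, avg_cluster_size, icc):
    """Deflate a Pearson chi-square from cluster-randomized binary data
    by the design effect 1 + (m - 1) * ICC, where m is the average
    cluster size, to account for within-cluster correlation."""
    deff = 1 + (avg_cluster_size - 1) * icc
    return chisq / deff

# A chi-square of 6.0 from clusters of 20 with ICC 0.05 is divided by a
# design effect of 1.95 and is no longer significant at the 0.05 level:
print(round(adjusted_chisq(6.0, 20, 0.05), 3))  # 3.077
```

Even a modest ICC inflates the variance substantially when clusters are large, which is why ignoring clustering produces anti-conservative tests.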

7.
In longitudinal studies with a binary response, it is often of interest to estimate the percentage of positive responses at each time point and the percentage of subjects with at least one positive response by each time point. When data are missing, the conventional method based on observed percentages can produce erroneous estimates. This study demonstrates two methods, based on the expectation-maximization (EM) and data augmentation (DA) algorithms, for estimating the marginal and cumulative probabilities from incomplete longitudinal binary response data. Both methods provide unbiased estimates when the missing at random (MAR) assumption holds. Sensitivity analyses have been performed for cases where the MAR assumption is in question.

8.
Crossover designs are popular in early phases of clinical trials and in bioavailability and bioequivalence studies. Assessment of carryover effects, in addition to treatment effects, is a critical issue in crossover trials. The observed data from a crossover trial can be incomplete because of potential dropouts. A joint model for analyzing incomplete data from crossover trials is proposed in this article; the model includes a measurement model and an outcome-dependent informative model for the dropout process. The informative-dropout model is compared with the ignorable-dropout model, since both arise as nested subcases of the proposed joint model. Markov chain sampling methods are used for Bayesian analysis of this model. The joint model is used to analyze depression score data from a clinical trial in women with late luteal phase dysphoric disorder. Interestingly, the carryover effect is found to be strong in the informative-dropout model, but it is less significant when dropout is considered ignorable.

9.
A new analytic statistical technique for predictive event modeling in ongoing multicenter clinical trials with waiting time to response is developed. It allows for the predictive mean and predictive bounds for the number of events to be constructed over time, accounting for the newly recruited patients and patients already at risk in the trial, and for different recruitment scenarios. For modeling patient recruitment, an advanced Poisson-gamma model is used, which accounts for the variation in recruitment over time, the variation in recruitment rates between different centers and the opening or closing of some centers in the future. A few models for event appearance allowing for 'recurrence', 'death' and 'lost-to-follow-up' events and using finite Markov chains in continuous time are considered. To predict the number of future events over time for an ongoing trial at some interim time, the parameters of the recruitment and event models are estimated using current data and then the predictive recruitment rates in each center are adjusted using individual data and Bayesian re-estimation. For a typical scenario (continue to recruit during some time interval, then stop recruitment and wait until a particular number of events happens), the closed-form expressions for the predictive mean and predictive bounds of the number of events at any future time point are derived under the assumptions of Markovian behavior of the event progression. The technique is efficiently applied to modeling different scenarios for some ongoing oncology trials. Case studies are considered.
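The Poisson-gamma recruitment idea admits a simple closed-form predictive count: if each center's rate is gamma-distributed, the count over a fixed window is negative binomial. The sketch below makes the simplifying assumptions that all centers share one gamma prior and open at time zero; the paper's model is considerably richer (time-varying rates, center opening and closing, Bayesian re-estimation).

```python
from scipy.stats import nbinom

def predictive_recruitment(alpha, beta, t, n_centers, quantile=0.975):
    """Poisson-gamma recruitment sketch: each center's rate is
    Gamma(shape=alpha, rate=beta), so its count over time t is negative
    binomial with r = alpha and p = beta / (beta + t). The total over
    independent centers is negative binomial with r = n_centers * alpha."""
    r, p = n_centers * alpha, beta / (beta + t)
    mean = n_centers * alpha * t / beta
    upper = nbinom.ppf(quantile, r, p)   # predictive upper bound on the count
    return mean, upper

mean, upper = predictive_recruitment(alpha=2.0, beta=1.0, t=12.0, n_centers=10)
print(mean)  # 240.0 expected patients over the window
```

The gamma mixing is what widens the predictive bounds relative to a plain Poisson model with the same mean, reflecting between-center rate heterogeneity.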

10.
A general family of dynamic treatment allocations is defined, and it is shown that the permuted block procedure (Zelen 1974) and the Begg and Iglewicz (1980) method are extreme choices in this family. A compromise method is suggested. The framework of this general family allows the relationships between these methods to be examined. By means of a simulation study, these three methods plus complete randomization are compared in terms of efficiency and balance. The compromise method is shown to have good overall properties. In addition, an illustrative example is given.
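For reference, the permuted block procedure mentioned above (one extreme of the family) can be sketched in a few lines. Block size 4 and two arms are illustrative choices.

```python
import random

def permuted_blocks(n_subjects, block_size=4, seed=0):
    """Permuted-block randomization for two arms: each block contains
    equal numbers of 'A' and 'B' in random order, so the running
    imbalance never exceeds block_size / 2 at any point."""
    rng = random.Random(seed)
    seq = []
    while len(seq) < n_subjects:
        block = ['A', 'B'] * (block_size // 2)
        rng.shuffle(block)
        seq.extend(block)
    return seq[:n_subjects]

seq = permuted_blocks(12)
print(seq.count('A'), seq.count('B'))  # 6 6
```

The tight balance guarantee is also the procedure's weakness: the last assignment in each block is fully predictable, which is one motivation for compromise methods.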

11.
To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify designs that are optimal with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures of imbalance and three measures of randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each design. As measured by these two criteria, the performances of the 14 designs lie in a closed region whose upper boundary (worst case) is given by Efron's biased coin design (BCD) and whose lower boundary (best case) by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide smaller imbalance and higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization designs is possible based on quantified evaluation of imbalance and randomness. By these criteria, the BSD, Chen's biased coin design with imbalance tolerance, and Chen's Ehrenfest urn design perform better than the popular permuted block design, Efron's BCD, and Wei's urn design.
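The boundary designs can be simulated directly. Below is a sketch of the big stick design with an illustrative imbalance cap of 3, reporting the two trade-off measures discussed above; scoring the correct-guess rate via a convergence guessing strategy (guess the lagging arm, guess at random on ties) is an assumption about how CG is computed, not necessarily the paper's exact definition.

```python
import random

def big_stick(n, cap=3, seed=1):
    """Soares-Wu big stick design: assign by fair coin unless the
    absolute imbalance |n_A - n_B| has reached the cap, in which case
    the lagging arm is forced. Returns (max |imbalance|, correct-guess
    rate under a convergence guessing strategy)."""
    rng = random.Random(seed)
    diff, max_imb, correct = 0, 0, 0   # diff = n_A - n_B
    for _ in range(n):
        if diff >= cap:
            arm = 'B'                  # forced assignment to lagging arm
        elif diff <= -cap:
            arm = 'A'
        else:
            arm = rng.choice('AB')     # fair coin otherwise
        guess = 'B' if diff > 0 else 'A' if diff < 0 else rng.choice('AB')
        correct += (guess == arm)
        diff += 1 if arm == 'A' else -1
        max_imb = max(max_imb, abs(diff))
    return max_imb, correct / n

print(big_stick(300))
```

Every non-forced step is a fair coin, which keeps the CG rate low while the cap bounds the imbalance; this is the combination that places the BSD on the favorable boundary.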

12.
Power analysis for multi-center randomized controlled trials is quite difficult to perform for non-continuous responses when site differences are modeled by random effects using the generalized linear mixed-effects model (GLMM). First, it is not possible to construct power functions analytically, because of the extreme complexity of the sampling distribution of parameter estimates. Second, Monte Carlo (MC) simulation, a popular option for estimating power for complex models, does not work in the current context because of a lack of methods and software packages that provide reliable estimates when fitting such GLMMs. For example, at the time of writing, even statistical packages from software giants like SAS do not provide reliable estimates. Another major limitation of MC simulation is the lengthy running time for complex models such as the GLMM, particularly when estimating power for multiple scenarios of interest. We present a new approach to address these limitations. The proposed approach defines a marginal model to approximate the GLMM and estimates power without relying on MC simulation. The approach is illustrated with both real and simulated data, with the simulation study demonstrating good performance of the method.

13.
Concerns about potentially misleading reporting of pharmaceutical industry research have surfaced many times. The potential for duality (and thereby conflict) of interest is only too clear when one considers the sums of money required for the discovery, development, and commercialization of new medicines. As the ability of major, mid-size, and small pharmaceutical companies to innovate has waned, as evidenced by the seemingly relentless year-on-year decline in the number of new medicines approved by the Food and Drug Administration and the European Medicines Agency, not only has the cost per newly approved medicine risen: so too has public and media concern about the extent to which the pharmaceutical industry is open and honest about the efficacy, safety, and quality of the drugs we manufacture and sell. In 2005, an editorial in the Journal of the American Medical Association made clear that, so great was their concern about misleading reporting of industry-sponsored studies, henceforth no such article would be published unless it was accompanied by an independent statistical analysis. We examine the precursors to this editorial, as well as its immediate and lasting effects for statisticians, for the manner in which statistical analysis is carried out, and for the industry more generally.

14.
It is well known that Gaussian maximum likelihood estimates of time series models are not robust. In this paper we prove that this is also the case for Generalized Autoregressive Conditional Heteroscedastic (GARCH) models. By expressing the Gaussian maximum likelihood estimates as Ψ-estimates and by assuming the existence of a contaminated process, we prove that they possess a zero breakdown point and unbounded influence curves. By simulating GARCH processes under several proportions of contamination, we assess how biased the maximum likelihood estimates may become and compare these results to a robust alternative. Student's t maximum likelihood estimates of GARCH models are also considered.
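The sensitivity of the Gaussian GARCH likelihood to a single outlier is easy to see numerically. This sketch uses synthetic returns and fixed illustrative parameters (not fitted estimates); it only evaluates the likelihood, it does not reproduce the paper's breakdown-point proof.

```python
import numpy as np

def garch11_nll(params, returns):
    """Gaussian negative log-likelihood of a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
    A single large return enters through r_{t-1}^2 and r_t^2 / sigma2_t,
    which is the mechanism behind the unbounded influence curve."""
    omega, alpha, beta = params
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()               # common initialization choice
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + returns ** 2 / sigma2)

rng = np.random.default_rng(0)
r = rng.normal(scale=0.01, size=500)        # clean i.i.d. returns
clean = garch11_nll((1e-5, 0.05, 0.9), r)
r_contaminated = r.copy()
r_contaminated[250] = 0.20                  # one 20-sigma outlier
print(garch11_nll((1e-5, 0.05, 0.9), r_contaminated) > clean)  # True
```

A single contaminated observation shifts the objective by an arbitrarily large amount, so the minimizer can be dragged arbitrarily far, consistent with the zero breakdown point result.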

15.
Investigators who manage multicenter clinical trials need to pay careful attention to patterns of subject accrual, and the prediction of activation time for pending centers is potentially crucial for subject accrual prediction. We propose a Bayesian hierarchical model to predict subject accrual for multicenter clinical trials in which center activation times vary. We define center activation time as the time at which a center can begin enrolling patients in the trial. The difference in activation times between centers is assumed to follow an exponential distribution, and the model of subject accrual integrates prior information for the study with actual enrollment progress. We apply our proposed Bayesian multicenter accrual model to two multicenter clinical studies. The first is the PAIN-CONTRoLS study, a multicenter clinical trial with a goal of activating 40 centers and enrolling 400 patients within 104 weeks. The second is the HOBIT trial, a multicenter clinical trial with a goal of activating 14 centers and enrolling 200 subjects within 36 months. In summary, the Bayesian multicenter accrual model provides a prediction of subject accrual while accounting for both center- and individual patient-level variation.

16.
Sequential analyses in clinical trials have ethical and economic advantages over fixed sample size methods. The sequential probability ratio test (SPRT) is a hypothesis testing procedure which evaluates data as it is collected. The original SPRT was developed by Wald for one-parameter families of distributions and later extended by Bartlett to handle the case of nuisance parameters. However, Bartlett's SPRT requires independent and identically distributed observations. In this paper we show that Bartlett's SPRT can be applied to generalized linear model (GLM) contexts. Then we propose an SPRT analysis methodology for a Poisson generalized linear mixed model (GLMM) that is suitable for our application to the design of a multicenter randomized clinical trial that compares two preventive treatments for surgical site infections. We validate the methodology with a simulation study that includes a comparison to Neyman–Pearson and Bayesian fixed sample size test designs and the Wald SPRT.
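Wald's original SPRT for a simple Bernoulli hypothesis pair, which underlies the extensions discussed above, fits in a few lines. The error rates 0.05 and 0.2 are illustrative choices, and the boundaries use Wald's standard approximations.

```python
import math

def sprt_bernoulli(data, p0, p1, alpha=0.05, beta=0.2):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 with Bernoulli data.
    Accumulates the log-likelihood ratio and stops at Wald's approximate
    boundaries log(beta / (1 - alpha)) and log((1 - beta) / alpha)."""
    lo, hi = math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)
    llr = 0.0
    for i, x in enumerate(data, start=1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= hi:
            return 'accept H1', i
        if llr <= lo:
            return 'accept H0', i
    return 'continue', len(data)

print(sprt_bernoulli([1] * 12, p0=0.5, p1=0.8))  # ('accept H1', 6)
```

The ethical and economic appeal is visible here: a run of successes triggers a decision after 6 observations rather than a pre-fixed sample size.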

17.
In this paper, the problem of obtaining efficient block designs for an incomplete factorial treatment structure with two factors, excluding one treatment combination, for the estimation of dual versus single treatment contrasts is considered. The designs have been obtained using A-optimal completely randomized designs and a modified strongest treatment interchange algorithm. A catalog of efficient block designs has been prepared for m1 = 3, 4 and m2 = 2, b ≤ 10 and k ≤ 9, and for m1 = 3, 4 and m2 = 3, 4, b ≤ 10 and k ≤ 10.

18.
Dose proportionality/linearity is a desirable property in pharmacokinetic studies. Various methods have been proposed for its assessment. When dose proportionality is not established, it is of interest to evaluate the degree of departure from dose linearity. In this paper, we propose a measure of departure from dose linearity and derive an asymptotic test under a repeated measures incomplete block design using a slope approach. Simulation studies show that the proposed method has a satisfactory small sample performance in terms of size and power.
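A common way to quantify departure from dose proportionality, related to the slope approach mentioned above, is the power model AUC = a * dose^b, i.e., ln(AUC) = ln a + b * ln(dose), with b = 1 under exact proportionality. This is a generic sketch, not the paper's asymptotic test, and the AUC values below are hypothetical.

```python
import numpy as np

# Hypothetical exposure data: four dose levels and observed AUC values.
dose = np.array([10.0, 20.0, 40.0, 80.0])
auc = np.array([1.1, 2.0, 4.3, 7.6])

# Fit ln(AUC) = ln(a) + b * ln(dose); the slope b measures the degree
# of departure from dose linearity (b = 1 means exact proportionality).
b, log_a = np.polyfit(np.log(dose), np.log(auc), 1)
print(round(b, 2))  # a slope near 1 suggests approximate dose linearity
```

Framing the question as a slope makes "degree of departure" a single interpretable number, e.g., via a confidence interval for b around 1.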

19.
20.
This article develops a rank-based inference using a dispersion function for repeated measures incomplete block designs (IBD) with baseline values as covariates. Scores, Wald-type, and drop-in-dispersion tests are developed for testing that the slope equals zero and for equality of treatment effects. Multiple comparison procedures are also developed using R-estimators, which are obtained by minimizing a piece-wise linear dispersion function. A consistent estimator of a scale parameter, which appears in the test statistic as a standardizing constant, is discussed. A data set from pharmaceutical research, which compares 12 μg and 24 μg formoterol (an asthma drug) solution aerosol with a placebo treatment, is analyzed using the results of this article. Part of this work was completed when the author was a faculty member at Worcester Polytechnic Institute, Worcester, Massachusetts. The views expressed in this article are those of the author and not those of the United States Food and Drug Administration.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号