Similar Articles
20 similar articles found
1.
Correlated binary data arise in many ophthalmological and otolaryngological clinical trials. Testing the homogeneity of prevalences among different groups is an important issue when conducting these trials. The equal correlation coefficients model proposed by Donner in 1989 is a popular model for handling correlated binary data. The asymptotic chi-square test works well when the sample size is large, but it can fail to maintain the type I error rate when the sample size is relatively small. In this paper, we propose several exact methods to deal with small-sample scenarios. Their performances are compared with respect to type I error rate and power. The ‘M approach’ and the ‘E + M approach’ seem to outperform the others. A real-world example is given to further explain how these approaches work. Finally, the computational efficiency of the exact methods is discussed as a pressing issue for future work.

2.
Traditionally, the bioequivalence of a generic drug with the innovator's product is assessed by comparing their pharmacokinetic profiles determined from blood or plasma concentration-time curves. This method may only be applicable to formulations where blood drug or metabolite levels adequately characterize absorption and metabolism. For non-systemic drugs, characterized by the lack of systemic presence, such as metered dose inhalers (MDI), anti-ulcer agents, and topical and vaginal antifungals, a new definition of therapeutic equivalence and new acceptance criteria should be used. When the pharmacologic effects of the drugs can be easily measured, pharmacodynamic effect studies can be used to assess the therapeutic equivalence of non-systemic drugs. When analytical methods or other tests cannot be developed to permit use of the pharmacodynamic method, clinical trials comparing one or several clinical endpoints may be the only suitable method for establishing therapeutic equivalence. In this paper we evaluate, by Monte-Carlo simulation, the fixed-sample performance of some two one-sided tests (TOST) procedures which may be used to assess the therapeutic equivalence of non-systemic drugs with binary clinical endpoints. Formulae for sample size determination for therapeutic equivalence clinical trials are also given.
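As a rough illustration of the sample-size side of such a design, the sketch below uses a common normal-approximation formula for a TOST equivalence trial with a binary endpoint; the function name, the equal-true-rates assumption, and the example numbers are illustrative and not taken from the paper:

```python
import math
from statistics import NormalDist

def tost_equivalence_n(p, delta, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a TOST equivalence trial with a
    binary endpoint, assuming both groups share the true response rate p and
    the equivalence margin is +/- delta (normal approximation)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha)              # level of each one-sided test
    z_beta = z(1 - (1 - power) / 2)     # beta split across the two one-sided tests
    n = (z_alpha + z_beta) ** 2 * 2 * p * (1 - p) / delta ** 2
    return math.ceil(n)
```

For example, with a 60% response rate and a 15-point equivalence margin, this approximation calls for roughly 183 patients per group at 80% power.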

3.
Phase II clinical trials often use a binary outcome, so assessing the success rate of the treatment is a primary objective. Reporting confidence intervals is common practice for clinical trials. Due to the group sequential design and relatively small sample sizes, many existing confidence intervals for phase II trials are overly conservative. In this paper, we propose a class of confidence intervals for binary outcomes. We also provide a general theory for assessing the coverage of confidence intervals for discrete distributions, and hence make recommendations for choosing the parameter used in calculating the confidence interval. The proposed method is applied to Simon's [14] optimal two-stage design with numerical studies. The proposed method can be viewed as an alternative approach to confidence intervals for discrete distributions in general.
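For context, a standard fixed-sample interval for a binomial success rate can be computed as below; this is the classical Wilson score interval, not the class of intervals proposed in the paper, and it ignores the two-stage stopping rule:

```python
import math
from statistics import NormalDist

def wilson_interval(successes, n, conf=0.95):
    """Wilson score confidence interval for a single binomial proportion."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    phat = successes / n
    centre = phat + z * z / (2 * n)
    half = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (centre - half) / denom, (centre + half) / denom
```

For 8 responses in 10 patients this gives roughly (0.49, 0.94), noticeably wider than a naive Wald interval at such a small sample size.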

4.
In the absence of placebo-controlled trials, the efficacy of a test treatment can alternatively be examined by showing its non-inferiority to an active control; that is, the test treatment is not worse than the active control by a pre-specified margin. The margin is based on the effect of the active control over placebo in historical studies. In other words, the non-inferiority setup involves a network of direct and indirect comparisons between the test treatment, active controls, and placebo. Given this framework, we consider a Bayesian network meta-analysis that models the uncertainty and heterogeneity of the historical trials into the non-inferiority trial in a data-driven manner through the use of the Dirichlet process and power priors. Depending on whether placebo was present in the historical trials, two cases of non-inferiority testing are discussed that are analogues of the synthesis and fixed-margin approaches. In each of these cases, the model provides a more reliable estimate of the control given its effect in other trials in the network, and, in the case where placebo was only present in the historical trials, the model can predict the effect of the test treatment over placebo as if placebo had been present in the non-inferiority trial. It can further answer other questions of interest, such as the comparative effectiveness of the test treatment among its comparators. More importantly, the model provides an opportunity for disproportionate randomization or the use of small sample sizes by allowing borrowing of information from a network of trials to draw explicit conclusions on non-inferiority. Copyright © 2015 John Wiley & Sons, Ltd.

5.
A new method to perform meta-analysis of controlled clinical trials with a binary response variable is developed using a Bayesian approach. It consists of three parts: (1) for each trial, the risk difference (the proportion of successes in the treated group minus the proportion of successes in the control group) is estimated; (2) the homogeneity of the risk difference among the different trials is tested; and (3) the hypothesis that the effect of the treatment for the homogeneous pool of trials is greater than or equal to a given fixed constant is tested. The performance of the Bayesian procedure for testing the homogeneity of the risk difference among trials is compared with the chi-square test proposed by DerSimonian and Laird (Controlled Clinical Trials 7, 177-188, 1986) by means of pseudo-random simulation. The conclusion was that the Bayes test is more reliable, in both its exact and asymptotic versions, since it makes fewer decision errors than the chi-square test. As an illustration, the Bayesian method is applied to data on chemotherapeutic prophylaxis of superficial bladder cancer.
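For context, the classical chi-square homogeneity test that the Bayesian procedure is benchmarked against can be sketched as a weighted Q statistic on per-trial risk differences; the trial data and the critical value in the example are illustrative, and this is the frequentist comparator, not the Bayesian test itself:

```python
def risk_difference_homogeneity(trials):
    """Weighted Q statistic for homogeneity of risk differences across trials.
    trials: list of (successes_treated, n_treated, successes_control, n_control).
    Under homogeneity, Q is approximately chi-square with len(trials) - 1 df."""
    diffs, weights = [], []
    for x_t, n_t, x_c, n_c in trials:
        p_t, p_c = x_t / n_t, x_c / n_c
        var = p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c
        diffs.append(p_t - p_c)
        weights.append(1.0 / var)   # inverse-variance weight
    pooled = sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
    q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, diffs))
    return q, pooled
```

Three trials with similar risk differences give a small Q, far below 5.99, the 5% critical value of a chi-square distribution with 2 degrees of freedom.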

6.
Multi-country randomised clinical trials (MRCTs) are common in the medical literature, and their interpretation has been the subject of extensive recent discussion. In many MRCTs, an evaluation of treatment effect homogeneity across countries or regions is conducted. Subgroup analysis principles require a significant test of interaction in order to claim heterogeneity of treatment effect across subgroups, such as countries in an MRCT. As clinical trials are typically underpowered for tests of interaction, overly optimistic expectations of treatment effect homogeneity can lead researchers, regulators and other stakeholders to over-interpret apparent differences between subgroups even when heterogeneity tests are non-significant. In this paper, we consider some exploratory analysis tools to address this issue. We present three measures derived using the theory of order statistics, which can be used to understand the magnitude and the nature of the variation in treatment effects that can arise merely as an artefact of chance. These measures are not intended to replace a formal test of interaction but instead provide non-inferential visual aids, which allow comparison of the observed and expected differences between regions or other subgroups and are a useful supplement to a formal test of interaction. We discuss how our methodology differs from recently published methods addressing the same issue. A case study of our approach is presented using data from the Study of Platelet Inhibition and Patient Outcomes (PLATO), which was a large cardiovascular MRCT that has been the subject of controversy in the literature. An R package is available that implements the proposed methods. Copyright © 2014 John Wiley & Sons, Ltd.
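The chance-variation point can be illustrated with a small simulation: even when the true treatment effect is identical in every region, the observed region-level estimates spread apart. This mirrors only the order-statistics intuition; the function and settings below are illustrative and are not the paper's three measures:

```python
import random

def expected_chance_spread(n_regions, se, nsim=20000, seed=0):
    """Monte-Carlo expected range (max - min) of region-level treatment effect
    estimates when every region shares the same true effect and each estimate
    is normal with standard error se: the spread is purely an artefact of chance."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(nsim):
        estimates = [rng.gauss(0.0, se) for _ in range(n_regions)]
        total += max(estimates) - min(estimates)
    return total / nsim
```

With five regions and unit standard error, the expected chance spread is already about 2.3 standard errors, and it grows with the number of regions, which is why apparent regional differences should not be over-interpreted.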

7.
In randomized clinical trials or observational studies, subjects are recruited at multiple treating sites. Factors that vary across sites may influence outcomes and therefore need to be taken into account to obtain better results. We apply the accelerated failure time (AFT) model with linear mixed effects to analyze failure time data, accounting for correlations between outcomes. Specifically, we use a Bayesian approach to fit the data, computing the regression parameters by a Gibbs sampler combined with the Buckley-James method. This approach is compared with the marginal independence approach and other methods through simulations and an application to a real example.

8.
In magazine advertisements for new drugs, it is common to see summary tables that compare the relative frequency of several side-effects for the drug and for a placebo, based on results from placebo-controlled clinical trials. The paper summarizes ways to conduct a global test of equality of the vector of population proportions for the drug and the vector of population proportions for the placebo. For multivariate normal responses, the Hotelling T²-test is a well-known method for testing equality of a vector of means for two independent samples. The tests in the paper are analogues of this test for vectors of binary responses. The likelihood ratio tests can be computationally intensive or have poor asymptotic performance. Simple quadratic forms comparing the two vectors provide alternative tests. Much better performance results from using a score-type version with a null-estimated covariance matrix than from the sample covariance matrix that applies with an ordinary Wald test. For either type of statistic, asymptotic inference is often inadequate, so we also present alternative, exact permutation tests. Follow-up inferences are also discussed, and our methods are applied to safety data from a phase II clinical trial.

9.
We consider the square contingency tables which arise when the same method of classification is applied twice. The hypothesis of marginal homogeneity is then relevant and can be tested by various methods. Models are discussed which contain marginal homogeneity as a special case. They include a class based on univariate and bivariate Dirichlet distributions. The question of ordered categories is briefly discussed. Applications are made to data on unaided distance vision.

10.
Immuno‐oncology has emerged as an exciting new approach to cancer treatment. Common immunotherapy approaches include cancer vaccine, effector cell therapy, and T‐cell–stimulating antibody. Checkpoint inhibitors such as cytotoxic T lymphocyte–associated antigen 4 and programmed death‐1/L1 antagonists have shown promising results in multiple indications in solid tumors and hematology. However, the mechanisms of action of these novel drugs pose unique statistical challenges in the accurate evaluation of clinical safety and efficacy, including late‐onset toxicity, dose optimization, evaluation of combination agents, pseudoprogression, and delayed and lasting clinical activity. Traditional statistical methods may not be the most accurate or efficient. It is highly desirable to develop the most suitable statistical methodologies and tools to efficiently investigate cancer immunotherapies. In this paper, we summarize these issues and discuss alternative methods to meet the challenges in the clinical development of these novel agents. For safety evaluation and dose‐finding trials, we recommend the use of a time‐to‐event model‐based design to handle late toxicities, a simple 3‐step procedure for dose optimization, and flexible rule‐based or model‐based designs for combination agents. For efficacy evaluation, we discuss alternative endpoints/designs/tests including the time‐specific probability endpoint, the restricted mean survival time, the generalized pairwise comparison method, the immune‐related response criteria, and the weighted log‐rank or weighted Kaplan‐Meier test. The benefits and limitations of these methods are discussed, and some recommendations are provided for applied researchers to implement these methods in clinical practice.

11.
Two-treatment multicentre clinical trials are very common in practice. In cases where a non-parametric analysis is appropriate, a rank-sum test for grouped data called the van Elteren test can be applied. As an alternative approach, one may apply a combination test such as Fisher's combination test or the inverse normal combination test (also called Liptak's method) in order to combine centre-specific P-values. If there are no ties and no differences between centres with regard to the groups’ sample sizes, the inverse normal combination test using centre-specific Wilcoxon rank-sum tests is equivalent to the van Elteren test. In this paper, the van Elteren test is compared with Fisher's combination test based on Wilcoxon rank-sum tests. Data from two multicentre trials as well as simulated data indicate that Fisher's combination of P-values is more powerful than the van Elteren test in realistic scenarios, i.e. when there are large differences between the centres’ P-values, some quantitative interaction between treatment and centre, and/or heterogeneity in variability. The combination approach opens the possibility of using statistics other than the rank sum, and it is also a suitable method for more complicated designs, e.g. when covariates such as age or gender are included in the analysis.
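The Fisher combination test referred to above can be sketched directly; for even degrees of freedom the chi-square tail probability has a closed form, so no external library is needed (the centre-level p-values in the example are illustrative):

```python
import math

def fisher_combination(pvalues):
    """Fisher's combination of independent one-sided p-values:
    X = -2 * sum(ln p_i) is chi-square with 2k df under the global null.
    The tail probability uses the closed form available for even df."""
    k = len(pvalues)
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    half = stat / 2.0
    # P(chi2_{2k} > x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    p_combined = math.exp(-half) * sum(half ** j / math.factorial(j) for j in range(k))
    return stat, p_combined
```

Combining centre-level p-values of, say, 0.01 and 0.04 yields a combined p-value of about 0.0035, showing how a single very small centre-level p-value dominates the Fisher statistic.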

12.
In organ transplantation, placebo-controlled clinical trials are not possible for ethical reasons, and hence non-inferiority trials are used to evaluate new drugs. Patients with a transplanted kidney typically receive three to four immunosuppressant drugs to prevent organ rejection. In the described case of a non-inferiority trial for one of these immunosuppressants, the dose is changed, and another is replaced by an investigational drug. This test regimen is compared with the active control regimen. Justification for the non-inferiority margin is challenging as the putative placebo has never been studied in a clinical trial. We propose the use of a random-effect meta-regression, where each immunosuppressant component of the regimen enters as a covariate. This allows us to make inference on the difference between the putative placebo and the active control. From this, various methods can then be used to derive the non-inferiority margin. A hybrid of the 95/95 and synthesis approach is suggested. Data from 51 trials with a total of 17,002 patients were used in the meta-regression. Our approach was motivated by a recent large confirmatory trial in kidney transplantation. The results and the methodological documents of this evaluation were submitted to the Food and Drug Administration. The Food and Drug Administration accepted our proposed non-inferiority margin and our rationale.

13.
A late-stage clinical development program typically contains multiple trials. Conventionally, the program's success or failure may not be known until the completion of all trials. Nowadays, interim analyses are often used to allow evaluation for early success and/or futility for each individual study by calculating conditional power, predictive power and other indexes. This presents a good opportunity to estimate the probability of program success (POPS) for the entire clinical development program earlier. The sponsor may abandon the program early if the estimated POPS is very low, thereby permitting resource savings and reallocation to other products. We provide a method to calculate the probability of success (POS) at an individual study level and also the POPS for clinical programs with multiple trials with binary outcomes. Methods for calculating variation and confidence measures of POS and POPS, and the timing of interim analyses, are discussed and evaluated through simulations. We also illustrate our approaches retrospectively on historical data from a completed clinical program for depression. Copyright © 2015 John Wiley & Sons, Ltd.
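A minimal Monte-Carlo sketch of a study-level POS for a binary endpoint: simulate the trial under assumed true response rates and count how often a one-sided pooled z-test succeeds. The design values are illustrative, and multiplying study-level POS values into a program-level POPS assumes independent trials, a simplification rather than the paper's method:

```python
import math
import random
from statistics import NormalDist

def study_pos(p_trt, p_ctl, n_per_arm, alpha=0.025, nsim=5000, seed=1):
    """Monte-Carlo probability that a two-arm trial with binary outcomes
    yields a significant one-sided pooled z-test, given assumed true rates."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha)
    wins = 0
    for _ in range(nsim):
        x_t = sum(rng.random() < p_trt for _ in range(n_per_arm))
        x_c = sum(rng.random() < p_ctl for _ in range(n_per_arm))
        pooled = (x_t + x_c) / (2 * n_per_arm)
        se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
        if se > 0 and (x_t - x_c) / n_per_arm / se > z_crit:
            wins += 1
    return wins / nsim

# Program-level POPS for two trials, under a simplifying independence assumption:
# pops = study_pos(0.6, 0.4, 100) * study_pos(0.55, 0.4, 120)
```

Under a 0.6 vs 0.4 response-rate assumption with 100 patients per arm, the simulated study-level POS is roughly the analytic power of the z-test, around 0.8.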

14.
This paper is concerned with the problem of simultaneously monitoring the process mean and process variability of continuous production processes using combined Shewhart-cumulative score (cuscore) quality control procedures developed by Ncube and Woodall (1984). Two approaches are developed and their properties are investigated. One method uses two separate Shewhart-cuscore control charts, one for detecting shifts in the process mean and the other for detecting shifts in process variability. The other method uses a single combined statistic which is sensitive to shifts in both the mean and the variance. Each procedure is compared to the corresponding Shewhart schemes. It is shown by average run length calculations that the proposed Shewhart-cuscore schemes are considerably more efficient than the comparative Shewhart procedures for certain shifts in the process mean and process variability when the underlying process control variable is assumed to be normally distributed.

15.
Protocol amendments are often necessary in clinical trials. They can change the entry criteria and, therefore, the population. Simply analysing the pooled data is not acceptable. Instead, each phase should be analysed separately and a combination test such as Fisher's test should be applied to the resulting p-values. In this situation, an asymmetric decision rule is not appropriate. Therefore, we propose a modification of Bauer and Köhne's test. We compare this new test with the tests of Liptak, Fisher, Bauer/Köhne and Edgington. In the case of differences in variance only, or only small differences in means, Liptak's Z-score approach performs best, while the new test keeps up with the rest and is in most cases slightly superior to them. In other situations, the new test and the Z-score approach are not preferable. However, amendments are usually not expected to cause large differences between the populations, and in that case the new method is a recommendable alternative.
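For reference, the Liptak (inverse normal) Z-score combination that the new test is compared against can be sketched with the standard library; equal weights are used here as an illustrative default:

```python
import math
from statistics import NormalDist

def inverse_normal_combination(pvalues, weights=None):
    """Liptak's inverse normal combination of independent one-sided p-values:
    Z = sum(w_i * Phi^{-1}(1 - p_i)) / sqrt(sum(w_i^2)), referred to N(0, 1)."""
    nd = NormalDist()
    if weights is None:
        weights = [1.0] * len(pvalues)
    z = sum(w * nd.inv_cdf(1 - p) for w, p in zip(weights, pvalues))
    z /= math.sqrt(sum(w * w for w in weights))
    return 1.0 - nd.cdf(z)
```

Two phases each yielding p = 0.05 combine to roughly p = 0.01; unlike Fisher's product statistic, the Z-score approach weighs the phases symmetrically, which is the sense in which its decision rule is not asymmetric.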

16.
This paper deals with the analysis of randomization effects in multi‐centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre‐stratified block‐permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson‐gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed‐form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre‐stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
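The Poisson-gamma recruitment model can be sketched with the standard library; Python has no built-in Poisson sampler, so a simple Knuth sampler is included (adequate for moderate rates), and all parameter values below are illustrative rather than taken from the paper:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's Poisson sampler: count uniform draws until their running
    product falls below exp(-lam). Adequate for moderate rates."""
    limit = math.exp(-lam)
    k, prod = 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def simulate_recruitment(n_centres, alpha, beta, horizon, rng):
    """Total patients recruited over `horizon` time units: each centre's rate
    is drawn from Gamma(alpha, beta) with mean alpha * beta, and arrivals at
    that centre then follow a Poisson process with the sampled rate."""
    total = 0
    for _ in range(n_centres):
        rate = rng.gammavariate(alpha, beta)
        total += poisson_sample(rate * horizon, rng)
    return total
```

With 20 centres, Gamma(2, 0.5) rates (one patient per centre per unit time on average) and a 12-unit recruitment window, about 240 patients are expected in total, with the gamma mixing inflating the variance well beyond the pure-Poisson level.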

17.
Phase I clinical trials aim to identify a maximum tolerated dose (MTD), the highest possible dose that does not cause an unacceptable amount of toxicity in the patients. In trials of combination therapies, however, many different dose combinations may have a similar probability of causing a dose‐limiting toxicity, and hence, a number of MTDs may exist. Furthermore, escalation strategies in combination trials are more complex, with possible escalation/de‐escalation of either or both drugs. This paper investigates the properties of two existing proposed Bayesian adaptive models for combination therapy dose‐escalation when a number of different escalation strategies are applied. We assess operating characteristics through a series of simulation studies and show that strategies that only allow ‘non‐diagonal’ moves in the escalation process (that is, both drugs cannot increase simultaneously) are inefficient and identify fewer MTDs for Phase II comparisons. Such strategies tend to escalate a single agent first while keeping the other agent fixed, which can be a severe restriction when exploring dose surfaces using a limited sample size. Meanwhile, escalation designs based on Bayesian D‐optimality allow more varied experimentation around the dose space and, consequently, are better at identifying more MTDs. We argue that for Phase I combination trials it is sensible to take forward a number of identified MTDs for Phase II experimentation so that their efficacy can be directly compared. Researchers, therefore, need to carefully consider the escalation strategy and model that best allows the identification of these MTDs. Copyright © 2012 John Wiley & Sons, Ltd.

18.
Fisher's least significant difference (LSD) procedure is a two-step testing procedure for pairwise comparisons of several treatment groups. In the first step of the procedure, a global test is performed for the null hypothesis that the expected means of all treatment groups under study are equal. If this global null hypothesis can be rejected at the pre-specified level of significance, then in the second step one is permitted in principle to perform all pairwise comparisons at the same level of significance (although in practice, not all of them may be of primary interest). Fisher's LSD procedure is known to preserve the experimentwise type I error rate at the nominal level of significance if (and only if) the number of treatment groups is three. The procedure may therefore be applied to phase III clinical trials comparing two doses of an active treatment against placebo in the confirmatory sense (while in this case, no confirmatory comparison has to be performed between the two active treatment groups). The power properties of this approach are examined in the present paper. It is shown that the power of the first-step global test, and therefore the power of the overall procedure, may be relevantly lower than the power of the pairwise comparison between the more favourable active dose group and placebo. Achieving a certain overall power for this comparison with Fisher's LSD procedure, irrespective of the effect size in the less favourable dose group, may require slightly larger treatment groups than sizing the study with respect to the simple Bonferroni alpha adjustment. Therefore, if Fisher's LSD procedure is used to avoid an alpha adjustment in phase III clinical trials, the potential loss of power due to the first-step global test should be considered at the planning stage.

19.
The authors extend Fisher's method of combining two independent test statistics to test homogeneity of several two‐parameter populations. They explore two procedures combining asymptotically independent test statistics: the first pools two likelihood ratio statistics and the other, score test statistics. They then give specific results to test homogeneity of several normal, negative binomial or beta‐binomial populations. Their simulations provide evidence that in this context, Fisher's method performs generally well, even when the statistics to be combined are only asymptotically independent. They are led to recommend Fisher's test based on score statistics, since the latter have simple forms, are easy to calculate, and have uniformly good level properties.

20.
Classes of distribution-free tests are proposed for testing homogeneity against order restricted as well as unrestricted alternatives in randomized block designs with multiple observations per cell. Allowing for different interblock scoring schemes, these tests are constructed based on the method of within block rankings. Asymptotic distributions (cell sizes tending to infinity) of these tests are derived under the assumption of homogeneity. The Pitman asymptotic relative efficiencies relative to the least squares statistics are studied. It is shown that when blocks are governed by different distributions, adaptive choice of scores within each block results in asymptotically more efficient tests as compared with methods that ignore such information. Monte Carlo simulations of selected designs indicate that the method of within block rankings is more power robust with respect to differing block distributions.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号