Similar Articles (20 results)
1.
There is considerable debate surrounding the choice of methods to estimate the information fraction for futility monitoring in a randomized non-inferiority maximum duration trial. This question was motivated by a pediatric oncology study that aimed to establish non-inferiority for two primary outcomes. While non-inferiority was determined for one outcome, the futility monitoring of the other outcome failed to stop the trial early, despite accumulating evidence of inferiority. For a one-sided trial design in which the intervention is inferior to the standard therapy, futility monitoring should provide the opportunity to terminate the trial early. Our research focuses on the Total Control Only (TCO) method, which is defined as a ratio of observed events to total events exclusively within the standard treatment regimen. We investigate its properties in stopping a trial early in favor of inferiority. Simulation results comparing the TCO method with alternative methods, one based on the assumption of an inferior treatment effect (TH0) and the other based on a specified hypothesis of a non-inferior treatment effect (THA), are provided under various pediatric oncology trial design settings. The TCO method is the only method that provides unbiased information fraction estimates regardless of the hypothesis assumptions, and it exhibits good power and a type I error rate comparable to the other methods at each interim analysis. Although none of the methods is uniformly superior on all criteria, the TCO method possesses favorable characteristics, making it a compelling choice for estimating the information fraction when the aim is to reduce cancer treatment-related adverse outcomes.
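A minimal sketch of the TCO definition given in the abstract (observed events divided by expected total events, both counted in the control arm only); the function name and inputs are illustrative, not the paper's code.

```python
# Illustrative sketch: information fraction under the Total Control Only (TCO)
# definition -- events observed so far in the standard-therapy arm divided by
# the total events expected in that arm at the final analysis.

def tco_information_fraction(observed_control_events: int,
                             planned_total_control_events: int) -> float:
    """Fraction of control-arm event information accrued so far."""
    if planned_total_control_events <= 0:
        raise ValueError("planned total control events must be positive")
    return min(observed_control_events / planned_total_control_events, 1.0)

# Example: 48 control-arm events observed out of 120 expected at final analysis.
print(tco_information_fraction(48, 120))  # 0.4
```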

2.
In organ transplantation, placebo-controlled clinical trials are not possible for ethical reasons, and hence non-inferiority trials are used to evaluate new drugs. Patients with a transplanted kidney typically receive three to four immunosuppressant drugs to prevent organ rejection. In the described case of a non-inferiority trial for one of these immunosuppressants, the dose is changed, and another is replaced by an investigational drug. This test regimen is compared with the active control regimen. Justification for the non-inferiority margin is challenging because the putative placebo has never been studied in a clinical trial. We propose the use of a random-effects meta-regression, where each immunosuppressant component of the regimen enters as a covariate. This allows us to make inference on the difference between the putative placebo and the active control. From this, various methods can then be used to derive the non-inferiority margin. A hybrid of the 95/95 and synthesis approaches is suggested. Data from 51 trials with a total of 17,002 patients were used in the meta-regression. Our approach was motivated by a recent large confirmatory trial in kidney transplantation. The results and the methodological documents of this evaluation were submitted to the Food and Drug Administration, which accepted our proposed non-inferiority margin and our rationale.
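A hedged sketch of the fixed-margin ("95/95") step only; the paper's meta-regression and its hybrid with the synthesis approach are not reproduced. The inputs (an estimated control-versus-putative-placebo effect and its standard error, e.g. from such a meta-regression) and the preserved fraction are illustrative assumptions.

```python
from scipy.stats import norm

def fixed_margin(effect_est: float, se: float, preserve: float = 0.5,
                 alpha: float = 0.05) -> float:
    """M1 = bound of the two-sided CI for the control-vs-placebo effect that is
    closest to no effect; M2 = the fraction of M1 retained as the NI margin."""
    z = norm.ppf(1 - alpha / 2)
    m1 = effect_est - z * se if effect_est > 0 else effect_est + z * se
    return preserve * m1  # M2

# Example: control beats the putative placebo by 0.40 on the log scale (SE 0.10),
# and half of the conservatively estimated effect is preserved.
print(round(fixed_margin(0.40, 0.10), 3))  # ~0.102
```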

3.
We present likelihood methods for defining the non-inferiority margin and measuring the strength of evidence in non-inferiority trials using the 'fixed-margin' framework. Likelihood methods are used to (1) evaluate and combine the evidence from historical trials to define the non-inferiority margin, (2) assess and report the smallest non-inferiority margin supported by the data, and (3) assess potential violations of the constancy assumption. Data from six aspirin-controlled trials for acute coronary syndrome and data from an active-controlled trial for acute coronary syndrome, Organisation to Assess Strategies for Ischemic Syndromes (OASIS-2) trial, are used for illustration. The likelihood framework offers important theoretical and practical advantages when measuring the strength of evidence in non-inferiority trials. Besides eliminating the influence of sample spaces and prior probabilities on the 'strength of evidence in the data', the likelihood approach maintains good frequentist properties. Violations of the constancy assumption can be assessed in the likelihood framework when it is appropriate to assume a unifying regression model for trial data and a constant control effect including a control rate parameter and a placebo rate parameter across historical placebo controlled trials and the non-inferiority trial. In situations where the statistical non-inferiority margin is data driven, lower likelihood support interval limits provide plausibly conservative candidate margins.
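A minimal sketch of a 1/k likelihood support interval under a normal approximation to the log-likelihood (estimate ± sqrt(2 ln k) × SE); the paper works with exact likelihoods, so this only illustrates the idea of reporting the margin supported by the data. Inputs are illustrative.

```python
import math

def support_interval(estimate: float, se: float, k: float = 8.0):
    """Values whose likelihood is at least 1/k of the maximum (normal approx)."""
    half_width = math.sqrt(2.0 * math.log(k)) * se
    return estimate - half_width, estimate + half_width

# Example: estimated log relative risk of -0.25 with SE 0.08; 1/8 support interval.
lo, hi = support_interval(-0.25, 0.08)
print(round(lo, 3), round(hi, 3))
```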

4.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3150-3161
We consider a new approach to deal with non-ignorable non-response on an outcome variable, in a causal inference framework. Assuming that a binary instrumental variable for non-response is available, we provide a likelihood-based approach to identify and estimate heterogeneous causal effects of a binary treatment on specific latent subgroups of units, named principal strata, defined by the non-response behavior under each level of the treatment and of the instrument. We show that, within each stratum, non-response is ignorable and respondents can be properly compared by treatment status. In order to assess our method and its robustness when the usually invoked assumptions are relaxed or misspecified, we simulate data to resemble a real experiment conducted on a panel survey which compares different methods of reducing panel attrition.

5.
We derived two methods to estimate the logistic regression coefficients in a meta-analysis when only the 'aggregate' data (mean values) from each study are available. The estimators we proposed are the discriminant function estimator and the reverse Taylor series approximation. These two methods gave similar estimates in an example with individual-level data. However, when aggregate data were used, the discriminant function estimators were quite different from the other two estimators. A simulation study was then performed to evaluate the performance of these two estimators as well as the estimator obtained from the model that simply uses the aggregate data in a logistic regression model. The simulation study showed that all three estimators are biased. The bias increases as the variance of the covariate increases. The distribution type of the covariates also affects the bias. In general, the estimator from the logistic regression using the aggregate data has less bias and better coverage probabilities than the other two estimators. We concluded that analysts should be cautious in using aggregate data to estimate the parameters of the logistic regression model for the underlying individual data.
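A hedged sketch of the discriminant-function idea mentioned above: under normality of the covariates within outcome groups with a common covariance, the logistic slope can be recovered from aggregate summaries as Σ⁻¹(μ₁ − μ₀). The paper's reverse Taylor-series estimator is not shown, and the function and example values are illustrative.

```python
import numpy as np

def discriminant_function_slope(mean_cases, mean_controls, pooled_cov):
    """Estimated logistic coefficients from group means and a pooled covariance."""
    mu1 = np.asarray(mean_cases, dtype=float)
    mu0 = np.asarray(mean_controls, dtype=float)
    sigma = np.asarray(pooled_cov, dtype=float)
    return np.linalg.solve(sigma, mu1 - mu0)

# Example with two covariates summarized per outcome group:
beta = discriminant_function_slope([1.2, 0.5], [1.0, 0.3],
                                   [[0.8, 0.1], [0.1, 0.5]])
print(beta)
```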

6.
We consider the statistical evaluation and estimation of vaccine efficacy when the protective effect wanes with time. We reanalyse data from a 5-year trial of two oral cholera vaccines in Matlab, Bangladesh. In this field trial, one vaccine appears to confer better initial protection than the other, but neither appears to offer protection for a period longer than about 3 years. Time-dependent vaccine effects are estimated by obtaining smooth estimates of a time-varying relative risk RR(t) using survival analysis. We compare two approaches based on the Cox model in terms of their strategies for detecting time-varying vaccine effects and their estimation techniques for obtaining a time-dependent RR(t) estimate. These methods allow an exploration of time-varying vaccine effects while making minimal parametric assumptions about the functional form of RR(t) for vaccinated compared with unvaccinated subjects.

7.
Two approaches based on multiple decision processes are proposed for unifying the non-inferiority, equivalence and superiority tests in a comparative clinical trial of a new drug against an active control. One is a confidence-set method with confidence coefficient 0.95 that improves on the conventional 0.95 confidence interval with respect to the producer's risk and, in some cases, the consumer's risk. It requires both that 0 be included within the region and that the non-inferiority margin be cleared, so that a trial with a somewhat large number of subjects and an inappropriately large non-inferiority margin should fail to prove non-inferiority of a drug that is actually inferior. The other is a closed testing procedure that combines the one- and two-sided tests by applying the partitioning principle and justifies the switching procedure by unifying the non-inferiority, equivalence and superiority tests. Regarding non-inferiority in particular, the proposed method simultaneously justifies the old Japanese statistical guideline (one-sided 0.05 test) and the international guideline ICH E9 (one-sided 0.025 test). The method is particularly attractive in that it grades the strength of the evidence for the relative efficacy of the test drug against the control at five levels, according to what the clinical trial achieves. The meaning of the non-inferiority test and the rationale for switching from it to a superiority test are also discussed.
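A hedged sketch of the familiar confidence-interval-based switching logic that the partitioning/closed-testing procedure formalizes; it is not the confidence-set method itself, and the labels are illustrative. Higher values of the difference favour the test drug, and `margin` is the (positive) non-inferiority margin.

```python
def classify(ci_lower: float, ci_upper: float, margin: float) -> str:
    """Classify a trial result from a two-sided CI for the treatment difference."""
    if ci_lower > 0:
        return "superior"
    if ci_lower > -margin:
        return "non-inferior (superiority not shown)"
    if ci_upper < -margin:
        return "inferior"
    return "inconclusive"

print(classify(0.3, 2.1, margin=1.0))    # superior
print(classify(-0.4, 1.5, margin=1.0))   # non-inferior (superiority not shown)
```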

8.
Assessment of non-inferiority is often performed using a one-sided statistical test through an analogous one-sided confidence limit. When the focus of attention is the difference in success rates between test and active control proportions, the lower confidence limit is computed, and many methods exist in the literature to address this objective. This paper considers methods which have been shown to be popular in the literature and have surfaced in this research as having good performance with respect to controlling type I error at the specified level. Performance of these methods is assessed with respect to power and type I error through simulations. Sample size considerations are also included to aid in the planning stages of non-inferiority trials focusing on the difference in proportions. Results suggest that the appropriate method to use depends on the sample size allocation of subjects in the test and active control groups.
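A simple hedged sketch of the one-sided confidence-limit logic on the difference in proportions, using a Wald-type limit only; the better-behaved intervals the paper compares (e.g. score-type methods) are not implemented here, and the example counts are illustrative.

```python
from math import sqrt
from scipy.stats import norm

def wald_noninferior(x_test, n_test, x_ctrl, n_ctrl, delta, alpha=0.025):
    """One-sided lower Wald limit for p_test - p_ctrl, checked against -delta."""
    p_t, p_c = x_test / n_test, x_ctrl / n_ctrl
    se = sqrt(p_t * (1 - p_t) / n_test + p_c * (1 - p_c) / n_ctrl)
    lower = (p_t - p_c) - norm.ppf(1 - alpha) * se
    return lower, lower > -delta   # (lower limit, non-inferiority concluded?)

print(wald_noninferior(172, 200, 175, 200, delta=0.10))
```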

9.
In modern quality engineering, dual response surface methodology is a powerful tool to model an industrial process by using both the mean and the standard deviation of the measurements as the responses. The least squares method in regression is often used to estimate the coefficients in the mean and standard deviation models, and various decision criteria are proposed by researchers to find the optimal conditions. Based on the inherent hierarchical structure of the dual response problems, we propose a Bayesian hierarchical approach to model dual response surfaces. Such an approach is compared with two frequentist least squares methods by using two real data sets and simulated data.

10.
In a non-inferiority trial to assess a new investigative treatment, an indirect comparison with placebo via the active control in the current trial may need to be considered. We can, therefore, use the fact that there is a common active control in the comparisons of the investigative treatment and placebo. In analysing a non-inferiority trial, the ABC of Assay sensitivity, Bias minimisation and the Constancy assumption needs to be considered. We highlight how the ABC assumptions can potentially fail when there is placebo creep or a patient population shift. In this situation, the belief about the placebo response, expressed as a prior probability in a Bayesian formulation, could be used together with the observed treatment effects to set the non-inferiority limit.
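A purely illustrative, hedged sketch of one way a prior belief about the placebo response might be combined with the observed control response to set a non-inferiority limit that preserves a fraction of the presumed control-versus-placebo effect; the specific Bayesian formulation in the paper is not reproduced, and all names and numbers are assumptions.

```python
from scipy.stats import norm

def ni_limit_from_placebo_prior(control_rate, placebo_prior_mean,
                                placebo_prior_sd, preserve=0.5, q=0.975):
    """Non-inferiority limit from a conservative quantile of the placebo prior."""
    placebo_upper = norm.ppf(q, loc=placebo_prior_mean, scale=placebo_prior_sd)
    effect = control_rate - placebo_upper   # conservative control-vs-placebo effect
    return preserve * max(effect, 0.0)

# Example: 70% control response, prior belief that the placebo response ~ N(0.45, 0.05^2).
print(round(ni_limit_from_placebo_prior(0.70, 0.45, 0.05), 3))
```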

11.
Adjusting for covariates makes efficient use of data and can improve the precision of study results or even reduce sample sizes. There is no easy way to adjust for covariates in a non-inferiority study for which the margin is defined as a risk difference. Adjustment is straightforward on the logit scale, but reviews of clinical studies suggest that the analysis is more often conducted on the more interpretable risk-difference scale. We examined four methods that allow for adjustment on the risk-difference scale: stratified analysis with Cochran-Mantel-Haenszel (CMH) weights, binomial regression with an identity link, the use of a Taylor approximation to convert results from the logit to the risk-difference scale, and conversion of the risk-difference margin to the odds-ratio scale. These methods were compared using simulated data based on trials in HIV. We found that the CMH approach had the best trade-off between increased efficiency in the presence of predictive covariates and problems in analysis at extreme response rates. These results were shared with regulatory agencies in Europe and the USA, and the advice received is described.
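A hedged sketch of a Cochran-Mantel-Haenszel weighted risk difference across strata (weights n₁ₖn₀ₖ/(n₁ₖ+n₀ₖ)); the variance estimator and the exact analysis used in the paper are not shown, and the example counts are illustrative.

```python
import numpy as np

def cmh_risk_difference(x1, n1, x0, n0):
    """x1/n1: events/subjects per stratum in the test arm; x0/n0: control arm."""
    x1, n1, x0, n0 = (np.asarray(a, dtype=float) for a in (x1, n1, x0, n0))
    w = n1 * n0 / (n1 + n0)            # CMH-type stratum weights
    d = x1 / n1 - x0 / n0              # stratum-specific risk differences
    return float(np.sum(w * d) / np.sum(w))

# Two-strata example:
print(round(cmh_risk_difference([40, 30], [50, 60], [35, 28], [50, 55]), 3))
```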

12.
The confidence interval (CI) for the difference between two proportions has been an important and active research topic, especially in the context of non-inferiority hypothesis testing. Issues concerning the type I error rate, power, coverage rate and aberrations have been extensively studied for non-stratified cases. However, stratified confidence intervals are frequently used in non-inferiority trials and similar settings. In this paper, several methods for stratified confidence intervals for the difference between two proportions, including existing methods and novel extensions of unstratified CIs, are evaluated across different scenarios. When sparsity across the strata is not a concern, adding imputed observations to the stratified analysis can strengthen type I error control without substantial loss of power. When sparseness of data is a concern, most of the evaluated methods fail to control the type I error; the modified stratified t-test CI is an exception. We recommend the modified stratified t-test CI as the most useful and flexible method across the respective scenarios; the modified stratified Wald CI may be useful in settings where sparsity is unlikely. These findings substantially contribute to the application of stratified CIs for non-inferiority testing of differences between two proportions.

13.
Two new implementations of the EM algorithm are proposed for maximum likelihood fitting of generalized linear mixed models. Both methods use random (independent and identically distributed) sampling to construct Monte Carlo approximations at the E-step. One approach involves generating random samples from the exact conditional distribution of the random effects (given the data) by rejection sampling, using the marginal distribution as a candidate. The second method uses a multivariate t importance sampling approximation. In many applications the two methods are complementary. Rejection sampling is more efficient when sample sizes are small, whereas importance sampling is better with larger sample sizes. Monte Carlo approximation using random samples allows the Monte Carlo error at each iteration to be assessed by using standard central limit theory combined with Taylor series methods. Specifically, we construct a sandwich variance estimate for the maximizer at each approximate E-step. This suggests a rule for automatically increasing the Monte Carlo sample size after iterations in which the true EM step is swamped by Monte Carlo error. In contrast, techniques for assessing Monte Carlo error have not been developed for use with alternative implementations of Monte Carlo EM algorithms utilizing Markov chain Monte Carlo E-step approximations. Three different data sets, including the infamous salamander data of McCullagh and Nelder, are used to illustrate the techniques and to compare them with the alternatives. The results show that the methods proposed can be considerably more efficient than those based on Markov chain Monte Carlo algorithms. However, the methods proposed may break down when the intractable integrals in the likelihood function are of high dimension.
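A hedged sketch of the t importance-sampling idea for a Monte Carlo E-step, shown for a single cluster of a random-intercept logistic model; the full MCEM algorithm, the rejection-sampling variant and the sandwich Monte Carlo error estimate are not reproduced, and the model and sample values are illustrative.

```python
import numpy as np
from scipy.stats import t as t_dist, norm

rng = np.random.default_rng(0)

def e_step_importance(y, eta_fixed, sigma_b, m=2000, df=4):
    """Approximate E[b | y] for one cluster by importance sampling with a
    heavy-tailed t proposal centred at 0 and scaled by the random-effect SD."""
    b = t_dist.rvs(df, scale=sigma_b, size=m, random_state=rng)   # proposal draws
    eta = eta_fixed + b[:, None]                                  # linear predictors
    loglik = np.sum(-y * np.logaddexp(0.0, -eta)
                    - (1 - y) * np.logaddexp(0.0, eta), axis=1)   # Bernoulli log-lik
    logw = loglik + norm.logpdf(b, scale=sigma_b) - t_dist.logpdf(b, df, scale=sigma_b)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                                  # normalised weights
    return float(np.sum(w * b))                                   # weighted posterior mean

y = np.array([1, 1, 0, 1, 0, 1])   # binary responses in one cluster
print(round(e_step_importance(y, eta_fixed=0.2, sigma_b=1.0), 3))
```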

14.
Noninferiority trials intend to show that a new treatment is 'not worse' than a standard-of-care active control and can be used as an alternative when it is likely to cause fewer side effects compared to the active control. In the case of time-to-event endpoints, existing methods of sample size calculation are done either assuming proportional hazards between the two study arms, or assuming exponentially distributed lifetimes. In scenarios where these assumptions are not true, there are few reliable methods for calculating the sample sizes for a time-to-event noninferiority trial. Additionally, the choice of the non-inferiority margin is obtained either from a meta-analysis of prior studies, or strongly justifiable 'expert opinion', or from a 'well conducted' definitive large-sample study. Thus, when historical data do not support the traditional assumptions, it would not be appropriate to use these methods to design a noninferiority trial. For such scenarios, an alternate method of sample size calculation based on the assumption of Proportional Time is proposed. This method utilizes the generalized gamma ratio distribution to perform the sample size calculations. A practical example is discussed, followed by insights on choice of the non-inferiority margin, and the indirect testing of superiority of treatment compared to placebo.
Keywords: generalized gamma, noninferiority, non-proportional hazards, proportional time, relative time

15.
In 1965 Warner pioneered randomized response techniques to estimate the proportion of people bearing a sensitive characteristic. He restricted the application of his randomized response device to gathering data on sensitive issues from respondents chosen by simple random sampling with replacement (SRSWR). It has spawned numerous ramifications. We present results for the situation where the distinct persons chosen in an SRSWR sample are identified but each one independently gives a randomized response by Warner's device, repeated as many times as he or she is selected. Two new estimators are proposed for the sensitive proportion and compared against relevant competitors.
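A minimal sketch of Warner's original (1965) randomized-response estimator under SRSWR, for context; the paper's two new estimators for repeated responses from distinct persons are not reproduced, and the example counts are illustrative.

```python
def warner_estimate(n_yes: int, n: int, p: float):
    """p = probability the device presents the sensitive statement (p != 0.5)."""
    lam = n_yes / n                                   # observed proportion of 'yes'
    pi_hat = (lam - (1 - p)) / (2 * p - 1)            # estimated sensitive proportion
    var_hat = lam * (1 - lam) / (n * (2 * p - 1) ** 2)
    return pi_hat, var_hat

pi_hat, var_hat = warner_estimate(n_yes=420, n=1000, p=0.7)
print(round(pi_hat, 3), round(var_hat ** 0.5, 3))
```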

16.
The increasing concern about antibacterial resistance has been well documented, as has the relative lack of antibiotic development. This paradox is in part due to challenges with the clinical development of antibiotics. Because of their rapid progression, untreated bacterial infections are associated with significant morbidity and mortality. As a consequence, placebo-controlled studies of new agents are unethical. Rather, pivotal development studies are mostly conducted using non-inferiority designs versus an active comparator. Further, infections caused by comparator-resistant isolates must usually be excluded from the trial programme. Unfortunately, the placebo-controlled data classically used in support of non-inferiority designs are largely unavailable for antibiotics. The only available data are from the 1930s and 1940s, and their use is associated with significant concerns regarding constancy and assay sensitivity. Extended public debate on this challenge has led some to propose solutions that address these concerns through very conservative approaches to trial design, endpoints and non-inferiority margins, in some cases leading to potentially impractical studies. To compound this challenge, different regulatory authorities seem to be taking different approaches to these key issues. If harmonisation does not occur, antibiotic development will become increasingly challenging, with the risk of further decreases in the amount of antibiotic drug development. However, with clarity on regulatory requirements and the ability to conduct feasible global development programmes, it should be possible to bring much-needed additional antibiotics to patients.

17.
The lognormal distribution is quite commonly used as a lifetime distribution. Data arising from life-testing and reliability studies are often left truncated and right censored. Here, the EM algorithm is used to estimate the parameters of the lognormal model based on left truncated and right censored data. The maximization step of the algorithm is carried out by two alternative methods, one involving approximation using a Taylor series expansion (leading to an approximate maximum likelihood estimate) and the other based on the EM gradient algorithm (Lange, 1995). These two methods are compared based on Monte Carlo simulations. The Fisher scoring method for obtaining the maximum likelihood estimates shows convergence problems under this setup, except when the truncation percentage is small. The asymptotic variance-covariance matrix of the MLEs is derived by using the missing information principle (Louis, 1982), and the asymptotic confidence intervals for the scale and shape parameters are then obtained and compared with the corresponding bootstrap confidence intervals. Finally, some numerical examples are given to illustrate all the methods of inference developed here.
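A hedged sketch of an EM fit for a lognormal with right censoring only (the left truncation handled in the paper is omitted for brevity, as are the approximate-MLE and EM-gradient M-steps). Work is done on the log scale, where the model is normal and censored observations contribute truncated-normal conditional moments in the E-step; all names and the simulated example are illustrative.

```python
import numpy as np
from scipy.stats import norm

def em_lognormal_right_censored(times, censored, n_iter=200):
    """EM estimates of (mu, sigma) for log-lifetimes, right censoring only."""
    z = np.log(np.asarray(times, dtype=float))       # log of observed/censoring times
    cens = np.asarray(censored, dtype=bool)
    mu, sigma = z.mean(), z.std() + 1e-6              # crude starting values
    for _ in range(n_iter):
        a = (z[cens] - mu) / sigma
        h = norm.pdf(a) / norm.sf(a)                  # inverse Mills ratio
        ez, ez2 = z.copy(), z ** 2
        ez[cens] = mu + sigma * h                                    # E[Z | Z > c]
        ez2[cens] = mu**2 + sigma**2 + sigma * (z[cens] + mu) * h    # E[Z^2 | Z > c]
        mu = ez.mean()                                # M-step for the normal model
        sigma = np.sqrt(max(ez2.mean() - mu**2, 1e-12))
    return mu, sigma

# Simulated example: lognormal lifetimes, administratively censored at t = 4.
rng = np.random.default_rng(1)
t_true = rng.lognormal(mean=1.0, sigma=0.5, size=300)
print(em_lognormal_right_censored(np.minimum(t_true, 4.0), censored=(t_true > 4.0)))
```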

18.
Summary.  The paper develops a data augmentation method to estimate the distribution function of a variable, which is partially observed, under a non-ignorable missing data mechanism, and where surrogate data are available. An application to the estimation of hourly pay distributions using UK Labour Force Survey data provides the main motivation. In addition to considering a standard parametric data augmentation method, we consider the use of hot deck imputation methods as part of the data augmentation procedure to improve the robustness of the method. The method proposed is compared with standard methods that are based on an ignorable missing data mechanism, both in a simulation study and in the Labour Force Survey application. The focus is on reducing bias in point estimation, but variance estimation using multiple imputation is also considered briefly.

19.
When counting the number of chemical parts in air pollution studies or when comparing the occurrence of congenital malformations between a uranium mining town and a control population, we often assume a Poisson distribution for the number of these rare events. Some discussions of sample size calculation under the Poisson model appear elsewhere, but all of these focus on testing equality rather than testing equivalence. We discuss sample size and power calculation on the basis of the exact distribution under Poisson models for testing non-inferiority and equivalence with respect to the mean incidence rate ratio. On the basis of large-sample theory, we further develop an approximate sample size calculation formula using the normal approximation of a proposed test statistic for testing non-inferiority, and an approximate power calculation formula for testing equivalence. We find that using these approximation formulae tends to produce an underestimate of the minimum required sample size calculated from the exact test procedure. On the other hand, we find that the power corresponding to the approximate sample sizes can actually be accurate (with respect to type I error and power) when we apply the asymptotic test procedure based on the normal distribution. We tabulate, in a variety of situations, the minimum mean incidence needed in the standard (or control) population, which can easily be employed to calculate the minimum required sample size from each comparison group for testing non-inferiority and equivalence between two Poisson populations.
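A hedged sketch of a generic normal-approximation sample-size formula on the log rate-ratio scale for a non-inferiority comparison of two Poisson rates; it is not necessarily the paper's proposed statistic, and (as the abstract notes) such approximations can understate the sample size given by exact calculations. Inputs are illustrative.

```python
from math import log, ceil
from scipy.stats import norm

def n_per_group_poisson_ni(lambda_ctrl, true_ratio, ni_ratio,
                           alpha=0.025, power=0.8):
    """Subjects per group so that the one-sided test of H0: ratio >= ni_ratio
    has the requested power when the true ratio is true_ratio (< ni_ratio)."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    lambda_trt = true_ratio * lambda_ctrl
    var_per_subject = 1 / lambda_trt + 1 / lambda_ctrl   # n * Var(log of the rate ratio)
    return ceil(z**2 * var_per_subject / (log(ni_ratio) - log(true_ratio))**2)

# Example: control incidence 0.5 events/subject, true ratio 1.0, margin ratio 1.5.
print(n_per_group_poisson_ni(0.5, 1.0, 1.5))
```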

20.
Summary.  We present a general method of adjustment for non-ignorable non-response in studies where one or more further attempts are made to contact initial non-responders. A logistic regression model relates the probability of response at each contact attempt to covariates and outcomes of interest. We assume that the effect of these covariates and outcomes on the probability of response is the same at all contact attempts. Knowledge of the number of contact attempts enables estimation of the model by using only information from the respondents and the number of non-responders. Three approaches for fitting the response models and estimating parameters of substantive interest and their standard errors are compared: a modified conditional likelihood method in which the fitted inverse probabilities of response are used in weighted analyses for the outcomes of interest, an EM procedure with the Louis formula and a Bayesian approach using Markov chain Monte Carlo methods. We further propose the creation of several sets of weights to incorporate uncertainty in the probability weights in subsequent analyses. Our methods are applied as a sensitivity analysis to a postal survey of symptoms in Persian Gulf War veterans and other servicemen.
