Similar Documents
20 similar documents retrieved (search time: 812 ms).
1.
The use of Bayesian approaches in the regulated world of pharmaceutical drug development has not been without its difficulties or its critics. The recent Food and Drug Administration regulatory guidance on the use of Bayesian approaches in device submissions has mandated an investigation into the operating characteristics of Bayesian approaches and has suggested how to make adjustments in order that the proposed approaches are in a sense calibrated. In this paper, I present examples of frequentist calibration of Bayesian procedures and argue that we need not necessarily aim for perfect calibration but should be allowed to use procedures that are well‐calibrated, a position supported by the guidance. Copyright © 2016 John Wiley & Sons, Ltd.
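As an illustration of what such frequentist calibration can look like in practice, the sketch below (not an example taken from the paper; the sample size, null response rate, prior and cut-offs are all assumptions) computes the exact type I error of a simple single-arm Bayesian decision rule as a function of the posterior-probability cut-off, so the cut-off can be tuned until the rule is acceptably calibrated.

```python
# Minimal sketch (not from the paper): exact frequentist type I error of the rule
# "declare success if P(response rate > p0 | data) > gamma" in a single-arm
# binomial trial. Sample size, null rate and prior are assumptions.
from scipy.stats import beta, binom

n, p0 = 40, 0.20        # sample size and null response rate (assumed)
a0, b0 = 1.0, 1.0       # Beta(1, 1) prior (assumed)

def type_one_error(gamma):
    """Exact type I error of the posterior-probability rule evaluated at p = p0."""
    err = 0.0
    for x in range(n + 1):
        post_prob = 1.0 - beta.cdf(p0, a0 + x, b0 + n - x)   # P(p > p0 | x responses)
        if post_prob > gamma:
            err += binom.pmf(x, n, p0)
    return err

for gamma in (0.90, 0.95, 0.975, 0.99):
    print(f"gamma = {gamma:<5}: type I error = {type_one_error(gamma):.4f}")
```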

2.
In recent years, global collaboration has become a conventional strategy for new drug development. To accelerate the development process and shorten approval time, the design of multi-regional clinical trials (MRCTs) incorporates subjects from many countries/regions around the world under the same protocol. After showing the overall efficacy of a drug in a global trial, one can also simultaneously evaluate the possibility of applying the overall trial results to all regions and subsequently support drug registration in each region. However, most of the recent approaches developed for the design and evaluation of MRCTs focus on establishing criteria to examine whether the overall results from the MRCT can be applied to a specific region. In this paper, we use the consistency criterion of Method 1 from the Japanese Ministry of Health, Labour and Welfare (MHLW) guidance to assess whether the overall results from the MRCT can be applied to all regions. Sample size determination for the MRCT is also provided to take all the consistency criteria from each individual region into account. Numerical examples are given to illustrate applications of the proposed approach.
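To make the Method 1 criterion concrete, the following sketch estimates by simulation the probability that one region's observed effect retains at least half of the observed overall effect under a common true effect; the effect size, variance, total sample size and regional fraction are assumptions, not values from the paper.

```python
# Minimal sketch of the MHLW Method 1 consistency check for a single region:
# P(regional observed effect >= pi * overall observed effect) under an assumed
# common true effect. All numerical settings are assumptions.
import numpy as np

rng = np.random.default_rng(3)
delta, sigma = 0.3, 1.0        # true common treatment effect and SD (assumed)
n_total = 800                  # total sample size across both arms (assumed)
f_region = 0.15                # fraction of patients enrolled in the region (assumed)
pi = 0.5                       # Method 1 retention fraction

def consistency_prob(n_sim=100_000):
    n_reg = int(n_total * f_region)
    n_other = n_total - n_reg
    # Observed effects: regional, rest-of-world, and their sample-size-weighted overall value
    d_reg = rng.normal(delta, sigma * np.sqrt(4 / n_reg), n_sim)
    d_oth = rng.normal(delta, sigma * np.sqrt(4 / n_other), n_sim)
    d_all = (n_reg * d_reg + n_other * d_oth) / n_total
    return np.mean(d_reg >= pi * d_all)

print(f"P(regional effect >= {pi} x overall effect) = {consistency_prob():.3f}")
```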

3.
In early clinical development of new medicines, a single‐arm study with a limited number of patients is often used to provide a preliminary assessment of a response rate. A multi‐stage design may be indicated, especially when the first stage should only include very few patients so as to enable rapid identification of an ineffective drug. We used decision rules based on several types of nominal confidence intervals to evaluate a three‐stage design for a study that includes at most 30 patients. For each decision rule, we used exact binomial calculations to determine the probability of continuing to further stages as well as to evaluate Type I and Type II error rates. Examples are provided to illustrate the methods for evaluating alternative decision rules and to provide guidance on how to extend the methods to situations with modifications to the number of stages or number of patients per stage in the study design. Copyright © 2004 John Wiley & Sons, Ltd.
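The kind of exact binomial calculation described above can be reproduced directly; the sketch below evaluates the type I error and power of a hypothetical three-stage rule (stage sizes, futility cut-offs and the final success threshold are assumptions, not the decision rules studied in the paper).

```python
# Minimal sketch: exact operating characteristics of a hypothetical three-stage
# single-arm design with futility stopping; all design constants are assumptions.
from scipy.stats import binom

n = (10, 10, 10)        # patients enrolled per stage (assumed)
go = (2, 5)             # continue past stage 1/2 only if cumulative responses >= these (assumed)
r_final = 11            # declare the drug active if total responses >= r_final (assumed)

def prob_success(p):
    """Exact probability of declaring efficacy when the true response rate is p."""
    total = 0.0
    for x1 in range(n[0] + 1):
        if x1 < go[0]:
            continue                      # stopped for futility after stage 1
        for x2 in range(n[1] + 1):
            if x1 + x2 < go[1]:
                continue                  # stopped for futility after stage 2
            need = r_final - x1 - x2      # responses still required in stage 3
            tail = 1.0 if need <= 0 else 1.0 - binom.cdf(need - 1, n[2], p)
            total += binom.pmf(x1, n[0], p) * binom.pmf(x2, n[1], p) * tail
    return total

p0, p1 = 0.20, 0.40       # null and target response rates (assumed)
print("Type I error :", round(prob_success(p0), 4))
print("Power        :", round(prob_success(p1), 4))
```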

4.
CVX‐based numerical algorithms are widely and freely available for solving convex optimization problems, but their application to optimal design problems has been limited. Using the CVX programs in MATLAB, we demonstrate their utility and flexibility over traditional algorithms in statistics for finding different types of optimal approximate designs under a convex criterion for nonlinear models. They are generally fast and easy to implement for any model and any convex optimality criterion. We derive theoretical properties of the algorithms and use them to generate new A‐, c‐, D‐ and E‐optimal designs for various nonlinear models, including multi‐stage and multi‐objective optimal designs. We report properties of the optimal designs and provide sample CVX program codes for some of our examples that users can amend to find tailored optimal designs for their problems. The Canadian Journal of Statistics 47: 374–391; 2019 © 2019 Statistical Society of Canada
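The same convex-programming formulation can be written in Python with cvxpy (the paper itself uses CVX in MATLAB); the sketch below finds a locally D-optimal approximate design for the Michaelis–Menten model, with the nominal parameter values and candidate grid chosen purely for illustration.

```python
# Minimal cvxpy sketch of a locally D-optimal approximate design: maximise
# log det of the weighted information matrix over design weights on a grid.
# Model, nominal parameters and grid are assumptions.
import numpy as np
import cvxpy as cp

theta = (1.0, 2.0)                       # nominal (Vmax, Km) for local optimality (assumed)
xs = np.linspace(0.1, 10.0, 100)         # candidate design points (assumed)

def grad(x, vmax, km):
    # Gradient of eta(x) = Vmax*x/(Km + x) with respect to (Vmax, Km)
    return np.array([x / (km + x), -vmax * x / (km + x) ** 2])

F = np.array([grad(x, *theta) for x in xs])                    # 100 x 2
w = cp.Variable(len(xs), nonneg=True)                          # design weights
M = sum(w[i] * np.outer(F[i], F[i]) for i in range(len(xs)))   # information matrix
prob = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
prob.solve()

support = [(round(x, 2), round(float(wi), 3)) for x, wi in zip(xs, w.value) if wi > 1e-3]
print("D-optimal support points and weights:", support)
```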

5.
With the rapid growth of modern technology, many biomedical studies are being conducted to collect massive datasets with volumes of multi‐modality imaging, genetic, neurocognitive and clinical information from increasingly large cohorts. Simultaneously extracting and integrating rich and diverse heterogeneous information in neuroimaging and/or genomics from these big datasets could transform our understanding of how genetic variants impact brain structure and function, cognitive function and brain‐related disease risk across the lifespan. Such understanding is critical for diagnosis, prevention and treatment of numerous complex brain‐related disorders (e.g., schizophrenia and Alzheimer's disease). However, the development of analytical methods for the joint analysis of both high‐dimensional imaging phenotypes and high‐dimensional genetic data, a big data squared (BD2) problem, presents major computational and theoretical challenges for existing analytical methods. Besides the high‐dimensional nature of BD2, various neuroimaging measures often exhibit strong spatial smoothness and dependence and genetic markers may have a natural dependence structure arising from linkage disequilibrium. We review some recent developments of various statistical techniques for imaging genetics, including massive univariate and voxel‐wise approaches, reduced rank regression, mixture models and group sparse multi‐task regression. By doing so, we hope that this review may encourage others in the statistical community to enter into this new and exciting field of research. The Canadian Journal of Statistics 47: 108–131; 2019 © 2019 Statistical Society of Canada

6.
In this article, two new approaches are introduced for designing attributes single sampling plans, and the corresponding models are constructed separately. For Approach I, an algorithm is proposed to design sampling plans by setting a goal function to fulfill the two-point conditions on the operating characteristic curve. For Approach II, the plan parameters are solved by a nonlinear optimization model which minimizes the integral of the probability of acceptance over the interval from the producer's risk quality to the consumer's risk quality. Numerical examples and discussion based on the computational results are given to illustrate the approaches, and tables of the designed plans under various conditions are provided. Moreover, a relationship between the conventional design and the new approaches is established.
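For context, the classical two-point design that both approaches refine can be computed by a direct search; the sketch below finds the smallest single sampling plan (n, c) whose operating characteristic curve satisfies assumed producer's and consumer's risk points.

```python
# Minimal sketch of the classical two-point single sampling plan search:
# smallest (n, c) with Pa(AQL) >= 1 - alpha and Pa(LQ) <= beta.
# Quality levels and risks are assumptions.
from scipy.stats import binom

p1, alpha = 0.01, 0.05   # producer's risk quality and risk (assumed)
p2, beta  = 0.06, 0.10   # consumer's risk quality and risk (assumed)

def find_plan(max_n=2000):
    for n in range(1, max_n + 1):
        for c in range(n + 1):
            pa_good = binom.cdf(c, n, p1)   # probability of acceptance at AQL
            pa_bad  = binom.cdf(c, n, p2)   # probability of acceptance at LQ
            if pa_good >= 1 - alpha and pa_bad <= beta:
                return n, c, pa_good, pa_bad
    return None

n, c, pa1, pa2 = find_plan()
print(f"n = {n}, c = {c}, Pa(AQL) = {pa1:.3f}, Pa(LQ) = {pa2:.3f}")
```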

7.
We present some lower bounds for the probability of zero for the class of count distributions having a log‐convex probability generating function, which includes compound and mixed‐Poisson distributions. These lower bounds allow the construction of new non‐parametric estimators of the number of unobserved zeros, which are useful for capture‐recapture models, or in areas like epidemiology and literary style analysis. Some of these bounds also lead to the well‐known Chao's and Turing's estimators. Several examples of application are analysed and discussed.
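A minimal sketch of the two classical estimators mentioned above, applied to assumed toy capture counts: Chao's lower-bound estimator based on singletons and doubletons, and a Turing-type estimator based on sample coverage.

```python
# Minimal sketch, assuming per-unit capture counts: Chao- and Turing-type
# estimates of the number of units never observed (the zero class).
from collections import Counter

counts = [1, 1, 2, 1, 3, 1, 2, 5, 1, 1, 2, 4]   # observed non-zero counts (assumed toy data)

freq = Counter(counts)
s_obs = len(counts)            # number of units observed at least once
f1, f2 = freq[1], freq[2]      # singletons and doubletons
n = sum(counts)                # total number of captures

f0_chao = f1 ** 2 / (2 * f2) if f2 > 0 else f1 * (f1 - 1) / 2
n_chao = s_obs + f0_chao                      # Chao's lower-bound population estimate
n_turing = s_obs / (1 - f1 / n)               # Turing-type estimate via sample coverage

print(f"Observed: {s_obs},  Chao: {n_chao:.1f},  Turing: {n_turing:.1f}")
```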

8.
Since the early 1990s, average bioequivalence (ABE) studies have served as the international regulatory standard for demonstrating that two formulations of drug product will provide the same therapeutic benefit and safety profile when used in the marketplace. Population (PBE) and individual (IBE) bioequivalence have been the subject of intense international debate since methods for their assessment were proposed in the late 1980s and since their use was proposed in United States Food and Drug Administration guidance in 1997. Guidance has since been proposed and finalized by the Food and Drug Administration for the implementation of such techniques in the pioneer and generic pharmaceutical industries. The current guidance calls for the use of replicate‐design cross‐over studies (with sequences TRTR and RTRT, where T is the test and R is the reference formulation) for selected drug products, and proposes restricted maximum likelihood and method‐of‐moments techniques for parameter estimation. In general, marketplace access will be granted if the products demonstrate ABE based on a restricted maximum likelihood model. Study sponsors have the option of using PBE or IBE if the use of these criteria can be justified to the regulatory authority. Novel and previously proposed SAS®‐based approaches to the modelling of pharmacokinetic data from replicate design studies will be summarized. Restricted maximum likelihood and method‐of‐moments modelling results are compared and contrasted based on the analysis of data available from previously performed replicate design studies, and practical issues involved in the application of replicate designs to demonstrate ABE are characterized. It is concluded that replicate designs may be used effectively to demonstrate ABE for highly variable drug products. Statisticians should exercise caution in the choice of modelling procedure. Copyright © 2002 John Wiley & Sons, Ltd.
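For orientation, the basic ABE decision rule referred to throughout is the 90% confidence interval (TOST) check on the log scale; the sketch below applies it to assumed within-subject log-ratios and does not reproduce the replicate-design REML or method-of-moments modelling discussed in the paper.

```python
# Minimal sketch of the 90% confidence interval (TOST) check for average
# bioequivalence on a log-transformed PK parameter; data are assumed toy values.
import numpy as np
from scipy import stats

# Within-subject log(Test) - log(Reference) differences (assumed toy data)
d = np.log(np.array([1.05, 0.92, 1.10, 0.97, 1.02, 0.88, 1.15, 0.99, 1.04, 0.95]))

n = len(d)
mean, se = d.mean(), d.std(ddof=1) / np.sqrt(n)
tcrit = stats.t.ppf(0.95, n - 1)
lo, hi = np.exp(mean - tcrit * se), np.exp(mean + tcrit * se)

# ABE is concluded if the 90% CI for the geometric mean ratio lies within [0.80, 1.25]
print(f"90% CI for T/R ratio: ({lo:.3f}, {hi:.3f}) ->",
      "ABE" if 0.80 <= lo and hi <= 1.25 else "not ABE")
```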

9.
Background: Inferentially seamless studies are one of the best‐known adaptive trial designs. Statistical inference for these studies is a well‐studied problem. Regulatory guidance suggests that statistical issues associated with study conduct are not as well understood. Some of these issues are caused by the need for early pre‐specification of the phase III design and the absence of sponsor access to unblinded data. Before statisticians choose a seamless IIb/III design, they should consider whether these pitfalls will be an issue for their programme. Methods: We consider four case studies, each of which met with a varying degree of success. We explore the reasons for this variation to identify characteristics of drug development programmes that lend themselves well to inferentially seamless trials and other characteristics that warn of difficulties. Results: Seamless studies require increased upfront investment and planning to enable the phase III design to be specified at the outset of phase II. Pivotal, inferentially seamless studies are unlikely to allow meaningful sponsor access to unblinded data before study completion. This limits a sponsor's ability to reflect new information in the phase III portion. Conclusions: When few clinical data have been gathered about a drug, phase II data will answer many unresolved questions. Committing to phase III plans and study designs before phase II begins introduces extra risk to drug development. However, seamless pivotal studies may be an attractive option when the clinical setting and development programme allow, for example, when revisiting dose selection. Copyright © 2014 John Wiley & Sons, Ltd.

10.
In parallel group trials, long‐term efficacy endpoints may be affected if some patients switch or cross over to the alternative treatment arm prior to the event. In oncology trials, switch to the experimental treatment can occur in the control arm following disease progression and potentially impact overall survival. It may be a clinically relevant question to estimate the efficacy that would have been observed if no patients had switched, for example, to estimate ‘real‐life’ clinical effectiveness for a health technology assessment. Several commonly used statistical methods are available that try to adjust time‐to‐event data to account for treatment switching, ranging from naive exclusion and censoring approaches to more complex inverse probability of censoring weighting and rank‐preserving structural failure time models. These are described, along with their key assumptions, strengths, and limitations. Best practice guidance is provided for both trial design and analysis when switching is anticipated. Available statistical software is summarized, and examples are provided of the application of these methods in health technology assessments of oncology trials. Key considerations include having a clearly articulated rationale and research question and a well‐designed trial with sufficient good quality data collection to enable robust statistical analysis. No analysis method is universally suitable in all situations, and each makes strong untestable assumptions. There is a need for further research into new or improved techniques. This information should aid statisticians and their colleagues to improve the design and analysis of clinical trials where treatment switch is anticipated. Copyright © 2013 John Wiley & Sons, Ltd.

11.
This study considers the detection of treatment‐by‐subset interactions in a stratified, randomised clinical trial with a binary response variable. The focus is on the detection of qualitative interactions. In addition, the presented method is useful more generally, as it can assess the inconsistency of the treatment effects among strata by using an a priori‐defined inconsistency margin. The methodology presented is based on the construction of ratios of treatment effects. In addition to multiplicity‐adjusted p‐values, simultaneous confidence intervals are recommended for detecting the source and the amount of a potential qualitative interaction. The proposed method is demonstrated on a multi‐regional trial using the open‐source statistical software R. Copyright © 2014 John Wiley & Sons, Ltd.

12.
Bioequivalence (BE) is required for approving a generic drug. The two one‐sided tests procedure (TOST, or the 90% confidence interval approach) has been used as the mainstream methodology to test average BE (ABE) on pharmacokinetic parameters such as the area under the blood concentration‐time curve and the peak concentration. However, for highly variable drugs (%CV > 30%), it is difficult to demonstrate ABE in a standard cross‐over study with the typical number of subjects using the TOST because of lack of power. Recently, the US Food and Drug Administration and the European Medicines Agency recommended similar but not identical reference‐scaled average BE (RSABE) approaches to address this issue. Although the power is improved, the new approaches may not guarantee a high level of confidence for the true difference between two drugs at the ABE boundaries. It is also difficult for these approaches to address the issues of population BE (PBE) and individual BE (IBE). We advocate the use of a likelihood approach for representing and interpreting BE data as evidence. Using example data from a full replicate 2 × 4 cross‐over study, we demonstrate how to present evidence using the profile likelihoods for the mean difference and standard deviation ratios of the two drugs for the pharmacokinetic parameters. With this approach, we present evidence for PBE and IBE as well as ABE within a unified framework. Our simulations show that the operating characteristics of the proposed likelihood approach are comparable with the RSABE approaches when the same criteria are applied. Copyright © 2014 John Wiley & Sons, Ltd.
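A minimal sketch of the profile-likelihood presentation for the mean log(T/R) difference, with the nuisance standard deviation profiled out under a normal model; the data are assumed toy values rather than the paper's 2 × 4 cross-over example, and the 1/8 likelihood interval is reported as a conventional evidence summary.

```python
# Minimal sketch: normal profile likelihood for the mean log(T/R) difference,
# normalised to a maximum of 1; data are assumed toy values.
import numpy as np

d = np.log(np.array([1.08, 0.91, 1.12, 0.96, 1.03, 0.87, 1.16, 1.00, 1.05, 0.94]))
n = len(d)

def profile_likelihood(mu):
    """Likelihood of mu maximised over sigma, divided by its maximum value."""
    rss = np.sum((d - mu) ** 2)
    rss_hat = np.sum((d - d.mean()) ** 2)
    return (rss / rss_hat) ** (-n / 2)

grid = np.linspace(d.mean() - 0.2, d.mean() + 0.2, 2001)
lik = np.array([profile_likelihood(m) for m in grid])
interval_1_8 = grid[lik >= 1 / 8]                       # 1/8 likelihood interval

print(f"MLE of geometric mean ratio: {np.exp(d.mean()):.3f}")
print(f"1/8 likelihood interval for GMR: ({np.exp(interval_1_8.min()):.3f}, "
      f"{np.exp(interval_1_8.max()):.3f})")
```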

13.
This paper deals with the analysis of randomization effects in multi‐centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre‐stratified block‐permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period, accounting for the stochastic nature of recruitment and the effects of multiple centres, is investigated. A new analytic approach using a Poisson‐gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma‐distributed population) and its further extensions is proposed. Closed‐form expressions for the corresponding distributions of the predicted number of patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on the treatment arms caused by using centre‐stratified randomization are investigated, and for a large number of centres a normal approximation of the imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
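The Poisson-gamma recruitment model lends itself to a very short predictive simulation; in the sketch below, centre rates are drawn from a gamma population and regional totals over the recruitment window are summed, with all numerical settings (gamma parameters, window length, centre counts per region) being assumptions rather than values from the paper.

```python
# Minimal sketch of the Poisson-gamma recruitment model: centre rates ~ gamma,
# arrivals ~ Poisson, so predictive distributions of regional totals follow by
# Monte Carlo. All numerical settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 2.0, 1.5          # gamma shape/rate of the centre-rate population (assumed)
t = 6.0                         # recruitment window in months (assumed)
centres_per_region = {"EU": 20, "US": 15, "Asia": 10}   # hypothetical regions

def predict_region(n_centres, n_sim=10_000):
    """Monte Carlo predictive distribution of patients recruited in a region by time t."""
    rates = rng.gamma(alpha, 1.0 / beta, size=(n_sim, n_centres))   # centre rates
    totals = rng.poisson(rates * t).sum(axis=1)                     # region totals
    return totals

for region, m in centres_per_region.items():
    totals = predict_region(m)
    lo, hi = np.percentile(totals, [2.5, 97.5])
    print(f"{region}: mean {totals.mean():.0f}, 95% prediction interval [{lo:.0f}, {hi:.0f}]")
```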

14.
Mood's test, a relatively old procedure (and the oldest non‐parametric test in its class) for detecting heterogeneity of variance, is still widely used in areas such as biometry, biostatistics and medicine. Although it is a popular test, it is not suitable for use in a two‐way factorial design. In this paper, Mood's test is generalised to the 2 × 2 factorial design setting and its performance is compared with that of Klotz's test. The power and robustness of these tests are examined in detail by means of a simulation study with 10,000 replications. Based on the simulation results, the generalised Mood and Klotz tests can be recommended especially in settings in which the parent distribution is symmetric. As an example application, we analyse data from a multi‐factor agricultural system that involves chilli peppers, nematodes and yellow nutsedge. This example dataset suggests that the performance of the generalised Mood test is in agreement with that of the generalised Klotz test.
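The two-sample building block that the paper generalises is available directly in SciPy; the sketch below runs the classical Mood test for equal scale on simulated data (the 2 × 2 factorial extension itself is not reproduced here).

```python
# Minimal sketch of the classical two-sample Mood test for equal scale,
# applied to simulated groups with different dispersion.
import numpy as np
from scipy.stats import mood

rng = np.random.default_rng(1)
x = rng.normal(0, 1.0, 40)      # group with unit scale (assumed)
y = rng.normal(0, 2.0, 40)      # group with doubled scale (assumed)

z, p = mood(x, y)               # Mood's rank-based test of equal dispersion
print(f"Mood statistic = {z:.2f}, p-value = {p:.4f}")
```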

15.
Biomarkers that predict efficacy and safety for a given drug therapy are becoming increasingly important for treatment strategy and drug evaluation in personalized medicine. Methodology for appropriately identifying and validating such biomarkers is critically needed, although it is very challenging to develop, especially in trials of terminal diseases with survival endpoints. The marker‐by‐treatment predictiveness curve serves this need by visualizing the treatment effect on survival as a function of the biomarker for each treatment. In this article, we propose the weighted predictiveness curve (WPC). Based on the nature of the data, it generates predictiveness curves by utilizing either parametric or nonparametric approaches. Especially for nonparametric predictiveness curves, by incorporating local assessment techniques, it requires minimal model assumptions and provides great flexibility to visualize the marker‐by‐treatment relationship. WPC can be used to compare biomarkers and identify the one with the highest potential impact. Equally important, by simultaneously viewing several treatment‐specific predictiveness curves across the biomarker range, WPC can also guide biomarker‐based treatment regimens. Simulations representing various scenarios are employed to evaluate the performance of WPC. Application to a well‐known liver cirrhosis trial sheds new light on the data and leads to the discovery of novel patterns of treatment–biomarker interactions. Copyright © 2015 John Wiley & Sons, Ltd.
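A rough nonparametric version of the marker-by-treatment display can be obtained with off-the-shelf local smoothing; the sketch below smooths a simulated binary response against the biomarker separately per arm with lowess and reports the smoothed treatment effect on a grid. It uses neither the paper's weighting scheme nor a survival outcome, and all data are simulated.

```python
# Minimal sketch of a nonparametric marker-by-treatment display: lowess-smoothed
# response curves per arm, and their difference across the biomarker range.
# Data are simulated; this is not the paper's weighted predictiveness curve.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 400
marker = rng.uniform(0, 1, n)
arm = rng.integers(0, 2, n)                              # 0 = control, 1 = treatment
p = 0.3 + 0.4 * marker * arm                             # benefit grows with the marker (assumed)
y = rng.binomial(1, p)

curves = {}
for a in (0, 1):
    fit = sm.nonparametric.lowess(y[arm == a], marker[arm == a], frac=0.5)
    curves[a] = fit                                      # columns: sorted marker, smoothed response

# Treatment effect as a function of the marker: difference of the smoothed curves on a grid
grid = np.linspace(0.05, 0.95, 10)
for g in grid:
    eff = (np.interp(g, curves[1][:, 0], curves[1][:, 1]) -
           np.interp(g, curves[0][:, 0], curves[0][:, 1]))
    print(f"marker = {g:.2f}: smoothed treatment effect = {eff:+.2f}")
```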

16.
Pharmaceutical companies and manufacturers of food products are legally required to label the product's shelf‐life on the packaging. For pharmaceutical products, the requirements for how to determine the shelf‐life are highly regulated. However, the regulatory documents do not specifically define the shelf‐life; instead, the definition is implied through the estimation procedure. In this paper, the focus is on the situation where multiple batches are used to determine a label shelf‐life that is applicable to all future batches. The shortcomings of existing estimation approaches are discussed and then addressed by proposing a new definition of shelf‐life and label shelf‐life, in which greater emphasis is placed on within‐ and between‐batch variability. Furthermore, an estimation approach is developed and its properties are illustrated using a simulation study. Finally, the approach is applied to real data.
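For reference, the conventional regression-based label shelf-life estimate that the paper revisits can be sketched as follows: fit a stability regression, then find the latest time at which the one-sided 95% confidence bound on the mean still meets the specification. The stability data and specification limit below are assumptions, and the between-batch modelling proposed in the paper is not reproduced.

```python
# Minimal sketch of the conventional single-batch shelf-life estimate: linear
# regression of potency on time, shelf-life = latest time where the one-sided
# 95% lower confidence bound on the mean stays above the specification limit.
import numpy as np
from scipy import stats

time = np.array([0, 3, 6, 9, 12, 18, 24], dtype=float)            # months
potency = np.array([100.2, 99.5, 99.1, 98.4, 97.9, 96.8, 95.7])   # % label claim (assumed)
spec = 95.0                                                        # lower specification limit (assumed)

n = len(time)
b, a = np.polyfit(time, potency, 1)            # slope, intercept
resid = potency - (a + b * time)
s = np.sqrt(resid @ resid / (n - 2))
tcrit = stats.t.ppf(0.95, n - 2)

def lower_bound(t):
    """One-sided 95% lower confidence bound on the mean potency at time t."""
    se = s * np.sqrt(1 / n + (t - time.mean()) ** 2 / ((time - time.mean()) ** 2).sum())
    return a + b * t - tcrit * se

grid = np.arange(0, 60, 0.1)
ok = grid[[lower_bound(t) >= spec for t in grid]]
print(f"Estimated shelf-life: {ok.max():.1f} months")
```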

17.
Subgroup‐by‐treatment interaction assessments are routinely performed when analysing clinical trials and are particularly important for phase 3 trials, where the results may affect regulatory labelling. Interpretation of such interactions is particularly difficult: on the one hand, a subgroup finding can be due to chance; on the other, such analyses are known to have a low chance of detecting differential treatment effects across subgroup levels and so may overlook important differences in therapeutic efficacy. The EMA has therefore issued draft guidance on the use of subgroup analyses in this setting. Although this guidance provides clear proposals on the importance of pre‐specifying likely subgroup effects and on how to use this pre‐specification when interpreting trial results, it is less clear which analysis methods would be reasonable and how to interpret apparent subgroup effects in terms of whether further evaluation or action is necessary. A PSI/EFSPI Working Group has therefore been investigating a focused set of analysis approaches for assessing treatment effect heterogeneity across subgroups in confirmatory clinical trials that take account of the number of subgroups explored, and has evaluated the ability of each method to detect such subgroup heterogeneity. This evaluation has shown that the plotting of standardised effects, the bias‐adjusted bootstrapping method and the SIDES method all perform more favourably than traditional approaches such as investigating all subgroup‐by‐treatment interactions individually or applying a global test of interaction. Therefore, these approaches should be considered to aid interpretation and provide context for observed results from subgroup analyses conducted for phase 3 clinical trials.

18.
Phase II clinical trials designed to evaluate a drug's treatment effect can be either single‐arm or double‐arm. A single‐arm design tests the null hypothesis that the response rate of a new drug is lower than a fixed threshold, whereas a double‐arm scheme makes a more objective comparison of the response rate between the new treatment and the standard of care through randomization. Although the randomized design is the gold standard for efficacy assessment, various situations may arise where a single‐arm pilot study prior to a randomized trial is necessary. To combine the single‐ and double‐arm phases and pool the information for better decision making, we propose a Single‐To‐double ARm Transition design (START) with switching hypothesis tests, where the first stage compares the new drug's response rate with a minimum required level and imposes a continuation criterion, and the second stage utilizes randomization to determine the treatment's superiority. We develop a software package in R to calibrate the frequentist error rates and perform simulation studies to assess the trial characteristics. Finally, a metastatic pancreatic cancer trial is used to illustrate the decision rules under the proposed START design.
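The operating characteristics of a design of this type can be explored with a short simulation; the sketch below uses a simplified single-to-double-arm transition (a stage-1 futility screen followed by a pooled one-sided superiority test) rather than the paper's START calibration or its R package, and all design constants are assumptions.

```python
# Minimal simulation sketch of a simplified single-to-double-arm transition:
# stage 1 is a single-arm screen against a continuation cut-off; stage 2 is a
# randomised comparison pooling all treated patients. Design constants are assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n1, r1 = 20, 5            # stage-1 size and continuation cut-off (assumed)
n2 = 40                   # per-arm stage-2 size (assumed)
alpha = 0.05              # one-sided significance level (assumed)

def prob_success(p_trt, p_ctl, n_sim=20_000):
    wins = 0
    for _ in range(n_sim):
        x1 = rng.binomial(n1, p_trt)
        if x1 < r1:
            continue                                   # stopped after the single-arm stage
        xt = x1 + rng.binomial(n2, p_trt)              # pooled treatment responses
        xc = rng.binomial(n2, p_ctl)                   # control responses
        nt, nc = n1 + n2, n2
        p_pool = (xt + xc) / (nt + nc)
        se = np.sqrt(p_pool * (1 - p_pool) * (1 / nt + 1 / nc))
        z = (xt / nt - xc / nc) / se if se > 0 else 0.0
        wins += z > norm.ppf(1 - alpha)                # one-sided superiority test
    return wins / n_sim

print("Type I error:", prob_success(0.30, 0.30))
print("Power       :", prob_success(0.50, 0.30))
```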

19.
During a new drug development process, it is desirable to detect potential safety signals in a timely manner. For this purpose, repeated meta‐analyses may be performed sequentially on accumulating safety data. Moreover, if the amount of safety data from the originally planned program is not enough to ensure adequate power to test a specific hypothesis (e.g., the noninferiority hypothesis for an event of interest), the total sample size may be increased by adding new studies to the program. Without appropriate adjustment, it is well known that the type I error rate will be inflated because of repeated analyses and sample size adjustment. In this paper, we discuss potential issues associated with adaptive and repeated cumulative meta‐analyses of safety data conducted during a drug development process. We consider both frequentist and Bayesian approaches. A new drug development example is used to demonstrate the application of the methods. Copyright © 2015 John Wiley & Sons, Ltd.

20.
Linear mixed models are regularly applied to animal and plant breeding data to evaluate genetic potential. Residual maximum likelihood (REML) is the preferred method for estimating variance parameters associated with this type of model. Typically an iterative algorithm is required for the estimation of variance parameters. Two algorithms which can be used for this purpose are the expectation‐maximisation (EM) algorithm and the parameter expanded EM (PX‐EM) algorithm. Both, particularly the EM algorithm, can be slow to converge when compared to a Newton‐Raphson type scheme such as the average information (AI) algorithm. The EM and PX‐EM algorithms require specification of the complete data, including the incomplete and missing data. We consider a new incomplete data specification based on a conditional derivation of REML. We illustrate the use of the resulting new algorithm through two examples: a sire model for lamb weight data and a balanced incomplete block soybean variety trial. In the cases where the AI algorithm failed, a REML PX‐EM based on the new incomplete data specification converged in 28% to 30% fewer iterations than the alternative REML PX‐EM specification. For the soybean example a REML EM algorithm using the new specification converged in fewer iterations than the current standard specification of a REML PX‐EM algorithm. The new specification integrates linear mixed models, Henderson's mixed model equations, REML and the REML EM algorithm into a cohesive framework.
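For background, the standard EM-REML iteration built on Henderson's mixed model equations can be written compactly for a one-way (sire-type) model; the sketch below uses simulated data and the usual complete-data specification, not the paper's new conditional specification or its PX-EM variant.

```python
# Minimal sketch of a standard EM-REML iteration for a one-way random effects
# ("sire"-type) model via Henderson's mixed model equations. Simulated data;
# not the paper's new incomplete-data specification.
import numpy as np

rng = np.random.default_rng(4)
q, reps = 10, 8                       # sires and progeny per sire (assumed)
u = rng.normal(0, np.sqrt(2.0), q)    # true sire effects (variance 2, assumed)
y = 10.0 + np.repeat(u, reps) + rng.normal(0, np.sqrt(5.0), q * reps)

n = len(y)
X = np.ones((n, 1))                   # fixed effect: intercept only
Z = np.kron(np.eye(q), np.ones((reps, 1)))

s2u, s2e = 1.0, 1.0                   # starting values for the variance components
for it in range(200):
    lam = s2e / s2u
    C = np.block([[X.T @ X, X.T @ Z],
                  [Z.T @ X, Z.T @ Z + lam * np.eye(q)]])   # MME coefficient matrix
    rhs = np.concatenate([X.T @ y, Z.T @ y])
    sol = np.linalg.solve(C, rhs)
    beta, uhat = sol[:1], sol[1:]
    Cuu = np.linalg.inv(C)[1:, 1:]                          # random-effects block of C^{-1}
    s2u_new = (uhat @ uhat + s2e * np.trace(Cuu)) / q       # EM update for sire variance
    s2e_new = (y @ y - beta @ (X.T @ y) - uhat @ (Z.T @ y)) / (n - 1)  # residual update
    converged = abs(s2u_new - s2u) + abs(s2e_new - s2e) < 1e-8
    s2u, s2e = s2u_new, s2e_new
    if converged:
        break

print(f"EM-REML after {it + 1} iterations: sigma^2_u = {s2u:.3f}, sigma^2_e = {s2e:.3f}")
```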
