Similar Documents
20 similar documents found.
1.
The role and value of statistical contributions in drug development up to the point of health authority approval are well understood. But health authority approval is only a true ‘win’ if the evidence enables access and adoption into clinical practice. In today's complex and evolving healthcare environment, there is additional strategic evidence generation, communication, and decision support that can benefit from statistical contributions. In this article, we describe the history of medical affairs in the context of drug development, the factors driving post-approval evidence generation needs, and the opportunities for statisticians to optimize evidence generation for stakeholders beyond health authorities in order to ensure that new medicines reach appropriate patients.

2.
Evidence‐based quantitative methodologies have been proposed to inform decision‐making in drug development, such as metrics to make go/no‐go decisions or predictions of success, identified with statistical significance of future clinical trials. While these methodologies appropriately address some critical questions on the potential of a drug, they either consider the past evidence without predicting the outcome of the future trials or focus only on efficacy, failing to account for the multifaceted aspects of a successful drug development. As quantitative benefit‐risk assessments could enhance decision‐making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit‐risk balance in the next pivotal studies. For one drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probability of success. The predictions are based on the available evidence from the previous trials, to which new hypotheses on the future development could be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision‐makers. We present a fictitious, but realistic, example in major depressive disorder inspired by a real decision‐making case.
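
A minimal sketch of how such a predictive probability of composite success could be computed by simulation, assuming a normal approximation for the treatment effect and a simple probability of a favourable benefit-risk outcome standing in for the full assessment. All names and numbers (prior_mean, clin_relevance, p_safety_ok, etc.) are illustrative assumptions, not values from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

# Evidence from previous trials, summarised as a normal "prior" on the true effect.
prior_mean, prior_sd = 2.5, 1.0        # e.g. points on a depression rating scale
clin_relevance = 2.0                   # minimum clinically relevant difference
sigma = 8.0                            # assumed residual SD of the endpoint
n_per_arm = 150                        # planned size of the next pivotal trial
alpha = 0.025                          # one-sided significance level
p_safety_ok = 0.85                     # assumed probability of a favourable benefit-risk

se_trial = sigma * np.sqrt(2.0 / n_per_arm)
n_sim = 100_000

# 1) Draw a plausible true effect from the prior evidence.
true_effect = rng.normal(prior_mean, prior_sd, n_sim)
# 2) Simulate the estimated effect in the future pivotal trial.
est_effect = rng.normal(true_effect, se_trial)
# 3) Composite success: significance + clinical relevance + favourable benefit-risk.
significant = est_effect / se_trial > stats.norm.ppf(1 - alpha)
relevant = est_effect > clin_relevance
safe = rng.random(n_sim) < p_safety_ok

pp_composite = np.mean(significant & relevant & safe)
print(f"Predictive probability of composite success: {pp_composite:.3f}")
```

Running the same simulation under several candidate designs (different sample sizes, doses, or endpoints) and comparing the resulting probabilities is one way to support the strategy discussion the abstract describes.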

3.
Graphics are at the core of exploring and understanding data, communicating results and conclusions, and supporting decision‐making. Increasing our graphical expertise can significantly strengthen our impact as professional statisticians and quantitative scientists. In this article, we present a concerted effort to improve the way we create graphics at Novartis. We provide our vision and guiding principles, before describing seven work packages in more detail. The actions, principles, and experiences laid out in this paper are applicable generally, also beyond drug development, which is our field of work. The purpose of this article is to share our experiences and help foster the use of good graphs in pharmaceutical statistics and beyond. A Graphics Principles “Cheat Sheet” is available online at https://graphicsprinciples.github.io/.

4.
The Committee for Medicinal Products for Human Use (CHMP) is currently preparing a guideline on 'methodological issues in confirmatory clinical trials with flexible design and analysis plan'. PSI (Statisticians in the Pharmaceutical Industry) sponsored a meeting of pharmaceutical statisticians with an interest in the area to share experiences and identify potential opportunities for adaptive designs in late-phase clinical drug development. This article outlines the issues raised, resulting discussions and consensus views reached. Adaptive designs have potential utility in late-phase clinical development. Sample size re-estimation seems to be valuable and widely accepted, but should be made independent of the observed treatment effect where possible. Where unblinding is necessary, careful consideration needs to be given to preserving the integrity of the trial. An area where adaptive designs can be particularly beneficial is to allow dose selection in pivotal trials via adding/dropping treatment arms; for example, combining phase II and III of the drug development program. The more adaptations made during a late-phase clinical trial, the less likely that the clinical trial would be considered as a confirmatory trial. In all cases it would be advisable to consult with regulatory agencies at the protocol design stage. All involved should remain open to scientifically valid opportunities to improve drug development.
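
A minimal sketch of the kind of sample size re-estimation that stays independent of the observed treatment effect: at an interim look the standard deviation is re-estimated from the pooled, blinded data and the per-arm sample size is recomputed. The design assumptions and interim data below are illustrative only.

```python
import numpy as np
from scipy import stats

def required_n_per_arm(delta, sd, alpha=0.025, power=0.9):
    """Normal-approximation sample size per arm for a two-arm comparison."""
    z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
    return int(np.ceil(2 * (sd * (z_a + z_b) / delta) ** 2))

rng = np.random.default_rng(7)
delta_planned, sd_planned = 5.0, 12.0            # design assumptions
n_initial = required_n_per_arm(delta_planned, sd_planned)

# Interim look after roughly half the planned patients: the analyst sees the
# pooled responses without treatment labels (blinded).
arm_a = rng.normal(0.0, 12.0, n_initial // 2)
arm_b = rng.normal(delta_planned, 12.0, n_initial // 2)
blinded = np.concatenate([arm_a, arm_b])         # labels unknown to the analyst
sd_blinded = blinded.std(ddof=1)                 # pooled SD, slightly inflated by the arm shift

n_reestimated = required_n_per_arm(delta_planned, sd_blinded)
print(f"initial n/arm = {n_initial}, re-estimated n/arm = {n_reestimated}")
```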

5.
In many areas of pharmaceutical research, there has been increasing use of categorical data and more specifically ordinal responses. In many cases, complex models are required to account for different types of dependences among the responses. The clinical trial that is considered here involved patients who were required to remain in a particular state to enable the doctors to examine their heart. The aim of this trial was to study the relationship between the dose of the drug administered and the time that was spent by the patient in the state permitting examination. The patient's state was measured every second by a continuous Doppler signal which was categorized by the doctors into one of four ordered categories. Hence, the response consisted of repeated ordinal series. These series were of different lengths because the drug effect wore off faster (or slower) in certain patients depending on the drug dose administered and the infusion rate, and therefore the length of drug administration. A general method for generating new ordinal distributions is presented which is sufficiently flexible to handle unbalanced ordinal repeated measurements. It consists of obtaining a cumulative mixture distribution from a Laplace transform and introducing into it the integrated intensity of a binary logistic, continuation ratio or proportional odds model. Then, a multivariate distribution is constructed by a procedure that is similar to the updating process of the Kalman filter. Several types of history dependences are proposed.

6.
In the context of frequentist inference there are strong arguments in favour of data reduction by both (a) conditioning on the most appropriate ancillary statistic and (b) restricting attention to a minimal sufficient statistic. However, significantly for the study of the foundations of frequentist inference, there are some examples in which the order of application of these data reductions has an important bearing on the statistical inference of interest. This paper presents a new simple example of this kind.  相似文献   

7.
At least two adequate and well-controlled clinical studies are usually required to support the effectiveness of a treatment. In some circumstances, however, a single study providing strong results may be sufficient. Several statistical stability criteria for assessing whether a single study provides very persuasive results are available. A new criterion is introduced, based on a conservative estimate of the reproducibility probability; it also allows statistical tests to be performed by referring directly to that estimate. These stability criteria are compared numerically and conceptually. This work aims to help both regulatory agencies and pharmaceutical companies decide whether the results of a single study may be sufficient to establish effectiveness.
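
A minimal sketch of reproducibility probability (RP) estimation for a trial analysed with a two-sided z-test, showing both the plug-in ("estimated power") form and a conservative form based on a lower confidence bound for the standardized effect, in the general spirit of this literature. The exact cut-off used as a stability criterion is a policy choice and is not prescribed here; the numbers are illustrative.

```python
from scipy import stats

def rp_estimates(z_obs, alpha=0.05, gamma=0.95):
    """Plug-in and conservative estimates of the probability that an identical
    replicate study would again reach two-sided significance at level alpha."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    rp_plugin = stats.norm.cdf(abs(z_obs) - z_crit)
    # Conservative version: shrink the observed statistic to a lower 100*gamma% bound.
    z_lower = abs(z_obs) - stats.norm.ppf(gamma)
    rp_conservative = stats.norm.cdf(z_lower - z_crit)
    return rp_plugin, rp_conservative

for z in (2.2, 3.0, 4.0):
    plugin, conservative = rp_estimates(z)
    print(f"z = {z:.1f}: plug-in RP = {plugin:.3f}, conservative RP = {conservative:.3f}")
```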

8.
The development of a new drug is a major undertaking and it is important to consider carefully the key decisions in the development process. Decisions are made in the presence of uncertainty, and outcomes such as the probability of successful drug registration depend on the clinical development programme. The Rheumatoid Arthritis Drug Development Model was developed to support key decisions for drugs in development for the treatment of rheumatoid arthritis. It is configured to simulate Phase 2b and 3 trials based on the efficacy of new drugs at the end of Phase 2a, evidence about the efficacy of existing treatments, and expert opinion regarding key safety criteria. The model evaluates the performance of different development programmes with respect to the duration of disease of the target population, Phase 2b and 3 sample sizes, the dose(s) of the experimental treatment, the choice of comparator, the duration of the Phase 2b clinical trial, the primary efficacy outcome and decision criteria for successfully passing Phases 2b and 3. It uses Bayesian clinical trial simulation to calculate the probability of successful drug registration based on the uncertainty about parameters of interest, thereby providing a more realistic assessment of the likely outcomes of individual trials and sequences of trials for the purpose of decision making. In this case study, the results show that, depending on the trial design, the new treatment has assurances of successful drug registration in the range 0.044–0.142 for an ACR20 outcome and 0.057–0.213 for an ACR50 outcome. Copyright © 2009 John Wiley & Sons, Ltd.
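
A minimal sketch of computing "assurance" (the unconditional probability of a successful trial) by Bayesian clinical trial simulation for a binary responder endpoint such as ACR20: a prior on the new drug's response rate is propagated through a simulated Phase 3 comparison. The prior, response rates and decision rule below are illustrative and are not those of the Rheumatoid Arthritis Drug Development Model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n_per_arm = 200
n_sim = 50_000

p_control = 0.30                                 # assumed comparator ACR20 rate
# Phase 2a evidence about the new drug, expressed as a Beta prior on its response rate.
prior_a, prior_b = 18, 27                        # prior mean 0.40, illustrative

true_p_new = rng.beta(prior_a, prior_b, n_sim)   # draw plausible true rates
x_new = rng.binomial(n_per_arm, true_p_new)
x_ctl = rng.binomial(n_per_arm, p_control, n_sim)

# Decision rule: one-sided pooled z-test for a higher response rate at alpha = 0.025.
p1, p2 = x_new / n_per_arm, x_ctl / n_per_arm
pooled = (x_new + x_ctl) / (2 * n_per_arm)
se = np.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
z = np.where(se > 0, (p1 - p2) / se, 0.0)
assurance = np.mean(z > stats.norm.ppf(0.975))
print(f"Assurance (probability of a significant Phase 3 result): {assurance:.3f}")
```

Unlike conventional power, which conditions on a single assumed effect, assurance averages the probability of success over the remaining uncertainty about the true effect, which is why it is typically lower.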

9.
‘Success’ in drug development is bringing to patients a new medicine that has an acceptable benefit–risk profile and that is also cost‐effective. Cost‐effectiveness means that the incremental clinical benefit is deemed worth paying for by a healthcare system, and it has an important role in enabling manufacturers to bring new medicines to patients as soon as possible following regulatory approval. Subgroup analyses are increasingly being utilised by decision‐makers in the determination of the cost‐effectiveness of new medicines when making recommendations. This paper highlights the statistical considerations when using subgroup analyses to support cost‐effectiveness for a health technology assessment. The key principles recommended for subgroup analyses supporting clinical effectiveness published by Paget et al. are evaluated with respect to subgroup analyses supporting cost‐effectiveness. A health technology assessment case study is included to highlight the importance of subgroup analyses when incorporated into cost‐effectiveness analyses. In summary, we recommend planning subgroup analyses for cost‐effectiveness analyses early in the drug development process and adhering to good statistical principles when using subgroup analyses in this context. In particular, we consider it important to provide transparency in how subgroups are defined, be able to demonstrate the robustness of the subgroup results and be able to quantify the uncertainty in the subgroup analyses of cost‐effectiveness. Copyright © 2014 John Wiley & Sons, Ltd.

10.
In oncology, it may not always be possible to evaluate the efficacy of new medicines in placebo-controlled trials. Furthermore, while some newer, biologically targeted anti-cancer treatments may be expected to deliver therapeutic benefit in terms of better tolerability or improved symptom control, they may not always be expected to provide increased efficacy relative to existing therapies. This naturally leads to the use of active-control, non-inferiority trials to evaluate such treatments. In recent evaluations of anti-cancer treatments, the non-inferiority margin has often been defined in terms of demonstrating that at least 50% of the active control effect has been retained by the new drug, using methods such as those described by Rothmann et al., Statistics in Medicine 2003; 22:239-264 and Wang and Hung, Controlled Clinical Trials 2003; 24:147-155. However, this approach can lead to prohibitively large clinical trials, and it encourages dichotomizing the trial outcome as either 'success' or 'failure', which oversimplifies interpretation. With relatively modest modification, these methods can be used to define a stepwise approach to design and analysis. In the first design step, the trial is sized to show indirectly that the new drug would have beaten placebo; in the second analysis step, the probability that the new drug is superior to placebo is assessed; and, if that probability is sufficiently high, in the third and final step the relative efficacy of the new drug to control is assessed on a continuum of effect retention via an 'effect retention likelihood plot'. This stepwise approach is likely to provide a more complete assessment of relative efficacy so that the value of new treatments can be better judged.
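
A minimal sketch of the effect-retention idea on the log hazard ratio scale, in the spirit of the synthesis approach of Rothmann et al.: the point estimate of the fraction of the control effect retained, a test of 50% retention, and a grid of retention fractions that could be plotted as an 'effect retention likelihood plot'. All inputs are illustrative assumptions: hr_tc would come from the active-control trial (new drug vs control) and hr_pc from a historical meta-analysis (placebo vs control, greater than 1 if the control is effective).

```python
import numpy as np
from scipy import stats

log_hr_tc, se_tc = np.log(1.05), 0.08     # new drug vs active control
log_hr_pc, se_pc = np.log(1.40), 0.10     # placebo vs active control (historical)

# Point estimate of the fraction of the control effect retained by the new drug.
retention_hat = 1.0 - log_hr_tc / log_hr_pc
print(f"Estimated effect retention: {retention_hat:.2f}")

# Synthesis test of H0: retention <= 50%, i.e. log_hr_tc >= 0.5 * log_hr_pc.
z_50 = (0.5 * log_hr_pc - log_hr_tc) / np.sqrt(se_tc**2 + 0.25 * se_pc**2)
print(f"50% retention: z = {z_50:.2f}, one-sided p = {1 - stats.norm.cdf(z_50):.4f}")

# A grid of retention fractions rho; plotting the p-values against rho gives a
# simple version of an effect-retention plot (rho = 0 is the indirect test vs placebo).
for rho in (0.0, 0.25, 0.5, 0.75):
    z_rho = ((1 - rho) * log_hr_pc - log_hr_tc) / np.sqrt(se_tc**2 + (1 - rho)**2 * se_pc**2)
    print(f"rho = {rho:.2f}: one-sided p = {1 - stats.norm.cdf(z_rho):.4f}")
```

With these illustrative numbers the new drug is indirectly superior to placebo (rho = 0) but cannot demonstrate 50% retention, which is exactly the situation where a continuum of retention values is more informative than a single pass/fail margin.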

11.
At present, there are situations in antibiotic drug development where the low number of enrollable patients with key problem pathogens makes it impossible to conduct fully powered non‐inferiority trials in the traditional way. Recent regulatory changes have begun to address this situation. In parallel, statistical issues regarding the application of alternative techniques, balancing the unmet need with the level of certainty in the approval process, and the use of additional sources of data are critical areas to increase development feasibility. Although such approaches increase uncertainty compared with a traditional development program, this will be necessary to allow new agents to be made available. Identification of these risks and explicit discussion around requirements in these areas should help clarify the situation, and hence, the feasibility of developing drugs to treat the most concerning pathogens before the unmet need becomes even more acute than at present. Copyright © 2014 John Wiley & Sons, Ltd.

12.
Clinical trials in the era of precision cancer medicine aim to identify and validate biomarker signatures which can guide the assignment of individually optimal treatments to patients. In this article, we propose a group sequential randomized phase II design, which updates the biomarker signature as the trial goes on, utilizes enrichment strategies for patient selection, and uses Bayesian response-adaptive randomization for treatment assignment. To evaluate the performance of the new design, in addition to the commonly considered criteria of Type I error and power, we propose four new criteria measuring the benefits and losses for individuals both inside and outside of the clinical trial. Compared with designs with equal randomization, the proposed design gives trial participants a better chance to receive their personalized optimal treatments and thus results in a higher response rate on the trial. This design increases the chance to discover a successful new drug by an adaptive enrichment strategy, i.e. identification and selective enrollment of a subset of patients who are sensitive to the experimental therapies. Simulation studies demonstrate these advantages of the proposed design. It is illustrated by an example based on an actual clinical trial in non-small-cell lung cancer.
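
A minimal sketch of Bayesian response-adaptive randomization for a two-arm binary-response trial using independent Beta-Binomial models, which is only the randomization component of the design described above; the biomarker-signature updating and enrichment steps are not reproduced. The true response rates, cohort structure and tuning exponent are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
true_p = {"control": 0.25, "experimental": 0.45}     # unknown in practice
a = {arm: 1.0 for arm in true_p}                     # Beta(1, 1) priors
b = {arm: 1.0 for arm in true_p}
n_enrolled = {arm: 0 for arm in true_p}
n_draws = 4000                                       # Monte Carlo draws per update

for cohort in range(10):                             # 10 cohorts of 10 patients
    # Posterior probability that the experimental arm has the higher response rate.
    p_exp_best = np.mean(
        rng.beta(a["experimental"], b["experimental"], n_draws)
        > rng.beta(a["control"], b["control"], n_draws)
    )
    # Allocation probability, stabilised by a power transformation (exponent 0.5).
    w = p_exp_best ** 0.5
    alloc_exp = w / (w + (1 - p_exp_best) ** 0.5)
    for _ in range(10):
        arm = "experimental" if rng.random() < alloc_exp else "control"
        response = rng.random() < true_p[arm]
        a[arm] += response
        b[arm] += 1 - response
        n_enrolled[arm] += 1

print("patients per arm:", n_enrolled)
print("posterior means:", {arm: round(a[arm] / (a[arm] + b[arm]), 3) for arm in true_p})
```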

13.
In this work a new type of logits and odds ratios, which includes as special cases the continuation and the reverse-continuation logits and odds ratios, is defined. We prove that these logits and odds ratios define a parameterization of the joint probabilities of a two-way contingency table. The problem of testing equality and inequality constraints on these logits and odds ratios is examined with particular regard to two new hypotheses of monotone dependence. Work partially supported by a MIUR 2004 grant. Preliminary findings were presented at the SIS (Società Italiana di Statistica) Annual Meeting, Torino, 2006.
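
A minimal sketch of the classical continuation logits and the corresponding log odds ratios for a 2 x J contingency table, the special case that the new family of logits and odds ratios is built to generalise. The table below is illustrative.

```python
import numpy as np

table = np.array([[20, 15, 10, 5],     # row 1: counts over 4 ordered categories
                  [10, 15, 15, 20]])   # row 2

probs = table / table.sum(axis=1, keepdims=True)

def continuation_logits(p):
    """Continuation logit for category j (within a row): log P(Y = j) / P(Y > j)."""
    upper = np.cumsum(p[::-1])[::-1]          # upper[j] = P(Y >= j)
    return np.log(p[:-1] / upper[1:])         # defined for j = 1, ..., J-1

logits = np.vstack([continuation_logits(row) for row in probs])
# Continuation log odds ratios: contrasts of the continuation logits between rows.
log_odds_ratios = logits[0] - logits[1]
print("continuation logits:\n", np.round(logits, 3))
print("continuation log odds ratios:", np.round(log_odds_ratios, 3))
```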

14.
Traditionally, the bioequivalence of a generic drug with the innovator's product is assessed by comparing their pharmacokinetic profiles determined from the blood or plasma concentration-time curves. This method may only be applicable to formulations where blood drug or metabolite levels adequately characterize absorption and metabolism. For non-systemic drugs, characterized by the lack of systemic presence, such as metered dose inhalers (MDI), anti-ulcer agents, topical antifungals and vaginal antifungals, a new definition of therapeutic equivalence and new acceptance criteria should be used. When the pharmacologic effects of a drug can be easily measured, pharmacodynamic studies can be used to assess the therapeutic equivalence of non-systemic drugs. When analytical methods or other tests cannot be developed to permit use of the pharmacodynamic approach, clinical trials comparing one or several clinical endpoints may be the only suitable method for establishing therapeutic equivalence. In this paper we evaluate by Monte-Carlo simulation the fixed-sample performance of some two one-sided tests procedures which may be used to assess the therapeutic equivalence of non-systemic drugs with binary clinical endpoints. Formulae for sample size determination for therapeutic equivalence clinical trials are also given.
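
A minimal sketch of a two one-sided tests (TOST) procedure for therapeutic equivalence on a binary endpoint, together with the usual normal-approximation sample size formula. The Wald-type tests shown here are a generic version; the equivalence margin, rates and target power are illustrative and not taken from the paper's simulations.

```python
import numpy as np
from scipy import stats

def tost_binary(x_t, n_t, x_r, n_r, margin, alpha=0.05):
    """Declare equivalence if both one-sided Wald tests reject at level alpha."""
    p_t, p_r = x_t / n_t, x_r / n_r
    se = np.sqrt(p_t * (1 - p_t) / n_t + p_r * (1 - p_r) / n_r)
    z_lower = (p_t - p_r + margin) / se     # H0: p_t - p_r <= -margin
    z_upper = (margin - (p_t - p_r)) / se   # H0: p_t - p_r >=  margin
    z_crit = stats.norm.ppf(1 - alpha)
    return z_lower > z_crit and z_upper > z_crit

def n_per_group(p_t, p_r, margin, alpha=0.05, power=0.8):
    """Approximate per-group sample size for equivalence of two proportions."""
    z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(1 - (1 - power) / 2)
    var = p_t * (1 - p_t) + p_r * (1 - p_r)
    return int(np.ceil(var * (z_a + z_b) ** 2 / (margin - abs(p_t - p_r)) ** 2))

print("required n per group:", n_per_group(p_t=0.70, p_r=0.70, margin=0.15))
print("equivalent?", tost_binary(x_t=140, n_t=200, x_r=138, n_r=200, margin=0.15))
```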

15.
The practice of sequence alignment is constantly oscillating between the risk of overlooking important structure and that of discovering any arbitrarily defined kind of structure anywhere. On the other hand, the use of a condensed consensus sequence may lead to a substantial loss in valuable information. While adopting a Mahalanobis‐type index we allow for a certain degree of uncertainty in the measurements. This uncertainty may be caused by inaccurate measurements or ambiguity. In this paper, we test the similarity between DNA sequences within the framework of equivalence testing, accounting for both variances and covariances between frequencies of nucleotides. Statistical methods for testing equivalence were first developed in the context of pharmacokinetics and later extended to the field of clinical trials. Nowadays, (bio)equivalence tests seem to be less frequently used outside the drug testing field, including statistical genetics. Copyright © 2005 John Wiley & Sons, Ltd.

16.
Phase II trials evaluate whether a new drug or a new therapy is worth pursuing further, or whether certain treatments are feasible. A typical phase II study is a single-arm (open-label) trial with a binary clinical endpoint (response to therapy). Although many oncology phase II clinical trials are designed with a two-stage procedure, multi-stage designs for phase II cancer clinical trials are now feasible due to the increased capability of data capture. Such designs adjust for multiple analyses and variations in analysis time, and provide greater flexibility, such as minimizing the number of patients treated on an ineffective therapy and identifying the minimum number of patients needed to evaluate whether the trial warrants further development. In most NIH-sponsored studies, the early stopping rule is determined so that the number of patients treated on an ineffective therapy is minimized. In pharmaceutical trials, it is also important to know as early as possible whether the trial is highly promising and how likely it is that an early conclusion would be sustained. Although various methods are available to address these issues, practitioners often use disparate methods for different questions and do not realize that a single unified method exists. This article shows how a unified approach via a fully sequential procedure, the sequential conditional probability ratio test, can address the multiple needs of a phase II trial. We show that the fully sequential program can be used to derive an optimized, efficient multi-stage design for either low or high activity, to identify the minimum number of patients required to assess whether a new drug warrants further study, and to adjust for unplanned interim analyses. In addition, we calculate a probability of discordance, that is, the probability that the statistical test would reach the opposite conclusion should the trial continue to its planned end, which is usually the sample size of a fixed-sample design. This probability can be used to aid decision making in a drug development program. All computations are based on the exact binomial distribution.
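
A minimal sketch of evaluating a multi-stage single-arm phase II design with exact binomial calculations: the trial stops early for futility at interim looks and the drug is declared promising only if enough responses are seen at the final analysis. The boundaries below are illustrative and are not derived from the sequential conditional probability ratio test described in the article.

```python
import numpy as np
from scipy import stats

p0, p1 = 0.20, 0.40          # uninteresting vs target response rate
looks = [15, 30, 45]         # cumulative sample sizes at the analyses
futility = [2, 7]            # stop if cumulative responses <= bound at an interim look
final_success = 14           # declare success if responses >= this at n = 45

def operating_characteristics(p):
    """Exact P(declare success) and expected sample size under response rate p."""
    dist = np.array([1.0])   # P(k cumulative responses and trial still ongoing)
    n_prev, expected_n, p_success = 0, 0.0, 0.0
    for i, n_look in enumerate(looks):
        inc = n_look - n_prev
        dist = np.convolve(dist, stats.binom.pmf(np.arange(inc + 1), inc, p))
        k = np.arange(dist.size)
        if i < len(looks) - 1:
            stopped = k <= futility[i]
            expected_n += dist[stopped].sum() * n_look
            dist = np.where(stopped, 0.0, dist)
        else:
            expected_n += dist.sum() * n_look
            p_success = dist[k >= final_success].sum()
        n_prev = n_look
    return p_success, expected_n

for p, label in [(p0, "type I error"), (p1, "power")]:
    prob, en = operating_characteristics(p)
    print(f"p = {p:.2f}: P(declare success) = {prob:.4f} ({label}), E[N] = {en:.1f}")
```

In practice the boundaries would be chosen by searching over candidates to optimize such operating characteristics, which is the role the fully sequential procedure plays in the article.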

17.
This article proposes a confidence interval procedure for an open question posed by Tamhane and Dunnett regarding inference on the minimum effective dose of a drug for binary data. We use a partitioning approach in conjunction with a confidence interval procedure to provide a solution to this problem. Binary data frequently arise in medical investigations in connection with dichotomous outcomes such as the development of a disease or the efficacy of a drug. The proposed procedure not only detects the minimum effective dose of the drug, but also provides estimation information on the treatment effect at the closest ineffective dose. Such information benefits follow-up investigations in clinical trials. We prove that, when the confidence interval for the pairwise comparison has (or asymptotically controls) confidence level 1 − α, the stepwise procedure strongly controls (or asymptotically controls) the familywise error rate at level α, which is a key criterion in dose finding. The new method is compared with other procedures in terms of power performance and coverage probability using simulations. The simulation results shed new light on the discernible features of the new procedure. An example on the investigation of acetaminophen is included.
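
A minimal sketch of a step-down search for the minimum effective dose (MED) with a binary endpoint: starting from the highest dose, each dose is compared with placebo via a one-sided lower confidence bound for the difference in response rates, and the search stops at the first dose that fails, which is then the closest dose not shown effective. The data are illustrative and the simple Wald bound is a stand-in for the partition-based intervals of the article.

```python
import numpy as np
from scipy import stats

alpha = 0.05
doses = ["placebo", "250 mg", "500 mg", "1000 mg"]   # hypothetical dose groups
responders = np.array([6, 9, 15, 21])
n = np.array([40, 40, 40, 40])

p_hat = responders / n
p0, n0 = p_hat[0], n[0]
z = stats.norm.ppf(1 - alpha)

med = None
for i in range(len(doses) - 1, 0, -1):          # highest dose first
    diff = p_hat[i] - p0
    lower = diff - z * np.sqrt(p_hat[i] * (1 - p_hat[i]) / n[i] + p0 * (1 - p0) / n0)
    print(f"{doses[i]}: lower {100*(1-alpha):.0f}% bound for difference = {lower:.3f}")
    if lower > 0:                               # dose shown effective; keep stepping down
        med = doses[i]
    else:                                       # closest ineffective dose: stop and report
        print(f"search stops; {doses[i]} is the closest dose not shown effective")
        break
print("estimated minimum effective dose:", med)
```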

18.
We, as statisticians, are living in interesting times. New scientifically significant questions are waiting for our contributions, new data accumulate at a fast rate, and the rapid increase of computing power gives us unprecedented opportunities to meet these challenges. Yet, many members of our community are still turning the old wheel as if nothing dramatic had happened. There are ideas, methods and techniques which are commonly used but outdated and should be replaced by new ones. Can we expect to see, as has been suggested, a consolidation of statistical methodologies towards a new synthesis, or is perhaps an even wider separation and greater divergence the more likely scenario? In this talk these issues are discussed, and some conjectures and suggestions are made.

19.
A bioequivalence test compares bioavailability parameters, such as the maximum observed concentration (Cmax) or the area under the concentration‐time curve, for a test drug and a reference drug. Planning a bioequivalence test requires an assumption about the variance of Cmax or the area under the concentration‐time curve in order to estimate the sample size. Since the variance is unknown, current 2‐stage designs use the variance estimated from stage 1 data to determine the sample size for stage 2. However, the variance estimate based on stage 1 data alone is unstable and may result in a stage 2 sample size that is too large or too small. This problem is magnified in bioequivalence tests with a serial sampling schedule, in which only one sample is collected from each individual, making a correct variance assumption even more difficult. To solve this problem, we propose 3‐stage designs. Our designs increase the sample size gradually over the stages, so that extremely large sample sizes are avoided. With one more stage of data, the power is increased. Moreover, the variance estimated using data from both stages 1 and 2 is more stable than that using data from stage 1 only in a 2‐stage design. These features of the proposed designs are demonstrated by simulations. Significance levels are adjusted to control the overall type I error at the same level for all the multistage designs.

20.
Progression‐free survival is recognized as an important endpoint in oncology clinical trials. In clinical trials aimed at new drug development, the target population often comprises patients who are refractory to standard therapy and whose tumors progress rapidly. This situation increases the bias of the hazard ratio calculated for progression‐free survival, resulting in decreased power for such patients. Therefore, measures to prevent this loss of power need to be taken in advance, when estimating the sample size. Here, I propose a novel procedure for calculating the assumed hazard ratio for progression‐free survival under the Cox proportional hazards model, which can be applied in sample size calculation. The hazard ratios derived by the proposed procedure were almost identical to those obtained by simulation. The hazard ratio calculated by the proposed procedure is applicable to sample size calculation and coincides with the nominal power. Methods that compensate for the loss of power due to bias in the hazard ratio are also discussed from a practical point of view.
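
A minimal sketch of the standard events calculation for a progression-free survival comparison under the Cox proportional hazards model (Schoenfeld's formula), which is the kind of calculation an assumed hazard ratio feeds into. The hazard ratio, allocation ratio and event probability below are illustrative.

```python
import numpy as np
from scipy import stats

def required_events(hr, alpha=0.05, power=0.8, alloc=0.5):
    """Number of PFS events needed for a two-sided log-rank / Cox test."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(z**2 / (alloc * (1 - alloc) * np.log(hr) ** 2)))

def required_patients(hr, p_event, **kwargs):
    """Translate events into patients given the expected event probability by analysis."""
    return int(np.ceil(required_events(hr, **kwargs) / p_event))

hr_assumed = 0.70
events = required_events(hr_assumed)
patients = required_patients(hr_assumed, p_event=0.75)
print(f"HR = {hr_assumed}: {events} PFS events, about {patients} patients if 75% have progressed")
```

If the assumed hazard ratio is biased toward 1, the event target derived from it will be too small and the trial underpowered, which is the problem the proposed calculation procedure is meant to address.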

