Similar Articles (20 results)
1.
2.
Predictive enrichment strategies use biomarkers to selectively enroll oncology patients into clinical trials to more efficiently demonstrate therapeutic benefit. Because the enriched population differs from the patient population eligible for screening with the biomarker assay, there is potential for bias when estimating clinical utility for the screening-eligible population if the selection process is ignored. We write estimators of clinical utility as integrals averaging regression model predictions over the conditional distribution of the biomarker scores defined by the assay cutoff, and discuss the conditions under which consistent estimation can be achieved while accounting for some nuances that may arise as the biomarker assay progresses toward a companion diagnostic. We outline and implement a Bayesian approach to estimating these clinical utility measures and use simulations to illustrate performance and the potential biases when estimation naively ignores enrichment. Results suggest that the proposed integral representation of clinical utility in combination with Bayesian methods provides a practical strategy to facilitate cutoff decision-making in this setting. Copyright © 2015 John Wiley & Sons, Ltd.
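As a rough illustration of the integral representation (a frequentist plug-in sketch, not the paper's Bayesian implementation), the code below fits a regression on a simulated enriched sample and averages its predictions over the conditional marker distribution above candidate cutoffs; all data, cutoffs and parameters are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: marker scores for the whole screening-eligible population;
# outcomes are observed only for patients enrolled under the enrollment cutoff.
rng = np.random.default_rng(1)
marker = rng.normal(size=5000)                     # biomarker scores at screening
enroll_cut = 0.0
m_obs = marker[marker >= enroll_cut]               # enriched (enrolled) sample
y_obs = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.5 * m_obs))))

# Step 1: regression model for E[Y | marker], fitted on the enriched sample.
fit = LogisticRegression().fit(m_obs.reshape(-1, 1), y_obs)

# Step 2: clinical utility at a candidate assay cutoff c, estimated as the average
# of model predictions over the conditional marker distribution {marker >= c}.
for c in (0.0, 0.5, 1.0):
    m_plus = marker[marker >= c]
    u = fit.predict_proba(m_plus.reshape(-1, 1))[:, 1].mean()
    print(f"cutoff {c:.1f}: estimated response rate among marker-positives = {u:.3f}")
```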

3.
In randomized clinical trials, it is often necessary to demonstrate that a new medical treatment does not substantially differ from a standard reference treatment. Formal testing of such ‘equivalence hypotheses’ is typically done by combining two one-sided tests (TOST). A quite different strand of research has demonstrated that replacing nuisance parameters with a null estimate produces P-values that are close to exact (Lloyd, 2008a) and that maximizing over the residual dependence on the nuisance parameter produces P-values that are exact and optimal within a class (Röhmel & Mansmann, 1999; Lloyd, 2008a). The three procedures – TOST, estimation and maximization of a nuisance parameter – can each be expressed as a transformation of an approximate P-value. In this paper, we point out that TOST-based P-values will generally be conservative, even if based on exact and optimal one-sided tests. This conservatism is avoided by applying the three transforms in a certain order – estimation, followed by TOST, followed by maximization. We compare this procedure with existing alternatives through a numerical study of binary matched pairs where the two treatments are compared by the difference of response rates. The resulting tests are uniformly more powerful than the considered competitors, although the difference in power can range from very small to moderate.
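A minimal normal-approximation sketch of the basic TOST procedure for binary matched pairs (the exact one-sided tests and the estimation/maximization refinements the paper studies are not implemented here; the counts are hypothetical):

```python
import numpy as np
from scipy.stats import norm

def tost_matched_pairs(n10, n01, n, margin, alpha=0.05):
    """Approximate TOST for equivalence of paired response rates.
    n10, n01: discordant pair counts; n: number of pairs; margin: equivalence margin.
    The TOST p-value is the maximum of the two one-sided p-values."""
    d = (n10 - n01) / n                              # difference of response rates
    se = np.sqrt((n10 + n01) / n - d**2) / np.sqrt(n)
    p_lo = norm.sf((d + margin) / se)                # H0: d <= -margin
    p_hi = norm.cdf((d - margin) / se)               # H0: d >= +margin
    p = max(p_lo, p_hi)
    return p, p < alpha

p, eq = tost_matched_pairs(n10=12, n01=9, n=200, margin=0.10)
print(f"TOST p-value = {p:.4f}, equivalence concluded: {eq}")
```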

4.
A RANDOMIZED LONGITUDINAL PLAY-THE-WINNER DESIGN FOR REPEATED BINARY DATA
In some clinical trials with two treatment arms, the patients enter the study at different times and are then allocated to one of two treatment groups. It is important for ethical reasons that there is a greater probability of allocating a patient to the group that has displayed more favourable responses up to the patient's entry time. There are many adaptive designs in the literature that meet this ethical constraint, but most deal with a single binary response. Often the binary response is longitudinal in nature, being observed repeatedly over different monitoring times. This paper develops a randomized longitudinal play-the-winner design for such binary responses which meets the ethical constraint. Some performance characteristics of this design have been studied. It has been implemented in a trial of pulsed electromagnetic field therapy with rheumatoid arthritis patients.
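A simplified single-response sketch of a randomized play-the-winner urn (the paper's design extends the urn updates to repeated longitudinal responses; all parameters here are hypothetical):

```python
import numpy as np

def rpw_trial(n_patients, p_success, u=1, beta=1, seed=0):
    """Simulate a randomized play-the-winner urn for two arms with success
    probabilities p_success = (pA, pB): draw an arm in proportion to its balls,
    then reward the arm that 'won' (success on it, or failure on the other)."""
    rng = np.random.default_rng(seed)
    urn = np.array([u, u], dtype=float)            # balls for arms A and B
    allocations = []
    for _ in range(n_patients):
        arm = rng.choice(2, p=urn / urn.sum())
        y = rng.random() < p_success[arm]
        urn[arm if y else 1 - arm] += beta         # skew the urn toward the better arm
        allocations.append(arm)
    return np.array(allocations)

alloc = rpw_trial(200, (0.7, 0.4))
print("share allocated to the better arm:", (alloc == 0).mean())
```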

5.
This is a teaching paper intended as a quick introduction to some of the key concepts in the design of pharmaceutical trials for those new to the industry. Since the first ‘modern’ randomized clinical trial was reported in 1948 by the Medical Research Council, clinical trials have become a central component in the assessment of new therapies. The primary objective of any clinical trial is to obtain an unbiased and reliable assessment of a given regimen's response, independent of any known or unknown prognostic factors. The essential principles of trial design can be thought of as the ABC of allocation at random, blinding and controlled. In addition, the three Rs of endpoint selection – representativeness, reliability and reproducibility – are relevant in this context. This paper briefly describes the basic concepts for clinical trials and highlights possible points to consider. It draws heavily on the principles highlighted in the International Conference on Harmonisation guidelines as well as recommendations in the CONSORT statement. Copyright © 2002 John Wiley & Sons, Ltd.

6.
To evaluate the clinical utility of new risk markers, a crucial step is to measure their predictive accuracy with prospective studies. However, it is often infeasible to obtain marker values for all study participants. The nested case-control (NCC) design is a useful cost-effective strategy for such settings. Under the NCC design, markers are only ascertained for cases and a fraction of controls sampled randomly from the risk sets. The outcome-dependent sampling generates a complex data structure and therefore a challenge for analysis. Existing methods for analyzing NCC studies focus primarily on association measures. Here, we propose a class of non-parametric estimators for commonly used accuracy measures. We derive asymptotic expansions for accuracy estimators based on both finite-population and Bernoulli sampling and establish asymptotic equivalence between the two. Simulation results suggest that the proposed procedures perform well in finite samples. The new procedures are illustrated with data from the Framingham Offspring Study.
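A minimal sketch of inverse-probability-weighted accuracy estimators under NCC-type sampling, assuming for simplicity a known Bernoulli sampling probability for controls (the paper also treats finite-population sampling, and its estimators are more refined):

```python
import numpy as np

def ncc_accuracy(marker, case, sampled, p_sample):
    """IPW estimators of TPR/FPR over marker cutoffs: cases get weight 1,
    sampled controls weight 1/p_sample; markers are only available for these."""
    w = np.where(case == 1, 1.0, 1.0 / p_sample)
    keep = (case == 1) | (sampled == 1)
    m, d, w = marker[keep], case[keep], w[keep]
    cuts = np.unique(m)
    tpr = np.array([w[(m >= c) & (d == 1)].sum() / w[d == 1].sum() for c in cuts])
    fpr = np.array([w[(m >= c) & (d == 0)].sum() / w[d == 0].sum() for c in cuts])
    return cuts, tpr, fpr

# toy data: marker measured for all cases and a 10% Bernoulli sample of controls
rng = np.random.default_rng(5)
n = 4000
marker = rng.normal(size=n)
case = rng.binomial(1, 1 / (1 + np.exp(-(marker - 2))))
sampled = rng.binomial(1, 0.1, n)
cuts, tpr, fpr = ncc_accuracy(marker, case, sampled, p_sample=0.1)
print(f"TPR at the median cutoff: {tpr[len(cuts) // 2]:.2f}")
```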

7.
The case-cohort design has been demonstrated to be an economical and efficient approach in large cohort studies when the measurement of some covariates on all individuals is expensive. Various methods have been proposed for case-cohort data when the dimension of covariates is smaller than the sample size. However, limited work has been done for high-dimensional case-cohort data, which are frequently collected in large epidemiological studies. In this paper, we propose a variable screening method for ultrahigh-dimensional case-cohort data under the framework of the proportional hazards model, which allows the covariate dimension to increase with the sample size at an exponential rate. Our procedure enjoys the sure screening property and ranking consistency under some mild regularity conditions. We further extend this method to an iterative version to handle scenarios where some covariates are jointly important but are marginally unrelated or only weakly correlated with the response. The finite-sample performance of the proposed procedure is evaluated via both simulation studies and an application to real data from a breast cancer study.
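To illustrate the flavour of marginal screening for censored data (not the paper's exact statistic or weighting), the sketch below ranks covariates by the absolute value of a weighted univariate Cox score statistic at beta = 0; the weights would be, e.g., inverse subcohort sampling fractions for a case-cohort sample:

```python
import numpy as np

def marginal_score_screening(X, time, event, weights, top_k):
    """Rank covariates by |sum over events of (x_event - weighted risk-set mean)|,
    i.e. the marginal Cox partial-likelihood score at beta = 0. A sketch only."""
    order = np.argsort(time)
    X, time, event, weights = X[order], time[order], event[order], weights[order]
    scores = np.zeros(X.shape[1])
    for i in np.flatnonzero(event):
        at_risk = time >= time[i]                         # risk set at this event time
        w = weights[at_risk]
        xbar = (w[:, None] * X[at_risk]).sum(0) / w.sum() # weighted risk-set mean
        scores += X[i] - xbar
    return np.argsort(-np.abs(scores))[:top_k]

# toy check: only covariate 0 affects the (uncensored) survival time
rng = np.random.default_rng(0)
n, p = 300, 1000
X = rng.normal(size=(n, p))
tte = rng.exponential(1 / np.exp(0.8 * X[:, 0]))
print(marginal_score_screening(X, tte, np.ones(n, dtype=int), np.ones(n), top_k=5))
```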

8.
To increase the predictive abilities of several plasma biomarkers on the coronary artery disease (CAD)-related vital statuses over time, our research interest mainly focuses on seeking combinations of these biomarkers with the highest time-dependent receiver operating characteristic curves. An extended generalized linear model (EGLM) with time-varying coefficients and an unknown bivariate link function is used to characterize the conditional distribution of time to CAD-related death. Based on censored survival data, two non-parametric procedures are proposed to estimate the optimal composite markers, linear predictors in the EGLM model. Estimation methods for the classification accuracies of the optimal composite markers are also proposed. In the article we establish theoretical results of the estimators and examine the corresponding finite-sample properties through a series of simulations with different sample sizes, censoring rates and censoring mechanisms. Our optimization procedures and estimators are further shown to be useful through an application to a prospective cohort study of patients undergoing angiography.

9.
The instigation of mass screening for breast cancer has, over the last three decades, raised various statistical issues and led to the development of new statistical approaches. Initially, the design of screening trials was the main focus of research but, as the evidence in favour of population-based screening programmes mounts, a variety of other applications have also been identified. These include administrative and quality control tasks, for monitoring routine screening services, as well as epidemiological modelling of incidence and mortality. We review the commonly used methods of cancer screening evaluation, highlight some current issues in breast screening and, using examples from randomized trials and established screening programmes, illustrate the role that statistical science has played in the development of clinical research in this field.

10.
‘Success’ in drug development is bringing to patients a new medicine that has an acceptable benefit–risk profile and that is also cost-effective. Cost-effectiveness means that the incremental clinical benefit is deemed worth paying for by a healthcare system, and it plays an important role in enabling manufacturers to get new medicines to patients as soon as possible following regulatory approval. Subgroup analyses are increasingly utilised by decision-makers when determining the cost-effectiveness of new medicines and making recommendations. This paper highlights the statistical considerations when using subgroup analyses to support cost-effectiveness in a health technology assessment. The key principles recommended for subgroup analyses supporting clinical effectiveness published by Paget et al. are evaluated with respect to subgroup analyses supporting cost-effectiveness. A health technology assessment case study is included to highlight the importance of subgroup analyses when incorporated into cost-effectiveness analyses. In summary, we recommend planning subgroup analyses for cost-effectiveness analyses early in the drug development process and adhering to good statistical principles when using subgroup analyses in this context. In particular, we consider it important to provide transparency in how subgroups are defined, to be able to demonstrate the robustness of the subgroup results and to be able to quantify the uncertainty in the subgroup analyses of cost-effectiveness. Copyright © 2014 John Wiley & Sons, Ltd.
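As one concrete way to quantify uncertainty in a subgroup's cost-effectiveness (an illustration only; the paper prescribes principles, not this code), the hypothetical sketch below bootstraps the incremental net benefit within a single subgroup:

```python
import numpy as np

def subgroup_inb_ci(cost_t, eff_t, cost_c, eff_c, wtp, n_boot=2000, seed=0):
    """Bootstrap 95% CI for the incremental net benefit within one subgroup:
    INB = wtp * (mean effect difference) - (mean cost difference),
    where wtp is the willingness-to-pay threshold per unit of effect."""
    rng = np.random.default_rng(seed)
    inbs = []
    for _ in range(n_boot):
        it = rng.integers(0, len(cost_t), len(cost_t))   # resample treated patients
        ic = rng.integers(0, len(cost_c), len(cost_c))   # resample control patients
        inbs.append(wtp * (eff_t[it].mean() - eff_c[ic].mean())
                    - (cost_t[it].mean() - cost_c[ic].mean()))
    return np.percentile(inbs, [2.5, 97.5])

# invented subgroup data: per-patient costs and effects (e.g. QALYs)
rng = np.random.default_rng(1)
ct, et = rng.normal(12000, 3000, 150), rng.normal(1.2, 0.4, 150)
cc, ec = rng.normal(9000, 2500, 160), rng.normal(1.0, 0.4, 160)
print(subgroup_inb_ci(ct, et, cc, ec, wtp=20000))
```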

11.
The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on what treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of "no treatment effect" are done routinely, and the p-value < 0.05 is often the determinant of what constitutes a "successful" trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence lays part of the blame on the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared "unsuccessful" due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a "prior" for Phase 3 trials, so that synthesized evidence across trials can be used to compute probability statements that are valuable for understanding the magnitude of treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
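A minimal conjugate normal-normal sketch of the advocated approach: a prior synthesized from earlier trials is updated with a Phase 3 estimate to give direct probability statements about the treatment effect (all numbers are hypothetical):

```python
from scipy.stats import norm

# Prior for the treatment effect, synthesized from earlier (e.g. Phase 2) trials,
# combined with the Phase 3 estimate under a normal-normal conjugate model.
prior_mean, prior_sd = 0.30, 0.15     # hypothetical meta-analytic prior
est, se = 0.22, 0.10                  # hypothetical Phase 3 estimate and its SE

post_prec = 1 / prior_sd**2 + 1 / se**2
post_sd = post_prec ** -0.5
post_mean = (prior_mean / prior_sd**2 + est / se**2) / post_prec

# probability statements of the kind the authors advocate
print(f"P(effect > 0   | data) = {norm.sf(0.0, post_mean, post_sd):.3f}")
print(f"P(effect > 0.2 | data) = {norm.sf(0.2, post_mean, post_sd):.3f}")
```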

12.
A method is proposed for block randomization of treatments to experimental units that can accommodate both multiple quantitative blocking variables and unbalanced designs. Hierarchical clustering in conjunction with leaf-order optimization is used to block experimental units in multivariate space. The method is illustrated in the context of a diabetic mouse assay. A simulation study is presented to explore the utility of the proposed randomization method relative to that of a completely randomized approach, both in the presence and absence of covariate adjustment. An example R function is provided to illustrate the implementation of the method. Copyright © 2010 John Wiley & Sons, Ltd.
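A hypothetical Python sketch of the blocking idea (the paper supplies its own R function): hierarchical clustering with optimal leaf ordering places similar units adjacently, and treatments are then randomized within consecutive blocks:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, optimal_leaf_ordering, leaves_list
from scipy.spatial.distance import pdist

def block_randomize(X, n_arms, seed=0):
    """Block units in multivariate covariate space via Ward clustering with
    optimal leaf ordering, then permute treatments within consecutive blocks
    of size n_arms. A sketch of the idea, not the paper's implementation."""
    rng = np.random.default_rng(seed)
    d = pdist(X)                                   # distances on blocking variables
    Z = optimal_leaf_ordering(linkage(d, method="ward"), d)
    order = leaves_list(Z)                         # similar units end up adjacent
    arms = np.empty(len(X), dtype=int)
    for start in range(0, len(X), n_arms):
        block = order[start:start + n_arms]
        arms[block] = rng.permutation(n_arms)[:len(block)]
    return arms

# e.g. 24 mice with 3 quantitative blocking variables, two treatment arms
X = np.random.default_rng(1).normal(size=(24, 3))
print(block_randomize(X, n_arms=2))
```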

13.
The UK body of statisticians in the pharmaceutical industry, PSI, has called on the heads of European regulatory agencies responsible for assessing applications for marketing authorizations for new medicines in the EU to employ full-time statisticians. In order to assess the present situation, a survey was conducted to identify the number of agencies employing one or more full-time statisticians. Of 29 responding agencies, 12 employed one or more statisticians on a full-time basis, whereas 17 did not. Among these 17, 7 involved external experts on a regular basis, 5 involved external statisticians on a case-by-case basis, and 5 never involved external statistical expertise. Failure to involve statisticians in the assessment of efficacy and safety of medicines does not automatically lead to reports of low quality or invalid assessment of benefit-risk. However, in-depth knowledge of statistical methodology is often necessary to uncover weaknesses and potentially biased efficacy estimates. This might be of importance for the final opinion on granting a marketing authorization, and statistical review should therefore be conducted by those who are professionally expert in the area. A positive trend toward increased involvement of statistical expertise in the European network of regulatory agencies is observed. Copyright © 2009 John Wiley & Sons, Ltd.

14.
This paper deals with the analysis of randomization effects in multi-centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre-stratified block-permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period, accounting for the stochastic nature of recruitment and the effects of multiple centres, is investigated. A new analytic approach using a Poisson-gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma-distributed population) and its further extensions is proposed. Closed-form expressions for the corresponding distributions of the predicted number of patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on the treatment arms caused by using centre-stratified randomization are investigated, and for a large number of centres a normal approximation of the imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
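A simulation sketch of the Poisson-gamma recruitment model (the paper derives closed-form, negative-binomial-type expressions; the parameter values here are hypothetical):

```python
import numpy as np

def predict_recruitment(n_centres, alpha, beta, t, n_sims=10000, seed=0):
    """Poisson-gamma recruitment: centre rates ~ Gamma(shape=alpha, rate=beta),
    arrivals at each centre ~ Poisson(rate * t). Returns the simulated
    predictive distribution of total recruitment by time t."""
    rng = np.random.default_rng(seed)
    rates = rng.gamma(alpha, 1 / beta, size=(n_sims, n_centres))
    return rng.poisson(rates * t).sum(axis=1)

# e.g. 50 centres, mean rate alpha/beta = 0.2 patients/month, 12-month horizon
totals = predict_recruitment(n_centres=50, alpha=2.0, beta=10.0, t=12.0)
print("mean and 5th/95th percentiles:",
      totals.mean(), *np.percentile(totals, [5, 95]))
```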

15.
Identifying important biomarkers that are predictive of cancer patients' prognosis is key to gaining better insight into the biological influences on the disease and has become a critical component of precision medicine. The emergence of large-scale biomedical survival studies, which typically involve an excessive number of biomarkers, has created high demand for efficient screening tools for selecting predictive biomarkers. The vast number of biomarkers defies any existing variable selection method via regularization. The recently developed variable screening methods, though powerful in many practical settings, fail to incorporate prior information on the importance of each biomarker and are less powerful in detecting marginally weak but jointly important signals. We propose a new conditional screening method for survival outcome data that computes the marginal contribution of each biomarker given a priori known biological information. This is based on the premise that some biomarkers are known a priori to be associated with disease outcomes. Our method possesses the sure screening property and a vanishing false selection rate. The utility of the proposal is further confirmed with extensive simulation studies and the analysis of a diffuse large B-cell lymphoma dataset. We are pleased to dedicate this work to Jack Kalbfleisch, who has made instrumental contributions to the development of modern methods for analyzing survival data.

16.
Observational epidemiological studies are increasingly used in pharmaceutical research to evaluate the safety and effectiveness of medicines. Such studies can complement findings from randomized clinical trials by involving larger and more generalizable patient populations, by accruing greater durations of follow-up and by representing what happens more typically in the clinical setting. However, the interpretation of exposure effects in observational studies is almost always complicated by non-random exposure allocation, which can result in confounding and potentially lead to misleading conclusions. Confounding occurs when an extraneous factor, related to both the exposure and the outcome of interest, partly or entirely explains the relationship observed between the study exposure and the outcome. Although randomization can eliminate confounding by distributing all such extraneous factors equally across the levels of a given exposure, methods for dealing with confounding in observational studies include a careful choice of study design and the possible use of advanced analytical methods. The aim of this paper is to introduce the reader working in the pharmaceutical industry to some of the approaches that can be used to help minimize the impact of confounding in observational research.
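As an example of one such analytical method (an illustration of the general approach, not a method the paper prescribes), the sketch below implements simple inverse-probability-of-treatment weighting with a propensity model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_effect(X, exposed, outcome):
    """Estimate an exposure effect by weighting each subject with the inverse
    of its estimated probability of the exposure actually received, then
    comparing weighted outcome means (stabilization omitted for brevity)."""
    ps = LogisticRegression().fit(X, exposed).predict_proba(X)[:, 1]
    w = np.where(exposed == 1, 1 / ps, 1 / (1 - ps))
    mu1 = np.average(outcome[exposed == 1], weights=w[exposed == 1])
    mu0 = np.average(outcome[exposed == 0], weights=w[exposed == 0])
    return mu1 - mu0

# toy confounded data: X[:, 0] drives both exposure and outcome; true effect = 1
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))
exposed = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
outcome = 1.0 * exposed + X[:, 0] + rng.normal(size=1000)
print(iptw_effect(X, exposed, outcome))
```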

17.
A complication that may arise in some bioequivalence studies is that of ‘incomplete subject profiles’, caused by missing values that occur at one or more sampling points in the concentration–time curve for some study subjects. We assess the impact of incomplete subject profiles on the assessment of bioequivalence in a standard two-period crossover design. The specific aim of the investigation is to assess the impact of four different patterns of missing concentration values on the coverage level of a 90% nominal two-sided confidence interval for the ratio of geometric means and then to consider the impact on the probability of concluding bioequivalence. An overall conclusion from the results is that random missingness – that is, missingness for reasons unrelated to the bioavailability of the formulation involved or, more generally, to any aspect of the study design and conduct – has a damaging effect on the study conclusions only when the number of missing values is fairly large. On the other hand, a missingness pattern that potentially has a very damaging effect on the study conclusions is that which arises when values are missing ‘late’ in the concentration–time curve. Copyright © 2005 John Wiley & Sons, Ltd.
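A minimal sketch of the underlying bioequivalence criterion, computing a 90% confidence interval for the ratio of geometric means from within-subject log differences (period effects and the paper's missingness patterns are ignored here; data are invented):

```python
import numpy as np
from scipy.stats import t

def be_ci(auc_test, auc_ref, level=0.90):
    """90% CI for the ratio of geometric means via a t-interval on the mean of
    within-subject log differences (complete profiles only). Bioequivalence is
    concluded if the interval lies within the conventional (0.80, 1.25) limits."""
    d = np.log(auc_test) - np.log(auc_ref)
    n = len(d)
    half = t.ppf(1 - (1 - level) / 2, n - 1) * d.std(ddof=1) / np.sqrt(n)
    lo, hi = np.exp(d.mean() - half), np.exp(d.mean() + half)
    return lo, hi, (0.80 < lo) and (hi < 1.25)

rng = np.random.default_rng(4)
ref = rng.lognormal(3.0, 0.3, 24)                  # reference-formulation AUCs
test = ref * rng.lognormal(0.02, 0.15, 24)         # true ratio about 1.02
print(be_ci(test, ref))
```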

18.
Often, single-arm trials are used in phase II to gather the first evidence of an oncological drug's efficacy, with drug activity determined through tumour response using the RECIST criterion. Provided the null hypothesis of ‘insufficient drug activity’ is rejected, the next step could be a randomised two-arm trial. However, single-arm trials may provide a biased treatment effect because of patient selection, and thus this development plan may not be an efficient use of resources. Therefore, we compare the performance of development plans consisting of single-arm trials followed by randomised two-arm trials with stand-alone single-stage or group sequential randomised two-arm trials. Through this, we are able to investigate the utility of single-arm trials and determine the most efficient drug development plans, setting our work in the context of a published single-arm non-small-cell lung cancer trial. Reference priors, reflecting the opinions of ‘sceptical’ and ‘enthusiastic’ investigators, are used to quantify and guide the suitability of single-arm trials in this setting. We observe that the explored development plans incorporating single-arm trials are often non-optimal. Moreover, even the most pessimistic reference priors have a considerable probability in favour of alternative plans. Analysis suggests that expected sample size savings of up to 25% could have been made, and the issues associated with single-arm trials avoided, for the non-small-cell lung cancer treatment through direct progression to a group sequential randomised two-arm trial. Careful consideration should thus be given to the use of single-arm trials in oncological drug development when a randomised trial will follow. Copyright © 2015 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.

19.
Regression with a circular response is a topic of current interest. We introduce non-parametric smoothing for this problem. Simple adaptations of a weight function enable a unified formulation for both real-line and circular predictors, whereas these cases have been tackled by quite distinct parametric methods. Additionally, we discuss various methodological extensions, obtaining a number of promising techniques – totally new in circular statistics – such as confidence intervals for the value of a circular regression and non-parametric autoregression in circular time series. The findings are also illustrated through real data examples.
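A minimal sketch of non-parametric smoothing with a circular response: the fitted value is a kernel-weighted circular mean, with a Gaussian weight for a real-line predictor (the paper's unified weight-function formulation would swap in, e.g., a von Mises-type weight for a circular predictor):

```python
import numpy as np

def circular_nw(x_grid, x, theta, h):
    """Nadaraya-Watson-type smoother for a circular response theta:
    the fit at x0 is atan2(sum w*sin(theta), sum w*cos(theta)), the
    weighted circular mean. Gaussian weights assume a real-line predictor;
    exp(kappa * cos(x - x0)) would serve a circular predictor."""
    fits = []
    for x0 in x_grid:
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # kernel weights in the predictor
        fits.append(np.arctan2((w * np.sin(theta)).sum(), (w * np.cos(theta)).sum()))
    return np.array(fits)

# toy example: circular response wraps once around as x goes from 0 to 1
x = np.linspace(0.0, 1.0, 200)
theta = np.mod(2 * np.pi * x + np.random.default_rng(3).vonmises(0.0, 8.0, 200),
               2 * np.pi)
print(circular_nw(np.array([0.25, 0.50, 0.75]), x, theta, h=0.05))
```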

20.
In this paper we introduce two estimators of a population proportion when randomized response sampling with a normal randomizing distribution is used. The estimators have been obtained by using the method of moments. Both of the proposed estimators are shown to be more efficient than the corresponding estimators of Franklin (1989b).
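A hypothetical sketch of a Franklin-type randomizing scheme with a simple method-of-moments estimator (the paper's two estimators are not reproduced; all parameter choices are invented): a respondent in the sensitive group reports a draw from one normal distribution, and otherwise from another, so the group membership is never revealed directly.

```python
import numpy as np

# Respondents in the sensitive group report Y ~ N(mu1, sd); others Y ~ N(mu2, sd).
# Then E[Y] = pi*mu1 + (1 - pi)*mu2, giving the moment estimator below.
rng = np.random.default_rng(0)
pi_true, mu1, mu2, sd, n = 0.3, 5.0, 1.0, 1.0, 2000
in_group = rng.random(n) < pi_true
y = rng.normal(np.where(in_group, mu1, mu2), sd)

pi_hat = (y.mean() - mu2) / (mu1 - mu2)   # method-of-moments estimate of pi
print(f"estimated proportion: {pi_hat:.3f}")
```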
