Similar Literature (20 results)
1.
This paper illustrates how the design and statistical analysis of the primary endpoint of a proof‐of‐concept study can be formulated within a Bayesian framework and is motivated by and illustrated with a Pfizer case study in chronic kidney disease. It is shown how decision criteria for success can be formulated, and how the study design can be assessed in relation to these, both using the traditional approach of probability of success conditional on the true treatment difference and also using Bayesian assurance and pre‐posterior probabilities. The case study illustrates how an informative prior on placebo response can have a dramatic effect in reducing sample size, saving time and resource, and we argue that in some cases, it can be considered unethical not to include relevant literature data in this way. Copyright © 2015 John Wiley & Sons, Ltd.
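As a minimal illustration of the kind of calculation described (not the Pfizer analysis itself), the sketch below estimates the pre-posterior probability of success by Monte Carlo for a two-arm study with a normal endpoint, where an informative prior on the placebo mean is used in the analysis; the priors, known residual SD, 0.975 success threshold and arm sizes are all invented for illustration.

```python
# Minimal sketch (not the Pfizer analysis): Monte Carlo assurance for a two-arm
# proof-of-concept study with a normal endpoint and known residual SD, where the
# Bayesian analysis places an informative prior on the placebo mean. All numbers
# are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
sigma = 10.0                       # assumed known residual SD
n_trt, n_pbo = 30, 10              # small placebo arm, leaning on the prior
m0, s0 = 0.0, 2.0                  # informative prior on placebo mean: N(m0, s0^2)
eff_mean, eff_sd = 5.0, 4.0        # design prior for the treatment effect
threshold = 0.975                  # success: P(effect > 0 | data) > 0.975
n_sim = 20_000

successes = 0
for _ in range(n_sim):
    # Pre-posterior step: draw the "true" state of nature from the design priors
    true_pbo = rng.normal(m0, s0)
    true_eff = rng.normal(eff_mean, eff_sd)
    ybar_trt = rng.normal(true_pbo + true_eff, sigma / np.sqrt(n_trt))
    ybar_pbo = rng.normal(true_pbo, sigma / np.sqrt(n_pbo))
    # Conjugate normal update for the placebo mean (known sigma)
    post_var = 1 / (1 / s0**2 + n_pbo / sigma**2)
    post_pbo = post_var * (m0 / s0**2 + ybar_pbo * n_pbo / sigma**2)
    # Flat prior on the treated mean; normal posterior for the treatment effect
    post_eff_mean = ybar_trt - post_pbo
    post_eff_sd = np.sqrt(sigma**2 / n_trt + post_var)
    if norm.sf(0, loc=post_eff_mean, scale=post_eff_sd) > threshold:
        successes += 1

print(f"Assurance (pre-posterior probability of success): {successes / n_sim:.3f}")
```

Re-running the sketch with a vague placebo prior (large s0) shows how much extra placebo information an informative prior is worth in this setting.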

2.
This paper describes how a multistage analysis strategy for a clinical trial can assess a sequence of hypotheses that pertain to successively more stringent criteria for excess risk exclusion or superiority for a primary endpoint with a low event rate. The criteria for assessment can correspond to excess risk of an adverse event or to a guideline for sufficient efficacy as in the case of vaccine trials. The proposed strategy is implemented through a set of interim analyses, and success for one or more of the less stringent criteria at an interim analysis can be the basis for a regulatory submission, whereas the clinical trial continues to accumulate information to address the more stringent, but not futile, criteria. Simulations show that the proposed strategy is satisfactory for control of type I error, sufficient power, and potential success at interim analyses when the true relative risk is more favorable than assumed for the planned sample size. Copyright © 2013 John Wiley & Sons, Ltd.

3.
This paper illustrates an approach to setting the decision framework for a study in early clinical drug development. It shows how the criteria for a go and a stop decision are calculated based on pre‐specified target and lower reference values. The framework can lead to a three‐outcome approach by including a consider zone; this could enable smaller studies to be performed in early development, with other information either external to or within the study used to reach a go or stop decision. In this way, Phase I/II trials can be geared towards providing actionable decision‐making rather than the traditional focus on statistical significance. The example provided illustrates how the decision criteria were calculated for a Phase II study, including an interim analysis, and how the operating characteristics were assessed to ensure the decision criteria were robust. Copyright © 2016 John Wiley & Sons, Ltd.
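The abstract does not reproduce the numerical rule; the sketch below shows one common way such three-outcome criteria are written down, using a normal approximation to the estimated effect (effectively a flat prior) and illustrative probability thresholds rather than the values from the case study.

```python
# Hedged sketch of a three-outcome (go / consider / stop) rule built from a
# pre-specified target value (TV) and lower reference value (LRV), using a
# normal approximation to the estimated effect. The probability thresholds
# (0.80, 0.10) are illustrative defaults, not the values from the case study.
from scipy.stats import norm

def decision(est, se, lrv, tv, p_lrv=0.80, p_tv=0.10):
    """Classify an observed treatment effect.

    est, se : estimated effect and its standard error
    lrv, tv : lower reference value and target value on the effect scale
    """
    prob_gt_lrv = norm.sf(lrv, loc=est, scale=se)   # approx. P(effect > LRV | data)
    prob_gt_tv = norm.sf(tv, loc=est, scale=se)     # approx. P(effect > TV  | data)
    if prob_gt_lrv >= p_lrv and prob_gt_tv >= p_tv:
        return "go"
    if prob_gt_lrv < p_lrv and prob_gt_tv < p_tv:
        return "stop"
    return "consider"   # intermediate zone: resolved with other internal/external data

# Example: effect estimated at 2.5 (SE 1.2) against LRV = 0 and TV = 3
print(decision(est=2.5, se=1.2, lrv=0.0, tv=3.0))
```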

4.
In recent years, high failure rates have been observed in phase III trials. One of the main reasons is overoptimistic assumptions in the planning of phase III, resulting from limited phase II information and/or unawareness of realistic success probabilities. We present an approach for planning a phase II trial in a time‐to‐event setting that considers the whole phase II/III clinical development programme. We derive stopping boundaries after phase II that minimise the number of events, subject to constraints on the conditional probabilities of a correct go/no‐go decision after phase II as well as on the conditional success probabilities for phase III. In addition, we give general recommendations for the choice of the phase II sample size. Our simulations show that the unconditional probabilities of a go/no‐go decision, as well as the unconditional success probabilities for phase III, are influenced by the number of events observed in phase II. However, choosing more than 150 events in phase II does not seem necessary, as the impact on these probabilities then becomes quite small. We recommend considering aspects such as the number of compounds in phase II and the resources available when determining the sample size. The lower the number of compounds and the fewer the resources available for phase III, the higher the investment in phase II should be. Copyright © 2015 John Wiley & Sons, Ltd.

5.
In an environment where (i) potential risks to subjects participating in clinical studies need to be managed carefully, (ii) trial costs are increasing, and (iii) there are limited research resources available, it is necessary to prioritize research projects and sometimes re-prioritize if early indications suggest that a trial has a low probability of success. Futility designs allow this re-prioritization to take place. This paper reviews a number of possible futility methods available and presents a case study from a late-phase study of an HIV therapeutic, which utilized conditional power-based stopping thresholds. The two most challenging aspects of incorporating a futility interim analysis into a trial design are the selection of optimal stopping thresholds and the timing of the analysis, both of which require the balancing of various risks. The paper outlines a number of graphical aids that proved useful in explaining the statistical risks involved to the study team. Further, the paper outlines a decision analysis that combined expectations of drug performance with conditional power calculations in order to produce probabilities of different interim and final outcomes, and which ultimately led to the selection of the final stopping thresholds.
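The stopping thresholds themselves are study-specific, but the underlying conditional power calculation is standard; the sketch below uses the usual normal-approximation (B-value) formula, with the interim Z value, information fraction and drift assumptions invented for illustration.

```python
# Standard conditional-power calculation under a normal approximation
# (B-value formulation). Not the actual thresholds from the HIV case study;
# all numbers below are illustrative.
from scipy.stats import norm

def conditional_power(z_interim, info_frac, drift, alpha=0.025):
    """Probability of crossing the final one-sided boundary, given interim data.

    z_interim : observed interim Z statistic
    info_frac : information fraction t (0 < t < 1) at the interim analysis
    drift     : assumed E[Z] at full information (design effect, current trend,
                or the null, for the different flavours of conditional power)
    """
    b = z_interim * info_frac**0.5            # B-value at time t
    z_final = norm.ppf(1 - alpha)             # final critical value
    num = b + drift * (1 - info_frac) - z_final
    return norm.cdf(num / (1 - info_frac)**0.5)

# Futility check at 50% information with a weak interim signal
for label, drift in [("under design effect", 2.8),
                     ("under current trend", 0.5 / 0.5**0.5),
                     ("under the null", 0.0)]:
    cp = conditional_power(z_interim=0.5, info_frac=0.5, drift=drift)
    print(f"Conditional power {label}: {cp:.3f}")
```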

6.
Vaccine experiments with a binary outcome typically use a small number of animals for financial and ethical reasons. The choice of a design, characterized by the total number of animals and the allocation of animals to treated and control groups, needs to be based on an assessment of change in expected size and power, with corresponding changes in the nominal significance level. This paper shows how an analysis of the conditional and the expected size and power of the Fisher exact test, given predicted values for the proportions of success in control and treated groups, can lead to appropriate decision rules.
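As an illustration of the type of calculation involved (with invented group sizes, protection probabilities and level), the sketch below computes the expected (unconditional) power of a one-sided Fisher exact test by enumerating every possible 2x2 outcome; the conditional versions discussed in the paper additionally condition on the total number of successes.

```python
# Sketch: exact expected power of Fisher's exact test for a small two-group
# vaccine experiment, obtained by enumerating all possible outcomes. Group
# sizes, success proportions and the nominal level are illustrative.
from itertools import product
from scipy.stats import binom, fisher_exact

def exact_power(n_ctrl, n_trt, p_ctrl, p_trt, alpha=0.05):
    """Unconditional (expected) power of the one-sided Fisher exact test."""
    power = 0.0
    for x_ctrl, x_trt in product(range(n_ctrl + 1), range(n_trt + 1)):
        table = [[x_trt, n_trt - x_trt], [x_ctrl, n_ctrl - x_ctrl]]
        _, p_value = fisher_exact(table, alternative="greater")
        if p_value <= alpha:
            # Probability of this outcome under the assumed true proportions
            power += binom.pmf(x_ctrl, n_ctrl, p_ctrl) * binom.pmf(x_trt, n_trt, p_trt)
    return power

# e.g. 8 animals per group, protection probability 0.1 (control) vs 0.8 (treated)
print(f"Expected power: {exact_power(8, 8, 0.1, 0.8):.3f}")
```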

7.
Evidence‐based quantitative methodologies have been proposed to inform decision‐making in drug development, such as metrics to make go/no‐go decisions or predictions of success, identified with statistical significance of future clinical trials. While these methodologies appropriately address some critical questions on the potential of a drug, they either consider the past evidence without predicting the outcome of the future trials or focus only on efficacy, failing to account for the multifaceted aspects of successful drug development. As quantitative benefit‐risk assessments could enhance decision‐making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit‐risk balance in the next pivotal studies. For one drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probabilities of success. The predictions are based on the available evidence from the previous trials, to which new hypotheses on the future development can be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision‐makers. We present a fictitious, but realistic, example in major depressive disorder inspired by a real decision‐making case.
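The sketch below conveys the Monte Carlo logic of a predictive probability of composite success in a much-simplified form: success in the future pivotal trial requires statistical significance, an estimate above a clinical-relevance threshold, and an adverse-event excess below a margin. Every distribution and threshold is invented rather than taken from the depression example.

```python
# Illustrative Monte Carlo sketch of a predictive probability of composite
# success: the future pivotal trial must be statistically significant, show a
# clinically relevant effect, and keep an adverse-event excess below a margin.
# Distributions and thresholds are invented for illustration only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n_per_arm, sigma = 150, 8.0            # pivotal design and endpoint SD
mcid = 2.0                             # minimal clinically important difference
ae_margin = 0.10                       # tolerated excess adverse-event rate
n_sim = 50_000

# "Prior evidence" from earlier trials, summarized as distributions
true_eff = rng.normal(2.5, 1.0, n_sim)          # uncertainty about the true effect
true_ae_excess = rng.normal(0.04, 0.03, n_sim)  # uncertainty about AE excess risk

se = sigma * np.sqrt(2 / n_per_arm)
est_eff = rng.normal(true_eff, se)               # future effect estimate
est_ae = rng.normal(true_ae_excess, 0.03)        # future AE-excess estimate
significant = est_eff / se > norm.ppf(0.975)     # one-sided 2.5% test
relevant = est_eff >= mcid                       # clinical relevance
favourable = est_ae <= ae_margin                 # crude benefit-risk proxy

pos = np.mean(significant & relevant & favourable)
print(f"Predictive probability of composite success: {pos:.3f}")
```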

8.
Predictive criteria, including the adjusted squared multiple correlation coefficient, the adjusted concordance correlation coefficient, and the predictive error sum of squares, are available for model selection in the linear mixed model. These criteria all involve some sort of comparison of observed values and predicted values, adjusted for the complexity of the model. The predicted values can be conditional on the random effects or marginal, i.e., based on averages over the random effects. These criteria have not been investigated for model selection success.

We used simulations to investigate selection success rates for several versions of these predictive criteria as well as several versions of Akaike's information criterion and the Bayesian information criterion, and the pseudo F-test. The simulations involved the simple scenario of selection of a fixed parameter when the covariance structure is known.

Several variance–covariance structures were used. For compound symmetry structures, higher success rates for the predictive criteria were obtained when marginal rather than conditional predicted values were used. Information criteria had higher success rates when a certain term (normally left out in SAS MIXED computations) was included in the criteria. Various penalty functions were used in the information criteria, but these had little effect on success rates. The pseudo F-test performed as expected. For the autoregressive with random effects structure, the results were the same except that success rates were higher for the conditional version of the predictive error sum of squares.

Characteristics of the data, such as the covariance structure, parameter values, and sample size, greatly impacted performance of various model selection criteria. No one criterion was consistently better than the others.
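The sketch below illustrates the distinction between marginal and conditional predicted values in the simplest compound-symmetry (random-intercept) case, together with an unadjusted concordance correlation coefficient computed from each; the paper's simulation design, and the adjustment for model complexity used by the actual criteria, are not reproduced.

```python
# Sketch: marginal vs. conditional predicted values from a random-intercept
# (compound-symmetry) linear mixed model, each summarized by a concordance
# correlation coefficient (CCC), unadjusted for model complexity. Simulated
# data; not the simulation design used in the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_groups, n_per = 30, 5
g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
u = rng.normal(0, 1.0, n_groups)                      # true random intercepts
y = 1.0 + 0.5 * x + u[g] + rng.normal(0, 1.0, x.size)

data = pd.DataFrame({"y": y, "x": x, "g": g})
fit = sm.MixedLM.from_formula("y ~ x", data, groups=data["g"]).fit()

# Marginal predictions use fixed effects only; conditional ones add the BLUPs
marginal = fit.fe_params["Intercept"] + fit.fe_params["x"] * data["x"]
blup = {k: v.iloc[0] for k, v in fit.random_effects.items()}
conditional = marginal + data["g"].map(blup)

def ccc(obs, pred):
    """Concordance correlation coefficient."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    cov = np.mean((obs - obs.mean()) * (pred - pred.mean()))
    return 2 * cov / (obs.var() + pred.var() + (obs.mean() - pred.mean()) ** 2)

print(f"CCC, marginal predictions:    {ccc(data['y'], marginal):.3f}")
print(f"CCC, conditional predictions: {ccc(data['y'], conditional):.3f}")
```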

9.
The European Agency for the Evaluation of Medicinal Products has recently completed the consultation on a draft guidance on how to implement conditional approval. This route of application is available for orphan drugs, emergency situations and serious debilitating or life-threatening diseases. Although there has been limited experience in implementing conditional approval to date, PSI (Statisticians in the Pharmaceutical Industry) sponsored a meeting of pharmaceutical statisticians with an interest in the area to discuss potential issues. This article outlines the issues raised and the resulting discussions, based on the group's interpretation of the legislation. Conditional approval seems to fit well with the accepted regulatory strategy in HIV. In oncology, conditional approval may be most likely when (a) compelling phase II data are available using accepted clinical outcomes (e.g. progression/recurrence-free survival or overall survival) and Phase III has been planned or started, or (b) when data are available using a surrogate endpoint for clinical outcome (e.g. response rate or biochemical measures) from a single-arm study in rare tumours with high response, compared with historical data. The use of interim analyses in Phase III for supporting conditional approval raises some challenging issues regarding dissemination of information, maintenance of blinding, potential introduction of bias, ethics, switching, etc.

10.
This article estimates and tests the smooth ambiguity model of Klibanoff, Marinacci, and Mukerji based on stock market data. We introduce a novel methodology to estimate the conditional expectation, which characterizes the impact of a decision maker's ambiguity attitude on asset prices. Our point estimates of the ambiguity parameter are between 25 and 60, whereas our risk aversion estimates are considerably lower. The substantial difference indicates that market participants are ambiguity averse. Furthermore, we evaluate whether ambiguity aversion helps explain the cross-section of expected returns. Compared with Epstein and Zin preferences, we find that incorporating ambiguity into the decision model improves the fit to the data while keeping relative risk aversion at more reasonable levels. Supplementary materials for this article are available online.

11.
A new method of modeling coronary artery calcium (CAC) is needed in order to properly understand the probability of onset and growth of CAC. CAC remains a controversial indicator of cardiovascular disease (CVD) risk, but this may be due to ill-suited methods of specifying CAC in the analysis phase of studies in which CAC is the primary outcome. The modern method of two-part latent growth modeling may represent a strong alternative to the myriad of existing methods for modeling CAC. We provide a brief overview of existing methods of analysis used for CAC before introducing the general latent growth curve model, how it extends into a two-part (semicontinuous) growth model, and how the ubiquitous problem of missing data can be effectively handled. We then present an example of how to model CAC using this framework. We demonstrate that utilizing this type of modeling strategy can result in traditional predictors of CAC (e.g. age, gender, and high-density lipoprotein cholesterol) exerting a different impact on the two distinct, yet simultaneous, operationalizations of CAC. This method of analyzing CAC could inform future analyses of CAC and inform subsequent discussions about the nature of its potential to inform long-term CVD risk and heart events.

12.
In vitro permeation tests (IVPT) offer accurate and cost-effective development pathways for locally acting drugs, such as topical dermatological products. For the assessment of bioequivalence, the FDA draft guidance on generic acyclovir 5% cream introduces a new experimental design, namely the single-dose, multiple-replicate per treatment group design, as the pivotal IVPT study design. We examine the statistical properties of its hypothesis testing method, namely mixed scaled average bioequivalence (MSABE). Meanwhile, some adaptive design features in clinical trials can help researchers make a decision earlier with fewer subjects or boost power, saving resources while controlling the impact on the family-wise error rate. Therefore, we incorporate MSABE into an adaptive design combining group sequential design and sample size re-estimation. Simulation studies are conducted to study the passing rates of the proposed methods, both within and outside the average bioequivalence limits. We further consider modifications to the adaptive designs applied to IVPT BE trials, such as Bonferroni's adjustment and a conditional power function. Finally, a case study with real data demonstrates the advantages of such adaptive methods.

13.
Risk management of stock portfolios is a fundamental problem in financial analysis, since it indicates the potential losses of an investment at any given time. The objective of this study is to use bivariate static conditional copulas to quantify the dependence structure and to estimate the risk measure Value-at-Risk (VaR). Stocks that have been performing outstandingly on the Brazilian Stock Exchange (B3, Gerdau, Magazine Luiza, and Petrobras) were selected to compose pairs-trading portfolios. Owing to the flexibility that this methodology offers in the construction of multivariate distributions and risk aggregation in finance, we used the copula-APARCH approach with the Normal, Student-t, and Joe-Clayton copula functions. In most scenarios, the results showed a pattern of dependence at the extremes. Moreover, the copula form appears not to be decisive for VaR estimation, since in most portfolios the appropriate copulas led to significant VaR estimates. It was found that the best-fitting models provided conservative risk measures, with estimates at the 5% and 1% levels, in a more aggressive scenario.
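The sketch below illustrates only the copula simulation and VaR read-out, using a Gaussian copula with Student-t marginals on invented parameters; the paper's approach additionally filters returns through APARCH models and also considers t and Joe-Clayton copulas.

```python
# Simplified illustration of copula-based VaR for a two-asset (pairs) portfolio:
# a Gaussian copula couples Student-t marginal return distributions, and VaR is
# read off the simulated portfolio-return distribution. Parameters are invented;
# the APARCH filtering and alternative copulas of the paper are not reproduced.
import numpy as np
from scipy.stats import norm, t, multivariate_normal

rng = np.random.default_rng(42)
rho = 0.6                                     # copula correlation (illustrative)
n_sim = 100_000

# 1. Simulate from the Gaussian copula: correlated normals -> uniforms
z = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).rvs(n_sim, random_state=rng)
u = norm.cdf(z)

# 2. Map uniforms through the marginal return distributions (Student-t here)
r1 = t.ppf(u[:, 0], df=5, loc=0.0005, scale=0.015)
r2 = t.ppf(u[:, 1], df=5, loc=0.0003, scale=0.020)

# 3. Equally weighted portfolio; VaR as an empirical lower quantile of returns
port = 0.5 * r1 + 0.5 * r2
for level in (0.05, 0.01):
    var = -np.quantile(port, level)
    print(f"{int(level * 100)}% one-day VaR: {var:.4f} (fraction of portfolio value)")
```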

14.
Factors affecting dispersal and recruitment in animal populations will play a prominent role in the dynamics of populations. This is particularly the case for subdivided populations where the dispersal of individuals among patches may lead to local extinction and 'rescue effects'. A long-term observational study carried out in Brittany, France, and involving colour-ringed Black-legged Kittiwakes (Rissa tridactyla) suggested that the reproductive success of conspecifics (or some social correlate) could be one important factor likely to affect dispersal and recruitment. By dispersing from patches where the local reproductive success was low and recruiting to patches where the local reproductive success was high, individual birds could track spatio-temporal variations in the quality of breeding patches (the quality of breeding patches can be affected by different factors, such as food availability, the presence of predators or ectoparasites, which can vary in space and time at different scales). Such an observational study may nevertheless have confounded the role of conspecific reproductive success with the effect of a correlated factor (e.g. the local activities of a predator). In other words, individuals may have been influenced directly by the factor responsible for the low local reproductive success or indirectly by the low success of their neighbours. Thus, an experimental approach was needed to address this question. Estimates of demographic parameters (other than reproductive success) and studies of the response of marked individuals to changes in their environment usually face problems associated with variability in the probability of detecting individuals and with nonindependence among events occurring on a local scale. Further, very few studies on dispersal have attempted to address the causal nature of relationships by experimentally manipulating factors. Here we present an experiment designed to test for an effect of local reproductive success of conspecifics on behavioural decisions of individuals regarding dispersal and recruitment. The experiment was carried out on Kittiwakes within a large seabird colony in northern Norway. It involved (i) the colour banding of several hundred birds; (ii) the manipulation (increase/decrease) of the local reproductive success of breeding groups on cliff patches; and (iii) the detailed survey of attendance and activities of birds on these patches. It also involved the manipulation of the nest content of marked individuals breeding within these patches (individuals failing at the egg stage were expected to respond in terms of dispersal to the success of their neighbours). This allowed us to test whether a lower local reproductive success would lower (1) the attendance of breeders at the end of the breeding season; (2) the presence of prospecting birds; and (3) the proportion of failed breeders that came back to breed on the same patch the year after. In this paper, we discuss how we dealt with (I) the use of return rates to infer differences in dispersal rates; (II) the trade-off between sample sizes and local treatment levels; and (III) potential differences in detection probabilities among locations. We also present some results to illustrate the design and implementation of the experiment.

15.
In this paper, the design of reliability sampling plans for the Pareto lifetime model under progressive Type-II right censoring is considered. Sampling plans are derived using the decision theoretic approach with a suitable loss or cost function that consists of sampling cost, rejection cost, and acceptance cost. The decision rule is based on the estimated reliability function. Plans are constructed within the Bayesian context using the natural conjugate prior. Simulations for evaluating the Bayes risk are carried out and the optimal sampling plans are reported for various sample sizes, observed number of failures and removal probabilities.

16.
Considerable statistical research has been performed in recent years to develop sophisticated statistical methods for handling missing data and dropouts in the analysis of clinical trial data. However, if statisticians and other study team members proactively set out at the trial initiation stage to assess the impact of missing data and investigate ways to reduce dropouts, there is considerable potential to improve the clarity and quality of trial results and also increase efficiency. This paper presents a Human Immunodeficiency Virus (HIV) case study where statisticians led a project to reduce dropouts. The first step was to perform a pooled analysis of past HIV trials investigating which patient subgroups are more likely to drop out. The second step was to educate internal and external trial staff at all levels about the patient types more likely to drop out, and the impact this has on data quality and the sample sizes required. The final step was to work collaboratively with clinical trial teams to create proactive plans regarding focused retention efforts, identifying ways to increase retention particularly in patients most at risk. It is acknowledged that identifying the specific impact of new patient retention efforts/tools is difficult because patient retention can be influenced by overall study design, investigational product tolerability profile, current standard of care and treatment access for the disease under study, which may vary over time. However, the implementation of new retention strategies and efforts within clinical trial teams attests to the influence of the analyses described in this case study. Copyright © 2012 John Wiley & Sons, Ltd.

17.
Conventional clinical trial design involves considerations of power, and sample size is typically chosen to achieve a desired power conditional on a specified treatment effect. In practice, there is considerable uncertainty about what the true underlying treatment effect may be, and so power does not give a good indication of the probability that the trial will demonstrate a positive outcome. Assurance is the unconditional probability that the trial will yield a ‘positive outcome’. A positive outcome usually means a statistically significant result, according to some standard frequentist significance test. The assurance is then the prior expectation of the power, averaged over the prior distribution for the unknown true treatment effect. We argue that assurance is an important measure of the practical utility of a proposed trial, and indeed that it will often be appropriate to choose the size of the sample (and perhaps other aspects of the design) to achieve a desired assurance, rather than to achieve a desired power conditional on an assumed treatment effect. We extend the theory of assurance to two‐sided testing and equivalence trials. We also show that assurance is straightforward to compute in some simple problems of normal, binary and gamma distributed data, and that the method is not restricted to simple conjugate prior distributions for parameters. Several illustrations are given. Copyright © 2005 John Wiley & Sons, Ltd.
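For the simplest of the cases covered, a two-arm trial with a normal endpoint, known SD, a one-sided test and a normal prior on the true effect, the assurance has a simple closed form (the prior-predictive probability of crossing the critical value); the sketch below implements it and checks it by averaging the power over prior draws, with illustrative numbers.

```python
# Assurance for a two-arm trial with a normally distributed endpoint, known SD,
# a one-sided test at level alpha, and a normal prior on the true treatment
# effect. The closed form applies to this simple case only; numbers are
# illustrative.
import numpy as np
from scipy.stats import norm

def power(delta, n_per_arm, sd, alpha=0.025):
    se = sd * np.sqrt(2 / n_per_arm)
    return norm.cdf(delta / se - norm.ppf(1 - alpha))

def assurance(prior_mean, prior_sd, n_per_arm, sd, alpha=0.025):
    """Prior expectation of power: P(significant result), averaged over the prior."""
    se = sd * np.sqrt(2 / n_per_arm)
    crit = norm.ppf(1 - alpha) * se
    return norm.cdf((prior_mean - crit) / np.sqrt(se**2 + prior_sd**2))

n, sd = 64, 10.0
print(f"Power at the prior mean effect:    {power(5.0, n, sd):.3f}")
print(f"Assurance with effect ~ N(5, 3^2): {assurance(5.0, 3.0, n, sd):.3f}")

# Monte Carlo check: average the conditional power over prior draws
rng = np.random.default_rng(3)
draws = rng.normal(5.0, 3.0, 200_000)
print(f"Monte Carlo assurance:             {power(draws, n, sd).mean():.3f}")
```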

18.
To assess the value of a continuous marker in predicting the risk of a disease, a graphical tool called the predictiveness curve has been proposed. It characterizes the marker's predictiveness, or capacity to risk-stratify the population, by displaying the distribution of risk endowed by the marker. Methods for making inference about the curve and for comparing curves in a general population have been developed. However, knowledge about a marker's performance in the general population alone is not enough. Since a marker's effect on the risk model and its distribution can both differ across subpopulations, its predictiveness may vary when applied to different subpopulations. Moreover, information about the predictiveness of a marker conditional on baseline covariates is valuable for individual decision making about having the marker measured or not. Therefore, to fully realize the usefulness of a risk prediction marker, it is important to study its performance conditional on covariates. In this article, we propose semiparametric methods for estimating covariate-specific predictiveness curves for a continuous marker. Unmatched and matched case-control study designs are accommodated. We illustrate application of the methodology by evaluating serum creatinine as a predictor of risk of renal artery stenosis.
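A minimal sketch of the basic (marginal) predictiveness curve is given below: fit a risk model P(D = 1 | Y) and report the quantiles of the modelled risk across the population. The semiparametric covariate-specific estimators and case-control adjustments developed in the article are not reproduced, and the data are simulated.

```python
# Sketch of a marginal predictiveness curve: fit a risk model P(D = 1 | Y),
# then display the quantiles of modelled risk across the population. Simulated
# data; covariate-specific and case-control-adjusted estimation not reproduced.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
y_marker = rng.normal(size=n)                            # continuous marker
risk_true = 1 / (1 + np.exp(-(-2.0 + 1.2 * y_marker)))   # true risk model
d = rng.binomial(1, risk_true)                           # disease indicator

# Fit a logistic risk model and compute each subject's modelled risk
X = sm.add_constant(y_marker)
fit = sm.Logit(d, X).fit(disp=0)
risk_hat = fit.predict(X)

# Predictiveness curve: risk quantile R(v) at population percentile v
for v in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"R({v:.2f}) = {np.quantile(risk_hat, v):.3f}")
```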

19.
Conditional power calculations are frequently used to guide the decision whether or not to stop a trial for futility or to modify planned sample size. These ignore the information in short‐term endpoints and baseline covariates, and thereby do not make fully efficient use of the information in the data. We therefore propose an interim decision procedure based on the conditional power approach which exploits the information contained in baseline covariates and short‐term endpoints. We will realize this by considering the estimation of the treatment effect at the interim analysis as a missing data problem. This problem is addressed by employing specific prediction models for the long‐term endpoint which enable the incorporation of baseline covariates and multiple short‐term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size, without compromising the Type I error rate of the procedure, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions relative to standard approaches, whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, such that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re‐assessment based on the inverse normal P‐value combination method to control Type I error. We support the proposal by Monte Carlo simulations based on data from a real clinical trial.
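A much-simplified version of the idea is sketched below: at the interim analysis, patients without the long-term endpoint have it predicted from a working regression on a short-term endpoint and a baseline covariate, and the resulting effect estimate feeds a conditional power calculation. The paper's actual estimators, standard errors and Type I error control via the inverse normal combination method are not reproduced.

```python
# Much-simplified sketch: at an interim analysis, missing long-term outcomes are
# predicted from a working regression on a short-term endpoint and a baseline
# covariate, and the resulting effect estimate feeds a conditional power
# calculation. The SE below is naive (it ignores prediction uncertainty); the
# paper's estimators and Type I error control are not reproduced. Data simulated.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(11)
n = 200
trt = rng.integers(0, 2, n)                                   # randomized arm
base = rng.normal(size=n)                                     # baseline covariate
short = 0.5 * base + 0.4 * trt + rng.normal(0, 1, n)          # short-term endpoint
final = 0.6 * short + 0.3 * base + 0.5 * trt + rng.normal(0, 1, n)  # long-term endpoint
seen = rng.random(n) < 0.5                                    # long-term value observed?

# Working prediction model fitted on completers, applied to everyone else
X = sm.add_constant(np.column_stack([short, base, trt]))
pred_model = sm.OLS(final[seen], X[seen]).fit()
y_filled = np.where(seen, final, pred_model.predict(X))

# Interim treatment effect and a naive Z statistic from the filled-in outcomes
eff = y_filled[trt == 1].mean() - y_filled[trt == 0].mean()
se = np.sqrt(y_filled[trt == 1].var(ddof=1) / (trt == 1).sum()
             + y_filled[trt == 0].var(ddof=1) / (trt == 0).sum())
z_int, t_frac = eff / se, 0.5                                 # 50% information fraction

# Conditional power under the current trend (B-value formulation, one-sided 2.5%)
b = z_int * np.sqrt(t_frac)
drift = z_int / np.sqrt(t_frac)
cp = norm.cdf((b + drift * (1 - t_frac) - norm.ppf(0.975)) / np.sqrt(1 - t_frac))
print(f"Interim effect {eff:.2f} (naive SE {se:.2f}); conditional power {cp:.3f}")
```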

20.
The problem of selecting the best population from among a finite number of populations in the presence of uncertainty is one faced in many scientific investigations, and it has been studied extensively. Many selection procedures have been derived for different selection goals. However, most of these selection procedures, being frequentist in nature, do not indicate how to incorporate the information in a particular sample to give a data-dependent measure of the correct selection achieved for that particular sample. They often assign the same decision and probability of correct selection to two different sample values, one of which may actually seem intuitively much more conclusive than the other. The methodology of conditional inference offers an approach which achieves both frequentist interpretability and a data-dependent measure of conclusiveness. By partitioning the sample space into a family of subsets, the achieved probability of correct selection is computed by conditioning on which subset the sample falls in. In this paper, the partition considered is the so-called continuum partition, while the selection rules are both the fixed-size and random-size subset selection rules. Under the assumption of a monotone likelihood ratio, results on the least favourable configuration and alpha-correct selection are established. These results are not only useful in themselves but are also used to design a new sequential procedure with elimination for selecting the best of k binomial populations. Comparisons between this new procedure and some other sequential selection procedures with regard to total expected sample size and some risk functions are carried out by simulations.
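The sketch below conveys only the flavour of the conditional approach for binomial populations: select the population with the most observed successes and report the probability of correct selection conditional on how conclusive the sample looks (here, the winning margin). The continuum partition and the sequential elimination procedure of the paper are not reproduced.

```python
# Illustrative sketch of conditional assessment of correct selection: pick the
# binomial population with the most observed successes, then report the
# probability of correct selection conditional on the observed winning margin.
# Only the conditioning idea is conveyed; the paper's continuum partition and
# sequential elimination rule are not reproduced.
import numpy as np

rng = np.random.default_rng(5)
p_true = np.array([0.4, 0.5, 0.6])      # the last population is truly best
n_obs, n_sim = 30, 200_000

x = rng.binomial(n_obs, p_true, size=(n_sim, p_true.size))
order = np.sort(x, axis=1)
margin = order[:, -1] - order[:, -2]            # gap between best and runner-up
correct = x.argmax(axis=1) == p_true.argmax()   # ties broken toward lower index

print(f"Unconditional P(correct selection): {correct.mean():.3f}")
for lo, hi in [(0, 1), (2, 4), (5, n_obs)]:
    sel = (margin >= lo) & (margin <= hi)
    print(f"P(correct | margin in [{lo},{hi}]): {correct[sel].mean():.3f} "
          f"(occurs with prob {sel.mean():.3f})")
```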
