Article search — 15 results in Statistics (14 subscription, 1 open access; search time 178 ms)
1.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance in the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. The operating characteristics of these procedures are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
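The blinded estimation step underlying such a procedure can be sketched as follows. All rates, the overdispersion (shape) parameter, and the sample sizes are illustrative assumptions, not values from the trial; the pooled method-of-moments estimate deliberately ignores treatment assignment (blinding), so the between-arm rate difference slightly inflates the dispersion estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical set-up: relapse counts over one year of follow-up;
# rates and shape are chosen for illustration only.
n_per_arm, rate_ctrl, rate_trt, shape = 200, 0.8, 0.5, 2.0

def nb_counts(n, rate, shape, rng):
    # Gamma-Poisson mixture: negative binomial counts with mean `rate`
    lam = rng.gamma(shape, rate / shape, size=n)
    return rng.poisson(lam)

# Blinded review: only the pooled counts are available, not the arm labels
pooled = np.concatenate([nb_counts(n_per_arm, rate_ctrl, shape, rng),
                         nb_counts(n_per_arm, rate_trt, shape, rng)])

# Method-of-moments estimates from the pooled counts, using the
# parameterization Var = mu + phi * mu^2
mu_hat = pooled.mean()
phi_hat = max((pooled.var(ddof=1) - mu_hat) / mu_hat**2, 0.0)
print(f"blinded rate estimate {mu_hat:.3f}, overdispersion {phi_hat:.3f}")
```

In a continuous-monitoring design, estimates like these would be refreshed at each review and recruitment stopped once the accrued information reaches its target.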
2.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance-parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose to update the meta-analytic-predictive prior based on the results of the internal pilot study and to re-estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics, such as power and sample size distribution, of the proposed procedure with the traditional sample size re-estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared to the traditional approach. In the case of a prior-data conflict, that is, when the variance in the ongoing clinical trial differs from the prior location, the performance of the traditional sample size re-estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks.
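The basic mechanics can be illustrated with a normal-approximation sample size formula. All numbers below are invented, and the precision-weighted variance update is only a crude stand-in for the MAP posterior estimator described in the abstract.

```python
import math

# Hypothetical design: two-sample comparison, one-sided alpha = 0.025,
# power 0.9, clinically relevant difference delta = 5.
z_a, z_b, delta = 1.959964, 1.281552, 5.0

def n_per_group(sigma2):
    # Normal-approximation sample size per group
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma2 / delta ** 2)

# Internal pilot variance estimate, and a prior point estimate with an
# assumed effective sample size (both illustrative)
s2_pilot, n_pilot = 120.0, 40
s2_prior, n_prior_eff = 100.0, 30

# Crude precision-weighted update standing in for the posterior estimator
s2_post = (n_pilot * s2_pilot + n_prior_eff * s2_prior) / (n_pilot + n_prior_eff)

print("traditional re-estimated n per group  :", n_per_group(s2_pilot))
print("prior-informed re-estimated n per group:", n_per_group(s2_post))
```

When the prior location sits below the interim estimate, as here, the prior-informed update shrinks the re-estimated sample size; under a prior-data conflict the same shrinkage is what degrades performance.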
3.
4.
Baseline-adjusted analyses are commonly encountered in practice, and regulatory guidelines endorse them. Sample size calculations for this kind of analysis require knowledge of the magnitude of nuisance parameters that are usually not given when the results of clinical trials are reported in the literature. It is therefore quite natural to start with a preliminary sample size calculated from the sparse information available in the planning phase and to re-estimate the value of the nuisance parameters (and with it the sample size) once a portion of the planned number of patients has completed the study. We investigate the characteristics of this internal pilot study design when an analysis of covariance with normally distributed outcome and one random covariate is applied. For this purpose, we first assess the accuracy of four approximate sample size formulae within the fixed sample size design. Then the performance of the recalculation procedure with respect to its actual Type I error rate and power characteristics is examined. The results of simulation studies show that this approach has favorable properties with respect to the Type I error rate and power. Together with its simplicity, these features should make it attractive for practical application. Copyright © 2009 John Wiley & Sons, Ltd.
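A standard approximation for the ANCOVA case reduces the outcome variance by the factor (1 - rho^2), where rho is the baseline-outcome correlation. The sketch below uses invented values and one of several possible approximate formulae, not necessarily any of the four assessed in the paper.

```python
import math

z_a, z_b = 1.959964, 1.281552   # one-sided alpha = 0.025, power 0.9

def n_ancova(sigma2, rho, delta):
    # Approximate per-group sample size for ANCOVA with one random
    # baseline covariate: residual variance is sigma2 * (1 - rho^2)
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma2 * (1 - rho ** 2) / delta ** 2)

# Illustrative planning values: outcome SD 10, correlation 0.6,
# treatment difference 4
print("unadjusted       :", n_ancova(100.0, 0.0, 4.0))
print("baseline-adjusted:", n_ancova(100.0, 0.6, 4.0))
```

In an internal pilot design, both sigma2 and rho would be re-estimated at the interim and plugged back into such a formula.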
5.
Safety analyses of adverse events (AEs) are important in assessing the benefit-risk of therapies but are often rather simplistic compared to efficacy analyses. AE probabilities are typically estimated by incidence proportions; sometimes incidence densities or Kaplan-Meier estimation are proposed. These analyses either do not account for censoring, rely on a too restrictive parametric model, or ignore competing events. Using the non-parametric Aalen-Johansen estimator as the "gold standard", that is, the reference estimator, potential sources of bias are investigated in an example from oncology and in simulations, for both one-sample and two-sample scenarios. The Aalen-Johansen estimator serves as the reference because it is the proper non-parametric generalization of the Kaplan-Meier estimator to multiple outcomes. Because of potentially large variances at the end of follow-up, comparisons also consider further quantiles of the observed times. To date, the consequences for safety comparisons have hardly been investigated, and the impact of using different estimators for group comparisons is unclear. For example, the ratio of two estimators that both underestimate or both overestimate may not be comparable to the ratio of the reference, and our investigation therefore also considers the ratio of AE probabilities. We find that ignoring competing events is more of a problem than falsely assuming constant hazards through use of the incidence density, and that the choice of the AE probability estimator is crucial for group comparisons.
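A minimal hand-rolled version of the three estimators on toy data (all values invented, no tied event times assumed) shows the typical ordering: the incidence proportion underestimates the AE probability and 1 - KM overestimates it, relative to the Aalen-Johansen estimate.

```python
import numpy as np

# Toy data: follow-up time and status (1 = AE, 2 = competing event,
# 0 = censored)
time = np.array([1.0, 2.0, 3.0, 4.0])
status = np.array([1, 2, 0, 1])

def aalen_johansen_cif(time, status, cause=1):
    # Cumulative incidence of `cause` at the last observed time,
    # properly accounting for competing events
    order = np.argsort(time)
    n_risk, surv, cif = len(time), 1.0, 0.0
    for si in status[order]:
        if si == cause:
            cif += surv / n_risk          # S(t-) * dN_cause / Y
        if si != 0:
            surv *= 1.0 - 1.0 / n_risk    # any event lowers overall survival
        n_risk -= 1
    return cif

def one_minus_km(time, status, cause=1):
    # Naive 1 - Kaplan-Meier that (wrongly) censors competing events
    order = np.argsort(time)
    n_risk, surv = len(time), 1.0
    for si in status[order]:
        if si == cause:
            surv *= 1.0 - 1.0 / n_risk
        n_risk -= 1
    return 1.0 - surv

incidence_proportion = np.mean(status == 1)   # ignores censoring entirely
print("incidence proportion:", incidence_proportion)
print("1 - Kaplan-Meier    :", one_minus_km(time, status))
print("Aalen-Johansen CIF  :", aalen_johansen_cif(time, status))
```

On this toy data set the three estimates are 0.5, 1.0, and 0.75, respectively; production analyses would use a vetted implementation (e.g., an Aalen-Johansen fitter from a survival analysis library) rather than this sketch.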
6.
With the advancement of technologies such as genomic sequencing, predictive biomarkers have become a useful tool for the development of personalized medicine. Predictive biomarkers can be used to select subsets of patients that are most likely to benefit from a treatment. A number of approaches for subgroup identification have been proposed in recent years. Although overviews of subgroup identification methods are available, systematic comparisons of their performance in simulation studies are rare. Interaction trees (IT), model-based recursive partitioning, subgroup identification based on differential effect, the simultaneous threshold interaction modeling algorithm (STIMA), and adaptive refinement by directed peeling have been proposed for subgroup identification. We compared these methods in a simulation study using a structured approach. In order to identify a target population for subsequent trials, a selection of the identified subgroups is needed. Therefore, we propose a subgroup criterion leading to a target subgroup consisting of the identified subgroups with an estimated treatment difference no less than a pre-specified threshold. In our simulation study, we evaluated these methods by considering measures for binary classification, such as sensitivity and specificity. In settings with large effects or huge sample sizes, most methods perform well. For more realistic settings in drug development involving data from a single trial only, however, none of the methods seems suitable for selecting a target population. Using the subgroup criterion as an alternative to the proposed pruning procedures, STIMA and IT can improve their performance in some settings. The methods and the subgroup criterion are illustrated by an application in amyotrophic lateral sclerosis.
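The binary-classification evaluation mentioned above reduces to per-patient sensitivity and specificity of the selected subgroup against the true benefiting subgroup. The arrays below are invented labels, purely to make the definitions concrete.

```python
import numpy as np

# Hypothetical evaluation: `true_subgroup` flags patients who truly benefit,
# `selected` flags patients placed in the identified target subgroup
true_subgroup = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)
selected      = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0], dtype=bool)

# Sensitivity: fraction of true benefiters captured by the selection;
# specificity: fraction of non-benefiters correctly left out
sensitivity = (true_subgroup & selected).sum() / true_subgroup.sum()
specificity = (~true_subgroup & ~selected).sum() / (~true_subgroup).sum()
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```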
7.
In drug development, bioequivalence studies are used to indirectly demonstrate clinical equivalence of a test formulation and a reference formulation of a specific drug by establishing their equivalence in bioavailability. These studies are typically run as crossover studies. In the planning phase of such trials, investigators and sponsors are often faced with a high variability in the coefficients of variation of the typical pharmacokinetic endpoints such as the area under the concentration curve or the maximum plasma concentration. Adaptive designs have recently been considered to deal with this uncertainty by adjusting the sample size based on the accumulating data. Because regulators generally favor sample size re‐estimation procedures that maintain the blinding of the treatment allocations throughout the trial, we propose in this paper a blinded sample size re‐estimation strategy and investigate its error rates. We show that the procedure, although blinded, can lead to some inflation of the type I error rate. In the context of an example, we demonstrate how this inflation of the significance level can be adjusted for to achieve control of the type I error rate at a pre‐specified level. Furthermore, some refinements of the re‐estimation procedure are proposed to improve the power properties, in particular in scenarios with small sample sizes. Copyright © 2014 John Wiley & Sons, Ltd.
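The role of the coefficient of variation (CV) in sizing such a trial can be sketched with a rough normal approximation for the TOST procedure, assuming a true formulation ratio of 1 and the standard 0.80-1.25 acceptance range. This ignores the t-distribution correction and is not the procedure from the paper; all numbers are illustrative.

```python
import math

# Rough normal-approximation total sample size for a 2x2 crossover TOST,
# assuming a true ratio of 1
z_alpha = 1.644854          # one-sided 5% for each TOST test
z_beta2 = 1.644854          # power 0.9 at ratio 1 -> z_{1 - beta/2}
theta = math.log(1.25)      # bioequivalence margin on the log scale

def total_n(cv):
    # Within-subject variance on the log scale implied by the CV
    sigma2_w = math.log(1 + cv ** 2)
    return math.ceil(2 * (z_alpha + z_beta2) ** 2 * sigma2_w / theta ** 2)

# Blinded re-estimation would plug in the CV estimated from pooled interim
# data in place of the planning assumption
print("n at planned CV 20%:", total_n(0.20))
print("n at interim CV 30%:", total_n(0.30))
```

The jump in sample size between the two CVs illustrates why the uncertainty in the CV motivates interim re-estimation in the first place.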
8.
The recently published Committee for Medicinal Products for Human Use reflection paper on flexible designs highlights a controversial issue regarding the interpretation of adaptive trials. The guideline suggests that a test for heterogeneity should be preplanned and that, if treatment effect estimates differ significantly between design stages, data collected before and after the interim analysis might not be combined in a formal analysis. In this paper we investigate error rates for such a procedure in the presence of calendar-time effects. Furthermore, we present an alternative testing strategy based on change point methods. In a simulation study we demonstrate that our procedure performs well in comparison to that suggested by the guideline.
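The preplanned heterogeneity test can be illustrated as a simple z-test comparing stage-wise effect estimates; the stage estimates and standard errors below are invented, and the change-point alternative proposed in the paper is not reproduced here.

```python
import math

# Illustrative stage-wise results: treatment effect estimates and
# standard errors before and after the interim analysis
est1, se1 = 0.50, 0.15
est2, se2 = 0.10, 0.15

# Two-sample z-test for heterogeneity of the stage-wise estimates
z = (est1 - est2) / math.sqrt(se1 ** 2 + se2 ** 2)
# Two-sided p-value via the standard normal CDF
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```

A drift in effect estimates caused purely by calendar-time trends could trigger this test even when the treatment effect itself is stable, which is the scenario the paper's error-rate investigation targets.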
9.
AStA Advances in Statistical Analysis - A pandemic poses particular challenges to decision-making because of the need to continuously adapt decisions to rapidly changing evidence and available...
10.
The analysis of adverse events (AEs) is a key component in the assessment of a drug's safety profile. Inappropriate analysis methods may result in misleading conclusions about a therapy's safety and consequently its benefit-risk ratio. The statistical analysis of AEs is complicated by the fact that follow-up times can vary between the patients included in a clinical trial. This paper takes as its focus the analysis of AE data in the presence of varying follow-up times within the benefit assessment of therapeutic interventions. Instead of approaching this issue directly and solely from an analysis point of view, we first discuss what should be estimated in the context of safety data, leading to the concept of estimands. Although the current discussion on estimands is mainly related to efficacy evaluation, the concept is applicable to safety endpoints as well. Within the framework of estimands, we present statistical methods for analysing AEs, with the focus being on the time to the occurrence of the first AE of a specific type. We give recommendations as to which estimators should be used for the estimands described. Furthermore, we state practical implications of the analysis of AEs in clinical trials and give an overview of examples across different indications. We also provide a review of current practices of health technology assessment (HTA) agencies with respect to the evaluation of safety data. Finally, we describe problems with meta-analyses of AE data and sketch possible solutions.
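The varying-follow-up problem is easy to see in a toy comparison of the two most common crude estimators (all data invented). The incidence proportion treats every patient as fully observed, while the incidence density normalizes by patient-time at the cost of a constant-hazard assumption.

```python
import numpy as np

# Toy data with varying follow-up: observed time (years) and whether the
# first AE of the type of interest occurred during follow-up
followup = np.array([0.5, 1.0, 2.0, 2.0, 0.25, 1.5])
had_ae   = np.array([1,   0,   1,   0,   0,    1  ], dtype=bool)

# Incidence proportion: ignores how long each patient was observed
incidence_proportion = had_ae.mean()

# Incidence density: events per patient-year, assumes a constant hazard
incidence_density = had_ae.sum() / followup.sum()

# Under the constant-hazard assumption, the implied 1-year AE probability
prob_1y = 1 - np.exp(-incidence_density * 1.0)
print(incidence_proportion, incidence_density, round(prob_1y, 3))
```

The gap between the two implied probabilities on even this tiny data set shows why the choice of estimand, and with it the estimator, has to come first.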