Similar Literature
20 similar documents found (search time: 31 ms)
1.
The ICH E9 guideline on Statistical Principles for Clinical Trials is a pivotal document for statisticians in clinical research in the pharmaceutical industry, guiding, as it does, statistical aspects of the planning, conduct and analysis of regulatory clinical trials. New statisticians joining the industry require a thorough and lasting understanding of the 39-page guideline. Given the amount of detail to be covered, traditional (lecture-style) training methods are largely ineffective. Directed reading, perhaps in groups, may be a helpful approach, especially if experienced staff are involved in the discussions. However, as in many training scenarios, exercise-based training is often the most effective approach to learning. In this paper, we describe several variants of a training module in ICH E9 for new statisticians, combining directed reading with a game-based exercise, which have proved to be highly effective and enjoyable for course participants.

2.
The draft addendum to the ICH E9 regulatory guideline asks for explicit definition of the treatment effect to be estimated in clinical trials. The draft guideline also introduces the concept of intercurrent events to describe events that occur post-randomisation and may affect efficacy assessment. Composite estimands allow incorporation of intercurrent events in the definition of the endpoint. A common example of an intercurrent event is discontinuation of randomised treatment, and use of a composite strategy would assess treatment effect based on a variable that combines the outcome variable of interest with discontinuation of randomised treatment. Use of a composite estimand may avoid the need for imputation, which would be required by a treatment policy estimand. The draft guideline gives the example of a binary approach for specifying a composite estimand. When the variable is measured on a non-binary scale, other options are available where the intercurrent event is given an extreme unfavourable value, for example comparison of median values or analysis based on categories of response. This paper reviews approaches to deriving a composite estimand and contrasts the use of this estimand to the treatment policy estimand. The benefits of using each strategy are discussed, and examples of the use of the different approaches are given for a clinical trial in nasal polyposis and a steroid reduction trial in severe asthma.
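To make the binary composite strategy concrete, here is a minimal sketch (not taken from the paper; the column names `responder` and `discontinued` are illustrative assumptions) in which discontinuation of randomised treatment is counted as non-response:

```python
import pandas as pd

# Hypothetical trial data: "responder" is the binary outcome of interest and
# "discontinued" flags the intercurrent event (discontinuation of randomised
# treatment). Both column names are made up for illustration.
df = pd.DataFrame({
    "arm":          ["active", "active", "placebo", "placebo"],
    "responder":    [1, 1, 1, 0],
    "discontinued": [0, 1, 0, 0],
})

# Binary composite strategy: a subject counts as a responder only if they
# responded AND did not experience the intercurrent event.
df["composite_responder"] = (df["responder"].astype(bool)
                             & ~df["discontinued"].astype(bool))

# Response rates by arm on the composite endpoint.
print(df.groupby("arm")["composite_responder"].mean())
```

Under a treatment policy estimand, by contrast, the observed `responder` value would be used regardless of `discontinued`, which is why imputation of post-discontinuation outcomes can become necessary there.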

3.
Many methods used in spatial statistics are computationally demanding, so the development of more computationally efficient methods has received attention. An important development is the integrated nested Laplace approximation (INLA) method, which carries out Bayesian analysis more efficiently. For geostatistical data, this method is implemented via the SPDE approach, which requires the creation of a mesh overlying the study area, and all of the results obtained depend on that mesh. The impact of the mesh on inference and prediction is investigated through simulations. As there is no formal procedure for specifying the mesh, we investigate guidelines for creating an optimal one.

4.
The International Conference on Harmonisation guideline 'Statistical Principles for Clinical Trials' was adopted by the Committee for Proprietary Medicinal Products (CPMP) in March 1998, and consequently is operational in Europe. Since then, more detailed guidance on selected topics has been issued by the CPMP in the form of 'Points to Consider' documents. The intent of these documents was to give guidance particularly to non-statistical reviewers within regulatory authorities, although of course they also provide a good source of information for pharmaceutical industry statisticians. In addition, the Food and Drug Administration has recently issued a draft guideline on data monitoring committees. In November 2002 a one-day discussion forum was held in London by Statisticians in the Pharmaceutical Industry (PSI). The aim of the meeting was to discuss how statisticians were responding to some of the issues covered in these new guidelines, and to document consensus views where they existed. The forum was attended by industry, academic and regulatory statisticians. This paper outlines the questions raised, the resulting discussions and the consensus views reached. It is clear from the guidelines and the discussions at the workshop that the statistical analysis strategy must be planned during the design phase of a clinical trial and carefully documented. Once the study is complete, the analysis strategy should be thoughtfully executed and the findings reported.

5.
In confirmatory clinical trials, the prespecification of the primary analysis model is a universally accepted scientific principle to allow strict control of the type I error. Consequently, both the ICH E9 guideline and the European Medicines Agency (EMA) guideline on missing data in confirmatory clinical trials require that the primary analysis model be defined unambiguously. This requirement applies to mixed models for longitudinal data, which handle missing data implicitly. To evaluate compliance with the EMA guideline, we examined the model specifications in those clinical study protocols from development phases II and III submitted between 2015 and 2018 to the Ethics Committee at Hannover Medical School under the German Medicinal Products Act which planned to use a mixed model for longitudinal data in the confirmatory testing strategy. Overall, 39 trials from different types of sponsors and a wide range of therapeutic areas were evaluated. While nearly all protocols specify the fixed and random effects of the analysis model (95%), only 77% give the structure of the covariance matrix used for modeling the repeated measurements. Moreover, the testing method (36%), the estimation method (28%), the computation method (3%), and the fallback strategy (18%) are each given by fewer than half of the study protocols. Subgroup analyses indicate that these findings are universal and not specific to clinical trial phase or company size. Altogether, our results show that compliance with the guideline is poor to varying degrees, and consequently strict type I error rate control at the intended level is not guaranteed.

6.
A large body of literature exists on the techniques for selecting the important variables in linear regression analysis. Many of these techniques are ad hoc in nature and have not been studied from a theoretical viewpoint. In this paper we discuss some of the more commonly used techniques and propose a selection procedure based on the statistical selection and ranking approach. This procedure is easy to compute and apply. The procedure depends on the goodness of fit of the model and the total error associated with it.

7.
The additive Cox model is flexible and powerful for modelling the dynamic changes of regression coefficients in survival analysis. This paper is concerned with feature screening for the additive Cox model with ultrahigh-dimensional covariates. The proposed screening procedure can effectively identify active predictors; that is, with probability tending to one, the selected variable set includes the actual active predictors. To carry out the proposed procedure, we propose an effective algorithm and establish its ascent property. We further prove that the proposed procedure possesses the sure screening property. Furthermore, we examine the finite sample performance of the proposed procedure via Monte Carlo simulations, and illustrate it with a real data example.

8.
The performance of commonly used asymptotic inference procedures for the random-effects model used in meta-analysis relies on the number of studies. When the number of studies is moderate or small, the exact inference procedure is more reliable than its asymptotic counterparts. However, the related numerical computation may be demanding, and this has been an obstacle to routine use of the exact method. In this paper, we propose a novel numerical algorithm for constructing the exact 95% confidence interval of the location parameter in the random-effects model. The algorithm is much faster than the naive method and may greatly facilitate the use of the more appropriate exact inference procedure in meta-analysis. Numerical studies and real data examples illustrate the advantage of the proposed method.
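The abstract does not spell out the authors' fast algorithm, but the general idea behind exact intervals of this kind, inverting an exact test over a grid of candidate parameter values, can be sketched as follows. This is a generic, heavily simplified illustration (a sign-flip test on unweighted study effects), not the paper's method:

```python
import numpy as np

def invert_test(p_value_fn, grid, alpha=0.05):
    """Confidence set by test inversion: keep every candidate value of the
    location parameter whose test p-value exceeds alpha."""
    accepted = [mu0 for mu0 in grid if p_value_fn(mu0) > alpha]
    return min(accepted), max(accepted)

rng = np.random.default_rng(0)
y = np.array([0.2, 0.5, 0.1, 0.4, 0.3])  # toy study-level effect estimates

def p_value(mu0, n_rep=2000):
    # Sign-flip test of H0: location = mu0, exact under symmetry; real
    # meta-analysis would also weight by the study variances.
    t_obs = abs(np.mean(y - mu0))
    signs = rng.choice([-1.0, 1.0], size=(n_rep, y.size))
    t_null = np.abs(np.mean(signs * (y - mu0), axis=1))
    return np.mean(t_null >= t_obs)

grid = np.linspace(-0.5, 1.0, 301)
print(invert_test(p_value, grid))  # approximate 95% confidence interval
```

The "naive method" cost is visible here: one full test per grid point, which is what a faster algorithm would aim to avoid.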

9.
ABSTRACT

In high-dimensional regression, the presence of influential observations may lead to inaccurate analysis results, so detecting these unusual points before statistical regression analysis is a prime concern. Most traditional approaches, however, are based on single-case diagnostics, and they may fail in the presence of multiple influential observations because of masking effects. In this paper, an adaptive multiple-case deletion approach is proposed for detecting multiple influential observations in the presence of masking effects in high-dimensional regression. The procedure contains two stages. In the first stage, we propose a multiple-case deletion technique and obtain an approximately clean subset of the data that is presumably free of influential observations. To enhance efficiency, in the second stage we refine the detection rule. Monte Carlo simulation studies and a real-life data analysis demonstrate the effective performance of the proposed procedure.

10.
In practice, the presence of influential observations may lead to misleading results in variable screening problems. We therefore propose a robust variable screening procedure for high-dimensional data analysis in this paper. Our method consists of two steps. The first step is to define a new high-dimensional influence measure and propose a novel influence diagnostic procedure to remove unusual observations. The second step is to utilize the sure independence screening procedure based on distance correlation to select important variables in high-dimensional regression analysis. The new influence measure and diagnostic procedure that we develop are model-free. To confirm the effectiveness of the proposed method, we conduct simulation studies and a real-life data analysis to illustrate its merits over some competing methods. Both the simulation results and the real-life data analysis demonstrate that the proposed method can effectively control the adverse effect of unusual observations by detecting and removing them, and that it performs better than the competing methods.
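A minimal sketch of the second step, screening by distance correlation, is given below. It assumes the third-party `dcor` package and omits the paper's influence-diagnostic first step; the function name `dc_sis` and all data are illustrative:

```python
import numpy as np
import dcor  # third-party package implementing distance correlation

def dc_sis(X, y, d):
    """Sure independence screening: rank predictors by their distance
    correlation with the response and keep the top d of them."""
    scores = np.array([dcor.distance_correlation(X[:, j], y)
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:d]

# Toy data: 200 observations, 500 predictors, only the first two active
# (one linearly, one nonlinearly -- distance correlation captures both).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 500))
y = 2 * X[:, 0] - X[:, 1] ** 2 + 0.5 * rng.standard_normal(200)

print(dc_sis(X, y, d=10))  # indices of the 10 top-ranked predictors
```

Because influential observations can distort the distance-correlation ranking itself, the paper's diagnostic step would run before `dc_sis` on the cleaned data.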

11.
In this paper, we propose a new clustering procedure for financial instruments. Unlike the prevalent clustering procedures based on time series analysis, our procedure employs the jump tail dependence coefficient as the dissimilarity measure, assuming that the observed logarithms of the prices/indices of the financial instruments are embedded in a multidimensional Lévy process. The efficiency of our proposed clustering procedure is tested by a simulation study. Finally, using real data on country indices, we illustrate that our clustering procedure can help investors avoid potentially huge losses when constructing portfolios.

12.
F. Auert & H. Läuter, Statistics, 2013, 47(2), 265–293
In this paper we give an approximation procedure for surfaces that are defined on a p-dimensional region and are observable (disturbed by some noise) according to an experimental design. In this procedure we combine clustering methods, discriminant analysis and smoothing techniques.

The second part of the paper considers some investigations of the statistical properties of linear smoothing procedures. We assume linear models, and for a broad class of models we prove the consistency of the estimation of the expectation of the observations after smoothing.

In the last section we give some results on efficiency.

13.
High-dimensional models, involving very many parameters with data of moderate size, are receiving much attention from diverse research fields. Model selection is an important issue in such high-dimensional data analysis. Recent literature on the theoretical understanding of high-dimensional models covers a wide range of penalized methods, including LASSO and SCAD. This paper presents a systematic overview of the recent development of high-dimensional statistical models. We provide a brief review of the recent development of theory, methods, and guidelines on the application of several penalized methods. The review includes the appropriate settings in which each reviewed method should be implemented, along with its limitations and potential solutions. In particular, we provide a systematic review of the statistical theory of high-dimensional methods by considering a unified high-dimensional modeling framework together with high-level conditions. This framework includes (generalized) linear regression and quantile regression as special cases. We hope our review helps researchers in this field better understand the area and provides useful information for future studies.
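As a concrete instance of the penalized methods the review covers, here is a minimal LASSO fit with scikit-learn (an illustrative sketch with simulated data, not an example from the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy high-dimensional data: 100 observations, 500 predictors,
# only the first three carrying signal.
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 500))
beta = np.zeros(500)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + rng.standard_normal(100)

# The L1 penalty shrinks most coefficients exactly to zero, performing
# estimation and variable selection simultaneously.
model = Lasso(alpha=0.3).fit(X, y)
print("selected predictors:", np.flatnonzero(model.coef_))
```

SCAD differs in using a non-convex penalty that reduces the bias LASSO induces on large coefficients, which is one of the trade-offs such reviews compare.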

14.
In this paper we introduce a sequential seasonal unit root testing approach which explicitly addresses its application to high-frequency data. The main idea is to see which unit roots found in higher-frequency data can also be found in temporally aggregated data. We illustrate our procedure with the analysis of monthly data, and we find, upon analysing the aggregated quarterly data, that a smaller number of test statistics can sometimes be considered. Monte Carlo simulations and empirical illustrations emphasize the practical relevance of our method.

15.
Imagine we have two different samples and are interested in doing semi- or non-parametric regression analysis on each of them, possibly with the same model. In this paper, we consider the problem of testing whether a specific covariate has different impacts on the regression curve in these two samples. We compare the regression curves of different samples but are interested in specific differences instead of testing for equality of the whole regression function. Our procedure allows for random designs, different sample sizes, different variance functions, different sets of regressors with different impact functions, etc. As we use the marginal integration approach, this method can be applied to any strong, weak or latent separable model, as well as to additive interaction models, to compare the lower-dimensional separable components between the different samples. Thus, in the case of separable models, our procedure includes the possibility of comparing the whole regression curves, thereby avoiding the curse of dimensionality. It is shown that the bootstrap fails in theory and in practice. Therefore, we propose a subsampling procedure with automatic choice of subsample size. We present a complete asymptotic theory and an extensive simulation study.

16.
Bootstrap in functional linear regression
We consider the functional linear model with scalar response and functional explanatory variable. One of the most popular methodologies for estimating the model parameter is based on functional principal components analysis (FPCA). In recent literature, weak convergence for a wide class of FPCA-type estimates has been proved, and consequently asymptotic confidence sets can be built. In this paper, we propose an alternative approach for obtaining pointwise confidence intervals by means of a bootstrap procedure, and we establish its asymptotic validity. In addition, a simulation study allows us to compare the practical behaviour of asymptotic and bootstrap confidence intervals in terms of coverage rates for different sample sizes.
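The bootstrap idea can be illustrated generically. The sketch below builds pointwise confidence intervals by residual bootstrap in an ordinary linear model, standing in for the FPCA-based functional estimator of the paper; all data and the model are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)

# Fit by least squares and keep the residuals.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# Residual bootstrap: refit on resampled residuals and collect the
# fitted values at a grid of evaluation points.
grid = np.column_stack([np.ones(50), np.linspace(0, 1, 50)])
fits = np.empty((1000, 50))
for b in range(1000):
    y_star = X @ beta_hat + rng.choice(resid, size=n, replace=True)
    beta_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    fits[b] = grid @ beta_star

# Pointwise 95% confidence intervals from bootstrap percentiles.
lower, upper = np.percentile(fits, [2.5, 97.5], axis=0)
print(lower[:3], upper[:3])
```

In the functional setting, the refitting step would be the FPCA estimator and the "grid" the points at which the coefficient function is evaluated.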

17.
Meta-analytical approaches have been extensively used to analyze medical data. In most cases, the data come from different studies or independent trials with similar characteristics. However, these methods can be applied in a broader sense. In this paper, we show how existing meta-analytic techniques can also be used when dealing with parameters estimated from individual hierarchical data. Specifically, we propose to apply statistical methods that account for the variances (and possibly covariances) of such measures. The estimated parameters, together with their estimated variances, can be incorporated into a general linear mixed model framework. We illustrate the methodology using data from a first-in-man study and a simulated data set. The analysis was implemented with the SAS procedure MIXED, and example code is offered.
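The core idea, pooling estimates while accounting for their estimated variances, can be sketched with simple inverse-variance weighting (a Python illustration of the principle only; the paper itself works in a general linear mixed model fitted with SAS PROC MIXED, and all numbers below are hypothetical):

```python
import numpy as np

def inverse_variance_pool(estimates, variances):
    """Fixed-effect pooling of parameter estimates, weighting each by the
    reciprocal of its estimated variance."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * est) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, se

# Hypothetical subject-level parameter estimates with their variances.
estimates = [0.8, 1.1, 0.9, 1.3]
variances = [0.04, 0.09, 0.05, 0.16]
pooled, se = inverse_variance_pool(estimates, variances)
print(f"pooled estimate: {pooled:.3f} (SE {se:.3f})")
```

The mixed-model formulation generalizes this: the known variances (and covariances) enter as the residual covariance structure, while random effects capture between-unit heterogeneity.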

18.
The ICH harmonized tripartite guideline 'Statistical Principles for Clinical Trials', more commonly referred to as ICH E9, was adopted by the regulatory bodies of the European Union, Japan and the USA in 1998. This document united related guidance documents on statistical methodology from each of the three ICH regions, and meant that for the first time clear, consistent guidance on statistical principles was available to those conducting and reviewing clinical trials. At the 10th anniversary of the guideline's adoption, this paper discusses the influence of ICH E9 by presenting a perspective on how approaches to some aspects of clinical trial design, conduct and analysis have changed in that time in the context of regulatory submissions in the European Union.

19.
In high-dimensional classification problems, the two-stage method of first reducing the dimension of the predictors and then applying a classification method is a natural solution and has been widely used in many fields. The consistency of the two-stage method is an important issue, since errors induced by the dimension reduction method inevitably have an impact on the subsequent classification method. As an effective method for classification problems, boosting has been widely used in practice. In this paper, we study the consistency of the two-stage method of dimension-reduction-based boosting (briefly, DRB) for classification problems. Theoretical results show that a Lipschitz condition on the base learner is required to guarantee the consistency of DRB. These theoretical findings provide a useful guideline for applications.

20.
The frailties, representing extra variation due to unobserved measurements, are often assumed to be iid in shared frailty models. In medical applications, however, the suspicion can arise that a data set violates the iid assumption. In this paper we investigate this conjecture through an analysis of the kidney infection data in McGilchrist and Aisbett (McGilchrist, C. A., Aisbett, C. W. (1991). Regression with frailty in survival analysis. Biometrics 47:461–466). As a test procedure, we consider the cusum of squares test, which is frequently used for monitoring a variance change in statistical models. Our result strongly supports the heterogeneity of the frailty distribution.
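The cusum-of-squares statistic used for detecting a variance change is straightforward to compute; a minimal sketch on toy data (not the kidney infection data) is:

```python
import numpy as np

def cusum_of_squares(e):
    """CUSUM-of-squares statistic for a variance change: the maximal
    deviation of the cumulative sum of squares from the straight line
    expected under constant variance."""
    e = np.asarray(e, dtype=float)
    n = e.size
    c = np.cumsum(e ** 2) / np.sum(e ** 2)
    k = np.arange(1, n + 1)
    return np.max(np.abs(c - k / n))

# Toy example: the variance doubles halfway through the series.
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(0, 2, 100)])
print(cusum_of_squares(x))  # large values suggest a variance change
```

In the frailty setting, the test is applied to quantities reflecting the frailty variation rather than to raw residuals, but the monitoring statistic takes this same form.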
