Similar Articles: 20 results found
1.
Concomitant medications are medications used by patients in a clinical trial, other than the investigational drug. These data are routinely collected in clinical trials. The data are usually collected in a longitudinal manner, for the duration of patients' participation in the trial. The routine summaries of these data are incidence‐type, describing whether or not a medication was ever administered during the study. The longitudinal aspect of the data is essentially ignored. The aim of this article is to suggest exploratory methods for graphically displaying the longitudinal features of the data using a well‐established estimator called the ‘mean cumulative function’. This estimator permits summary and graphical display of the data, and preparation of some statistical tests to compare groups. This estimator may also incorporate information on censoring of patient data. Copyright © 2009 John Wiley & Sons, Ltd.
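The 'mean cumulative function' mentioned above has a simple nonparametric estimator: a running sum, over event times, of the number of medication starts divided by the number of patients still under observation. A minimal sketch, using hypothetical toy data rather than anything from the article, might look like this in Python:

```python
import numpy as np

def mean_cumulative_function(event_times, censor_times):
    """Nonparametric (Nelson-Aalen-type) MCF estimate for recurrent events.

    event_times  : list of arrays, one per patient, with the times of each
                   concomitant-medication start (may be empty)
    censor_times : one end-of-follow-up time per patient
    Returns the distinct event times and the MCF evaluated there.
    """
    censor_times = np.asarray(censor_times, dtype=float)
    all_events = np.sort(np.concatenate([np.asarray(e, float) for e in event_times]))
    times = np.unique(all_events)
    mcf = np.zeros_like(times)
    total = 0.0
    for i, t in enumerate(times):
        at_risk = np.sum(censor_times >= t)   # patients still under observation at t
        d = np.sum(all_events == t)           # medication starts at t
        total += d / at_risk if at_risk > 0 else 0.0
        mcf[i] = total
    return times, mcf

# toy example: three patients with 2, 1 and 0 medication starts
times, mcf = mean_cumulative_function(
    event_times=[[2.0, 5.0], [3.0], []],
    censor_times=[6.0, 4.0, 8.0])
print(dict(zip(times, mcf)))
```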

2.
A graphical procedure for the display of treatment means that enables one to determine the statistical significance of the observed differences is presented. It is shown that the widely used least significant difference and honestly significant difference statistics can be used to construct plots in which any two means whose uncertainty intervals do not overlap are significantly different at the assigned probability level. It is argued that these plots, because of their straightforward decision rules, are more effective than those that show the observed means with standard errors or confidence limits. Several examples of the proposed displays are included to illustrate the procedure.
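One way to read the construction: plot each mean with an interval of half-width LSD/2, so that two intervals fail to overlap exactly when the corresponding difference exceeds the LSD. A minimal sketch under the usual balanced-design, common-error-variance assumptions (hypothetical numbers, not the article's examples):

```python
import numpy as np
from scipy import stats

def lsd_intervals(means, mse, n_per_group, df_error, alpha=0.05):
    """Uncertainty intervals of half-width LSD/2: two means differ
    significantly at level alpha iff their intervals do not overlap
    (balanced design, common error variance)."""
    means = np.asarray(means, float)
    lsd = stats.t.ppf(1 - alpha / 2, df_error) * np.sqrt(2 * mse / n_per_group)
    half = lsd / 2
    return np.column_stack([means - half, means + half])

# hypothetical example: 4 treatment means, MSE taken from the ANOVA table
print(lsd_intervals([10.1, 12.4, 12.9, 15.0], mse=3.2, n_per_group=6, df_error=20))
```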

3.
Researchers often study competing risks, in which subjects may fail from any one of k causes. Comparing any two competing risks with covariate effects is very important in medical studies. In this paper, we develop tests for comparing cause-specific hazard rates and cumulative incidence functions at specified covariate levels under the additive risk model by a weighted difference of estimates of cumulative cause-specific hazard rates. Motivated by McKeague et al. (2001), we construct simultaneous confidence bands for the difference of two conditional cumulative incidence functions as a useful graphical tool. In addition, we conduct a simulation study, and the simulation result shows that the proposed procedure has good finite sample performance. A melanoma data set from a clinical trial is used for the purpose of illustration.

4.
We present a flexible class of marginal models for the cumulative incidence function. The semiparametric transformation model is utilized in a decomposition for the marginal failure probabilities which extends previous work on Farewell's cure model. Novel estimation, inference and prediction procedures are developed, with large sample properties derived from the theory of martingales and U-statistics. A small simulation study demonstrates that the methods are appropriate for practical use. The methods are illustrated with a thorough analysis of a prostate cancer clinical trial. Simple graphical displays are used to check for the goodness of fit.

5.
The analysis of adverse events (AEs) is a key component in the assessment of a drug's safety profile. Inappropriate analysis methods may result in misleading conclusions about a therapy's safety and consequently its benefit‐risk ratio. The statistical analysis of AEs is complicated by the fact that the follow‐up times can vary between the patients included in a clinical trial. This paper takes as its focus the analysis of AE data in the presence of varying follow‐up times within the benefit assessment of therapeutic interventions. Instead of approaching this issue directly and solely from an analysis point of view, we first discuss what should be estimated in the context of safety data, leading to the concept of estimands. Although the current discussion on estimands is mainly related to efficacy evaluation, the concept is applicable to safety endpoints as well. Within the framework of estimands, we present statistical methods for analysing AEs, with the focus being on the time to the occurrence of the first AE of a specific type. We give recommendations on which estimators should be used for the estimands described. Furthermore, we state practical implications of the analysis of AEs in clinical trials and give an overview of examples across different indications. We also provide a review of current practices of health technology assessment (HTA) agencies with respect to the evaluation of safety data. Finally, we describe problems with meta‐analyses of AE data and sketch possible solutions.
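As a small illustration of why varying follow-up matters, the crude incidence proportion and the exposure-adjusted incidence rate can tell different stories for the same arm. The sketch below uses hypothetical data and shows only two of the many possible summaries, not the authors' full recommendations:

```python
import numpy as np

def ae_summaries(first_ae_time, followup_time):
    """Two simple summaries for the time to first AE of a given type.

    first_ae_time : time of first AE per patient, np.nan if none observed
    followup_time : individual follow-up duration per patient

    The crude incidence proportion ignores follow-up; the exposure-adjusted
    incidence rate divides events by total time at risk (implicitly
    assuming a constant hazard).
    """
    first_ae_time = np.asarray(first_ae_time, float)
    followup_time = np.asarray(followup_time, float)
    had_ae = ~np.isnan(first_ae_time)
    time_at_risk = np.where(had_ae, first_ae_time, followup_time)
    proportion = had_ae.mean()
    eair = had_ae.sum() / time_at_risk.sum()   # events per unit person-time
    return proportion, eair

# hypothetical arm with unequal follow-up across four patients
print(ae_summaries(first_ae_time=[0.5, np.nan, 2.0, np.nan],
                   followup_time=[3.0, 1.0, 2.5, 6.0]))
```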

6.
A computer-based graphical method for comparing a multiparameter log likelihood surface with its quadratic approximation is presented. The method can be used to visualize certain aspects of any high-dimensional surface near a local maximum. Examples are given to illustrate the interpretation and use of the resulting plots.
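A one-parameter analogue of the idea (not the authors' multiparameter method) is easy to sketch: compare the shifted log likelihood l(theta) - l(theta_hat) with the quadratic -0.5 * I(theta_hat) * (theta - theta_hat)^2 over a grid. A minimal example with an exponential likelihood and made-up data:

```python
import numpy as np

# One-dimensional illustration: exponential log likelihood for a rate lam
# versus its quadratic (normal) approximation at the MLE.
data = np.array([0.8, 1.2, 0.5, 2.1, 1.7, 0.9])
n, s = len(data), data.sum()

def loglik(lam):
    return n * np.log(lam) - lam * s

lam_hat = n / s                     # MLE of the rate
info = n / lam_hat**2               # observed information at the MLE

grid = np.linspace(0.3 * lam_hat, 2.0 * lam_hat, 50)
exact = loglik(grid) - loglik(lam_hat)              # shifted to 0 at the MLE
quad = -0.5 * info * (grid - lam_hat) ** 2          # quadratic approximation

for lam, e, q in zip(grid[::10], exact[::10], quad[::10]):
    print(f"lam={lam:.2f}  exact={e:.3f}  quadratic={q:.3f}")
```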

7.
A Cox-type regression model accommodating heteroscedasticity, with a power factor of the baseline cumulative hazard, is investigated for analyzing data with crossing hazards behavior. Since the approach of partial likelihood cannot eliminate the baseline hazard, an overidentified estimating equation (OEE) approach is introduced in the estimation procedure. Its by-product, a model checking statistic, is presented to test for the overall adequacy of the heteroscedastic model. Further, under the heteroscedastic model setting, we propose two statistics to test the proportional hazards assumption. Implementation of this model is illustrated in a data analysis of a cancer clinical trial.

8.
In statistical modeling, we strive to specify models that resemble data collected in studies or observed from processes. Consequently, distributional specification and parameter estimation are central to parametric models. Graphical procedures, such as the quantile–quantile (QQ) plot, are arguably the most widely used method of distributional assessment, though critics find their interpretation to be overly subjective. Formal goodness-of-fit tests are available and are quite powerful, but only indicate whether there is a lack of fit, not why there is lack of fit. In this article, we explore the use of the lineup protocol to inject rigor into graphical distributional assessment and compare its power to that of formal distributional tests. We find that lineup tests are considerably more powerful than traditional tests of normality. A further investigation into the design of QQ plots shows that de-trended QQ plots are more powerful than the standard approach, provided the plot preserves distances in x and y on the same scale. While we focus on diagnosing nonnormality, our approach is general and can be directly extended to the assessment of other distributions.
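The lineup protocol hides the plot of the real data among null plots generated from the fitted model; if a viewer can pick out the real panel, the apparent lack of fit is unlikely to be due to chance. A minimal sketch of the panel-generation step (normal null, QQ-plot coordinates only, no plotting code), under assumptions of our own rather than the article's exact design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def qq_points(x):
    """Sorted data versus standard normal quantiles (the coordinates of a QQ plot)."""
    x = np.sort(np.asarray(x, float))
    probs = (np.arange(1, len(x) + 1) - 0.5) / len(x)
    return stats.norm.ppf(probs), x

def make_lineup(observed, m=20):
    """One panel built from the observed data, hidden among m-1 null panels
    simulated from a normal with the same mean and sd; the viewer's task is
    to pick out the real panel."""
    mu, sd = np.mean(observed), np.std(observed, ddof=1)
    panels = [qq_points(rng.normal(mu, sd, len(observed))) for _ in range(m - 1)]
    true_pos = int(rng.integers(m))
    panels.insert(true_pos, qq_points(observed))
    return panels, true_pos

obs = rng.exponential(1.0, 60)          # clearly non-normal "observed" data
panels, pos = make_lineup(obs)
print("observed data hidden in panel", pos)
```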

9.
In survival data that are collected from phase III clinical trials on breast cancer, a patient may experience more than one event, including recurrence of the original cancer, new primary cancer and death. Radiation oncologists are often interested in comparing patterns of local or regional recurrences alone as first events to identify a subgroup of patients who need to be treated by radiation therapy after surgery. The cumulative incidence function provides estimates of the cumulative probability of locoregional recurrences in the presence of other competing events. A simple version of the Gompertz distribution is proposed to parameterize the cumulative incidence function directly. The model interpretation for the cumulative incidence function is more natural than it is with the usual cause-specific hazard parameterization. Maximum likelihood analysis is used to estimate simultaneously parametric models for cumulative incidence functions of all causes. The parametric cumulative incidence approach is applied to a data set from the National Surgical Adjuvant Breast and Bowel Project and compared with analyses that are based on parametric cause-specific hazard models and nonparametric cumulative incidence estimation.
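One commonly used Gompertz-type form for a cumulative incidence function is F(t) = 1 - exp{(alpha/beta)(1 - e^(beta*t))}, which for beta < 0 plateaus below 1 and can therefore model a cumulative incidence directly; this particular parameterization is an assumption here, not necessarily the exact one used in the paper. A small numerical sketch:

```python
import numpy as np

def gompertz_cif(t, alpha, beta):
    """Gompertz-type cumulative incidence function
    F(t) = 1 - exp{(alpha/beta) * (1 - exp(beta * t))}.
    With beta < 0 the curve plateaus below 1 at 1 - exp(alpha/beta),
    which is what allows it to parameterize a cumulative incidence directly."""
    t = np.asarray(t, float)
    return 1.0 - np.exp((alpha / beta) * (1.0 - np.exp(beta * t)))

t = np.linspace(0, 15, 6)
print(gompertz_cif(t, alpha=0.15, beta=-0.25))   # plateau = 1 - exp(-0.6), about 0.45
```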

10.
A dataset consisting of salaries of major league baseball players is published at the start of each season in USA Today, and is also made available on the Internet. It is argued that such an easily available dataset and those similar to it can be successfully used by students in a first statistics course for an interesting introduction to data analysis through summary measures and graphical displays. Such an approach is most natural for many students because of a strong interest in sports and economics. Other statistical ideas can be explored as a natural consequence of the discussions that ensue from such an analysis.

11.
In many clinical research applications the time to occurrence of one event of interest, which may be obscured by another, so-called competing, event, is investigated. Specific interventions can only have an effect on the endpoint they address, or research questions might focus on risk factors for a certain outcome. Different approaches for the analysis of time-to-event data in the presence of competing risks were introduced in the last decades, including some new methodologies which are not yet frequently used in the analysis of competing risks data. Cause-specific hazard regression, subdistribution hazard regression, mixture models, vertical modelling and the analysis of time-to-event data based on pseudo-observations are described in this article and are applied to a dataset of a cohort study intended to establish risk stratification for cardiac death after myocardial infarction. Data analysts are encouraged to use the appropriate methods for their specific research questions by comparing different regression approaches in the competing risks setting regarding assumptions, methodology and interpretation of the results. Notes on application of the mentioned methods using the statistical software R are presented, and extensions to the presented standard methods proposed in the statistical literature are mentioned.
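One nonparametric building block shared by several of these approaches is the Aalen-Johansen cumulative incidence estimate. The article works with R; purely as an illustration (Python here, with toy data rather than the cohort described above), the estimator can be sketched as:

```python
import numpy as np

def cumulative_incidence(time, status, cause=1):
    """Nonparametric (Aalen-Johansen type) cumulative incidence for one cause.

    time   : event or censoring times
    status : 0 = censored, otherwise the cause of failure (1, 2, ...)
    Returns the distinct event times and CIF(t) for the requested cause.
    """
    time, status = np.asarray(time, float), np.asarray(status)
    order = np.argsort(time)
    time, status = time[order], status[order]
    surv = 1.0                      # overall Kaplan-Meier survival just before t
    cif = 0.0
    out_t, out_cif = [], []
    for t in np.unique(time):
        at_risk = np.sum(time >= t)
        d_all = np.sum((time == t) & (status != 0))
        d_cause = np.sum((time == t) & (status == cause))
        cif += surv * d_cause / at_risk     # probability mass failing from this cause
        surv *= 1.0 - d_all / at_risk       # update overall survival
        out_t.append(t)
        out_cif.append(cif)
    return np.array(out_t), np.array(out_cif)

# toy data: failures from cause 1, cause 2, and censored (0) observations
t, cif1 = cumulative_incidence(
    time=[2, 3, 3, 5, 6, 7, 8, 9],
    status=[1, 2, 1, 0, 1, 2, 0, 1])
print(dict(zip(t, np.round(cif1, 3))))
```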

12.
13.
The perception of food in Europe has been a topic of research for many years due to its importance in better understanding the role of food in helping to define the culture of a country. It is also important from a marketing perspective for identifying how consumers relate to food. Recently, this topic was discussed by Guerrero et al. (2010), who used a graphical statistical technique called correspondence analysis to identify the association between the countries that participated in the study and words that were linked with “Traditional” food. This paper explores the use of non-symmetrical correspondence analysis and provides an interpretation of the configuration of points in the graphical display in terms of its first four moments. In particular, we will focus on the skewness and kurtosis of such a configuration. Such measures provide further detail on the nature of the association between the countries studied and the words linked with “Traditional” food.

14.
Generally, in the interpretation of clinical safety laboratory data, it is extreme values that indicate potential safety issues. We illustrate the application of multivariate extreme value modelling to such data. Applying the methods to a clinical trial dataset, we find unexpected extremal relationships that have potentially important implications for the interpretation of such data. Copyright © 2012 John Wiley & Sons, Ltd.

15.
In biomedical and public health research, both repeated measures of biomarkers Y as well as times T to key clinical events are often collected for a subject. The scientific question is how the distribution of the responses [T, Y | X] changes with covariates X. [T | X] may be the focus of the estimation where Y can be used as a surrogate for T. Alternatively, T may be the time to drop-out in a study in which [Y | X] is the target for estimation. Also, the focus of a study might be on the effects of covariates X on both T and Y or on some underlying latent variable which is thought to be manifested in the observable outcomes. In this paper, we present a general model for the joint analysis of [T, Y | X] and apply the model to estimate [T | X] and other related functionals by using the relevant information in both T and Y. We adopt a latent variable formulation like that of Fawcett and Thomas and use it to estimate several quantities of clinical relevance to determine the efficacy of a treatment in a clinical trial setting. We use a Markov chain Monte Carlo algorithm to estimate the model's parameters. We illustrate the methodology with an analysis of data from a clinical trial comparing risperidone with a placebo for the treatment of schizophrenia.

16.
A second course in statistics, for nonstatisticians who will use packaged statistical software in their work, is outlined. The course is directed toward the wise choice, use, and evaluation of statistical computer packages. The goal of the course is to train educated consumers of statistical programs. Particular attention is paid to computer-based data analysis, interpretation of output, comparison of competing packages, and statistical problems that arise when computers are employed to analyze large data sets.

17.
Various procedures, mainly graphical, are presented for analyzing large sets of ranking data in which the permutations are not equally likely. One method is based on box plots; the others are motivated by a model originally proposed by Mallows. The model is characterised by two parameters corresponding to location and dispersion. Graphical methods based on Q-Q plots are also discussed for comparing two groups of judges. The proposed methods are illustrated on an empirical data set.
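The Mallows model referred to here puts probability on a ranking pi proportional to exp(-theta * d(pi, pi0)), with a modal ranking pi0 (location) and theta (dispersion); taking d to be Kendall's pairwise-disagreement count is an assumption in the sketch below (hypothetical four-item example, not the article's data):

```python
import numpy as np
from itertools import permutations

def kendall_distance(pi, sigma):
    """Number of item pairs ordered differently in the two rankings."""
    pi, sigma = list(pi), list(sigma)
    items = range(len(pi))
    return sum(1 for i in items for j in items if i < j
               and (pi.index(i) - pi.index(j)) * (sigma.index(i) - sigma.index(j)) < 0)

def mallows_pmf(center, theta, n_items=4):
    """Mallows model: P(pi) proportional to exp(-theta * d(pi, center)),
    with a location (modal ranking) and a dispersion parameter theta."""
    perms = list(permutations(range(n_items)))
    w = np.array([np.exp(-theta * kendall_distance(p, center)) for p in perms])
    return dict(zip(perms, w / w.sum()))

pmf = mallows_pmf(center=(0, 1, 2, 3), theta=1.0)
print(max(pmf, key=pmf.get), round(max(pmf.values()), 3))   # modal ranking and its probability
```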

18.
The linear transformation model is a semiparametric model which contains the Cox proportional hazards model and the proportional odds model as special cases. Cai et al. (Biometrika 87:867-878, 2000) proposed an inference procedure for the linear transformation model with correlated censored observations. In this article, we develop formal and graphical model checking techniques for linear transformation models based on cumulative sums of martingale-type residuals. The proposed method is illustrated with data from a clinical trial.

19.
Missing data, and the bias they can cause, are an almost ever‐present concern in clinical trials. The last observation carried forward (LOCF) approach has been frequently utilized to handle missing data in clinical trials, and is often specified in conjunction with analysis of variance (LOCF ANOVA) for the primary analysis. Considerable advances in statistical methodology, and in our ability to implement these methods, have been made in recent years. Likelihood‐based, mixed‐effects model approaches implemented under the missing at random (MAR) framework are now easy to implement, and are commonly used to analyse clinical trial data. Furthermore, such approaches are more robust to the biases from missing data, and provide better control of Type I and Type II errors than LOCF ANOVA. Empirical research and analytic proof have demonstrated that the behaviour of LOCF is uncertain, and in many situations it has not been conservative. Using LOCF as a composite measure of safety, tolerability and efficacy can lead to erroneous conclusions regarding the effectiveness of a drug. This approach also violates the fundamental basis of statistics as it involves testing an outcome that is not a physical parameter of the population, but rather a quantity that can be influenced by investigator behaviour, trial design, etc. Practice should shift away from using LOCF ANOVA as the primary analysis and focus on likelihood‐based, mixed‐effects model approaches developed under the MAR framework, with missing not at random methods used to assess robustness of the primary analysis. Copyright © 2004 John Wiley & Sons, Ltd.
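For readers unfamiliar with LOCF, the imputation step itself is trivial to state, which is part of why it became so common; a minimal sketch on hypothetical change-from-baseline data is below. The abstract's point is that likelihood-based mixed-effects models fitted under MAR avoid this single-imputation step altogether.

```python
import numpy as np

def locf(y):
    """Last observation carried forward on a subjects x visits matrix,
    with np.nan marking missed visits (assumes no missing baseline)."""
    y = np.array(y, dtype=float)
    for j in range(1, y.shape[1]):
        missing = np.isnan(y[:, j])
        y[missing, j] = y[missing, j - 1]    # carry the previous visit's value forward
    return y

# hypothetical change-from-baseline scores; subject 2 drops out after visit 2
y = [[0.0, -1.0, -2.0, -3.0],
     [0.0, -0.5, np.nan, np.nan],
     [0.0, -2.0, -2.5, np.nan]]
print(locf(y))
```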

20.
Li Jinchang (李金昌). Statistical Research (《统计研究》), 2014, 31(1): 10-15.
Recently, books such as The Age of Big Data (《大数据时代》) have attracted wide attention. Big data is changing how people behave and think, so how should statistics, the discipline that takes data as its object of study, respond? Based on an understanding of big data, this paper argues that statistical thinking needs to change in three respects: how data are understood, how data are collected, and how data are analysed. Within the thinking on data analysis, further changes are needed in the statistical analysis process, the approach to empirical analysis and the logic of inferential analysis, and the criteria for evaluating statistical analyses also need to be adjusted. Around these changes, the paper proposes eight measures for actively responding to big data, so that the discipline of statistics keeps pace with the times.
