Similar Literature
20 similar documents retrieved.
1.
Subgroup-by-treatment interaction assessments are routinely performed when analysing clinical trials and are particularly important for phase 3 trials, where the results may affect regulatory labelling. Interpreting such interactions is difficult: on the one hand, a subgroup finding can be due to chance, yet these analyses are also known to have low power to detect differential treatment effects across subgroup levels, so they may overlook important differences in therapeutic efficacy. The EMA has therefore issued draft guidance on the use of subgroup analyses in this setting. Although this guidance makes clear proposals on the importance of pre-specifying likely subgroup effects and on using this pre-specification when interpreting trial results, it is less clear about which analysis methods are reasonable and how to judge whether an apparent subgroup effect warrants further evaluation or action. A PSI/EFSPI Working Group has therefore investigated a focused set of analysis approaches for assessing treatment-effect heterogeneity across subgroups in confirmatory clinical trials that account for the number of subgroups explored, and has examined the ability of each method to detect such heterogeneity. This evaluation showed that plotting standardised effects, the bias-adjusted bootstrapping method and the SIDES method all perform more favourably than traditional approaches such as testing every subgroup-by-treatment interaction individually or applying a global test of interaction. These approaches should therefore be considered to aid interpretation and provide context for the results of subgroup analyses conducted for phase 3 clinical trials.
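As a point of reference for the traditional comparators mentioned above, the sketch below (Python; the subgroup estimates and standard errors are invented) standardises per-subgroup treatment effects against the inverse-variance-weighted overall effect and applies a global chi-square test of interaction. It is not the Working Group's bias-adjusted bootstrap or SIDES implementation.

```python
import numpy as np
from scipy import stats

def standardized_subgroup_effects(est, se):
    """Standardize subgroup estimates against the inverse-variance-weighted
    overall treatment effect."""
    est, se = np.asarray(est, float), np.asarray(se, float)
    w = 1.0 / se**2
    overall = np.sum(w * est) / np.sum(w)
    return (est - overall) / se

def global_interaction_test(est, se):
    """Cochran-style global test of interaction across subgroups."""
    z = standardized_subgroup_effects(est, se)
    q = float(np.sum(z**2))
    return q, stats.chi2.sf(q, df=len(z) - 1)

# invented subgroup treatment-effect estimates and standard errors
est = [0.30, 0.45, 0.10, 0.52, 0.28]
se = [0.15, 0.20, 0.18, 0.25, 0.16]
q, p = global_interaction_test(est, se)
print(f"Q = {q:.2f}, p = {p:.3f}")
```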

2.
The use of surrogate variables has been proposed as a means to capture, for a given observed set of data, the sources driving the dependency structure among high-dimensional sets of features, and to remove the effects of those sources and their potential negative impact on simultaneous inference. In this article we illustrate the potential effects of latent variables on testing dependence and the resulting impact on multiple inference, briefly review the method of surrogate variable analysis proposed by Leek and Storey (PNAS 2008; 105:18718-18723), and assess that method via simulations intended to mimic the complexity of feature dependence observed in real-world microarray data. The method is also assessed via application to a recent Merck microarray data set. Both the simulation and case-study results indicate that surrogate variable analysis can offer a viable strategy for tackling the multiple-testing dependence problem when the features follow a potentially complex correlation structure, yielding improvements in the variability of false-positive rates and increases in power.
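A bare-bones caricature of the surrogate variable idea, assuming the expression matrix and primary design are given: estimate surrogate variables as the top singular vectors of the residual matrix after regressing out the primary variable. The significance-based refinement steps of Leek and Storey's actual algorithm are deliberately omitted.

```python
import numpy as np

def crude_surrogate_variables(E, X, k=2):
    """Estimate k surrogate variables as the top left singular vectors of
    the residuals of the expression matrix E (features x samples) after
    removing the fit of the primary design X (samples x covariates)."""
    beta, *_ = np.linalg.lstsq(X, E.T, rcond=None)  # regress out primary model
    R = E.T - X @ beta                              # samples x features residuals
    u, s, vt = np.linalg.svd(R, full_matrices=False)
    return u[:, :k]                                 # samples x k surrogate variables

rng = np.random.default_rng(9)
n, p = 40, 1000
x = rng.integers(0, 2, n).astype(float)    # primary variable (group label)
latent = rng.normal(size=n)                # unmodeled batch-like factor
E = rng.normal(size=(p, n)) + np.outer(rng.normal(size=p), latent)
X = np.column_stack([np.ones(n), x])
sv = crude_surrogate_variables(E, X, k=1)
print(np.corrcoef(sv[:, 0], latent)[0, 1])  # large in magnitude if recovered
```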

3.
Communications in Statistics: Theory and Methods, 2012, 41(16-17): 3150-3161
We consider a new approach to dealing with nonignorable nonresponse on an outcome variable in a causal-inference framework. Assuming that a binary instrumental variable for nonresponse is available, we provide a likelihood-based approach to identify and estimate heterogeneous causal effects of a binary treatment on specific latent subgroups of units, named principal strata, defined by the nonresponse behavior under each level of the treatment and of the instrument. We show that, within each stratum, nonresponse is ignorable and respondents can be properly compared by treatment status. In order to assess our method and its robustness when the usually invoked assumptions are relaxed or misspecified, we simulate data to resemble a real experiment conducted on a panel survey that compares different methods of reducing panel attrition.

4.
There is a large literature on methods of analysis for randomized trials with noncompliance, most of which focuses on the effect of treatment on the average outcome. This paper considers evaluating the effect of treatment on the entire outcome distribution and on general functions of this effect. For distributional treatment effects, fully non-parametric and fully parametric approaches have been proposed; the fully non-parametric approach can be inefficient, while the fully parametric approach is not robust to violations of distributional assumptions. We develop a semiparametric instrumental variable method based on the empirical likelihood approach. Our method can be applied to general outcomes and general functions of outcome distributions, and allows us to predict a subject's latent compliance class on the basis of an observed outcome value within the observed assignment and received-treatment groups. Asymptotic results for the estimators and the likelihood ratio statistic are derived. A simulation study shows that our estimators of various treatment effects are substantially more efficient than the currently used fully non-parametric estimators. The method is illustrated by an analysis of data from a randomized trial of an encouragement intervention to improve adherence to prescribed depression treatments among depressed elderly patients in primary care practices.

5.
Outlier detection has been used extensively in data analysis to detect anomalous observations, with important applications in fraud detection and robust analysis, among others. In this paper, we propose a method for detecting multiple outliers in a linear functional relationship model for circular variables. Using the residual values of the Caires and Wyatt model, we apply a hierarchical clustering approach, and with a tree diagram we illustrate the detection of outliers graphically. A Monte Carlo simulation study is conducted to verify the accuracy of the proposed method; low probabilities of masking and swamping effects indicate its validity. Applications to two real data sets show its practical applicability.
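A minimal sketch of the clustering step, assuming the circular residuals are already available (the Caires-Wyatt model fit itself is not shown): single-linkage hierarchical clustering on pairwise circular distances, with observations outside the largest cluster flagged as candidate outliers. The cut height and the example data are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def detect_circular_outliers(residuals, cut_height=0.5):
    """Flag observations whose circular residuals fall outside the largest
    cluster of a single-linkage hierarchical clustering."""
    r = np.mod(np.asarray(residuals, float) + np.pi, 2 * np.pi) - np.pi
    diff = np.abs(r[:, None] - r[None, :])
    d = np.pi - np.abs(np.pi - diff)          # circular distance, in [0, pi]
    z = linkage(squareform(d, checks=False), method="single")
    labels = fcluster(z, t=cut_height, criterion="distance")
    main = np.argmax(np.bincount(labels))     # largest cluster = clean bulk
    return np.where(labels != main)[0]        # indices of flagged outliers

rng = np.random.default_rng(1)
res = rng.vonmises(0.0, 20.0, size=50)        # tight bulk of residuals
res[:3] += np.pi                              # three planted outliers
print(detect_circular_outliers(res))
```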

6.
Centroid-based partitioning cluster analysis is a popular method for segmenting data into more homogeneous subgroups. Visualization can help tremendously to understand the positions of these subgroups relative to each other in higher dimensional spaces and to assess the quality of partitions. In this paper we present several improvements on existing cluster displays using neighborhood graphs with edge weights based on cluster separation and convex hulls of inner and outer cluster regions. A new display called shadow-stars can be used to diagnose pairwise cluster separation with respect to the distribution of the original data. Artificial data and two case studies with real data are used to demonstrate the techniques.

7.
Multi-country randomised clinical trials (MRCTs) are common in the medical literature, and their interpretation has been the subject of extensive recent discussion. In many MRCTs, an evaluation of treatment-effect homogeneity across countries or regions is conducted. Subgroup analysis principles require a significant test of interaction in order to claim heterogeneity of treatment effect across subgroups, such as countries in an MRCT. As clinical trials are typically underpowered for tests of interaction, overly optimistic expectations of treatment-effect homogeneity can lead researchers, regulators and other stakeholders to over-interpret apparent differences between subgroups even when heterogeneity tests are non-significant. In this paper, we consider some exploratory analysis tools to address this issue. We present three measures derived using the theory of order statistics, which can be used to understand the magnitude and the nature of the variation in treatment effects that can arise merely as an artefact of chance. These measures are not intended to replace a formal test of interaction; rather, they provide non-inferential visual aids that allow comparison of the observed and expected differences between regions or other subgroups and are a useful supplement to such a test. We discuss how our methodology differs from recently published methods addressing the same issue. A case study of our approach is presented using data from the Study of Platelet Inhibition and Patient Outcomes (PLATO), a large cardiovascular MRCT that has been the subject of controversy in the literature. An R package implementing the proposed methods is available. Copyright © 2014 John Wiley & Sons, Ltd.
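The order-statistic reasoning can be mimicked by simulation: under perfect homogeneity the regional estimates differ only through sampling noise, so the expected ordered estimates quantify the spread that chance alone produces. The Monte Carlo stand-in below is not the paper's closed-form measures or its R package.

```python
import numpy as np

def expected_ordered_effects(se, n_sim=20000, seed=0):
    """Monte Carlo expectation of the ordered regional effect estimates under
    perfect homogeneity (true effect identical in every region), centered at
    the overall effect. `se` are the standard errors of the regional estimates."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(0.0, np.asarray(se, float), size=(n_sim, len(se)))
    return np.sort(draws, axis=1).mean(axis=0)

# spread that chance alone would produce with these (invented) standard errors
se = [0.10, 0.12, 0.15, 0.20]
print(expected_ordered_effects(se))
```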

8.
Large, family-based imaging studies can provide a better understanding of the interactions of environmental and genetic influences on brain structure and function. The interpretation of imaging data from large family studies, however, has been hindered by the paucity of well-developed statistical tools that permit the analysis of complex imaging data together with behavioral and clinical data. In this paper, we propose two methods for these analyses. First, a variance-components model along with score statistics is used to test linear hypotheses about unknown parameters, such as the associations of brain measures (e.g., cortical and subcortical surfaces) with their potential genetic determinants. Second, we develop a test procedure based on a resampling method to assess simultaneously the statistical significance of linear hypotheses across the entire brain. The value of these methods lies in their computational simplicity and in their applicability to a wide range of imaging data. Simulation studies show that our test procedure can accurately control the family-wise error rate. We apply our methods to the detection of statistical significance of gender-by-age interactions and of the effects of genetic variation on the thickness of the cerebral cortex in a family study of major depressive disorder.
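The resampling idea for brain-wide significance can be illustrated generically with the max-statistic approach: take the null distribution of the maximum statistic over all locations and threshold the observed map at its upper quantile. A schematic sketch with simulated statistics, not the authors' score-statistic machinery.

```python
import numpy as np

def max_stat_threshold(null_stats, alpha=0.05):
    """Family-wise error control via the max-statistic resampling idea:
    threshold = (1 - alpha) quantile of the maximum statistic across all
    locations, taken over resampled null data sets."""
    max_per_draw = null_stats.max(axis=1)   # one maximum over voxels per draw
    return np.quantile(max_per_draw, 1 - alpha)

rng = np.random.default_rng(8)
null = rng.standard_normal((1000, 5000)) ** 2   # resampled chi2(1)-like stats
obs = rng.standard_normal(5000) ** 2
obs[:3] += 30.0                                 # three truly active locations
thr = max_stat_threshold(null)
print(thr, np.where(obs > thr)[0])
```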

9.
Comparing two hazard rate functions to evaluate a treatment effect is an important issue in survival analysis. It is quite common that the two hazard rate functions cross each other at one or more unknown time points, representing temporal changes of the treatment effect. In certain applications, besides survival data, we also have related longitudinal data available on some time-dependent covariates. In such cases, a joint model that accommodates both types of data allows us to infer the association between the survival and longitudinal data and to assess the treatment effect better. In this paper, we propose a modelling approach for comparing two crossing hazard rate functions by jointly modelling survival and longitudinal data. The parameters of the proposed joint model are estimated by maximum likelihood via the EM algorithm, and asymptotic properties of the maximum likelihood estimators are studied. To illustrate the virtues of the proposed method, we compare its performance with that of several existing methods in a simulation study. The proposed method is also demonstrated on a real dataset obtained from an HIV clinical trial.

10.
Clinical trials are primarily conducted to understand the average effects treatments have on patients. However, patients are heterogeneous in the severity of their condition and in ways that affect what treatment effect they can expect. It is therefore important to understand and characterize how treatment effects vary. The design and analysis of clinical studies play critical roles in evaluating and characterizing heterogeneous treatment effects. This panel discussed considerations in design and analysis under the recognition that there are heterogeneous treatment effects across subgroups of patients. Panel members discussed many questions, including: What is a good estimate of the treatment effect in me, a 65-year-old, bald, Caucasian-American, male patient? What magnitude of heterogeneity of treatment effects (HTE) is sufficiently large to merit attention? What role can prior evidence about HTE play in confirmatory trial design and analysis? Is there anything described in the 21st Century Cures Act that would benefit from greater attention to HTE? An example of a Bayesian approach addressing multiplicity when testing for treatment effects in subgroups is also provided. We can do more, and better, to understand heterogeneous treatment effects and to provide the best information about them.

11.
We propose a multivariate tobit (MT) latent variable model, defined by a confirmatory factor analysis with covariates, for analysing mixed-type data that are inherently non-negative and sometimes contain a large proportion of zeros. Some useful MT models are special cases of our proposed model. To obtain maximum likelihood estimates, we use the expectation-maximization (EM) algorithm, with its E-step carried out via the Gibbs sampler (made feasible by Monte Carlo simulation) and its M-step greatly simplified by a sequence of conditional maximizations. Standard errors are evaluated by inverting a Monte Carlo approximation of the information matrix using Louis's method. The methodology is illustrated with a simulation study and a real example.
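As a much simpler building block than the multivariate latent-variable model, the sketch below fits a univariate tobit model censored at zero by direct likelihood maximization; it is shown only to make the censored-at-zero likelihood concrete, not as the paper's MCEM procedure.

```python
import numpy as np
from scipy import stats, optimize

def tobit_negloglik(params, y, X):
    """Negative log-likelihood of a univariate tobit model censored at 0:
    y = max(0, X @ beta + eps), eps ~ N(0, sigma^2)."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                 # keep sigma positive
    mu = X @ beta
    ll = np.where(
        y <= 0,
        stats.norm.logcdf(-mu / sigma),       # P(latent <= 0): censored part
        stats.norm.logpdf(y, mu, sigma),      # density of the observed part
    )
    return -np.sum(ll)

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = np.maximum(0.0, X @ np.array([0.2, 1.0]) + rng.normal(0, 1, n))
fit = optimize.minimize(tobit_negloglik, x0=[0.0, 0.0, 0.0], args=(y, X))
print(fit.x[:2], np.exp(fit.x[2]))            # beta estimates and sigma
```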

12.
With the advancement of technologies such as genomic sequencing, predictive biomarkers have become a useful tool for the development of personalized medicine. Predictive biomarkers can be used to select subsets of patients who are most likely to benefit from a treatment. A number of approaches for subgroup identification have been proposed in recent years; although overviews of subgroup identification methods are available, systematic comparisons of their performance in simulation studies are rare. Interaction trees (IT), model-based recursive partitioning, subgroup identification based on differential effect, the simultaneous threshold interaction modeling algorithm (STIMA), and adaptive refinement by directed peeling have all been proposed for subgroup identification; we compared these methods in a simulation study using a structured approach. In order to identify a target population for subsequent trials, a selection of the identified subgroups is needed. We therefore propose a subgroup criterion leading to a target subgroup consisting of the identified subgroups with an estimated treatment difference no less than a pre-specified threshold. In our simulation study, we evaluated these methods using measures for binary classification, such as sensitivity and specificity. In settings with large effects or huge sample sizes, most methods perform well; for more realistic settings in drug development involving data from a single trial only, however, none of the methods seems suitable for selecting a target population. Using the subgroup criterion as an alternative to the proposed pruning procedures, STIMA and IT can improve their performance in some settings. The methods and the subgroup criterion are illustrated by an application in amyotrophic lateral sclerosis.
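Schematically, the proposed subgroup criterion reduces to filtering identified subgroups by their estimated treatment difference. The sketch below is a schematic reading of that criterion; the `Subgroup` container and the rule strings are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Subgroup:
    rule: str        # e.g. "biomarker > 2.5" (hypothetical representation)
    effect: float    # estimated treatment difference in the subgroup

def target_population(subgroups, threshold):
    """Target subgroup = union of identified subgroups whose estimated
    treatment difference is no less than the pre-specified threshold."""
    return [s for s in subgroups if s.effect >= threshold]

found = [Subgroup("biomarker > 2.5", 0.40),
         Subgroup("age < 60", 0.05),
         Subgroup("stage II", 0.25)]
print([s.rule for s in target_population(found, threshold=0.2)])
```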

13.
In the analysis of survival times, the logrank test and the Cox model have been established as key tools that do not require specific distributional assumptions. Under the assumption of proportional hazards, they are efficient and their results can be interpreted unambiguously. However, delayed treatment effects, disease progression, treatment switching or the presence of subgroups with differential treatment effects may challenge the proportional-hazards assumption. In practice, weighted logrank tests emphasizing either early, intermediate or late event times via an appropriate weighting function may be used to accommodate an expected pattern of non-proportionality. We model these sources of non-proportional hazards via a mixture of survival functions with piecewise constant hazards. The model is then applied to study the power of unweighted and weighted logrank tests, as well as maximum tests allowing different time-dependent weights. Simulation results suggest a robust performance of maximum tests across different scenarios, with little loss in power compared with the most powerful of the considered weighting schemes and large power gains compared with unfavourable weights. The actual sources of non-proportional hazards are not obvious from the resulting population-wise survival functions, highlighting the importance of detailed simulations in the planning phase of a trial when non-proportional hazards are assumed. We provide the required tools in a software package, which allows users to model data-generating processes under complex non-proportional-hazards scenarios, to simulate data from these models, and to perform the weighted logrank tests.
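A plain implementation of one weighted logrank test of the kind discussed here, using Fleming-Harrington G(rho, gamma) weights (gamma > 0 up-weights late events), applied to data simulated with a piecewise constant hazard and a delayed treatment effect. This is a hand-rolled sketch, not the authors' software package.

```python
import numpy as np
from scipy import stats

def fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0):
    """Fleming-Harrington G(rho, gamma) weighted logrank test."""
    time, event, group = map(np.asarray, (time, event, group))
    o, v, s_left = 0.0, 0.0, 1.0
    for t in np.unique(time[event == 1]):        # distinct event times
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = s_left**rho * (1.0 - s_left)**gamma  # weight uses pooled S(t-)
        o += w * (d1 - d * n1 / n)
        if n > 1:
            v += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        s_left *= 1.0 - d / n                    # Kaplan-Meier update
    z = o / np.sqrt(v)
    return z, 2 * stats.norm.sf(abs(z))

# delayed effect: common hazard up to time 1, then a benefit for group 1
rng = np.random.default_rng(3)
n = 200
grp = np.repeat([0, 1], n)
t0 = rng.exponential(1.0, 2 * n)
late = rng.exponential(np.where(grp == 1, 2.0, 1.0))
t = np.where(t0 < 1.0, t0, 1.0 + late)           # piecewise constant hazard
cens = rng.uniform(0.5, 5.0, 2 * n)
print(fh_weighted_logrank(np.minimum(t, cens), (t <= cens).astype(int), grp))
```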

14.
In many clinical trials, biological, pharmacological, or clinical information is used to define candidate subgroups of patients that might have a differential treatment effect. Once the trial results are available, interest will focus on subgroups with an increased treatment effect. Estimating a treatment effect for these groups, together with an adequate uncertainty statement, is challenging owing to the resulting “random high” (selection) bias. In this paper, we investigate Bayesian model averaging to address this problem. The general motivation for model averaging is that subgroup selection can be viewed as model selection, so methods for dealing with model-selection uncertainty, such as model averaging, can also be used in this setting. Simulations are used to evaluate the performance of the proposed approach, and we illustrate it with an example from an early-phase clinical trial.
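A poor man's version of the averaging idea, using BIC weights over a "no interaction" and a "subgroup-by-treatment interaction" linear model in place of a fully Bayesian treatment; it illustrates how the subgroup effect estimate is shrunk toward the overall effect when the interaction model has weak support. Function and variable names are invented.

```python
import numpy as np
import statsmodels.api as sm

def bic_weight_bma(y, treat, subgroup):
    """BIC-weighted model averaging of the treatment effect inside a
    candidate subgroup, over the no-interaction and interaction models."""
    X0 = sm.add_constant(np.column_stack([treat, subgroup]))
    X1 = sm.add_constant(np.column_stack([treat, subgroup, treat * subgroup]))
    fits = [sm.OLS(y, X).fit() for X in (X0, X1)]
    bic = np.array([f.bic for f in fits])
    w = np.exp(-0.5 * (bic - bic.min()))
    w /= w.sum()
    # treatment effect in subgroup == 1 under each model
    eff = np.array([fits[0].params[1], fits[1].params[1] + fits[1].params[3]])
    return float(np.dot(w, eff)), w

rng = np.random.default_rng(7)
n = 400
treat = rng.integers(0, 2, n)
sub = rng.integers(0, 2, n)
y = 0.3 * treat + 0.2 * treat * sub + rng.normal(0, 1, n)
print(bic_weight_bma(y, treat, sub))
```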

15.
We introduce a new two-sample inference procedure to assess the relative performance of two groups over time. Our model-free method does not assume proportional hazards, making it suitable for scenarios where non-proportional hazards may exist. The procedure includes a diagnostic tau plot to identify changes in hazard timing and a formal inference procedure. The tau-based measures we develop are clinically meaningful and provide interpretable estimands to summarize the treatment effect over time. Our proposed statistic is a U-statistic and exhibits a martingale structure, allowing us to construct confidence intervals and perform hypothesis testing. The approach is robust with respect to the censoring distribution, and we also demonstrate how it can be applied for sensitivity analysis in scenarios with missing tail information due to insufficient follow-up. Without censoring, the Kendall's tau estimator we propose reduces to the Wilcoxon-Mann-Whitney statistic. We evaluate the method using simulations that compare its performance with the restricted mean survival time and logrank statistics, and we apply it to data from several published oncology clinical trials where non-proportional hazards may exist.
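In the uncensored case the between-group tau has a simple pairwise form, and its relation to the Wilcoxon-Mann-Whitney statistic can be verified numerically; the snippet below is a minimal uncensored sketch, not the full censoring-robust estimator.

```python
import numpy as np
from scipy import stats

def between_group_tau(x, y):
    """Uncensored between-group Kendall's tau: the average pairwise
    concordance sign(x_i - y_j) over all (i, j) pairs."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.mean(np.sign(x[:, None] - y[None, :]))

rng = np.random.default_rng(4)
x = rng.exponential(1.5, 80)   # e.g. survival times, treatment arm
y = rng.exponential(1.0, 90)   # control arm, no censoring
tau = between_group_tau(x, y)
u, _ = stats.mannwhitneyu(x, y)
# identity check (no ties): tau = 2 * U / (n_x * n_y) - 1
print(tau, 2 * u / (len(x) * len(y)) - 1)
```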

16.
17.
In this paper, we propose an adaptive stochastic gradient boosting tree for classification studies with imbalanced data. The adjustment of cost-sensitivity and of the predictive threshold are integrated, together with a composite criterion, into the original stochastic gradient boosting tree to address the imbalanced data structure. A numerical study shows that the proposed method can significantly enhance the classification accuracy for the minority class with only a small loss in the true negative rate for the majority class. We use simulations to discuss the relation of cost-sensitivity to threshold manipulation. An illustrative analysis of suboptimal health-state data in traditional Chinese medicine is also discussed.
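A crude stand-in for the adaptive scheme, assuming scikit-learn: a stochastic gradient boosting tree with fixed cost weights on the minority class plus a lowered predictive threshold. The weight of 10 and the thresholds are arbitrary choices, not the paper's composite criterion.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier(subsample=0.7, random_state=0)  # stochastic GBT
w = np.where(ytr == 1, 10.0, 1.0)        # cost-sensitivity: up-weight minority
clf.fit(Xtr, ytr, sample_weight=w)

proba = clf.predict_proba(Xte)[:, 1]
for thr in (0.5, 0.3):                   # threshold manipulation
    pred = (proba >= thr).astype(int)
    print(thr,
          recall_score(yte, pred),                 # sensitivity (minority)
          recall_score(yte, pred, pos_label=0))    # true negative rate
```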

18.
A boxplot is a simple and effective exploratory data analysis tool for graphically summarizing a distribution of data. However, when the quartiles in a boxplot are inaccurately estimated, these estimates can affect subsequent analyses. In this paper, we consider the problem of constructing boxplots in a bivariate setting with a categorical covariate with multiple subgroups, and assume that some of these boxplots can be clustered. We propose to use this grouping property to improve the estimation of the quartiles. We demonstrate that the proposed method estimates the quartiles more accurately than the usual boxplot. It is also shown that the proposed method identifies outliers effectively as a consequence of the accurate quartiles, and possesses a clustering effect owing to the grouping property. We then apply the proposed method to annual maximum precipitation data in South Korea and present its clustering results.
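One simplified reading of the pooling idea, with invented details: cluster subgroups by their raw quartile vectors, then re-estimate each subgroup's quartiles from the pooled data of its cluster. The paper's actual clustering rule may differ.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def pooled_quartiles(groups, n_clusters=2, seed=0):
    """Cluster subgroups by their quartile vectors, then re-estimate each
    cluster's quartiles from its pooled data."""
    q = np.array([np.percentile(g, [25, 50, 75]) for g in groups])
    _, labels = kmeans2(q, n_clusters, seed=seed, minit="++")
    out = {}
    for c in np.unique(labels):
        pooled = np.concatenate([g for g, l in zip(groups, labels) if l == c])
        out[c] = np.percentile(pooled, [25, 50, 75])
    return labels, out

rng = np.random.default_rng(5)
# six subgroups from two underlying regimes; the small samples make the
# usual per-group quartile estimates noisy
groups = [rng.normal(loc, 1.0, 15) for loc in (0, 0, 0, 3, 3, 3)]
labels, quartiles = pooled_quartiles(groups)
print(labels, quartiles)
```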

19.
Parameter estimation is the first step in constructing control charts, and one of these parameters is the process mean. Classical estimators of the process mean are sensitive to the presence of outlying data and of outlying subgroups, which contaminate the whole data set. Existing robust estimators for the process mean consider the effects of individual outliers; in this paper, we propose a robust estimator that reduces the effect of outlying subgroups as well as of individual outliers within a subgroup. The proposed estimator was compared with some classical and robust estimators of the process mean. Although its relative efficiency ranks fourth among the estimators tested, it is both robust and efficient when outlying subgroups are present. Evaluation of the results indicated that the proposed estimator is less sensitive to the presence of outliers and performs well when there are no individual outliers or outlying subgroups.
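A two-level trimming sketch in the same spirit (illustrative only; the paper's estimator is constructed differently): trim within each subgroup to damp individual outliers, then trim across the subgroup estimates to damp outlying subgroups.

```python
import numpy as np
from scipy.stats import trim_mean

def robust_process_mean(subgroups, trim=0.2):
    """Trimmed mean within each subgroup (individual outliers), then a
    trimmed mean across the subgroup estimates (outlying subgroups)."""
    within = np.array([trim_mean(s, trim) for s in subgroups])
    return trim_mean(within, trim)

rng = np.random.default_rng(6)
subs = [rng.normal(10.0, 1.0, 5) for _ in range(20)]
subs[0] += 8.0          # an outlying subgroup
subs[1][0] += 12.0      # an individual outlier within a subgroup
print(robust_process_mean(subs), np.mean(np.concatenate(subs)))
```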

20.
Objectives in many longitudinal studies of individuals infected with the human immunodeficiency virus (HIV) include the estimation of population-average trajectories of HIV ribonucleic acid (RNA) over time and tests for differences in trajectory across subgroups. Special features that are often inherent in the underlying data include a tendency for some HIV RNA levels to be below an assay detection limit, and for individuals with high initial levels or high rates of change to drop out of the study early because of illness or death. We develop a likelihood for the observed data that incorporates both of these features. Informative drop-outs are handled by means of an approach previously published by Schluchter. Using data from the HIV Epidemiology Research Study, we implement a maximum likelihood procedure to estimate initial HIV RNA levels and slopes within a population, compare these parameters across subgroups of HIV-infected women and illustrate the importance of appropriate treatment of left-censoring and informative drop-outs. We also assess model assumptions and consider the prediction of random intercepts and slopes in this setting. The results suggest that marked bias in estimates of fixed effects, variance components and standard errors in the analysis of HIV RNA data might be avoided by the use of methods like those illustrated.
