Similar Documents
20 similar documents found
1.
Heterogeneity is an enormously complex problem because so many dimensions and variables can be considered when assessing which ones may influence an efficacy or safety outcome for an individual patient. This is difficult in randomized controlled trials and even more so in observational settings. An alternative approach is presented in which the individual patient becomes the “subgroup”: similar patients are identified in the clinical trial database or electronic medical record and used to predict how that individual patient may respond to treatment.
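The "individual patient as subgroup" idea above amounts to matching on covariates. A minimal sketch (hypothetical covariates and outcomes, not the paper's actual method) is a nearest-neighbour average over the most similar trial patients:

```python
import numpy as np

def predict_from_similar_patients(query, X, y, k=3):
    """Predict an outcome for one patient by averaging the outcomes of
    the k most similar trial patients (Euclidean distance on covariates)."""
    d = np.linalg.norm(X - query, axis=1)   # distance to every trial patient
    nearest = np.argsort(d)[:k]             # indices of the k closest matches
    return y[nearest].mean()

# Toy trial database: covariates (age, biomarker level) and observed responses.
X = np.array([[50.0, 1.0], [52.0, 1.1], [70.0, 3.0], [72.0, 3.2]])
y = np.array([0.8, 0.9, 0.2, 0.1])

print(predict_from_similar_patients(np.array([51.0, 1.05]), X, y, k=2))
```

In practice the distance metric, covariate scaling, and choice of k would all need clinical and statistical justification.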

2.
Before biomarkers can be used in clinical trials or patients' management, the laboratory assays that measure their levels have to go through development and analytical validation. One of the most critical performance metrics for validation of any assay is the smallest value that can be detected; any value below this threshold is said to be below the limit of detection (LOD). Most existing approaches that model such LOD-restricted biomarkers are parametric in nature. These parametric models, however, depend heavily on distributional assumptions and can lose precision under model or distributional misspecification. Using an example from a prostate cancer clinical trial, we show how a critical relationship between a serum androgen biomarker and a prognostic factor of overall survival is completely missed by the widely used parametric Tobit model. Motivated by this example, we implement a semiparametric approach, through a pseudo-value technique, that effectively captures the important relationship between the LOD-restricted serum androgen and the prognostic factor. Our simulations show that the pseudo-value based semiparametric model outperforms a commonly used parametric model for modeling below-LOD biomarkers by having lower mean square errors of estimation.
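The Tobit model that the abstract critiques treats below-LOD values as left-censored observations from a normal distribution. A minimal sketch of that parametric baseline (simulated data, not the trial data; the pseudo-value approach itself is not shown):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
true_mu, true_sigma, lod = 2.0, 1.0, 1.0
z = rng.normal(true_mu, true_sigma, 500)
observed = np.where(z >= lod, z, np.nan)   # values below LOD are not seen
censored = np.isnan(observed)

def negloglik(theta):
    # Tobit likelihood: density for detected values, CDF mass below LOD.
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll = norm.logpdf(observed[~censored], mu, sigma).sum()
    ll += censored.sum() * norm.logcdf(lod, mu, sigma)
    return -ll

fit = minimize(negloglik, x0=[0.0, 0.0])
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)
```

When the normality assumption fails, this estimator can be badly biased, which is the motivation for the semiparametric alternative.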

3.
Observation of adverse drug reactions during drug development can cause closure of the whole programme. However, if association between the genotype and the risk of an adverse event is discovered, then it might suffice to exclude patients of certain genotypes from future recruitment. Various sequential and non‐sequential procedures are available to identify an association between the whole genome, or at least a portion of it, and the incidence of adverse events. In this paper we start with a suspected association between the genotype and the risk of an adverse event and suppose that the genetic subgroups with elevated risk can be identified. Our focus is determination of whether the patients identified as being at risk should be excluded from further studies of the drug. We propose using a utility function to determine the appropriate action, taking into account the relative costs of suffering an adverse reaction and of failing to alleviate the patient's disease. Two illustrative examples are presented, one comparing patients who suffer from an adverse event with contemporary patients who do not, and the other making use of a reference control group. We also illustrate two classification methods, LASSO and CART, for identifying patients at risk, but we stress that any appropriate classification method could be used in conjunction with the proposed utility function. Our emphasis is on determining the action to take rather than on providing definitive evidence of an association. Copyright © 2008 John Wiley & Sons, Ltd.
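A utility function of the kind the abstract proposes weighs the cost of an adverse reaction against the cost of leaving the disease untreated. The sketch below is purely illustrative: the utility values, probabilities, and the linear form are hypothetical placeholders, not the paper's specification.

```python
def expected_utility(include, p_ae, p_benefit,
                     cost_ae=-20.0, gain_benefit=4.0, cost_untreated=-1.0):
    """Hypothetical utility: treat the subgroup (include=True) and risk
    the adverse reaction, or exclude it and forgo the chance of benefit."""
    if include:
        return p_ae * cost_ae + p_benefit * gain_benefit
    return cost_untreated   # disease not alleviated, but no AE risk

def decide(p_ae, p_benefit):
    u_in = expected_utility(True, p_ae, p_benefit)
    u_out = expected_utility(False, p_ae, p_benefit)
    return "include" if u_in > u_out else "exclude"

print(decide(p_ae=0.30, p_benefit=0.60))  # high-risk genotype
print(decide(p_ae=0.02, p_benefit=0.60))  # low-risk genotype
```

Any classifier (LASSO, CART, or otherwise) can supply the subgroup-specific risk estimates fed into such a rule.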

4.
Dimension identification of psychological test items based on statistical variable-selection methods
Multidimensional psychological tests have been widely used in many kinds of assessment in recent years. Although the latent traits (dimensions) measured by the test as a whole are known when it is constructed, the specific dimensions measured by each item still need to be determined. Exploiting the relationship between multidimensional item response theory models and generalized linear models, two variable-selection methods, the LASSO and the elastic net, can be used to identify the dimensions of test items. Simulation studies show that the LASSO identifies dimensions better than the elastic net, achieving high identification accuracy across different types of multidimensional tests.
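The core mechanism here is that an L1 penalty zeroes out coefficients for dimensions an item does not load on. A minimal sketch on a simulated linear stand-in (the actual study works with multidimensional IRT models, which are not implemented here):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))            # latent-trait scores (stand-in)
beta = np.zeros(p)
beta[0], beta[3] = 2.0, 3.0            # the item truly loads on dims 0 and 3
y = X @ beta + rng.normal(scale=0.5, size=n)

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)  # dimensions the LASSO retains
print(selected)
```

The elastic net (`sklearn.linear_model.ElasticNet`) would be the direct competitor, trading pure sparsity for grouped selection of correlated dimensions.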

5.
The analysis of adverse events (AEs) is a key component in the assessment of a drug's safety profile. Inappropriate analysis methods may result in misleading conclusions about a therapy's safety and consequently its benefit‐risk ratio. The statistical analysis of AEs is complicated by the fact that the follow‐up times can vary between the patients included in a clinical trial. This paper takes as its focus the analysis of AE data in the presence of varying follow‐up times within the benefit assessment of therapeutic interventions. Instead of approaching this issue directly and solely from an analysis point of view, we first discuss what should be estimated in the context of safety data, leading to the concept of estimands. Although the current discussion on estimands is mainly related to efficacy evaluation, the concept is applicable to safety endpoints as well. Within the framework of estimands, we present statistical methods for analysing AEs with the focus being on the time to the occurrence of the first AE of a specific type. We give recommendations on which estimators should be used for the estimands described. Furthermore, we state practical implications of the analysis of AEs in clinical trials and give an overview of examples across different indications. We also provide a review of current practices of health technology assessment (HTA) agencies with respect to the evaluation of safety data. Finally, we describe problems with meta‐analyses of AE data and sketch possible solutions.
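For the time-to-first-AE focus above, the natural nonparametric estimator under varying follow-up is Kaplan-Meier, with 1 − S(t) read as cumulative AE incidence. A minimal sketch (no tied event times, no competing-risk adjustment, both of which the full methodology would address):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate; 1 - S(t) is the cumulative
    incidence of a first AE in the absence of competing risks.
    Assumes distinct event/censoring times for simplicity."""
    order = np.argsort(times)
    times = np.asarray(times)[order]
    events = np.asarray(events)[order]
    n_at_risk = len(times)
    surv, s = [], 1.0
    for t, d in zip(times, events):
        if d:                            # event (first AE) at time t
            s *= 1.0 - 1.0 / n_at_risk
        surv.append((t, s))              # censorings leave s unchanged
        n_at_risk -= 1
    return surv

print(kaplan_meier([1, 2, 3], [1, 0, 1]))
```

Naive "incidence proportions" that ignore differing follow-up times are exactly the kind of estimator such estimand thinking argues against.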

6.
Patient heterogeneity may complicate dose‐finding in phase 1 clinical trials if the dose‐toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method on the basis of a hierarchical Bayesian dose‐toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup‐specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to 3 alternative approaches, based on nonhierarchical models, that make different types of assumptions about within‐subgroup dose‐toxicity curves. The simulations show that the hierarchical model‐based method is recommended in settings where the dose‐toxicity curves are exchangeable between subgroups. We present practical guidelines for application and provide computer programs for trial simulation and conduct.
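"Borrowing strength" between exchangeable subgroups means subgroup estimates are shrunk toward a common value. The full hierarchical CRM is far richer, but the effect can be illustrated with a simple shared Beta prior on subgroup toxicity rates (prior parameters below are arbitrary placeholders):

```python
# Minimal illustration of borrowing strength: raw subgroup toxicity
# rates are shrunk toward the shared prior mean; small subgroups are
# shrunk the most. (This is a sketch, not the paper's CRM model.)
a, b = 2.0, 8.0                        # shared Beta prior, mean 0.2
subgroups = [(1, 3), (4, 12), (0, 2)]  # (toxicities, patients) per subgroup

for tox, n in subgroups:
    raw = tox / n
    post = (tox + a) / (n + a + b)     # posterior mean under the shared prior
    print(f"n={n:2d}  raw={raw:.2f}  shrunken={post:.2f}")
```

Each shrunken estimate lies between the subgroup's raw rate and the prior mean, which is what stabilizes dose selection in low-prevalence subgroups.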

7.
The author extends to the Bayesian nonparametric context the multinomial goodness‐of‐fit tests due to Cressie & Read (1984). Her approach is suitable when the model of interest is a discrete distribution. She provides an explicit form for the tests, which are based on power‐divergence measures between a prior Dirichlet process that is highly concentrated around the model of interest and the corresponding posterior Dirichlet process. In addition to providing interesting special cases and useful approximations, she discusses calibration and the choice of test through examples.
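The frequentist power-divergence family underlying this work is simple to state: for λ = 1 it reduces to Pearson's X², and as λ → 0 it recovers the likelihood-ratio statistic G². A minimal sketch (the Bayesian Dirichlet-process extension itself is not implemented):

```python
import numpy as np

def power_divergence(obs, exp, lam):
    """Cressie-Read power-divergence statistic between observed and
    expected multinomial counts; lam=1 gives Pearson's X^2 and the
    lam -> 0 limit gives the likelihood-ratio statistic G^2."""
    obs = np.asarray(obs, float)
    exp = np.asarray(exp, float)
    if abs(lam) < 1e-12:
        return 2.0 * np.sum(obs * np.log(obs / exp))
    return 2.0 / (lam * (lam + 1.0)) * np.sum(obs * ((obs / exp) ** lam - 1.0))

obs = [30, 50, 20]
exp = [33.3, 33.3, 33.4]   # expected counts under the model of interest
print(power_divergence(obs, exp, lam=1.0))
```

`scipy.stats.power_divergence` implements the same family with a `lambda_` argument, which is a convenient cross-check.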

8.
The authors give the exact coefficient of 1/N in a saddlepoint approximation to the Wilcoxon‐Mann‐Whitney null distribution. This saddlepoint approximation is obtained from an Edgeworth approximation to the exponentially tilted distribution. Moreover, the rate of convergence of the relative error is uniformly of order O(1/N) in a large deviation interval as defined in Feller (1971). The proposed method for computing the coefficient of 1/N can be used to obtain the exact coefficients of 1/N^i, for any i. The exact formulas for the cumulant generating function and the cumulants, needed for these results, are those of van Dantzig (1947‐1950).
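As a baseline against which such approximations are judged, the exact null distribution of the Wilcoxon-Mann-Whitney U statistic can be enumerated for small samples, since under the null all rank assignments are equally likely (this brute-force sketch is not the paper's saddlepoint method and is feasible only for small m, n):

```python
from itertools import combinations
from collections import Counter

def exact_wmw_null(m, n):
    """Exact null distribution of the Mann-Whitney U statistic by
    enumerating all C(m+n, m) equally likely rank assignments."""
    N = m + n
    counts = Counter()
    for group1 in combinations(range(N), m):
        group2 = set(range(N)) - set(group1)
        # U = number of pairs (i, j) with a group-1 rank above a group-2 rank
        u = sum(1 for i in group1 for j in group2 if i > j)
        counts[u] += 1
    total = sum(counts.values())
    return {u: c / total for u, c in sorted(counts.items())}

dist = exact_wmw_null(3, 3)
print(dist)
```

Saddlepoint corrections matter precisely because this enumeration becomes infeasible as N grows while normal approximations are poor in the tails.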

9.
Recent guidance on safety monitoring during drug development, issued by regulatory authorities in the United States and European Union, indicates a shift in focus towards aggregate safety monitoring and scientific evaluation of integrated safety data. The call for program‐level reviews of accumulating safety data, including from ongoing studies, provides an opportunity to leverage the scientific expertise and medical judgment of safety management teams with (a) a multidisciplinary approach, (b) quantitative frameworks to measure level of evidence, and (c) assessments that are product‐specific and driven by medical judgment. A multidisciplinary team, regularly reviewing aggregate safety data throughout the development program, is vital not only for early signal detection but also for generating a better understanding of the accumulating data and context needed for decreasing false alarms.
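One common quantitative framework for "level of evidence" in aggregate safety review is a Bayesian exceedance probability for an AE rate. The rule, prior, and thresholds below are hypothetical placeholders, not anything prescribed by the guidance the abstract describes:

```python
from scipy.stats import beta

# Hypothetical aggregate-safety rule: flag an AE for medical review when
# the posterior probability that its rate exceeds a reference rate p0
# crosses a threshold (Beta-Binomial model with a flat Beta(1, 1) prior).
events, patients, p0 = 9, 100, 0.05
posterior_prob = beta.sf(p0, 1 + events, 1 + patients - events)
flag = posterior_prob > 0.90
print(posterior_prob, flag)
```

The point of such a metric is to channel accumulating counts into a graded signal that a multidisciplinary team then interprets in clinical context, rather than to trigger action mechanically.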

10.
In recent years, immunological science has evolved, and cancer vaccines are available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected. Accordingly, the use of weighted log‐rank tests with the Fleming–Harrington class of weights is proposed for evaluation of survival endpoints. We present a method for calculating the sample size under assumption of a piecewise exponential distribution for the cancer vaccine group and an exponential distribution for the placebo group as the survival model. The impact of delayed effect timing on both the choice of the Fleming–Harrington's weights and the increment in the required number of events is discussed. Copyright © 2014 John Wiley & Sons, Ltd.
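The Fleming-Harrington weighted log-rank statistic weights each event time by S(t−)^ρ (1 − S(t−))^γ, so γ > 0 up-weights late events, matching a delayed vaccine effect. A minimal sketch (two-sample test only; the paper's sample-size calculation is not implemented):

```python
import numpy as np

def fh_logrank(t1, e1, t2, e2, rho=0.0, gamma=0.0):
    """Two-sample log-rank Z statistic with Fleming-Harrington weights
    S(t-)^rho * (1 - S(t-))^gamma, S being the pooled Kaplan-Meier
    estimate; rho = gamma = 0 gives the standard log-rank test."""
    times = np.concatenate([t1, t2])
    events = np.concatenate([e1, e2])
    group = np.concatenate([np.zeros(len(t1)), np.ones(len(t2))])
    num = den = 0.0
    s = 1.0                                   # pooled KM, left limit
    for t in np.unique(times[events == 1]):
        at_risk = times >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 0)).sum()
        d = ((times == t) & (events == 1)).sum()
        d1 = ((times == t) & (events == 1) & (group == 0)).sum()
        w = s ** rho * (1.0 - s) ** gamma
        num += w * (d1 - d * n1 / n)          # observed minus expected
        if n > 1:
            den += w * w * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        s *= 1.0 - d / n
    return num / np.sqrt(den)

print(fh_logrank([1, 3], [1, 1], [2, 4], [1, 1]))  # standard log-rank
```

With ρ = 0, γ = 1 this is the G(0, 1) test often recommended for delayed effects.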

11.
12.
A new class of distributions for exchangeable binary data is proposed that originates from modelling the joint success probabilities of all orders by a power family of completely monotone functions. The distribution proposed allows flexible modelling of the dose–response relationship for both the marginal response probability and the pairwise odds ratio and is especially well suited for a litter-based approach to risk assessment. Specifically, the risk of at least one adverse response within a litter takes on a simple form under the distribution proposed and can be reduced further to a generalized linear model if a complementary log–log-link function is used. Existing distributions such as the beta–binomial or folded logistic functions have a tendency to assign too much probability to zero, leading to an underestimation of the risk that at least one foetus is affected and an overestimation of the safe dose. The distribution proposed does not suffer from this problem. With the aid of symbolic differentiation, the distribution proposed can be fitted easily and quickly via the method of scoring. The usefulness of the class of distributions proposed and its superiority over existing distributions are demonstrated in a series of examples involving developmental toxicology and teratology data.

13.
This paper considers the maximin approach for designing clinical studies. A maximin efficient design maximizes the smallest efficiency when compared with a standard design, as the parameters vary in a specified subset of the parameter space. To specify this subset of parameters in a real situation, a four‐step procedure using elicitation based on expert opinions is proposed. Further, we describe why and how we extend the initially chosen subset of parameters to a much larger set in our procedure. By this procedure, the maximin approach becomes feasible for dose‐finding studies. Maximin efficient designs have been shown to be numerically difficult to construct. However, a new algorithm, the H‐algorithm, considerably simplifies the construction of these designs. We exemplify the maximin efficient approach by considering a sigmoid Emax model describing a dose–response relationship and compare inferential precision with that obtained when using a uniform design. The design obtained is shown to be at least 15% more efficient than the uniform design. © 2014 The Authors. Pharmaceutical Statistics Published by John Wiley & Sons Ltd.

14.
Non‐response is a common problem in survey sampling and this phenomenon can only be ignored at the risk of invalidating inferences from a survey. In order to adjust for unit non‐response, the authors propose a weighting method in which kernel regression is used to estimate the response probabilities. They show that the adjusted estimator is consistent and they derive its asymptotic distribution. They also suggest a means of estimating its variance through a replication‐based technique. Furthermore, a Monte Carlo study allows them to illustrate the properties of the non‐response adjustment and its variance estimator.
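The adjustment has two steps: estimate each unit's response probability by kernel regression on an auxiliary variable, then weight respondents by the inverse of that probability. A minimal Nadaraya-Watson sketch (fixed Gaussian kernel and bandwidth; the paper's asymptotics and variance estimation are not reproduced):

```python
import numpy as np

def kernel_response_prob(x_all, responded, bandwidth=1.0):
    """Nadaraya-Watson estimate of the response probability at each
    sample point, using a Gaussian kernel on an auxiliary variable x."""
    x = np.asarray(x_all, float)
    r = np.asarray(responded, float)
    diffs = (x[:, None] - x[None, :]) / bandwidth
    k = np.exp(-0.5 * diffs ** 2)
    return (k * r[None, :]).sum(axis=1) / k.sum(axis=1)

def adjusted_mean(y, x_all, responded, bandwidth=1.0):
    """Nonresponse-adjusted mean: respondents weighted by the inverse
    of their estimated response probability."""
    p = kernel_response_prob(x_all, responded, bandwidth)
    resp = np.asarray(responded, bool)
    w = 1.0 / p[resp]
    return np.sum(w * np.asarray(y, float)[resp]) / np.sum(w)

print(adjusted_mean([1.0, 2.0, 3.0], [0.0, 1.0, 2.0], [1, 1, 1]))
```

A useful sanity check is that with full response the adjustment is inert and the estimator collapses to the ordinary sample mean.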

15.
Estimating the effect of medical treatments on subject responses is one of the crucial problems in medical research. Matched‐pairs designs are commonly implemented in the field of medical research to eliminate confounding and improve efficiency. In this article, new estimators of treatment effects for heterogeneous matched‐pairs data are proposed. Asymptotic properties of the proposed estimators are derived. Simulation studies show that the proposed estimators have some advantages over Heckman's well‐known estimator, the conditional maximum likelihood estimator, and the inverse probability weighted estimator. We apply the proposed methodology to a data set from a study of low‐birth‐weight infants.

16.
Testing goodness‐of‐fit of commonly used genetic models is of critical importance in many applications including association studies and testing for departure from Hardy–Weinberg equilibrium. Case–control design has become widely used in population genetics and genetic epidemiology, thus it is of interest to develop powerful goodness‐of‐fit tests for genetic models using case–control data. This paper develops a likelihood ratio test (LRT) for testing recessive and dominant models for case–control studies. The LRT statistic has a closed‐form formula with a simple $\chi^{2}(1)$ null asymptotic distribution, thus its implementation is easy even for genome‐wide association studies. Moreover, it has the same power and optimality as when the disease prevalence is known in the population. The Canadian Journal of Statistics 41: 341–352; 2013 © 2013 Statistical Society of Canada
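The general shape of such a test, twice the log-likelihood gain of the unrestricted fit referred to a χ²(1) distribution, can be illustrated on a single binomial proportion (this stand-in is not the paper's closed-form genetic-model statistic):

```python
import numpy as np
from scipy.stats import chi2

def binomial_lrt(x, n, p0):
    """Likelihood-ratio test of H0: p = p0 for x successes in n trials;
    the statistic is asymptotically chi2(1) under H0, mirroring the
    structure of the genetic-model LRT in the abstract."""
    p_hat = x / n
    loglik = lambda p: x * np.log(p) + (n - x) * np.log(1 - p)
    stat = 2.0 * (loglik(p_hat) - loglik(p0))
    return stat, chi2.sf(stat, df=1)

stat, pval = binomial_lrt(60, 100, 0.5)
print(stat, pval)
```

A closed-form statistic with a one-degree-of-freedom reference distribution is what makes genome-wide application cheap: each marker costs one formula evaluation and one chi-square tail probability.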

17.
A goodness‐of‐fit procedure is proposed for parametric families of copulas. The new test statistics are functionals of an empirical process based on the theoretical and sample versions of Spearman's dependence function. Conditions under which this empirical process converges weakly are seen to hold for many families including the Gaussian, Frank, and generalized Farlie–Gumbel–Morgenstern systems of distributions, as well as the models with singular components described by Durante [Durante (2007) Comptes Rendus Mathématique. Académie des Sciences. Paris, 344, 195–198]. Thanks to a parametric bootstrap method that allows valid P‐values to be computed, it is shown empirically that tests based on Cramér–von Mises distances keep their size under the null hypothesis. Simulations attesting the power of the newly proposed tests, comparisons with competing procedures and complete analyses of real hydrological and financial data sets are presented. The Canadian Journal of Statistics 37: 80‐101; 2009 © 2009 Statistical Society of Canada

18.
Assessing dose response from flexible‐dose clinical trials is problematic. The true dose effect may be obscured and even reversed in observed data because dose is related to both previous and subsequent outcomes. To remove selection bias, we propose marginal structural models with inverse probability of treatment weighting (IPTW). Potential clinical outcomes are compared across dose groups using a marginal structural model (MSM) based on a weighted pooled repeated measures analysis (generalized estimating equations with robust estimates of standard errors), with dose effect represented by current dose and recent dose history, and weights estimated from the data (via logistic regression) and determined as products of (i) the inverse probability of receiving the dose assignments that were actually received and (ii) the inverse probability of remaining on treatment by this time. In simulations, this method led to almost unbiased estimates of the true dose effect under various scenarios. Results were compared with those obtained by unweighted analyses and by weighted analyses under various model specifications. The simulation showed that the IPTW MSM methodology is highly sensitive to model misspecification even when weights are known. Practitioners applying MSM should be cautious about the challenges of implementing MSM with real clinical data. Clinical trial data are used to illustrate the methodology. Copyright © 2012 John Wiley & Sons, Ltd.
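The first weight component, the inverse probability of the treatment actually received, can be sketched at a single time point with a logistic propensity model (simulated confounding; the paper's longitudinal weights multiply such terms over visits and add a retention component):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
severity = rng.normal(size=n)                     # confounder: sicker patients
p_treat = 1.0 / (1.0 + np.exp(-severity))         # ...get higher doses
treated = rng.binomial(1, p_treat)

# Propensity scores via logistic regression, then inverse probability
# of the treatment level each patient actually received.
ps = LogisticRegression().fit(severity.reshape(-1, 1), treated)
probs = ps.predict_proba(severity.reshape(-1, 1))
p_received = np.where(treated == 1, probs[:, 1], probs[:, 0])
weights = 1.0 / p_received
print(weights[:5])
```

In longitudinal use these weights are usually stabilized (numerator equal to the marginal treatment probability), since raw inverse probabilities can be extreme and, as the abstract warns, the whole approach is fragile under propensity-model misspecification.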

19.
Mixed‐effects models for repeated measures (MMRM) analyses using the Kenward‐Roger method for adjusting standard errors and degrees of freedom in an “unstructured” (UN) covariance structure are increasingly common in primary analyses for group comparisons in longitudinal clinical trials. We evaluate the performance of an MMRM‐UN analysis using the Kenward‐Roger method when the variance of outcome between treatment groups is unequal. In addition, we provide alternative approaches for valid inferences in the MMRM analysis framework. Two simulations are conducted for cases with (1) unequal variance but equal correlation between the treatment groups and (2) unequal variance and unequal correlation between the groups. Our results in the first simulation indicate that MMRM‐UN analysis using the Kenward‐Roger method based on a common covariance matrix for the groups yields notably poor coverage probability (CP) of confidence intervals for the treatment effect when both the variance and the sample size differ between the groups. In addition, even when the randomization ratio is 1:1, the CP falls seriously below the nominal confidence level if the treatment group with the larger dropout proportion also has the larger variance. MMRM analysis with the Mancl and DeRouen covariance estimator performs relatively better than the traditional MMRM‐UN analysis. In the second simulation, the traditional MMRM‐UN analysis leads to bias in the treatment effect and yields notably poor CP, whereas MMRM analysis fitting separate UN covariance structures for each group provides an unbiased estimate of the treatment effect and an acceptable CP. We do not recommend MMRM‐UN analysis using the Kenward‐Roger method based on a common covariance matrix for treatment groups, although it is frequently seen in applications, when heteroscedasticity between the groups is apparent in incomplete longitudinal data.
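In the simplest degenerate case, one visit and two groups, fitting a common covariance versus separate group variances reduces to the pooled t-test versus Welch's t-test, which conveys the same heteroscedasticity warning in miniature (this sketch is an analogy, not the MMRM simulation itself):

```python
import numpy as np
from scipy import stats

# Fixed toy data with unequal variances and unequal sample sizes: the
# pooled (common-variance) test and Welch's test give different answers,
# mirroring the abstract's warning about assuming a common covariance
# matrix across heteroscedastic groups.
g1 = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2])   # small variance, n = 6
g2 = np.array([12.0, 7.5, 14.2, 6.1, 11.9])         # large variance, n = 5

t_pooled, p_pooled = stats.ttest_ind(g1, g2, equal_var=True)
t_welch, p_welch = stats.ttest_ind(g1, g2, equal_var=False)
print(p_pooled, p_welch)
```

With incomplete longitudinal data the discrepancy compounds, because a common covariance matrix also misallocates information across dropout patterns, hence the abstract's recommendation of separate per-group UN structures.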

20.
This paper considers five test statistics for comparing the recovery of a rapid growth‐based enumeration test with respect to the compendial microbiological method using a specific nonserial dilution experiment. The finite‐sample distributions of these test statistics are unknown, because they are functions of correlated count data. A simulation study is conducted to investigate the type I and type II error rates. For a balanced experimental design, the likelihood ratio test and the main‐effects analysis of variance (ANOVA) test for microbiological methods demonstrated nominal type I error rates and provided the highest power compared with a test on weighted averages and two other ANOVA tests. The likelihood ratio test is preferred because it can also be used for unbalanced designs. It is demonstrated that an increase in power can only be achieved by an increase in the spiked number of organisms used in the experiment. Surprisingly, the power is not affected by the number of dilutions or the number of test samples. A real case study is provided to illustrate the theory. Copyright © 2013 John Wiley & Sons, Ltd.
