Similar Literature (19 results)
1.
The standard methods for analyzing data arising from a ‘thorough QT/QTc study’ are based on multivariate normal models with a common variance structure for both drug and placebo. Such modeling assumptions may be violated, and when the sample sizes are small, the statistical inference can be sensitive to these stringent assumptions. This article proposes a flexible class of parametric models to address the above-mentioned limitations of the currently used models. A Bayesian methodology is used for data analysis, and models are compared using the deviance information criterion. Superior performance of the proposed models over the current models is illustrated through a real dataset obtained from a ‘thorough QT/QTc study’ conducted by GlaxoSmithKline (GSK). Copyright © 2010 John Wiley & Sons, Ltd.
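As background on the model-comparison criterion mentioned above, the deviance information criterion (DIC) is conventionally computed as follows; this is the standard textbook definition, not a detail specific to this study:

    D(\theta) = -2 \log p(y \mid \theta),
    \bar{D} = \mathbb{E}_{\theta \mid y}\!\left[ D(\theta) \right], \qquad p_D = \bar{D} - D(\bar{\theta}),
    \mathrm{DIC} = \bar{D} + p_D = 2\bar{D} - D(\bar{\theta}).

Smaller DIC values indicate better expected predictive performance after penalizing the effective model complexity p_D, which is why the flexible and the standard models can be compared on a common scale.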

2.
Since the implementation of the International Conference on Harmonization (ICH) E14 guideline in 2005, regulators have required a “thorough QTc” (TQT) study for evaluating the effects of investigational drugs on delayed cardiac repolarization as manifested by a prolonged QTc interval. However, TQT studies have increasingly been viewed unfavorably because of their low cost effectiveness. Several researchers have noted that a robust drug concentration–QTc (conc-QTc) modeling assessment in early phase development should, in most cases, obviate the need for a subsequent TQT study. In December 2015, ICH released an “E14 Q&As (R3)” document supporting the use of conc-QTc modeling for regulatory decisions. In this article, we propose a simple improvement of two popular conc-QTc assessment methods for typical first-in-human crossover-like single ascending dose clinical pharmacology trials. The improvement is achieved, in part, by leveraging the intrasubject correlation patterns routinely encountered (and expected) in such trials. A real example involving a single ascending dose trial and the corresponding TQT trial, along with results from a simulation study, illustrates the strong performance of the proposed method. The improved conc-QTc assessment will further enable highly reliable go/no-go decisions in early phase clinical development and deliver results that support subsequent TQT study waivers by regulators.
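For readers unfamiliar with conc-QTc assessment, a commonly used generic linear mixed-effects formulation (not the specific improvement proposed in this article) models the placebo- and baseline-corrected QTc as a linear function of drug concentration:

    \Delta\Delta \mathrm{QTc}_{ij} = (\mu + \eta_i) + (\theta + b_i)\, C_{ij} + \varepsilon_{ij},
    \qquad (\eta_i, b_i) \sim N(0, \Omega), \quad \varepsilon_{ij} \sim N(0, \sigma^2),

where C_{ij} is the plasma concentration for subject i at time j. In regulatory practice, a QTc effect is generally judged negligible when the upper bound of the two-sided 90% confidence interval for the model-predicted ΔΔQTc at the relevant high clinical exposure falls below 10 ms, which is the decision the improved assessment aims to support more reliably.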

3.
Linear mixed-effects models (LMEMs) of concentration–double-delta QTc intervals (ddQTc; QTc intervals corrected for placebo and baseline effects) assume that the concentration measurement error is negligible, which is an incorrect assumption. Previous studies have shown, in linear models, that error in the independent variable can attenuate the slope estimate with a corresponding increase in the intercept. Monte Carlo simulation was used to examine the impact of assay measurement error (AME) on the parameter estimates of an LMEM and a nonlinear MEM (NMEM) concentration–ddQTc interval model from a ‘typical’ thorough QT study. For the LMEM, the type I error rate was unaffected by assay measurement error. Significant slope attenuation (>10%) occurred when the AME exceeded 40%, independent of the sample size. Increasing AME also decreased the between-subject variance of the slope, increased the residual variance, and had no effect on the between-subject variance of the intercept. For a typical analytical assay having an assay measurement error of less than 15%, the relative bias in the estimates of the model parameters and variance components was less than 15% in all cases. The NMEM appeared to be more robust to AME, as most parameters were unaffected by measurement error. Monte Carlo simulation was then used to determine whether the simulation–extrapolation (SIMEX) method of parameter bias correction could be applied to cases of large AME in LMEMs. For analytical assays with large AME (>30%), the SIMEX method could correct biased model parameter estimates to near-unbiased levels. Copyright © 2013 John Wiley & Sons, Ltd.
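The SIMEX correction referenced above can be illustrated with a deliberately simplified sketch: here the error-prone fit is an ordinary least-squares slope rather than the full LMEM of the study, the assay error variance sigma_u2 is assumed known, and all numbers are illustrative only.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated "truth": ddQTc = 3 + 0.05 * concentration + noise
    n = 200
    conc_true = rng.uniform(0, 400, n)
    ddqtc = 3 + 0.05 * conc_true + rng.normal(0, 5, n)

    # Observed concentrations carry additive assay measurement error
    sigma_u2 = 50.0 ** 2          # hypothetical assay error variance (assumption)
    conc_obs = conc_true + rng.normal(0, np.sqrt(sigma_u2), n)

    def ols_slope(x, y):
        """Slope of an ordinary least-squares fit of y on x."""
        return np.polyfit(x, y, 1)[0]

    # SIMEX step 1: add extra error at increasing multiples lam of sigma_u2
    lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    B = 200                        # remeasurement replicates per lambda
    slopes = []
    for lam in lambdas:
        reps = [ols_slope(conc_obs + rng.normal(0, np.sqrt(lam * sigma_u2), n), ddqtc)
                for _ in range(B)]
        slopes.append(np.mean(reps))

    # SIMEX step 2: fit a quadratic in lambda and extrapolate back to lambda = -1,
    # i.e. to the hypothetical error-free measurement
    coefs = np.polyfit(lambdas, slopes, 2)
    slope_simex = np.polyval(coefs, -1.0)
    print(f"naive slope: {slopes[0]:.4f}, SIMEX-corrected slope: {slope_simex:.4f}")

The naive slope is attenuated toward zero because of the measurement error; extrapolating the simulated attenuation curve back to a notional error-free world recovers a slope much closer to the true value.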

4.
Censoring of a longitudinal outcome often occurs when data are collected in a biomedical study in which the interest is in the survival and/or longitudinal experiences of a study population. In the setting considered herein, we encountered upper- and lower-censored data as the result of restrictions imposed on measurements from a kinetic model that produced “biologically implausible” kidney clearances. The goal of this paper is to outline the use of a joint model to determine the association between a censored longitudinal outcome and a time-to-event endpoint. This paper extends Guo and Carlin's [6] approach to accommodate censored longitudinal data, in a commercially available software platform, by linking a mixed-effects Tobit model to a suitable parametric survival distribution. Our simulation results showed that our joint Tobit model outperforms a joint model built on the more naïve “fill-in” method for the longitudinal component, in which the upper and/or lower limits of censoring are replaced by the limit of detection. We illustrate the use of this approach with example data from the hemodialysis (HEMO) study [3] and examine the association between doubly censored kidney clearance values and survival.
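A minimal sketch of the doubly censored normal (Tobit) log-likelihood that underlies the longitudinal component is shown below; it omits the random effects and the linked survival model described in the paper, and the limits lower and upper as well as the toy values are illustrative only.

    import numpy as np
    from scipy.stats import norm

    def tobit_loglik(y, mu, sigma, lower, upper, status):
        """Log-likelihood of doubly censored normal observations.

        status: 0 = observed exactly, -1 = left-censored at `lower`,
                +1 = right-censored at `upper`.
        """
        y = np.asarray(y, float)
        ll = np.where(
            status == 0,
            norm.logpdf(y, loc=mu, scale=sigma),            # exact observation
            np.where(
                status == -1,
                norm.logcdf((lower - mu) / sigma),          # P(Y <= lower)
                norm.logsf((upper - mu) / sigma),           # P(Y >= upper)
            ),
        )
        return ll.sum()

    # Toy usage: kidney-clearance-like values censored outside a plausible range
    y = np.array([1.2, 0.4, 3.0, 3.0, 0.4])
    status = np.array([0, -1, 1, 1, -1])
    print(tobit_loglik(y, mu=1.5, sigma=1.0, lower=0.4, upper=3.0, status=status))

In the joint model, mu would be replaced by a subject-specific linear predictor with random effects, and those random effects (or the current longitudinal value) would then enter the parametric survival distribution.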

5.
Clinical trials often assess whether or not subjects have a disease at predetermined follow-up times. When the response of interest is a recurrent event, a subject may respond at multiple follow-up times over the course of the study. Alternatively, when the response of interest is an irreversible event, a subject is typically observed only until the time at which the response is first detected. However, some recent studies have recorded subjects' responses at follow-up times after an irreversible event is initially observed. This study compares how existing models perform when such failure time data are treated as recurrent events.

6.
Nonlinear mixed-effects (NLME) modeling is one of the most powerful tools for analyzing longitudinal data, especially under sparse sampling designs. The determinant of the Fisher information matrix is a commonly used global metric of the information that can be provided by the data under a given model. However, in clinical studies, it is also important to measure how much information the data provide for a particular parameter of interest under the assumed model, for example, the clearance in population pharmacokinetic models. This paper proposes a new, easy-to-interpret information metric, the “relative information” (RI), which is designed for specific parameters of a model and takes a value between 0% and 100%. We establish the relationship between the interindividual variability of a specific parameter and the variance of the associated parameter estimator, demonstrating that, under a “perfect” experiment (e.g., infinite samples and/or minimal experimental error), the RI converges to 100% and the variance of the model parameter estimator converges to the ratio of the interindividual variability for that parameter to the number of subjects. Extensive simulation experiments and analyses of three real datasets show that the proposed RI metric can accurately characterize the information for parameters of interest in NLME models. The new information metric can readily be used to facilitate study design and model diagnosis.
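The limiting behaviour described above can be written out explicitly; note that the precise definition of RI belongs to the paper, so the ratio in the second line is only a plausible reading consistent with that limit, not a quotation of the authors' formula. For a population parameter θ with interindividual variance ω²_θ estimated from N subjects,

    \operatorname{Var}(\hat{\theta}) \;\longrightarrow\; \frac{\omega^2_{\theta}}{N}
    \quad \text{under a ``perfect'' experiment,}
    \qquad
    \mathrm{RI}(\theta) \;=\; \frac{\omega^2_{\theta}/N}{\operatorname{Var}(\hat{\theta})} \times 100\%
    \;\longrightarrow\; 100\%.

Read this way, RI measures how close the achievable precision for θ under the actual design comes to the precision that interindividual variability alone would permit.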

7.
In this study, we investigated the robustness of methods that assume independent left truncation when applied to competing risks settings with dependent left truncation. We focused specifically on the methods for the proportional cause-specific hazards model and the Fine–Gray model. Simulation experiments showed that these methods are not, in general, robust against dependent left truncation. The magnitude of the bias increased with the strength of the association between the left-truncation and failure times, the effect of the covariate on the competing cause of failure, and the baseline hazard of the left-truncation time.
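For orientation, the standard cause-specific analysis under the independent left-truncation assumption that this study interrogates can be fitted with delayed-entry (left-truncated) risk sets; the sketch below uses the lifelines package, with simulated data and column names that are illustrative only, and it does not implement the Fine–Gray model or any dependence between truncation and failure times.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 300
    z = rng.normal(size=n)
    t1 = rng.exponential(1 / np.exp(0.5 * z))      # latent time to cause 1 (true coef 0.5)
    t2 = rng.exponential(1.5, size=n)              # latent time to competing cause
    entry = rng.uniform(0, 0.3, size=n)            # left-truncation (delayed-entry) times
    time = np.minimum(t1, t2)
    cause = np.where(t1 <= t2, 1, 2)
    keep = time > entry                            # only subjects still event-free at entry are observed
    df = pd.DataFrame({"entry": entry, "time": time, "z": z,
                       "event1": (cause == 1).astype(int)})[keep]

    # Cause-specific Cox model for cause 1: competing events treated as censoring,
    # delayed entry handled through entry_col (valid only under independent truncation)
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event1", entry_col="entry")
    print(cph.summary[["coef", "se(coef)"]])

When the truncation time is associated with the failure times, this risk-set adjustment no longer removes the selection effect, which is the source of the bias quantified in the study.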

8.
Patient heterogeneity may complicate dose-finding in phase 1 clinical trials if the dose-toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method based on a hierarchical Bayesian dose-toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup-specific dose selection and safety rules. A simulation study is presented that compares this method to three alternative approaches, based on nonhierarchical models, that make different assumptions about within-subgroup dose-toxicity curves. The simulations show that the hierarchical model-based method is recommended in settings where the dose-toxicity curves are exchangeable between subgroups. We present practical guidelines for application and provide computer programs for trial simulation and conduct.
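One generic way to write down a hierarchical dose-toxicity model of the kind described, sketched here only to show the borrowing-of-strength structure and not as the authors' exact specification, is a subgroup-specific logistic curve whose parameters are shrunk toward a common population level:

    Y_i \mid (d_{[i]} = x_j,\ g_{[i]} = g) \;\sim\; \mathrm{Bernoulli}\big(\pi_g(x_j)\big),
    \qquad \operatorname{logit} \pi_g(x_j) = \alpha_g + \exp(\beta_g)\, x_j,
    (\alpha_g, \beta_g) \;\sim\; N(\boldsymbol{\mu}, \boldsymbol{\Sigma}),
    \qquad \boldsymbol{\mu}, \boldsymbol{\Sigma} \;\sim\; \text{hyperpriors},

where x_j is a standardized dose, exp(β_g) keeps each curve increasing in dose, and the shared hyperparameters (μ, Σ) are what allow exchangeable subgroups with little data to borrow strength from the others when selecting subgroup-specific doses.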

9.
In this paper, a simulation study is conducted to systematically investigate the impact of different types of missing data on six statistical analyses, namely four likelihood-based linear mixed-effects models and analysis of covariance (ANCOVA) applied to two different data sets, in non-inferiority trial settings for the analysis of longitudinal continuous data. ANCOVA is valid when the missing data are missing completely at random. Likelihood-based linear mixed-effects model approaches are valid when the missing data are missing at random. The pattern-mixture model (PMM) was developed to accommodate a non-random missingness mechanism. Our simulations suggest that two linear mixed-effects models, one using an unstructured covariance matrix for within-subject correlation with no random effects and one using a first-order autoregressive covariance matrix for within-subject correlation with random coefficient effects, provide good control of the type I error (T1E) rate when the missing data are missing completely at random or missing at random. ANCOVA using a last-observation-carried-forward (LOCF) imputed data set is the worst method in terms of bias and T1E rate. The PMM shows little improvement in controlling the T1E rate compared with the other linear mixed-effects models when the missing data are not missing at random, and is markedly inferior when the missing data are missing at random. Copyright © 2009 John Wiley & Sons, Ltd.
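For concreteness, the LOCF imputation criticized above amounts to a per-subject forward fill across scheduled visits; a minimal pandas sketch with illustrative column names and values:

    import pandas as pd

    df = pd.DataFrame({
        "subject": [1, 1, 1, 2, 2, 2],
        "visit":   [1, 2, 3, 1, 2, 3],
        "y":       [10.0, 12.0, None, 9.0, None, None],  # missing post-dropout values
    })

    # LOCF: within each subject, carry the last observed value forward in visit order
    df = df.sort_values(["subject", "visit"])
    df["y_locf"] = df.groupby("subject")["y"].ffill()
    print(df)

Because the carried-forward value ignores both the within-subject trajectory and the reason for dropout, the resulting single "complete" data set understates uncertainty, which is consistent with the poor bias and T1E behaviour reported for ANCOVA on LOCF data.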

10.
A version of the nonparametric bootstrap that resamples entire subjects from the original data, called the case bootstrap, has been increasingly used for estimating the uncertainty of parameters in mixed-effects models. It is usually applied to obtain more robust estimates of the parameters and more realistic confidence intervals (CIs). Alternative bootstrap methods, such as the residual bootstrap and the parametric bootstrap, which resample both random effects and residuals, have been proposed to better account for the hierarchical structure of multi-level and longitudinal data. However, few studies have compared these different approaches. In this study, we used simulation to evaluate the bootstrap methods proposed for linear mixed-effects models. We also compared the results obtained by maximum likelihood (ML) and restricted maximum likelihood (REML). Our simulation studies showed good performance of the case bootstrap as well as of the bootstraps of both random effects and residuals. In contrast, the bootstrap methods that resample only the residuals and the bootstraps combining case and residual resampling performed poorly. REML and ML provided similar bootstrap estimates of uncertainty, but with ML there was slightly more bias and poorer coverage for the variance parameters in the sparse design. We applied the proposed methods to a real dataset from a study investigating the natural evolution of Parkinson's disease and were able to confirm that the methods provide plausible estimates of uncertainty. Given that most real-life datasets tend to exhibit heterogeneity in sampling schedules, the residual bootstraps would be expected to perform better than the case bootstrap. Copyright © 2013 John Wiley & Sons, Ltd.
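A minimal sketch of the case (subject-level) bootstrap for a linear mixed-effects model is shown below, using statsmodels; the formula, column names, simulated data, and number of replicates are illustrative, and the residual and parametric variants compared in the study are not shown.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def case_bootstrap(df, n_boot=100, seed=0):
        """Resample whole subjects with replacement and refit the mixed model."""
        rng = np.random.default_rng(seed)
        subjects = df["subject"].unique()
        estimates = []
        for _ in range(n_boot):
            sampled = rng.choice(subjects, size=len(subjects), replace=True)
            # Relabel duplicated subjects so they act as distinct grouping units
            boot = pd.concat(
                [df[df["subject"] == s].assign(subject=i) for i, s in enumerate(sampled)],
                ignore_index=True,
            )
            fit = smf.mixedlm("y ~ time", data=boot, groups=boot["subject"],
                              re_formula="~time").fit(reml=True)
            estimates.append(fit.fe_params)
        # Percentile bootstrap confidence intervals for the fixed effects
        return pd.DataFrame(estimates).quantile([0.025, 0.975])

    # Toy usage: 30 subjects, 6 time points each, random intercept and slope
    rng = np.random.default_rng(1)
    n_subj, n_obs = 30, 6
    time = np.tile(np.arange(n_obs), n_subj)
    subj = np.repeat(np.arange(n_subj), n_obs)
    b0 = rng.normal(0, 1.0, n_subj)[subj]
    b1 = rng.normal(0, 0.3, n_subj)[subj]
    y = (2 + b0) + (0.5 + b1) * time + rng.normal(0, 0.5, n_subj * n_obs)
    df = pd.DataFrame({"subject": subj, "time": time, "y": y})
    print(case_bootstrap(df, n_boot=50))

Resampling whole subjects preserves the within-subject correlation of each resampled unit, which is why the case bootstrap respects the hierarchical structure without requiring the model-based decomposition used by the residual and parametric bootstraps.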

11.
12.
13.
The authors provide a rigorous large-sample theory for linear models whose response variable has been subjected to the Box-Cox transformation. They provide a continuous asymptotic approximation to the distribution of estimators of the natural parameters of the model. They show, in particular, that the maximum likelihood estimator of the ratio of the slope to the residual standard deviation is consistent and relatively stable. The authors further show the importance of normality of the errors for inference and give tests for normality based on the estimated residuals. For non-normal errors, they give adjustments to the log-likelihood and to the asymptotic standard errors.
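As a brief illustration of the quantities discussed above (an off-the-shelf profile-likelihood fit and a standard residual-normality test, not the authors' asymptotic machinery; all data are simulated), the Box-Cox parameter, the slope-to-residual-SD ratio, and a residual normality check can be obtained as follows:

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import boxcox, shapiro

    rng = np.random.default_rng(0)
    x = rng.uniform(1, 10, 150)
    y = np.exp(0.3 * x + rng.normal(0, 0.2, 150))    # positive, skewed response

    # Profile-likelihood estimate of the Box-Cox parameter lambda
    y_bc, lam = boxcox(y)
    print(f"estimated lambda: {lam:.3f}")

    # Linear model on the transformed scale, then a normality test on its residuals,
    # mirroring the residual-based checks discussed in the paper
    X = sm.add_constant(x)
    fit = sm.OLS(y_bc, X).fit()
    print(f"slope / residual SD: {fit.params[1] / np.sqrt(fit.scale):.3f}")
    print(f"Shapiro-Wilk p-value for residuals: {shapiro(fit.resid).pvalue:.3f}")

The ratio of slope to residual standard deviation is reported because, as the abstract notes, it is the natural parameter whose estimator remains consistent and stable across values of the transformation parameter.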

14.
We use the two-state Markov regime-switching model to explain the behaviour of WTI crude-oil spot prices from January 1986 to February 2012. We investigated methods based on the composite likelihood and on the full likelihood. We found that the composite-likelihood approach can better capture the general structural changes in world oil prices. The two-state Markov regime-switching model based on the composite-likelihood approach closely depicts the cycles of the two postulated states, fall and rise. These two states persist for 8 and 15 months on average, which matches the observed cycles during the period. According to the fitted model, drops in oil prices are more volatile than rises. We believe that this information can be useful for financial officers working in related areas. The model based on the full-likelihood approach was less satisfactory. We attribute its failure to the fact that the two-state Markov regime-switching model is too rigid and overly simplistic. In comparison, the composite likelihood requires only that the model correctly specify the joint distribution of two adjacent price changes, so model violations elsewhere do not invalidate the results. The Canadian Journal of Statistics 41: 353–367; 2013 © 2013 Statistical Society of Canada
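A sketch of the pairwise composite-likelihood idea for a two-state Gaussian regime-switching model is given below: the objective sums contributions from adjacent pairs of price changes only. The parameterization, optimizer settings, and simulated data are illustrative and not taken from the paper.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def neg_log_composite_likelihood(params, x):
        """Negative log composite likelihood built from adjacent pairs (x_t, x_{t+1}).

        params: mu1, mu2, log_s1, log_s2, logit_p12, logit_p21 for a stationary
        two-state Markov chain with Gaussian emissions.
        """
        mu = np.array([params[0], params[1]])
        s = np.exp(params[2:4])
        p12, p21 = 1 / (1 + np.exp(-params[4])), 1 / (1 + np.exp(-params[5]))
        P = np.array([[1 - p12, p12], [p21, 1 - p21]])     # transition matrix
        pi = np.array([p21, p12]) / (p12 + p21)            # stationary distribution

        # Emission densities of every observation under each state, shape (2, T)
        f = np.stack([norm.pdf(x, mu[k], s[k]) for k in range(2)])

        # Pairwise term: sum_{i,j} pi_i P_ij f_i(x_t) f_j(x_{t+1})
        pair = np.einsum("i,ij,it,jt->t", pi, P, f[:, :-1], f[:, 1:])
        return -np.sum(np.log(pair + 1e-300))

    # Toy usage on simulated monthly log price changes (illustrative data only)
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-0.04, 0.10, 120), rng.normal(0.02, 0.05, 180)])
    start = np.array([-0.05, 0.02, np.log(0.1), np.log(0.05), 0.0, 0.0])
    res = minimize(neg_log_composite_likelihood, start, args=(x,), method="Nelder-Mead")
    print(res.x)

Because only the bivariate distribution of adjacent pairs enters the objective, misspecification of longer-range dependence does not directly distort the fit, which is the robustness property emphasized in the abstract.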

15.
Mixed-effects models with two variance components are often used to analyze longitudinal data. For these models, we compare two approaches to estimating the variance components: the analysis-of-variance approach and the spectral decomposition approach. We establish a necessary and sufficient condition for the two approaches to yield identical estimates, as well as some sufficient conditions for the superiority of one approach over the other under the mean squared error criterion. Applications of the methods to circular models and longitudinal data are discussed. Furthermore, simulation results indicate that better estimates of the variance components do not necessarily imply higher power of the tests or shorter confidence intervals.
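As a concrete point of reference for the analysis-of-variance approach, the classical moment estimators for a balanced one-way random-effects model are sketched below; this is a simpler setting than the paper's general two-variance-component models, and the simulated data are illustrative.

    import numpy as np

    def anova_variance_components(y):
        """ANOVA (method-of-moments) estimators for y[i, j] = mu + u_i + e_ij.

        y: balanced data array of shape (m subjects, n repeated measures).
        Returns (sigma2_u, sigma2_e) for the between- and within-subject variances.
        """
        m, n = y.shape
        subject_means = y.mean(axis=1)
        grand_mean = y.mean()
        msb = n * np.sum((subject_means - grand_mean) ** 2) / (m - 1)    # between MS
        msw = np.sum((y - subject_means[:, None]) ** 2) / (m * (n - 1))  # within MS
        sigma2_e = msw
        sigma2_u = max((msb - msw) / n, 0.0)   # truncate at zero if negative
        return sigma2_u, sigma2_e

    # Toy usage: true sigma2_u = 4, true sigma2_e = 1
    rng = np.random.default_rng(0)
    u = rng.normal(0, 2.0, size=(40, 1))
    y = 10 + u + rng.normal(0, 1.0, size=(40, 6))
    print(anova_variance_components(y))

The estimators equate observed mean squares to their expectations, E[MSB] = sigma2_e + n*sigma2_u and E[MSW] = sigma2_e, which is the kind of moment matching whose efficiency the paper compares against the spectral decomposition approach.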

16.
Based on the idea of the Nadaraya–Watson (NW) kernel smoother and the technique of the local linear (LL) smoother, we construct NW and LL estimators of conditional mean functions and their derivatives for a left-truncated and right-censored model. The target function includes the regression function, the conditional moment, and the conditional distribution function as special cases. It is assumed that the lifetime observations with covariates form a stationary α-mixing sequence. Asymptotic normality of the estimators is established. Finite-sample behaviour of the estimators is investigated via simulations, and a real-data illustration is included.
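For readers who want the basic building block, a plain Nadaraya–Watson estimator with a Gaussian kernel and fixed bandwidth is sketched below; it ignores the truncation/censoring weights and the dependence structure that the paper's estimators are designed to handle, and the data are simulated for illustration.

    import numpy as np

    def nw_estimator(x0, x, y, h):
        """Nadaraya-Watson estimate of E[Y | X = x0] with a Gaussian kernel."""
        x0 = np.atleast_1d(x0)
        w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)   # kernel weights
        return (w * y).sum(axis=1) / w.sum(axis=1)

    # Toy usage
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 2 * np.pi, 300)
    y = np.sin(x) + rng.normal(0, 0.3, 300)
    grid = np.linspace(0.5, 5.5, 6)
    print(nw_estimator(grid, x, y, h=0.3))

The local linear variant replaces this locally constant fit with a locally weighted straight-line fit at each x0, which reduces boundary bias and yields derivative estimates, the quantities targeted in the paper.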

17.
Conservation biology aims to assess the status of a population based on information that is often incomplete. Integrated population modelling based on state-space models appears to be a powerful and relevant way of combining several types of information, such as capture-recapture data and population surveys, into a single likelihood. In this paper, the authors describe the principles of integrated population modelling and evaluate its performance for conservation biology through a case study of the black-footed albatross, a northern Pacific albatross species suspected to be impacted by longline fishing.
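The "single likelihood" referred to above is, in generic form and under the usual assumption of independence between data sources, the product of the component likelihoods evaluated at shared demographic parameters; this is a sketch of the general principle, not the authors' black-footed albatross model:

    L_{\mathrm{IPM}}(\theta)
    \;=\;
    \underbrace{L_{\mathrm{count}}\big(y_{1:T} \mid N_{1:T}, \theta\big)}_{\text{state-space model for surveys}}
    \;\times\;
    \underbrace{L_{\mathrm{CR}}\big(m \mid \theta\big)}_{\text{capture-recapture data}},

where the latent abundances N_{1:T} evolve through a demographic process model governed by the same survival and fecundity parameters θ that drive the capture-recapture likelihood, so each data source sharpens the estimates of the parameters it informs only weakly on its own.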

18.
This study demonstrates the decomposition of seasonality and long-term trend in seismological data observed at irregular time intervals. The decomposition was applied to the estimation of earthquake detection capability using cubic B-splines and a Bayesian approach, similar to the seasonal adjustment models frequently used to analyse economic time-series data. We employed numerical simulation to verify the method and then applied it to real earthquake datasets obtained in and around northern Honshu Island, Japan. With this approach, we obtained the seasonality of the detection capability related to the annual variation of wind speed and the long-term trend corresponding to the recent improvement of the seismic network in the studied region.
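Schematically, and only as a generic sketch of this class of decompositions rather than the authors' exact formulation, the detection capability μ(t_i) at irregular observation times t_i is split into a smooth long-term trend and an annual-period seasonal term, each represented by cubic B-spline bases and regularized through Bayesian smoothness priors:

    \mu(t_i)
    \;=\;
    \underbrace{\textstyle\sum_{k} a_k B_k(t_i)}_{\text{long-term trend}}
    \;+\;
    \underbrace{\textstyle\sum_{k} b_k \tilde{B}_k\big(t_i \bmod 1\,\mathrm{yr}\big)}_{\text{seasonal component}}
    \;+\; \varepsilon_i,

with Gaussian difference (random-walk) priors on the coefficients a_k and b_k playing the role of the smoothness and seasonal-stability constraints in a standard Bayesian seasonal-adjustment model; because the basis functions are evaluated at the actual observation times, irregular sampling poses no difficulty.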

19.