Similar Articles
1.
Baseline adjustment is an important consideration in thorough QT studies for non‐antiarrhythmic drugs. For crossover studies with period‐specific pre‐dose baselines, we propose a by‐time‐point analysis of covariance model with change from pre‐dose baseline as response, treatment as a fixed effect, pre‐dose baseline for current treatment and pre‐dose baseline averaged across treatments as covariates, and subject as a random effect. Additional factors such as period and sex should be included in the model as appropriate. Multiple pre‐dose measurements can be averaged to obtain a pre‐dose‐averaged baseline and used in the model. We provide conditions under which the proposed model is more efficient than other models. We demonstrate the efficiency and robustness of the proposed model both analytically and through simulation studies. The advantage of the proposed model is also illustrated using the data from a real clinical trial. Copyright © 2014 John Wiley & Sons, Ltd.
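
As a rough illustration of this kind of by-time-point ANCOVA, the sketch below fits the model in Python with statsmodels; the data layout and column names (qtc_change, base_trt, base_avg, time_point, subject) are hypothetical, not taken from the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject x treatment x time point.
# qtc_change = change from pre-dose baseline, base_trt = pre-dose baseline
# under the current treatment, base_avg = pre-dose baseline averaged across
# treatments (column names are assumptions for illustration).
df = pd.read_csv("qtc_long.csv")

# By-time-point analysis: fit the ANCOVA separately at each post-dose time
# point, with subject as a random intercept.
fits = {}
for tp, d in df.groupby("time_point"):
    model = smf.mixedlm(
        "qtc_change ~ treatment + base_trt + base_avg + period + sex",
        data=d,
        groups=d["subject"],
    )
    fits[tp] = model.fit(reml=True)
    # The treatment contrast label depends on your factor coding.
    print(tp, fits[tp].params.filter(like="treatment"))
```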

2.
The revised ICH E14 Questions and Answers (R3) document issued in December 2015 enables pharmaceutical companies to use concentration‐QTc (C‐QTc) modeling as the primary analysis for assessing the QTc prolongation risk of new drugs. We introduce a new approach that incorporates a time effect into the current C‐QTc model. Through a simulation study, we evaluated the performance of C‐QTc models with different dependent variables, covariates, and covariance structures. The simulation shows that C‐QTc models with ΔQTc as the dependent variable but without a time effect inflate the false negative rate, and that the choice of dependent variable, covariates, and covariance structure affects the control of both the false negative and false positive rates. C‐QTc modeling strategies with good control of the false negative and false positive rates are recommended.
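
One schematic way to write a C-QTc mixed model that includes a time effect (an illustrative form; the paper's exact specification may differ) is:

\[
\Delta\mathrm{QTc}_{ij} = (\theta_0 + \eta_{0i}) + (\theta_1 + \eta_{1i})\,C_{ij} + \delta_{t(j)} + \varepsilon_{ij},
\]

where \(C_{ij}\) is the drug concentration for subject \(i\) at time point \(j\), \(\delta_{t(j)}\) is a fixed effect of nominal time, \(\eta_{0i}\) and \(\eta_{1i}\) are a random intercept and slope, and the covariance structure of the \(\varepsilon_{ij}\) is one of the modeling choices examined in the simulation.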

3.
Since the implementation of the International Conference on Harmonization (ICH) E14 guideline in 2005, regulators have required a "thorough QTc" (TQT) study for evaluating the effects of investigational drugs on delayed cardiac repolarization as manifested by a prolonged QTc interval. However, TQT studies have increasingly been viewed unfavorably because of their low cost effectiveness. Several researchers have noted that a robust drug concentration‐QTc (conc‐QTc) modeling assessment in early phase development should, in most cases, obviate the need for a subsequent TQT study. In December 2015, ICH released an "E14 Q&As (R3)" document supporting the use of conc‐QTc modeling for regulatory decisions. In this article, we propose a simple improvement of two popular conc‐QTc assessment methods for typical first‐in‐human crossover‐like single ascending dose clinical pharmacology trials. The improvement is achieved, in part, by leveraging routinely encountered (and expected) intrasubject correlation patterns in such trials. A real example involving a single ascending dose and corresponding TQT trial, along with results from a simulation study, illustrates the strong performance of the proposed method. The improved conc‐QTc assessment will further enable highly reliable go/no‐go decisions in early phase clinical development and deliver results that support subsequent TQT study waivers by regulators.
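
For context, the usual regulatory decision rule in conc-QTc assessment (per the ICH E14 Q&As) is that a QTc effect can be excluded if the upper bound of the two-sided 90% confidence interval for the model-predicted ΔΔQTc at the clinically relevant peak concentration stays below 10 ms. A toy calculation, with all numbers invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical estimates from a conc-QTc model fit (made-up numbers).
intercept = 1.2        # ms
slope = 0.008          # ms per ng/mL
se_pred = 1.9          # SE of the predicted ddQTc at c_max
c_max = 450.0          # clinically relevant peak concentration, ng/mL

pred = intercept + slope * c_max
# One-sided 95% quantile gives the upper bound of a two-sided 90% CI.
upper = pred + stats.norm.ppf(0.95) * se_pred
print(f"predicted ddQTc = {pred:.2f} ms, 90% CI upper bound = {upper:.2f} ms")
print("TQT waiver supported" if upper < 10 else "QTc effect not excluded")
```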

4.
Baseline adjustment is an important consideration in thorough QT studies for nonantiarrhythmic drugs. For crossover studies with period‐specific baseline days, we propose an analysis of covariance model with change from time‐matched baseline as response, time‐matched baseline for the current treatment, day‐averaged baseline for the current treatment, time‐matched baseline averaged across treatments, and day‐averaged baseline averaged across treatments as covariates. This model adjusts for within‐subject diurnal effects for each treatment and is more efficient than commonly used models for treatment comparisons. We illustrate the benefit using real clinical trial data. Copyright © 2013 John Wiley & Sons, Ltd.

5.
This paper describes the analysis of two pharmacology assays: the guinea pig papillary muscle assay (an example of an isolated tissue assay) and an assay looking at pressure changes in isolated rat lungs. Both assays use an ascending dose design to minimize carryover effects. This is often necessary in these studies, due to the limited life span of the tissues. Various mixed models, with different covariance structures, are fitted to find the most appropriate model. These are then compared to two other possible methods of analysis: paired t‐tests and two‐way analysis of variance. For both assays, the mixed model was found to be the best approach. These examples illustrate the importance of modelling covariance structure correctly in any ascending dose study, whether in isolated organs/tissues, in animals or phase I volunteers. Copyright © 2003 John Wiley & Sons, Ltd.

6.
We investigate mixed analysis of covariance models for the 'one-step' assessment of conditional QT prolongation. Initially, we consider three different covariance structures for the data, where between-treatment covariance of repeated measures is modelled respectively through random effects, random coefficients, and through a combination of random effects and random coefficients. In all three of those models, an unstructured covariance pattern is used to model within-treatment covariance. In a fourth model, proposed earlier in the literature, between-treatment covariance is modelled through random coefficients but the residuals are assumed to be independent identically distributed (i.i.d.). Finally, we consider a mixed model with saturated covariance structure. We investigate the precision and robustness of those models by fitting them to a large group of real data sets from thorough QT studies. Our findings suggest: (i) Point estimates of treatment contrasts from all five models are similar. (ii) The random coefficients model with i.i.d. residuals is not robust; the model potentially leads to both under- and overestimation of standard errors of treatment contrasts and therefore cannot be recommended for the analysis of conditional QT prolongation. (iii) The combined random effects/random coefficients model does not always converge; in the cases where it converges, its precision is generally inferior to the other models considered. (iv) Both the random effects and the random coefficients model are robust. (v) The random effects, the random coefficients, and the saturated model have similar precision and all three models are suitable for the one-step assessment of conditional QT prolongation.
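
The contrast between the first two covariance structures can be sketched in Python with statsmodels (note that statsmodels' MixedLM assumes i.i.d. residuals, so the unstructured within-treatment pattern used in the paper is not reproduced here; column names are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format QTc data: subject, treatment, time, qtc.
df = pd.read_csv("tqt_repeated.csv")

# (a) Random effects: a random subject intercept induces a constant
#     between-treatment covariance of the repeated measures.
m_re = smf.mixedlm("qtc ~ treatment * time", df, groups=df["subject"]).fit()

# (b) Random coefficients: adding a random treatment slope per subject
#     lets the between-treatment covariance vary.
m_rc = smf.mixedlm(
    "qtc ~ treatment * time", df,
    groups=df["subject"], re_formula="~treatment",
).fit()

print(m_re.summary())
print(m_rc.summary())
```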

7.
We investigate mixed models for repeated measures data from cross-over studies in general, but in particular for data from thorough QT studies. We extend both the conventional random effects model and the saturated covariance model for univariate cross-over data to repeated measures cross-over (RMC) data; the resulting models we call the RMC model and Saturated model, respectively. Furthermore, we consider a random effects model for repeated measures cross-over data previously proposed in the literature. We assess the standard errors of point estimates and the coverage properties of confidence intervals for treatment contrasts under the various models. Our findings suggest: (i) Point estimates of treatment contrasts from all models considered are similar; (ii) Confidence intervals for treatment contrasts under the random effects model previously proposed in the literature do not have adequate coverage properties; the model therefore cannot be recommended for analysis of marginal QT prolongation; (iii) The RMC model and the Saturated model have similar precision and coverage properties; both models are suitable for assessment of marginal QT prolongation; and (iv) The Akaike Information Criterion (AIC) is not a reliable criterion for selecting a covariance model for RMC data in the following sense: the model with the smallest AIC is not necessarily associated with the highest precision for the treatment contrasts, even if the model with the smallest AIC value is also the most parsimonious model.

8.
This article proposes a new approach to analyzing multiple vector autoregressive (VAR) models that yields a newly constructed matrix autoregressive (MtAR) model based on a matrix-variate normal distribution with two covariance matrices. The MtAR model is a generalization of VAR models, and its two covariance matrices allow the extension of MtAR to structural MtAR analysis. The proposed MtAR can also incorporate different lag orders across VAR systems, which gives the model more flexibility. Estimation results from a simulation study and an empirical study of a macroeconomic application show the favorable performance of our proposed models and method.
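
A first-order matrix autoregression with matrix-variate normal errors can be written schematically as (an illustrative form, not necessarily the paper's exact model):

\[
X_t = A X_{t-1} B^{\top} + E_t, \qquad E_t \sim \mathcal{MN}_{m \times n}(0,\, U,\, V),
\]

where the two covariance matrices \(U\) (rows) and \(V\) (columns) satisfy \(\operatorname{vec}(E_t) \sim N(0,\, V \otimes U)\).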

9.
This paper presents empirical likelihood inference for a class of varying-coefficient models with error-prone covariates. We focus on the case where the covariance matrix of the measurement errors is unknown and neither repeated measurements nor validation data are available. We propose an instrumental variable-based empirical likelihood inference method and show that the proposed empirical log-likelihood ratio is asymptotically chi-squared. Confidence intervals for the varying-coefficient functions are then constructed. Simulation studies and a real data application are used to assess the finite sample performance of the proposed empirical likelihood procedure.
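
The empirical log-likelihood ratio referred to here has the generic form (written with assumed notation, where \(g_i(\beta)\) are the instrument-based estimating functions):

\[
R(\beta) = \max\Bigl\{ \prod_{i=1}^{n} n p_i \;:\; p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i\, g_i(\beta) = 0 \Bigr\},
\qquad -2\log R(\beta_0) \xrightarrow{d} \chi^2_q,
\]

and confidence regions follow by inverting the chi-squared calibration.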

10.
We propose a new bivariate negative binomial model with a constant correlation structure, derived from a contagious bivariate distribution of two independent Poisson mass functions by mixing with a proposed bivariate gamma-type density having a constantly correlated covariance structure (Iwasaki & Tsubaki, 2005), which satisfies the integrability condition of McCullagh & Nelder (1989, p. 334). The proposed bivariate gamma-type density comes from a natural exponential family. Joe (1997) points out the need for a multivariate gamma distribution in deriving a multivariate distribution with negative binomial margins, and the lack of a convenient form of the multivariate gamma distribution for obtaining a model with greater flexibility in the dependence structure and indices of dispersion. In this paper we first derive a new bivariate negative binomial distribution and its first two cumulants, and second, formulate bivariate generalized linear models with a constantly correlated negative binomial covariance structure, together with a moment estimator of the components of the covariance matrix. We finally fit the bivariate negative binomial models to two correlated environmental data sets.
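
Schematically (an illustrative construction, not the paper's exact density), the model mixes independent Poisson counts over a correlated bivariate gamma density:

\[
Y_k \mid \lambda_k \stackrel{\mathrm{ind}}{\sim} \mathrm{Poisson}(\lambda_k),\ k = 1, 2, \qquad (\lambda_1, \lambda_2) \sim f_{\mathrm{BG}}(\lambda_1, \lambda_2;\, \alpha, \beta, \rho),
\]

so each margin is negative binomial and the correlation of \((Y_1, Y_2)\) is governed by the constant correlation \(\rho\) of the gamma mixing density.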

11.
In this article, we present a framework for estimating patterned covariances of interest in multivariate linear models. The main idea is to estimate a patterned covariance by minimizing a trace distance function between the outer product of residuals and its expected value. The proposed framework provides explicit estimators, called outer product least-squares estimators, for the parameters of the patterned covariance of the multivariate linear model with or without restrictions on the regression coefficients. The outer product least-squares estimators enjoy desirable properties in finite and large samples, including unbiasedness, invariance, consistency, and asymptotic normality. We then apply the framework to three special situations in which the patterned covariance is, respectively, a uniform correlation, a generalized uniform correlation, and a general q-dependence structure. Simulation studies for the three special cases illustrate that the proposed method is a competitive alternative to the maximum likelihood method in finite samples.
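
In schematic notation (with \(\widehat{E}\) the residual matrix; the exact normalization in the paper may differ), the outer product least-squares estimator solves

\[
\hat\theta = \arg\min_{\theta}\ \operatorname{tr}\!\left[\bigl(\widehat{E}\widehat{E}^{\top} - \mathbb{E}_{\theta}\,\widehat{E}\widehat{E}^{\top}\bigr)^{2}\right],
\]

i.e. it minimizes the squared Frobenius distance between the outer product of residuals and its expected value under the patterned covariance \(\Sigma(\theta)\).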

12.
In geophysical and environmental problems, it is common to have multiple variables of interest measured at the same location and time. These multiple variables typically have dependence over space (and/or time). As a consequence, there is growing interest in developing models for multivariate spatial processes, in particular cross‐covariance models. On the other hand, many data sets these days cover a large portion of the Earth, such as satellite data, which require valid covariance models on a globe. We present a class of parametric covariance models for multivariate processes on a globe. The covariance models are flexible in capturing non‐stationarity in the data, yet computationally feasible, and require moderate numbers of parameters. We apply our covariance model to surface temperature and precipitation data from an NCAR climate model output. We compare our model to the multivariate version of the Matérn cross‐covariance function and to models based on coregionalization, and demonstrate the superior performance of our model in terms of AIC (and/or maximum log-likelihood values) and predictive skill. We also present some challenges in modelling the cross‐covariance structure of the temperature and precipitation data. Based on the fitted results using the full data, we give the estimated cross‐correlation structure between the two variables.

13.
Although stochastic volatility and GARCH (generalized autoregressive conditional heteroscedasticity) models have successfully described the volatility dynamics of univariate asset returns, extending them to multivariate models with dynamic correlations has been difficult because of several major problems. First, there are too many parameters to estimate if the only available data are daily returns, which results in unstable estimates. One solution to this problem is to incorporate additional observations based on intraday asset returns, such as realized covariances. Second, since multivariate asset returns are not synchronously traded, realized covariance matrices must be computed over the largest time intervals in which all asset returns are observed; this fails to make full use of the available intraday information when some assets trade less frequently. Third, it is not straightforward to guarantee that the estimated (and the realized) covariance matrices are positive definite.

Our contributions are the following: (1) we obtain stable parameter estimates for the dynamic correlation models using the realized measures, (2) we make full use of intraday information by using pairwise realized correlations, (3) the covariance matrices are guaranteed to be positive definite, (4) we avoid the arbitrariness of the ordering of asset returns, (5) we propose a flexible correlation structure model (e.g., setting some correlations to zero if necessary), and (6) we propose a parsimonious specification for the leverage effect. The proposed models are applied to the daily returns of nine U.S. stocks with their realized volatilities and pairwise realized correlations, and are shown to outperform existing models with respect to portfolio performance.

14.
The Liouville and Generalized Liouville families have been proposed as parametric models for data constrained to the simplex. These families have generated practical interest owing primarily to inadequacies, such as a completely negative covariance structure, that are inherent in the better-known Dirichlet class. Although there is some numerical evidence suggesting that the Liouville and Generalized Liouville families can produce completely positive and mixed covariance structures, no general paradigms have been developed. Research toward this end might naturally be focused on the many classical "positive dependence" concepts available in the literature, all of which imply a nonnegative covariance structure. However, in this article it is shown that no strictly positive distribution on the simplex can possess any of these classical dependence properties. The same result holds for Liouville and generalized Liouville distributions even if the condition of strict positivity is relaxed.

15.
A model to accommodate time-to-event ordinal outcomes was proposed by Berridge and Whitehead. Very few studies have adopted this approach, despite its appeal in incorporating several ordered categories of event outcome. More recently, there has been increased interest in utilizing recurrent events to analyze practical endpoints in the study of disease history and to help quantify the changing pattern of disease over time. For example, in studies of heart failure, the analysis of a single fatal event no longer provides sufficient clinical information to manage the disease. Similarly, the grade/frequency/severity of adverse events may be more important than simply prolonged survival in studies of toxic therapies in oncology. We propose an extension of the ordinal time-to-event model to allow for multiple/recurrent events in the case of marginal models (where all subjects are at risk for each recurrence, irrespective of whether they have experienced previous recurrences) and conditional models (subjects are at risk of a recurrence only if they have experienced a previous recurrence). These models rely on marginal and conditional estimates of the instantaneous baseline hazard and provide estimates of the probabilities of an event of each severity for each recurrence over time. We outline how confidence intervals for these probabilities can be constructed and illustrate how to fit these models and provide examples of the methods, together with an interpretation of the results.

16.
Longitudinal imaging studies have moved to the forefront of medical research due to their ability to characterize spatio-temporal features of biological structures across the lifespan. Valid inference in longitudinal imaging requires enough flexibility of the covariance model to allow reasonable fidelity to the true pattern. On the other hand, the existence of computable estimates demands a parsimonious parameterization of the covariance structure. Separable (Kronecker product) covariance models provide one such parameterization in which the spatial and temporal covariances are modeled separately. However, evaluating the validity of this parameterization in high dimensions remains a challenge. Here we provide a scientifically informed approach to assessing the adequacy of separable (Kronecker product) covariance models when the number of observations is large relative to the number of independent sampling units (sample size). We address both the general case, in which unstructured matrices are considered for each covariance model, and the structured case, which assumes a particular structure for each model. For the structured case, we focus on the situation where the within-subject correlation is believed to decrease exponentially in time and space, as is common in longitudinal imaging studies. However, the framework applies equally to all covariance patterns used within the more general multivariate repeated measures context. Our approach provides useful guidance for high dimension, low-sample size data that preclude using standard likelihood-based tests. Longitudinal medical imaging data of caudate morphology in schizophrenia illustrate the approach's appeal.
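
A small numerical sketch of the separable structure (dimensions and decay rates are invented for illustration):

```python
import numpy as np

# Separable (Kronecker product) covariance with exponentially decaying
# correlation in space and time, as in the structured case above.
n_space, n_time = 50, 8
phi_s, phi_t = 0.3, 0.5   # hypothetical decay rates

d_s = np.abs(np.subtract.outer(np.arange(n_space), np.arange(n_space)))
d_t = np.abs(np.subtract.outer(np.arange(n_time), np.arange(n_time)))
sigma_s = np.exp(-phi_s * d_s)   # spatial correlation
sigma_t = np.exp(-phi_t * d_t)   # temporal correlation

omega = np.kron(sigma_t, sigma_s)   # full (n_space*n_time) square covariance
print(omega.shape)                  # (400, 400)

# Parsimony: the separable model needs only the parameters of sigma_s and
# sigma_t (here just phi_s and phi_t), versus 400*401/2 free parameters
# for an unstructured covariance of the same dimension.
```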

17.
This paper deals with the problem of quadratic unbiased estimation for models with a linear Toeplitz covariance structure. These serial covariance models are very useful for modelling time or spatial correlations by means of linear models. Optimality and local optimality are examined in different ways. For the nested Toeplitz models, it is shown that no Uniformly Minimum Variance Quadratic Unbiased Estimator exists for at least one linear combination of covariance parameters. Moreover, empirical unbiased estimators are identified as Locally Minimum Variance Quadratic Unbiased Estimators for a particular choice of covariance parameters, corresponding to the case where the covariance matrix of the observed random vector is proportional to the identity matrix. The complete Toeplitz-circulant model is also studied; for this model, the existence of a Uniformly Minimum Variance Quadratic Unbiased Estimator for each covariance parameter is proved.
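
A linear Toeplitz covariance structure can be written (one standard form, assumed here for concreteness) as

\[
\Sigma(\theta) = \sum_{k=0}^{q} \theta_k T_k,
\]

where \(T_0 = I\) and, for \(k \ge 1\), \(T_k\) is the symmetric Toeplitz basis matrix with ones on the \(k\)-th sub- and super-diagonals and zeros elsewhere; the \(\theta_k\) are the covariance parameters to be estimated by quadratic unbiased estimators.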

18.
Linear mixed‐effects models (LMEMs) of concentration–double‐delta QTc intervals (QTc intervals corrected for placebo and baseline effects) assume that the concentration measurement error is negligible, which is an incorrect assumption. Previous studies have shown, in linear models, that error in the independent variable can attenuate the slope estimate with a corresponding increase in the intercept. Monte Carlo simulation was used to examine the impact of assay measurement error (AME) on the parameter estimates of an LMEM and a nonlinear MEM (NMEM) concentration–ddQTc interval model from a 'typical' thorough QT study. For the LMEM, the type I error rate was unaffected by AME. Significant slope attenuation (>10%) occurred when the AME exceeded 40%, independent of the sample size. Increasing AME also decreased the between‐subject variance of the slope, increased the residual variance, and had no effect on the between‐subject variance of the intercept. For a typical analytical assay with an AME of less than 15%, the relative bias in the estimates of the model parameters and variance components was less than 15% in all cases. The NMEM appeared to be more robust to AME, as most parameters were unaffected by measurement error. Monte Carlo simulation was then used to determine whether the simulation–extrapolation (SIMEX) method of parameter bias correction could be applied to cases of large AME in LMEMs. For analytical assays with large AME (>30%), the simulation–extrapolation method corrected biased model parameter estimates to near‐unbiased levels. Copyright © 2013 John Wiley & Sons, Ltd.
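
A toy simulation-extrapolation (SIMEX) run in the spirit of the above (a sketch with invented numbers, not the study's simulation code):

```python
import numpy as np

rng = np.random.default_rng(7)

# Straight-line concentration-response with additive assay measurement error.
n, intercept, slope = 200, 1.0, 0.01
sd_ame = 25.0                                    # assay measurement error SD

conc_true = rng.uniform(0, 500, n)
y = intercept + slope * conc_true + rng.normal(0, 3, n)
conc_obs = conc_true + rng.normal(0, sd_ame, n)  # error-prone concentrations

# SIMEX: add extra error with variance lam * sd_ame**2, refit the slope,
# then extrapolate back to lam = -1 (the no-measurement-error case).
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    b = [np.polyfit(conc_obs + rng.normal(0, np.sqrt(lam) * sd_ame, n), y, 1)[0]
         for _ in range(200)]
    slopes.append(np.mean(b))

coef = np.polyfit(lambdas, slopes, 2)            # quadratic extrapolant
slope_simex = np.polyval(coef, -1.0)
print(f"naive slope = {np.polyfit(conc_obs, y, 1)[0]:.5f}, "
      f"SIMEX slope = {slope_simex:.5f}, true slope = {slope}")
```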

19.
A general theory for the case where a factor has both fixed and random effect levels is developed under a one-way treatment structure model. Estimation procedures for the fixed effects and variance components are considered for the model. Tests of the fixed effects are considered for both known and unknown variance–covariance matrices. Confidence intervals for estimable functions and prediction intervals for predictable functions are constructed. The computational procedures are illustrated using data from an on-farm trial.

20.
To build a linear mixed effects model, one needs to specify the random effects and, often, the associated parametrized covariance matrix structure. Inappropriate specification of these structures can leave the covariance parameters of the model non-identifiable. Non-identifiability can result in extraordinarily wide confidence intervals and unreliable parameter inference. Software sometimes signals model non-identifiability, but not always: in our simulations fitting non-identifiable models, about half the time the software output did not look abnormal. We derive necessary and sufficient conditions for covariance parameter identifiability that do not require any prior model fitting. The results are easy to implement and are applicable to commonly used covariance matrix structures.
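
One generic way to check local identifiability numerically (a sketch of a standard rank condition, not the paper's specific criteria): the covariance parameters are locally identifiable at \(\theta\) if the Jacobian of \(\operatorname{vech}\Sigma(\theta)\) has full column rank.

```python
import numpy as np

def vech(m):
    """Stack the lower triangle of a symmetric matrix into a vector."""
    return m[np.tril_indices_from(m)]

def jacobian_rank(sigma_fn, theta, eps=1e-6):
    """Rank of the numerical Jacobian of vech(Sigma(theta))."""
    theta = np.asarray(theta, dtype=float)
    base = vech(sigma_fn(theta))
    cols = []
    for k in range(theta.size):
        step = np.zeros_like(theta)
        step[k] = eps
        cols.append((vech(sigma_fn(theta + step)) - base) / eps)
    return np.linalg.matrix_rank(np.column_stack(cols))

# Example: random intercept plus i.i.d. residual for 4 repeated measures.
def sigma_fn(theta):
    tau2, sig2 = theta
    return tau2 * np.ones((4, 4)) + sig2 * np.eye(4)

rank = jacobian_rank(sigma_fn, [1.0, 0.5])
print("locally identifiable" if rank == 2 else "not identifiable")
```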
