Similar Articles
20 similar articles found.
1.
The standard methods for analyzing data arising from a ‘thorough QT/QTc study’ are based on multivariate normal models with a common variance structure for both drug and placebo. Such modeling assumptions may be violated, and when sample sizes are small, the statistical inference can be sensitive to such stringent assumptions. This article proposes a flexible class of parametric models to address the above-mentioned limitations of the currently used models. A Bayesian methodology is used for data analysis, and models are compared using the deviance information criterion. Superior performance of the proposed models over the current models is illustrated through a real dataset from a ‘thorough QT/QTc study’ conducted by GlaxoSmithKline (GSK). Copyright © 2010 John Wiley & Sons, Ltd.
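For context, the deviance information criterion used for this kind of model comparison can be computed from posterior draws alone. A minimal sketch, assuming `log_lik_draws` holds the total log-likelihood of the data at each MCMC draw, and using the variance-based estimate of the effective number of parameters (one of several common variants, not necessarily the one used in the paper):

```python
import numpy as np

def dic(log_lik_draws):
    """DIC from per-draw total log-likelihoods of one model.

    Uses the variance-based estimate of the effective number of
    parameters, pD = var(D)/2, which avoids re-evaluating the
    likelihood at the posterior mean of the parameters.
    """
    d = -2.0 * np.asarray(log_lik_draws)   # deviance at each draw
    return d.mean() + 0.5 * d.var(ddof=1)  # Dbar + pD

# The model with the smaller DIC is preferred, e.g. comparing
# dic(loglik_common_variance) against dic(loglik_flexible_variance).
```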

2.
M-estimation is a widely used technique for robust statistical inference. In this paper, we study model selection and model averaging for M-estimation to simultaneously improve the coverage probability of confidence intervals for the parameters of interest and reduce the impact of heavy-tailed errors or outliers in the response. Under general conditions, we develop robust versions of the focused information criterion and a frequentist model average estimator for M-estimation, and we examine their theoretical properties. In addition, we carry out extensive simulation studies and analyze two real examples to assess the performance of the new procedure, and find that the proposed method produces satisfactory results.
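As background, M-estimation replaces the squared-error loss of least squares with a loss that downweights large residuals. A minimal sketch using statsmodels' robust linear model with the Huber loss; the data-generating setup is illustrative and the paper builds its focused information criterion and model averaging machinery on top of such estimators:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=2, size=n)  # heavy-tailed errors

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                                 # least squares
huber = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # Huber M-estimation

# The M-estimate is typically much less swayed by the outliers.
print(ols.params, huber.params)
```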

3.
We investigate mixed models for repeated measures data from cross-over studies in general, but in particular for data from thorough QT studies. We extend both the conventional random effects model and the saturated covariance model for univariate cross-over data to repeated measures cross-over (RMC) data; the resulting models we call the RMC model and Saturated model, respectively. Furthermore, we consider a random effects model for repeated measures cross-over data previously proposed in the literature. We assess the standard errors of point estimates and the coverage properties of confidence intervals for treatment contrasts under the various models. Our findings suggest: (i) Point estimates of treatment contrasts from all models considered are similar; (ii) Confidence intervals for treatment contrasts under the random effects model previously proposed in the literature do not have adequate coverage properties; the model therefore cannot be recommended for analysis of marginal QT prolongation; (iii) The RMC model and the Saturated model have similar precision and coverage properties; both models are suitable for assessment of marginal QT prolongation; and (iv) The Akaike Information Criterion (AIC) is not a reliable criterion for selecting a covariance model for RMC data in the following sense: the model with the smallest AIC is not necessarily associated with the highest precision for the treatment contrasts, even if the model with the smallest AIC value is also the most parsimonious model.
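For orientation, a minimal sketch of the conventional random-intercept analysis for such data using statsmodels' MixedLM; the data frame `df` and its column names are hypothetical, and the paper's RMC and Saturated models require richer covariance structures than this simple specification supports:

```python
import statsmodels.formula.api as smf

# df: hypothetical long-format data with one row per subject, period
# and time point, holding columns 'qtc', 'treatment', 'period',
# 'time' and 'subject'.
model = smf.mixedlm(
    "qtc ~ C(treatment) * C(time) + C(period)",  # fixed effects
    data=df,
    groups="subject",        # random intercept per subject
)
fit = model.fit(reml=True)
print(fit.summary())         # treatment contrasts by time point
```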

4.
We investigate mixed analysis of covariance models for the 'one-step' assessment of conditional QT prolongation. Initially, we consider three different covariance structures for the data, where between-treatment covariance of repeated measures is modelled respectively through random effects, random coefficients, and through a combination of random effects and random coefficients. In all three of those models, an unstructured covariance pattern is used to model within-treatment covariance. In a fourth model, proposed earlier in the literature, between-treatment covariance is modelled through random coefficients but the residuals are assumed to be independent identically distributed (i.i.d.). Finally, we consider a mixed model with saturated covariance structure. We investigate the precision and robustness of those models by fitting them to a large group of real data sets from thorough QT studies. Our findings suggest: (i) Point estimates of treatment contrasts from all five models are similar. (ii) The random coefficients model with i.i.d. residuals is not robust; the model potentially leads to both under- and overestimation of standard errors of treatment contrasts and therefore cannot be recommended for the analysis of conditional QT prolongation. (iii) The combined random effects/random coefficients model does not always converge; in the cases where it converges, its precision is generally inferior to the other models considered. (iv) Both the random effects and the random coefficients model are robust. (v) The random effects, the random coefficients, and the saturated model have similar precision and all three models are suitable for the one-step assessment of conditional QT prolongation.

5.
Linear mixed‐effects models (LMEMs) of concentration–double‐delta QTc intervals (QTc intervals corrected for placebo and baseline effects) assume that the concentration measurement error is negligible, which is an incorrect assumption. Previous studies have shown, in linear models, that error in the independent variable can attenuate the slope estimate with a corresponding increase in the intercept. Monte Carlo simulation was used to examine the impact of assay measurement error (AME) on the parameter estimates of an LMEM and a nonlinear MEM (NMEM) concentration–ddQTc interval model from a ‘typical’ thorough QT study. For the LMEM, the type I error rate was unaffected by AME. Significant slope attenuation (>10%) occurred when the AME exceeded 40%, independent of the sample size. Increasing AME also decreased the between‐subject variance of the slope, increased the residual variance, and had no effect on the between‐subject variance of the intercept. For a typical analytical assay having an AME of less than 15%, the relative bias in the estimates of the model parameters and variance components was less than 15% in all cases. The NMEM appeared to be more robust to AME, as most parameters were unaffected by measurement error. Monte Carlo simulation was then used to determine whether the simulation–extrapolation method of parameter bias correction could be applied to cases of large AME in LMEMs. For analytical assays with large AME (>30%), the simulation–extrapolation method could correct biased model parameter estimates to near‐unbiased levels. Copyright © 2013 John Wiley & Sons, Ltd.
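To make the simulation–extrapolation (SIMEX) idea concrete, here is a minimal sketch for a simple-regression slope, assuming additive normal measurement error with known standard deviation `sigma_u`. This is a simplification for illustration; the paper applies the method to mixed-effects model parameters:

```python
import numpy as np

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
    """SIMEX bias correction for a simple-regression slope.

    w: numpy array of covariate values observed with additive
    N(0, sigma_u^2) error. For each lambda, extra noise of variance
    lambda * sigma_u^2 is added, the slope is refit and averaged over
    B replicates; the slope-vs-lambda curve is then extrapolated
    quadratically back to lambda = -1, the error-free case.
    """
    rng = np.random.default_rng(seed)

    def slope(x):
        return np.polyfit(x, y, 1)[0]      # OLS slope

    lams = [0.0] + list(lambdas)
    means = []
    for lam in lams:
        if lam == 0.0:
            means.append(slope(w))         # naive (attenuated) slope
            continue
        reps = [slope(w + rng.normal(0, np.sqrt(lam) * sigma_u, w.size))
                for _ in range(B)]
        means.append(np.mean(reps))

    quad = np.polyfit(lams, means, 2)      # slope(lambda) ~ quadratic
    return np.polyval(quad, -1.0)          # extrapolate to lambda = -1
```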

6.
The current guidelines, ICH E14, for the evaluation of non-antiarrhythmic compounds require a 'thorough' QT study (TQT) conducted during clinical development (ICH Guidance for Industry E14, 2005). Owing to the regulatory choice of margin (10 ms), TQT studies must be conducted to rigorous standards to ensure that variability is minimized. Some of the key sources of variation can be controlled by use of randomization, a crossover design, standardization of electrocardiogram (ECG) recording conditions and collection of replicate ECGs at each time point. However, one of the key factors in these studies is the baseline measurement, which, if not controlled and consistent across studies, could lead to significant misinterpretation. In this article, we examine three types of baseline methods widely used in TQT studies to derive a change from baseline in QTc (time-matched, time-averaged and pre-dose-averaged baseline). We discuss the impact of the baseline values on the guidance-recommended 'largest time-matched' analyses. Using simulation, we show the impact of these baseline approaches on the type I error and power for both crossover and parallel group designs, and we show that the power of the study decreases as the number of time points tested in the TQT study increases. A time-matched baseline method is recommended by several authors (Drug Saf. 2005; 28(2):115-125; Health Canada guidance document: guide for the analysis and review of QT/QTc interval data, 2006) owing to the circadian rhythm in QT. However, the impact of the time-matched baseline method on statistical inference and sample size should be considered carefully during the design of a TQT study. The time-averaged baseline had the highest power in comparison with the other baseline approaches.
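The three baseline definitions are easy to state in code. A minimal sketch, assuming a hypothetical long-format data frame `df` with columns 'subject', 'day' ('baseline' or 'dosing'), 'time' (hours relative to dose; negative means pre-dose) and 'qtc'; all names are illustrative, not from the paper:

```python
import pandas as pd

base = df[df["day"] == "baseline"]
dosing = df[df["day"] == "dosing"].copy()

# Time-matched baseline: subtract the baseline-day QTc recorded at
# the same nominal time point.
tm = base.rename(columns={"qtc": "qtc_base"})[["subject", "time", "qtc_base"]]
dosing = dosing.merge(tm, on=["subject", "time"], how="left")
dosing["tm_change"] = dosing["qtc"] - dosing["qtc_base"]

# Time-averaged baseline: subtract the mean of all baseline-day
# measurements for the subject.
ta = base.groupby("subject")["qtc"].mean()
dosing["ta_change"] = dosing["qtc"] - dosing["subject"].map(ta)

# Pre-dose-averaged baseline: subtract the mean of the pre-dose
# replicates taken on the dosing day itself.
pda = dosing[dosing["time"] < 0].groupby("subject")["qtc"].mean()
dosing["pda_change"] = dosing["qtc"] - dosing["subject"].map(pda)
```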

7.
Important progress has been made with model averaging methods over the past decades. For spatial data, however, model averaging has not been well studied. This article studies model averaging methods for the spatial geostatistical linear model. A spatial Mallows criterion is developed to choose the weights for the model averaging estimator. The resulting estimator can achieve asymptotic optimality in terms of L2 loss. Simulation experiments reveal that the proposed estimator is superior to the model averaging estimator based on the Mallows criterion developed for ordinary linear models [Hansen, 2007] and to the model selection estimator using the corrected Akaike information criterion developed for geostatistical linear models [Hoeting et al., 2006]. The Canadian Journal of Statistics 47: 336–351; 2019 © 2019 Statistical Society of Canada
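For reference, the ordinary-linear-model Mallows criterion of Hansen (2007), which serves as the comparison method here, chooses weights on the unit simplex by penalized least squares. A minimal sketch (the paper's spatial criterion modifies this to account for geostatistical dependence):

```python
import numpy as np
from scipy.optimize import minimize

def mallows_weights(y, fits, ks, sigma2):
    """Hansen (2007) Mallows weights for ordinary linear models.

    fits:   (n, M) array of fitted values from M candidate models.
    ks:     length-M array of model dimensions.
    sigma2: residual variance estimate from the largest model.
    Minimizes ||y - fits @ w||^2 + 2*sigma2*(ks @ w) over the simplex.
    """
    ks = np.asarray(ks, float)
    M = fits.shape[1]

    def crit(w):
        resid = y - fits @ w
        return resid @ resid + 2.0 * sigma2 * (ks @ w)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(crit, np.full(M, 1.0 / M), method="SLSQP",
                   bounds=[(0.0, 1.0)] * M, constraints=cons)
    return res.x
```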

8.
Rong Zhu & Xinyu Zhang, Statistics, 2018, 52(1): 205–227
The theories and applications of model averaging have been developed comprehensively in the past two decades. In this paper, we consider model averaging for multivariate multiple regression models. In order to make sufficient use of the correlation information of the dependent variables, we propose a model averaging method based on the Mahalanobis distance, which is related to the correlation of the dependent variables. We prove the asymptotic optimality of the resulting Mahalanobis Mallows model averaging (MMMA) estimators under certain assumptions. In a simulation study, we show that the proposed MMMA estimators compare favourably with model averaging estimators based on AIC and BIC weights and with the Mallows model averaging estimators from the single-dependent-variable regression models. We further apply our method to real data on the urbanization rate and the proportion of the non-agricultural population in ethnic minority areas of China.

9.
This paper is concerned with a model averaging procedure for varying-coefficient partially linear models. We propose a jackknife model averaging method that minimizes a leave-one-out cross-validation criterion, and we develop a computational shortcut for optimizing the cross-validation criterion over the weights. The resulting model average estimator is shown to be asymptotically optimal in the sense of achieving the smallest possible squared error. Simulation studies provide evidence of the superiority of the proposed procedures. Our approach is further applied to a real data set.
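The computational shortcut for leave-one-out cross-validation is well known in the linear case: LOO residuals can be obtained from the ordinary residuals and leverages without refitting. A minimal sketch of jackknife model averaging for plain linear candidate models (the paper extends the idea to varying-coefficient partially linear models):

```python
import numpy as np
from scipy.optimize import minimize

def jma_weights(y, X_list):
    """Jackknife (leave-one-out CV) model averaging weights.

    For each candidate design matrix X, LOO residuals are obtained
    without refitting via e_i / (1 - h_ii), where h_ii is the
    leverage. Weights minimize the squared norm of the weighted
    LOO residuals over the unit simplex.
    """
    n = len(y)
    M = len(X_list)
    E = np.empty((n, M))
    for m, X in enumerate(X_list):
        H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
        e = y - H @ y                           # ordinary residuals
        E[:, m] = e / (1.0 - np.diag(H))        # LOO residuals

    def crit(w):
        r = E @ w
        return r @ r

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(crit, np.full(M, 1.0 / M), method="SLSQP",
                   bounds=[(0.0, 1.0)] * M, constraints=cons)
    return res.x
```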

10.
We study model selection and model averaging in semiparametric partially linear models with missing responses. An imputation method is used to estimate the linear regression coefficients and the nonparametric function. We show that the corresponding estimators of the linear regression coefficients are asymptotically normal. A focused information criterion and frequentist model average estimators are then proposed and their theoretical properties established. Simulation studies are performed to demonstrate the superiority of the proposed methods over existing strategies in terms of mean squared error and coverage probability. Finally, the approach is applied to a real data example.

11.
A diverse range of non‐cardiovascular drugs are associated with QT interval prolongation, which may be associated with a potentially fatal ventricular arrhythmia known as torsade de pointes. QT interval has been assessed for two recent submissions at GlaxoSmithKline. Meta‐analyses of ECG data from several clinical pharmacology studies were conducted for the two submissions. A general fixed effects meta‐analysis approach using summaries of the individual studies was used to calculate a pooled estimate and 90% confidence interval for the difference between each active dose and placebo following both single and repeat dosing separately. The meta‐analysis approach described provided a pragmatic solution to pooling complex and varied studies, and is a good way of addressing regulatory questions on QTc prolongation. Copyright © 2002 John Wiley & Sons, Ltd.
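The pooling step of such a fixed effects meta-analysis is simple to state: each study's active-minus-placebo estimate is weighted by its inverse variance. A minimal sketch (the example numbers in the comment are made up):

```python
import numpy as np
from scipy import stats

def pooled_effect(estimates, ses, level=0.90):
    """Inverse-variance fixed-effects pooling of per-study
    active-minus-placebo differences, with a two-sided CI."""
    est = np.asarray(estimates, float)
    w = 1.0 / np.asarray(ses, float) ** 2      # inverse-variance weights
    pooled = np.sum(w * est) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    z = stats.norm.ppf(0.5 + level / 2.0)
    return pooled, (pooled - z * se, pooled + z * se)

# e.g. pooled_effect([3.1, 4.5, 2.2], [1.8, 2.0, 1.5])  # QTc, in ms
```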

12.
We study the benefit of exploiting the gene–environment independence (GEI) assumption for inferring the joint effect of genotype and environmental exposure on disease risk in a case–control study. By transforming the problem into a constrained maximum likelihood estimation problem we derive the asymptotic distribution of the maximum likelihood estimator (MLE) under the GEI assumption (MLE‐GEI) in a closed form. Our approach uncovers a transparent explanation of the efficiency gained by exploiting the GEI assumption in more general settings, thus bridging an important gap in the existing literature. Moreover, we propose an easy‐to‐implement numerical algorithm for estimating the model parameters in practice. Finally, we conduct simulation studies to compare the proposed method with the traditional prospective logistic regression method and the case‐only estimator. The Canadian Journal of Statistics 47: 473–486; 2019 © 2019 Statistical Society of Canada
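For intuition about the efficiency the GEI assumption buys, the case-only estimator mentioned above is almost a one-liner: under gene–environment independence, the multiplicative G×E interaction can be estimated by regressing genotype on exposure among cases only. A sketch, assuming a hypothetical data frame `cases` of case subjects with binary columns 'G' and 'E':

```python
import numpy as np
import statsmodels.api as sm

# Under GEI, the coefficient of E in a logistic regression of G on E
# among cases estimates the log multiplicative G-x-E interaction.
X = sm.add_constant(cases["E"])
fit = sm.Logit(cases["G"], X).fit()
print(np.exp(fit.params["E"]))   # case-only interaction odds ratio
```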

13.
In this paper, we investigate model selection and model averaging based on rank regression. Under mild conditions, we propose a focused information criterion and a frequentist model averaging estimator for the focused parameters in the rank regression model. Compared to the least squares method, the new method is not only highly efficient but also robust. The large-sample properties of the proposed procedure are established. The finite-sample properties are investigated via an extensive Monte Carlo simulation study. Finally, we use the Boston Housing Price Dataset to illustrate the use of the proposed rank methods.

14.
When a number of distinct models contend for use in prediction, the choice of a single model can offer rather unstable predictions. In regression, stochastic search variable selection with Bayesian model averaging offers a cure for this robustness issue, but at the expense of requiring very many predictors. Here we look at Bayes model averaging incorporating variable selection for prediction. This offers similar mean-square errors of prediction but with a vastly reduced predictor space, which can greatly aid the interpretation of the model; it also reduces the cost if the measured variables have costs. The development here uses decision theory in the context of the multivariate general linear model. In passing, this reduced-predictor-space Bayes model averaging is contrasted with single-model approximations. A fast algorithm for updating regressions in the Markov chain Monte Carlo searches for posterior inference is developed, allowing many more variables than observations to be contemplated. We discuss the merits of absolute rather than proportionate shrinkage in regression, especially when there are more variables than observations. The methodology is illustrated on a set of spectroscopic data used for measuring the amounts of different sugars in an aqueous solution.

15.
This paper is concerned with model selection and model averaging procedures for partially linear single-index models. The profile least squares procedure is employed to estimate regression coefficients for the full model and submodels. We show that the estimators for submodels are asymptotically normal. Based on the asymptotic distribution of the estimators, we derive the focused information criterion (FIC), formulate the frequentist model average (FMA) estimators and construct proper confidence intervals for FMA estimators and the FIC estimator, a special case of FMA estimators. Monte Carlo studies are performed to demonstrate the superiority of the proposed method over the full model, and over models chosen by AIC or BIC, in terms of coverage probability and mean squared error. Our approach is further applied to real data from a male fertility study to explore potential factors related to sperm concentration and estimate the relationship between sperm concentration and monobutyl phthalate.

16.
This paper presents an extension of mean-squared forecast error (MSFE) model averaging for integrating linear regression models computed on data frames of various lengths. The proposed method is a preferable alternative to selecting a single best model by criteria such as the Bayesian information criterion (BIC), the Akaike information criterion (AIC), F-statistics and mean-squared error (MSE), as well as to Bayesian model averaging (BMA) and the naïve simple forecast average. The method is developed to deal with possibly non-nested models having different numbers of observations, and it selects forecast weights by minimizing an unbiased estimator of the MSFE. The proposed method also yields forecast confidence intervals at a given significance level, which is not possible with other model averaging methods. In addition, out-of-sample simulation and empirical testing demonstrate the efficiency of this kind of averaging when forecasting economic processes.

17.
This paper considers model averaging for the ordered probit and nested logit models, which are widely used in empirical research. Within the frameworks of these models, we examine a range of model averaging methods, including the jackknife method, which we prove to have an optimal asymptotic property. We conduct a large-scale simulation study to examine the behaviour of these model averaging estimators in finite samples, and we draw comparisons with model selection estimators. Our results show that while neither averaging nor selection is a consistently better strategy, model selection results in the poorest estimates far more frequently than averaging, and more often than not, averaging yields superior estimates. Among the averaging methods considered, the one based on a smoothed version of the Bayesian information criterion frequently produces the most accurate estimates. In three real data applications, we demonstrate the usefulness of model averaging in mitigating problems associated with the ‘replication crisis’ that commonly arises with model selection.
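A smoothed-BIC weighting scheme of the kind mentioned here is straightforward to compute; a minimal sketch (the exact weighting used in the paper may differ in detail):

```python
import numpy as np

def sbic_weights(bics):
    """Smoothed-BIC model averaging weights: w_m proportional to
    exp(-BIC_m / 2), computed stably by subtracting min(BIC) first."""
    b = np.asarray(bics, float)
    w = np.exp(-0.5 * (b - b.min()))
    return w / w.sum()

# Model-averaged estimate of a common parameter theta:
# theta_avg = sum(w_m * theta_hat_m for each candidate model m).
```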

18.
Family‐based case–control designs are commonly used in epidemiological studies for evaluating the role of genetic susceptibility and environmental exposure to risk factors in the etiology of rare diseases. Within this framework, it is often reasonable to assume that genetic susceptibility and environmental exposure are conditionally independent of each other within families in the source population. We focus on this setting to explore the situation in which measurement error affects the assessment of the environmental exposure. We correct for measurement error through a likelihood‐based method, exploiting a conditional likelihood approach to relate the probability of disease to the genetic and environmental risk factors. We show that this approach provides less biased and more efficient results than the approach based on logistic regression. Regression calibration, instead, provides severely biased estimators of the parameters. The comparison of the correction methods is performed through simulation, under common measurement error structures.
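For reference, regression calibration (the comparator that performs poorly here) replaces the error-prone exposure by an estimate of its conditional expectation given the observed measurements. A minimal sketch for the classical additive-error model with two replicate measurements per subject; this is illustrative only, not the paper's family-based conditional-likelihood setting:

```python
import numpy as np

def regression_calibration(w1, w2):
    """Calibrated exposure from replicates w1, w2 = X + U.

    Estimates the error variance from replicate differences, then
    shrinks the replicate mean toward the overall mean by the
    reliability ratio, approximating E[X | W].
    """
    wbar = (w1 + w2) / 2.0
    sigma_u2 = np.mean((w1 - w2) ** 2) / 2.0       # error variance
    sigma_x2 = wbar.var(ddof=1) - sigma_u2 / 2.0   # true-X variance
    lam = sigma_x2 / (sigma_x2 + sigma_u2 / 2.0)   # reliability of wbar
    return wbar.mean() + lam * (wbar - wbar.mean())
```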

19.
In many clinical trials, biological, pharmacological, or clinical information is used to define candidate subgroups of patients that might have a differential treatment effect. Once the trial results are available, interest will focus on subgroups with an increased treatment effect. Estimating a treatment effect for these groups, together with an adequate uncertainty statement, is challenging owing to the resulting “random high”/selection bias. In this paper, we investigate Bayesian model averaging to address this problem. The general motivation for the use of model averaging is the realization that subgroup selection can be viewed as model selection, so that methods for dealing with model selection uncertainty, such as model averaging, can also be used in this setting. Simulations are used to evaluate the performance of the proposed approach, and we illustrate it with an example from an early‐phase clinical trial.

20.
A complication that may arise in some bioequivalence studies is that of ‘incomplete subject profiles’, caused by missing values at one or more sampling points in the concentration–time curve for some study subjects. We assess the impact of incomplete subject profiles on the assessment of bioequivalence in a standard two‐period crossover design. The specific aim of the investigation is to assess the impact of four different patterns of missing concentration values on the coverage level of a 90% nominal two‐sided confidence interval for the ratio of geometric means, and then to consider the impact on the probability of concluding bioequivalence. An overall conclusion from the results is that random missingness – that is, missingness for reasons unrelated to the bioavailability of the formulation involved or, more generally, to any aspect of the study design and conduct – has a damaging effect on the study conclusions only when the number of missing values is fairly large. On the other hand, a missingness pattern that potentially has a very damaging effect on the study conclusions arises when values are missing ‘late’ in the concentration–time curve. Copyright © 2005 John Wiley & Sons, Ltd.
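The 90% confidence interval for the ratio of geometric means is conventionally computed on the log scale and back-transformed. A minimal sketch for paired crossover exposure data, deliberately ignoring period effects for simplicity; `auc_test` and `auc_ref` are hypothetical per-subject exposure measures:

```python
import numpy as np
from scipy import stats

def be_ratio_ci(auc_test, auc_ref, level=0.90):
    """CI for the ratio of geometric means from paired crossover
    data. Bioequivalence is typically concluded if the whole CI
    lies within (0.80, 1.25)."""
    d = np.log(np.asarray(auc_test)) - np.log(np.asarray(auc_ref))
    d = d[~np.isnan(d)]            # drop subjects with a missing value
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.5 + level / 2.0, df=n - 1)
    lo, hi = d.mean() - t * se, d.mean() + t * se
    return np.exp(lo), np.exp(hi)
```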
