Similar Documents
20 similar documents retrieved (search time: 62 ms)
1.
Assessing dose-response from flexible-dose clinical trials (e.g., titration or dose-escalation studies) is challenging and often problematic because of the selection bias caused by 'titration-to-response'. We investigate the performance of a dynamic linear mixed-effects (DLME) model and a marginal structural model (MSM) in evaluating dose-response from flexible-dose titration clinical trials via simulation. The simulation results demonstrate that DLME models with previous exposure as a time-varying covariate may provide an unbiased and efficient estimator for recovering the exposure-response relationship from flexible-dose clinical trials. Although MSM models with independent and exchangeable working correlations appeared able to recover the correct direction of the dose-response relationship, they tended to over-correct the selection bias and overestimate the underlying true dose-response. The MSM estimators were also associated with large variability in the parameter estimates. DLME may therefore be an appropriate modeling option for identifying dose-response when data from fixed-dose studies are absent or a fixed-dose design would be unethical to implement.
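To make the DLME idea concrete, here is a minimal sketch (not the authors' code) of a linear mixed-effects fit in which the previous dose enters as a time-varying covariate; the data layout and column names (subject, visit, dose, response) are hypothetical.

```python
# Minimal sketch: linear mixed-effects model with previous dose as a
# time-varying covariate. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("titration_trial.csv")                   # long format: one row per subject-visit
df = df.sort_values(["subject", "visit"])
df["prev_dose"] = df.groupby("subject")["dose"].shift(1)  # lagged exposure
df = df.dropna(subset=["prev_dose"])                      # drop each subject's first visit

# Random intercept per subject; lagged dose enters as a fixed effect.
fit = smf.mixedlm("response ~ prev_dose + visit", df, groups=df["subject"]).fit()
print(fit.summary())
```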

2.
3.
Assessing dose response from flexible-dose clinical trials is problematic: the true dose effect may be obscured, and even reversed, in observed data because dose is related to both previous and subsequent outcomes. To remove this selection bias, we propose a marginal structural model (MSM) approach with inverse-probability-of-treatment weighting (IPTW). Potential clinical outcomes are compared across dose groups using an MSM based on a weighted pooled repeated-measures analysis (generalized estimating equations with robust estimates of standard errors), with the dose effect represented by current dose and recent dose history, and with weights estimated from the data (via logistic regression) as products of (i) the inverse probability of receiving the dose assignments actually received and (ii) the inverse probability of remaining on treatment by that time. In simulations, this method led to almost unbiased estimates of the true dose effect under various scenarios. Results were compared with those obtained by unweighted analyses and by weighted analyses under various model specifications. The simulations showed that the IPTW MSM methodology is highly sensitive to model misspecification even when the weights are known; practitioners applying MSMs should be mindful of the challenges of implementing them with real clinical data. Clinical trial data are used to illustrate the methodology. Copyright © 2012 John Wiley & Sons, Ltd.
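A hedged sketch of the weight construction described above: two logistic regressions estimate (i) the probability of the dose assignment actually received and (ii) the probability of remaining on treatment, and their inverse product is accumulated over visits. All column names are hypothetical; this is not the paper's implementation.

```python
# Sketch of IPTW weight construction for a flexible-dose trial
# (hypothetical column names, not the paper's code).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("trial_long.csv").sort_values(["subject", "visit"])
hist = ["prev_dose", "prev_outcome", "baseline_severity"]   # history covariates

# (i) probability of the dose change actually received, given history
m_dose = LogisticRegression().fit(df[hist], df["dose_increased"])
probs = m_dose.predict_proba(df[hist])
pr_received = probs[np.arange(len(df)), df["dose_increased"].astype(int).to_numpy()]

# (ii) probability of remaining on treatment at this visit, given history
pr_stay = LogisticRegression().fit(df[hist], df["on_treatment"]).predict_proba(df[hist])[:, 1]

# per-visit inverse probability, accumulated within subject
df["w"] = 1.0 / (pr_received * pr_stay)
df["iptw"] = df.groupby("subject")["w"].cumprod()
```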

4.
Sensitivity analysis for unmeasured confounding should be reported more often, especially in observational studies. In the standard Cox proportional hazards model, such analysis requires substantial assumptions and can be computationally difficult. The marginal structural Cox proportional hazards model (Cox proportional hazards MSM) with inverse probability weighting has several advantages over the standard Cox model, including in situations with only one assessment of exposure (point exposure) and time-independent confounders. We describe how simple computations provide a sensitivity analysis for unmeasured confounding in a Cox proportional hazards MSM with point exposure. This is achieved by translating the general framework for sensitivity analysis for MSMs by Robins and colleagues to survival-time data: instead of bias-corrected observations, we correct the hazard rate to adjust for a specified amount of unmeasured confounding. As an additional bonus, the Cox proportional hazards MSM is robust against bias from differential loss to follow-up. As an illustration, the Cox proportional hazards MSM was applied in a reanalysis of the association between smoking and depression in a population-based cohort of Norwegian adults; the association was moderately sensitive to unmeasured confounding.
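For the point-exposure Cox proportional hazards MSM, a minimal sketch of the weighted fit (before any sensitivity correction) might look as follows, assuming the lifelines and scikit-learn libraries and hypothetical column names echoing the smoking-depression example:

```python
# Minimal sketch of a point-exposure Cox MSM fit: IPT weights from a logistic
# propensity model, then a weighted Cox regression with robust standard errors.
# Column names (time, depression, smoker, ...) are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("cohort.csv")
confounders = ["age", "sex", "education"]

ps = LogisticRegression().fit(df[confounders], df["smoker"]).predict_proba(df[confounders])[:, 1]
df["iptw"] = df["smoker"] / ps + (1 - df["smoker"]) / (1 - ps)   # inverse-probability weights

cph = CoxPHFitter()
cph.fit(df[["time", "depression", "smoker", "iptw"]],
        duration_col="time", event_col="depression",
        weights_col="iptw", robust=True)
cph.print_summary()
```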

5.
Summary The Value-at-Risk calculation reduces the dimensionality of the risk-factor space. The main reasons for such simplification include technical efficiency and the logical and statistical appropriateness of the model. In Chapter 2 we present three simple mappings: the mapping onto the market index, principal-components models, and the model with equally correlated risk factors. The comparison of these models in Chapter 3 is based on the literature on the verification of weather forecasts (Murphy and Winkler, 1992; Murphy, 1997). Some considerations on the quantitative analysis are presented in the fourth chapter. In the last chapter we present an empirical analysis of DAX data using XploRe. We acknowledge the support of Deutsche Forschungsgemeinschaft, Sonderforschungsbereich 649 "Economic Risk", MSM 0021620839 and 1K04018.

6.
This article introduces the Markov-Switching Multifractal Duration (MSMD) model by adapting the MSM stochastic volatility model of Calvet and Fisher (2004) to the duration setting. Although the MSMD process is exponential β-mixing as we show in the article, it is capable of generating highly persistent autocorrelation. We study, analytically and by simulation, how this feature of durations generated by the MSMD process propagates to counts and realized volatility. We employ a quasi-maximum likelihood estimator of the MSMD parameters based on the Whittle approximation and establish its strong consistency and asymptotic normality for general MSMD specifications. We show that the Whittle estimation is a computationally simple and fast alternative to maximum likelihood. Finally, we compare the performance of the MSMD model with competing short- and long-memory duration models in an out-of-sample forecasting exercise based on price durations of three major foreign exchange futures contracts. The results of the comparison show that the MSMD and the Long Memory Stochastic Duration model perform similarly and are superior to the short-memory Autoregressive Conditional Duration models.
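For reference, the Whittle approximation mentioned above minimizes a frequency-domain quasi-likelihood; in its standard textbook form (our notation, not reproduced from the article):

```latex
% Whittle quasi-likelihood: f_\theta is the model spectral density and
% I_n the periodogram at the Fourier frequencies \lambda_j = 2\pi j / n.
\hat{\theta}_{W} = \arg\min_{\theta}
\sum_{j=1}^{\lfloor n/2 \rfloor}
\left[ \log f_{\theta}(\lambda_j) + \frac{I_n(\lambda_j)}{f_{\theta}(\lambda_j)} \right]
```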

7.
ABSTRACT

The likelihood of a generalized linear mixed model (GLMM) often involves high-dimensional integrals, which in general cannot be computed explicitly. When direct computation is not available, the method of simulated moments (MSM) is a fairly simple way to estimate the parameters of interest. In this research, we compare parametric bootstrap (PB) and nonparametric bootstrap (NPB) methods for estimating the standard errors of MSM estimators for GLMMs. Simulation results show that when the group size is large, PB and NPB perform similarly; when the group size is medium, NPB performs better than PB in estimating the standard errors of the mean.
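A toy sketch of the two bootstrap schemes being compared, with a simple grouped-mean estimator standing in for the MSM estimator of a GLMM (entirely illustrative):

```python
# Toy contrast of parametric (PB) vs. nonparametric (NPB) bootstrap standard
# errors for a grouped estimator; a grouped mean stands in for the MSM estimator.
import numpy as np

rng = np.random.default_rng(0)
n_groups, group_size = 30, 20
groups = [rng.normal(1.0, 1.0, group_size) for _ in range(n_groups)]

def estimate(gs):
    return np.mean([g.mean() for g in gs])

theta = estimate(groups)

# PB: resimulate every group from the fitted (normal) model.
pb = [estimate([rng.normal(theta, 1.0, group_size) for _ in range(n_groups)])
      for _ in range(500)]

# NPB: resample whole groups with replacement.
npb = [estimate([groups[i] for i in rng.choice(n_groups, n_groups)])
       for _ in range(500)]

print("PB SE:", round(np.std(pb), 4), " NPB SE:", round(np.std(npb), 4))
```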

8.
Over the past decades, various principles for causal effect estimation have been proposed, all differing in how they adjust for measured confounders: via traditional regression adjustment, by adjusting for the expected exposure given those confounders (e.g., the propensity score), or by inversely weighting each subject's data by the likelihood of the observed exposure given those confounders. When the exposure is measured with error, this raises the question of whether these different estimation strategies are differently affected and whether one of them is to be preferred for that reason. In this article, we investigate this by comparing inverse-probability-of-treatment-weighted (IPTW) estimators and doubly robust estimators of the exposure effect in linear marginal structural mean models (MSMs) with G-estimators, propensity score (PS)-adjusted estimators, and ordinary least squares (OLS) estimators of the exposure effect in linear regression models. We find analytically that these estimators are equally affected when exposure misclassification is independent of the confounders, but not otherwise. Simulation studies reveal similar results for time-varying exposures and when the model of interest includes a logistic link.

9.
A parametric marginal structural model (PMSM) approach to causal inference has been favored since the introduction of MSMs by Robins [1998a. Marginal structural models. In: 1997 Proceedings of the American Statistical Association. American Statistical Association, Alexandria, VA, pp. 1-10]. We propose an alternative, nonparametric MSM (NPMSM) approach that extends the definition of causal parameters of interest and causal effects. This approach is appealing in practice because it does not require correct specification of a parametric model but instead relies on a working model that may be deliberately misspecified. We propose a methodology for longitudinal data to generate and estimate so-called NPMSM parameters describing nonparametric causal effects, and provide insight into how to interpret these parameters causally in practice. Results are illustrated with a point-treatment simulation study. The proposed NPMSM approach is compared to the more typical PMSM approach, and we contribute to the general understanding of PMSM estimation by addressing the issue of PMSM misspecification.

10.
Randomized clinical trials are designed to estimate the direct effect of a treatment by randomly assigning patients to receive either treatment or control. However, in some trials, patients who discontinue their initial randomized treatment are allowed to switch to another treatment; the direct treatment effect of interest may therefore be confounded by subsequent treatment. Moreover, the decision on whether to initiate a second-line treatment is typically made based on time-dependent factors that may themselves be affected by prior treatment history. Because of these time-dependent confounders, traditional time-dependent Cox models may produce biased estimators of the direct treatment effect. Marginal structural models (MSMs) have been applied to estimate causal treatment effects even in the presence of time-dependent confounders, but the occurrence of extremely large weights can inflate the variance of the MSM estimators. In this article, we propose a new method for estimating weights in MSMs by adaptively truncating the longitudinal inverse probabilities. This method balances the bias-variance trade-off when large weights are inevitable, without the ad hoc removal of selected observations. We conducted simulation studies to explore the performance of different methods by comparing bias, standard deviation, confidence interval coverage rates, and mean squared error under various scenarios. We also applied these methods to a randomized, open-label, phase III study of patients with nonsquamous non-small cell lung cancer. Copyright © 2015 John Wiley & Sons, Ltd.
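A simplified stand-in for the truncation step: the paper's method chooses the cutoffs adaptively, whereas this sketch clips the longitudinal weights at fixed percentiles purely to show the mechanics.

```python
# Simplified stand-in for weight truncation: clip longitudinal IPT weights at
# fixed percentiles. The paper's adaptive rule selects the cutoff from the
# data; fixed 1st/99th percentiles are used here for illustration only.
import numpy as np

def truncate_weights(w, lower_pct=1.0, upper_pct=99.0):
    lo, hi = np.percentile(w, [lower_pct, upper_pct])
    return np.clip(w, lo, hi)

w = np.random.default_rng(1).lognormal(sigma=1.5, size=1000)  # heavy-tailed weights
w_trunc = truncate_weights(w)
print(f"max weight {w.max():.1f} -> {w_trunc.max():.1f}")     # extremes pulled in
```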

11.
We explore the impact of time-varying subsequent therapy on statistical power and treatment-effect estimates in survival analysis. A marginal structural model (MSM) with stabilized inverse-probability-of-treatment weights (sIPTW) was used to account for the effects of subsequent therapy. Simulations compared the MSM-sIPTW method with the conventional method that does not account for a time-varying covariate, such as subsequent therapy, that depends on the initial response to treatment. The results indicated that the statistical power, and thereby the Type I error, of trials to detect the frontline treatment effect can be inflated if no appropriate adjustment is made for the add-on effects of subsequent therapy; correspondingly, the hazard ratio between treatment groups may be overestimated by conventional analysis methods. In contrast, MSM-sIPTW maintained the Type I error rate and gave unbiased estimates of the hazard ratio for the treatment. Two real examples are used to discuss the potential clinical implications. The study demonstrates the importance of accounting for time-varying subsequent therapy to obtain an unbiased interpretation of the data.
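The stabilized weights (sIPTW) referred to above take the standard form (our notation, not the paper's):

```latex
% Stabilized weight for subject i at time t: the numerator conditions on
% treatment history only, the denominator also on covariate history L.
sw_i(t) = \prod_{k=0}^{t}
\frac{\Pr\left(A_{ik} \mid \bar{A}_{i,k-1}\right)}
     {\Pr\left(A_{ik} \mid \bar{A}_{i,k-1},\, \bar{L}_{ik}\right)}
```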

12.
A. Baccini  M. Fekri  J. Fine 《Statistics》2013,47(4):267-300
Different sorts of bilinear models (models with bilinear interaction terms) are currently used when analyzing contingency tables: association models, correlation models, and so on. All of these can be included in a general family of bilinear models: power models. In this framework, maximum likelihood (ML) estimation is not always possible, as explained in an introductory example; generalized least squares (GLS) estimation is therefore sometimes needed to estimate parameters. A subclass of power models is then considered in this paper: separable reduced-rank (SRR) models. They allow an optimal choice of weights for GLS estimation and simplify asymptotic studies of GLS estimators. Power-2 models belong to the subclass of SRR models, and the asymptotic properties of their GLS estimators are established. Similar results are also established for association models that are not SRR models, although these results are more difficult to prove. Finally, two examples are considered to illustrate our results.
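For reference, the GLS estimator invoked above has the familiar closed form, with the weight matrix chosen optimally as an inverse covariance (general notation, not specific to power models):

```latex
% Generalized least squares with weight matrix W = \Sigma^{-1}:
\hat{\beta}_{\mathrm{GLS}} = \left( X^{\top} W X \right)^{-1} X^{\top} W y
```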

13.
Recent advances in computing make it practical to use complex hierarchical models. However, the complexity makes it difficult to see how features of the data determine the fitted model. This paper describes an approach to diagnostics for hierarchical models, specifically linear hierarchical models with additive normal or t-errors. The key is to express hierarchical models in the form of ordinary linear models by adding artificial 'cases' to the data set corresponding to the higher levels of the hierarchy. The error term of this linear model is not homoscedastic, but its covariance structure is much simpler than that usually used in variance-component or random-effects models. The re-expression has several advantages. First, it is extremely general, covering dynamic linear models, random-effect and mixed-effect models, and pairwise difference models, among others. Second, it makes the geometry of hierarchical models more explicit, by analogy with the geometry of linear models. Third, the analogy with linear models provides a rich source of ideas for diagnostics for all parts of hierarchical models. This paper gives diagnostics to examine candidate added variables, transformations, collinearity, case influence and residuals.
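A small numeric sketch (our illustration, not the paper's code) of the artificial-cases re-expression for a random-intercept model: the shrinkage prior on the group effects is appended as pseudo-observations, so the hierarchical fit reduces to a single ordinary least squares problem.

```python
# 'Artificial cases' re-expression of a random-intercept model
# y = X b + Z u + e, with u ~ N(0, tau^2 I) and e ~ N(0, s^2 I).
# Appending rows [0, (s/tau) I] with response 0 encodes the shrinkage prior
# on u, so the whole fit becomes one least squares problem.
import numpy as np

rng = np.random.default_rng(2)
n, g = 60, 6
grp = np.repeat(np.arange(g), n // g)
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # fixed-effects design
Z = np.eye(g)[grp]                                     # group indicators
y = X @ np.array([1.0, 0.5]) + Z @ rng.normal(0, 2.0, g) + rng.normal(0, 1.0, n)

s, tau = 1.0, 2.0                                      # variance components (assumed known)
A = np.block([[X, Z],
              [np.zeros((g, 2)), (s / tau) * np.eye(g)]])
b = np.concatenate([y, np.zeros(g)])                   # the artificial 'cases'
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
print("fixed effects:", coef[:2], "shrunken group effects:", coef[2:])
```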

14.
韩本三  曹征  黎实 《统计研究》2012,29(7):81-85
This paper extends the RESET test to the specification of binary-choice panel data models, examining specification tests for the fixed-effects Probit and Logit models, including tests for heteroscedasticity, omitted variables, and distributional misspecification. Simulation results show that the RESET specification test for the Logit model has good size and power, whereas the RESET test for the Probit model may perform poorly in some respects owing to the choice of estimation method. Overall, however, the RESET test remains a good choice for specification testing of binary-choice panel data models.

15.
Abstract. Latent variable modelling has gradually become an integral part of mainstream statistics and is currently used for a multitude of applications in different subject areas. Examples of 'traditional' latent variable models include latent class models, item–response models, common factor models, structural equation models, mixed or random effects models and covariate measurement error models. Although latent variables have widely different interpretations in different settings, the models have a very similar mathematical structure. This has been the impetus for the formulation of general modelling frameworks which accommodate a wide range of models. Recent developments include multilevel structural equation models with both continuous and discrete latent variables, multiprocess models and nonlinear latent variable models.

16.
Using response rates from a credit-card direct-mail campaign, this paper discusses how the Logistic model and the classification-tree model differ in variable selection, and attempts to explain the causes of these differences from several angles. We argue that neither method is uniformly superior; the appropriate model should be chosen according to the specific setting and each model's characteristics. The classification-tree model overfits easily on the training set, is sensitive to the influence of individual variables, tends to emphasize risk factors in risk-factor analysis, and has a high detection rate for outliers. The Logistic model is easily affected by dependence among the explanatory variables, tends to select too many variables or factors when categorical variables are included, and is sensitive to outliers but insensitive to noise points. The difference between the models' discriminant functions is the key factor behind their differences in variable selection.
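An illustrative comparison on synthetic data (not the paper's credit-card data) of how the two models rank the same predictors:

```python
# Illustrative comparison of variable ranking by a logistic model vs. a
# classification tree on a synthetic binary response.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           random_state=0)
logit = LogisticRegression(max_iter=1000).fit(X, y)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# The two models typically rank predictors differently, echoing the paper.
print("logit |coef|   :", np.abs(logit.coef_[0]).round(2))
print("tree importance:", tree.feature_importances_.round(2))
```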

17.
The purpose of this paper is to relate a number of multinomial models currently in use for ordinal response data in a unified manner. By studying generalized logit models, proportional generalized odds ratio models and proportional generalized hazard models under different parameterizations, we conclude that there are only four distinct models and that they can be specified generically in a uniform way. These four models all possess the same stochastic-ordering property, and we compare them graphically in a simple case. Data from the NHLBI TYPE II study (Brensike et al. (1984)) are used to illustrate these models. We show that the BMDP programs LE and PR can be employed to compute maximum likelihood estimates for these four models.
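One of the four models, the proportional-odds (cumulative logit) model, can be fitted today with statsmodels rather than the BMDP programs LE and PR; a sketch on synthetic data (not the NHLBI data):

```python
# Proportional-odds (cumulative logit) model, one of the four ordinal models
# discussed above, fitted with statsmodels on synthetic data.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
x = rng.normal(size=500)
latent = 0.8 * x + rng.logistic(size=500)
y = pd.cut(latent, bins=[-np.inf, -1.0, 1.0, np.inf],
           labels=["low", "mid", "high"])          # ordered 3-level response

res = OrderedModel(y.codes, x.reshape(-1, 1), distr="logit").fit(
    method="bfgs", disp=False)
print(res.params)                                  # slope and two thresholds
```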

18.
This paper considers an alternative to iterative procedures used to calculate maximum likelihood estimates of regression coefficients in a general class of discrete data regression models. These models can include both marginal and conditional models and also local regression models. The classical estimation procedure is generally via a Fisher-scoring algorithm and can be computationally intensive for high-dimensional problems. The alternative method proposed here is non-iterative and is likely to be more efficient in high-dimensional problems. The method is demonstrated on two different classes of regression models.

19.
Three Mixed Proportional Hazard models for estimation of unemployment duration when attrition is present are considered. The virtue of these models is that they take account of dependence between failure times in a multivariate failure time distribution context. However, identification in dependent competing risks models is not straightforward. We show that these models, independently derived, are special cases of a general frailty model. It is also demonstrated that the three models are identified by means of identification of the general model. An empirical example illustrates the approach to model dependent failure times.

20.
Summary. Structured additive regression models are perhaps the most commonly used class of models in statistical applications. They include, among others, (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes, and geostatistical and geoadditive models. We consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters, and with non-Gaussian response variables. The posterior marginals are not available in closed form owing to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, in terms of both convergence and computational time. In some practical applications the extent of these problems is such that Markov chain Monte Carlo sampling is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes. Another advantage of our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged.
