Similar Documents
 20 similar documents found
1.
Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level.
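The piecewise exponential formulation in this abstract can be made concrete with a small sketch (ours, not the authors' code; the function name and numbers are hypothetical): with constant hazards within each interval, a cause's contribution to the absolute risk is the probability of surviving to the interval, times the probability of any event within it, times that cause's share of the total hazard.

```python
import numpy as np

def absolute_risk(cuts, hazards, horizon):
    """Absolute risk of each cause by `horizon` under piecewise-constant
    cause-specific hazards.

    cuts    : interval boundaries [0, t1, ..., tJ], with tJ >= horizon
    hazards : (J, K) array, hazard of cause k on interval j
    returns : length-K array of cause-specific absolute risks
    """
    cuts = np.asarray(cuts, float)
    hazards = np.asarray(hazards, float)
    risk = np.zeros(hazards.shape[1])
    surv = 1.0                                   # overall survival at interval start
    for j in range(len(cuts) - 1):
        lo, hi = cuts[j], min(cuts[j + 1], horizon)
        if hi <= lo:
            break
        lam = hazards[j]                         # cause-specific hazards on this interval
        tot = lam.sum()                          # total (all-cause) hazard
        p_event = surv * (1.0 - np.exp(-tot * (hi - lo)))
        risk += p_event * lam / tot              # split events among causes
        surv *= np.exp(-tot * (hi - lo))
    return risk

# One 10-year interval, two causes with hazards 0.02 and 0.01 per year:
r = absolute_risk([0.0, 10.0], [[0.02, 0.01]], horizon=10.0)
```

The two risks sum to the all-cause event probability 1 - exp(-0.3), with each cause's share proportional to its hazard.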

2.
This paper investigates the focused information criterion and plug-in averaging for vector autoregressive models with local-to-zero misspecification. These methods have the advantage of focusing on a quantity of interest rather than aiming at overall model fit. Any (sufficiently regular) function of the parameters can be used as a quantity of interest. We determine the asymptotic properties and elaborate on the role of the locally misspecified parameters. In particular, we show that the inability to consistently estimate locally misspecified parameters translates into suboptimal selection and averaging. We apply this framework to impulse response analysis. A Monte Carlo simulation study supports our claims.

3.
Stochastic Models, 2013, 29(3), 293-312
A parallel is made between the role played by covariances in the determination of auto-regressive models and the role played by impulse responses in the determination of ARMA models.

Auto-regressive models are known to maximize the Burg entropy under covariance constraints. Auto-regressive moving-average models maximize the Burg entropy among processes sharing the same covariances and impulse responses up to a certain lag. Such models are constructed by iterative or algebraic methods under these different constraints.

A new recursive method of identification of the order of an ARMA model is also developed, based on the generalized reflection coefficients.

4.
Both knowledge-based systems and statistical models are typically concerned with making predictions about future observables. Here we focus on assessment of predictive performance and provide two techniques for improving the predictive performance of Bayesian graphical models. First, we present Bayesian model averaging, a technique for accounting for model uncertainty.

Second, we describe a technique for eliciting a prior distribution for competing models from domain experts. We explore the predictive performance of both techniques in the context of a urological diagnostic problem.
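The abstract gives no formulas, but a common textbook route to Bayesian model averaging approximates each model's marginal likelihood via BIC. A minimal sketch under that assumption (function names and BIC values are ours, purely illustrative):

```python
import numpy as np

def bma_weights(bics, priors=None):
    """Posterior model probabilities from BIC values, using the Schwarz
    approximation p(D | M_k) ~ exp(-BIC_k / 2); uniform prior by default."""
    bics = np.asarray(bics, float)
    priors = np.ones_like(bics) if priors is None else np.asarray(priors, float)
    logpost = -0.5 * (bics - bics.min()) + np.log(priors)  # shift for stability
    w = np.exp(logpost)
    return w / w.sum()

def bma_prediction(predictions, weights):
    """Model-averaged prediction: posterior-weighted mix of per-model predictions."""
    return float(np.dot(weights, predictions))

# Three hypothetical models with BICs 100, 102, 110:
w = bma_weights([100.0, 102.0, 110.0])
yhat = bma_prediction([1.0, 2.0, 3.0], w)
```

The best-fitting model dominates but does not monopolize the average, which is how model uncertainty enters the prediction.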

5.
The two experimental methods most commonly used for reducing the effect of noise factors on a response of interest Y aim either to estimate a model of the variability (V(Y), or an associated function) transmitted by the noise factors, or to estimate a model of the relationship between the response (Y) and all the control and noise factors involved. Both methods aim to determine which control factor settings minimise the noise factors' effect on the response of interest, and a series of analytical guidelines has been established to this end. Product array designs allow robustness problems to be solved in both ways, but require a large number of experiments. Practitioners therefore tend to choose more economical designs that only allow them to model the response surface for Y. The general assumption is that both methods lead to similar conclusions. In this article we present a case, based on a product array design, for which the conclusions yielded by the two analytical methods are quite different. This example casts doubt on the guidelines that experimental practice follows when using either of the two methods. Based on this example, we identify the causes of these discrepancies and propose a number of guidelines to help researchers design and interpret robustness studies using either of the two methods.

6.
Summary.  We present a general method of adjustment for non-ignorable non-response in studies where one or more further attempts are made to contact initial non-responders. A logistic regression model relates the probability of response at each contact attempt to covariates and outcomes of interest. We assume that the effect of these covariates and outcomes on the probability of response is the same at all contact attempts. Knowledge of the number of contact attempts enables estimation of the model by using only information from the respondents and the number of non-responders. Three approaches for fitting the response models and estimating parameters of substantive interest and their standard errors are compared: a modified conditional likelihood method in which the fitted inverse probabilities of response are used in weighted analyses for the outcomes of interest, an EM procedure with the Louis formula and a Bayesian approach using Markov chain Monte Carlo methods. We further propose the creation of several sets of weights to incorporate uncertainty in the probability weights in subsequent analyses. Our methods are applied as a sensitivity analysis to a postal survey of symptoms in Persian Gulf War veterans and other servicemen.

7.
It is well known that M-estimation is a widely used method for robust statistical inference, and varying coefficient models have been widely applied in many scientific areas. In this paper, we consider M-estimation and model identification of bivariate varying coefficient models for longitudinal data. We use bivariate tensor-product B-splines to approximate the coefficient functions and obtain M-type regression splines by minimizing a convex objective function; mean and median regression are included in this class. Moreover, with a double smoothly clipped absolute deviation (SCAD) penalty, we study the problem of simultaneous structure identification and estimation. Under appropriate conditions, we show that the proposed procedure possesses the oracle property, in the sense that it is as efficient as the estimator obtained when the true model is known prior to statistical analysis. Simulation studies are carried out to demonstrate the power of the proposed methods with finite samples, and the methodology is illustrated with an analysis of a real data example.
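The SCAD penalty referenced above has the standard closed form of Fan and Li (2001); the sketch below (ours, not the paper's code) implements it, with the conventional default a = 3.7:

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001), applied elementwise: linear
    (lasso-like) near zero, quadratic taper on (lam, a*lam], and constant
    beyond a*lam, so large coefficients are not shrunk; this unbiasedness
    for large effects underlies the oracle property discussed above."""
    t = np.abs(np.asarray(theta, float))
    linear = lam * t
    taper = -(t**2 - 2.0 * a * lam * t + lam**2) / (2.0 * (a - 1.0))
    flat = (a + 1.0) * lam**2 / 2.0
    return np.where(t <= lam, linear, np.where(t <= a * lam, taper, flat))
```

The three pieces meet continuously at |theta| = lam and |theta| = a*lam, which is what makes the penalized objective well behaved.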

8.
In an earlier paper we suggested a method for the identification and estimation of linear transfer function models. The method was claimed to be especially suitable for polynomial transfer function models. In this paper we shall consider the case of rational transfer function models (distributed lag models) in more detail. A simple method for the estimation of the parameters of multiple input rational distributed lag models is suggested. The method is based on simple linear identities that the parameters always fulfill. The asymptotic distribution of the proposed estimator is derived. Two illustrative examples of the use of the new method are given.

9.
Designing an experiment to fit a response surface model typically involves selecting among several candidate designs. There are often many competing criteria that could be considered in selecting the design, and practitioners are typically forced to make trade-offs between these objectives when choosing the final design. Traditional alphabetic optimality criteria are often used in evaluating and comparing competing designs. These optimality criteria are single-number summaries for quality properties of the design such as the precision with which the model parameters are estimated or the uncertainty associated with prediction. Other important considerations include the robustness of the design to model misspecification and potential problems arising from spurious or missing data. Several qualitative and quantitative properties of good response surface designs are discussed, and some of their important trade-offs are considered. Graphical methods for evaluating design performance for several important response surface problems are discussed and we show how these techniques can be used to compare competing designs. These graphical methods are generally superior to the simplistic summaries of alphabetic optimality criteria. Several special cases are considered, including robust parameter designs, split-plot designs, mixture experiment designs, and designs for generalized linear models.
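As a concrete instance of an alphabetic optimality criterion, the sketch below (our illustration, with hypothetical 4-run designs) computes a scaled D-criterion for two candidate one-factor designs:

```python
import numpy as np

def d_criterion(X):
    """Scaled D-criterion |X'X|^(1/p) / n for an n x p model matrix X.
    Larger values mean the model parameters are estimated more precisely."""
    n, p = X.shape
    return np.linalg.det(X.T @ X) ** (1.0 / p) / n

def model_matrix(x):
    """First-order model in one factor: intercept + x."""
    x = np.asarray(x, float)
    return np.column_stack([np.ones_like(x), x])

# Two hypothetical 4-run candidate designs on [-1, 1]:
extreme = d_criterion(model_matrix([-1, -1, 1, 1]))      # runs at the extremes
spread = d_criterion(model_matrix([-1, -1/3, 1/3, 1]))   # runs spread inward
```

For a first-order model the extreme design is D-optimal; spreading runs into the interior lowers the D-criterion but can help detect curvature, exactly the kind of trade-off a single-number summary hides.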

10.
Varying-coefficient models have been widely used to investigate the possible time-dependent effects of covariates when the response variable comes from a normal distribution. Much progress has been made on inference and variable selection in the framework of such models. However, the identification of the model structure, that is, which covariates have time-varying effects and which have fixed effects, remains a challenging and unsolved problem, especially when the dimension of the covariates is much larger than the sample size. In this article, we consider the structure identification and variable selection problems in varying-coefficient models for high-dimensional data. Using a modified basis expansion approach and group variable selection methods, we propose a unified procedure to simultaneously identify the model structure, select important variables and estimate the coefficient curves. The unique feature of the proposed approach is that the model structure need not be specified in advance, making it more realistic and appropriate for real data analysis. Asymptotic properties of the proposed estimators are derived under regularity conditions. Furthermore, we evaluate the finite-sample performance of the proposed methods with Monte Carlo simulation studies and a real data analysis.

11.
In the analysis of time-to-event data, competing risks occur when multiple event types are possible, and the occurrence of a competing event precludes the occurrence of the event of interest. In this situation, statistical methods that ignore competing risks can result in biased inference regarding the event of interest. We review the mechanisms that lead to bias and describe several statistical methods that have been proposed to avoid bias by formally accounting for competing risks in the analyses of the event of interest. Through simulation, we illustrate that Gray's test should be used in lieu of the logrank test for nonparametric hypothesis testing. We also compare the two most popular models for semiparametric modelling: the cause-specific hazards (CSH) model and the Fine-Gray (F-G) model. We explain how to interpret estimates obtained from each model and identify conditions under which the estimates of the hazard ratio and subhazard ratio differ numerically. Finally, we evaluate several model diagnostic methods with respect to their sensitivity to detect lack of fit when the CSH model holds but the F-G model is misspecified, and vice versa. Our results illustrate that adequacy of model fit can strongly impact the validity of statistical inference. We recommend that analysts incorporate a model diagnostic procedure, and a contingency to explore other appropriate models, when designing trials in which competing risks are anticipated.
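One bias mechanism this abstract alludes to is treating competing events as censoring, which overestimates incidence. The nonparametric quantity that fixes this is the cumulative incidence function; a toy Aalen-Johansen-style sketch (ours, assuming distinct event times and no tied observations) follows:

```python
import numpy as np

def cumulative_incidence(times, causes, cause=1):
    """Aalen-Johansen-type estimate of the cumulative incidence of `cause`.
    Toy version: distinct event times; causes coded 1, 2, ... (0 = censored).
    Naively treating competing events as censoring (1 - Kaplan-Meier) would
    overestimate this quantity."""
    order = np.argsort(times)
    times = np.asarray(times, float)[order]
    causes = np.asarray(causes)[order]
    n = len(times)
    surv, cif, path = 1.0, 0.0, []
    for i, (t, c) in enumerate(zip(times, causes)):
        at_risk = n - i
        if c == cause:
            cif += surv / at_risk        # S(t-) times the cause-specific hazard jump
        if c != 0:
            surv *= 1.0 - 1.0 / at_risk  # update overall event-free survival
        path.append((t, cif))
    return path

# Four subjects, two causes: cause-1 events at t = 1, 3, 4; a cause-2 event at t = 2.
cif1 = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 1], cause=1)
```

The cause-1 incidence levels off below 1 because one subject failed from the competing cause; Gray's test compares curves of exactly this kind between groups.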

12.
Dynamic stochastic general equilibrium (DSGE) models contain variables that cannot be observed directly, and their cross-equation restrictions involve complex nonlinear relationships, so analytical estimation of the equations is infeasible. We identify DSGE models within a Bayesian framework: measurement and state-transition equations are formulated using the state-space approach, an auxiliary particle filter is used to predict the conditional posterior distribution, and Bayesian error bands are constructed to characterise the dynamics of the impulse response functions of the macroeconomic variables. An analysis of real data confirms the effectiveness of the Bayesian identification method.

13.
We address the task of choosing prior weights for models that are to be used for weighted model averaging. Models that are very similar should usually be given smaller weights than models that are quite distinct. Otherwise, the importance of a model in the weighted average could be increased by augmenting the set of models with duplicates of the model or virtual duplicates of it. Similarly, the importance of a particular model feature (a certain covariate, say) could be exaggerated by including many models with that feature. Ways of forming a correlation matrix that reflects the similarity between models are suggested. Then, weighting schemes are proposed that assign prior weights to models on the basis of this matrix. The weighting schemes give smaller weights to models that are more highly correlated. Other desirable properties of a weighting scheme are identified, and we examine the extent to which these properties are held by the proposed methods. The weighting schemes are applied to real data, and prior weights, posterior weights and Bayesian model averages are determined. For these data, empirical Bayes methods were used to form the correlation matrices that yield the prior weights. Predictive variances are examined, as empirical Bayes methods can result in unrealistically small variances.

14.
Bayesian estimation via MCMC methods opens up new possibilities in estimating complex models. However, there is still considerable debate about how selection among a set of candidate models, or averaging over closely competing models, might be undertaken. This article considers simple approaches for model averaging and choice using predictive and likelihood criteria and associated model weights on the basis of output for models that run in parallel. The operation of such procedures is illustrated with real data sets and a linear regression with simulated data where the true model is known.

15.
Three Mixed Proportional Hazard models for estimation of unemployment duration when attrition is present are considered. The virtue of these models is that they take account of dependence between failure times in a multivariate failure time distribution context. However, identification in dependent competing risks models is not straightforward. We show that these models, independently derived, are special cases of a general frailty model. It is also demonstrated that the three models are identified by means of identification of the general model. An empirical example illustrates the approach to model dependent failure times.

16.

Item response models are essential tools for analyzing results from many educational and psychological tests. Such models are used to quantify the probability of correct response as a function of unobserved examinee ability and other parameters explaining the difficulty and the discriminatory power of the questions in the test. Some of these models also incorporate a threshold parameter for the probability of the correct response to account for the effect of guessing the correct answer in multiple choice type tests. In this article we consider fitting of such models using the Gibbs sampler. A data augmentation method to analyze a normal-ogive model incorporating a threshold guessing parameter is introduced and compared with a Metropolis-Hastings sampling method. The proposed method is an order of magnitude more efficient than the existing method. Another objective of this paper is to develop Bayesian model choice techniques for model discrimination. A predictive approach based on a variant of the Bayes factor is used and compared with another decision theoretic method which minimizes an expected loss function on the predictive space. A classical model choice technique based on a modified likelihood ratio test statistic is shown as one component of the second criterion. As a consequence the Bayesian methods proposed in this paper are contrasted with the classical approach based on the likelihood ratio test. Several examples are given to illustrate the methods.
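The item response curve described above, a normal ogive with a guessing floor, has a simple closed form; the sketch below (our illustration, parameter values hypothetical) shows how ability, discrimination, difficulty and the threshold parameter combine:

```python
import math

def p_correct(theta, a, b, c):
    """Normal-ogive item response curve with a guessing (threshold) parameter:
    P(correct) = c + (1 - c) * Phi(a * (theta - b)),
    where theta is examinee ability, a the discrimination, b the difficulty,
    and c the floor contributed by guessing on a multiple-choice item."""
    phi = 0.5 * (1.0 + math.erf(a * (theta - b) / math.sqrt(2.0)))
    return c + (1.0 - c) * phi
```

As ability falls far below the difficulty, the probability approaches the guessing floor c rather than zero, which is what the threshold parameter buys over the plain normal-ogive model.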

17.
A method based on pseudo-observations has been proposed for direct regression modeling of functionals of interest with right-censored data, including the survival function, the restricted mean and the cumulative incidence function in competing risks. The models, once the pseudo-observations have been computed, can be fitted using standard generalized estimating equation software. Regression models can however yield problematic results if the number of covariates is large in relation to the number of events observed. Guidelines of events per variable are often used in practice. These rules of thumb for the number of events per variable have primarily been established based on simulation studies for the logistic regression model and Cox regression model. In this paper we conduct a simulation study to examine the small sample behavior of the pseudo-observation method to estimate risk differences and relative risks for right-censored data. We investigate how coverage probabilities and relative bias of the pseudo-observation estimator interact with sample size, number of variables and average number of events per variable.
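The pseudo-observation construction itself is a standard leave-one-out jackknife; a minimal sketch (ours, with a hypothetical survival functional and toy data) shows the idea:

```python
import numpy as np

def pseudo_observations(values, estimator):
    """Leave-one-out (jackknife) pseudo-observations for any functional:
    theta_i = n * theta_hat - (n - 1) * theta_hat_without_i."""
    values = np.asarray(values, float)
    n = len(values)
    full = estimator(values)
    return np.array([n * full - (n - 1) * estimator(np.delete(values, i))
                     for i in range(n)])

# Sanity check: with no censoring, the pseudo-observation for S(t) reduces
# to the indicator 1{T_i > t}, so it behaves like a complete-data response
# that can be fed to standard GEE software.
times = np.array([1.0, 3.0, 5.0, 7.0])
surv_at_4 = lambda v: float(np.mean(v > 4.0))
po = pseudo_observations(times, surv_at_4)
```

With censoring, `estimator` would be a Kaplan-Meier (or cumulative incidence) functional, and the pseudo-observations deviate from simple indicators; the small-sample behaviour of that case is what the simulation study above examines.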

18.
Shi, Yushu; Laud, Purushottam; Neuner, Joan. Lifetime Data Analysis, 2021, 27(1), 156-176

In this paper, we first propose a dependent Dirichlet process (DDP) model using a mixture of Weibull models with each mixture component resembling a Cox model for survival data. We then build a Dirichlet process mixture model for competing risks data without regression covariates. Next we extend this model to a DDP model for competing risks regression data by using a multiplicative covariate effect on subdistribution hazards in the mixture components. Though built on proportional hazards (or subdistribution hazards) models, the proposed nonparametric Bayesian regression models do not require the assumption of constant hazard (or subdistribution hazard) ratio. An external time-dependent covariate is also considered in the survival model. After describing the model, we discuss how both cause-specific and subdistribution hazard ratios can be estimated from the same nonparametric Bayesian model for competing risks regression. For use with the regression models proposed, we introduce an omnibus prior that is suitable when little external information is available about covariate effects. Finally we compare the models’ performance with existing methods through simulations. We also illustrate the proposed competing risks regression model with data from a breast cancer study. An R package “DPWeibull” implementing all of the proposed methods is available at CRAN.


19.
A local weighted least squares solution for variable-weight combination forecasting models
With the continuing advance of science and technology, forecasting methods have developed considerably, and dozens of methods are now in common use. Combination forecasting pools different forecasting methods and exploits the information each provides; its performance is often superior to that of any single method, which explains its wide application. Building on the idea of varying-coefficient models, we study the combination forecasting model by recasting the estimation of the variable weights as the estimation of the coefficient functions in a varying-coefficient model. The weights can then be estimated by local weighted least squares, with the smoothing parameter selected by cross-validation. The results show that the proposed method achieves high forecasting accuracy and outperforms the alternative methods.
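The local weighted least squares step can be sketched as follows (our illustration, not the article's code; the kernel choice and all data are hypothetical): the time-varying combination weights at a point t0 are obtained by kernel-weighting the squared forecast errors around t0.

```python
import numpy as np

def local_weights(t0, t, F, y, h):
    """Kernel-weighted least squares estimate of the combination weights at
    time t0: minimise sum_t K((t - t0) / h) * (y_t - F_t @ w)^2, where the
    columns of F hold the individual forecasts and h is the bandwidth
    (selected by cross-validation in the article)."""
    t, F, y = np.asarray(t, float), np.asarray(F, float), np.asarray(y, float)
    k = np.exp(-0.5 * ((t - t0) / h) ** 2)   # Gaussian kernel weights
    sw = np.sqrt(k)
    w, *_ = np.linalg.lstsq(sw[:, None] * F, sw * y, rcond=None)
    return w

# If the true combination weights are constant, the local fit recovers them:
t = np.arange(10.0)
F = np.column_stack([np.ones(10), t])   # two toy "forecasts": a constant and a trend
y = 0.3 * F[:, 0] + 0.7 * F[:, 1]
w = local_weights(5.0, t, F, y, h=2.0)
```

Sweeping t0 over the sample traces out the weight functions, so each method's contribution is allowed to drift over time instead of being fixed.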

20.
Competing risks are common in clinical cancer research, as patients are subject to multiple potential failure outcomes, such as death from the cancer itself or from complications arising from the disease. In the analysis of competing risks, several regression methods are available for the evaluation of the relationship between covariates and cause-specific failures, many of which are based on Cox’s proportional hazards model. Although a great deal of research has been conducted on estimating competing risks, less attention has been devoted to linear regression modeling, which is often referred to as the accelerated failure time (AFT) model in survival literature. In this article, we address the use and interpretation of linear regression analysis with regard to the competing risks problem. We introduce two types of AFT modeling framework, where the influence of a covariate can be evaluated in relation to either a cause-specific hazard function, referred to as cause-specific AFT (CS-AFT) modeling in this study, or the cumulative incidence function of a particular failure type, referred to as crude-risk AFT (CR-AFT) modeling. Simulation studies illustrate that, as in hazard-based competing risks analysis, these two models can produce substantially different effects, depending on the relationship between the covariates and both the failure type of principal interest and competing failure types. We apply the AFT methods to data from non-Hodgkin lymphoma patients, where the dataset is characterized by two competing events, disease relapse and death without relapse, and non-proportionality. We demonstrate how the data can be analyzed and interpreted, using linear competing risks regression models.

