Similar Documents
20 similar documents found (search time: 312 ms)
1.
The age-specific reference interval is an important screening tool in medicine. Put crudely, an individual whose value of a variable of interest lies outside certain extreme centiles may be suspected of abnormality. We propose a parametric method for constructing such intervals. It provides smooth centile curves and explicit formulae for the centile estimates and for standard deviation (SD) scores (age-standardized values). Each parameter of an exponential–normal or modulus–exponential–normal density is modelled as a fractional polynomial function of age. Estimation is by maximum likelihood. These three- and four-parameter models involve transformations of the data towards normality which remove non-normal skewness and/or kurtosis. Fractional polynomials provide more flexible curve shapes than do conventional polynomials. The method easily accommodates binary covariates facilitating, for example, parsimonious modelling of age- and sex-specific centile curves. A method of calculating precision profiles for centile estimates is proposed. Goodness of fit is assessed by using Q–Q plots and Shapiro–Wilk W-tests of the SD scores, and likelihood ratio tests of the parameters of an enlarged model. Four substantial real data sets are used to illustrate the method. Comparisons are made with the semiparametric LMS method of Cole and Green.
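As a rough illustration of the modelling idea in this abstract, the sketch below fits a plain normal model whose mean is a degree-2 fractional polynomial of age by maximum likelihood, then derives a smooth centile curve and SD scores from it. The fractional-polynomial powers are taken as known and the exponential–normal transformation that handles skewness and kurtosis is omitted, so this is a simplified stand-in rather than the authors' method; all data and variable names are invented.

```python
# Minimal sketch (not the authors' full method): a normal model whose mean is a
# degree-2 fractional polynomial of age, fitted by maximum likelihood, from which
# smooth centile curves and SD scores follow.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fp_terms(age, powers=(0.5, 2.0)):
    """Fractional-polynomial basis; power 0 means log(age) by convention."""
    cols = [np.log(age) if p == 0 else age ** p for p in powers]
    return np.column_stack([np.ones_like(age)] + cols)

def neg_loglik(theta, X, y):
    *beta, log_sigma = theta
    mu = X @ np.asarray(beta)
    return -norm.logpdf(y, loc=mu, scale=np.exp(log_sigma)).sum()

rng = np.random.default_rng(0)
age = rng.uniform(1, 20, 400)                                   # simulated ages
y = 3 + 0.8 * np.sqrt(age) + 0.05 * age**2 + rng.normal(0, 0.5, age.size)

X = fp_terms(age)
fit = minimize(neg_loglik, np.r_[np.zeros(X.shape[1]), 0.0], args=(X, y), method="BFGS")
beta, sigma = fit.x[:-1], np.exp(fit.x[-1])

grid = np.linspace(1, 20, 50)
centile_97_5 = fp_terms(grid) @ beta + norm.ppf(0.975) * sigma  # smooth upper reference curve
sd_scores = (y - X @ beta) / sigma                              # age-standardised values
```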

2.
In this study, an evaluation of Bayesian hierarchical models is made based on simulation scenarios to compare single-stage and multi-stage Bayesian estimations. Simulated datasets of lung cancer disease counts for men aged 65 and older across 44 wards in the London Health Authority were analysed using a range of spatially structured random effect components. The goals of this study are to determine which of these single-stage models perform best given a certain simulating model, how estimation methods (single- vs. multi-stage) compare in yielding posterior estimates of fixed effects in the presence of spatially structured random effects, and finally which of two spatial prior models, the Leroux or the ICAR model, performs best in a multi-stage context under different assumptions concerning spatial correlation. Among the fitted single-stage models without covariates, we found that when there is a low amount of variability in the distribution of disease counts, the BYM model is relatively robust to misspecification in terms of DIC, while the Leroux model is the least robust to misspecification. When these models were fit to data generated from models with covariates, we found that when there was one set of covariates – either spatially correlated or non-spatially correlated – changing the values of the fixed coefficients affected the ability of either the Leroux or ICAR model to fit the data well in terms of DIC. When there were multiple sets of spatially correlated covariates in the simulating model, however, we could not distinguish the goodness of fit to the data between these single-stage models. We found that the multi-stage modelling process via the Leroux and ICAR models generally reduced the variance of the posterior estimated fixed effects for data generated from models with covariates and a UH term compared to analogous single-stage models. Finally, we found that the multi-stage Leroux model compares favourably to the multi-stage ICAR model in terms of DIC. We conclude that the multi-stage Leroux model should be seriously considered in applications of Bayesian disease mapping when an investigator desires to fit a model with both fixed effects and spatially structured random effects to Poisson count data.
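For readers unfamiliar with the two spatial priors being compared, the following minimal sketch builds the ICAR and Leroux precision matrices for a hypothetical four-area adjacency structure; it does not reproduce the authors' London ward data or their multi-stage fitting.

```python
# ICAR and Leroux prior precision matrices for a toy adjacency structure.
import numpy as np

# hypothetical 4-area adjacency matrix W (1 = neighbours)
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))          # diagonal matrix of neighbour counts
tau = 1.0                           # precision parameter

Q_icar = tau * (D - W)              # ICAR: intrinsic, rank-deficient precision

lam = 0.6                           # Leroux mixing parameter in (0, 1)
Q_leroux = tau * (lam * (D - W) + (1 - lam) * np.eye(4))
# lam -> 1 recovers the ICAR structure; lam -> 0 gives independent random effects.
```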

3.
We consider mixed effects models for longitudinal, repeated measures or clustered data. Unmeasured or omitted covariates in such models may be correlated with the included covariates, and create model violations when not taken into account. Previous research and experience with longitudinal data sets suggest a general form of model which should be considered when omitted covariates are likely, such as in observational studies. We derive the marginal model between the response variable and included covariates, and consider model fitting using the ordinary and weighted least squares methods, which require simple non-iterative computation and no assumptions on the distribution of random covariates or error terms. Asymptotic properties of the least squares estimators are also discussed. The results shed light on the structure of least squares estimators in mixed effects models, and provide large sample procedures for statistical inference and prediction based on the marginal model. We present an example of the relationship between fluid intake and output in very low birth weight infants, where the model is found to have the assumed structure.
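A bare-bones sketch of the marginal-model fitting described above, using invented clustered data: ordinary least squares and a weighted least squares fit with simple illustrative weights, neither of which requires distributional assumptions on the random terms. The weights used here are ad hoc, not the paper's.

```python
# OLS and a simple WLS fit of the marginal model for clustered data (illustrative weights).
import numpy as np

rng = np.random.default_rng(10)
sizes = rng.integers(2, 8, 40)                    # 40 clusters of varying size
cluster = np.repeat(np.arange(sizes.size), sizes)
n = cluster.size

x = rng.normal(size=n)                            # included covariate
u = rng.normal(size=sizes.size)[cluster]          # unobserved cluster effect
y = 1.0 + 2.0 * x + u + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

w = 1.0 / sizes[cluster]                          # downweight observations from large clusters
XtW = X.T * w                                     # equivalent to X.T @ diag(w)
beta_wls = np.linalg.solve(XtW @ X, XtW @ y)
```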

4.
Most regression problems in practice require flexible semiparametric forms of the predictor for modelling the dependence of responses on covariates. Moreover, it is often necessary to add random effects accounting for overdispersion caused by unobserved heterogeneity or for correlation in longitudinal or spatial data. We present a unified approach for Bayesian inference via Markov chain Monte Carlo simulation in generalized additive and semiparametric mixed models. Different types of covariates, such as the usual covariates with fixed effects, metrical covariates with non-linear effects, unstructured random effects, trend and seasonal components in longitudinal data and spatial covariates, are all treated within the same general framework by assigning appropriate Markov random field priors with different forms and degrees of smoothness. We applied the approach in several case-studies and consulting cases, showing that the methods are also computationally feasible in problems with many covariates and large data sets. In this paper, we choose two typical applications.
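One small ingredient of this framework, sketched under the assumption of an equally spaced grid: the precision (penalty) matrix of a second-order random-walk Markov random field prior for the smooth effect of a metrical covariate. The full MCMC machinery of the paper is not shown.

```python
# Precision matrix of an RW2 Markov random field prior on an ordered grid of values.
import numpy as np

m = 10                                    # number of grid points of the metrical covariate
D2 = np.diff(np.eye(m), n=2, axis=0)      # (m-2) x m second-difference matrix
tau = 1.0                                 # smoothing precision
K_rw2 = tau * D2.T @ D2                   # rank-deficient RW2 prior precision matrix
```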

5.
Regression methods typically construct a mapping from the covariates into the real numbers. Here, however, we consider regression problems where the task is to form a mapping from the covariates into a set of (univariate) real-valued functions. Examples are given by conditional density estimation, hazard regression and regression with a functional response. Our approach starts by modeling the function of interest using a sum of B-spline basis functions. To model dependence on the covariates, the coefficients of this expansion are each modeled as functions of the covariates. We propose to estimate these coefficient functions using boosted tree models. Algorithms are provided for the above three situations, and real data sets are used to investigate their performance. The results indicate that the proposed methodology performs well. In addition, it is both straightforward and capable of handling a large number of covariates.
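The sketch below illustrates one of the three settings mentioned, regression with a functional response, in simplified form: each response curve is projected onto a B-spline basis and each basis coefficient is then modelled as a function of the covariates with a boosted tree model. The data, basis choices and use of scikit-learn's GradientBoostingRegressor are illustrative assumptions, not the paper's algorithms.

```python
# Functional-response regression: B-spline coefficients modelled by boosted trees.
import numpy as np
from scipy.interpolate import BSpline
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n, m, k = 150, 40, 3                                       # curves, grid points, spline degree
tgrid = np.linspace(0.01, 0.99, m)
knots = np.r_[np.zeros(k + 1), np.linspace(0.2, 0.8, 4), np.ones(k + 1)]
B = BSpline.design_matrix(tgrid, knots, k).toarray()       # m x K basis matrix (K = 8 here)

X = rng.normal(size=(n, 2))                                # covariates
curves = np.sin(2 * np.pi * tgrid)[None, :] * (1 + X[:, [0]]) \
         + 0.3 * X[:, [1]] + rng.normal(0, 0.1, (n, m))    # simulated response curves

coefs = np.linalg.lstsq(B, curves.T, rcond=None)[0].T      # n x K projection coefficients
models = [GradientBoostingRegressor().fit(X, coefs[:, j]) for j in range(coefs.shape[1])]

x_new = np.array([[0.5, -1.0]])
pred_coefs = np.array([mdl.predict(x_new)[0] for mdl in models])
pred_curve = B @ pred_coefs                                 # predicted response function
```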

6.

Time-to-event data often violate the proportional hazards assumption inherent in the popular Cox regression model. Such violations are especially common in the sphere of biological and medical data, where latent heterogeneity due to unmeasured covariates or time-varying effects is common. A variety of parametric survival models have been proposed in the literature which make more appropriate assumptions on the hazard function, at least for certain applications. One such model is derived from the First Hitting Time (FHT) paradigm, which assumes that a subject’s event time is determined by a latent stochastic process reaching a threshold value. Several random effects specifications of the FHT model have also been proposed which allow for better modelling of data with unmeasured covariates. While often appropriate, these methods can display limited flexibility due to their inability to model a wide range of heterogeneities. To address this issue, we propose a Bayesian model which loosens assumptions on the mixing distribution inherent in the random effects FHT models currently in use. We demonstrate via simulation study that the proposed model greatly improves both survival and parameter estimation in the presence of latent heterogeneity. We also apply the proposed methodology to data from a toxicology/carcinogenicity study which exhibits nonproportional hazards and contrast the results with both the Cox model and two popular FHT models.
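For orientation, a minimal sketch of the basic FHT construction referred to above: a Wiener process with negative drift starting at a positive level hits zero at an inverse-Gaussian-distributed time, which yields a closed-form survival function. Parameter values are arbitrary, and neither the random-effects extensions nor the proposed Bayesian mixing model is shown.

```python
# Survival function of the first hitting time of 0 by a Wiener process with drift.
import numpy as np
from scipy.stats import norm

def fht_survival(t, y0=5.0, mu=-1.0, sigma=1.0):
    """P(T > t) for the first hitting time of 0 by y0 + mu*t + sigma*W(t), y0 > 0."""
    t = np.asarray(t, dtype=float)
    a = (y0 + mu * t) / (sigma * np.sqrt(t))
    b = (mu * t - y0) / (sigma * np.sqrt(t))
    return norm.cdf(a) - np.exp(-2 * mu * y0 / sigma**2) * norm.cdf(b)

print(fht_survival([0.5, 1.0, 2.0, 5.0]))
```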


7.
Semiparametric methods provide estimates of finite parameter vectors without requiring that the complete data generation process be assumed to lie in a finite-dimensional family. By avoiding bias from incorrect specification, such estimators gain robustness, although usually at the cost of decreased precision. The most familiar semiparametric method in econometrics is ordinary least squares, which estimates the parameters of a linear regression model without requiring that the distribution of the disturbances be in a finite-parameter family. The recent literature in econometric theory has extended semiparametric methods to a variety of non-linear models, including models appropriate for analysis of censored duration data. Horowitz and Neumann make perhaps the first empirical application of these methods, to data on employment duration. Their analysis provides insights into the practical problems of implementing these methods, and limited information on performance. Their data set, containing 226 male controls from the Denver income maintenance experiment in 1971-74, does not show any significant covariates (except race), even when a fully parametric model is assumed. Consequently, the authors are unable to reject the fully parametric model in a test against the alternative semiparametric estimators. This provides some negative, but tenuous, evidence that in practical applications the reduction in bias using semiparametric estimators is insufficient to offset the loss in precision. Larger samples, and data sets with strongly significant covariates, will be needed to resolve this question.

8.
This paper sets out to implement the Bayesian paradigm for fractional polynomial models under the assumption of normally distributed error terms. Fractional polynomials widen the class of ordinary polynomials and offer an additive and transportable modelling approach. The methodology is based on a Bayesian linear model with a quasi-default hyper-g prior and combines variable selection with parametric modelling of additive effects. A Markov chain Monte Carlo algorithm for the exploration of the model space is presented. This theoretically well-founded stochastic search constitutes a substantial improvement to ad hoc stepwise procedures for the fitting of fractional polynomial models. The method is applied to a data set on the relationship between ozone levels and meteorological parameters, previously analysed in the literature.
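A loose sketch of the model-space idea: candidate fractional-polynomial powers for a single covariate are scored with the marginal likelihood under Zellner's g-prior with a fixed g = n. The paper instead uses a quasi-default hyper-g prior and an MCMC search over a much larger space, so this enumeration is only a simplified stand-in; the data are simulated.

```python
# Scoring candidate FP(1) powers by a fixed-g Zellner g-prior Bayes factor against the null.
import numpy as np

def fp_design(x, powers):
    cols = [np.log(x) if p == 0 else x ** p for p in powers]
    return np.column_stack(cols)

def log_bf_gprior(y, X, g):
    """log Bayes factor of the model (plus intercept) against the intercept-only model."""
    n, p = X.shape
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    beta = np.linalg.lstsq(Xc, yc, rcond=None)[0]
    r2 = 1.0 - np.sum((yc - Xc @ beta) ** 2) / np.sum(yc ** 2)
    return 0.5 * (n - 1 - p) * np.log1p(g) - 0.5 * (n - 1) * np.log(1 + g * (1 - r2))

candidate_powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]
rng = np.random.default_rng(3)
x = rng.uniform(0.5, 3.0, 120)
y = 1.0 + 2.0 / x + rng.normal(0, 0.3, x.size)        # true relationship uses power -1

g = x.size
scores = {(p,): log_bf_gprior(y, fp_design(x, (p,)), g) for p in candidate_powers}
print("best FP(1) power:", max(scores, key=scores.get))
```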

9.
In this paper, we consider James–Stein shrinkage and pretest estimation methods for time series following generalized linear models when it is conjectured that some of the regression parameters may be restricted to a subspace. Efficient estimation strategies are developed when there are many covariates in the model and some of them are not statistically significant. Statistical properties of the pretest and shrinkage estimation methods including asymptotic distributional bias and risk are developed. We investigate the relative performances of shrinkage and pretest estimators with respect to the unrestricted maximum partial likelihood estimator (MPLE). We show that the shrinkage estimators have a lower relative mean squared error as compared to the unrestricted MPLE when the number of significant covariates exceeds two. Monte Carlo simulation experiments were conducted for different combinations of inactive covariates and the performance of each estimator was evaluated in terms of its mean squared error. The practical benefits of the proposed methods are illustrated using two real data sets.
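The following schematic, written for an ordinary linear model rather than the paper's partial-likelihood GLM setting, shows the general form of the restricted, unrestricted, positive-part shrinkage and pretest estimators discussed above. Data and dimensions are invented for illustration.

```python
# Restricted, unrestricted, positive-part James-Stein and pretest estimators (linear model).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
n, p_active, p_inactive = 200, 3, 4
X = rng.normal(size=(n, p_active + p_inactive))
beta_true = np.r_[np.ones(p_active), np.zeros(p_inactive)]
y = X @ beta_true + rng.normal(size=n)

# unrestricted and restricted least squares fits
beta_u = np.linalg.lstsq(X, y, rcond=None)[0]
beta_r = np.zeros_like(beta_u)
beta_r[:p_active] = np.linalg.lstsq(X[:, :p_active], y, rcond=None)[0]

# Wald-type statistic for the restriction that the "inactive" coefficients are zero
sigma2 = np.sum((y - X @ beta_u) ** 2) / (n - X.shape[1])
cov_u = sigma2 * np.linalg.inv(X.T @ X)
sub = slice(p_active, None)
Tn = beta_u[sub] @ np.linalg.solve(cov_u[sub, sub], beta_u[sub])

k = p_inactive                                                        # needs k >= 3
beta_shrink = beta_r + max(0.0, 1 - (k - 2) / Tn) * (beta_u - beta_r)  # positive-part shrinkage
beta_pretest = beta_u if Tn > chi2.ppf(0.95, k) else beta_r            # pretest estimator
```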

10.
When confronted with multiple covariates and a response variable, analysts sometimes apply a variable‐selection algorithm to the covariate‐response data to identify a subset of covariates potentially associated with the response, and then wish to make inferences about parameters in a model for the marginal association between the selected covariates and the response. If an independent data set were available, the parameters of interest could be estimated by using standard inference methods to fit the postulated marginal model to the independent data set. However, when applied to the same data set used by the variable selector, standard (“naive”) methods can lead to distorted inferences. The authors develop testing and interval estimation methods for parameters reflecting the marginal association between the selected covariates and response variable, based on the same data set used for variable selection. They provide theoretical justification for the proposed methods, present results to guide their implementation, and use simulations to assess and compare their performance to a sample‐splitting approach. The methods are illustrated with data from a recent AIDS study. The Canadian Journal of Statistics 37: 625–644; 2009 © 2009 Statistical Society of Canada
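The sample-splitting benchmark that the authors compare against can be sketched as follows, here with a lasso used as the variable selector (an assumption for illustration): selection is done on one half of the data and naive inference for the marginal model is carried out on the held-out half. The paper's own corrected methods instead work on the full data set.

```python
# Sample splitting: select variables on one half, do naive inference on the other half.
import numpy as np
from sklearn.linear_model import LassoCV
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, p = 400, 20
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(size=n)

half = n // 2
sel = LassoCV(cv=5).fit(X[:half], y[:half])
chosen = np.flatnonzero(sel.coef_ != 0)          # variables picked on the first half

X2 = sm.add_constant(X[half:, chosen])           # refit the marginal model on the held-out half
fit = sm.OLS(y[half:], X2).fit()
print(fit.summary())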

11.
This article studies the absolute penalized convex function estimator in a sparse and high-dimensional additive hazards model. Under such a model, we assume that the failure time data are interval-censored and the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on some natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true parameters in the model. Some similar inequalities based on an extension of the restricted eigenvalue are also established. Under mild conditions, we prove that the compatibility and cone invertibility factors and the restricted eigenvalues are bounded from below by positive constants for time-dependent covariates.

12.
In this article, we highlight some interesting facts about Bayesian variable selection methods for linear regression models in settings where the design matrix exhibits strong collinearity. We first demonstrate via real data analysis and simulation studies that summaries of the posterior distribution based on marginal and joint distributions may give conflicting results for assessing the importance of strongly correlated covariates. The natural question is which one should be used in practice. The simulation studies suggest that posterior inclusion probabilities and Bayes factors that evaluate the importance of correlated covariates jointly are more appropriate, and some priors may be more adversely affected in such a setting. To obtain a better understanding of the phenomenon, we study some toy examples with Zellner’s g-prior. The results show that strong collinearity may lead to a multimodal posterior distribution over models, in which joint summaries are more appropriate than marginal summaries. Thus, we recommend a routine examination of the correlation matrix and calculation of the joint inclusion probabilities for correlated covariates, in addition to marginal inclusion probabilities, for assessing the importance of covariates in Bayesian variable selection.
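A toy version of the phenomenon described above: two strongly collinear covariates, all candidate models scored under Zellner's g-prior with fixed g = n and a uniform model prior, and marginal versus joint posterior inclusion probabilities compared. The simulated data and the fixed-g choice are assumptions made for illustration.

```python
# Marginal vs joint posterior inclusion probabilities under strong collinearity (g-prior).
import numpy as np
from itertools import product

def log_bf_null(y, X, g):
    """log Bayes factor of the model (plus intercept) against the intercept-only model."""
    n = y.size
    if X.shape[1] == 0:
        return 0.0
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    beta = np.linalg.lstsq(Xc, yc, rcond=None)[0]
    r2 = 1.0 - np.sum((yc - Xc @ beta) ** 2) / np.sum(yc ** 2)
    return 0.5 * (n - 1 - X.shape[1]) * np.log1p(g) - 0.5 * (n - 1) * np.log(1 + g * (1 - r2))

rng = np.random.default_rng(6)
n = 150
z = rng.normal(size=n)
x1 = z + 0.05 * rng.normal(size=n)           # x1 and x2 are nearly collinear
x2 = z + 0.05 * rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 1.0 * z + 0.5 * x3 + rng.normal(size=n)

gammas = list(product([0, 1], repeat=3))
logp = np.array([log_bf_null(y, X[:, np.array(gm, bool)], n) for gm in gammas])
post = np.exp(logp - logp.max())
post /= post.sum()

marg_incl = {f"x{j + 1}": sum(p for gm, p in zip(gammas, post) if gm[j]) for j in range(3)}
joint_x1x2 = sum(p for gm, p in zip(gammas, post) if gm[0] or gm[1])
print(marg_incl)                               # inclusion of x1 and x2 is diluted by collinearity
print("P(x1 or x2 in model):", joint_x1x2)     # jointly they are clearly needed
```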

13.
Mixed models are powerful tools for the analysis of clustered data and many extensions of the classical linear mixed model with normally distributed response have been established. As with all parametric (P) models, correctness of the assumed model is critical for the validity of the ensuing inference. An incorrectly specified P mean model may be improved by using a local, or nonparametric (NP), model. Two local models are proposed by a pointwise weighting of the marginal and conditional variance–covariance matrices. However, NP models tend to fit to irregularities in the data and may provide fits with high variance. Model robust regression techniques estimate the mean response as a convex combination of a P and an NP model fit to the data. It is a semiparametric method by which incomplete or incorrectly specified P models can be improved by adding an appropriate amount of the NP fit. We compare the approximate integrated mean square error of the P, NP, and mixed model robust methods via a simulation study and apply these methods to two real data sets: the monthly wind speed data from counties in Ireland and the engine speed data.
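A bare-bones version of the model robust idea for a single covariate, ignoring the mixed-model structure: a possibly misspecified linear fit is blended with a kernel-smoother fit as (1 - lam) * parametric + lam * nonparametric. The kernel smoother and the fixed mixing parameter are illustrative stand-ins; the paper chooses the mixing proportion in a data-driven way.

```python
# Model robust regression: convex combination of a parametric and a nonparametric fit.
import numpy as np

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 10, 200))
y = 2 + 0.5 * x + np.sin(x) + rng.normal(0, 0.3, x.size)   # the linear model is incomplete

# parametric fit: simple linear regression
Xp = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(Xp, y, rcond=None)[0]
fit_p = Xp @ beta

# nonparametric fit: Nadaraya-Watson kernel smoother
def kernel_fit(x0, x, y, h=0.5):
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

fit_np = kernel_fit(x, x, y)

lam = 0.5                                    # mixing parameter in [0, 1]
fit_mrr = (1 - lam) * fit_p + lam * fit_np   # model robust fit
```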

14.
We discuss the use of latent variable models with observed covariates for computing response propensities for sample respondents. A response propensity score is often used to weight item and unit responders to account for item and unit non-response and to obtain adjusted means and proportions. In the context of attitude scaling, we discuss computing response propensity scores by using latent variable models for binary or nominal polytomous manifest items with covariates. Our models allow the response propensity scores to be found for several different items without refitting. They allow any pattern of missing responses for the items. If one prefers, it is possible to estimate population proportions directly from the latent variable models, so avoiding the use of propensity scores. Artificial data sets and a real data set extracted from the 1996 British Social Attitudes Survey are used to compare the various methods proposed.

15.
An algorithm is presented for calculating the power for the logistic and proportional hazards models in which some of the covariates are discrete and the remainder are multivariate normal. The mean and covariance matrix of the multivariate normal covariates may depend on the discrete covariates.

The algorithm, which finds the power of the Wald test, uses the result that the information matrix can be calculated using univariate numerical integration even when there are several continuous covariates. The algorithm is checked using simulation and in certain situations gives more accurate results than current methods which are based on simple formulae. The algorithm is used to explore properties of these models, in particular, the power gain from a prognostic covariate in the analysis of a clinical trial or observational study. The methods can be extended to determine power for other generalized linear models.
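A simulation-based check of the kind mentioned above, sketched for the logistic model only: the power of the Wald test for a binary treatment effect in the presence of a normally distributed prognostic covariate is estimated by repeated fitting. The paper's algorithm computes this power analytically via univariate numerical integration; the sample size and effect sizes below are arbitrary.

```python
# Simulated power of the Wald test for a treatment effect in a logistic model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)

def one_trial(n=300, beta_trt=0.5, beta_prog=1.0):
    trt = rng.integers(0, 2, n)                       # discrete covariate (treatment)
    prog = rng.normal(size=n)                         # normal prognostic covariate
    eta = -0.5 + beta_trt * trt + beta_prog * prog
    y = rng.random(n) < 1 / (1 + np.exp(-eta))
    X = sm.add_constant(np.column_stack([trt, prog]))
    fit = sm.Logit(y.astype(float), X).fit(disp=0)
    wald_z = fit.params[1] / fit.bse[1]
    return abs(wald_z) > 1.96                         # two-sided 5% Wald test

power = np.mean([one_trial() for _ in range(500)])
print("estimated power with the prognostic covariate:", power)
```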

16.
We propose a Bayesian semiparametric methodology for quantile regression modelling. In particular, working with parametric quantile regression functions, we develop Dirichlet process mixture models for the error distribution in an additive quantile regression formulation. The proposed non‐parametric prior probability models allow the shape of the error density to adapt to the data and thus provide more reliable predictive inference than models based on parametric error distributions. We consider extensions to quantile regression for data sets that include censored observations. Moreover, we employ dependent Dirichlet processes to develop quantile regression models that allow the error distribution to change non‐parametrically with the covariates. Posterior inference is implemented using Markov chain Monte Carlo methods. We assess and compare the performance of our models using both simulated and real data sets.

17.
Survival models for continuous-time data remain the predominant methods of survival analysis. However, when the survival data are discrete, treating them as continuous leads researchers to incorrect results and interpretations. The discrete-time survival model has some advantages in applications: it can be used for non-proportional hazards, time-varying covariates and tied observations. However, it has the disadvantage that the survival data must be reconstructed, which makes working with big data sets burdensome. Actuaries often rely on complex and big data, yet they have to be quick and efficient for short-period analyses, and using the full mass of data creates inefficient processes and consumes time. Sampling design therefore becomes more and more important in order to get reliable results. In this study, we take sampling methods into account in the discrete-time survival model using a real data set on motor insurance. To assess the efficiency of the proposed methodology we conducted a simulation study.
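To make the "reconstruction of the survival data" concrete, the sketch below expands hypothetical policy-level records into person-period form and fits the discrete-time hazard by logistic regression on period dummies plus a covariate. The data are invented stand-ins for the motor insurance data, and the sampling designs studied in the paper are not shown.

```python
# Person-period expansion and discrete-time hazard estimation via logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 200
df = pd.DataFrame({"id": np.arange(n),
                   "duration": rng.integers(1, 6, n),    # observed number of periods
                   "event": rng.integers(0, 2, n),       # 1 = event, 0 = censored
                   "age": rng.integers(25, 65, n)})

# person-period expansion: one row per period each policy is at risk
rows = []
for _, r in df.iterrows():
    for t in range(1, int(r["duration"]) + 1):
        rows.append({"id": r["id"], "period": t, "age": r["age"],
                     "y": int(r["event"] == 1 and t == r["duration"])})
pp = pd.DataFrame(rows)

# discrete-time hazard model: logistic regression on period dummies + covariate
X = pd.get_dummies(pp["period"], prefix="t", drop_first=True).astype(float)
X["age"] = pp["age"]
X = sm.add_constant(X)
fit = sm.Logit(pp["y"].astype(float), X).fit(disp=0)
print(fit.params)
```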

18.
Variable selection is an effective methodology for dealing with models with numerous covariates. We consider methods of variable selection for the semiparametric Cox proportional hazards model under the progressive Type-II censoring scheme. The Cox proportional hazards model is used to model the influence coefficients of the environmental covariates. By applying Breslow’s “least information” idea, we obtain a profile likelihood function to estimate the coefficients. Lasso-type penalized profile likelihood estimation, as well as a stepwise variable selection method, are explored as means of finding the important covariates. Numerical simulations are conducted and the Veterans’ Administration lung cancer data are used to evaluate the performance of the proposed method.

19.
Several survival regression models have been developed to assess the effects of covariates on failure times. In various settings, including surveys, clinical trials and epidemiological studies, missing data may often occur due to incomplete covariate data. Most existing methods for lifetime data are based on the assumption of missing at random (MAR) covariates. However, in many substantive applications, it is important to assess the sensitivity of key model inferences to the MAR assumption. The index of sensitivity to non-ignorability (ISNI) is a local sensitivity tool that measures the potential sensitivity of key model parameters to small departures from the ignorability assumption, without the need to estimate a complicated non-ignorable model. We extend this sensitivity index to evaluate the impact of a covariate that is potentially missing not at random in survival analysis, using parametric survival models. The approach is applied to investigate the impact of missing tumor grade on post-surgical mortality outcomes in individuals with pancreas-head cancer in the Surveillance, Epidemiology, and End Results data set. For patients suffering from cancer, tumor grade is an important risk factor, and many individuals with pancreas-head cancer in these data have missing tumor grade information. Our ISNI analysis shows that the magnitude of the effect of most covariates with a significant effect on the survival time distribution, in particular surgery and tumor grade, two important risk factors in cancer studies, depends strongly on the assumed missingness mechanism for tumor grade. A simulation study is also conducted to evaluate the performance of the proposed index in detecting sensitivity of key model parameters.

20.
Shi, Yushu; Laud, Purushottam; Neuner, Joan. Lifetime Data Analysis 2021, 27(1): 156–176

In this paper, we first propose a dependent Dirichlet process (DDP) model using a mixture of Weibull models with each mixture component resembling a Cox model for survival data. We then build a Dirichlet process mixture model for competing risks data without regression covariates. Next we extend this model to a DDP model for competing risks regression data by using a multiplicative covariate effect on subdistribution hazards in the mixture components. Though built on proportional hazards (or subdistribution hazards) models, the proposed nonparametric Bayesian regression models do not require the assumption of constant hazard (or subdistribution hazard) ratio. An external time-dependent covariate is also considered in the survival model. After describing the model, we discuss how both cause-specific and subdistribution hazard ratios can be estimated from the same nonparametric Bayesian model for competing risks regression. For use with the regression models proposed, we introduce an omnibus prior that is suitable when little external information is available about covariate effects. Finally we compare the models’ performance with existing methods through simulations. We also illustrate the proposed competing risks regression model with data from a breast cancer study. An R package “DPWeibull” implementing all of the proposed methods is available at CRAN.


