Similar Articles
20 similar articles found.
1.
Information from multiple informants is frequently used to assess psychopathology. We consider marginal regression models with multiple informants as discrete predictors and a time to event outcome. We fit these models to data from the Stirling County Study; specifically, the models predict mortality from self report of psychiatric disorders and also predict mortality from physician report of psychiatric disorders. Previously, Horton et al. found little relationship between self and physician reports of psychopathology, but that the relationship of self report of psychopathology with mortality was similar to that of physician report of psychopathology with mortality. Generalized estimating equations (GEE) have been used to fit marginal models with multiple informant covariates; here we develop a maximum likelihood (ML) approach and show how it relates to the GEE approach. In a simple setting using a saturated model, the ML approach can be constructed to provide estimates that match those found using GEE. We extend the ML technique to consider multiple informant predictors with missingness and compare the method to using inverse probability weighted (IPW) GEE. Our simulation study illustrates that IPW GEE loses little efficiency compared with ML in the presence of monotone missingness. Our example data has non-monotone missingness; in this case, ML offers a modest decrease in variance compared with IPW GEE, particularly for estimating covariates in the marginal models. In more general settings, e.g., categorical predictors and piecewise exponential models, the likelihood parameters from the ML technique do not have the same interpretation as the GEE. Thus, the GEE is recommended to fit marginal models for its flexibility, ease of interpretation and comparable efficiency to ML in the presence of missing data.  相似文献   
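To make the IPW GEE idea concrete, here is a minimal Python sketch of the weighting step: a logistic model for the probability that the second informant's report is observed, with complete cases weighted by the inverse of that probability before a weighted marginal fit. All variable names and data are synthetic illustrations, not the Stirling County Study.

```python
# Hypothetical sketch of inverse probability weighting for incomplete informant
# reports; variable names (self_report, observed) are illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(50, 10, n),
    "self_report": rng.binomial(1, 0.3, n),
})
# Indicator that the physician report was observed (1) or missing (0).
df["observed"] = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 0.02 * (df["age"] - 50)))), n)

# Model the probability of being fully observed given always-observed covariates,
# then weight each complete case by the inverse of that probability.
miss_model = LogisticRegression().fit(df[["age", "self_report"]], df["observed"])
p_obs = miss_model.predict_proba(df[["age", "self_report"]])[:, 1]
df["ipw"] = np.where(df["observed"] == 1, 1.0 / p_obs, 0.0)

# Complete cases, each weighted by 1 / P(observed), would then enter the
# weighted GEE fit of the marginal model.
print(df.loc[df["observed"] == 1, "ipw"].describe())
```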

2.
In this study, we propose a multivariate stochastic model for Web site visit duration, page views, purchase incidence, and the sale amount for online retailers. The model is constructed by composition from carefully selected distributions and involves copula components. It allows for the strong nonlinear relationships between the sales and visit variables to be explored in detail, and can be used to construct sales predictions. The model is readily estimated using maximum likelihood, making it an attractive choice in practice given the large sample sizes that are commonplace in online retail studies. We examine a number of top-ranked U.S. online retailers, and find that the visit duration and the number of pages viewed are both related to sales, but in very different ways for different products. Using Bayesian methodology, we show how the model can be extended to a finite mixture model to account for consumer heterogeneity via latent household segmentation. The model can also be adjusted to accommodate a more accurate analysis of online retailers like apple.com that sell products at a very limited number of price points. In a validation study across a range of different Web sites, we find that the purchase incidence and sales amount are both forecast more accurately using our model, when compared to regression, probit regression, a popular data-mining method, and a survival model employed previously in an online retail study. Supplementary materials for this article are available online.  相似文献   
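As a rough illustration of composing a multivariate model from marginal distributions and a copula and estimating it by maximum likelihood, the sketch below fits a two-variable Gaussian copula with lognormal margins to synthetic visit-duration and sale-amount data. It is only a stand-in for the paper's richer specification, which also handles page views and purchase incidence and uses different components.

```python
# Toy Gaussian-copula ML fit for two continuous variables with lognormal margins.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
n = 1000
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
duration = np.exp(1.0 + 0.5 * z[:, 0])   # synthetic visit duration
sales = np.exp(2.0 + 0.8 * z[:, 1])      # synthetic sale amount

def negloglik(theta):
    mu1, log_s1, mu2, log_s2, rho_raw = theta
    s1, s2 = np.exp(log_s1), np.exp(log_s2)   # keep the scales positive
    rho = np.tanh(rho_raw)                    # keep the correlation in (-1, 1)
    m1 = stats.lognorm(s=s1, scale=np.exp(mu1))
    m2 = stats.lognorm(s=s2, scale=np.exp(mu2))
    u1 = np.clip(m1.cdf(duration), 1e-10, 1 - 1e-10)
    u2 = np.clip(m2.cdf(sales), 1e-10, 1 - 1e-10)
    q1, q2 = stats.norm.ppf(u1), stats.norm.ppf(u2)
    # Gaussian copula log-density on the normal scores plus the marginal log-densities.
    cop = (-0.5 * np.log(1 - rho ** 2)
           - (rho ** 2 * (q1 ** 2 + q2 ** 2) - 2 * rho * q1 * q2) / (2 * (1 - rho ** 2)))
    return -np.sum(cop + m1.logpdf(duration) + m2.logpdf(sales))

fit = optimize.minimize(negloglik, x0=np.zeros(5), method="Nelder-Mead")
print("ML estimates (mu1, log s1, mu2, log s2, atanh rho):", np.round(fit.x, 2))
```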

3.
To predict stock market behaviors, we use a factor-augmented predictive regression with shrinkage to incorporate the information available across literally thousands of financial and economic variables. The system is constructed in terms of both expected returns and the tails of the return distribution. We develop the variable selection consistency and asymptotic normality of the estimator. To select the regularization parameter, we employ the prediction error, with the aim of predicting the behavior of the stock market. Through analysis of the Tokyo Stock Exchange, we find that a large number of variables provide useful information for predicting stock market behaviors.  相似文献   
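A hedged sketch of the factor-augmentation-plus-shrinkage idea: extract a few principal-component factors from a large panel of candidate predictors, then fit a lasso of next-period returns on those factors, with the penalty chosen by cross-validated prediction error. The data and tuning rule are illustrative, not those of the paper.

```python
# Factor-augmented predictive regression with shrinkage, on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
T, p = 400, 1000                     # time periods, number of candidate predictors
X = rng.normal(size=(T, p))          # stand-in for thousands of financial/economic series
y = 0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=0.5, size=T)   # next-period return

Xs = StandardScaler().fit_transform(X)
factors = PCA(n_components=10).fit_transform(Xs)   # factor-augmentation step

# Lasso with the penalty chosen by cross-validated prediction error.
model = LassoCV(cv=5).fit(factors, y)
print("selected penalty:", model.alpha_)
print("nonzero factor coefficients:", int(np.sum(model.coef_ != 0)))
```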

4.
Based on an analysis of response rates in a credit card direct-mail campaign, this paper discusses the differences between the logistic model and the classification tree model in variable selection, and attempts to explain the reasons for these differences from several perspectives. The author argues that neither method is absolutely superior; the appropriate model should be chosen according to the specific setting and the characteristics of each model. The classification tree model tends to overfit the training set and is sensitive to the influence of individual variables; in risk-factor analysis its results emphasize the risk factors more strongly, and it has a high detection rate for outliers. The logistic model is easily affected by dependence among the explanatory variables and, with the added influence of categorical variables, tends to select too many variables or factors; it is sensitive to outliers but insensitive to noise points. The difference between the discriminant functions is the key factor behind the difference in variable selection.
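The contrast between the two models' variable selection can be illustrated with a small sketch on synthetic response data (not the credit card data used in the paper): an L1-penalized logistic fit versus a classification tree, comparing which variables each one retains.

```python
# Which variables does each model pick up? Synthetic illustration only.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=8, n_informative=3,
                           n_redundant=2, random_state=0)
cols = [f"x{i}" for i in range(8)]

logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

comparison = pd.DataFrame({
    "variable": cols,
    "logit_coef": logit.coef_.ravel(),             # variables kept by the penalized logistic model
    "tree_importance": tree.feature_importances_,  # variables the tree actually splits on
})
print(comparison)
```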

5.
Multivariate Fay-Herriot (MFH) models have become popular methods for producing reliable parameter estimates of several related characteristics of interest that are commonly produced from surveys. This article studies the application of MFH models for estimating household consumption per capita expenditure (HCPE) on food and HCPE on non-food. The two associated direct estimates, which are obtained from the National Socioeconomic Surveys conducted regularly by Statistics Indonesia, are strongly correlated. The effects of correlation in MFH models are evaluated in a simulation study. The simulation showed that the strength of correlation between the variables of interest, rather than the number of domains, plays a prominent role in MFH models. The application showed that MFH models are more efficient than univariate models in terms of the standard errors of the regression parameter estimates. The roots of mean squared errors (RMSEs) of the estimates obtained from the empirical best linear unbiased prediction (EBLUP) estimators of MFH models are smaller than the RMSEs obtained from the direct estimators. Based on the MFH model, the HCPE estimates for food by district in Central Java, Indonesia, are higher than the HCPE estimates for non-food. The average HCPE estimates for food and non-food in Central Java, Indonesia, in 2015 are IDR 383,100.6 and IDR 280,653.6, respectively.

6.
The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The performance of a newly proposed imputation method based on generalized additive models for location, scale, and shape (GAMLSS) is assessed. Although imputation methods based on predictive mean matching are virtually unbiased, they suffer from mild to moderate under-coverage, even in the experiment where all variables are jointly normally distributed. The GAMLSS method features better coverage than currently available methods.
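For readers unfamiliar with predictive mean matching, the following toy sketch shows the core donor-matching step for a single incomplete variable; real PMM implementations draw the regression parameters, use several donors, and produce multiple imputations.

```python
# Single-variable predictive mean matching: impute each missing value with the
# observed value of the donor whose predicted mean is closest. Simplified sketch.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 300
x_complete = rng.normal(size=(n, 2))
x_missing = 1 + x_complete @ np.array([0.8, -0.5]) + rng.normal(scale=0.7, size=n)
miss = rng.random(n) < 0.3          # missing-at-random indicator
obs = ~miss

reg = LinearRegression().fit(x_complete[obs], x_missing[obs])
pred_obs = reg.predict(x_complete[obs])    # predicted means for donors
pred_mis = reg.predict(x_complete[miss])   # predicted means for recipients

# For each missing case, borrow the observed value of the donor with the
# closest predicted mean (k = 1 for brevity; k = 5 donors is more common).
donor_idx = np.abs(pred_mis[:, None] - pred_obs[None, :]).argmin(axis=1)
imputed = x_missing.copy()
imputed[miss] = x_missing[obs][donor_idx]
print(imputed[miss][:5])
```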

7.
Peanut allergy is one of the most prevalent food allergies. The possibility of a lethal accidental exposure and the persistence of the disease make it a public health problem. Evaluating the intensity of symptoms is accomplished with a double blind placebo-controlled food challenge (DBPCFC), which scores the severity of reactions and measures the dose of peanut that elicits the first reaction. Since DBPCFC can result in life-threatening responses, we propose an alternate procedure with the long-term goal of replacing invasive allergy tests. Discriminant analyses of DBPCFC score, the eliciting dose and the first accidental exposure score were performed in 76 allergic patients using 6 immunoassays and 28 skin prick tests. A multiple factorial analysis was performed to assign equal weights to both groups of variables, and predictive models were built by cross-validation with linear discriminant analysis, k-nearest neighbours, classification and regression trees, penalized support vector machine, stepwise logistic regression and AdaBoost methods. We developed an algorithm for simultaneously clustering eliciting dose values and selecting discriminant variables. Our main conclusion is that antibody measurements offer information on the allergy severity, especially those directed against rAra-h1 and rAra-h3. Further independent validation of these results and the use of new predictors will help extend this study to clinical practices.  相似文献   
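The cross-validated comparison of classifiers can be sketched as below, using the same model families named in the abstract but synthetic stand-in data rather than the immunoassay and skin prick test measurements.

```python
# Cross-validated comparison of the classifier families listed in the abstract,
# on synthetic data of roughly the same dimensions (76 patients, 34 predictors).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=76, n_features=34, n_informative=6, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "CART": DecisionTreeClassifier(max_depth=3, random_state=0),
    "penalized SVM": LinearSVC(penalty="l1", dual=False, C=0.5),
    "logistic": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:>14}: {scores.mean():.2f} accuracy")
```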

8.
Multiple imputation is a common approach for dealing with missing values in statistical databases. The imputer fills in missing values with draws from predictive models estimated from the observed data, resulting in multiple, completed versions of the database. Researchers have developed a variety of default routines to implement multiple imputation; however, there has been limited research comparing the performance of these methods, particularly for categorical data. We use simulation studies to compare repeated sampling properties of three default multiple imputation methods for categorical data, including chained equations using generalized linear models, chained equations using classification and regression trees, and a fully Bayesian joint distribution based on Dirichlet process mixture models. We base the simulations on categorical data from the American Community Survey. In the circumstances of this study, the results suggest that default chained equations approaches based on generalized linear models are dominated by the default regression tree and Bayesian mixture model approaches. They also suggest competing advantages for the regression tree and Bayesian mixture model approaches, making both reasonable default engines for multiple imputation of categorical data. Supplementary material for this article is available online.  相似文献   
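As a toy illustration of the tree-based chained-equations engine, the sketch below runs one imputation step for a categorical variable by fitting a classification tree and drawing imputed categories from the leaf-level class probabilities. The variable names are invented, and a full implementation would iterate over variables and produce several completed data sets.

```python
# One step of CART-based chained-equation imputation for a categorical variable.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(9)
n = 1000
df = pd.DataFrame({
    "educ": rng.choice(["hs", "college", "grad"], size=n),
    "employed": rng.choice(["yes", "no"], size=n, p=[0.7, 0.3]),
    "age_group": rng.choice(["18-34", "35-54", "55+"], size=n),
})
df.loc[rng.random(n) < 0.2, "employed"] = np.nan     # inject missingness

obs = df["employed"].notna()
X = pd.get_dummies(df[["educ", "age_group"]])
tree = DecisionTreeClassifier(min_samples_leaf=25, random_state=0)
tree.fit(X[obs], df.loc[obs, "employed"])

# Impute by sampling from the class probabilities in each case's leaf, rather
# than taking the most likely class, to preserve variability across imputations.
proba = tree.predict_proba(X[~obs])
classes = tree.classes_
df.loc[~obs, "employed"] = [rng.choice(classes, p=p) for p in proba]
print(df["employed"].value_counts())
```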

9.
Abstract. In geophysical and environmental problems, it is common to have multiple variables of interest measured at the same location and time. These multiple variables typically have dependence over space (and/or time). As a consequence, there is a growing interest in developing models for multivariate spatial processes, in particular, the cross‐covariance models. On the other hand, many data sets these days cover a large portion of the Earth such as satellite data, which require valid covariance models on a globe. We present a class of parametric covariance models for multivariate processes on a globe. The covariance models are flexible in capturing non‐stationarity in the data yet computationally feasible and require moderate numbers of parameters. We apply our covariance model to surface temperature and precipitation data from an NCAR climate model output. We compare our model to the multivariate version of the Matérn cross‐covariance function and models based on coregionalization and demonstrate the superior performance of our model in terms of AIC (and/or maximum loglikelihood values) and predictive skill. We also present some challenges in modelling the cross‐covariance structure of the temperature and precipitation data. Based on the fitted results using full data, we give the estimated cross‐correlation structure between the two variables.  相似文献   
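A small sketch of the univariate building block behind such models: a covariance matrix for irregularly placed locations on the globe, built from great-circle distances and an exponential covariance (which remains valid with geodesic distance). The paper's multivariate cross-covariance construction and its non-stationarity are not reproduced here.

```python
# Covariance matrix over (lat, lon) sites using great-circle distance.
import numpy as np

def great_circle(points_a, points_b, radius=6371.0):
    """Great-circle distance (km) between arrays of (lat, lon) points in degrees."""
    a = np.radians(points_a)[:, None, :]
    b = np.radians(points_b)[None, :, :]
    central = np.arccos(np.clip(
        np.sin(a[..., 0]) * np.sin(b[..., 0])
        + np.cos(a[..., 0]) * np.cos(b[..., 0]) * np.cos(a[..., 1] - b[..., 1]),
        -1.0, 1.0))
    return radius * central

def exponential_cov(d, sigma2=1.0, range_km=2000.0):
    """Exponential covariance, which stays positive definite with geodesic distance."""
    return sigma2 * np.exp(-d / range_km)

rng = np.random.default_rng(11)
sites = np.column_stack([rng.uniform(-90, 90, 50), rng.uniform(-180, 180, 50)])
K = exponential_cov(great_circle(sites, sites))
print(K.shape, bool(np.all(np.linalg.eigvalsh(K) > -1e-8)))
```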

10.
The application of conventional statistical methods to directional data generally produces erroneous results. Various regression models for a circular response have been presented in the literature; however, these are unsatisfactory either in the limited relationships that can be modeled or in the limitations on the number or type of admissible covariates. One difficulty with circular regression is devising a meaningful regression function, a problem that is exacerbated when trying to incorporate both linear and circular variables as covariates. Because of these complexities, circular regression is ripe for exploration via tree-based methods, in which a formal regression function is not needed but insight into the general structure of the relationship between the predictors and the response may still be obtained. A basic framework for regression trees, predicting a circular response from a combination of circular and linear predictors, is presented.
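One plausible ingredient of such a tree, sketched below on synthetic data: a split criterion that chooses the cut point minimizing the within-node circular variance of the angular response. This is an illustration of the idea, not the authors' algorithm.

```python
# Choosing a split on a linear covariate by minimizing circular variance.
import numpy as np

def circular_variance(theta):
    """Circular variance of angles in radians: 1 minus the mean resultant length."""
    return 1.0 - np.hypot(np.mean(np.cos(theta)), np.mean(np.sin(theta)))

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 500)                    # linear covariate
theta = np.where(x < 5,
                 rng.vonmises(mu=0.0, kappa=4, size=500),
                 rng.vonmises(mu=np.pi / 2, kappa=4, size=500))

def split_score(threshold):
    left, right = theta[x < threshold], theta[x >= threshold]
    n = len(theta)
    return (len(left) * circular_variance(left) + len(right) * circular_variance(right)) / n

candidates = np.linspace(1, 9, 81)
best = candidates[np.argmin([split_score(c) for c in candidates])]
print("best split on x:", round(best, 2))      # should be close to 5
```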

11.
In many medical studies, patients are followed longitudinally and interest is on assessing the relationship between longitudinal measurements and time to an event. Recently, various authors have proposed joint modeling approaches for longitudinal and time-to-event data for a single longitudinal variable. These joint modeling approaches become intractable with even a few longitudinal variables. In this paper we propose a regression calibration approach for jointly modeling multiple longitudinal measurements and discrete time-to-event data. Ideally, a two-stage modeling approach could be applied in which the multiple longitudinal measurements are modeled in the first stage and the longitudinal model is related to the time-to-event data in the second stage. Biased parameter estimation due to informative dropout makes this direct two-stage modeling approach problematic. We propose a regression calibration approach which appropriately accounts for informative dropout. We approximate the conditional distribution of the multiple longitudinal measurements given the event time by modeling all pairwise combinations of the longitudinal measurements using a bivariate linear mixed model which conditions on the event time. Complete data are then simulated based on estimates from these pairwise conditional models, and regression calibration is used to estimate the relationship between longitudinal data and time-to-event data using the complete data. We show that this approach performs well in estimating the relationship between multivariate longitudinal measurements and the time-to-event data and in estimating the parameters of the multiple longitudinal process subject to informative dropout. We illustrate this methodology with simulations and with an analysis of primary biliary cirrhosis (PBC) data.  相似文献   

12.
In a regression or classification setting where we wish to predict Y from x1, x2, ..., xp, we suppose that an additional set of coaching variables z1, z2, ..., zm is available in our training sample. These might be variables that are difficult to measure, and they will not be available when we predict Y from x1, x2, ..., xp in the future. We consider two methods of making use of the coaching variables in order to improve the prediction of Y from x1, x2, ..., xp. The relative merits of these approaches are discussed and compared in a number of examples.

13.
Generalized linear models are well-established generalizations of the linear models used for regression and analysis of variance. They allow flexible mean structures and general distributions, other than the linear link and normal response assumed in regression. Further enhancements using ideas from multivariate analysis improve power and precision by modelling dependencies between response variables. This paper focuses on the specific case of regression models for bivariate Bernoulli responses and investigates their analysis using a Bayesian approach. The important problem of renal arterial obstruction is considered, as a medical application of these models.  相似文献   

14.
Quantile regression models are a powerful tool for studying different points of the conditional distribution of univariate response variables. Their multivariate counterpart extension though is not straightforward, starting with the definition of multivariate quantiles. We propose here a flexible Bayesian quantile regression model when the response variable is multivariate, where we are able to define a structured additive framework for all predictor variables. We build on previous ideas considering a directional approach to define the quantiles of a response variable with multiple outputs, and we define noncrossing quantiles in every directional quantile model. We define a Markov chain Monte Carlo (MCMC) procedure for model estimation, where the noncrossing property is obtained considering a Gaussian process design to model the correlation between several quantile regression models. We illustrate the results of these models using two datasets: one on dimensions of inequality in the population, such as income and health; the second on scores of students in the Brazilian High School National Exam, considering three dimensions for the response variable.  相似文献   

15.
The partial least squares (PLS) approach first constructs new explanatory variables, known as factors (or components), which are linear combinations of available predictor variables. A small subset of these factors is then chosen and retained for prediction. We study the performance of PLS in estimating single-index models, especially when the predictor variables exhibit high collinearity. We show that PLS estimates are consistent up to a constant of proportionality. We present three simulation studies that compare the performance of PLS in estimating single-index models with that of sliced inverse regression (SIR). In the first two studies, we find that PLS performs better than SIR when collinearity exists. In the third study, we learn that PLS performs well even when there are multiple dependent variables, the link function is non-linear and the shape of the functional form is not known.  相似文献   
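A small sketch of PLS applied to a single-index model with collinear predictors sharing a common factor; the fitted coefficient vector can be compared with the true index direction, which the abstract says is recovered up to a constant of proportionality. The data are synthetic.

```python
# PLS on a synthetic single-index model with a non-linear link.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n, p = 500, 10
latent = rng.normal(size=(n, 1))
X = 0.7 * latent + rng.normal(size=(n, p))     # predictors sharing a common factor
beta = np.array([1.0, -0.5, 0.8] + [0.0] * (p - 3))
index = X @ beta
y = np.sin(index) + index + rng.normal(scale=0.2, size=n)   # non-linear link

pls = PLSRegression(n_components=2).fit(X, y)
coef = pls.coef_.ravel()
# Compare the direction of the PLS coefficient vector with the true index direction.
cosine = coef @ beta / (np.linalg.norm(coef) * np.linalg.norm(beta))
print("cosine similarity with true index direction:", round(float(cosine), 3))
```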

16.
We discuss maximum likelihood and estimating equations methods for combining results from multiple studies in pooling projects and data consortia using a meta-analysis model, when the multivariate estimates with their covariance matrices are available. The estimates to be combined are typically regression slopes, often from relative risk models in biomedical and epidemiologic applications. We generalize the existing univariate meta-analysis model and investigate the efficiency advantages of the multivariate methods, relative to the univariate ones. We generalize a popular univariate test for between-studies homogeneity to a multivariate test. The methods are applied to a pooled analysis of type of carotenoids in relation to lung cancer incidence from seven prospective studies. In these data, the expected gain in efficiency was evident, sometimes to a large extent. Finally, we study the finite sample properties of the estimators and compare the multivariate ones to their univariate counterparts.  相似文献   
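The basic multivariate fixed-effect combination step can be written in a few lines: with study-specific coefficient vectors and their covariance matrices, the pooled estimate is the precision-weighted average. The numbers below are invented, and the random-effects extension and homogeneity test discussed in the paper are omitted.

```python
# Multivariate fixed-effect meta-analysis: pooled beta = (sum V_i^-1)^-1 sum V_i^-1 beta_i.
import numpy as np

# Toy inputs: three studies, two regression slopes each (e.g., two carotenoids).
betas = [np.array([0.12, -0.05]),
         np.array([0.10, -0.02]),
         np.array([0.15, -0.07])]
covs = [np.diag([0.004, 0.003]),
        np.array([[0.006, 0.001], [0.001, 0.005]]),
        np.diag([0.005, 0.004])]

precision = sum(np.linalg.inv(V) for V in covs)
weighted_sum = sum(np.linalg.inv(V) @ b for V, b in zip(covs, betas))

pooled_cov = np.linalg.inv(precision)
pooled_beta = pooled_cov @ weighted_sum
print("pooled slopes:", pooled_beta)
print("pooled standard errors:", np.sqrt(np.diag(pooled_cov)))
```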

17.
Running complex computer models can be expensive in computer time, while learning about the relationships between input and output variables can be difficult. An emulator is a fast approximation to a computationally expensive model that can be used as a surrogate for the model, to quantify uncertainty or to improve process understanding. Here, we examine emulators based on singular value decompositions (SVDs) and use them to emulate global climate and vegetation fields, examining how these fields are affected by changes in the Earth's orbit. The vegetation field may be emulated directly from the orbital variables, but an appealing alternative is to relate it to emulations of the climate fields, which involves high-dimensional input and output. The SVDs radically reduce the dimensionality of the input and output spaces and are shown to clarify the relationships between them. The method could potentially be useful for any complex process with correlated, high-dimensional inputs and/or outputs.  相似文献   
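A minimal sketch of an SVD-based emulator on synthetic fields: decompose the training output fields into a few leading modes, regress the mode coefficients on the inputs, and reconstruct predicted fields for new input settings. The climate and vegetation application, and the chaining of one emulator to another, are not reproduced.

```python
# SVD emulator: inputs -> coefficients of leading output modes -> reconstructed field.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n_runs, n_grid = 40, 2000                      # simulator runs, grid cells per field
inputs = rng.uniform(-1, 1, size=(n_runs, 3))  # e.g., orbital parameters
patterns_true = rng.normal(size=(3, n_grid))
fields = inputs @ patterns_true + 0.1 * rng.normal(size=(n_runs, n_grid))

# Truncated SVD of the centered output fields keeps a handful of spatial modes.
mean_field = fields.mean(axis=0)
U, s, Vt = np.linalg.svd(fields - mean_field, full_matrices=False)
k = 3
scores = U[:, :k] * s[:k]                      # per-run coefficients of the leading modes

# Emulate: map inputs to the mode coefficients, then back to the full field.
reg = LinearRegression().fit(inputs, scores)
new_inputs = rng.uniform(-1, 1, size=(5, 3))
emulated_fields = reg.predict(new_inputs) @ Vt[:k] + mean_field
print(emulated_fields.shape)                   # (5, 2000)
```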

18.
Nonlinear regression models arise when definite information is available about the form of the relationship between the response and predictor variables. Such information might involve direct knowledge of the actual form of the true model or might be represented by a set of differential equations that the model must satisfy. We develop M-procedures for estimating parameters and testing hypotheses of interest about these parameters in nonlinear regression models for repeated measurement data. Under regularity conditions, the asymptotic properties of the M-procedures are presented, including the uniform linearity, normality and consistency. The computation of the M-estimators of the model parameters is performed with iterative procedures, similar to Newton–Raphson and Fisher's scoring methods. The methodology is illustrated by using a multivariate logistic regression model with real data, along with a simulation study.  相似文献   
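A hedged sketch of an M-type fit for a nonlinear (logistic-curve) regression using a robust Huber-type loss, in the spirit of the M-procedures described above; the paper's estimators, tests, and repeated-measures structure are more general.

```python
# Least squares versus a robust (Huber-loss) fit of a logistic growth curve.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(10)
t = np.linspace(0, 10, 80)
true = [10.0, 1.2, 5.0]                       # asymptote, rate, midpoint
y = true[0] / (1 + np.exp(-true[1] * (t - true[2]))) + rng.normal(scale=0.3, size=t.size)
y[::15] += 4.0                                # a few gross outliers

def residuals(params):
    a, b, c = params
    return a / (1 + np.exp(-b * (t - c))) - y

ls_fit = least_squares(residuals, x0=[8, 1, 4])                            # ordinary least squares
m_fit = least_squares(residuals, x0=[8, 1, 4], loss="huber", f_scale=0.5)  # M-type robust fit
print("LS estimate:    ", np.round(ls_fit.x, 2))
print("robust estimate:", np.round(m_fit.x, 2))
```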

19.
Value at Risk (VaR) forecasts can be produced from conditional autoregressive VaR models, estimated using quantile regression. Quantile modeling avoids a distributional assumption, and allows the dynamics of the quantiles to differ for each probability level. However, by focusing on a quantile, these models provide no information regarding expected shortfall (ES), which is the expectation of the exceedances beyond the quantile. We introduce a method for predicting ES corresponding to VaR forecasts produced by quantile regression models. It is well known that quantile regression is equivalent to maximum likelihood based on an asymmetric Laplace (AL) density. We allow the density's scale to be time-varying, and show that it can be used to estimate conditional ES. This enables a joint model of conditional VaR and ES to be estimated by maximizing an AL log-likelihood. Although this estimation framework uses an AL density, it does not rely on an assumption for the returns distribution. We also use the AL log-likelihood for forecast evaluation, and show that it is strictly consistent for the joint evaluation of VaR and ES. Empirical illustration is provided using stock index data. Supplementary materials for this article are available online.  相似文献   
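The joint estimation of VaR and ES can be sketched with a closely related device: minimizing a strictly consistent joint scoring function (the FZ0 form from the Fissler-Ziegel class) over a VaR and an ES parameter. This is a static stand-in for the paper's dynamic AL-likelihood approach, run on synthetic returns.

```python
# Joint VaR/ES estimation by minimizing the FZ0 joint scoring function.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
returns = 0.01 * rng.standard_t(df=5, size=5000)   # synthetic daily returns
tau = 0.025                                        # lower-tail probability level

def fz0_loss(params):
    var, log_neg_es = params
    es = -np.exp(log_neg_es)                       # keep ES strictly negative
    hit = (returns <= var).astype(float)
    loss = (-hit * (var - returns) / (tau * es)
            + var / es + np.log(-es) - 1.0)
    return loss.mean()

start = [np.quantile(returns, tau), np.log(0.02)]
fit = minimize(fz0_loss, start, method="Nelder-Mead")
var_hat, es_hat = fit.x[0], -np.exp(fit.x[1])

print("VaR estimate:", round(var_hat, 4))
print("ES  estimate:", round(es_hat, 4))
print("empirical ES:", round(returns[returns <= var_hat].mean(), 4))
```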

20.
Physical activity measurements derived from self-report surveys are prone to measurement errors. Monitoring devices like accelerometers offer more objective measurements of physical activity, but are impractical for use in large-scale surveys. A model capable of predicting objective measurements of physical activity from self-reports would offer a practical alternative to obtaining measurements directly from monitoring devices. Using data from National Health and Nutrition Examination Survey 2003–2006, we developed and validated models for predicting objective physical activity from self-report variables and other demographic characteristics. The prediction intervals produced by the models were large, suggesting that the ability to predict objective physical activity for individuals from self-reports is limited.  相似文献   
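The closing point about wide prediction intervals can be illustrated with a simple OLS fit and its individual-level prediction interval; the data and variable names below are synthetic, not the NHANES fields.

```python
# Individual-level prediction intervals from a regression of an objective
# activity measure on self-report variables (synthetic illustration).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 1000
self_report = rng.gamma(shape=2, scale=30, size=n)   # self-reported minutes/week
age = rng.uniform(20, 70, size=n)
objective = 20 + 0.4 * self_report - 0.2 * age + rng.normal(scale=25, size=n)

X = sm.add_constant(np.column_stack([self_report, age]))
fit = sm.OLS(objective, X).fit()

new_person = sm.add_constant(np.array([[150.0, 45.0]]), has_constant="add")
pred = fit.get_prediction(new_person)
print(pred.summary_frame(alpha=0.05)[["mean", "obs_ci_lower", "obs_ci_upper"]])
```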
