Similar Articles
1.
The variance of the error term in ordinary regression models and linear smoothers is usually estimated by adjusting the average squared residual for the trace of the smoothing matrix (the degrees of freedom of the predicted response). However, other types of variance estimators are needed when using monotonic regression (MR) models, which are particularly suitable for estimating response functions with pronounced thresholds. Here, we propose a simple bootstrap estimator to compensate for the over-fitting that occurs when MR models are estimated from empirical data. Furthermore, we show that, in the case of one or two predictors, the performance of this estimator can be enhanced by introducing adjustment factors that take into account the slope of the response function and characteristics of the distribution of the explanatory variables. Extensive simulations show that our estimators perform satisfactorily for a great variety of monotonic functions and error distributions.
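As a rough illustration of the kind of correction involved, the sketch below fits an isotonic regression and adjusts the naive mean squared residual by a bootstrap estimate of the optimism caused by over-fitting. The function name, the optimism-style correction, and all defaults are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def bootstrap_mr_variance(x, y, n_boot=200, rng=None):
    """Bias-corrected residual variance for monotonic regression (sketch).

    The naive mean squared residual of an isotonic fit underestimates the
    error variance because the fit adapts to the noise; here we add back an
    optimism term estimated from bootstrap resamples.  Illustrative only.
    """
    rng = np.random.default_rng(rng)
    n = len(y)
    fit = IsotonicRegression(out_of_bounds="clip").fit(x, y)
    naive = np.mean((y - fit.predict(x)) ** 2)
    optimism = 0.0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        bfit = IsotonicRegression(out_of_bounds="clip").fit(x[idx], y[idx])
        # in-sample error on the bootstrap sample vs. error on the full data
        err_in = np.mean((y[idx] - bfit.predict(x[idx])) ** 2)
        err_out = np.mean((y - bfit.predict(x)) ** 2)
        optimism += (err_out - err_in) / n_boot
    return naive + optimism
```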

2.
Models for geostatistical data introduce spatial dependence in the covariance matrix of location-specific random effects. This is usually defined to be a parametric function of the distances between locations. Bayesian formulations of such models overcome asymptotic inference and estimation problems involved in maximum likelihood-based approaches and can be fitted using Markov chain Monte Carlo (MCMC) simulation. The MCMC implementation, however, requires repeated inversions of the covariance matrix, which makes the problem computationally intensive, especially for a large number of locations. In the present work, we propose to convert the spatial covariance matrix to a sparse matrix and compare a number of numerical algorithms especially suited to the MCMC framework in order to accelerate the inversion of large matrices. The algorithms are assessed empirically on simulated datasets of different size and sparsity. We conclude that the band solver applied after ordering the distance matrix substantially reduces the computational time needed to invert covariance matrices.
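A minimal sketch of the band-solver idea, assuming an exponential covariance, a crude ordering by the first coordinate, and a hard truncation outside the band (a proper implementation would use a positive-definite taper and a distance-based ordering, as in the paper):

```python
import numpy as np
from scipy.linalg import solveh_banded
from scipy.spatial.distance import pdist, squareform

def banded_solve(coords, y, phi=1.0, bandwidth=30):
    """Solve (banded covariance) @ x = y instead of forming a dense inverse.

    Sketch only: hard truncation outside the band can break positive
    definiteness, so in practice a tapering function is used.
    """
    d = squareform(pdist(coords))
    order = np.argsort(coords[:, 0])          # simple ordering heuristic
    d = d[np.ix_(order, order)]
    cov = np.exp(-d / phi)                    # exponential covariance
    n = len(y)
    # keep only the diagonals inside the band (lower form for solveh_banded)
    ab = np.zeros((bandwidth + 1, n))
    for k in range(bandwidth + 1):
        ab[k, : n - k] = np.diag(cov, -k)
    return solveh_banded(ab, y[order], lower=True)
```

The banded Cholesky solve behind `solveh_banded` costs O(n·bandwidth²) rather than the O(n³) of a dense inversion, which is the source of the speed-up reported above.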

3.
Discrete choice models describe the choices made by decision makers among alternatives and play an important role in transportation planning, marketing research and other applications. The mixed multinomial logit (MMNL) model is a popular discrete choice model that captures heterogeneity in the preferences of decision makers through random coefficients. While Markov chain Monte Carlo methods provide the Bayesian analogue to classical procedures for estimating MMNL models, computations can be prohibitively expensive for large datasets. Approximate inference can be obtained using variational methods at a lower computational cost with competitive accuracy. In this paper, we develop variational methods for estimating MMNL models that allow random coefficients to be correlated in the posterior and can be extended easily to large-scale datasets. We explore three alternatives: (1) Laplace variational inference, (2) nonconjugate variational message passing and (3) stochastic linear regression. Their performances are compared using real and simulated data. To accelerate convergence for large datasets, we develop stochastic variational inference for MMNL models using each of the above alternatives. Stochastic variational inference allows data to be processed in minibatches by optimizing global variational parameters using stochastic gradient approximation. A novel strategy for increasing minibatch sizes adaptively within stochastic variational inference is proposed.
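A generic sketch of the stochastic-variational-inference loop described above: global variational parameters are updated by stochastic gradient steps on minibatches whose size grows over time. The gradient function, the step-size schedule, and the doubling rule are placeholders, not the paper's adaptive strategy.

```python
import numpy as np

def svi(grad_elbo, lam0, data, n_iter=1000, batch0=64, grow_every=200):
    """Stochastic variational inference skeleton (illustrative)."""
    lam = lam0.copy()
    batch = batch0
    n = len(data)
    rng = np.random.default_rng(0)
    for t in range(1, n_iter + 1):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        # unbiased ELBO-gradient estimate: rescale the minibatch contribution
        g = grad_elbo(lam, data[idx]) * (n / len(idx))
        rho = (t + 10) ** -0.7              # Robbins-Monro step size
        lam += rho * g
        if t % grow_every == 0:             # crude stand-in for the adaptive rule
            batch = min(2 * batch, n)
    return lam
```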

4.
Modern statistical applications involving large data sets have focused attention on statistical methodologies that are both computationally efficient and able to deal with the screening of large numbers of different candidate models. Here we consider computationally efficient variational Bayes approaches to inference in high-dimensional heteroscedastic linear regression, where both the mean and variance are described in terms of linear functions of the predictors and where the number of predictors can be larger than the sample size. We derive a closed-form variational lower bound on the log marginal likelihood useful for model selection, and propose a novel fast greedy search algorithm on the model space which makes use of one-step optimization updates to the variational lower bound in the current model for screening large numbers of candidate predictor variables for inclusion/exclusion in a computationally thrifty way. We show that the model search strategy we suggest is related to widely used orthogonal matching pursuit algorithms for model search but yields a framework for potentially extending these algorithms to more complex models. The methodology is applied in simulations and in two real examples involving prediction for food constituents using NIR technology and prediction of disease progression in diabetes.
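To make the orthogonal-matching-pursuit connection concrete, here is a bare-bones greedy screen where each step adds the candidate whose one-step score (here simply correlation with the current residual) is largest; the paper's algorithm scores candidates by the one-step update to the variational lower bound instead. Names and details are illustrative.

```python
import numpy as np

def greedy_screen(X, y, n_select):
    """OMP-style greedy predictor screening (illustrative sketch)."""
    y = y - y.mean()                       # work with a centred response
    active, residual = [], y.copy()
    for _ in range(n_select):
        scores = np.abs(X.T @ residual)    # cheap one-step score per candidate
        if active:
            scores[active] = -np.inf       # skip already-selected columns
        j = int(np.argmax(scores))
        active.append(j)
        Xa = X[:, active]
        beta, *_ = np.linalg.lstsq(Xa, y, rcond=None)
        residual = y - Xa @ beta           # refit on the active set
    return active
```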

5.
In this article we focus on logistic regression models for binary responses. An existing result shows that the log-odds can be expressed in terms of the log of the ratio between the conditional densities of the predictors given the response variable. This suggests that relevant statistical information can be extracted by investigating the inverse problem. Thus, we present different methods for studying the log-density ratio through graphs, which allow us to select which predictors are needed and how they should be included in a logistic regression model. We also discuss data analysis examples based on real datasets available in the literature in order to provide further insight into the proposed methodology.
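The underlying identity is the standard Bayes decomposition of the log-odds:

\[
\log\frac{P(Y=1\mid x)}{P(Y=0\mid x)}
  = \log\frac{f(x\mid Y=1)}{f(x\mid Y=0)} + \log\frac{P(Y=1)}{P(Y=0)} .
\]

For example, Gaussian conditional densities with equal variances make the log-density ratio, and hence the logistic predictor, linear in x; unequal variances add a quadratic term. This is why graphs of the estimated log-density ratio indicate how each predictor should enter the model.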

6.
In many biomedical studies, covariates are subject to measurement error. Although it is well known that regression coefficient estimators can be substantially biased if the measurement error is not accommodated, there has been little study of the effect of covariate measurement error on the estimation of the dependence between bivariate failure times. We show that the dependence parameter estimator in the Clayton–Oakes model can be considerably biased if the measurement error in the covariate is not accommodated. In contrast with the typical bias towards the null for marginal regression coefficients, the dependence parameter can be biased in either direction. We introduce a bias reduction technique for the bivariate survival function in copula models while assuming an additive measurement error model and replicated measurements for the covariates, and we study the large- and small-sample properties of the proposed dependence parameter estimator.
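For reference, the Clayton–Oakes model ties the joint survival function to the margins through a single dependence parameter:

\[
S(t_1, t_2) = \left\{ S_1(t_1)^{-\theta} + S_2(t_2)^{-\theta} - 1 \right\}^{-1/\theta},
\qquad \theta > 0,
\]

with independence recovered as \(\theta \to 0\); it is the estimator of this \(\theta\) that covariate measurement error can bias in either direction.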

7.
Due to rapid data growth, statistical analysis of massive datasets often has to be carried out in a distributed fashion, either because several datasets stored in separate physical locations are all relevant to a given problem, or simply to achieve faster (parallel) computation through a divide-and-conquer scheme. In both cases, the challenge is to obtain valid inference that does not require processing all data at a single central computing node. We show that for a very widely used class of spatial low-rank models, which can be written as a linear combination of spatial basis functions plus a fine-scale-variation component, parallel spatial inference and prediction for massive distributed data can be carried out exactly, meaning that the results are the same as for a traditional, non-distributed analysis. The communication cost of our distributed algorithms does not depend on the number of data points. After extending our results to the spatio-temporal case, we illustrate our methodology by carrying out distributed spatio-temporal particle filtering inference on total precipitable water measured by three different satellite sensor systems.
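A common way to write the low-rank class referred to above (the notation here is generic, not necessarily the paper's) is

\[
y(\mathbf{s}) = \mathbf{b}(\mathbf{s})^{\top}\boldsymbol{\eta} + \delta(\mathbf{s}) + \varepsilon(\mathbf{s}),
\]

where \(\mathbf{b}(\mathbf{s})\) is a vector of \(r\) spatial basis functions with random coefficients \(\boldsymbol{\eta}\), \(\delta(\mathbf{s})\) is the fine-scale-variation component, and \(\varepsilon(\mathbf{s})\) is measurement noise. Because inference on \(\boldsymbol{\eta}\) needs only \(r\)-dimensional summaries from each node, the communication cost is independent of the number of data points.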

8.
Although the effect of missing data on regression estimates has received considerable attention, their effect on predictive performance has been neglected. We studied the performance of three missing data strategies—omission of records with missing values, replacement with a mean and imputation based on regression—on the predictive performance of logistic regression (LR), classification tree (CT) and neural network (NN) models in the presence of data missing completely at random (MCAR). Models were constructed using datasets of size 500 simulated from a joint distribution of binary and continuous predictors including nonlinearities, collinearity and interactions between variables. Though omission produced models that fit better on the data from which the models were developed, imputation was superior on average to omission for all models when evaluating the receiver operating characteristic (ROC) curve area, mean squared error (MSE), pooled variance across outcome categories and calibration χ2 on an independently generated test set. However, in about one-third of simulations, omission performed better. Performance was also more variable with omission, including quite a few instances of extremely poor performance. Replacement and imputation generally produced similar results, except with neural networks, for which replacement, the strategy typically used in neural network algorithms, was inferior to imputation. Missing data affected simpler models much less than they did more complex models such as generalized additive models that focus on local structure. For moderate-sized datasets, logistic regressions that use simple nonlinear structures such as quadratic terms and piecewise linear splines appear to be at least as robust to randomly missing values as neural networks and classification trees.
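The three strategies compared are easy to state in code. Below is an illustrative sketch for one numeric column `j` containing `np.nan` values; the regression-imputation variant assumes, for simplicity, that the other columns are complete.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def omit(X, y, j):
    """Drop records with a missing value in column j."""
    keep = ~np.isnan(X[:, j])
    return X[keep], y[keep]

def mean_replace(X, j):
    """Replace missing values in column j with the column mean."""
    X = X.copy()
    col = X[:, j]
    col[np.isnan(col)] = np.nanmean(col)
    return X

def regression_impute(X, j):
    """Predict missing values in column j from the other columns.

    Sketch: assumes the remaining columns have no missing values.
    """
    X = X.copy()
    miss = np.isnan(X[:, j])
    others = np.delete(X, j, axis=1)
    model = LinearRegression().fit(others[~miss], X[~miss, j])
    X[miss, j] = model.predict(others[miss])
    return X
```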

9.
In this article, we discuss the utility of tolerance intervals for various regression models. We begin with a discussion of tolerance intervals for linear and nonlinear regression models. We then introduce a novel method for constructing nonparametric regression tolerance intervals by extending the well-established procedure for univariate data. Simulation results and application to real datasets are presented to help visualize regression tolerance intervals and to demonstrate that the methods we discuss have coverage probabilities very close to the specified nominal confidence level.

10.
In this article, we propose a parametric model for the distribution of time to first event when events are overdispersed and can be properly fitted by a Negative Binomial distribution. This is a very common situation in medical statistics, where the occurrence of events is summarized as a count for each patient and the simple Poisson model is not adequate to account for the overdispersion of the data. In this situation, studying the time of occurrence of the first event can be of interest. From the Negative Binomial distribution of counts, we derive a new parametric model for time to first event and apply it to fit the distribution of time to first relapse in multiple sclerosis (MS). We extend the model to a regression setting with methods for estimating covariate effects. We show that, just as the Negative Binomial model properly fits relapse count data, this new model matches the distribution of time to first relapse very closely, as tested in two large datasets of MS patients. Finally, we compare its performance, when fitting time to first relapse in MS, with other models widely used in survival analysis (the semiparametric Cox model and the parametric exponential, Weibull, log-logistic and log-normal models).
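One natural route to such a model, sketched here under the assumption of a gamma-mixed Poisson process (the paper's exact parameterization may differ): if the count \(N_t\) of events in \((0, t]\) is Negative Binomial with mean \(\mu t\) and shape \(k\), then the time to first event satisfies

\[
S(t) = P(T_1 > t) = P(N_t = 0) = \left(\frac{k}{k + \mu t}\right)^{k},
\]

which reduces to the exponential survival function \(e^{-\mu t}\) as \(k \to \infty\), i.e. when there is no overdispersion.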

11.
The complete-data model that underlies an Expectation-Maximization (EM) algorithm must have a parameter space that coincides with the parameter space of the observed-data model. Otherwise, maximization of the observed-data log-likelihood will be carried out over a space that does not coincide with the desired parameter space. In some contexts, however, a natural complete-data model may be defined only for parameter values within a subset of the observed-data parameter space. In this paper we discuss situations where this can still be useful if the complete-data model can be viewed as a member of a finite family of complete-data models whose parameter spaces collectively cover the observed-data parameter space. Such a family of complete-data models defines a family of EM algorithms which together lead to a finite collection of constrained maxima of the observed-data log-likelihood. Maximization of the log-likelihood function over the full parameter space then involves identifying the constrained maximum that achieves the greatest log-likelihood value. Since optimization over a finite collection of candidates is referred to as combinatorial optimization, we refer to such a family of EM algorithms as a combinatorial EM (CEM) algorithm. In addition to discussing the theoretical concepts behind CEM algorithms, we present strategies for improving computational efficiency when the number of complete-data models is large. Various applications of CEM algorithms are also discussed, ranging from simple examples that illustrate the concepts to more substantive examples that demonstrate the usefulness of CEM algorithms in practice.
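The outer loop of a CEM run is simple to write down. In the sketch below, `em_algorithms` is a hypothetical list of functions, each running EM for one complete-data model and returning its constrained maximizer together with the observed-data log-likelihood it attains.

```python
def cem(em_algorithms, data):
    """Combinatorial EM: run each member EM and keep the best maximum."""
    best_theta, best_loglik = None, float("-inf")
    for em in em_algorithms:
        theta, loglik = em(data)          # constrained maximum for this model
        if loglik > best_loglik:
            best_theta, best_loglik = theta, loglik
    return best_theta, best_loglik
```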

12.
Additive varying coefficient models are a natural extension of multiple linear regression models, allowing the regression coefficients to be functions of other variables. These models are therefore flexible enough to capture more complex dependencies in the data. In this paper we consider the problem of automatically selecting the significant variables among a large set of variables when the interest is in a given response variable. In recent years several grouped regularization methods have been proposed, and in this paper we present them under one unified framework in the varying coefficient model context. For each of the discussed grouped regularization methods we investigate the optimization problem to be solved, possible algorithms for doing so, and the variable and estimation consistency of the methods. We investigate the finite-sample performance of these methods in a comparative study and illustrate them on real data examples.
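A representative member of this framework (a group lasso; the notation is illustrative): expanding each coefficient function as \(\beta_j(z) = \mathbf{b}(z)^{\top}\boldsymbol{\gamma}_j\) in a spline basis \(\mathbf{b}(\cdot)\), one solves

\[
\min_{\{\boldsymbol{\gamma}_j\}} \;
\sum_{i=1}^{n}\Big( y_i - \sum_{j=1}^{p} x_{ij}\,
\mathbf{b}(z_i)^{\top}\boldsymbol{\gamma}_j \Big)^{2}
+ \lambda \sum_{j=1}^{p} \sqrt{d_j}\,\lVert \boldsymbol{\gamma}_j \rVert_2 ,
\]

where \(d_j\) is the size of group \(j\). The unsquared group norm zeroes out an entire \(\boldsymbol{\gamma}_j\), and hence the whole coefficient function, at once, which is what makes the penalty a variable selector.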

13.
Spatial outliers are spatially referenced objects whose non-spatial attribute values differ significantly from the corresponding values in their spatial neighborhoods. In other words, a spatial outlier is a local instability, an extreme observation that deviates significantly within its spatial neighborhood but possibly not in the entire dataset. In this article, we propose a novel spatial outlier detection algorithm based on the location quotient (LQ) for multi-attribute spatial datasets and compare its performance with the well-known mean and median algorithms for such datasets from the literature. In particular, we applied the mean, median, and LQ algorithms to a real dataset and to simulated spatial datasets of 13 different sizes to compare their performances. In addition, we calculated area-under-the-curve values in all cases, which show that the proposed algorithm is more powerful than the mean and median algorithms in almost all the considered cases, and we also plotted receiver operating characteristic curves in some cases.
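For orientation only: in its classical regional-science form, the location quotient compares a local share of an attribute with the corresponding global share,

\[
\mathrm{LQ}_{i} = \frac{e_{i}/e}{E_{i}/E},
\]

where \(e_{i}/e\) is the attribute's share at location \(i\) within its neighborhood and \(E_{i}/E\) its share over the whole study region, with values far from 1 flagging candidate outliers. How the article adapts this statistic to multiple attributes is specific to the paper, so this formula should be read as background rather than the proposed method.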

14.
We propose zero-inflated statistical models based on the generalized Hermite distribution for simultaneously modelling excess zeros, over/underdispersion, and multimodality. These new models are parsimonious yet remarkably flexible, allowing the covariates to be introduced directly through the mean, dispersion, and zero-inflation parameters. To accommodate the interval inequality constraint on the dispersion parameter, we present a new link function for the covariate-dependent dispersion regression model. We derive score tests for zero inflation in both covariate-free and covariate-dependent models. Both the score test and the likelihood-ratio test are conducted to examine the validity of zero inflation. The score test provides a useful tool when computing the likelihood-ratio statistic proves to be difficult. We analyse several hotel booking cancellation datasets extracted from two recently published real datasets, one from a resort hotel and one from a city hotel. These extracted cancellation datasets simultaneously exhibit excess zeros, over/underdispersion, and multimodality, making them difficult to analyse with existing approaches. The application of the proposed methods to the cancellation datasets illustrates the usefulness and flexibility of the models.
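The zero-inflated construction follows the usual mixture form: with zero-inflation probability \(\omega\) and \(P_H\) the generalized Hermite probability mass function,

\[
P(Y=0) = \omega + (1-\omega)\,P_H(0), \qquad
P(Y=y) = (1-\omega)\,P_H(y), \quad y = 1, 2, \ldots,
\]

with covariates entering \(\omega\), the mean, and the dispersion through link functions, the dispersion link being the new contribution mentioned above.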

15.
When applicable, an assumed monotonicity property of the regression function w.r.t. covariates has a strong stabilizing effect on the estimates. Because of this, other parametric or structural assumptions may not be needed at all. Although monotonic regression in one dimension is well studied, the question remains whether one can find computationally feasible generalizations to multiple dimensions. Here, we propose a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure. The monotonic construction is based on marked point processes, where the random point locations and the associated marks (function levels) together form piecewise constant realizations of the regression surfaces. The actual inference is based on model-averaged results over the realizations. The monotonicity of the construction is enforced by partial ordering constraints, which allow it, as the density of support points increases, to asymptotically approximate the family of all monotonic bounded continuous functions.

16.
On Block Updating in Markov Random Field Models for Disease Mapping
Gaussian Markov random field (GMRF) models are commonly used to model spatial correlation in disease mapping applications. For Bayesian inference by MCMC, so far mainly single-site updating algorithms have been considered. However, convergence and mixing properties of such algorithms can be extremely poor due to strong dependencies of parameters in the posterior distribution. In this paper, we propose various block sampling algorithms in order to improve the MCMC performance. The methodology is rather general, allows for non-standard full conditionals, and can be applied in a modular fashion in a large number of different scenarios. For illustration we consider three different applications: two formulations for spatial modelling of a single disease (with and without additional unstructured parameters, respectively), and one formulation for the joint analysis of two diseases. The results indicate that the largest benefits are obtained if parameters and the corresponding hyperparameter are updated jointly in one large block. Implementation of such block algorithms is relatively easy using methods for fast sampling of Gaussian Markov random fields (Rue, 2001). By comparison, Monte Carlo estimates based on single-site updating can be rather misleading, even for very long runs. Our results may have wider relevance for efficient MCMC simulation in hierarchical models with Markov random field components.
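The fast GMRF sampling that makes such block updates cheap (cf. Rue, 2001) boils down to factorizing the precision matrix and back-solving. Shown densely for clarity; in practice a sparse Cholesky factorization exploits the Markov structure of Q.

```python
import numpy as np

def sample_gmrf(Q, mu, rng=None):
    """Draw x ~ N(mu, Q^{-1}) for a symmetric positive-definite precision Q."""
    rng = np.random.default_rng(rng)
    L = np.linalg.cholesky(Q)            # Q = L L^T
    z = rng.standard_normal(len(mu))
    # solving L^T v = z gives Cov(v) = L^{-T} L^{-1} = Q^{-1}
    v = np.linalg.solve(L.T, z)
    return mu + v
```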

17.
This article studies influence diagnostics and estimation algorithms for Powell's symmetrically censored least squares estimator. The proposed measures of influence are based on one-step approximations to the analogous deletion diagnostics used in least squares regression and can be conveniently constructed using a Newton-type algorithm. Additionally, it is found that this algorithm can be used to substantially reduce the computational burden of the estimator. The results of the article are illustrated with an application.

18.
Variational Bayes (VB) estimation is a fast alternative to Markov chain Monte Carlo for performing approximate Bayesian inference. This procedure can be an efficient and effective means of analyzing large datasets. However, VB estimation is often criticised, typically on empirical grounds, for being unable to produce valid statistical inferences. In this article we refute this criticism for one of the simplest models where Bayesian inference is not analytically tractable, that is, the Bayesian linear model (for a particular choice of priors). We prove that under mild regularity conditions, VB-based estimators enjoy some desirable frequentist properties such as consistency and can be used to obtain asymptotically valid standard errors. In addition to these results we introduce two VB information criteria: the variational Akaike information criterion and the variational Bayesian information criterion. We show that the variational Akaike information criterion is asymptotically equivalent to the frequentist Akaike information criterion and that the variational Bayesian information criterion is first-order equivalent to the Bayesian information criterion in linear regression. These results motivate the potential use of the variational information criteria for more complex models. We support our theoretical results with numerical examples.
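For orientation, the frequentist criteria referred to are

\[
\mathrm{AIC} = -2\log L(\hat{\theta}) + 2p, \qquad
\mathrm{BIC} = -2\log L(\hat{\theta}) + p\log n,
\]

with \(p\) parameters and sample size \(n\); the variational versions replace the maximized log-likelihood with quantities computed from the variational approximation, such as the variational lower bound (the precise definitions are given in the article).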

19.
This study introduces fast marginal maximum likelihood (MML) algorithms for estimating the tuning (shrinkage) parameter(s) of the ridge and power ridge regression models, and an automatic plug-in MML estimator for the generalized ridge regression model, in a Bayesian framework. These methods are applicable to multicollinear or singular covariate design matrices, including matrices where the number of covariates exceeds the sample size. According to analyses of many real and simulated datasets, these MML-based ridge methods tend to compare favorably to other tuning parameter selection methods, in terms of computation speed, prediction accuracy, and ability to detect relevant covariates.
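A sketch of the marginal-likelihood principle behind such tuning, under the standard Bayesian ridge assumptions y ~ N(Xb, s2 I) and b ~ N(0, (s2/lam) I), so the marginal covariance is s2 (I + X X^T / lam). The SVD makes the evidence profile cheap to evaluate; this illustrates the idea, not the paper's exact algorithms.

```python
import numpy as np

def ridge_log_evidence(X, y, lam, s2=1.0):
    """Log marginal likelihood of ridge regression for shrinkage lam."""
    n = len(y)
    U, d, _ = np.linalg.svd(X, full_matrices=False)
    uy = U.T @ y
    # log|s2 (I + XX^T/lam)| via the singular values of X
    logdet = n * np.log(s2) + np.sum(np.log1p(d**2 / lam))
    # quadratic form y^T C^{-1} y via the Woodbury identity
    quad = (y @ y - np.sum(uy**2 * d**2 / (lam + d**2))) / s2
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

def tune_ridge(X, y, grid=np.logspace(-4, 4, 81)):
    """Plug-in MML-style tuning: maximise the evidence over a grid."""
    return max(grid, key=lambda lam: ridge_log_evidence(X, y, lam))
```

Because the SVD is computed once, sweeping the grid costs almost nothing beyond the initial factorization, which is the kind of speed advantage evidence-based tuning offers over cross-validation.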

20.
Maximum likelihood (ML) estimation with spatial econometric models is a long-standing problem that finds application in several areas of economic importance. The problem is particularly challenging in the presence of missing data, since there is an implied dependence between all units, irrespective of whether they are observed or not. Out of the several approaches adopted for ML estimation in this context, that of LeSage and Pace [Models for spatially dependent missing data. J Real Estate Financ Econ. 2004;29(2):233–254] stands out as one of the most commonly used with spatial econometric models due to its ability to scale with the number of units. Here, we review their algorithm, and consider several similar alternatives that are also suitable for large datasets. We compare the methods through an extensive empirical study and conclude that, while the approximate approaches are suitable for large sampling ratios, for small sampling ratios the only reliable algorithms are those that yield exact ML or restricted ML estimates.
