Similar documents (20 results)
1.
We propose a new model for regression and dependence analysis when addressing spatial data with possibly heavy tails and an asymmetric marginal distribution. We first propose a stationary process with t marginals obtained by scale mixing a Gaussian process with an inverse square root process with Gamma marginals. We then generalize this construction by considering a skew-Gaussian process, thus obtaining a process with skew-t marginal distributions. For the proposed (skew) t process, we study the second-order and geometrical properties, and in the t case we provide analytic expressions for the bivariate distribution. In an extensive simulation study, we investigate the use of the weighted pairwise likelihood as a method of estimation for the t process. Moreover, we compare the performance of the optimal linear predictor of the t process with that of the optimal Gaussian predictor. Finally, the effectiveness of our methodology is illustrated by analyzing a georeferenced dataset of maximum temperatures in Australia.
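The core scale-mixing construction in item 1 can be sketched in a few lines: mixing a Gaussian variable with the inverse square root of a unit-mean Gamma variable yields t marginals. This is a minimal sketch with an assumed ν = 5; the paper's spatial process additionally imposes a correlation structure, and its skew-t version replaces the Gaussian by a skew-Gaussian draw.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = 5          # degrees of freedom (assumed for illustration)
n = 200_000

# Scale-mix a standard Gaussian with the inverse square root of a
# unit-mean Gamma variable: z / sqrt(w) then has a t_nu marginal.
z = rng.standard_normal(n)
w = rng.gamma(nu / 2, 2 / nu, size=n)   # Gamma(nu/2, scale 2/nu), mean 1
t_samples = z / np.sqrt(w)

# Var(t_nu) = nu / (nu - 2)
print(t_samples.var())
```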

2.
We derive rates of contraction of posterior distributions on non-parametric models resulting from sieve priors. The aim is to provide general conditions for obtaining posterior rates when the parameter space has a general structure, and rate adaptation when the parameter lies in, for example, a Sobolev class. The conditions employed, although standard in the literature, are combined in a different way. The results are applied to density, regression, nonlinear autoregression and Gaussian white noise models. In the latter, we also consider a loss function different from the usual l2 norm, namely the pointwise loss. In this case it is possible to prove that the adaptive Bayesian approach for the l2 loss is strongly suboptimal, and we provide a lower bound on the rate.

3.
This paper considers a hierarchical Bayesian analysis of regression models using a class of Gaussian scale mixtures. This class provides a robust alternative to the common use of the Gaussian distribution as a prior, in particular for estimating a regression function subject to uncertainty about a constraint. For this purpose, we use a family of rectangular screened multivariate scale mixtures of Gaussian distributions as a prior for the regression function, which is flexible enough to reflect the degree of uncertainty about the functional constraint. Specifically, we propose a hierarchical Bayesian regression model for the constrained regression function with uncertainty, built on a three-stage prior hierarchy with Gaussian scale mixtures, referred to as a hierarchical screened scale mixture of Gaussian regression model (HSMGRM). We describe distributional properties of HSMGRM and an efficient Markov chain Monte Carlo algorithm for posterior inference, and apply the proposed model to real applications with constrained regression models subject to uncertainty.

4.
We discuss a class of difference-based estimators for the autocovariance in nonparametric regression when the signal is discontinuous and the errors form a stationary m-dependent process. These estimators circumvent the particularly challenging task of pre-estimating the unknown regression function. We provide finite-sample expressions for their mean squared errors for piecewise constant signals and Gaussian errors. Based on this, we derive bias-optimized estimates that do not depend on the unknown autocovariance structure. Notably, for positively correlated errors, the part of the variance of our estimators that depends on the signal is minimal as well. Further, we provide sufficient conditions for √n-consistency; this result is extended to piecewise Hölder regression with non-Gaussian errors. We combine our bias-optimized autocovariance estimates with a projection-based approach and derive covariance matrix estimates, a method that is of independent interest. An R package, several simulations and an application to biophysical measurements complement this paper.
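A minimal illustration of the difference-based idea in item 4, for independent Gaussian errors and a single jump (the signal, jump location and σ are all assumed for illustration; the paper treats general m-dependent errors and optimizes the weights):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
sigma = 1.3                                          # error s.d. (assumed)
signal = np.where(np.arange(n) < n // 2, 0.0, 5.0)   # one jump (assumed)
y = signal + sigma * rng.standard_normal(n)

# Rice-type difference-based estimate: first differences cancel the locally
# constant signal everywhere except at the single jump point, so no
# pre-estimate of the regression function is needed.
d = np.diff(y)
sigma2_hat = np.sum(d**2) / (2 * (n - 1))
print(sigma2_hat)
```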

5.
Multivariate mixture regression models can be used to investigate the relationships between two or more response variables and a set of predictor variables by taking into consideration unobserved population heterogeneity. It is common to take multivariate normal distributions as mixing components, but this mixing model is sensitive to heavy-tailed errors and outliers. Although normal mixture models can approximate any distribution in principle, the number of components needed to account for heavy-tailed distributions can be very large. Mixture regression models based on multivariate t distributions can be considered as a robust alternative approach. Missing data are inevitable in many situations, and parameter estimates could be biased if the missing values are not handled properly. In this paper, we propose a multivariate t mixture regression model with missing information to model heterogeneity in the regression function in the presence of outliers and missing values. Along with robust parameter estimation, our proposed method can be used for (i) visualization of the partial correlation between response variables across latent classes and heterogeneous regressions, and (ii) outlier detection and robust clustering even in the presence of missing values. We also propose a multivariate t mixture regression model using MM-estimation with missing information that is robust to high-leverage outliers. The proposed methodologies are illustrated through simulation studies and real data analysis.

6.
We develop Bayesian models for density regression with emphasis on discrete outcomes. The problem of density regression is approached by considering methods for multivariate density estimation of mixed scale variables, and obtaining conditional densities from the multivariate ones. The approach to multivariate mixed scale outcome density estimation that we describe represents discrete variables, either responses or covariates, as discretised versions of continuous latent variables. We present and compare several models for obtaining these thresholds in the challenging context of count data analysis where the response may be over- and/or under-dispersed in some of the regions of the covariate space. We utilise a nonparametric mixture of multivariate Gaussians to model the directly observed and the latent continuous variables. The paper presents a Markov chain Monte Carlo algorithm for posterior sampling, sufficient conditions for weak consistency, and illustrations on density, mean and quantile regression utilising simulated and real datasets.
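The latent-thresholding device in item 6 can be sketched directly: a discrete response is produced by cutting a latent Gaussian at cutpoints (the cutpoints here are assumed for illustration; the paper infers them from data and mixes over many Gaussian components):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
cuts = np.array([-1.0, 0.0, 1.0])     # assumed cutpoints
z = rng.standard_normal(n)            # latent continuous variable
y = np.searchsorted(cuts, z)          # discretised response in {0, 1, 2, 3}

# Cell probabilities are the standard normal probabilities of the intervals
# between cutpoints: approximately [0.159, 0.341, 0.341, 0.159].
freq = np.bincount(y, minlength=4) / n
print(freq)
```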

7.
Bayesian shrinkage methods have generated a lot of interest in recent years, especially in the context of high-dimensional linear regression. In recent work, a Bayesian shrinkage approach using generalized double Pareto priors has been proposed. Several useful properties of this approach, including the derivation of a tractable three-block Gibbs sampler to sample from the resulting posterior density, have been established. We show that the Markov operator corresponding to this three-block Gibbs sampler is not Hilbert–Schmidt. We propose a simpler two-block Gibbs sampler and show that the corresponding Markov operator is trace class (and hence Hilbert–Schmidt). Establishing the trace class property for the proposed two-block Gibbs sampler has several useful consequences. Firstly, it implies that the corresponding Markov chain is geometrically ergodic, thereby implying the existence of a Markov chain central limit theorem, which in turn enables computation of asymptotic standard errors for Markov chain-based estimates of posterior quantities. Secondly, because the proposed Gibbs sampler uses two blocks, standard recipes in the literature can be used to construct a sandwich Markov chain (by inserting an appropriate extra step) to gain further efficiency and to achieve faster convergence. The trace class property for the two-block sampler implies that the corresponding sandwich Markov chain is also trace class and thereby geometrically ergodic. Finally, it also guarantees that all eigenvalues of the sandwich chain are dominated by the corresponding eigenvalues of the Gibbs sampling chain (with at least one strict domination). Our results demonstrate that a minor change in the structure of a Markov chain can lead to fundamental changes in its theoretical properties. We illustrate the improvement in efficiency resulting from our proposed Markov chains using simulated and real examples.
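A toy two-block Gibbs sampler in the spirit of item 7, using a simple conjugate normal prior in place of the generalized double Pareto prior (data, prior scales and iteration counts are all assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 2
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([1.0, -2.0])
y = X @ beta_true + rng.standard_normal(n)

# Block 1: beta | sigma2, y ~ N(m, sigma2 * V)  (prior beta | sigma2 ~ N(0, sigma2 I))
# Block 2: sigma2 | beta, y ~ Inverse-Gamma     (IG(1/2, 1/2) prior, assumed)
XtX, Xty = X.T @ X, X.T @ y
V = np.linalg.inv(XtX + np.eye(p))
m = V @ Xty
beta, sigma2, draws = np.zeros(p), 1.0, []
for it in range(2000):
    beta = m + np.linalg.cholesky(sigma2 * V) @ rng.standard_normal(p)
    resid = y - X @ beta
    shape = 0.5 * (n + p + 1)
    rate = 0.5 * (resid @ resid + beta @ beta + 1.0)
    sigma2 = rate / rng.gamma(shape)          # inverse-Gamma draw
    if it >= 500:
        draws.append(beta)
post_mean = np.mean(draws, axis=0)
print(post_mean)
```

Alternating exact draws from the two full conditionals is all a two-block Gibbs sampler is; the paper's contribution is proving trace-class properties of the resulting Markov operator for its particular prior.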

8.
The authors develop default priors for the Gaussian random field model that includes a nugget parameter accounting for the effects of microscale variations and measurement errors. They present the independence Jeffreys prior, the Jeffreys-rule prior and a reference prior and study posterior propriety of these and related priors. They show that the uniform prior for the correlation parameters yields an improper posterior. In case of known regression and variance parameters, they derive the Jeffreys prior for the correlation parameters. They prove posterior propriety and obtain that the predictive distributions at ungauged locations have finite variance. Moreover, they show that the proposed priors have good frequentist properties, except for those based on the marginal Jeffreys-rule prior for the correlation parameters, and illustrate their approach by analyzing a dataset of zinc concentrations along the river Meuse. The Canadian Journal of Statistics 40: 304–327; 2012 © 2012 Statistical Society of Canada

9.
This paper introduces a Bayesian robust errors-in-variables regression model in which the dependent variable is censored. We extend previous work by assuming a multivariate t distribution for jointly modelling the behaviour of the errors and the latent explanatory variable. Inference is done under the Bayesian paradigm. We use a data augmentation approach and develop a Markov chain Monte Carlo algorithm to sample from the posterior distributions. We run a Monte Carlo study to evaluate the efficiency of the posterior estimators in different settings, and compare the proposed model to three other models previously discussed in the literature. As a by-product, we also provide a Bayesian analysis of the t-tobit model. We fit all four models to the 2001 Medical Expenditure Panel Survey data.

10.
As in many studies, the collected data are limited: an exact value is recorded only if it falls within an interval range, so responses can be left, interval or right censored. Linear (and nonlinear) regression models are routinely used to analyze these types of data and are based on normality assumptions for the error terms. However, such analyses may not provide robust inference when the normality assumptions are questionable. In this article, we develop a Bayesian framework for censored linear regression models by replacing the Gaussian assumption for the random errors with scale mixtures of normal (SMN) distributions. The SMN is an attractive class of symmetric heavy-tailed densities that includes the normal, Student-t, Pearson type VII, slash and contaminated normal distributions as special cases. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo algorithm is introduced to carry out posterior inference. A new hierarchical prior distribution is suggested for the degrees of freedom parameter of the Student-t distribution. The likelihood function is utilized not only to compute Bayesian model selection measures but also to develop Bayesian case-deletion influence diagnostics based on the q-divergence measure. The proposed Bayesian methods are implemented in the R package BayesCR. The newly developed procedures are illustrated with applications using real and simulated data.
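One member of the SMN family from item 10, the contaminated normal, can be generated exactly as a scale mixture: the mixing weight takes one of two values (the contamination probability and scale factor below are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400_000
eps, kappa = 0.1, 0.2     # contamination prob. and scale factor (assumed)

# SMN representation: mixing weight u in {1, kappa}; e | u ~ N(0, 1/u).
# u = kappa with probability eps inflates the variance by a factor 1/kappa.
u = np.where(rng.random(n) < eps, kappa, 1.0)
e = rng.standard_normal(n) / np.sqrt(u)

# Var(e) = (1 - eps) + eps / kappa
print(e.var())
```

Replacing the two-point mixing distribution by a Gamma gives the Student-t member of the same family.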

11.
Sampling the correlation matrix (R) plays an important role in statistical inference for correlated models. There are two main constraints on a correlation matrix: positive definiteness and fixed diagonal elements. These constraints make sampling R difficult. In this paper, an efficient generalized parameter expanded re-parametrization and Metropolis-Hastings (GPX-RPMH) algorithm for sampling a correlation matrix is proposed. Drawing all components of R simultaneously from its full conditional distribution is realized by first drawing a covariance matrix from the derived parameter expanded candidate density (PXCD), then translating it back to a correlation matrix and accepting it according to a Metropolis-Hastings (M-H) acceptance rate. The mixing rate in the M-H step can be adjusted through a class of tuning parameters embedded in the generalized candidate prior (GCP), which is chosen for R to derive the PXCD. This algorithm is illustrated using multivariate regression (MVR) models, and a simulation study shows that the GPX-RPMH algorithm is more efficient than other methods.
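The "translate a covariance draw back to a correlation matrix" step of item 11 can be sketched as follows (the Wishart-style draw via Gaussian outer products is an assumed stand-in for the paper's parameter-expanded candidate density, and the M-H accept/reject step is omitted):

```python
import numpy as np

rng = np.random.default_rng(5)
p, df = 3, 10

# Candidate covariance draw: a Wishart matrix built from Gaussian outer
# products (positive definite almost surely since df > p) ...
G = rng.standard_normal((df, p))
S = G.T @ G
# ... translated back to the correlation scale: rescaling by the square
# roots of the diagonal restores unit diagonal entries while preserving
# positive definiteness.
d = np.sqrt(np.diag(S))
R = S / np.outer(d, d)
print(np.diag(R), np.linalg.eigvalsh(R).min())
```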

12.
In this article, we maximize the efficiency of a multivariate S-estimator under a constraint on the breakdown point. In the linear regression model, it is known that the highest possible efficiency of a maximum-breakdown S-estimator is bounded above by 33 per cent for Gaussian errors. We prove the surprising result that in dimensions larger than one, the efficiency of a maximum-breakdown S-estimator of location and scatter can get arbitrarily close to 100 per cent, by an appropriate selection of the loss function.

13.
Variable selection over a potentially large set of covariates in a linear model is quite popular. In the Bayesian context, common prior choices can lead to a posterior expectation of the regression coefficients that is a sparse (or nearly sparse) vector with a few nonzero components, those covariates that are most important. This article extends the "global-local" shrinkage idea to a scenario where one wishes to model multiple response variables simultaneously. We develop a variable selection method for a K-outcome model (multivariate regression) that identifies the most important covariates across all outcomes. The prior for each regression coefficient is a mean-zero normal with a coefficient-specific variance that is the product of a predictor-specific factor (shared local shrinkage parameter) and a model-specific factor (global shrinkage term) that differs across models. The performance of our modeling approach is evaluated through simulation studies and a data example.
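The global-local variance decomposition in item 13 amounts to a product of a predictor-specific and an outcome-specific scale. A minimal sketch of drawing coefficients from such a prior (the dimensions and the exponential hyperpriors on the scales are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
p, K = 10, 3                         # predictors, outcomes (assumed)

lam = rng.exponential(1.0, size=p)   # local scales, shared across outcomes (assumed hyperprior)
tau = rng.exponential(0.1, size=K)   # global scales, one per outcome (assumed hyperprior)

# beta[j, k] ~ N(0, lam[j] * tau[k]): a small lam[j] shrinks predictor j
# in every outcome model simultaneously, which is what lets the method
# identify covariates that matter across all outcomes.
var = np.outer(lam, tau)
beta = rng.standard_normal((p, K)) * np.sqrt(var)
print(beta.shape)
```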

14.
We study the Jeffreys prior and its properties for the shape parameter of univariate skew-t distributions with linear and nonlinear Student's t skewing functions. In both cases, we show that the resulting priors for the shape parameter are symmetric around zero and proper. Moreover, we propose a Student's t approximation of the Jeffreys prior that makes an objective Bayesian analysis easy to perform. We carry out a Monte Carlo simulation study that demonstrates an overall better behaviour of the maximum a posteriori estimator compared with the maximum likelihood estimator. We also compare the frequentist coverage of the credible intervals based on the Jeffreys prior and its approximation and show that they are similar. We further discuss location-scale models under scale mixtures of skew-normal distributions and show some conditions for the existence of the posterior distribution and its moments. Finally, we present three numerical examples to illustrate the implications of our results on inference for skew-t distributions.

15.
The author considers estimation under a Gamma process model for degradation data. The setting for degradation data is one in which n independent units, each with a Gamma process with a common shape function and scale parameter, are observed at several possibly different times. Covariates can be incorporated into the model by taking the scale parameter as a function of the covariates. The author proposes using the maximum pseudo-likelihood method to estimate the unknown parameters. The method requires usage of the Pool Adjacent Violators Algorithm. Asymptotic properties, including consistency, convergence rate and asymptotic distribution, are established. Simulation studies are conducted to validate the method and its application is illustrated by using bridge beams data and carbon-film resistors data. The Canadian Journal of Statistics 37: 102-118; 2009 © 2009 Statistical Society of Canada
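Simulating the Gamma process of item 15 only requires independent Gamma increments. A sketch with an assumed linear shape function v(t) = a·t and scale u (the paper estimates a general common shape function from data):

```python
import numpy as np

rng = np.random.default_rng(7)
a, u = 2.0, 0.5                       # shape-function slope and scale (assumed)
t = np.linspace(0.0, 10.0, 101)

# Gamma process: independent increments X(t) - X(s) ~ Gamma(v(t) - v(s), scale u)
# with v(t) = a * t; paths are non-decreasing, as degradation should be.
dv = np.diff(a * t)
increments = rng.gamma(dv, u, size=(1000, len(dv)))
paths = np.cumsum(increments, axis=1)

# E[X(10)] = a * 10 * u
print(paths[:, -1].mean())
```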

16.
In this paper, we consider the analysis of hybrid censored competing risks data, based on Cox's latent failure time model assumptions. It is assumed that the lifetime distributions of the latent causes of failure are Weibull with a common shape parameter but different scale parameters. Maximum likelihood estimators (MLEs) of the unknown parameters can be obtained by solving a one-dimensional optimization problem, and we propose a fixed-point type algorithm to solve this optimization problem. Approximate MLEs have been proposed based on Taylor series expansion, and they have explicit expressions. Bayesian inference of the unknown parameters is obtained based on the assumption that the shape parameter has a log-concave prior density function and, given the shape parameter, the scale parameters have Beta–Gamma priors. We propose to use Markov chain Monte Carlo samples to compute Bayes estimates and also to construct highest posterior density credible intervals. Monte Carlo simulations are performed to investigate the performance of the different estimators, and two data sets have been analysed for illustrative purposes.
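The fixed-point idea of item 16 can be illustrated on the simpler complete-data, single-cause Weibull MLE, where the shape estimate also reduces to a one-dimensional problem (the hybrid-censoring and competing-risks details of the paper are omitted; the data parameters and the damping are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
shape_true, scale_true = 2.0, 1.5     # assumed for illustration
x = scale_true * rng.weibull(shape_true, size=5000)
logx = np.log(x)

# Damped fixed-point iteration for the Weibull shape MLE:
#   1/a = sum(x^a log x) / sum(x^a) - mean(log x)
a = 1.0
for _ in range(500):
    xa = x**a
    a_new = 1.0 / (np.sum(xa * logx) / np.sum(xa) - logx.mean())
    if abs(a_new - a) < 1e-10:
        a = a_new
        break
    a = 0.5 * (a + a_new)             # damped step for numerical stability
print(a)
```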

17.
The authors consider the correlation between two arbitrary functions of the data and a parameter when the parameter is regarded as a random variable with given prior distribution. They show how to compute such a correlation and use closed form expressions to assess the dependence between parameters and various classical or robust estimators thereof, as well as between p-values and posterior probabilities of the null hypothesis in the one-sided testing problem. Other applications involve the Dirichlet process and stationary Gaussian processes. Using this approach, the authors also derive a general nonparametric upper bound on Bayes risks.

18.
We consider a general class of prior distributions for nonparametric Bayesian estimation which uses finite random series with a random number of terms. A prior is constructed through distributions on the number of basis functions and the associated coefficients. We derive a general result on adaptive posterior contraction rates for all smoothness levels of the target function in the true model by constructing an appropriate 'sieve' and applying the general theory of posterior contraction rates. We apply this general result to several statistical problems such as density estimation, various nonparametric regressions, classification, spectral density estimation and functional regression. The prior can be viewed as an alternative to the commonly used Gaussian process prior, but properties of the posterior distribution can be analysed by relatively simpler techniques. An interesting approximation property of B-spline basis expansion established in this paper allows a canonical choice of prior on coefficients in a random series and allows a simple computational approach without using Markov chain Monte Carlo methods. A simulation study is conducted to show that the accuracy of the Bayesian estimators based on the random series prior and the Gaussian process prior are comparable. We apply the method to Tecator data using functional regression models.

19.
The common choices of frailty distribution in lifetime data models include the Gamma and Inverse Gaussian distributions. We present diagnostic plots for these distributions when frailty operates in a proportional hazards framework. Firstly, we present plots based on the form of the unconditional survival function when the baseline hazard is assumed to be Weibull. Secondly, we base a plot on a closure property that applies for any baseline hazard, namely, that the frailty distribution among survivors at time t has the same form as the original distribution, with the same shape parameter but different scale parameter. We estimate the shape parameter at different values of t and examine whether it is constant, that is, whether plotted values form a straight line parallel to the time axis. We provide simulation results assuming Weibull baseline hazard and an example to illustrate the methods.
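The closure property used in item 19 is easy to verify by simulation: with Gamma(shape k, rate r) frailty and unit baseline hazard, survivors at time t carry Gamma(k, rate r + t) frailty, i.e. the same shape with a different scale (all parameters below are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 500_000
k, r = 2.0, 1.0                      # Gamma frailty: shape k, rate r (assumed)
z = rng.gamma(k, 1.0 / r, size=n)

# Proportional hazards with unit baseline hazard: T | z ~ Exponential(rate z)
t_event = rng.exponential(1.0 / z)

# Closure property: among survivors at t0, Z ~ Gamma(k, rate r + t0),
# so E[Z | T > t0] = k / (r + t0) and Var = k / (r + t0)^2.
t0 = 1.0
z_surv = z[t_event > t0]
print(z_surv.mean(), z_surv.var())
```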

20.
The exact density of the non-linear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and to non-linear regression with an arbitrary number of linear parameters and one intrinsically non-linear parameter. For a very special non-linear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables obtained by Fieller almost a century ago, unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with non-linear least squares, such as non-existence and/or multiple solutions, are illustrated as major factors contributing to poor density approximation. A non-linear Gauss–Markov theorem is formulated on the basis of the near-exact EE density approximation.
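The simplest instance of the ratio distribution mentioned in item 20 is the ratio of two independent standard normals, which is standard Cauchy; a quick Monte Carlo check of its CDF at 1:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 1_000_000
# Ratio of two independent standard normals is standard Cauchy -- the
# simplest case of the ratio distribution studied by Fieller.
ratio = rng.standard_normal(n) / rng.standard_normal(n)

# Cauchy CDF at 1 is 1/2 + arctan(1)/pi = 3/4
frac = np.mean(ratio < 1.0)
print(frac)
```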
