Similar Documents
1.
We establish consistency of the posterior distribution when a Gaussian process prior is used for the unknown binary regression function. Specifically, we take the work of Ghosal and Roy [2006. Posterior consistency of Gaussian process prior for nonparametric binary regression. Ann. Statist. 34, 2413–2429] as our starting point, and then weaken their assumptions on the smoothness of the Gaussian process kernel while retaining a stronger yet applicable condition on the design points. Furthermore, we extend their results to multi-dimensional covariates under a weaker smoothness condition on the Gaussian process. Finally, we study the extent to which posterior consistency can be achieved under a general model in which additional hyperparameters in the covariance function of the Gaussian process are involved.
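As a minimal illustration of the prior in play here, the sketch below draws one random binary-regression curve: a Gaussian process path with a squared-exponential kernel (an assumed choice; the paper's conditions concern kernel smoothness in general), pushed through a logistic link. All names, the jitter, and the length-scale are illustrative.

```python
import math
import random

random.seed(2)

def gp_prior_probabilities(xs, length=0.3):
    """Draw one binary-regression curve from a GP prior: f ~ N(0, K) with a
    squared-exponential kernel (sampled via Cholesky), then a logistic link."""
    n = len(xs)
    # Kernel matrix with a small diagonal jitter for numerical stability.
    K = [[math.exp(-0.5 * ((xs[i] - xs[j]) / length) ** 2) + (1e-6 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    # Cholesky factorization K = L L^T (K is symmetric positive definite).
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(K[i][i] - s) if i == j else (K[i][j] - s) / L[j][j]
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    f = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]
    return [1.0 / (1.0 + math.exp(-fi)) for fi in f]

probs = gp_prior_probabilities([i / 9 for i in range(10)])
```

Repeated draws of `probs` trace out the prior over regression functions whose posterior the paper studies.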

2.
In this article, a general approach to latent variable models based on an underlying generalized linear model (GLM) with a factor analysis observation process is introduced. We call these models Generalized Linear Factor Models (GLFM). The observations are produced from a general model framework involving observed and latent variables that are assumed to be distributed in the exponential family. More specifically, we concentrate on situations where the observed variables are both discretely measured (e.g., binomial, Poisson) and continuously distributed (e.g., gamma). The common latent factors are assumed to be independent with a standard multivariate normal distribution. Practical details of training such models with a new local expectation-maximization (EM) algorithm, which can be considered a generalized EM-type algorithm, are also discussed. In conjunction with an approximated version of the Fisher score algorithm (FSA), we show how to calculate maximum likelihood estimates of the model parameters and to draw inferences about the unobservable path of the common factors. The methodology is illustrated by an extensive Monte Carlo simulation study, and the results show promising performance.

3.
The Hastings–Metropolis algorithm is a general MCMC method for sampling from a density known up to a constant. Geometric convergence of this algorithm has been proved under conditions on the instrumental (or proposal) distribution. We present an inhomogeneous Hastings–Metropolis algorithm for which the proposal density approximates the target density as the number of iterations increases. The proposal density at the nth step is a non-parametric estimate of the density of the algorithm, built from an increasing number of i.i.d. copies of the Markov chain. The resulting algorithm converges (in n) geometrically faster than a Hastings–Metropolis algorithm with any fixed proposal distribution. The case of a strictly positive density with compact support is presented first; an extension to more general densities follows. We conclude by proposing a practical implementation of the algorithm and illustrating it on simulated examples.
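The inhomogeneous scheme can be sketched as an independence sampler whose proposal is periodically refit as a kernel density estimate of earlier draws. This toy version uses a single chain's recent history rather than the paper's i.i.d. copies, a one-dimensional compactly supported target, and arbitrary constants, so it is a sketch of the idea, not the authors' algorithm.

```python
import math
import random

random.seed(7)

def target(x):
    # Unnormalized target with compact support [0, 1]: a two-bump density.
    return math.exp(-8.0 * (x - 0.3) ** 2) + 0.5 * math.exp(-8.0 * (x - 0.8) ** 2)

def kde(points, h):
    """Gaussian kernel density estimate built from earlier draws."""
    def pdf(x):
        return sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in points) / (
            len(points) * h * math.sqrt(2.0 * math.pi))
    def sample():
        return random.choice(points) + h * random.gauss(0.0, 1.0)
    return pdf, sample

def independence_mh(n_batches=20, batch=200):
    x = 0.5
    history = [x]
    # Start from a flat proposal on [0, 1]; refit a KDE proposal after each batch.
    prop_pdf, prop_sample = (lambda _: 1.0), (lambda: random.random())
    for _ in range(n_batches):
        for _ in range(batch):
            y = prop_sample()
            if not 0.0 <= y <= 1.0:          # reject proposals off the support
                history.append(x)
                continue
            # Independence-sampler acceptance ratio pi(y) q(x) / (pi(x) q(y)).
            a = (target(y) * prop_pdf(x)) / (target(x) * prop_pdf(y) + 1e-300)
            if random.random() < a:
                x = y
            history.append(x)
        prop_pdf, prop_sample = kde(history[-batch:], h=0.05)
    return history

draws = independence_mh()
mean_est = sum(draws[1000:]) / len(draws[1000:])
```

As the KDE proposal tracks the target, the acceptance rate rises, which is the mechanism behind the faster geometric rate claimed in the abstract.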

4.
Tim Fischer & Udo Kamps, Statistics, 2013, 47(1): 142–158
There are several well-known mappings which transform the first r order statistics in a sample of size n from a standard uniform distribution to a full vector of r order statistics in a sample of size r from a uniform distribution. Continuing the results reported in a previous paper by the authors, it is shown that transformations of these types do not, in general, lead to order statistics from an i.i.d. sample of random variables when applied to order statistics from non-uniform distributions. By accepting the loss of one dimension, a structure-preserving transformation exists for power function distributions.

5.
A new Bayesian state and parameter learning algorithm for multiple target tracking models with image observations is proposed. Specifically, a Markov chain Monte Carlo algorithm is designed to sample from the posterior distribution of the unknown time-varying number of targets, their birth and death times and states, as well as the model parameters, which constitutes a complete solution to the specific tracking problem we consider. The conventional approach is to pre-process the images to extract point observations and then perform tracking, i.e. infer the target trajectories. We instead model the image generation process directly, avoiding any potential loss of information incurred when point observations are extracted in a pre-processing step decoupled from the inference algorithm. Numerical examples show that our algorithm has improved tracking performance over commonly used techniques, on both synthetic examples and real fluorescent microscopy data, especially in the case of dim targets with overlapping illuminated regions.

6.
This article develops an algorithm for estimating the parameters of general phase-type (PH) distributions based on Bayes estimation. The idea of Bayes estimation is to regard the parameters as random variables; the posterior distribution of the parameters, updated through the likelihood function, provides the estimators. One of the advantages of Bayes estimation is that it quantifies the uncertainty of the estimators. In this article, we propose a fast algorithm for computing posterior distributions approximately, based on variational approximation. We formulate the optimal variational posterior distributions for PH distributions and develop an efficient algorithm for computing them for both discrete and continuous PH distributions.

7.
Summary. In geostatistics it is common practice to assume that the underlying spatial process is stationary and isotropic, i.e. the spatial distribution is unchanged when the origin of the index set is translated and under rotation about the origin. However, in environmental problems such assumptions are not realistic, since local influences in the correlation structure of the spatial process may be found in the data. The paper proposes a Bayesian model to address the anisotropy problem. Following Sampson and Guttorp, we define the correlation function of the spatial process by reference to a latent space, denoted by D, where stationarity and isotropy hold. The space where the gauged monitoring sites lie is denoted by G. We adopt a Bayesian approach in which the mapping between G and D is represented by an unknown function d(·). A Gaussian process prior distribution is defined for d(·). Unlike the Sampson–Guttorp approach, the mapping of both gauged and ungauged sites is handled in a single framework, and predictive inferences take explicit account of uncertainty in the mapping. Markov chain Monte Carlo methods are used to obtain samples from the posterior distributions. Two examples are discussed: a simulated data set and the solar radiation data set that was also analysed by Sampson and Guttorp.

8.
A general sampling algorithm for nested Archimedean copulas was recently suggested. It is given in two different forms, a recursive one and an explicit one. The explicit form allows for a simpler version of the algorithm which is numerically more stable and faster, since fewer function evaluations are required. The algorithm can also be given in general form, not being restricted to a particular nesting such as fully nested Archimedean copulas. Further, several examples are given.
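For the non-nested building block, the explicit (Marshall–Olkin-type) form is easy to sketch: for a Clayton copula, mix i.i.d. exponentials with a gamma frailty. The nested case composes such steps; the fragment below shows only the one-level version, with illustrative parameter values.

```python
import random

random.seed(1)

def clayton_sample(theta, dim):
    """Frailty sampling for a (non-nested) Clayton copula: draw the frailty
    V ~ Gamma(1/theta, 1), then set U_i = (1 + E_i / V)^(-1/theta) with
    E_i ~ Exp(1). Each U_i is marginally Uniform(0, 1) with positive
    dependence governed by theta > 0."""
    v = random.gammavariate(1.0 / theta, 1.0)
    return [(1.0 + random.expovariate(1.0) / v) ** (-1.0 / theta)
            for _ in range(dim)]

samples = [clayton_sample(theta=2.0, dim=3) for _ in range(5000)]
# Sanity summaries: uniform marginals, positively associated coordinates.
mean0 = sum(s[0] for s in samples) / len(samples)
cov01 = sum((s[0] - 0.5) * (s[1] - 0.5) for s in samples) / len(samples)
```

A nested sampler would reuse this pattern, drawing inner frailties conditionally on the outer one; that step is what the explicit form of the algorithm makes cheap.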

9.
A robust approach to the analysis of epidemic data is suggested. This method is based on a natural extension of M-estimation for i.i.d. observations where the distribution may be asymmetric. It is discussed initially in the context of a general discrete time stochastic process before being applied to previously studied epidemic models. In particular we consider a class of chain binomial models and models based on time dependent branching processes. Robustness and efficiency properties are studied through simulation and some previously analysed data sets are considered.
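As a reminder of the i.i.d. building block being extended, a Huber-type location M-estimate can be computed by iteratively reweighted means. This generic sketch is not the paper's epidemic-model estimator; the tuning constant and data are illustrative.

```python
def huber_location(xs, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted means.
    Observations with residual |x - mu| > c are down-weighted to c/|x - mu|,
    which bounds the influence of outliers."""
    mu = sorted(xs)[len(xs) // 2]  # start from (an) empirical median
    for _ in range(max_iter):
        w = [1.0 if abs(x - mu) <= c else c / abs(x - mu) for x in xs]
        new_mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu

# A gross outlier pulls the sample mean far away but barely moves the M-estimate.
xs = [0.1, -0.2, 0.05, 0.15, -0.1, 50.0]
est = huber_location(xs)
```

The paper's extension replaces the i.i.d. residuals with those of a discrete-time stochastic process, but the bounded-influence idea is the same.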

10.
Sufficient dimension reduction methods aim to reduce the dimensionality of predictors while preserving regression information relevant to the response. In this article, we develop Minimum Average Deviance Estimation (MADE) methodology for sufficient dimension reduction. The purpose of MADE is to generalize Minimum Average Variance Estimation (MAVE) beyond its assumption of additive errors to settings where the outcome follows an exponential family distribution. As in MAVE, a local likelihood approach is used to learn the form of the regression function from the data and the main parameter of interest is a dimension reduction subspace. To estimate this parameter within its natural space, we propose an iterative algorithm where one step utilizes optimization on the Stiefel manifold. MAVE is seen to be a special case of MADE in the case of Gaussian outcomes with a common variance. Several procedures are considered to estimate the reduced dimension and to predict the outcome for an arbitrary covariate value. Initial simulations and data analysis examples yield encouraging results and invite further exploration of the methodology.

11.
Random effects regression mixture models are a way to classify longitudinal data (or trajectories) having possibly varying lengths. The mixture structure of the traditional random effects regression mixture model arises through the distribution of the random regression coefficients, which is assumed to be a mixture of multivariate normals. An extension of this standard model is presented that accounts for various levels of heterogeneity among the trajectories, depending on their assumed error structure. A standard likelihood ratio test is presented for testing this error structure assumption. Full details of an expectation-conditional maximization algorithm for maximum likelihood estimation are also presented. This model is used to analyze data from an infant habituation experiment, where it is desirable to assess whether infants comprise different populations in terms of their habituation time.

12.
The particle Gibbs sampler is a systematic way of using a particle filter within Markov chain Monte Carlo. This results in an off-the-shelf Markov kernel on the space of state trajectories, which can be used to simulate from the full joint smoothing distribution for a state space model in a Markov chain Monte Carlo scheme. We show that the particle Gibbs Markov kernel is uniformly ergodic under rather general assumptions, which we carefully review and discuss. In particular, we provide an explicit rate of convergence, which reveals that (i) for a fixed number of data points, the convergence rate can be made arbitrarily good by increasing the number of particles, and (ii) under general mixing assumptions, the convergence rate can be kept constant by increasing the number of particles superlinearly with the number of observations. We illustrate the applicability of our result by studying in detail a common stochastic volatility model with a non-compact state space.
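The inner workhorse is easiest to see in a minimal bootstrap particle filter; particle Gibbs additionally conditions on a retained reference trajectory, which is omitted here. The linear-Gaussian model and all constants below are illustrative.

```python
import math
import random

random.seed(3)

def bootstrap_pf(ys, n_particles=500, phi=0.9, q=1.0, r=1.0):
    """Minimal bootstrap particle filter for the model
    x_t = phi * x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
    Particle Gibbs wraps a filter like this, keeping one reference
    trajectory fixed across the sweep; that step is omitted in this sketch."""
    parts = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    filtered_means = []
    for y in ys:
        # Propagate particles through the state dynamics.
        parts = [phi * x + random.gauss(0.0, math.sqrt(q)) for x in parts]
        # Weight by the (unnormalized) Gaussian observation density.
        w = [math.exp(-0.5 * (y - x) ** 2 / r) for x in parts]
        total = sum(w) or 1.0
        w = [wi / total for wi in w]
        filtered_means.append(sum(wi * x for wi, x in zip(w, parts)))
        # Multinomial resampling.
        parts = random.choices(parts, weights=w, k=n_particles)
    return filtered_means

ys = [0.5, 1.0, 1.2, 0.8, 0.3]
means = bootstrap_pf(ys)
```

The uniform ergodicity result concerns how many particles such a filter needs, relative to the number of observations, for the outer Gibbs chain to mix well.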

13.
In this paper, we consider the problem of estimating the matrix of regression coefficients in multivariate regression models with unknown change-points. More precisely, we consider the case where the target parameter satisfies an uncertain linear restriction. Under general conditions, we propose a class of estimators that includes as special cases shrinkage estimators (SEs) as well as the unrestricted and restricted estimators. We also derive a more general condition for the SEs to dominate the unrestricted estimator. To this end, we extend some results underlying the multidimensional version of the mixingale central limit theorem, as well as some important identities for deriving the risk function of SEs. Finally, we present simulation studies that corroborate the theoretical findings.
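A generic Stein-type shrinkage template conveys the idea: pull the unrestricted estimate toward the restricted one by an amount that shrinks as the evidence against the restriction grows. The weighting rule below is a hypothetical simplification for illustration, not the paper's SE class.

```python
def shrinkage_estimate(unrestricted, restricted, test_stat, c):
    """Stein-type shrinkage sketch: move the unrestricted estimate toward the
    restricted one with weight w = min(c / test_stat, 1). Large test_stat
    (strong evidence against the restriction) means little shrinkage; small
    test_stat means the restricted estimate is returned. The constant c and
    the choice of test_stat are placeholders for the paper's risk-based rule."""
    w = min(c / max(test_stat, 1e-12), 1.0)  # cap so we never overshoot
    return [r + (1.0 - w) * (u - r) for u, r in zip(unrestricted, restricted)]

# Strong evidence against the restriction: stay near the unrestricted estimate.
near_unrestricted = shrinkage_estimate([1.0, 2.0], [0.0, 0.0], test_stat=100.0, c=1.0)
# Weak evidence: collapse to the restricted estimate.
at_restricted = shrinkage_estimate([1.0, 2.0], [0.0, 0.0], test_stat=0.5, c=1.0)
```

The paper's dominance condition characterizes when such a combination beats the unrestricted estimator in risk, uniformly over the uncertainty in the restriction.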

14.
DISTRIBUTIONAL CHARACTERIZATIONS THROUGH SCALING RELATIONS
Investigated here are aspects of the relation between the laws of X and Y, where X is represented as a randomly scaled version of Y. In the case that the scaling has a beta law, the law of Y is expressed in terms of the law of X. Common continuous distributions are characterized using this beta scaling law and by choosing the distribution function of Y as a weighted version of the distribution function of X, where the weight is a power function. It is shown, without any restriction on the law of the scaling, but using a one-parameter family of weights which includes the power weights, that characterizations can be expressed in terms of known results for the power weights. Characterizations in the case where the distribution function of Y is a positive power of the distribution function of X are examined in two special cases. Finally, conditions are given for the existence of inverses of the length-bias and stationary-excess operators.

15.
n = 2 and 3. Here we characterize all testing problems with i.i.d. random variables where an additional observation fails to improve the power. Received: August 31, 2000; revised version: January 10, 2001

16.
Many problems in statistics involve maximizing a multinomial likelihood over a restricted region. In this paper, we consider instead maximizing a weighted multinomial likelihood. We show that a dual problem always exists which is frequently more tractable, and that a solution to the dual problem leads directly to a solution of the primal problem. Moreover, the form of the dual problem suggests an iterative algorithm for solving the MLE problem when the constraint region can be written as a finite intersection of cones. We show that this iterative algorithm is guaranteed to converge to the true solution, and that when the cones are isotonic, it is a version of Dykstra's algorithm (Dykstra, J. Amer. Statist. Assoc. 78 (1983) 837–842) for the special case of least squares projection onto the intersection of isotonic cones. We give several meaningful examples to illustrate our results. In particular, we obtain the nonparametric maximum likelihood estimator of a monotone density function in the presence of selection bias.
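Dykstra's algorithm itself is short: cycle through the sets, projecting after adding back a per-set increment, which is what distinguishes it from plain alternating projection and makes it converge to the true least squares projection onto the intersection. The sketch below projects a point onto the intersection of two simple cones in the plane (an illustrative choice; the paper works with isotonic cones).

```python
def proj_halfspace(x):
    """Project onto the cone {x : x[0] <= x[1]} (isotonic order in R^2)."""
    if x[0] <= x[1]:
        return list(x)
    m = (x[0] + x[1]) / 2.0
    return [m, m]

def proj_orthant(x):
    """Project onto the nonnegative orthant {x : x >= 0}."""
    return [max(v, 0.0) for v in x]

def dykstra(x0, projections, n_iter=200):
    """Dykstra's cyclic projection: each set keeps an increment (the part
    removed by its last projection) that is added back before re-projecting."""
    x = list(x0)
    incs = [[0.0] * len(x0) for _ in projections]
    for _ in range(n_iter):
        for i, proj in enumerate(projections):
            y = [xi + pi for xi, pi in zip(x, incs[i])]
            x = proj(y)
            incs[i] = [yi - xi for yi, xi in zip(y, x)]
    return x

# Projection of (2, -1) onto {x : 0 <= x[0] <= x[1]} is (0.5, 0.5).
res = dykstra([2.0, -1.0], [proj_halfspace, proj_orthant])
```

In the paper's setting, the Euclidean projections are replaced by the partial maximizations suggested by the dual of the weighted multinomial likelihood.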

17.
This paper studies quantile estimation using Bernstein–Durrmeyer polynomials in terms of its mean squared error and integrated mean squared error, including rates of convergence as well as its asymptotic distribution. Whereas the rates of convergence are achieved for i.i.d. samples, we also show that consistency follows more or less directly from the consistency of the sample quantiles, so that our proposal can also be applied to risk measurement in finance and insurance. Furthermore, an improved estimator based on an error-correction approach is proposed, for which a general consistency result is established. A crucial issue is how to select the degree of the Bernstein–Durrmeyer polynomials. We propose a novel data-adaptive approach that controls the number of modes of the corresponding density estimator. Its consistency, including a uniform error bound as well as its limiting distribution in the sense of a general invariance principle, is established. The finite sample properties are investigated by a Monte Carlo study. Finally, the results are illustrated by an application to photovoltaic energy research.
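Plain Bernstein smoothing of the empirical quantile function gives the flavour of the estimator (the Durrmeyer variant studied in the paper integrates against the basis polynomials instead of evaluating at grid points). The degree and data below are illustrative.

```python
from math import comb

def bernstein_quantile(sample, p, degree=20):
    """Bernstein-smoothed quantile estimate: a Binomial(degree, p)-weighted
    average of empirical quantiles at the grid points k/degree. The Durrmeyer
    variant replaces the point evaluations with integrals against the basis."""
    xs = sorted(sample)
    n = len(xs)

    def emp_q(u):
        # Empirical quantile function (left-continuous step function).
        return xs[min(int(u * n), n - 1)]

    return sum(comb(degree, k) * p ** k * (1.0 - p) ** (degree - k)
               * emp_q(k / degree)
               for k in range(degree + 1))

# On an evenly spaced (roughly Uniform(0, 1)) sample, the smoothed median
# should sit near 0.5.
sample = [i / 99 for i in range(100)]
q_med = bernstein_quantile(sample, 0.5)
```

The degree plays the role of a smoothing parameter, which is exactly what the paper's mode-controlled data-adaptive selection rule targets.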

18.
Many neuroscience experiments record sequential trajectories, each consisting of oscillations and fluctuations around zero. Such trajectories can be viewed as zero-mean functional data. When there are structural breaks in higher-order moments, they are not always easy to spot by mere visual inspection. Motivated by this challenging problem in brain signal analysis, we propose a detection and testing procedure to find the change point in the functional covariance. The detection procedure is based on cumulative sum (CUSUM) statistics. The fully functional testing procedure relies on a null distribution which depends on infinitely many unknown parameters, though in practice only a finite number of these parameters can be included in the hypothesis test for the existence of a change point. This paper provides some theoretical insights on the influence of the number of parameters. Meanwhile, the asymptotic properties of the estimated change point are developed. The effectiveness of the proposed method is numerically validated in simulation studies and in an application investigating changes in rat brain signals following an experimentally induced stroke.
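In a scalar stand-in for the functional setting, a CUSUM scan over squared signals can localize a variance break. The statistic's scaling and the simulated break below are illustrative, not the paper's functional-covariance procedure.

```python
import random

random.seed(11)

def cusum_changepoint(zs):
    """CUSUM change-point estimate: the argmax over k of the scaled gap
    between the partial sum up to k and its expected share of the total.
    Here zs are squared signals, so a break in variance becomes a break
    in mean (a scalar stand-in for the functional covariance case)."""
    n = len(zs)
    total = sum(zs)
    best_k, best_stat = 1, 0.0
    partial = 0.0
    for k in range(1, n):
        partial += zs[k - 1]
        stat = abs(partial - k * total / n) / n ** 0.5
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# Variance jumps from 1 to 9 at t = 300 out of n = 500.
xs = [random.gauss(0.0, 1.0) for _ in range(300)] + \
     [random.gauss(0.0, 3.0) for _ in range(200)]
k_hat, stat = cusum_changepoint([x * x for x in xs])
```

The paper's fully functional version replaces the scalar partial sums with partial-sample covariance operators, which is where the infinitely many nuisance parameters of the null distribution arise.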

19.
Non-parametric Regression with Dependent Censored Data
Abstract. Let (X_i, Y_i), i = 1, …, n, be n replications of a random vector (X, Y), where Y is subject to random right censoring. The data (X_i, Y_i) are assumed to come from a stationary α-mixing process. We consider the problem of estimating the function m(x) = E(φ(Y) | X = x), for some known transformation φ. This problem is approached in the following way: first, we introduce a transformed variable that is not subject to censoring and has the same conditional mean m(x), and then we estimate m(x) by applying local linear regression techniques. As a by-product, we obtain a general result on the uniform rate of convergence of kernel-type estimators of functionals of an unknown distribution function, under strong mixing assumptions.
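The second stage, local linear regression, is a weighted least squares line fit at each evaluation point. A minimal uncensored sketch with a Gaussian kernel (bandwidth, grid, and function names are illustrative):

```python
import math

def local_linear(xs, ys, x0, h):
    """Local linear estimate of m(x0): fit y ~ a + b*(x - x0) by weighted
    least squares with Gaussian kernel weights centred at x0, and return
    the fitted intercept a, which estimates m(x0)."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    s0 = sum(w)
    s1 = sum(wi * (x - x0) for wi, x in zip(w, xs))
    s2 = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
    t0 = sum(wi * y for wi, y in zip(w, ys))
    t1 = sum(wi * (x - x0) * y for wi, x, y in zip(w, xs, ys))
    # Solve the 2x2 weighted normal equations for the intercept.
    return (s2 * t0 - s1 * t1) / (s0 * s2 - s1 * s1)

# Local linear fits reproduce an exactly linear function regardless of the
# kernel weights, one of the reasons for preferring them over local constants.
xs = [i / 50 for i in range(51)]
ys = [2.0 * x + 1.0 for x in xs]
est = local_linear(xs, ys, x0=0.37, h=0.2)  # 2 * 0.37 + 1 = 1.74
```

In the paper, the response fed to this fit is the censoring-corrected transformed variable rather than the raw, possibly censored Y.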

20.
Alternative methods of estimating properties of unknown distributions include the bootstrap and the smoothed bootstrap. In the standard bootstrap setting, Johns (1988) introduced an importance resampling procedure that results in a more accurate approximation to the bootstrap estimate of a distribution function or a quantile. With a suitable "exponential tilting" similar to that used by Johns, we derive a smoothed version of importance resampling in the framework of the smoothed bootstrap. Smoothed importance resampling procedures are developed for the estimation of distribution functions of the Studentized mean, the Studentized variance, and the correlation coefficient. Implementation of these procedures is presented via simulation results, which concentrate on the estimation of the distribution functions of the Studentized mean and Studentized variance for different sample sizes and various pre-specified smoothing bandwidths for normal data. Additional simulations were conducted for the estimation of quantiles of the distribution of the Studentized mean under an optimal smoothing bandwidth when the original data were simulated from three different parent populations: lognormal, t(3) and t(10). These results suggest that, in cases where it is advantageous to use the smoothed bootstrap rather than the standard bootstrap, the amount of resampling necessary might be substantially reduced by the use of importance resampling methods, and that the efficiency gains depend on the bandwidth used in the kernel density estimation.
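The smoothed-bootstrap part is simple to sketch: resample with replacement and add kernel noise, i.e. draw from a Gaussian kernel density estimate of the data rather than the raw empirical distribution. The importance-tilting step is omitted here; the bandwidth and data are illustrative.

```python
import random

random.seed(5)

def smoothed_bootstrap_means(data, n_boot=2000, h=0.1):
    """Smoothed bootstrap: each resampled value is a data point drawn with
    replacement plus Gaussian kernel noise with bandwidth h, i.e. a draw
    from a Gaussian KDE of the data. Returns the bootstrap sample means."""
    n = len(data)
    means = []
    for _ in range(n_boot):
        resample = [random.choice(data) + h * random.gauss(0.0, 1.0)
                    for _ in range(n)]
        means.append(sum(resample) / n)
    return means

data = [0.2, 0.5, 0.9, 1.1, 1.4, 1.7, 2.0, 2.3]
boot_means = smoothed_bootstrap_means(data)
avg = sum(boot_means) / len(boot_means)
```

Importance resampling, as studied in the abstract, would tilt the resampling weights toward the tail region of interest and reweight afterwards, reducing the number of resamples needed for accurate tail estimates.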

