Similar Articles
20 similar articles found.
1.
We present theoretical results on the covariance structure of random wavelet coefficients. We use simple properties of the coefficients to derive a recursive way to compute the within- and across-scale covariances. We point out a useful link between the proposed algorithm and the two-dimensional discrete wavelet transform. We then focus on Bayesian wavelet shrinkage for estimating a function from noisy data. A prior distribution is imposed on the coefficients of the unknown function. We show how our findings on the covariance structure make it possible to specify priors that take into account the full correlation between coefficients through a parsimonious number of hyperparameters. We use Markov chain Monte Carlo methods to estimate the parameters and illustrate our method on benchmark simulated signals.

2.
On Optimality of Bayesian Wavelet Estimators
Abstract.  We investigate the asymptotic optimality of several Bayesian wavelet estimators, namely, the posterior mean, the posterior median and the Bayes Factor, where the prior imposed on wavelet coefficients is a mixture of a mass function at zero and a Gaussian density. We show that in terms of the mean squared error, for properly chosen hyperparameters of the prior, all three resulting Bayesian wavelet estimators achieve optimal minimax rates within any prescribed Besov space B^s_{p,q} for p ≥ 2. For 1 ≤ p < 2, the Bayes Factor is still optimal for (2s+2)/(2s+1) ≤ p < 2 and always outperforms the posterior mean and the posterior median, which can achieve only the best possible rates for linear estimators in this case.
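Under such a point-mass-and-Gaussian mixture prior, the posterior mean has a closed form; a minimal sketch (the hyperparameter values sigma, tau and pi0 below are illustrative, not taken from the paper):

```python
import math

def posterior_mean(d, sigma=1.0, tau=2.0, pi0=0.8):
    """Posterior mean of theta given d ~ N(theta, sigma^2) under the
    mixture prior theta ~ pi0*delta_0 + (1-pi0)*N(0, tau^2)."""
    def phi(x, v):  # N(0, v) density at x
        return math.exp(-x * x / (2 * v)) / math.sqrt(2 * math.pi * v)
    m0 = pi0 * phi(d, sigma**2)                  # marginal mass from the spike
    m1 = (1 - pi0) * phi(d, sigma**2 + tau**2)   # marginal mass from the slab
    p_nonzero = m1 / (m0 + m1)                   # posterior P(theta != 0 | d)
    shrink = tau**2 / (sigma**2 + tau**2)        # linear shrinkage given nonzero
    return p_nonzero * shrink * d
```

Small observed values are shrunk almost to zero while large ones are only mildly shrunk, which is the nonlinear behaviour the abstract's rate results concern.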

3.
This paper concerns wavelet regression using a block thresholding procedure. Block thresholding methods utilize neighboring wavelet coefficients information to increase estimation accuracy. We propose to construct a data-driven block thresholding procedure using the smoothly clipped absolute deviation (SCAD) penalty. A simulation study demonstrates competitive finite sample performance of the proposed estimator compared to existing methods. We also show that the proposed estimator achieves optimal convergence rates in Besov spaces.
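For reference, the term-wise SCAD thresholding rule of Fan and Li (with their usual default a = 3.7); this is a sketch of the scalar rule, not the block procedure the paper proposes:

```python
def scad_threshold(d, lam, a=3.7):
    """SCAD thresholding of a single coefficient d at level lam."""
    s = 1.0 if d >= 0 else -1.0
    ad = abs(d)
    if ad <= 2 * lam:                        # soft-threshold zone
        return s * max(0.0, ad - lam)
    if ad <= a * lam:                        # linear-interpolation zone
        return ((a - 1) * d - s * a * lam) / (a - 2)
    return d                                 # leave large coefficients intact
```

Unlike soft thresholding, the rule is unbiased for large coefficients: above a*lam the coefficient passes through unchanged.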

4.
We consider a heteroscedastic convolution density model under the “ordinary smooth assumption.” We introduce a new adaptive wavelet estimator based on a term-by-term hard thresholding rule. Its asymptotic properties are explored via the minimax approach under the mean integrated squared error over Besov balls. We prove that our estimator attains near optimal rates of convergence (lower bounds are determined). Simulation results are reported to support our theoretical findings.
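Term-by-term hard thresholding itself is simple; a generic sketch using the universal threshold sigma*sqrt(2 log n) (the paper's actual threshold is tuned to the deconvolution setting):

```python
import math

def hard_threshold(coeffs, sigma):
    """Keep each empirical wavelet coefficient only if it exceeds
    the universal threshold sigma*sqrt(2 log n) in absolute value."""
    n = len(coeffs)
    lam = sigma * math.sqrt(2 * math.log(n))
    return [c if abs(c) > lam else 0.0 for c in coeffs]
```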

5.
Here we consider wavelet-based identification and estimation of a censored nonparametric regression model via block thresholding methods and investigate their asymptotic convergence rates. We show that these estimators, based on block thresholding of empirical wavelet coefficients, achieve optimal convergence rates over a large range of Besov function classes, and in particular enjoy those rates without the extraneous logarithmic penalties that are usually suffered by term-by-term thresholding methods. This work is an extension of the results in Li et al. (2008). The performance of the proposed estimator is investigated in a numerical study.

6.
The estimation of the regression function in the biased nonparametric regression model is investigated. We propose and develop a new wavelet-based methodology for this problem. In particular, an adaptive hard thresholding wavelet estimator is constructed. Under mild assumptions on the model, we prove that it enjoys powerful mean integrated squared error properties over Besov balls.

7.
We consider an empirical Bayes approach to standard nonparametric regression estimation using a nonlinear wavelet methodology. Instead of specifying a single prior distribution on the parameter space of wavelet coefficients, which is usually the case in the existing literature, we elicit the ε-contamination class of prior distributions, which is particularly attractive to work with when one seeks robust priors in Bayesian analysis. The type II maximum likelihood approach to prior selection is used by maximizing the predictive distribution for the data in the wavelet domain over a suitable subclass of the ε-contamination class of prior distributions. For the prior selected, the posterior mean yields a thresholding procedure which depends on one free prior parameter and is level- and amplitude-dependent, thus allowing better adaptation in function estimation. We consider an automatic choice of the free prior parameter, guided by considerations on an exact risk analysis and on the shape of the thresholding rule, enabling the resulting estimator to be fully automated in practice. We also compute pointwise Bayesian credible intervals for the resulting function estimate using a simulation-based approach. We use several simulated examples to illustrate the performance of the proposed empirical Bayes term-by-term wavelet scheme, and we make comparisons with other classical and empirical Bayes term-by-term wavelet schemes. As a practical illustration, we present an application to a real-life data set that was collected in an atomic force microscopy study.

8.
This paper studies the estimation of a density in the convolution density model from strong mixing observations. The ordinary smooth case is considered. Adopting the minimax approach under the mean integrated squared error over Besov balls, we explore the performances of two wavelet estimators: a linear one based on projections and a non-linear one based on a hard thresholding rule. The feature of the non-linear one is to be adaptive, i.e., it does not require any prior knowledge of the smoothness class of the unknown density in its construction. We prove that it attains a fast rate of convergence which corresponds to the optimal one obtained in the standard i.i.d. case up to a logarithmic term.

9.
We consider the estimation of a two-dimensional continuous–discrete density function. A new methodology based on wavelets is proposed. We construct a linear wavelet estimator and a non-linear wavelet estimator based on a term-by-term thresholding. Their rates of convergence are established under the mean integrated squared error over Besov balls. In particular, we prove that our adaptive wavelet estimator attains a fast rate of convergence. A simulation study illustrates the usefulness of the proposed estimators.

10.
Abstract.  For the problem of estimating a sparse sequence of coefficients of a parametric or non-parametric generalized linear model, posterior mode estimation with a Subbotin(λ, ν) prior achieves thresholding, and therefore model selection, when ν ∈ [0,1] for a class of likelihood functions. The proposed estimator also offers a continuum between the (forward/backward) best subset estimator (ν = 0), its approximate convexification called lasso (ν = 1) and ridge regression (ν = 2). Rather than fixing ν, selecting the two hyperparameters λ and ν adds flexibility for a better fit, provided both are well selected from the data. Considering first the canonical Gaussian model, we generalize the Stein unbiased risk estimate, SURE(λ, ν), to the situation where the thresholding function is not almost differentiable (i.e. ν < 1). We then propose a more general selection of λ and ν by deriving an information criterion that can be employed, for instance, for the lasso or wavelet smoothing. We investigate some asymptotic properties in parametric and non-parametric settings. Simulations and applications to real data show excellent performance.
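In the ν = 1 (lasso/soft-thresholding) case, SURE reduces to the classical Donoho–Johnstone form; a sketch with the noise level σ assumed known:

```python
def sure_soft(d, lam, sigma=1.0):
    """Stein's unbiased risk estimate for soft thresholding the
    sequence d at level lam, with noise standard deviation sigma."""
    n = len(d)
    small = sum(1 for x in d if abs(x) <= lam)   # coefficients killed
    return (sigma**2 * (n - 2 * small)
            + sum(min(abs(x), lam) ** 2 for x in d))
```

Minimizing sure_soft over a grid of lam values gives the data-driven threshold; the paper's SURE(λ, ν) extends this beyond the almost-differentiable case.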

11.
Statistical inference in the wavelet domain remains a vibrant area of contemporary statistical research because of desirable properties of wavelet representations and the need of the scientific community to process, explore, and summarize massive data sets. Prime examples are biomedical, geophysical, and internet-related data. We propose two new approaches to wavelet shrinkage/thresholding.

In the spirit of Efron and Tibshirani's recent work on the local false discovery rate, we propose the Bayesian Local False Discovery Rate (BLFDR), where the underlying model on wavelet coefficients does not assume known variances. This approach to wavelet shrinkage is shown to be connected with shrinkage based on Bayes factors. The second proposal, the Bayesian False Discovery Rate (BaFDR), is based on ordering posterior probabilities of the hypotheses that the true wavelet coefficients are null, in Bayesian testing of multiple hypotheses.

We demonstrate that both approaches result in competitive shrinkage methods by contrasting them to some popular shrinkage techniques.

12.
We consider wavelet-based nonlinear estimators, which are constructed by thresholding the empirical wavelet coefficients, for mean regression functions with strong mixing errors, and investigate their asymptotic rates of convergence. We show that these estimators achieve nearly optimal convergence rates, within a logarithmic term, over a large range of Besov function classes B^s_{p,q}. The theory is illustrated with some numerical examples.

A new ingredient in our development is a Bernstein-type exponential inequality for a sequence of random variables that have a certain mixing structure and are not necessarily bounded or sub-Gaussian. This moderate deviation inequality may be of independent interest.


13.
Time-varying parameter models with stochastic volatility are widely used to study macroeconomic and financial data. These models are almost exclusively estimated using Bayesian methods. A common practice is to focus on prior distributions that themselves depend on relatively few hyperparameters such as the scaling factor for the prior covariance matrix of the residuals governing time variation in the parameters. The choice of these hyperparameters is crucial because their influence is sizeable for standard sample sizes. In this article, we treat the hyperparameters as part of a hierarchical model and propose a fast, tractable, easy-to-implement, and fully Bayesian approach to estimate those hyperparameters jointly with all other parameters in the model. We show via Monte Carlo simulations that, in this class of models, our approach can drastically improve on using fixed hyperparameters previously proposed in the literature. Supplementary materials for this article are available online.

14.
Wavelets are a commonly used tool in science and technology. Often, their use involves applying a wavelet transform to the data, thresholding the coefficients and applying the inverse transform to obtain an estimate of the desired quantities. In this paper, we argue that it is often possible to gain more insight into the data by producing not just one, but many wavelet reconstructions using a range of threshold values and analysing the resulting object, which we term the Time–Threshold Map (TTM) of the input data. We discuss elementary properties of the TTM, in its “basic” and “derivative” versions, using both Haar and Unbalanced Haar wavelet families. We then show how the TTM can help in solving two statistical problems in the signal + noise model: breakpoint detection, and estimating the longest interval of approximate stationarity. We illustrate both applications with examples involving volatility of financial returns. We also briefly discuss other possible uses of the TTM.

15.
In this article, we propose a denoising methodology in the wavelet domain based on a Bayesian hierarchical model using a Double Weibull prior. We propose two estimators, one based on the posterior mean (Double Weibull Wavelet Shrinker, DWWS) and the other based on the larger posterior mode (DWWS-LPM), and show how to calculate them efficiently. Traditionally, mixture priors have been used for modeling sparse wavelet coefficients. The interesting feature of this article is the use of a non-mixture prior. We show that the methodology provides good denoising performance, comparable even to state-of-the-art methods that use mixture priors and empirical Bayes settings of hyperparameters, which is demonstrated by extensive simulations on standard test functions. An application to a real-world dataset is also considered.

16.
In recent years, wavelet shrinkage has become a very appealing method for data de-noising and density function estimation. In particular, Bayesian modelling via hierarchical priors has introduced novel approaches to wavelet analysis that have become very popular and are very competitive with standard hard or soft thresholding rules. In this spirit, this paper proposes a hierarchical prior that is elicited on the model parameters describing the wavelet coefficients after applying a discrete wavelet transform (DWT). In contrast to other approaches, the prior proposes a multivariate normal distribution with a covariance matrix that allows for correlations among wavelet coefficients corresponding to the same level of detail. In addition, an extra scale parameter is incorporated that permits an additional shrinkage level over the coefficients. The posterior distribution for this shrinkage procedure is not available in closed form, but it is easily sampled through Markov chain Monte Carlo (MCMC) methods. Applications on a set of test signals and two noisy signals are presented.

17.
Summary.  Deconvolution problems are naturally represented in the Fourier domain, whereas thresholding in wavelet bases is known to have broad adaptivity properties. We study a method which combines both fast Fourier and fast wavelet transforms and can recover a blurred function observed in white noise with O{n log(n)^2} steps. In the periodic setting, the method applies to most deconvolution problems, including certain 'boxcar' kernels, which are important as a model of motion blur but have poor Fourier characteristics. Asymptotic theory informs the choice of tuning parameters and yields adaptivity properties for the method over a wide class of measures of error and classes of function. The method is tested on simulated light detection and ranging data suggested by underwater remote sensing. Both visual and numerical results show an improvement over competing approaches. Finally, the theory behind our estimation paradigm gives a complete characterization of the 'maxiset' of the method: the set of functions where the method attains a near optimal rate of convergence for a variety of L p loss functions.

18.
For a normal model with a conjugate prior, we provide an in-depth examination of the effects of the hyperparameters on the long-run frequentist properties of posterior point and interval estimates. Under an assumed sampling model for the data-generating mechanism, we examine how hyperparameter values affect the mean-squared error (MSE) of posterior means and the true coverage of credible intervals. We develop two types of hyperparameter optimality. MSE optimal hyperparameters minimize the MSE of posterior point estimates. Credible interval optimal hyperparameters result in credible intervals that have a minimum length while still retaining nominal coverage. A poor choice of hyperparameters has a worse consequence on the credible interval coverage than on the MSE of posterior point estimates. We give an example to demonstrate how our results can be used to evaluate the potential consequences of hyperparameter choices.
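The bias–variance trade-off behind MSE-optimal hyperparameters can be written down directly for the normal-mean case; a sketch (the notation theta, mu0, tau0 is assumed here for illustration, not taken from the paper):

```python
def posterior_mean_mse(theta, mu0, tau0, sigma, n):
    """Frequentist MSE of the posterior mean for the mean of
    N(theta, sigma^2) data under a conjugate N(mu0, tau0^2) prior."""
    w = n * tau0**2 / (n * tau0**2 + sigma**2)  # weight on the sample mean
    variance = w**2 * sigma**2 / n              # variance term
    bias_sq = (1 - w)**2 * (theta - mu0)**2     # squared-bias term
    return variance + bias_sq
```

A well-centred, moderately informative prior beats the flat-prior MSE of sigma^2/n, while a confidently mis-centred prior (small tau0, mu0 far from theta) can be much worse, which is the phenomenon the abstract quantifies.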

19.
Summary.  A typical microarray experiment attempts to ascertain which genes display differential expression in different samples. We model the data by using a two-component mixture model and develop an empirical Bayesian thresholding procedure, which was originally introduced for thresholding wavelet coefficients, as an alternative to the existing methods for determining differential expression across thousands of genes. The method is built on sound theoretical properties and has easy computer implementation in the R statistical package. Furthermore, we consider improvements to the standard empirical Bayesian procedure when replication is present, to increase the robustness and reliability of the method. We provide an introduction to microarrays for those who are unfamiliar with the field, and the proposed procedure is demonstrated with applications to two-channel complementary DNA microarray experiments.

20.
We study the maxiset performance of a large collection of block thresholding wavelet estimators, namely the horizontal block thresholding family. We provide sufficient conditions on the choices of rates and threshold values to ensure that the involved adaptive estimators obtain large maxisets. Moreover, we prove that any estimator of such a family reconstructs the Besov balls with a near-minimax optimal rate that can be faster than the one of any separable thresholding estimator. Then, we identify, in particular cases, the best estimator of such a family, that is, the one associated with the largest maxiset. As a particularity of this paper, we propose a refined approach that models method-dependent threshold values. By a series of simulation studies, we confirm the good performance of the best estimator by comparing it with the other members of its family.
