Similar Documents
20 similar documents found (search time: 31 ms)
1.
An important aspect of modelling biological phenomena in living organisms, whether the measurements are of blood pressure, enzyme levels, biomechanical movements or heartbeats, is time variation in the data. Thus, the recovery of a 'smooth' regression or trend function from noisy time-varying sampled data becomes a problem of particular interest. Here we use non-linear wavelet thresholding to estimate a regression or trend function in the presence of additive noise which, in contrast to most existing models, need not be stationary. (Here, non-stationarity means that the spectral behaviour of the noise is allowed to change slowly over time.) We develop a procedure to adapt existing threshold rules to such situations, e.g. that of a time-varying variance in the errors. Moreover, in the model of curve estimation for functions belonging to a Besov class with locally stationary errors, we derive a near-optimal rate for the L2-risk between the unknown function and our soft or hard threshold estimator, which holds in the general case of an error distribution with bounded cumulants. In the case of Gaussian errors, a lower bound on the asymptotic minimax rate in the wavelet coefficient domain is also obtained. We further argue that a stronger adaptivity result is possible through a particular location- and level-dependent threshold obtained by minimizing Stein's unbiased estimate of the risk. In this respect, our work generalizes previous results, which cover the situation of correlated but stationary errors. A natural application of our approach is the estimation of the trend function of non-stationary time series under the model of local stationarity. The method is illustrated on both a simulated example and a biostatistical data set of sheep luteinizing hormone measurements, which exhibits clear non-stationarity in its variance.
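The idea of adapting a threshold rule to a time-varying error variance can be sketched as follows. This is an illustrative sketch, not the authors' procedure: the Haar transform, the blockwise MAD noise estimate, and the universal-style threshold are simplifying assumptions of our own.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: (approx, detail)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return s, d

def haar_inverse(s, d):
    """Invert one Haar level."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

def denoise_time_varying(y, n_levels=3, block=32):
    """Soft-threshold Haar denoising where the threshold tracks a blockwise
    (time-varying) noise-level estimate taken from the finest details."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    _, d1 = haar_step(y)
    # Blockwise MAD estimate of the local noise standard deviation.
    sigma = np.empty(len(d1))
    half = block // 2
    for i in range(len(d1)):
        lo, hi = max(0, i - half), min(len(d1), i + half)
        sigma[i] = np.median(np.abs(d1[lo:hi])) / 0.6745
    approx, details = y, []
    for _ in range(n_levels):
        approx, d = haar_step(approx)
        thr = sigma[: len(d)] * np.sqrt(2.0 * np.log(n))  # local threshold
        details.append(np.sign(d) * np.maximum(np.abs(d) - thr, 0.0))
        sigma = sigma[0::2]                               # match coarser level
    for d in reversed(details):
        approx = haar_inverse(approx, d)
    return approx
```

Because the threshold is proportional to the local noise level, quiet stretches of the series are smoothed gently while noisy stretches are thresholded more aggressively, which is the essential point of the adaptation above.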

2.
This article introduces a fast cross-validation algorithm that performs wavelet shrinkage on data sets of arbitrary size and irregular design, and simultaneously selects good values of the primary resolution and the number of vanishing moments. We demonstrate the utility of our method by suggesting alternative estimates of the conditional mean of the well-known Ethanol data set. Our alternative estimates outperform the Kovac-Silverman method with a global variance estimate by 25% because of the careful selection of the number of vanishing moments and the primary resolution. Our alternative estimates are simpler than, and competitive with, results based on the Kovac-Silverman algorithm equipped with a local variance estimate. We include a detailed simulation study that illustrates how our cross-validation method successfully picks good values of the primary resolution and number of vanishing moments for unknown functions based on Walsh functions (to test the response to changing primary resolution) and piecewise polynomials with zero or one derivative (to test the response to function smoothness).
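The core even/odd cross-validation idea behind such threshold selection can be sketched in a few lines. This is only a sketch under our own assumptions (a Haar transform, a single global threshold, and no interpolation refinement); the paper's algorithm additionally handles irregular designs and selects the primary resolution and vanishing moments.

```python
import numpy as np

def haar_denoise(y, thr):
    """Soft-threshold Haar denoising down to a coarse level of length 8."""
    s, details = y.astype(float), []
    while len(s) > 8:
        d = (s[0::2] - s[1::2]) / np.sqrt(2.0)
        s = (s[0::2] + s[1::2]) / np.sqrt(2.0)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thr, 0.0))
    for d in reversed(details):
        out = np.empty(2 * len(s))
        out[0::2] = (s + d) / np.sqrt(2.0)
        out[1::2] = (s - d) / np.sqrt(2.0)
        s = out
    return s

def cv_threshold(y, candidates):
    """Two-fold (even/odd) cross-validation: denoise one half of the sample
    and score it against the held-out half; pick the best threshold."""
    even, odd = y[0::2], y[1::2]
    best, best_score = candidates[0], np.inf
    for thr in candidates:
        err = (np.mean((haar_denoise(even, thr) - odd) ** 2) +
               np.mean((haar_denoise(odd, thr) - even) ** 2))
        if err < best_score:
            best_score, best = err, thr
    return best
```

With a zero threshold the denoised half reproduces itself exactly, so its held-out error is inflated by twice the noise variance; any reasonable positive threshold beats it, which is what the selection exploits.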

3.
A wavelet method is proposed to detect jumps in a function that is observed with unit-root noise. We obtain critical values at any scale and prove the consistency of wavelet detection when the nonparametric function is smooth. We also show that the estimates of the number and locations of change points are consistent when change points are present in the nonparametric function. A simulation study supports our method.
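A minimal single-scale version of wavelet jump detection looks as follows. This sketch is our own simplification: it uses i.i.d. noise and a universal-style critical value rather than the paper's unit-root noise and scale-specific critical values, and it inspects only the finest scale.

```python
import numpy as np

def detect_jumps(y, mult=1.0):
    """Flag locations where finest-scale Haar detail coefficients exceed a
    universal-style critical value, indicating possible jumps."""
    y = np.asarray(y, dtype=float)
    d = (y[0::2] - y[1::2]) / np.sqrt(2.0)       # finest Haar details
    sigma = np.median(np.abs(d)) / 0.6745        # robust noise scale
    crit = mult * sigma * np.sqrt(2.0 * np.log(len(d)))
    return np.nonzero(np.abs(d) > crit)[0] * 2   # indices in the series
```

Note that a jump aligned exactly with a pair boundary is invisible at this single scale; working across several scales, as the paper does, removes that blind spot.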

4.
In this paper, we investigate the use of wavelet techniques in the study of the nth order fractional Brownian motion (n-fBm). First, we exploit the continuous wavelet transform's capabilities in derivative calculation to construct a two-step estimator of the scaling exponent of the n-fBm process. We show, via simulation, that the proposed method improves the estimation performance for n-fBm signals contaminated by large-scale noise. Second, we analyze the statistical properties of the n-fBm process in the time-scale plane. We demonstrate that, for a convenient choice of the wavelet basis, the discrete wavelet detail coefficients of the n-fBm process are stationary at each resolution level, whereas their variance exhibits a power-law behavior. Using the latter property, we discuss a weighted least squares regression-based estimator for this class of stochastic processes. Experiments carried out on simulated and real-world datasets demonstrate the relevance of the proposed method.
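The power-law property of the detail variances can be illustrated on ordinary Brownian motion (the n = 1, H = 0.5 case) with a Haar transform: Var(d_j) grows roughly like 2^{j(2H+1)} across levels j, so a weighted least squares fit of log2 variance on level recovers the exponent. This is a sketch only; n-fBm simulation and the paper's two-step CWT estimator are beyond its scope, and discretization biases the finest level slightly.

```python
import numpy as np

def haar_details_by_level(x, n_levels):
    """Haar detail coefficients, level 1 = finest."""
    details, s = [], np.asarray(x, dtype=float)
    for _ in range(n_levels):
        d = (s[0::2] - s[1::2]) / np.sqrt(2.0)
        s = (s[0::2] + s[1::2]) / np.sqrt(2.0)
        details.append(d)
    return details

def estimate_hurst(x, n_levels=6):
    """Weighted least squares of log2 detail variance on level:
    slope ~ 2H + 1, so H = (slope - 1) / 2."""
    details = haar_details_by_level(x, n_levels)
    levels = np.arange(1, n_levels + 1, dtype=float)
    logvar = np.log2([np.mean(d ** 2) for d in details])
    w = np.array([len(d) for d in details], dtype=float)  # more coefficients, more weight
    W = np.diag(w)
    A = np.vstack([levels, np.ones_like(levels)]).T
    slope, _ = np.linalg.solve(A.T @ W @ A, A.T @ W @ logvar)
    return (slope - 1.0) / 2.0
```

Weighting each level by its number of coefficients is one common choice; the paper discusses a weighted regression of this general kind for the n-fBm class.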

5.
Missing covariate values are a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM algorithm. It exploits the fact that the observed hazard function is multiplicative in the baseline hazard function, the idea being to profile out this function before carrying out the estimation of the parameter of interest. In this step a Breslow-type estimator is used to estimate the cumulative baseline hazard function. We focus on the situation where the observed covariates are categorical, which allows us to calculate estimators without assuming anything about the distribution of the covariates. We show that the proposed estimator is consistent and asymptotically normal, and derive a consistent estimator of the variance-covariance matrix that does not involve any choice of a perturbation parameter. Moderate-sample-size performance of the estimators is investigated via simulation and by application to a real data example.
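The Breslow-type step referred to above, estimating the cumulative baseline hazard at a fixed coefficient vector beta, can be sketched directly. This minimal version assumes distinct event times and complete covariates; it is an illustration of the standard Breslow estimator, not the paper's full missing-data procedure.

```python
import numpy as np

def breslow_cumhaz(times, events, x, beta):
    """Breslow estimator of the cumulative baseline hazard in a Cox model:
    at each event time, add 1 / sum_{j in risk set} exp(x_j' beta)."""
    order = np.argsort(times)
    times, events, x = times[order], events[order], x[order]
    risk = np.exp(x @ beta)                  # relative risk per subject
    t_out, H, cum = [], [], 0.0
    for i in range(len(times)):
        if events[i]:
            at_risk = risk[i:].sum()         # subjects still at risk at t_i
            cum += 1.0 / at_risk
            t_out.append(times[i])
            H.append(cum)
    return np.array(t_out), np.array(H)
```

With beta = 0 this reduces to the Nelson-Aalen estimator, which gives a convenient sanity check.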

6.
Wavelet shrinkage is an effective nonparametric regression technique, especially when the underlying curve has irregular features such as spikes or discontinuities. The basic idea is simple: take the discrete wavelet transform of data consisting of a signal corrupted by noise; shrink or remove the wavelet coefficients to remove the noise; then invert the discrete wavelet transform to form an estimate of the true underlying curve. Various researchers have proposed increasingly sophisticated methods of doing this by using real-valued wavelets. Complex-valued wavelets exist but are rarely used. We propose two new complex-valued wavelet shrinkage techniques: one based on multiwavelet style shrinkage and the other using Bayesian methods. Extensive simulations show that our methods almost always give significantly more accurate estimates than methods based on real-valued wavelets. Further, our multiwavelet style shrinkage method is both simpler and dramatically faster than its competitors. To understand the excellent performance of this method we present a new risk bound on its hard thresholded coefficients.

7.
In this article, we propose an outlier detection approach for the multiple regression model using the properties of a difference-based variance estimator. This type of difference-based variance estimator was originally used to estimate the error variance in a nonparametric regression model without estimating the nonparametric function itself. This article is the first to employ a difference-based error variance estimator to study the outlier detection problem in a multiple regression model. Our approach uses a leave-one-out type method based on the difference-based error variance. Existing outlier detection approaches using the leave-one-out idea are highly affected by other outliers, while ours is not, because our approach does not use the regression coefficient estimator. We compare our approach with several existing methods in a simulation study, which suggests that our approach outperforms them. The advantages of our approach are demonstrated in a real data application. Our approach can be extended to the nonparametric regression model for outlier detection.
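The leave-one-out idea based on a difference-based variance estimator can be sketched in the simpler nonparametric-sequence setting (the paper treats multiple regression; this one-dimensional version is our own simplification). The Rice estimator needs no fitted mean function, so an outlier's influence enters only through its neighbouring differences.

```python
import numpy as np

def rice_variance(y):
    """Rice difference-based variance estimator: no mean function needed."""
    d = np.diff(np.asarray(y, dtype=float))
    return np.sum(d ** 2) / (2.0 * (len(y) - 1))

def loo_outlier_scores(y):
    """Leave-one-out scores: drop in the difference-based variance estimate
    when observation i is removed. Large drops flag outliers."""
    y = np.asarray(y, dtype=float)
    full = rice_variance(y)
    scores = np.empty(len(y))
    for i in range(len(y)):
        scores[i] = full - rice_variance(np.delete(y, i))
    return scores
```

Because no regression coefficients are estimated, the score of one observation is barely perturbed by outliers elsewhere, mirroring the robustness argument in the abstract.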

8.
We use cumulants to derive Bayesian credible intervals for wavelet regression estimates. The first four cumulants of the posterior distribution of the estimates are expressed in terms of the observed data and integer powers of the mother wavelet functions. These powers are closely approximated by linear combinations of wavelet scaling functions at an appropriate finer scale. Hence, a suitable modification of the discrete wavelet transform allows the posterior cumulants to be found efficiently for any given data set. Johnson transformations then yield the credible intervals themselves. Simulations show that these intervals have good coverage rates, even when the underlying function is inhomogeneous, where standard methods fail. In the case where the curve is smooth, the performance of our intervals remains competitive with established nonparametric regression methods.

9.
We consider variance estimation for the weighted likelihood estimator (WLE) under two-phase stratified sampling without replacement. The asymptotic variance of the WLE in many semiparametric models contains unknown functions or does not have a closed form, so the standard method based on inverse probability weighted (IPW) sample variances of an estimated influence function is not available in these models. To address this issue, we develop a variance estimation procedure for the WLE in a general semiparametric model. The phase I variance is estimated by taking a numerical derivative of the IPW log-likelihood. The phase II variance is estimated by a bootstrap for a stratified sample from a finite population. Despite the theoretical difficulty posed by dependent observations under sampling without replacement, we establish the (bootstrap) consistency of our estimators. Finite-sample properties of our method are illustrated in a simulation study.
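The stratified-bootstrap component can be sketched generically: resample within each stratum independently and recompute the statistic. This sketch uses plain with-replacement resampling and ignores the finite-population corrections and the WLE machinery of the paper; the function names are our own.

```python
import numpy as np

def stratified_bootstrap_var(strata, estimator, B=500, seed=0):
    """Bootstrap variance of `estimator` under stratified sampling:
    resample with replacement within each stratum independently."""
    rng = np.random.default_rng(seed)
    stats = np.empty(B)
    for b in range(B):
        resampled = [rng.choice(s, size=len(s), replace=True) for s in strata]
        stats[b] = estimator(resampled)
    return stats.var(ddof=1)
```

For an equal-weight stratified mean of two strata of size n with unit variance, the target variance is 0.25 * (1/n + 1/n), which the bootstrap should approximate.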

10.
This paper deals with the nonparametric estimation of the mean and variance functions of univariate time series data. We propose a nonparametric dimension-reduction technique for both the mean and variance functions of a time series. The method does not require any model specification; instead, we seek directions in both the mean and variance functions such that the conditional distribution of the current observation given the vector of past observations is the same as that given a few linear combinations of the past observations, without loss of inferential information. The directions of the mean and variance functions are estimated by maximizing the Kullback-Leibler distance function. The consistency of the proposed estimators is established. A computational procedure is introduced to detect lags of the conditional mean and variance functions in practice. Numerical examples and simulation studies are performed to illustrate and evaluate the performance of the proposed estimators.

11.
We propose an elementary model for the way in which stochastic perturbations of a statistical objective function, such as a negative log-likelihood, produce excessive nonlinear variation of the resulting estimator. Theory for the model is transparently simple, and is used to provide new insight into the main factors that affect performance of bagging. In particular, it is shown that if the perturbations are sufficiently symmetric then bagging will not significantly increase bias; and if the perturbations also offer opportunities for cancellation then bagging will reduce variance. For the first property it is sufficient that the third derivative of a perturbation vanish locally, and for the second, that second and fourth derivatives have opposite signs. Functions that satisfy these conditions resemble sinusoids. Therefore, our results imply that bagging will reduce the nonlinear variation, as measured by either variance or mean-squared error, produced in an estimator by sinusoid-like, stochastic perturbations of the objective function. Analysis of our simple model also suggests relationships between the results obtained using different with-replacement and without-replacement bagging schemes. We simulate regression trees in settings that are far more complex than those explicitly addressed by the model, and find that these relationships are generally borne out.  相似文献
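The with-replacement bagging mechanics referred to above can be sketched on a one-split regression stump. This is only a mechanical illustration of bagging, not the paper's perturbation model; the stump and all names are our own.

```python
import numpy as np

def tree_stump(x, y):
    """Fit a one-split regression stump; return a prediction function."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_err = np.inf
    for i in range(5, len(xs) - 5):          # candidate split points
        lm, rm = ys[:i].mean(), ys[i:].mean()
        err = ((ys[:i] - lm) ** 2).sum() + ((ys[i:] - rm) ** 2).sum()
        if err < best_err:
            best_err, split, left, right = err, xs[i], lm, rm
    return lambda t: np.where(t < split, left, right)

def bagged_predict(x, y, t, B=50, seed=0):
    """With-replacement bagging: average stump predictions over B bootstrap fits."""
    rng = np.random.default_rng(seed)
    preds = np.zeros((B, len(t)))
    for b in range(B):
        idx = rng.integers(0, len(x), size=len(x))  # bootstrap sample
        preds[b] = tree_stump(x[idx], y[idx])(t)
    return preds.mean(axis=0)
```

Averaging over bootstrap fits smooths the jumpy, data-dependent split location, which is the variance-reduction effect the paper analyses through its perturbation model.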

12.
The paper proposes a method of deconvolution in a periodic setting which combines two important ideas: the fast wavelet and Fourier transform-based estimation procedure of Johnstone et al. [J. Roy. Statist. Soc. Ser. B 66 (2004) 547] and the multichannel system technique proposed by Casey and Walnut [SIAM Rev. 36 (1994) 537]. An unknown function is estimated by a wavelet series in which the empirical wavelet coefficients are filtered in an adaptive non-linear fashion. It is shown theoretically that the estimator achieves the optimal convergence rate in a wide range of Besov spaces. The procedure allows one to reduce the ill-posedness of the problem, especially in the case of non-smooth blurring functions such as boxcar functions: it is proved that adding extra channels improves the convergence rate of the estimator. The theoretical study is supplemented by an extensive set of small-sample simulation experiments demonstrating the high-quality performance of the proposed method.

13.
We propose a method to maximize the accuracy of estimation of piecewise constant and piecewise smooth variance functions in a nonparametric heteroscedastic fixed-design regression model. Difference-based initial estimates are obtained from the given observations. An estimator is then constructed by an iterative regularization method with the analysis-prior undecimated three-level Haar transform as the regularizer term. This method gives better results in the mean-square sense than an existing adaptive estimation procedure, on all the standard test functions as well as on the functions that we target. Simulations and comparisons with other methods are conducted to assess the performance of the proposed method.
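The difference-based initial estimate mentioned above can be sketched as follows: each squared first difference has expectation about twice the local variance, so halved squared differences smoothed over a window give a crude variance function. This is only the initialisation idea; the paper's Haar-regularized iteration, which preserves jumps better than a moving average, is not reproduced here.

```python
import numpy as np

def local_variance(y, bandwidth=15):
    """Difference-based pointwise variance estimate: halved squared first
    differences smoothed by a moving average."""
    y = np.asarray(y, dtype=float)
    sq = 0.5 * np.diff(y) ** 2                    # local variance proxies
    kernel = np.ones(bandwidth) / bandwidth
    v = np.convolve(sq, kernel, mode="same")      # smooth the proxies
    return np.append(v, v[-1])                    # pad back to len(y)
```

A slowly varying mean function contributes only O(1/n^2) to each squared difference, which is why no mean estimate is needed first; the moving average, however, blurs variance jumps over roughly one bandwidth, the defect the paper's regularizer addresses.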

14.
We present a new method to describe shape change and shape differences in curves, by constructing a deformation function in terms of a wavelet decomposition. Wavelets form an orthonormal basis that allows representations at multiple resolutions. The deformation function is estimated, in a fully Bayesian framework, using a Markov chain Monte Carlo (MCMC) algorithm. This Bayesian formulation incorporates prior information about the wavelets and the deformation function. The flexibility of the MCMC approach allows estimation of complex but clinically important summary statistics, such as curvature in our case, as well as estimates of deformation functions with variance estimates, and allows thorough investigation of the posterior distribution. This work is motivated by multi-disciplinary research involving a large-scale longitudinal study of idiopathic scoliosis in UK children. This paper provides novel statistical tools to study this spinal deformity, from which 5% of UK children suffer. Using the data, we consider statistical inference for shape differences between normal subjects, scoliotics and developers of scoliosis, in particular for spinal curvature, and look at longitudinal deformations to describe shape changes over time.

15.
Many wavelet shrinkage methods assume that the data are observed on an equally spaced grid of length 2^J for some J. These methods require serious modification or preprocessed data to cope with irregularly spaced data. The lifting scheme is a recent mathematical innovation that obtains a multiscale analysis for irregularly spaced data. A key lifting component is the "predict" step, in which a prediction of a data point is made. The residual from the prediction is stored and can be thought of as a wavelet coefficient. This article exploits the flexibility of lifting by adaptively choosing the kind of prediction according to a criterion. In this way the smoothness of the underlying 'wavelet' can be adapted to the local properties of the function. Multiple observations at a point can readily be handled by lifting through a suitable choice of prediction. We adapt existing shrinkage rules to work with our adaptive lifting methods. We use simulation to demonstrate the improved sparsity of our techniques and improved regression performance when compared with both wavelet and non-wavelet methods suitable for irregular data. We also exhibit the benefits of our adaptive lifting on real inductance plethysmography and motorcycle data.
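The "predict" step of lifting on an irregular grid can be sketched in a few lines: remove one point at a time, predict its value from its neighbours, and store the residual as a detail coefficient. This sketch fixes the simplest choices (linear interpolation, always removing a middle point) where the paper chooses the prediction adaptively.

```python
import numpy as np

def lifting_forward(t, y):
    """One sweep of a lifting transform on irregularly spaced (t, y):
    remove points one at a time, predict each removed value by linear
    interpolation from its neighbours, and store the residual as a detail."""
    t, y = list(map(float, t)), list(map(float, y))
    details = []
    while len(y) > 2:
        i = len(y) // 2                      # remove an interior point
        t0, t1, t2 = t[i - 1], t[i], t[i + 1]
        w = (t1 - t0) / (t2 - t0)            # interpolation weight
        pred = (1 - w) * y[i - 1] + w * y[i + 1]
        details.append((t1, y[i] - pred))    # detail coefficient
        del t[i]
        del y[i]
    return details, list(zip(t, y))          # details + coarse remainder
```

Linear prediction reproduces any linear function exactly, so for linear data every detail coefficient vanishes; that exactness for a chosen smoothness class is what the adaptive choice of prediction generalises.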

16.
We consider a process that is observed as a mixture of two random distributions, where the mixing probability is an unknown function of time. The setup is built upon a wavelet‐based mixture regression. Two linear wavelet estimators are proposed. Furthermore, we consider three regularizing procedures for each of the two wavelet methods. We also discuss regularity conditions under which the consistency of the wavelet methods is attained and derive rates of convergence for the proposed estimators. A Monte Carlo simulation study is conducted to illustrate the performance of the estimators. Various scenarios for the mixing probability function are used in the simulations, in addition to a range of sample sizes and resolution levels. We apply the proposed methods to a data set consisting of array Comparative Genomic Hybridization from glioblastoma cancer studies.

17.
We present theoretical results on the covariance structure of random wavelet coefficients. We use simple properties of the coefficients to derive a recursive way to compute the within- and across-scale covariances. We point out a useful link between the proposed algorithm and the two-dimensional discrete wavelet transform. We then focus on Bayesian wavelet shrinkage for estimating a function from noisy data. A prior distribution is imposed on the coefficients of the unknown function. We show how our findings on the covariance structure make it possible to specify priors that take into account the full correlation between coefficients through a parsimonious number of hyperparameters. We use Markov chain Monte Carlo methods to estimate the parameters and illustrate our method on benchmark simulated signals.

18.
We can use wavelet shrinkage to estimate a possibly multivariate regression function g under the general regression setup y = g + ε. We propose an enhanced wavelet-based denoising methodology based on Bayesian adaptive multiresolution shrinkage, an effective Bayesian shrinkage rule combined with a semi-supervised learning mechanism. The Bayesian shrinkage rule is enhanced by a semi-supervised learning method in which the neighbouring structure of a wavelet coefficient is exploited and an appropriate decision function is derived. According to the decision function, wavelet coefficients follow one of two prespecified Bayesian rules obtained using varying related parameters. The decision for a wavelet coefficient depends not only on its magnitude, but also on the neighbouring structure in which the coefficient is located. We discuss the theoretical properties of the suggested method and provide recommended parameter settings. We show through extensive experimentation that the proposed method is often superior to several existing wavelet denoising methods.
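The idea that a coefficient's fate should depend on its neighbours as well as its own magnitude can be sketched with a simple NeighShrink-style rule (Chen and Bui's construction, not the Bayesian rule of this paper): a coefficient is kept in proportion to how much the energy of its three-term neighbourhood exceeds a noise-calibrated threshold.

```python
import numpy as np

def neighbour_shrink(d, sigma):
    """NeighShrink-style rule: shrink each wavelet coefficient according to
    the energy of its immediate neighbourhood rather than its own magnitude."""
    d = np.asarray(d, dtype=float)
    padded = np.pad(d, 1)
    s2 = padded[:-2] ** 2 + d ** 2 + padded[2:] ** 2   # 3-term local energy
    lam2 = 2.0 * sigma ** 2 * np.log(max(len(d), 2))   # universal-style threshold
    factor = np.maximum(0.0, 1.0 - lam2 / np.maximum(s2, 1e-12))
    return d * factor
```

An isolated noise spike rarely has energetic neighbours, so it is killed, while the coefficients flanking a true feature survive because the feature lends them energy; this is the same neighbourhood intuition the paper builds into its decision function.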

19.
Classical nondecimated wavelet transforms are attractive for many applications. When the data come from complex or irregular designs, the use of second-generation wavelets in nonparametric regression has proved superior to that of classical wavelets. However, the construction of a nondecimated second-generation wavelet transform is not obvious. In this paper we propose a new 'nondecimated' lifting transform, based on the lifting algorithm which removes one coefficient at a time, and explore its behavior. Our approach also allows for embedding adaptivity in the transform, i.e. wavelet functions can be constructed so that their smoothness adjusts to the local properties of the signal. We address the problem of nonparametric regression and propose an (averaged) estimator obtained by using our nondecimated lifting technique teamed with empirical Bayes shrinkage. Simulations show that our proposed method outperforms competing techniques able to work on irregular data. Our construction also opens avenues for generating a 'best' representation, which we shall explore.

20.

In this paper, we develop an efficient wavelet-based regularized linear quantile regression framework for coefficient estimation, where the responses are scalars and the predictors include both scalars and functions. The framework consists of two important parts: wavelet transformation and regularized linear quantile regression. The wavelet transform approximates functional data by a finite set of wavelet coefficients and effectively captures its local features. Quantile regression is robust to response outliers and heavy-tailed errors and, compared with other methods, provides a more complete picture of how responses change conditional on covariates. Meanwhile, regularization removes small wavelet coefficients to achieve sparsity and efficiency. An algorithm based on the Alternating Direction Method of Multipliers (ADMM) is derived to solve the optimization problems. We conduct numerical studies to investigate the finite-sample performance of our method and apply it to real data from ADHD studies.  相似文献
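The quantile-regression component rests on the check (pinball) loss, whose minimiser is the corresponding quantile. The tiny sketch below illustrates just that property in the intercept-only case; the paper's wavelet-plus-ADMM framework is far more involved and is not reproduced here.

```python
import numpy as np

def pinball_loss(u, tau):
    """Check (pinball) loss: tau*u for u >= 0, (tau - 1)*u for u < 0."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def empirical_quantile(y, tau):
    """Minimising the average pinball loss over candidate constants recovers
    the tau-th sample quantile (the minimiser is always a data point)."""
    grid = np.sort(y)
    losses = [pinball_loss(y - c, tau).mean() for c in grid]
    return grid[int(np.argmin(losses))]
```

Setting tau = 0.5 halves the two slopes of the absolute loss and returns the median; sweeping tau over (0, 1) is what gives quantile regression its "complete picture" of the conditional response distribution.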
