1.
Summary. The paper considers the problem of estimating the entire temperature field, for every location on the globe, from scattered surface air temperatures observed by a network of weather stations. Classical methods such as spherical harmonics and spherical smoothing splines are not efficient at representing data that have inherent multiscale structures. The paper presents an estimation method that can adapt to the multiscale characteristics of the data. The method is based on a spherical wavelet approach that has recently been developed for the multiscale representation and analysis of scattered data. Spatially adaptive estimators are obtained by coupling the spherical wavelets with different thresholding (selective reconstruction) techniques. These estimators are compared for their spatial adaptability and extrapolation performance using the surface air temperature data.
2.
This paper deals with the classical problem of density estimation on the real line. Most existing papers devoted to minimax properties assume that the support of the underlying density is bounded and known, but this assumption may be very difficult to handle in practice. In this work, we show that, exactly as a curse of dimensionality arises when the data lie in R^d, a curse of support arises when the support of the density is infinite. Just as the rates of convergence deteriorate when the dimension grows, the minimax rates of convergence may deteriorate when the support becomes infinite. This problem is not purely theoretical: simulations show that support-dependent methods are genuinely affected in practice by the size of the density support, or by the weight of the density tail. We propose a method based on a biorthogonal wavelet thresholding rule that is adaptive with respect to the nature of the support and the regularity of the signal, and that is also robust in practice to this curse of support. The threshold proposed here is calibrated accurately enough that the gap between the optimal theoretical and practical tuning parameters is almost closed.
3.
Nonparametric regression is considered where the sample point placement is not fixed and equispaced, but generated by a random process with rate n. Conditions are found for the random processes that result in optimal rates of convergence for nonparametric regression when using a block thresholded wavelet estimator. Previous results on nonparametric regression via wavelets on both fixed and random sample point placement are shown to be special cases of the general result given here. The estimator is adaptive over a large range of Hölder function spaces and the convergence rate exhibited is an improvement over term-by-term wavelet estimators. Threshold selection is implemented in a data-adaptive fashion, rather than using a fixed threshold as is usually done in block thresholding. This estimator, BlockSure, is compared against fixed-threshold block estimators and the more traditional term-by-term threshold wavelet estimators on several random design schemes via simulations.
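Block thresholding keeps or kills whole groups of neighbouring wavelet coefficients according to their joint energy, rather than testing each coefficient separately. The following pure-Python sketch is a minimal illustration of that keep-or-kill rule on one level of a Haar decomposition; it is not the BlockSure estimator, and the signal, block size, and threshold level are illustrative assumptions.

```python
import math

def haar_dwt(x):
    """One-level Haar transform: returns (approximation, detail) coefficients."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform."""
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def block_threshold(coeffs, block_size, lam):
    """Keep or kill whole blocks: a block survives only if its mean
    squared coefficient exceeds lam**2."""
    out = list(coeffs)
    for start in range(0, len(coeffs), block_size):
        block = coeffs[start:start + block_size]
        energy = sum(c*c for c in block) / len(block)
        if energy <= lam*lam:
            for i in range(start, start + len(block)):
                out[i] = 0.0
    return out

# Denoise: transform, block-threshold the detail coefficients, invert.
signal = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]
a, d = haar_dwt(signal)
d_thr = block_threshold(d, block_size=2, lam=0.5)
rec = haar_idwt(a, d_thr)
```

Here every detail block carries only noise-level energy, so both blocks are killed and the reconstruction is piecewise flat while the jump between levels 1 and 5 is preserved by the approximation coefficients.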
4.
A wavelet method is proposed that reduces function estimation error and provides smooth reconstructions, while still estimating jumps in the function well. It is based on analyzing multiple dilated versions of the sampled function. In simulation studies, the estimator exhibits low mean squared errors without sacrificing smoothness or jump detection ability when compared to other wavelet methods.
5.
By introducing the idea of thresholding function matching, it is illustrated that both the bridge penalty and the log penalty can be transformed so as to circumvent certain difficulties in numerical computation and in the definition of local minimality. Both penalties have derivatives tending to infinity at zero, which hinders their application in statistics, although the literature reports that they allow recovery of sparse structure in the data under some conditions. Simulation studies illustrate that, in variable selection problems, penalized likelihood estimation based on the transformed penalties obtained by the proposed thresholding function matching method outperforms estimation based on many other state-of-the-art penalties, particularly when the covariates are strongly correlated. The one-to-one correspondence between the transformed penalties and their thresholding functions is also established.
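The correspondence between a penalty and its thresholding function can be made concrete with the L1 penalty, used here as a stand-in (the transformed bridge and log penalties of the paper are not reproduced): the thresholding function induced by minimizing the univariate penalized least-squares criterion is soft thresholding. The sketch below recovers that thresholding function numerically by grid minimization and checks it against the closed form.

```python
def soft_threshold(z, lam):
    """Closed-form thresholding function induced by the L1 penalty lam*|theta|."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def threshold_by_minimization(z, lam, penalty, grid=20001, span=10.0):
    """Numerically minimize 0.5*(z - theta)**2 + penalty(theta) over a grid,
    recovering the thresholding function that the penalty induces."""
    best_theta, best_val = 0.0, float("inf")
    for k in range(grid):
        theta = -span + 2*span*k/(grid - 1)
        val = 0.5*(z - theta)**2 + penalty(theta)
        if val < best_val:
            best_theta, best_val = theta, val
    return best_theta

lam = 1.0
l1 = lambda t: lam*abs(t)
for z in (-3.0, -0.5, 0.0, 0.7, 2.5):
    assert abs(soft_threshold(z, lam) - threshold_by_minimization(z, lam, l1)) < 2e-3
```

The same grid-minimization routine accepts any univariate penalty, so it can be used to visualize the thresholding function that a transformed penalty induces before deriving it analytically.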
6.
We discuss a Bayesian formalism which gives rise to a type of wavelet threshold estimation in nonparametric regression. A prior distribution is imposed on the wavelet coefficients of the unknown response function, designed to capture the sparseness of wavelet expansion that is common to most applications. For the prior specified, the posterior median yields a thresholding procedure. Our prior model for the underlying function can be adjusted to give functions falling in any specific Besov space. We establish a relationship between the hyperparameters of the prior model and the parameters of those Besov spaces within which realizations from the prior will fall. Such a relationship gives insight into the meaning of the Besov space parameters. Moreover, the relationship established makes it possible in principle to incorporate prior knowledge about the function's regularity properties into the prior model for its wavelet coefficients. However, prior knowledge about a function's regularity properties might be difficult to elicit; with this in mind, we propose a standard choice of prior hyperparameters that works well in our examples. Several simulated examples are used to illustrate our method, and comparisons are made with other thresholding methods. We also present an application to a data set that was collected in an anaesthesiological study.
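A minimal numerical sketch of how a posterior median yields a thresholding rule, under an assumed mixture prior w·δ₀ + (1−w)·N(0, τ²) for a coefficient observed with Gaussian noise. The prior form, hyperparameter values, and bisection solver are illustrative assumptions, not the paper's Besov-calibrated prior.

```python
import math

def phi(x, mu=0.0, var=1.0):
    """Normal density with mean mu and variance var."""
    return math.exp(-(x - mu)**2 / (2*var)) / math.sqrt(2*math.pi*var)

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def posterior_median(y, w=0.5, tau2=25.0, sigma2=1.0):
    """Posterior median of theta under prior w*delta_0 + (1-w)*N(0, tau2),
    with y | theta ~ N(theta, sigma2)."""
    # Posterior weight on the point mass at zero.
    m0 = w * phi(y, 0.0, sigma2)
    m1 = (1 - w) * phi(y, 0.0, sigma2 + tau2)
    p0 = m0 / (m0 + m1)
    # Nonzero component of the posterior: N(mean, var).
    mean = y * tau2 / (sigma2 + tau2)
    var = sigma2 * tau2 / (sigma2 + tau2)
    cdf = lambda t: (p0 if t >= 0 else 0.0) \
        + (1 - p0) * Phi((t - mean) / math.sqrt(var))
    # The jump of size p0 at zero is what makes the median a threshold rule.
    left = (1 - p0) * Phi(-mean / math.sqrt(var))
    if left < 0.5 <= p0 + left:
        return 0.0
    lo, hi = -abs(y) - 10.0, abs(y) + 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

small, large = posterior_median(0.1), posterior_median(5.0)
```

A small observation is mapped exactly to zero, while a large one is only mildly shrunk toward the prior mean: the qualitative behaviour that makes the posterior median a thresholding estimator.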
7.
Order selection is an important step in the application of finite mixture models. Classical methods such as AIC and BIC discourage complex models with a penalty directly proportional to the number of mixing components. In contrast, Chen and Khalili propose to link the penalty to two types of overfitting. In particular, they introduce a regularization penalty to merge similar subpopulations in a mixture model, where the shrinkage idea of regularized regression is seamlessly employed. However, the new method requires an effective and efficient algorithm. When the popular expectation-maximization (EM) algorithm is used, we need to maximize a nonsmooth and nonconcave objective function in the M-step, which is computationally challenging. In this article, we show that such an objective function can be transformed into a sum of univariate auxiliary functions. We then design an iterative thresholding descent (ITD) algorithm to efficiently solve the associated optimization problem. Unlike many existing numerical approaches, the new algorithm leads to sparse solutions and thereby avoids undesirable ad hoc steps. We establish the convergence of the ITD and further assess its empirical performance using both simulations and real data examples.
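Not the authors' ITD algorithm, but the same idea in its simplest form: iterative soft-thresholding (ISTA) for an L1-penalized least-squares problem, where each iteration reduces to a gradient step followed by a univariate thresholding operation per coordinate. The design matrix, step size, and penalty level below are illustrative assumptions.

```python
def ista_lasso(X, y, lam, step, iters=200):
    """Iterative soft-thresholding: gradient step on the smooth loss,
    then a univariate soft-threshold applied to each coordinate."""
    n, p = len(X), len(X[0])
    b = [0.0]*p
    for _ in range(iters):
        # Residual r = X b - y.
        r = [sum(X[i][j]*b[j] for j in range(p)) - y[i] for i in range(n)]
        # Gradient g = X^T r.
        g = [sum(X[i][j]*r[i] for i in range(n)) for j in range(p)]
        for j in range(p):
            z = b[j] - step*g[j]
            t = step*lam
            b[j] = (z - t) if z > t else (z + t) if z < -t else 0.0
    return b

# Orthonormal design, so the solution is soft-thresholding of X^T y:
# the large coefficient survives (shrunk by lam), the small one is set to zero.
X = [[1.0, 0.0], [0.0, 1.0]]
y = [3.0, 0.4]
b = ista_lasso(X, y, lam=1.0, step=0.5)
```

Because the update decouples across coordinates, the iterate for the weak coordinate stays exactly at zero: the algorithm produces sparse solutions directly rather than via a post hoc truncation step.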
8.
We study confidence intervals based on hard-thresholding, soft-thresholding, and adaptive soft-thresholding in a linear regression model where the number of regressors k may depend on and diverge with the sample size n. In addition to the case of known error variance, we define and study versions of the estimators when the error variance is unknown. In the known-variance case, we provide an exact analysis of the coverage properties of such intervals in finite samples. We show that these intervals are always larger than the standard interval based on the least-squares estimator. Asymptotically, the intervals based on the thresholding estimators are larger even by an order of magnitude when the estimators are tuned to perform consistent variable selection. For the unknown-variance case, we provide nontrivial lower bounds and a small numerical study for the coverage probabilities in finite samples. We also conduct an asymptotic analysis where the results from the known-variance case can be shown to carry over asymptotically if the number of degrees of freedom n − k tends to infinity fast enough in relation to the thresholding parameter.
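The three thresholding rules can be sketched as follows. The "adaptive soft" form below is the garrote-type rule z(1 − λ²/z²)₊, an assumption about the exact variant in use; the key contrast is that its shrinkage vanishes for large |z|, while plain soft thresholding shrinks by λ everywhere and hard thresholding does not shrink at all.

```python
def hard(z, lam):
    """Hard thresholding: keep z unchanged if it clears the threshold."""
    return z if abs(z) > lam else 0.0

def soft(z, lam):
    """Soft thresholding: shrink toward zero by lam."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def adaptive_soft(z, lam):
    """Garrote-type adaptive soft thresholding: z * (1 - lam^2/z^2)_+.
    The shrinkage amount lam^2/|z| vanishes as |z| grows."""
    if abs(z) <= lam:
        return 0.0
    return z * (1.0 - (lam*lam)/(z*z))
```

For example, at λ = 1 an observation z = 10 is kept as 10 by hard thresholding, shrunk to 9 by soft thresholding, and shrunk only to 9.9 by the adaptive rule.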
9.
Liu Liping et al., 《统计研究》 (Statistical Research), 2015, 32(6): 105-112
High-dimensional data pose a serious challenge to traditional covariance matrix estimation methods: the effects of dimensionality and noise cannot be ignored. This paper combines principal components with thresholding and applies them to the estimation of the DCC model, proposing a DCC model based on thresholding the principal orthogonal complement (poetDCC). The poetDCC model captures the information in the high-dimensional dynamic conditional covariance matrix through the first K principal components, and then applies a thresholding function to the orthogonal complement of the matrix, effectively reducing the dimensionality of the data and removing the influence of noise. Simulation and empirical studies show that, compared with the DCC model, the poetDCC model clearly improves the efficiency of estimating and forecasting the high-dimensional covariance matrix; when applied to portfolio selection, it also yields higher investment returns and greater economic welfare for investors.
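A one-factor sketch of the principal-orthogonal-complement thresholding idea: take the low-rank part of the covariance matrix from its top principal component, then hard-threshold the off-diagonal entries of the residual. This is a generic illustration in pure Python, not the poetDCC estimator; the test matrix, the choice K = 1, and the threshold level are assumptions.

```python
import math

def top_eigenpair(S, iters=500):
    """Leading eigenvalue/eigenvector of a symmetric matrix by power iteration."""
    p = len(S)
    v = [1.0]*p
    for _ in range(iters):
        w = [sum(S[i][j]*v[j] for j in range(p)) for i in range(p)]
        norm = math.sqrt(sum(x*x for x in w))
        v = [x/norm for x in w]
    lam = sum(v[i]*sum(S[i][j]*v[j] for j in range(p)) for i in range(p))
    return lam, v

def poet_estimate(S, thr):
    """One-factor sketch: low-rank part from the top principal component,
    hard thresholding of the off-diagonal residual (diagonal always kept)."""
    p = len(S)
    lam, v = top_eigenpair(S)
    low = [[lam*v[i]*v[j] for j in range(p)] for i in range(p)]
    resid = [[S[i][j] - low[i][j] for j in range(p)] for i in range(p)]
    resid_thr = [[resid[i][j] if (i == j or abs(resid[i][j]) > thr) else 0.0
                  for j in range(p)] for i in range(p)]
    return [[low[i][j] + resid_thr[i][j] for j in range(p)] for i in range(p)]

# Covariance with a strong common factor plus small idiosyncratic noise.
S = [[1.50, 1.02, 1.01],
     [1.02, 1.50, 1.03],
     [1.01, 1.03, 1.50]]
est = poet_estimate(S, thr=0.25)
```

The thresholding removes the small off-diagonal residual left after the factor part is subtracted, while the diagonal (the idiosyncratic variances) is retained exactly.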
10.
Wavelet shrinkage estimation is an increasingly popular method for signal denoising and compression. Although Bayes estimators can provide excellent mean-squared error (MSE) properties, the selection of an effective prior is a difficult task. To address this problem, we propose empirical Bayes (EB) prior selection methods for various error distributions including the normal and the heavier-tailed Student t-distributions. Under such EB prior distributions, we obtain threshold shrinkage estimators based on model selection, and multiple-shrinkage estimators based on model averaging. These EB estimators are seen to be computationally competitive with standard classical thresholding methods, and to be robust to outliers in both the data and wavelet domains. Simulated and real examples are used to illustrate the flexibility and improved MSE performance of these methods in a wide variety of settings.
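As an example of the classical thresholding baselines such EB methods are compared against, the universal threshold σ√(2 log n) of Donoho and Johnstone can be paired with a robust noise-level estimate from the median absolute deviation of the detail coefficients. The toy coefficient vector below is illustrative.

```python
import math
import statistics

def universal_threshold(coeffs, sigma):
    """Donoho-Johnstone universal threshold: sigma * sqrt(2 log n)."""
    return sigma * math.sqrt(2.0 * math.log(len(coeffs)))

def mad_sigma(detail):
    """Robust noise-level estimate: median absolute deviation of the
    detail coefficients, scaled to be consistent for Gaussian noise."""
    med = statistics.median(detail)
    return statistics.median(abs(c - med) for c in detail) / 0.6745

# One large coefficient (signal) among noise-level coefficients.
detail = [0.1, -0.2, 0.05, -0.1, 4.0, 0.15, -0.05, 0.1]
sigma = mad_sigma(detail)
lam = universal_threshold(detail, sigma)
kept = [c if abs(c) > lam else 0.0 for c in detail]
```

With the MAD estimate the large coefficient barely moves the threshold, so only the genuine signal coefficient survives hard thresholding; the EB methods in the paper aim to beat this baseline in MSE while retaining its robustness.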