Similar Documents
20 similar documents found.
1.
In singular spectrum analysis (SSA), the window length is a critical tuning parameter that must be assigned by the practitioner. This paper provides a theoretical analysis of signal–noise separation and time series reconstruction in SSA that can serve as a guide to optimal window choice. We establish numerical bounds on the mean squared reconstruction error and present their almost sure limits under very general regularity conditions on the underlying data generating mechanism. We also provide asymptotic bounds for the mean squared separation error. Evidence obtained using simulation experiments and real data sets indicates that the theoretical properties are reflected in observed behaviour, even in relatively small samples, and the results indicate how, in practice, an optimal assignment for the window length can be made.
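The abstract does not spell out the reconstruction algorithm, so the following is only a minimal SSA sketch to fix ideas about how the window length enters: embed the series in a window-by-K trajectory matrix, take an SVD, keep a few leading components, and recover a series by diagonal averaging. The rank choice, the simulated sine-plus-noise series, and all names below are illustrative assumptions, not the paper's procedure or bounds.

```python
import numpy as np

def ssa_reconstruct(x, window, rank):
    """Basic SSA: embed -> SVD -> keep `rank` components -> diagonal averaging."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix, shape (window, k)
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Rank-`rank` approximation of the trajectory matrix
    X_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    # Diagonal (anti-diagonal) averaging back to a series
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        recon[j:j + window] += X_hat[:, j]
        counts[j:j + window] += 1
    return recon / counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(400)
    signal = np.sin(2 * np.pi * t / 25)
    x = signal + rng.normal(scale=0.5, size=t.size)
    # Crude illustration of window choice: compare reconstruction error across windows
    for window in (10, 50, 100, 200):
        err = np.mean((ssa_reconstruct(x, window, rank=2) - signal) ** 2)
        print(f"window={window:3d}  mean squared reconstruction error={err:.4f}")
```

Sweeping the window as in the loop above mimics the practical question the paper addresses analytically: which window length keeps the reconstruction error small.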

2.
This paper considers a linear regression model with regression parameter vector β. The parameter of interest is θ = aᵀβ, where a is specified. When, as a first step, a data‐based variable selection procedure (e.g. minimum Akaike information criterion) is used to select a model, it is common statistical practice to then carry out inference about θ, using the same data, based on the (false) assumption that the selected model had been provided a priori. The paper considers a confidence interval for θ with nominal coverage 1 − α constructed on this (false) assumption, and calls this the naive 1 − α confidence interval. The minimum coverage probability of this confidence interval can be calculated for simple variable selection procedures involving only a single variable. However, the kinds of variable selection procedures used in practice are typically much more complicated. For the real‐life data presented in this paper, there are 20 variables each of which is to be either included or not, leading to 2²⁰ different models. The coverage probability at any given value of the parameters provides an upper bound on the minimum coverage probability of the naive confidence interval. This paper derives a new Monte Carlo simulation estimator of the coverage probability, which uses conditioning for variance reduction. For these real‐life data, the gain in efficiency of this Monte Carlo simulation due to conditioning ranged from 2 to 6. The paper also presents a simple one‐dimensional search strategy for parameter values at which the coverage probability is relatively small. For these real‐life data, this search leads to parameter values for which the coverage probability of the naive 0.95 confidence interval is 0.79 for variable selection using the Akaike information criterion and 0.70 for variable selection using the Bayes information criterion, showing that these confidence intervals are completely inadequate.
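As a toy illustration of the coverage problem (not the paper's conditioning-based estimator, nor its 20-variable data set), the sketch below estimates by plain Monte Carlo the coverage of a naive 95% interval for a regression coefficient computed after AIC chooses between a full model and a model that drops one regressor. The two-regressor design, the sample size, and all names are assumptions.

```python
import numpy as np
from scipy import stats

def naive_ci_coverage(beta2, n=50, rho=0.7, n_rep=5000, level=0.95, seed=1):
    """Monte Carlo estimate of the coverage of the naive CI for beta1 when AIC
    first chooses between the full model and the model that drops x2."""
    rng = np.random.default_rng(seed)
    beta1, cover = 1.0, 0
    for _ in range(n_rep):
        x1 = rng.normal(size=n)
        x2 = rho * x1 + np.sqrt(1 - rho ** 2) * rng.normal(size=n)
        y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
        candidates = [np.column_stack([np.ones(n), x1, x2]),   # full model
                      np.column_stack([np.ones(n), x1])]       # model that drops x2
        best = None
        for X in candidates:
            b = np.linalg.lstsq(X, y, rcond=None)[0]
            rss = float(np.sum((y - X @ b) ** 2))
            aic = n * np.log(rss / n) + 2 * X.shape[1]
            if best is None or aic < best[0]:
                best = (aic, X, b, rss)
        _, X, b, rss = best
        df = n - X.shape[1]
        se = np.sqrt(rss / df * np.linalg.inv(X.T @ X)[1, 1])  # naive s.e. of beta1-hat
        half = stats.t.ppf(0.5 + level / 2, df) * se
        cover += (b[1] - half <= beta1 <= b[1] + half)
    return cover / n_rep

if __name__ == "__main__":
    for beta2 in (0.0, 0.2, 0.5):
        print(f"beta2 = {beta2:.1f}: estimated naive coverage = {naive_ci_coverage(beta2):.3f}")
```

Even in this tiny example the coverage drifts below the nominal level for intermediate values of the dropped coefficient, which is the phenomenon the paper quantifies on a much larger model space.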

3.
4.
The focused information criterion for model selection is constructed to select the model that best estimates a particular quantity of interest, the focus, in terms of mean squared error. We extend this focused selection process to the high‐dimensional regression setting with potentially a larger number of parameters than the size of the sample. We distinguish two cases: (i) the case where the considered submodel is of low dimension and (ii) the case where it is of high dimension. In the former case, we obtain an alternative expression of the low‐dimensional focused information criterion that can directly be applied. In the latter case, we use a desparsified estimator that allows us to derive the mean squared error of the focus estimator. We illustrate the performance of the high‐dimensional focused information criterion with a numerical study and a real dataset.
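The FIC formula itself is not reproduced in the abstract. As a stand-in, the sketch below estimates by simulation the quantity the criterion targets: the mean squared error of a focus θ = x0ᵀβ under the full model versus a submodel that omits a weak regressor. The design, the focus point x0, and the submodel are illustrative assumptions, not the authors' criterion.

```python
import numpy as np

def focus_mse(n=60, beta=(1.0, 0.5, 0.15), n_rep=5000, seed=2):
    """Simulated MSE of the focus theta = x0' beta under the full model
    (all 3 regressors) and a submodel that drops the last, small coefficient."""
    rng = np.random.default_rng(seed)
    beta = np.asarray(beta)
    x0 = np.array([1.0, 1.0, 1.0])            # focus: regression function at x0
    theta_true = x0 @ beta
    err_full, err_sub = [], []
    for _ in range(n_rep):
        X = rng.normal(size=(n, 3))
        y = X @ beta + rng.normal(size=n)
        b_full = np.linalg.lstsq(X, y, rcond=None)[0]
        b_sub = np.linalg.lstsq(X[:, :2], y, rcond=None)[0]   # omits the weak regressor
        err_full.append((x0 @ b_full - theta_true) ** 2)
        err_sub.append((x0[:2] @ b_sub - theta_true) ** 2)
    return np.mean(err_full), np.mean(err_sub)

if __name__ == "__main__":
    mse_full, mse_sub = focus_mse()
    print(f"focus MSE, full model: {mse_full:.4f}   submodel: {mse_sub:.4f}")
```

The FIC approximates this bias-variance comparison analytically instead of by brute-force simulation, which is what makes it usable when the number of candidate submodels is large.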

5.
6.
7.
8.
9.
This paper describes a method for estimating the time delay between two stationary time series signals, in which the input signal is measured with little noise and the output signal is the sum of noise and the response of a linear system. We use the Hilbert transform relation for minimum delay systems to estimate the time delay. Some computer simulation results are given to evaluate the performance of the proposed method.
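For orientation, the sketch below estimates a delay with the standard cross-correlation argmax; this is a baseline method, plainly not the Hilbert-transform relation for minimum delay systems used in the paper. The simulated impulse response, noise level, and names are assumptions.

```python
import numpy as np

def delay_by_crosscorr(x, y):
    """Estimate the delay of y relative to x as the argmax of the cross-correlation
    (a standard baseline; the paper's Hilbert-transform approach is different)."""
    x = x - x.mean()
    y = y - y.mean()
    cc = np.correlate(y, x, mode="full")            # lags -(n-1), ..., n-1
    lags = np.arange(-len(x) + 1, len(x))
    return lags[np.argmax(cc)]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, true_delay = 2000, 17
    x = rng.normal(size=n)                          # input, essentially noise-free
    h = np.exp(-np.arange(30) / 5.0)                # a simple causal impulse response
    out = np.convolve(x, h)[:n]
    y = np.concatenate([np.zeros(true_delay), out[:n - true_delay]])   # delayed output
    y = y + 0.3 * rng.normal(size=n)                # plus measurement noise
    print("true delay:", true_delay, " estimated:", delay_by_crosscorr(x, y))
```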

10.
11.
We consider bridge regression models, which can produce a sparse or non-sparse model by controlling a tuning parameter in the penalty term. A crucial part of a model building strategy is the selection of the values for adjusted parameters, such as regularization and tuning parameters. Indeed, this can be viewed as a problem in selecting and evaluating the model. We propose a Bayesian selection criterion for evaluating bridge regression models. This criterion enables us to objectively select the values of the adjusted parameters. We investigate the effectiveness of our proposed modeling strategy with some numerical examples.
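A minimal sketch of the bridge estimator being tuned (not of the proposed Bayesian selection criterion): the penalized least squares objective ||y − Xβ||² + λ Σ|βj|^q is minimized numerically for a few values of q. The data-generating setup, the penalty value, and the use of a derivative-free optimizer are assumptions made only for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def bridge_fit(X, y, lam, q):
    """Bridge-penalized least squares: minimize ||y - Xb||^2 + lam * sum(|b_j|^q).
    A derivative-free optimizer is used, so coefficients shrink towards (but not
    exactly to) zero; dedicated algorithms are needed for exact sparsity."""
    obj = lambda b: np.sum((y - X @ b) ** 2) + lam * np.sum(np.abs(b) ** q)
    start = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS as starting point
    return minimize(obj, start, method="Powell").x

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n, p = 100, 5
    X = rng.normal(size=(n, p))
    beta = np.array([2.0, 0.0, 0.0, -1.0, 0.0])
    y = X @ beta + rng.normal(size=n)
    for q in (0.5, 1.0, 2.0):          # q = 1 is the lasso penalty, q = 2 is ridge
        b = bridge_fit(X, y, lam=20.0, q=q)
        print(f"q={q}: ", np.round(b, 2))
```

The tuning problem the abstract refers to is exactly the joint choice of λ and q above; the paper's contribution is a Bayesian criterion for making that choice objectively.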

12.
Image processing through multiscale analysis and measurement noise modeling
We describe a range of powerful multiscale analysis methods. We also focus on the pivotal issue of measurement noise in the physical sciences. From multiscale analysis and noise modeling, we develop a comprehensive methodology for data analysis of 2D images, 1D signals (or spectra), and point pattern data. Noise modeling is based on the following: (i) multiscale transforms, including wavelet transforms; (ii) a data structure termed the multiresolution support; and (iii) multiple scale significance testing. The latter two aspects serve to characterize signal with respect to noise. The data analysis objectives we deal with include noise filtering and scale decomposition for visualization or feature detection.
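The following is a toy one-dimensional stand-in for the methodology sketched above, assuming a simple "à trous"-style additive decomposition and a keep-if-above-k·σ rule in place of the paper's multiresolution support and significance testing; the kernel, thresholds, and names are illustrative.

```python
import numpy as np

def atrous_decompose(x, n_scales):
    """Additive multiscale ('a trous') decomposition of a 1D signal:
    x = sum_j detail_j + smooth, using a [1/4, 1/2, 1/4] kernel dilated by 2**j
    (periodic boundaries via np.roll, for simplicity)."""
    c = x.astype(float)
    details = []
    for j in range(n_scales):
        step = 2 ** j
        c_next = 0.5 * c + 0.25 * (np.roll(c, step) + np.roll(c, -step))
        details.append(c - c_next)
        c = c_next
    return details, c

def denoise(x, n_scales=4, k=3.0):
    """Keep only multiscale coefficients larger than k times a scale-wise noise level
    estimated by the median absolute deviation; a toy stand-in for flagging
    'significant' coefficients via a multiresolution support."""
    details, smooth = atrous_decompose(x, n_scales)
    out = smooth.copy()
    for w in details:
        sigma = np.median(np.abs(w)) / 0.6745       # robust noise estimate per scale
        out += np.where(np.abs(w) > k * sigma, w, 0.0)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    t = np.linspace(0, 1, 512)
    signal = np.sin(6 * np.pi * t) + (t > 0.5)      # smooth part plus a jump
    noisy = signal + 0.3 * rng.normal(size=t.size)
    print("noisy    MSE:", np.mean((noisy - signal) ** 2).round(4))
    print("denoised MSE:", np.mean((denoise(noisy) - signal) ** 2).round(4))
```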

13.
14.
We discuss a method of weighting the likelihood equations with the aim of obtaining fully efficient and robust estimators. We discuss the case of discrete probability models using several weighting functions. If the weight functions generate increasing residual adjustment functions then the method provides a link between the maximum likelihood score equations and minimum disparity estimation, as well as a set of diagnostic weights and a goodness of fit criterion. However, when the weights do not generate increasing residual adjustment functions a selection criterion is needed to obtain the robust root. The weight functions discussed in this paper do not automatically downweight a proportion of the data; an observation is significantly downweighted only if it is inconsistent with the assumed model. At the true model, therefore, the proposed estimating equations behave like the ordinary likelihood equations. We apply our results to several discrete models; in addition, a toxicology experiment illustrates the method in the context of logistic regression.
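A much-reduced sketch of the weighting idea, assuming a single Poisson mean and Huber-type weights on Pearson residuals instead of the disparity-based residual adjustment functions discussed in the paper: observations consistent with the model keep weight one, and only observations far from the fit are downweighted.

```python
import numpy as np

def weighted_poisson_mean(x, c=2.5, n_iter=50):
    """Estimate a Poisson mean from weighted score equations,
    sum_i w_i * (x_i - mu) = 0, with Huber-type weights on Pearson residuals.
    (A simplified stand-in for disparity-based weights.)"""
    mu = float(np.median(x))                   # robust starting value
    for _ in range(n_iter):
        r = (x - mu) / np.sqrt(max(mu, 1e-8))              # Pearson residuals
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-8))
        mu_new = np.sum(w * x) / np.sum(w)                 # root of the weighted score
        if abs(mu_new - mu) < 1e-10:
            break
        mu = mu_new
    return mu, w

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    x = rng.poisson(4.0, size=200).astype(float)
    x[:10] = 40.0                                          # a few gross outliers
    mu_hat, w = weighted_poisson_mean(x)
    print("plain mean:", x.mean().round(2), "  weighted estimate:", round(mu_hat, 2))
    print("smallest weights (the outliers):", np.round(np.sort(w)[:3], 3))
```

At the assumed model all weights stay close to one, so the weighted equations behave like the ordinary likelihood equations, which is the property the abstract emphasises.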

15.
Recent literature provides many computational and modeling approaches for covariance matrix estimation in penalized Gaussian graphical models, but relatively little study has been carried out on the choice of the tuning parameter. This paper tries to fill this gap by focusing on the problem of shrinkage parameter selection when estimating sparse precision matrices using the penalized likelihood approach. Previous approaches typically used K-fold cross-validation in this regard. In this paper, we first derive the generalized approximate cross-validation for tuning parameter selection, which is not only a more computationally efficient alternative but also achieves a smaller error rate for model fitting compared to leave-one-out cross-validation. For consistency in the selection of nonzero entries in the precision matrix, we employ a Bayesian information criterion which provably can identify the nonzero conditional correlations in the Gaussian model. Our simulations demonstrate the general superiority of the two proposed selectors in comparison with leave-one-out cross-validation, 10-fold cross-validation and the Akaike information criterion.
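As a concrete illustration of tuning parameter selection for a penalized precision matrix, the sketch below scores a grid of graphical lasso penalties with a standard Gaussian BIC; this is not the paper's generalized approximate cross-validation or its specific criterion, and the scikit-learn estimator, the penalty grid, and the parameter count are assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def bic_for_alpha(X, alpha):
    """Fit the graphical lasso at penalty `alpha` and return a Gaussian BIC:
    -2 * loglik + log(n) * (free parameters in the estimated precision matrix)."""
    n, p = X.shape
    theta = GraphicalLasso(alpha=alpha, max_iter=200).fit(X).precision_
    S = np.cov(X, rowvar=False, bias=True)
    loglik = 0.5 * n * (np.linalg.slogdet(theta)[1] - np.trace(S @ theta))
    n_edges = np.count_nonzero(np.triu(theta, k=1))        # selected edges
    return -2.0 * loglik + np.log(n) * (p + n_edges)       # diagonal + edges

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    p, n = 10, 400
    # Sparse true precision matrix: tridiagonal, hence a chain graph
    theta_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(theta_true), size=n)
    alphas = np.logspace(-2.5, -0.5, 15)
    bics = [bic_for_alpha(X, a) for a in alphas]
    print(f"BIC-selected alpha: {alphas[int(np.argmin(bics))]:.4f}")
```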

16.
Bootstrap smoothed (bagged) parameter estimators have been proposed as an improvement on estimators found after preliminary data‐based model selection. A result of Efron in 2014 is a very convenient and widely applicable formula for a delta method approximation to the standard deviation of the bootstrap smoothed estimator. This approximation provides an easily computed guide to the accuracy of this estimator. In addition, Efron considered a confidence interval centred on the bootstrap smoothed estimator, with width proportional to the estimate of this approximation to the standard deviation. We evaluate this confidence interval in the scenario of two nested linear regression models, the full model and a simpler model, and a preliminary test of the null hypothesis that the simpler model is correct. We derive computationally convenient expressions for the ideal bootstrap smoothed estimator and the coverage probability and expected length of this confidence interval. In terms of coverage probability, this confidence interval outperforms the post‐model‐selection confidence interval with the same nominal coverage and based on the same preliminary test. We also compare the performance of the confidence interval centred on the bootstrap smoothed estimator, in terms of expected length, to the usual confidence interval, with the same minimum coverage probability, based on the full model.
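A sketch of the setting, assuming a preliminary t-test between the two nested models and Efron's (2014) nonparametric delta-method formula sd = (Σj covj²)^{1/2}, where covj is the bootstrap covariance between the resampling count of observation j and the replicated estimate. The design, test level, and bootstrap size are illustrative, and the code is not the authors' closed-form expressions.

```python
import numpy as np
from scipy import stats

def post_test_estimate(X, y):
    """theta-hat = coefficient of x1, taken from the simpler model (drop x2)
    unless a preliminary 5%-level t-test rejects beta2 = 0."""
    n = len(y)
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    s2 = resid @ resid / (n - X.shape[1])
    se_b2 = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
    if abs(b[2] / se_b2) > stats.t.ppf(0.975, n - X.shape[1]):   # reject: keep full model
        return b[1]
    return np.linalg.lstsq(X[:, :2], y, rcond=None)[0][1]        # simpler model

def bagged_estimate_with_sd(X, y, n_boot=2000, seed=8):
    """Bootstrap smoothed estimator plus Efron's (2014) delta-method sd."""
    rng = np.random.default_rng(seed)
    n = len(y)
    counts = np.zeros((n_boot, n))
    t_star = np.zeros(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        counts[b] = np.bincount(idx, minlength=n)       # how often each obs. is drawn
        t_star[b] = post_test_estimate(X[idx], y[idx])
    smoothed = t_star.mean()
    cov_j = ((counts - counts.mean(axis=0)) * (t_star - smoothed)[:, None]).mean(axis=0)
    return smoothed, np.sqrt(np.sum(cov_j ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    n = 80
    x1 = rng.normal(size=n)
    x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)
    y = 1.0 * x1 + 0.3 * x2 + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2])
    theta, sd = bagged_estimate_with_sd(X, y)
    print(f"bootstrap smoothed estimate: {theta:.3f}   delta-method sd: {sd:.3f}")
```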

17.
We introduce a new methodology for estimating the parameters of a two-sided jump model, which aims at decomposing the daily stock return evolution into (unobservable) positive and negative jumps as well as Brownian noise. The parameters of interest are the jump beta coefficients, which measure the influence of the market jumps on the stock returns and are latent components. For this purpose, we first use the Variance Gamma (VG) distribution, which is frequently used in modeling financial time series and reveals the distributions of the hidden market jumps. Our method then estimates the parameters of the model from the central moments of the stock returns. It is proved that the proposed method always provides a solution in terms of the jump beta coefficients. We thus achieve a semi-parametric fit to the empirical data. The methodology itself serves as a criterion to test the fit of any set of parameters to the empirical returns. The analysis is applied to NASDAQ and Google returns during the 2006–2008 period.
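The mapping from return moments to the jump beta coefficients is the paper's contribution and is not reproduced here; the sketch below only illustrates the VG ingredient, simulating VG increments through the usual gamma time-change representation and reporting the sample central moments that a moment-based fit would match. The parameter values are arbitrary.

```python
import numpy as np

def vg_increments(n, theta, sigma, nu, dt=1.0, seed=11):
    """Simulate Variance Gamma increments via the gamma time-change representation:
    X = theta * G + sigma * sqrt(G) * Z, with G ~ Gamma(shape=dt/nu, scale=nu)."""
    rng = np.random.default_rng(seed)
    G = rng.gamma(shape=dt / nu, scale=nu, size=n)
    Z = rng.normal(size=n)
    return theta * G + sigma * np.sqrt(G) * Z

if __name__ == "__main__":
    x = vg_increments(200_000, theta=-0.1, sigma=0.2, nu=0.5)
    central = [np.mean((x - x.mean()) ** k) for k in (2, 3, 4)]
    print("sample central moments (2nd, 3rd, 4th):", np.round(central, 5))
```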

18.
19.
This paper presents a new criterion for selecting a two-level fractional factorial design. The theoretical underpinning for the criterion is the Shannon entropy. The criterion, which is referred to as the entropy-based minimum aberration criterion, has several advantages. Its advantage over the classical minimum aberration criterion is that it utilizes a measure of uncertainty on the skewness of the distribution of word length patterns in the selection of the “best” design in a family of two-level fractional factorial plans. The criterion also avoids the difficulty associated with the lack of prior knowledge of the important effects.
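To make the ingredients concrete, the sketch below computes the word length pattern of a two-level fractional factorial design from its generators and the Shannon entropy of the normalized pattern; how exactly the paper turns this entropy into its selection criterion is not stated in the abstract, so the example designs, the normalization, and the comparison are assumptions.

```python
import numpy as np
from itertools import combinations

def word_length_pattern(generators, n_factors):
    """Word length pattern of a two-level fractional factorial design.
    Each generator is a set of factor indices (e.g. I = ABCE -> {0, 1, 2, 4});
    the defining contrast subgroup consists of every product (symmetric
    difference) of a non-empty subset of generators."""
    gens = [frozenset(g) for g in generators]
    lengths = []
    for r in range(1, len(gens) + 1):
        for combo in combinations(gens, r):
            word = frozenset()
            for g in combo:
                word = word.symmetric_difference(g)
            lengths.append(len(word))
    return np.bincount(lengths, minlength=n_factors + 1)[3:]   # A3, A4, ...

def shannon_entropy(wlp):
    """Shannon entropy of the normalized word length pattern."""
    p = wlp[wlp > 0] / wlp.sum()
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    # Two competing 2^(6-2) designs, generators written as factor-index sets:
    designs = {"I = ABCE = BCDF":  [{0, 1, 2, 4}, {1, 2, 3, 5}],
               "I = ABCE = ABCDF": [{0, 1, 2, 4}, {0, 1, 2, 3, 5}]}
    for name, gens in designs.items():
        wlp = word_length_pattern(gens, n_factors=6)
        print(f"{name}:  WLP (A3, A4, ...) = {wlp},  entropy = {shannon_entropy(wlp):.3f}")
```

The first design concentrates all its words at length four (zero entropy), while the second spreads them over lengths three to five; the entropy-based criterion is built on summaries of exactly this kind of spread in the word length pattern.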

20.
Zero‐inflated data abound in ecological studies as well as in other scientific fields. Non‐parametric regression with zero‐inflated response may be studied via the zero‐inflated generalized additive model (ZIGAM) with a probabilistic mixture distribution of zero and a regular exponential family component. We propose the (partially) constrained ZIGAM, which assumes that some covariates affect the probability of non‐zero‐inflation and the regular exponential family distribution mean proportionally on the link scales. When the assumption obtains, the new approach provides a unified framework for modelling zero‐inflated data, which is more parsimonious and efficient than the unconstrained ZIGAM. We develop an iterative estimation algorithm, and discuss the confidence interval construction of the estimator. Some asymptotic properties are derived. We also propose a Bayesian model selection criterion for choosing between the unconstrained and constrained ZIGAMs. The new methods are illustrated with both simulated data and a real application in jellyfish abundance data analysis.
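The constrained ZIGAM involves smoothers and an iterative algorithm; as a much-reduced illustration of the proportionality constraint on the link scales, the sketch below fits a constrained zero-inflated Poisson regression (a plain GLM, not a GAM) by direct maximum likelihood, assuming log μi = ηi = xiᵀβ and logit(pi) = α + δ·ηi, with pi the probability of the regular Poisson component; all names and the simulated data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def constrained_zip_negloglik(params, X, y):
    """Zero-inflated Poisson with the proportionality constraint on the link scales:
    log mu_i = eta_i = x_i' beta and logit(p_i) = alpha + delta * eta_i,
    where p_i is the probability of the regular (Poisson) component."""
    p_dim = X.shape[1]
    beta, alpha, delta = params[:p_dim], params[p_dim], params[p_dim + 1]
    eta = X @ beta
    mu = np.exp(eta)
    p = expit(alpha + delta * eta)
    zero = y == 0
    ll_zero = np.log((1 - p[zero]) + p[zero] * np.exp(-mu[zero]) + 1e-300)
    ll_pos = (np.log(p[~zero] + 1e-300) - mu[~zero]
              + y[~zero] * eta[~zero] - gammaln(y[~zero] + 1))
    return -(ll_zero.sum() + ll_pos.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    n = 500
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    beta_true, alpha_true, delta_true = np.array([0.5, 0.8]), -0.5, 1.0
    eta = X @ beta_true
    p = expit(alpha_true + delta_true * eta)        # chance of the Poisson component
    y = np.where(rng.uniform(size=n) < p, rng.poisson(np.exp(eta)), 0)
    fit = minimize(constrained_zip_negloglik, np.zeros(X.shape[1] + 2), args=(X, y),
                   method="Nelder-Mead", options={"maxiter": 5000})
    print("estimated (beta0, beta1, alpha, delta):", np.round(fit.x, 2))
    print("true      (beta0, beta1, alpha, delta):", [0.5, 0.8, -0.5, 1.0])
```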
