Found 20 similar documents; search took 15 milliseconds.
1.
2.
We consider the problem of model selection based on quantile analysis, with unknown parameters estimated by quantile least squares. We propose a model selection test for the null hypothesis that the competing models are equivalent, against the alternative hypothesis that one model is closer to the true model. We present two applications of the proposed test. The first is model selection for time series with non-normal innovations. The second is model selection in forecasting with the NoVaS (normalizing and variance stabilizing transformation) method. A set of simulation results lends strong support to the results presented in the paper.
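As an illustrative sketch only (not the authors' test statistic), quantile least squares estimation minimizes the check (pinball) loss, and comparing the average check losses of two fitted models is the basic ingredient of such a selection test. The data-generating choice and the two candidate "models" below are hypothetical:

```python
import numpy as np

def check_loss(u, tau):
    """Quantile (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0).astype(float))

rng = np.random.default_rng(0)
y = rng.standard_t(df=3, size=500)   # heavy-tailed, non-normal data
tau = 0.5

# Two hypothetical location "models": the sample median vs. the sample mean.
loss_median = check_loss(y - np.median(y), tau).mean()
loss_mean = check_loss(y - np.mean(y), tau).mean()
# At tau = 0.5 the check loss is |u|/2, which the sample median minimizes,
# so loss_median cannot exceed loss_mean.
```

The selection test in the paper formalizes such a comparison into a hypothesis test; the snippet only shows the raw loss comparison.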
3.
4.
《Journal of Statistical Computation and Simulation》2012,82(4):391-411
This paper considers the issue of estimating the covariance matrix of ordinary least squares estimates in a linear regression model when heteroskedasticity is suspected. We perform Monte Carlo simulations on the White estimator, which is commonly used in empirical research, and on some alternatives based on different bootstrapping schemes. Our results reveal that the White estimator can be considerably biased when the sample size is not very large, that bias correction via the bootstrap does not work well, and that the weighted bootstrap estimators tend to display smaller biases than the White estimator and its variants, under both homoskedasticity and heteroskedasticity. Our results also reveal that the presence of (potentially) influential observations in the design matrix plays an important role in the finite-sample performance of the heteroskedasticity-consistent estimators.
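As a hedged sketch of the quantity under study (the simulation design here is invented, not the paper's), the White (HC0) estimator is the sandwich (X'X)⁻¹ X' diag(eᵢ²) X (X'X)⁻¹ built from OLS residuals:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])
# Heteroskedastic errors: variance grows with the second regressor.
sigma = np.exp(0.5 * X[:, 1])
y = X @ np.array([1.0, 2.0, -1.0]) + sigma * rng.standard_normal(n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta_hat                  # OLS residuals

XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (e[:, None] ** 2 * X)    # X' diag(e^2) X
V_white = XtX_inv @ meat @ XtX_inv    # HC0 sandwich covariance estimate
se_white = np.sqrt(np.diag(V_white))  # heteroskedasticity-consistent SEs
```

The bootstrap alternatives studied in the paper replace this analytic sandwich with resampling-based covariance estimates.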
5.
Chaohua Dong 《Econometric Reviews》2019,38(2):125-150
In this article, we develop a series estimation method for unknown time-inhomogeneous functionals of Lévy processes involved in econometric time series models. To obtain an asymptotic distribution for the proposed estimators, we establish a general asymptotic theory for partial sums of bivariate functionals of time and nonstationary variables. These results show that the proposed estimators converge, in different situations, to quite different random variables, and that the rates of convergence depend on various factors rather than just the sample size. Simulations are provided to evaluate the finite-sample performance of the proposed model and estimation method.
6.
Reinhard Böhm 《Statistical Methods and Applications》2012,21(3):335-339
This discussion focuses on threshold nonstationary–nonlinear time series modelling; it raises various issues to do with identifiability and model complexity. It also gives some background history concerning smooth threshold/transition autoregressive models and hidden Markov switching models.
7.
Mariantonietta Ruggieri Antonella Plaia Francesca Di Salvo Gianna Agró 《Journal of applied statistics》2013,40(4):795-807
Knowledge of urban air quality is the first step in addressing air pollution issues. For the last few decades, many cities have been able to rely on a network of monitoring stations recording concentration values for the main pollutants. This paper applies functional principal component analysis (FPCA) to multiple-pollutant datasets measured over time at multiple sites within a given urban area. Our purpose is to extend what has been proposed in the literature to data that are both multisite and multivariate. The approach proves effective in highlighting relevant statistical features of the time series, making it possible to identify significant pollutants and to track the evolution of their variability over time. The paper also addresses the missing-value issue. Very long gap sequences often occur in air quality datasets, due to prolonged instrument failures or to data coming from a mobile monitoring station. In the considered dataset, large and continuous gaps are imputed by an empirical orthogonal function procedure, after denoising the raw data by functional data analysis and before performing FPCA, in order to further improve the reconstruction.
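A minimal sketch of the FPCA idea, assuming curves already sampled on a common time grid (the seasonal toy data below stand in for one pollutant at several sites; this is not the paper's dataset): center the curves and take an SVD, so the right singular vectors play the role of principal component functions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sites, n_days = 8, 365
t = np.linspace(0, 2 * np.pi, n_days)
# Hypothetical concentrations: a shared seasonal curve with site-specific
# amplitude, plus measurement noise.
curves = np.array([(1 + 0.3 * rng.standard_normal()) * np.sin(t)
                   + 0.1 * rng.standard_normal(n_days)
                   for _ in range(n_sites)])

centered = curves - curves.mean(axis=0)      # subtract the mean function
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)              # variance share per component
pc_functions = Vt                            # rows = principal component curves
scores = U * s                               # site scores on each component
```

The multisite, multivariate extension in the paper stacks several pollutants into the functional data object rather than analysing one matrix at a time.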
8.
9.
The performance of the bootstrap method and the Edgeworth expansion in approximating the distribution of the sample variance is compared when the data come from a non-normal population. Both approximations are very good, so long as the parent population is close to normal.
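A minimal sketch of the bootstrap side of such a comparison, with an invented exponential parent population: resample the data with replacement and recompute the sample variance to approximate its sampling distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=200)   # a non-normal parent population
s2 = x.var(ddof=1)                         # observed sample variance

B = 2000
boot_vars = np.array([
    rng.choice(x, size=x.size, replace=True).var(ddof=1)
    for _ in range(B)
])
# The bootstrap distribution of the sample variance, from which e.g.
# percentile confidence intervals can be read off.
ci_low, ci_high = np.percentile(boot_vars, [2.5, 97.5])
```

The Edgeworth alternative instead corrects the normal approximation analytically using higher-order cumulants of the population.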
10.
Lifetime Data Analysis - The aim of this study is to provide an analysis of gap event times under the illness–death model, where some subjects experience “illness” before...
11.
12.
Y. Parmet 《Communications in Statistics: Theory and Methods》2017,46(15):7479-7494
There are two types of decompositions: of linear combinations of random variables into contributions of individual variables (sources) and associations between them, and of populations into contributions of their subpopulations. Simultaneous treatment of the two types is called for, taking into account the correlations between sources within subpopulations and between subpopulation means. The expected values of the subcomponents are derived, and their sensitivity to correlations among sources within groups and among source group means is examined. An example is provided in which the correlations contribute 20–25% of the total variability; this additional information is hidden when the decompositions are not simultaneous.
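The single-population part of such a decomposition is the identity Var(a'X) = Σᵢ aᵢ² Var(Xᵢ) + Σᵢ≠ⱼ aᵢaⱼ Cov(Xᵢ, Xⱼ). A numerical check with an invented covariance matrix (not the paper's example):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
# Three correlated "sources" with fixed weights a.
cov_true = np.array([[1.0, 0.6, 0.2],
                     [0.6, 1.0, 0.4],
                     [0.2, 0.4, 1.0]])
X = rng.multivariate_normal(np.zeros(3), cov_true, size=n)
a = np.array([0.5, 0.3, 0.2])

total = np.var(X @ a, ddof=1)               # variance of the combination
C = np.cov(X, rowvar=False)
var_part = np.sum(a**2 * np.diag(C))        # individual-source contributions
cov_part = a @ C @ a - var_part             # association (correlation) terms
share_of_correlations = cov_part / total    # the "hidden" share
```

With positive correlations and positive weights the association share is substantial, which is the kind of contribution the paper's 20–25% figure quantifies.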
13.
《Journal of Statistical Computation and Simulation》2012,82(3-4):255-271
We study a situation in which N independent classifications between N(−1, 1) and N(1, 1) are to be faced simultaneously. This problem was the featured example in Robbins' (1951) introduction of the compound decision problem and has been used many times since to illustrate various aspects of the developing theory of compound and empirical Bayes decisions. We here study the moderate-N risk behavior of recently developed so-called “extended” bootstrap and Bayes procedures for the problem. The behavior of these rules is compared to that of the bootstrap and Bayes rules originally suggested by Robbins.
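A hedged sketch of the classical compound rule in Robbins' example (not the “extended” procedures studied in the paper): with xᵢ ~ N(θᵢ, 1) and θᵢ ∈ {−1, +1}, the rule for a known proportion p of +1's decides +1 when xᵢ exceeds (1/2)·log((1−p)/p), and the compound version plugs in the estimate p̂ = (1 + x̄)/2, since E[x̄] = 2p − 1.

```python
import numpy as np

rng = np.random.default_rng(5)
N, p_true = 1000, 0.7                 # invented proportion of +1 means
theta = np.where(rng.random(N) < p_true, 1.0, -1.0)
x = theta + rng.standard_normal(N)

# Estimate the unknown proportion of +1's from the data itself.
p_hat = np.clip((1 + x.mean()) / 2, 1e-3, 1 - 1e-3)
threshold = 0.5 * np.log((1 - p_hat) / p_hat)
decisions = np.where(x > threshold, 1.0, -1.0)

# The naive symmetric rule ignores p and thresholds at zero.
naive = np.where(x > 0, 1.0, -1.0)
risk_compound = np.mean(decisions != theta)
risk_naive = np.mean(naive != theta)
```

When p differs from 1/2, the data-driven threshold shifts toward the rarer class, which is the gain the compound formulation delivers over per-observation rules.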
14.
Domenico Piccolo 《Statistical Methods and Applications》2012,21(3):363-369
We discuss the scientific contribution of Battaglia and Protopapas' paper concerning the debate on global warming, supported by an extensive analysis of temperature time series in the Alpine region. In their work, the authors use several exploratory and modelling tools for assessing and discriminating the presence of different patterns in the data. We add some general and specific considerations, mainly devoted to the modelling stage of their analysis.
15.
M. Mudelsee 《Statistical Methods and Applications》2012,21(3):341-346
The paper by Battaglia and Protopapas (Stat Method Appl 2012) is stimulating. It gives an elegant mathematical generalization of autoregressive models (the nine types). It explains state-of-the-art model fitting techniques (a genetic algorithm combined with a fitness function and least squares). It is written in a fluent and authoritative manner. Importantly for wider impact, it is accessible to non-statisticians. Finally, it has interesting results on the temperature evolution over the instrumental period (roughly the past 200 years). These merits make this paper an important contribution to applied statistics as well as climatology. As a climate researcher, coming from physics and having been affiliated with a statistical institute only as a postdoc, I re-analyse here three data series with the aim of providing motivation for model selection and interpreting the results from the climatological perspective.
16.
Francesco Giordano Cira Perna Cosimo D. Vitale 《Statistical Methods and Applications》2012,21(3):355-361
We discuss the paper by Battaglia and Protopapas concerning the analysis of the global warming phenomenon in the Alpine region. The authors consider a nonlinear model which takes into account regimes in time and in levels. In this contribution, some of the statistical results presented in the paper are commented on, and a different approach to the problem is proposed, based on a temporal aggregation analysis, which can help to highlight some features in the data.
17.
P. Jagers 《Statistics》2013,47(4):455-464
For a suitable norm, conservation of the distance between expectation and hypothesis may furnish a basis for data reduction by invariance in the linear, not necessarily normal, model. If the norm is Euclidean (i.e. based on some inner product), the maximal invariant is a pair of sums of squares. This provides support for traditional χ2 (or F) methods also in non-normal cases. If the norm is lp with p ≠ 2, or the sup-norm, the maximal invariant is at best a pair of order statistics.
18.
19.
Tommi Härkänen Juha Karvanen Hanna Tolonen Risto Lehtonen Kari Djerf Teppo Juntunen 《Journal of applied statistics》2016,43(15):2772-2790
We present a systematic approach to the practical and comprehensive handling of missing data, motivated by our experience of analyzing longitudinal survey data. We consider the Health 2000 and 2011 Surveys (BRIF8901), where increased non-response and non-participation from 2000 to 2011 was a major issue. The model assumptions involved in the complex sampling design, repeated measurements design, non-participation mechanisms and associations are presented graphically using methodology previously defined as a causal model with design, i.e. a functional causal model extended with the study design. This tool forces the statistician to make the study design and the missing-data mechanism explicit. Using the systematic approach, the sampling probabilities and the participation probabilities can be considered separately, which is beneficial when the performance of missing-data methods is to be compared. Using data from the Health 2000 and 2011 Surveys and from national registries, it was found that multiple imputation removed almost all differences between full-sample and estimated prevalences; inverse probability weighting removed more than half of the differences, and the doubly robust method 60%. These findings are encouraging, since decreasing participation rates are a major problem in population surveys worldwide.
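A minimal sketch of inverse probability weighting, one of the methods compared, on invented data with participation probabilities assumed known (in the surveys they would be estimated from the design and registry information):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20000
health = rng.random(n) < 0.30              # true condition indicator
# Participation depends on health status, so complete cases are biased.
p_participate = np.where(health, 0.5, 0.8)
participates = rng.random(n) < p_participate

naive = health[participates].mean()        # biased complete-case prevalence
w = 1.0 / p_participate[participates]      # inverse probability weights
ipw = np.average(health[participates], weights=w)

true_prev = health.mean()                  # full-sample target
```

Weighting participants by the inverse of their participation probability restores representativeness, which is why IPW recovers part of the gap the naive estimate leaves.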
20.
Factor analysis (FA) is the most commonly used pattern recognition methodology in social and health research. A technique that may help to retrieve truer information from FA is the rotation of the information axes. The main goal here is to test the reliability of the results derived through FA and to identify the best rotation method under various scenarios. Based on the simulation results, non-orthogonal (oblique) rotation produced more repeatable results than orthogonal rotation or no rotation at all.
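As a hedged illustration of orthogonal rotation (the simulation design of the study is not reproduced here), a standard SVD-based varimax implementation applied to an invented two-factor loading matrix:

```python
import numpy as np

def varimax(L, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a p x k loading matrix L."""
    p, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        # Gradient of Kaiser's varimax criterion.
        G = L.T @ (Lr**3 - Lr * (Lr**2).sum(axis=0) / p)
        U, s, Vt = np.linalg.svd(G)
        R = U @ Vt                     # nearest orthogonal matrix to G
        var_new = s.sum()
        if var_new < var_old * (1 + tol):
            break
        var_old = var_new
    return L @ R, R

# Hypothetical 6-variable, 2-factor loading pattern with cross-loadings.
L = np.array([[0.8, 0.3], [0.7, 0.4], [0.9, 0.2],
              [0.2, 0.8], [0.3, 0.7], [0.1, 0.9]])
L_rot, R = varimax(L)
```

Oblique methods such as promax drop the orthogonality constraint on R, allowing correlated factors, which is the family the simulations found more repeatable.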