Similar Articles (20 results)
1.
This article considers the maximum likelihood estimation (MLE) of a class of stationary and invertible vector autoregressive fractionally integrated moving-average (VARFIMA) processes considered in Equation (26) of Luceño [A fast likelihood approximation for vector general linear processes with long series: Application to fractional differencing, Biometrika 83 (1996), pp. 603–614] or Model A of Lobato [Consistency of the averaged cross-periodogram in long memory series, J. Time Ser. Anal. 18 (1997), pp. 137–155], where each component y_{i,t} is a fractionally integrated process of order d_i, i=1, …, r. Under the conditions outlined in Assumption 1 of this article, the conditional likelihood function of this class of VARFIMA models can be efficiently and exactly calculated with the conditional likelihood Durbin–Levinson (CLDL) algorithm proposed herein. The CLDL algorithm is based on the multivariate Durbin–Levinson algorithm of Whittle [On the fitting of multivariate autoregressions and the approximate canonical factorization of a spectral density matrix, Biometrika 50 (1963), pp. 129–134] and the conditional likelihood principle of Box and Jenkins [Time Series Analysis, Forecasting, and Control, 2nd ed., Holden-Day, San Francisco, CA]. Furthermore, the conditions in the aforementioned Assumption 1 are general enough to include, as special cases, the model considered in Andersen et al. [Modeling and forecasting realized volatility, Econometrica 71 (2003), pp. 579–625] for describing the behaviour of realized volatility and the model studied in Haslett and Raftery [Space–time modelling with long-memory dependence: Assessing Ireland's wind power resource, Appl. Statist. 38 (1989), pp. 1–50] for spatial data.
As the computational cost of implementing the CLDL algorithm is much lower than that of the algorithms proposed in Sowell [Maximum likelihood estimation of fractionally integrated time series models, Working paper, Carnegie-Mellon University], we are able to conduct a Monte Carlo experiment to investigate the finite-sample performance of the CLDL algorithm for 3-dimensional VARFIMA processes with a sample size of 400. The simulation results are very satisfactory and reveal the great potential of the CLDL method for empirical applications.
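The multivariate Durbin–Levinson recursion underlying the CLDL algorithm is not reproduced in the abstract. As an illustration only, its univariate form, which solves the Yule–Walker equations recursively from a sequence of autocovariances, can be sketched as follows (the function name `durbin_levinson` is illustrative, not from the paper):

```python
import numpy as np

def durbin_levinson(acov):
    """Durbin-Levinson recursion: solve the Yule-Walker equations for
    AR(p) coefficients from the autocovariances acov[0..p]; returns
    the order-p coefficients and the innovation variance."""
    acov = np.asarray(acov, dtype=float)
    p = len(acov) - 1
    phi = np.zeros((p + 1, p + 1))
    v = np.zeros(p + 1)
    v[0] = acov[0]
    for k in range(1, p + 1):
        # reflection coefficient (partial autocorrelation at lag k)
        num = acov[k] - np.dot(phi[k - 1, 1:k], acov[k - 1:0:-1])
        phi[k, k] = num / v[k - 1]
        for j in range(1, k):
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
        v[k] = v[k - 1] * (1.0 - phi[k, k] ** 2)
    return phi[p, 1:], v[p]
```

For an AR(1) process with coefficient 0.5 and unit innovation variance, the autocovariances (4/3, 2/3) recover the coefficient and the innovation variance exactly.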

2.
In this work, we investigate an alternative bootstrap approach, based on a result of Ramsey [F.L. Ramsey, Characterization of the partial autocorrelation function, Ann. Statist. 2 (1974), pp. 1296–1301] and on the Durbin–Levinson algorithm, for obtaining a surrogate series from linear Gaussian processes with long-range dependence. We compare this bootstrap method with other existing procedures in an extensive Monte Carlo experiment by estimating, parametrically and semi-parametrically, the memory parameter d. We consider Gaussian and non-Gaussian processes to assess the robustness of the method to deviations from normality. The approach is also useful for estimating confidence intervals for the memory parameter d, improving the coverage level of the interval.
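The authors' Durbin–Levinson-based scheme is not given in the abstract. As a sketch of the underlying idea, generating a Gaussian surrogate series with a prescribed second-order structure, one can factor the Toeplitz covariance directly (names are hypothetical; the paper's scheme instead works through the partial autocorrelations via the Durbin–Levinson algorithm):

```python
import numpy as np
from scipy.linalg import cholesky, toeplitz

def gaussian_surrogate(acov, rng):
    """Draw one Gaussian surrogate series whose covariance matches the
    autocovariance sequence acov[0..n-1], via a Cholesky factor of the
    Toeplitz covariance matrix."""
    L = cholesky(toeplitz(acov), lower=True)
    return L @ rng.standard_normal(len(acov))
```

Averaging over many draws, the empirical covariances of the surrogates match the target autocovariance sequence.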

3.
In this paper, we consider the problem of robust estimation of the fractional parameter, d, in long memory autoregressive fractionally integrated moving average processes, when two types of outliers, i.e. additive and innovation, are taken into account without knowing their number, position or intensity. The proposed method is a weighted likelihood estimation (WLE) approach, for which the necessary definitions and an algorithm are given. Through an extensive Monte Carlo simulation study, we compare the performance of the WLE method with that of both the approximated maximum likelihood estimation (MLE) and the robust M-estimator proposed by Beran (Statistics for Long-Memory Processes, Chapman & Hall, London, 1994). We find that robustness against the two types of outliers considered can be achieved without loss of efficiency. Moreover, as a byproduct of the procedure, we can classify the suspicious observations into different kinds of outliers. Finally, we apply the proposed methodology to the Nile River annual minima time series.

4.
Efficiency and robustness are two fundamental concepts in parametric estimation problems. It was long thought that there was an inherent contradiction between the aims of achieving robustness and efficiency; that is, a robust estimator could not be efficient and vice versa. It is now known that the minimum Hellinger distance approach introduced by Beran [R. Beran, Annals of Statistics 1977;5:445–463] is one way of reconciling the conflicting concepts of efficiency and robustness. For parametric models, it has been shown that minimum Hellinger estimators achieve efficiency at the model density and simultaneously have excellent robustness properties. In this article, we examine the application of this approach in two semiparametric models. In particular, we consider a two-component mixture model and a two-sample semiparametric model. In each case, we investigate minimum Hellinger distance estimators of finite-dimensional Euclidean parameters of particular interest and study their basic asymptotic properties. Small-sample properties of the proposed estimators are examined using a Monte Carlo study. The results can be extended to semiparametric models of general form as well. The Canadian Journal of Statistics 37: 514–533; 2009 © 2009 Statistical Society of Canada

5.
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the symmetry implied by the normality assumption for each component is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution and introduces skewness via the Box-Cox transformation. As a result, it provides a unified framework to simultaneously handle two interrelated issues: outlier identification and data transformation. We describe an expectation-maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches, including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
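As a small illustration of the skewness mechanism mentioned above, the Box-Cox power transformation takes the following form (a minimal sketch; the paper applies it componentwise within a multivariate t mixture):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox power transformation for y > 0: (y**lam - 1)/lam,
    with the log transform as the lam -> 0 limit."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam
```

For example, lam = 1 leaves the data shifted but unskewed, lam = 0 gives the log transform, and intermediate values interpolate between the two.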

6.
An important problem in statistics is the study of longitudinal data taking into account the effects of other explanatory variables, such as treatments and time. In this paper, a new Bayesian approach for analysing longitudinal data is proposed. This approach accommodates nonlinear regression structures on the mean and linear regression structures on the variance–covariance matrix of normal observations, and it is based on the modelling strategy suggested by Pourahmadi [M. Pourahmadi, Joint mean-covariance models with applications to longitudinal data: Unconstrained parameterizations, Biometrika, 87 (1999), pp. 667–690]. We initially extend the classical methodology to accommodate the fitting of nonlinear mean models, and then we propose our Bayesian approach based on a generalization of the Metropolis–Hastings algorithm of Cepeda [E.C. Cepeda, Variability modeling in generalized linear models, Unpublished Ph.D. Thesis, Mathematics Institute, Universidade Federal do Rio de Janeiro, 2001]. Finally, we illustrate the proposed methodology by analysing a cattle growth data set.

7.
Jingjing Wu, Statistics, 2015, 49(4): 711–740
The successful application of the Hellinger distance approach to fully parametric models is well known. The corresponding optimal estimators, known as minimum Hellinger distance (MHD) estimators, are efficient and have excellent robustness properties [Beran R. Minimum Hellinger distance estimators for parametric models. Ann Statist. 1977;5:445–463]. This combination of efficiency and robustness makes MHD estimators appealing in practice. However, their application to semiparametric statistical models, which have a nuisance parameter (typically of infinite dimension), has not been fully studied. In this paper, we investigate a methodology to extend the MHD approach to general semiparametric models. We introduce the profile Hellinger distance and use it to construct a minimum profile Hellinger distance estimator of the finite-dimensional parameter of interest. This approach is analogous in some sense to the profile likelihood approach. We investigate the asymptotic properties such as the asymptotic normality, efficiency, and adaptivity of the proposed estimator. We also investigate its robustness properties. We present its small-sample properties using a Monte Carlo study.
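A minimal toy version of the MHD idea for a fully parametric model, here a normal location family with unit variance, can be sketched as follows (the function name and the kernel density estimate are illustrative assumptions; the paper's profile-Hellinger construction for semiparametric models is more involved):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gaussian_kde, norm

def mhd_location(x):
    """Minimum Hellinger distance estimate of a normal location
    parameter (unit variance assumed).  Minimizing the Hellinger
    distance is equivalent to maximizing the affinity
    integral of sqrt(f_hat * f_mu)."""
    grid = np.linspace(x.min() - 3.0, x.max() + 3.0, 800)
    dx = grid[1] - grid[0]
    fhat = gaussian_kde(x)(grid)  # nonparametric density estimate

    def neg_affinity(mu):
        return -np.sum(np.sqrt(fhat * norm.pdf(grid, mu, 1.0))) * dx

    return minimize_scalar(neg_affinity, bounds=(grid[0], grid[-1]),
                           method="bounded").x
```

On clean normal data the estimate tracks the true location; the square root in the affinity is what damps the influence of outlying observations.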

8.
In the literature, different optimality criteria have been considered for model identification. Most of the proposals assume the normal distribution for the response variable and thus they provide optimality criteria for discriminating between regression models. In this paper, a max–min approach is followed to discriminate among competing statistical models (i.e., probability distribution families). More specifically, k different statistical models (plausible for the data) are embedded in a more general model, which includes them as particular cases. The proposed optimal design maximizes the minimum KL-efficiency to discriminate between each rival model and the extended one. An equivalence theorem is proved and an algorithm is derived from it, which is useful to compute max–min KL-efficiency designs. Finally, the algorithm is run on two illustrative examples.

9.
In this paper, we study statistical inference based on the Bayesian approach for regression models under the assumption that the independent additive errors follow a normal, Student-t, slash, contaminated normal, Laplace, or symmetric hyperbolic distribution, where both the location and dispersion parameters of the response variable distribution include nonparametric additive components approximated by B-splines. This class of models provides a rich set of symmetric distributions for the model error. Some of these distributions have heavier or lighter tails than the normal, as well as different levels of kurtosis. In order to draw samples from the posterior distribution of the parameters of interest, we propose an efficient Markov chain Monte Carlo (MCMC) algorithm, which combines the Gibbs sampler and Metropolis–Hastings algorithms. The performance of the proposed MCMC algorithm is assessed through simulation experiments. We apply the proposed methodology to a real data set. The proposed methodology is implemented in the R package BayesGESM using the function gesm().
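The Gibbs/Metropolis–Hastings hybrid of the paper is model-specific, but its Metropolis–Hastings ingredient can be sketched generically as a random-walk sampler (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def metropolis(logpost, x0, scale, n, rng):
    """Random-walk Metropolis: propose x' = x + scale * N(0, 1) and
    accept with probability min(1, exp(logpost(x') - logpost(x)))."""
    x, lp = x0, logpost(x0)
    chain = np.empty(n)
    for i in range(n):
        xp = x + scale * rng.standard_normal()
        lpp = logpost(xp)
        if np.log(rng.uniform()) < lpp - lp:
            x, lp = xp, lpp  # accept the proposal
        chain[i] = x         # otherwise keep the current state
    return chain
```

Run against a standard normal log-posterior, the chain's long-run mean and variance recover 0 and 1 after discarding a burn-in.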

10.
The class of beta regression models proposed by Ferrari and Cribari-Neto [Beta regression for modelling rates and proportions, Journal of Applied Statistics 31 (2004), pp. 799–815] is useful for modelling data that assume values in the standard unit interval (0, 1). The dependent variable relates to a linear predictor that includes regressors and unknown parameters through a link function. The model is also indexed by a precision parameter, which is typically taken to be constant for all observations. Some authors have used, however, variable dispersion beta regression models, i.e., models that include a regression submodel for the precision parameter. In this paper, we show how to perform testing inference on the parameters that index the mean submodel without having to model the data precision. This strategy is useful as it is typically harder to model dispersion effects than mean effects. The proposed inference procedure is accurate even under variable dispersion. We present the results of extensive Monte Carlo simulations where our testing strategy is contrasted to that in which the practitioner models the underlying dispersion and then performs testing inference. An empirical application that uses real (not simulated) data is also presented and discussed.
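For context, the basic fixed-precision beta regression likelihood with a logit link can be sketched as follows (a hedged sketch of the mean submodel only; the paper's contribution concerns testing inference without modelling the dispersion, which this sketch does not cover):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def beta_reg_nll(params, X, y):
    """Negative log-likelihood of a fixed-precision beta regression:
    y ~ Beta(mu*phi, (1-mu)*phi) with mu = logistic(X @ beta)."""
    beta, log_phi = params[:-1], params[-1]
    mu, phi = expit(X @ beta), np.exp(log_phi)
    a, b = mu * phi, (1.0 - mu) * phi
    return -np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                   + (a - 1.0) * np.log(y) + (b - 1.0) * np.log(1.0 - y))

def fit_beta_reg(X, y):
    """Maximum likelihood fit; returns (beta_hat, phi_hat)."""
    p = X.shape[1]
    res = minimize(beta_reg_nll, np.zeros(p + 1), args=(X, y),
                   method="BFGS")
    return res.x[:p], np.exp(res.x[p])
```

On data simulated from the model, the fit recovers the regression coefficients and the precision parameter.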

11.
This study proposes a class of non-linear realized stochastic volatility (SV) models obtained by applying the Box–Cox (BC) transformation, instead of the logarithmic transformation, to the realized estimator. Non-Gaussian distributions such as Student's t, non-central Student's t, and generalized hyperbolic skew Student's t distributions are applied to accommodate heavy-tailedness and skewness in returns. The proposed models are fitted to daily returns and realized kernels of six stock indices: SP500, FTSE100, Nikkei225, Nasdaq100, DAX, and DJIA, using a Markov chain Monte Carlo Bayesian method, in which the Hamiltonian Monte Carlo (HMC) algorithm updates the BC parameter and the Riemann manifold HMC algorithm updates the latent variables and other parameters that cannot be sampled directly. Empirical studies provide evidence against both the logarithmic-transformation and raw versions of the realized SV model.

12.
This paper investigates methodologies for evaluating the probability value (P-value) of the Kolmogorov–Smirnov (K–S) goodness-of-fit test, using algorithmic program development implemented in Microsoft® Visual Basic® (VB). Six methods were examined for the one-sided one-sample and two methods for the two-sided one-sample cumulative sampling distributions in the investigative software implementation, which was based on machine-precision arithmetic. For the sample sizes n ≤ 2000 considered, the Smirnov iterative method achieved optimal accuracy for K–S P-values ≥ 0.02, while the SmirnovD method was more accurate for lower P-values for the one-sided one-sample distribution statistics. Also, the Durbin matrix method sustained better P-value results than the Durbin recursion method for the two-sided one-sample tests for sample sizes up to n ≤ 700. Based on these results, an algorithm for a Microsoft Excel® function was proposed, from which a model function was developed; its implementation was used to test the performance of engineering students in a general engineering course across seven departments.
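As one concrete example of the kind of computation compared in the paper, the classical Smirnov asymptotic series approximates the two-sided one-sample K–S P-value (a sketch of the asymptotic formula only, not the exact Durbin matrix or recursion methods examined by the authors):

```python
import math

def ks_pvalue_asymptotic(d, n, terms=100):
    """Two-sided one-sample K-S P-value via the Smirnov asymptotic
    series: P(D_n > d) ~ 2 * sum_{k>=1} (-1)**(k-1) * exp(-2 k^2 n d^2),
    clamped to [0, 1]."""
    s = sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * n * d * d)
            for k in range(1, terms + 1))
    return min(max(2.0 * s, 0.0), 1.0)
```

For example, with n = 100 the statistic d = 0.136 (so sqrt(n)*d = 1.36, near the classical 5% critical value 1.358) gives a P-value of about 0.049.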

13.
Measurement error models constitute a wide class of models that includes linear and nonlinear regression models. They are very useful for modelling many real-life phenomena, particularly in the medical and biological areas. The great advantage of these models is that, in some sense, they can be represented as mixed effects models, allowing us to implement well-known techniques, such as the EM algorithm, for parameter estimation. In this paper, we consider a class of multivariate measurement error models where the observed response and/or covariate are not fully observed, i.e., the observations are subject to certain threshold values below or above which the measurements are not quantifiable. Consequently, these observations are considered censored. We assume a Student-t distribution for the unobserved true values of the mismeasured covariate and the error term of the model, providing a robust alternative for parameter estimation. Our approach relies on likelihood-based inference using an EM-type algorithm. The proposed method is illustrated through simulation studies and the analysis of an AIDS clinical trial dataset.

14.
We present an algorithm for multivariate robust Bayesian linear regression with missing data. The iterative algorithm computes an approximate posterior for the model parameters based on the variational Bayes (VB) method. Compared to the EM algorithm, the VB method has the advantage that the variance of the model parameters is also computed directly by the algorithm. We consider three families of Gaussian scale mixture models for the measurements, which include as special cases the multivariate t distribution, the multivariate Laplace distribution, and the contaminated normal model. The observations can contain missing values, assuming that the missing data mechanism can be ignored. A Matlab/Octave implementation of the algorithm is presented and applied to solve three reference examples from the literature.

15.
Doubly censored failure time data occur in many areas including demographical studies, epidemiology studies, medical studies and tumorigenicity experiments, and correspondingly some inference procedures have been developed in the literature (Biometrika, 91, 2004, 277; Comput. Statist. Data Anal., 57, 2013, 41; J. Comput. Graph. Statist., 13, 2004, 123). In this paper, we discuss regression analysis of such data under a class of flexible semiparametric transformation models, which includes some commonly used models for doubly censored data as special cases. For inference, the non-parametric maximum likelihood estimation will be developed and in particular, we will present a novel expectation–maximization algorithm with the use of subject-specific independent Poisson variables. In addition, the asymptotic properties of the proposed estimators are established and an extensive simulation study suggests that the proposed methodology works well for practical situations. The method is applied to an AIDS study.

16.
The magnitude-frequency distribution (MFD) of earthquakes is a fundamental statistic in seismology. The so-called b-value in the MFD is of particular interest in geophysics. A continuous-time hidden Markov model (HMM) is proposed for characterizing the variability of b-values. The HMM-based approach to modeling the MFD has some appealing properties over the widely used sliding-window approach: large variability often appears in the estimation of the b-value due to window-size tuning, which may cause difficulties in interpreting b-value heterogeneities. Continuous-time hidden Markov models (CT-HMMs) are widely applied in various fields. They have some advantages over their discrete-time counterparts in that they can characterize heterogeneities appearing in time series on a finer time scale, particularly for highly irregularly spaced time series such as earthquake occurrences. We demonstrate an expectation–maximization algorithm for the estimation of the general exponential-family CT-HMM. In parallel with discrete-time hidden Markov models, we develop a continuous-time version of the Viterbi algorithm to retrieve the overall optimal path of the latent Markov chain. The methods are applied to New Zealand deep earthquakes. Before the analysis, we first assess the completeness of the catalogue events to ensure the analysis is not biased by missing data. The estimation of the b-value is stable over the selection of magnitude thresholds, which is ideal for the interpretation of b-value variability.
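The continuous-time Viterbi algorithm of the paper builds on its discrete-time counterpart, which can be sketched as follows (an illustrative discrete-time version with discrete emissions, not the paper's CT-HMM algorithm):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable hidden state path of a discrete-time HMM with
    discrete emissions, by log-space dynamic programming.
    pi: initial probs (K,), A: transitions (K, K), B: emissions (K, M)."""
    pi, A, B = map(np.asarray, (pi, A, B))
    T, K = len(obs), len(pi)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]
    psi = np.zeros((T, K), dtype=int)  # backpointers
    for t in range(1, T):
        scores = delta[:, None] + logA          # scores[i, j]: come from i, land in j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):               # trace the backpointers
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```

With sticky transitions and reliable emissions, the decoded path follows the observation regimes, which is the discrete-time analogue of retrieving the latent b-value state sequence.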

17.
A general class of mixed Poisson regression models is introduced. This class is based on mixing the Poisson distribution with a distribution belonging to the exponential family. With this, we unify some overdispersed models that have been studied separately, such as the negative binomial and Poisson inverse Gaussian models. We consider a regression structure for both the mean and dispersion parameters of the mixed Poisson models, thus extending, and in some cases correcting, some previous models considered in the literature. An expectation–maximization (EM) algorithm is proposed for estimating the parameters, and some diagnostic measures based on the EM algorithm are considered. We also obtain an explicit expression for the observed information matrix. An empirical illustration is presented in order to show the performance of our class of mixed Poisson models. Supplementary material is available for this paper.
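A simple way to see the mixing construction is the classical Poisson-gamma case, which yields the negative binomial with mean mu and variance mu + mu**2/phi (a sketch with illustrative names, not the paper's EM estimation procedure):

```python
import numpy as np

def nb_via_poisson_gamma(mu, phi, size, rng):
    """Sample a negative binomial as a Poisson-gamma mixture:
    lambda ~ Gamma(shape=phi, scale=mu/phi), y | lambda ~ Poisson(lambda).
    Then E[y] = mu and Var[y] = mu + mu**2/phi (overdispersion)."""
    lam = rng.gamma(shape=phi, scale=mu / phi, size=size)
    return rng.poisson(lam)
```

The sample mean and variance of a large draw match the stated moments, making the overdispersion relative to the plain Poisson (variance equal to the mean) directly visible.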

18.
We consider hierarchical Bayesian models for the change-point problem in a sequence of random variables drawn from either a normal or a skew-normal population. Further, we consider the problem of detecting an influential observation concerning the change point using Bayes factors. Our proposed models are illustrated with a real data example: the annual flow volume of the Nile River at Aswan from 1871 to 1970. The results using our proposed models identify the observation for the year 1888 as the most influential among the outliers. We show that it is useful to measure the influence of observations on Bayes factors. Here, we consider omitting a single observation as well.

19.
In the optimal experimental design literature, G-optimality is defined as minimizing the maximum prediction variance over the entire experimental design space. Although G-optimality is a highly desirable property in many applications, few computer algorithms have been developed for constructing G-optimal designs. Some existing methods employ an exhaustive search over all candidate designs, which is time-consuming and inefficient. In this paper, a new algorithm for constructing G-optimal experimental designs is developed for both linear and generalized linear models. The new algorithm is based on clustering the candidate or evaluation points over the design space and is a combination of the point-exchange and coordinate-exchange algorithms. In addition, a robust design algorithm is proposed for generalized linear models by modifying an existing method. The proposed algorithm is compared with the methods of Rodriguez et al. [Generating and assessing exact G-optimal designs. J. Qual. Technol. 2010;42(1):3–20] and Borkowski [Using a genetic algorithm to generate small exact response surface designs. J. Prob. Stat. Sci. 2003;1(1):65–88] for linear models, and with the simulated annealing method and the genetic algorithm for generalized linear models, through several examples in terms of G-efficiency and computation time. The results show that the proposed algorithm can obtain a design with higher G-efficiency in a much shorter time. Moreover, the computation time of the proposed algorithm grows only polynomially with the model size.
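The G-criterion being optimized can be sketched directly for a linear model (an illustrative sketch; the function name is an assumption, and the G-efficiency reported in such comparisons is p divided by n times this maximum):

```python
import numpy as np

def g_criterion(X_design, X_cand):
    """G-criterion of a linear-model design: the maximum prediction
    variance x' (X'X)^{-1} x over the candidate (evaluation) points."""
    M_inv = np.linalg.inv(X_design.T @ X_design)
    return max(float(x @ M_inv @ x) for x in X_cand)
```

For the two-point design {-1, +1} with an intercept, the prediction variance is 1 at the design-space endpoints and 0.5 at the center, so the criterion value is 1. An exchange algorithm would repeatedly swap a design point for a candidate point whenever the swap lowers this maximum.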

20.
In this paper, we study estimation and inference for a class of semiparametric mixtures of partially linear models. We prove that the proposed models are identifiable under mild conditions and then give a PL-EM algorithm estimation procedure based on the profile likelihood. The asymptotic properties of the resulting estimators and the ascent property of the PL-EM algorithm are investigated. Furthermore, we develop a test statistic for testing whether the nonparametric component has a linear structure. Monte Carlo simulations and a real data application demonstrate the merits of the proposed procedures.
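The PL-EM procedure itself relies on profile likelihood, but the flavour of an EM iteration for a mixture can be sketched in a fully parametric toy case (a two-component univariate Gaussian mixture with common variance; this is an illustration, not the paper's PL-EM algorithm):

```python
import numpy as np
from scipy.stats import norm

def em_two_gauss(x, iters=200):
    """EM for a two-component univariate Gaussian mixture with a
    common variance; returns (weight, mu1, mu2, sigma)."""
    x = np.asarray(x, dtype=float)
    w, mu1, mu2 = 0.5, np.quantile(x, 0.25), np.quantile(x, 0.75)
    s = np.std(x)
    for _ in range(iters):
        # E-step: responsibilities of component 1
        r = w * norm.pdf(x, mu1, s)
        r = r / (r + (1.0 - w) * norm.pdf(x, mu2, s))
        # M-step: update weight, means, and common variance
        w = r.mean()
        mu1 = np.sum(r * x) / np.sum(r)
        mu2 = np.sum((1.0 - r) * x) / np.sum(1.0 - r)
        s = np.sqrt(np.sum(r * (x - mu1) ** 2
                           + (1.0 - r) * (x - mu2) ** 2) / len(x))
    return w, mu1, mu2, s
```

Each iteration increases the observed-data likelihood, the ascent property that the paper establishes for its PL-EM variant.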
