Similar Articles
20 similar articles were retrieved.
1.
There are many approaches to spectral density estimation. In parametric approaches, various divergences have been proposed for fitting a given parametric family of spectral densities; nonparametric approaches are also common when no model for the process can be specified. In this paper, we develop a local Whittle likelihood approach based on a general score function, whose special cases cover a wider range of applications. We establish the asymptotics of the general local Whittle estimator and compare it with other estimators. Additionally, for a special case, we construct a one-step-ahead predictor based on the form of the score function and show that it has a smaller prediction error than the classical exponentially weighted linear predictor. Numerical studies illustrate some interesting features of the local Whittle estimator.
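For reference, a standard form of the Whittle likelihood that approaches of this kind localize by kernel-weighting around a target frequency is sketched below (the notation is assumed here, not taken from the paper; constants vary by convention):

```latex
% Whittle (negative log-)likelihood for a candidate spectral density f_theta,
% with I_n the periodogram; a local version replaces the integral by a
% kernel-weighted average around a target frequency.
L_W(\theta) \;=\; \frac{1}{4\pi}\int_{-\pi}^{\pi}
  \left\{ \log f_\theta(\lambda) + \frac{I_n(\lambda)}{f_\theta(\lambda)} \right\} d\lambda,
\qquad
I_n(\lambda) \;=\; \frac{1}{2\pi n}\Big|\sum_{t=1}^{n} X_t e^{-\mathrm{i}t\lambda}\Big|^2 .
```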

2.
This paper studies the estimation of the correlation coefficient between unobserved variables of interest when these variables are distorted in an additive fashion by an observed confounding variable. Two estimators, a direct plug-in estimator and a residual-based estimator, are proposed, their asymptotic properties are obtained, and the residual-based estimator is shown to be asymptotically efficient. Moreover, we suggest an asymptotic normal approximation and an empirical likelihood-based statistic for constructing confidence intervals; the empirical likelihood statistic is shown to be asymptotically chi-squared. Simulation studies are conducted to examine the performance of the proposed estimators, and the methods are applied to the Boston housing price data for illustration.

3.
Various nonparametric approaches for Bayesian spectral density estimation of stationary time series have been suggested in the literature, mostly based on the Whittle likelihood approximation. A generalization of this approximation involving a nonparametric correction of a parametric likelihood has been proposed in the literature with a proof of posterior consistency for spectral density estimation in combination with the Bernstein–Dirichlet process prior for Gaussian time series. In this article, we extend the posterior consistency result to non-Gaussian time series by employing a general consistency theorem for dependent data and misspecified models. As a special case, posterior consistency for the spectral density under the Whittle likelihood is also extended to non-Gaussian time series. Small-sample properties of this approach are illustrated with several examples of non-Gaussian time series.

4.
We develop the empirical likelihood approach for a class of vector-valued, not necessarily Gaussian, stationary processes with unknown parameters. In time series analysis, the Whittle likelihood is one of the most fundamental tools for obtaining a good estimator of unknown parameters, and its score functions are asymptotically normal. Motivated by the Whittle likelihood, we apply the empirical likelihood approach to its derivative with respect to the unknown parameters. We also consider the empirical likelihood approach to minimum contrast estimation based on a spectral disparity measure, applying it to the derivative of the spectral disparity. This paper provides rigorous proofs that our two empirical likelihood ratio statistics converge in distribution to sums of gamma-distributed random variables. Because the fitted spectral model may differ from the true spectral structure, these results enable us to construct confidence regions for various important time series parameters without assuming a specified spectral structure or Gaussianity of the process.
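As a rough sketch of the frequency-domain empirical likelihood construction described here (notation assumed, not from the paper): with an estimating function m(λ_j; θ) taken as the derivative of the Whittle likelihood, or of a spectral disparity, at the Fourier frequencies, the empirical likelihood ratio has the usual constrained-maximization form:

```latex
% Empirical likelihood ratio based on an estimating function m at the
% Fourier frequencies lambda_j; m is, e.g., the theta-derivative of
% log f_theta(lambda_j) + I_n(lambda_j)/f_theta(lambda_j).
R(\theta) \;=\; \max\Big\{ \prod_{j=1}^{n} n w_j \;:\;
  w_j \ge 0,\; \sum_{j=1}^{n} w_j = 1,\;
  \sum_{j=1}^{n} w_j\, m(\lambda_j;\theta) = 0 \Big\}.
```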

5.
Pao-sheng Shen, Statistics, 2013, 47(2): 315–326
In this article, we consider nonparametric estimation of the survival function when the data are subject to left truncation and right censoring and the sample size before truncation is known. We propose two estimators. The first is derived from a self-consistent estimating equation; the second is obtained using the constrained expectation-maximization algorithm. Simulation results indicate that both estimators are more efficient than the product-limit estimator. When there is no censoring, the performance of the proposed estimators is compared, via a simulation study, with that of the estimator proposed by Li and Qin [Semiparametric likelihood-based inference for biased and truncated data when total sample size is known, J. R. Stat. Soc. B 60 (1998), pp. 243–254].
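For context, the product-limit benchmark referred to here takes the usual form below (notation assumed; under left truncation the risk set counts subjects whose truncation time has passed and who have not yet failed or been censored):

```latex
% Product-limit estimator of the survival function: d(s) is the number of
% observed failures at time s and R(s) the size of the risk set at s.
\hat S(t) \;=\; \prod_{s \le t} \Big\{ 1 - \frac{d(s)}{R(s)} \Big\}.
```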

6.
Empirical Likelihood-based Inference in Linear Models with Missing Data
The missing response problem in linear regression is studied. An adjusted empirical likelihood approach to inference on the mean of the response variable is developed. A nonparametric version of Wilks's theorem for the adjusted empirical likelihood is proved, and the corresponding empirical likelihood confidence interval for the mean is constructed. With auxiliary information, an empirical likelihood-based estimator with asymptotic normality is defined and an adjusted empirical log-likelihood function with an asymptotic χ2 distribution is derived. A simulation study is conducted to compare the adjusted empirical likelihood methods with normal approximation methods in terms of coverage accuracy and average length of the confidence intervals. Based on biases and standard errors, the empirical likelihood-based estimator is also compared with related estimators by simulation. Our simulations indicate that the adjusted empirical likelihood methods perform competitively and that the use of auxiliary information provides improved inference.

7.
We analyze by simulation the properties of two time-domain and two frequency-domain estimators for low-order autoregressive fractionally integrated moving-average Gaussian models, ARFIMA(p,d,q). The estimators considered are the exact maximum likelihood for demeaned data (EML), the associated modified profile likelihood (MPL), and the Whittle estimator with (WLT) and without (WL) tapered data. The series length is 100. The estimators are compared in terms of pile-up effect, mean squared error, bias, and empirical confidence level. The tapered version of the Whittle likelihood turns out to be a reliable estimator for ARMA and ARFIMA models: its small performance losses for 'well-behaved' models are sufficiently compensated in more 'difficult' models. The modified profile likelihood is an alternative to the WLT but is computationally more demanding; it is either equivalent to or more favorable than the EML, and for fractionally integrated models in particular it clearly dominates the EML. The WL has serious deficiencies over large ranges of parameters and so cannot be recommended in general. The EML, on the other hand, should be used with care for fractionally integrated models because of its potentially large negative bias in the fractional integration parameter. In general, one should proceed with caution for ARMA(1,1) models with nearly canceling roots and, in the case of the EML and the MPL, for inference in the vicinity of a moving-average root of +1.

8.
A frequency domain bootstrap (FDB) is a common technique for applying Efron's independent and identically distributed resampling technique (Efron, 1979) to periodogram ordinates – especially normalized periodogram ordinates – by using spectral density estimates. The FDB method is applicable to several classes of statistics, such as estimators of the normalized spectral mean, the autocorrelation (but not the autocovariance), the normalized spectral density function, and Whittle parameters. While the FDB method has been studied extensively for short-range dependent processes, there is a dearth of research on its use with long-range dependent processes. We therefore propose an FDB methodology for ratio statistics under long-range dependence, using semi- and nonparametric spectral density estimates as a normalizing factor. It is shown that the FDB approximation allows valid distribution estimation for a broad class of stationary, long-range (or short-range) dependent linear processes, without stringent assumptions on the distribution of the underlying process. The results of a large simulation study show that the FDB approximation using a semi- or nonparametric spectral density estimator is often robust across values of the long-memory parameter reflecting the magnitude of dependence. We apply the proposed procedure to two data examples.
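A minimal sketch of the basic FDB mechanics is given below; the names `fdb_resample` and `f_hat` are illustrative assumptions, not from the paper, whose treatment of ratio statistics under long-range dependence is more involved:

```python
import numpy as np

def fdb_resample(x, f_hat, n_boot=500, seed=None):
    """Sketch of a frequency-domain bootstrap: resample the studentized
    periodogram ordinates I(lambda_j)/f_hat(lambda_j) i.i.d. with replacement
    and rescale them by the spectral density estimate f_hat."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = (n - 1) // 2                              # Fourier frequencies 0 < lambda_j < pi
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())
    pgram = np.abs(dft[1:m + 1]) ** 2 / (2 * np.pi * n)
    ratios = pgram / f_hat(freqs)                 # approximately i.i.d. under the model
    ratios = ratios / ratios.mean()               # rescale to mean one
    boot = f_hat(freqs) * rng.choice(ratios, size=(n_boot, m), replace=True)
    return freqs, boot                            # bootstrap periodogram ordinates
```

Bootstrap versions of a statistic are then recomputed from each row of the resampled periodogram ordinates.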

9.
We consider Whittle likelihood estimation of seasonal autoregressive fractionally integrated moving-average models in the presence of an additional measurement error and show that the spectral maximum Whittle likelihood estimator is asymptotically normal. We illustrate by simulation that ignoring measurement errors may result in incorrect inference; hence it is pertinent to test for the presence of measurement errors, which we do by developing a likelihood ratio (LR) test within the Whittle likelihood framework. We derive the non-standard asymptotic null distribution of this LR test and its limiting distribution under a sequence of local alternatives. Because in practice the order of the seasonal autoregressive fractionally integrated moving-average model is unknown, we consider three modifications of the LR test that take model uncertainty into account. We study the finite-sample size and power of the LR test and its modifications. The efficacy of the proposed approach is illustrated by a real-life example.

10.
The geographical relative risk function is a useful tool for investigating the spatial distribution of disease based on case and control data. The most common way of estimating this function is as the ratio of bivariate kernel density estimates constructed from the locations of cases and controls, respectively. An alternative is a local-linear (LL) estimator of the log-relative-risk function. In both cases, the choice of bandwidth is critical. In this article, we examine the relative performance of the two estimation techniques under a variety of data-driven bandwidth selection methods, including likelihood cross-validation (CV), least-squares CV, rule-of-thumb reference methods, and a new approximate plug-in (PI) bandwidth for the LL estimator. Our analysis includes a comparison of asymptotic results, a simulation study, and application of the estimators to two real data sets. Our findings suggest that the density-ratio method implemented with the least-squares CV bandwidth selector is generally best, with the LL estimator with PI bandwidth being competitive in applications with strong large-scale trends but much worse in situations with elliptical clusters.
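A minimal sketch of the density-ratio estimator of the log-relative-risk surface; the function name and the use of `scipy.stats.gaussian_kde` with its default Scott bandwidth are illustrative assumptions, and the paper's point is precisely that the bandwidth selector matters:

```python
import numpy as np
from scipy.stats import gaussian_kde

def log_relative_risk(cases, controls, grid):
    """Log relative risk as the log-ratio of bivariate kernel density
    estimates; `cases`, `controls`, `grid` are (2, n) arrays of coordinates."""
    f_case = gaussian_kde(cases)      # default Scott bandwidth as a placeholder
    f_ctrl = gaussian_kde(controls)
    return np.log(f_case(grid)) - np.log(f_ctrl(grid))
```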

11.
This paper considers the problem of selecting optimal bandwidths for variable (sample‐point adaptive) kernel density estimation. A data‐driven variable bandwidth selector is proposed, based on the idea of approximating the log‐bandwidth function by a cubic spline. This cubic spline is optimized with respect to a cross‐validation criterion. The proposed method can be interpreted as a selector for either integrated squared error (ISE) or mean integrated squared error (MISE) optimal bandwidths. This leads to reflection upon some of the differences between ISE and MISE as error criteria for variable kernel estimation. Results from simulation studies indicate that the proposed method outperforms a fixed kernel estimator (in terms of ISE) when the target density has a combination of sharp modes and regions of smooth undulation. Moreover, some detailed data analyses suggest that the gains in ISE may understate the improvements in visual appeal obtained using the proposed variable kernel estimator. These numerical studies also show that the proposed estimator outperforms existing variable kernel density estimators implemented using piecewise constant bandwidth functions.
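For reference, a least-squares cross-validation criterion of the kind being optimized here can be written as follows, with the bandwidth function parameterized through a cubic spline s(·) for the log-bandwidth (notation assumed, not taken from the paper):

```latex
% Least-squares cross-validation for a variable-bandwidth kernel estimator
% with h(x) = exp{s(x)}, s a cubic spline; \hat f_{h,-i} omits observation i.
\mathrm{CV}(s) \;=\; \int \hat f_{h}(x)^2\,dx \;-\; \frac{2}{n}\sum_{i=1}^{n} \hat f_{h,-i}(X_i),
\qquad h(x) = \exp\{s(x)\}.
```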

12.
Bandwidth plays an important role in determining the performance of nonparametric estimators such as the local constant estimator. In this article, we propose a Bayesian approach to bandwidth estimation for local constant estimators of time-varying coefficients in time series models. We establish a large-sample theory for the proposed bandwidth estimator and for Bayesian estimators of the unknown parameters of the error density. A Monte Carlo simulation study shows that (i) the proposed Bayesian estimators of the bandwidth and of the error-density parameters have satisfactory finite-sample performance; and (ii) the proposed Bayesian approach estimates the bandwidths better than the normal reference rule and cross-validation. Moreover, we apply the proposed Bayesian bandwidth estimation method to time-varying coefficient models that explain Okun's law and the relationship between consumption growth and income growth in the U.S. For each model, we also provide calibrated parametric forms of the time-varying coefficients. Supplementary materials for this article are available online.
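The object whose bandwidth h is being selected is the standard local constant (Nadaraya–Watson-type) estimator of a time-varying coefficient, which in matrix form can be written as below (notation assumed):

```latex
% Local constant estimator of beta(tau) in y_t = x_t' beta(t/n) + e_t,
% with kernel K and bandwidth h (the quantity estimated in a Bayesian way).
\hat\beta(\tau) \;=\;
  \Big\{ \sum_{t=1}^{n} K\!\Big(\frac{t/n-\tau}{h}\Big) x_t x_t' \Big\}^{-1}
  \sum_{t=1}^{n} K\!\Big(\frac{t/n-\tau}{h}\Big) x_t y_t .
```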

13.
Yu-Ye Zou, Statistics, 2017, 51(6): 1214–1237
In this paper, we define a nonlinear wavelet estimator of the density for the right-censoring model with censoring indicators missing at random (MAR) and derive the asymptotic expression of its mean integrated squared error (MISE). Unlike for kernel estimators, the MISE expression is not affected by the presence of discontinuities in the curve. Asymptotic normality of the estimator is also established. The proposed estimator reduces to the estimator of Li [Non-linear wavelet-based density estimators under random censorship. J Statist Plann Inference. 2003;117(1):35–58] when no censoring indicators are missing and the bandwidth used in the nonparametric estimation tends to zero. We also define two further nonlinear wavelet estimators of the density. A simulation study illustrates the performance of the three proposed estimators.

14.
In this article, we propose a nonparametric estimator, based on the kernel density method, for percentiles of the time-to-failure distribution obtained from a linear degradation model. The properties of the proposed kernel estimator are investigated and compared with those of the well-known maximum likelihood and ordinary least squares estimators via simulation. The mean squared error and the length of the bootstrap confidence interval are used as the comparison criteria. The simulation study shows that the performance of the kernel estimator is acceptable as a general estimator: when the distribution of the data is assumed known, the maximum likelihood and ordinary least squares estimators perform better than the kernel estimator, whereas the kernel estimator is superior when that distributional assumption is violated. The estimators are also compared on a real data set.

15.
If the power spectral density of a continuous-time stationary stochastic process is not limited to a finite bandwidth, data sampled from that process at any fixed uniform sampling rate lead to biased and inconsistent spectrum estimators, which are unsuitable for constructing confidence intervals. In this paper, we use the smoothed periodogram estimator to construct asymptotic confidence intervals shrinking to the true spectrum, by allowing the sampling rate to go to infinity suitably fast as the sample size goes to infinity. The proposed method requires minimal computation, as it does not involve bootstrap or other resampling. The method is illustrated through a Monte Carlo simulation study, and its performance is compared with that of the corresponding method based on uniform sampling at a fixed rate.
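For reference, the smoothed (Daniell-type) periodogram estimator referred to here averages periodogram ordinates over neighbouring Fourier frequencies, as sketched below (notation assumed; the paper additionally lets the sampling rate grow with the sample size):

```latex
% Smoothed periodogram: average of 2 m_n + 1 periodogram ordinates around
% the Fourier frequency closest to lambda.
\hat f_n(\lambda) \;=\; \frac{1}{2m_n+1}\sum_{|k|\le m_n} I_n\big(\lambda_{j(\lambda)+k}\big),
\qquad
I_n(\lambda_j) \;=\; \frac{1}{2\pi n}\Big|\sum_{t=1}^{n} X_t e^{-\mathrm{i}t\lambda_j}\Big|^2 .
```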

16.
This paper develops a likelihood-based inference procedure for continuous-time capture-recapture models. The first-capture and recapture intensities are assumed to be in constant proportion but may otherwise vary arbitrarily through time. The full likelihood is partitioned into two factors, one of which is analogous to the likelihood in a special type of multiplicative intensity model arising in failure time analysis. The remaining factor is free of the non-parametric nuisance parameter and is easily maximized. This factor provides an estimator of population size and an asymptotic variance under a counting process framework. The resulting estimation procedure is shown to be equivalent to that derived from a martingale-based estimating function approach. Simulation results are presented to examine the performance of the proposed estimators.

17.
The circulant embedding method for generating statistically exact simulations of time series from certain Gaussian distributed stationary processes is attractive because of its advantage in computational speed over a competitive method based upon the modified Cholesky decomposition. We demonstrate that the circulant embedding method can be used to generate simulations from stationary processes whose spectral density functions are dictated by a number of popular nonparametric estimators, including all direct spectral estimators (a special case being the periodogram), certain lag window spectral estimators, all forms of Welch's overlapped segment averaging spectral estimator and all basic multitaper spectral estimators. One application for this technique is to generate time series for bootstrapping various statistics. When used with bootstrapping, our proposed technique avoids some – but not all – of the pitfalls of previously proposed frequency domain methods for simulating time series.
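A minimal sketch of the core circulant-embedding step for a zero-mean Gaussian series, starting from an autocovariance sequence; the paper instead drives the construction from nonparametric spectral estimates, and the function name and the nonnegative-definite-embedding assumption are mine:

```python
import numpy as np

def circulant_embedding(acvf, seed=None):
    """Sketch of circulant embedding: simulate two independent zero-mean
    Gaussian series of length n with autocovariances acvf = (r_0,...,r_{n-1}),
    assuming the circulant embedding is nonnegative definite."""
    rng = np.random.default_rng(seed)
    acvf = np.asarray(acvf, dtype=float)
    n = len(acvf)
    row = np.concatenate([acvf, acvf[-2:0:-1]])   # first row of the circulant matrix
    m = len(row)                                  # m = 2(n - 1)
    lam = np.fft.fft(row).real                    # circulant eigenvalues
    if lam.min() < -1e-10 * np.abs(lam).max():
        raise ValueError("embedding not nonnegative definite; enlarge the padding")
    lam = np.clip(lam, 0.0, None)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    w = np.fft.fft(np.sqrt(lam / m) * z)
    return w.real[:n], w.imag[:n]                 # two independent exact simulations
```

The FFT-based diagonalization of the circulant matrix is what gives the speed advantage over a Cholesky factorization of the Toeplitz covariance matrix.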

18.
A crucial problem in kernel density estimation of a probability density function is the selection of the bandwidth. The aim of this study is to propose a procedure for selecting both fixed and variable bandwidths. The study also addresses the question of how different variable-bandwidth kernel estimators perform in comparison with each other and with fixed-bandwidth estimators. Algorithms for implementing the proposed method are given, along with a numerical simulation. The numerical results serve as a guide to determine which bandwidth selection method is most appropriate for a given type of estimator over a wide class of probability density functions. We also obtain a numerical comparison of the different types of kernel estimators under the various types of bandwidths.

19.
Semiparametric estimators of the time-varying long-memory parameter are investigated for locally stationary long-memory processes. The GPH estimator and the local Whittle estimator are considered. Under mild regularity assumptions, weak consistency and asymptotic normality of the estimators are obtained. Their finite-sample performance is examined in a small simulation study.
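A minimal sketch of the (global) GPH log-periodogram regression; the locally stationary version studied in the paper would apply a regression of this kind over local segments, and the bandwidth rule m = sqrt(n) used here is an illustrative choice:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Sketch of the GPH estimator of the long-memory parameter d:
    regress the log-periodogram on log{4 sin^2(lambda_j / 2)} at the
    first m Fourier frequencies; d-hat is minus the OLS slope."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))                        # illustrative bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n      # Fourier frequencies
    dft = np.fft.fft(x - x.mean())
    pgram = np.abs(dft[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = np.log(4 * np.sin(lam / 2) ** 2)
    slope, _ = np.polyfit(regressor, np.log(pgram), 1)
    return -slope
```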

20.
In this paper we consider logspline density estimation for data that may be left-truncated or right-censored. For randomly left-truncated and right-censored data, the product-limit estimator is known to be a consistent estimator of the survivor function, with a faster rate of convergence than many density estimators. The product-limit estimator and B-splines are used to construct the logspline density estimate for possibly censored or truncated data. Rates of convergence are established when the log-density function is assumed to lie in a Besov space. An algorithm involving a procedure similar to maximum likelihood, stepwise knot addition, and stepwise knot deletion is proposed for estimating the density function from sample data. Numerical examples are used to show the finite-sample performance of inference based on the logspline density estimate.
