Similar Documents
20 similar documents retrieved.
1.
The seasonal fractional ARIMA (ARFISMA) model with infinite-variance innovations is used in the analysis of seasonal long-memory time series with large fluctuations (heavy-tailed distributions). Two methods are proposed to estimate the parameters of the stable ARFISMA model: the empirical characteristic function (ECF) procedure developed by Knight and Yu [The empirical characteristic function in time series estimation. Econometric Theory. 2002;18:691–721], and a two-step method (TSM). The ECF method estimates all the parameters simultaneously, while the TSM applies in the first step the Markov chain Monte Carlo-Whittle approach introduced by Ndongo et al. [Estimation of long-memory parameters for seasonal fractional ARIMA with stable innovations. Stat Methodol. 2010;7:141–151], combined in the second step with the maximum likelihood estimation method developed by Alvarez and Olivares [Méthodes d'estimation pour des lois stables avec des applications en finance. Journal de la Société Française de Statistique. 2005;1(4):23–54]. Monte Carlo simulations are used to evaluate the finite-sample performance of these estimation techniques.
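The stable innovations driving such a model can be simulated with the standard Chambers-Mallows-Stuck transformation. The sketch below is illustrative only (symmetric case, beta = 0) and is not taken from the paper; with alpha = 2 it reduces to a N(0, 2) draw, which gives a quick sanity check.

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Symmetric alpha-stable variates via the Chambers-Mallows-Stuck
    transformation (beta = 0 case)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    if alpha == 1.0:
        return np.tan(U)  # standard Cauchy
    return (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(0)
z = sym_stable(2.0, 200_000, rng)  # alpha = 2 recovers N(0, 2)
```

For alpha < 2 the draws have infinite variance, which is exactly the heavy-tailed regime the abstract refers to.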

2.
We consider a k-GARMA generalization of the long-memory stochastic volatility model, discuss the properties of the model, and propose a wavelet-based Whittle estimator for its parameters, whose consistency is shown. Monte Carlo experiments show that the small-sample properties are essentially indistinguishable from those of the Whittle estimator, and compare favorably with those of a wavelet-based approximate maximum likelihood estimator. An application is given for Microsoft Corporation stock, modeling the intraday seasonal patterns of its realized volatility.

3.
In this paper we compare through Monte Carlo simulations the finite sample properties of estimators of the fractional differencing parameter, d. This involves frequency domain, time domain, and wavelet based approaches, and we consider both parametric and semiparametric estimation methods. The estimators are briefly introduced and compared, and the criteria adopted for measuring finite sample performance are bias and root mean squared error. Most importantly, the simulations reveal that (1) the frequency domain maximum likelihood procedure is superior to the time domain parametric methods, (2) all the estimators are fairly robust to conditionally heteroscedastic errors, (3) the local polynomial Whittle and bias-reduced log-periodogram regression estimators are shown to be more robust to short-run dynamics than other semiparametric (frequency domain and wavelet) estimators and in some cases even outperform the time domain parametric methods, and (4) without sufficient trimming of scales the wavelet-based estimators are heavily biased.
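As a concrete instance of the semiparametric frequency-domain approach, a minimal log-periodogram (GPH-style) regression estimator of d can be sketched as follows. This is an illustrative sketch only; the bandwidth m below is a hypothetical choice, not the one used in the simulations.

```python
import numpy as np

def gph(x, m):
    """Log-periodogram (GPH) regression estimate of the fractional
    differencing parameter d, using the first m Fourier frequencies."""
    n = len(x)
    fx = np.fft.fft(x - x.mean())
    I = np.abs(fx[1:m + 1]) ** 2 / (2 * np.pi * n)   # periodogram
    lam = 2 * np.pi * np.arange(1, m + 1) / n        # Fourier frequencies
    reg = -2 * np.log(2 * np.sin(lam / 2))           # regressor; slope = d
    rc = reg - reg.mean()
    return np.sum(rc * np.log(I)) / np.sum(rc ** 2)

rng = np.random.default_rng(1)
d_hat = gph(rng.standard_normal(8192), m=120)  # white noise: true d = 0
```

Running it on white noise, as here, should return an estimate near zero; the bias issues discussed in the abstract arise once short-run dynamics are added.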

5.
This paper considers semiparametric estimation of the memory parameter in a cyclical long-memory time series, which exhibits strong cyclical dependence, using the Whittle likelihood based on generalised exponential (GEXP) models. The proposed estimator belongs to the class of so-called broadband or global methods and uses information from the spectral density at all frequencies. We establish the consistency and asymptotic normality of the estimated memory parameter for a linear process, and thus do not require Gaussianity. A Monte Carlo simulation study shows that the proposed estimator works well compared to existing semiparametric estimators. Moreover, we apply the estimator to the growth rate of Japan's industrial production index and detect its cyclical persistence.

6.
Importance sampling and Markov chain Monte Carlo methods have long been used in exact inference for contingency tables; however, their performance is not always satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS combines adaptive Markov chain Monte Carlo and importance sampling, employing the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305–320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has several advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing methods: it produces much more accurate estimates in much shorter CPU time, especially for tables with high degrees of freedom.

7.
Two new implementations of the EM algorithm are proposed for maximum likelihood fitting of generalized linear mixed models. Both methods use random (independent and identically distributed) sampling to construct Monte Carlo approximations at the E-step. One approach involves generating random samples from the exact conditional distribution of the random effects (given the data) by rejection sampling, using the marginal distribution as a candidate. The second method uses a multivariate t importance sampling approximation. In many applications the two methods are complementary. Rejection sampling is more efficient when sample sizes are small, whereas importance sampling is better with larger sample sizes. Monte Carlo approximation using random samples allows the Monte Carlo error at each iteration to be assessed by using standard central limit theory combined with Taylor series methods. Specifically, we construct a sandwich variance estimate for the maximizer at each approximate E-step. This suggests a rule for automatically increasing the Monte Carlo sample size after iterations in which the true EM step is swamped by Monte Carlo error. In contrast, techniques for assessing Monte Carlo error have not been developed for use with alternative implementations of Monte Carlo EM algorithms utilizing Markov chain Monte Carlo E-step approximations. Three different data sets, including the infamous salamander data of McCullagh and Nelder, are used to illustrate the techniques and to compare them with the alternatives. The results show that the methods proposed can be considerably more efficient than those based on Markov chain Monte Carlo algorithms. However, the methods proposed may break down when the intractable integrals in the likelihood function are of high dimension.
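The first approach can be illustrated on a toy cluster with a single standard normal random intercept. For binary data the conditional likelihood L(y | b) is bounded by 1, so drawing b from its marginal distribution and accepting with probability L(y | b) is a valid rejection sampler for p(b | y). This is a hedged sketch under a hypothetical logistic random-intercept model, not the paper's models.

```python
import numpy as np

def estep_rejection(y, n_draws, rng):
    """Monte Carlo E-step: draw from p(b | y) by rejection sampling,
    using the N(0, 1) random-effect distribution as the candidate.
    For binary y the likelihood L(y | b) <= 1, so L itself serves
    as the acceptance probability."""
    draws = []
    while len(draws) < n_draws:
        b = rng.standard_normal()                  # candidate from the marginal
        p = 1.0 / (1.0 + np.exp(-b))               # logistic success probability
        lik = np.prod(np.where(y == 1, p, 1 - p))  # L(y | b), bounded by 1
        if rng.uniform() < lik:
            draws.append(b)
    return np.array(draws)

rng = np.random.default_rng(2)
y = np.ones(5)                       # a cluster of five successes
b_post = estep_rejection(y, 2000, rng)
post_mean = b_post.mean()            # conditional expectation for the E-step
```

With five successes the posterior mass of b shifts well above zero, which the accepted draws reflect; the sample mean is the Monte Carlo approximation the E-step would use.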

8.
There has recently been growing interest in modeling and estimating alternative continuous time multivariate stochastic volatility models. We propose a continuous time fractionally integrated Wishart stochastic volatility (FIWSV) process, and derive the conditional Laplace transform of the FIWSV model in order to obtain a closed form expression of moments. A two-step procedure is used, namely estimating the parameter of fractional integration via the local Whittle estimator in the first step, and estimating the remaining parameters via the generalized method of moments in the second step. Monte Carlo results for the procedure show reasonable performance in finite samples. The empirical results for the S&P 500 and FTSE 100 indexes show that the data favor the new FIWSV process over the one-factor and two-factor models of the Wishart autoregressive process for the covariance structure.

9.
We present a versatile Monte Carlo method for estimating multidimensional integrals, with applications to rare-event probability estimation. The method fuses two distinct and popular Monte Carlo simulation methods, Markov chain Monte Carlo and importance sampling, into a single algorithm. We show that for some applied numerical examples the proposed Markov chain importance sampling algorithm performs better than methods based solely on importance sampling or MCMC.
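The importance-sampling ingredient alone already illustrates the rare-event setting. The sketch below is not the paper's fused algorithm; it estimates P(Z > 4) for a standard normal Z, a probability naive Monte Carlo essentially never observes, by sampling from a proposal shifted into the tail.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
# Rare event: p = P(Z > 4), Z ~ N(0, 1); true value is about 3.167e-5.
x = rng.standard_normal(N) + 4.0               # proposal q = N(4, 1)
w = np.exp(-0.5 * x**2 + 0.5 * (x - 4.0)**2)   # density ratio phi(x) / q(x)
p_hat = np.mean((x > 4.0) * w)                 # importance sampling estimate
```

Because the proposal concentrates where the event happens, the relative error is a fraction of a percent at this sample size, versus essentially 100% for naive sampling.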

10.
In the expectation-maximization (EM) algorithm for maximum likelihood estimation from incomplete data, Markov chain Monte Carlo (MCMC) methods have long been used in change-point inference when the expectation step is intractable. However, conventional MCMC algorithms tend to get trapped in local modes when simulating from the posterior distribution of the change points. To overcome this problem, in this paper we propose a stochastic approximation Monte Carlo version of EM (SAMCEM), which combines adaptive MCMC with maximum likelihood EM. SAMCEM is compared with the stochastic approximation version of EM and the reversible jump MCMC version of EM on simulated and real datasets. The numerical results indicate that SAMCEM outperforms the other two methods, producing much more accurate parameter estimates and recovering change-point positions and parameter estimates simultaneously.

11.
We present a simulation method which is based on discretization of the state space of the target distribution (or some of its components) followed by proper weighting of the simulated output. The method can be used in order to simplify certain Monte Carlo and Markov chain Monte Carlo algorithms. Its main advantage is that the autocorrelations of the weighted output almost vanish and therefore standard methods for iid samples can be used for estimating the Monte Carlo standard errors. We illustrate the method via toy examples as well as the well-known dugongs and Challenger datasets.
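The discretize-and-weight idea can be seen in its simplest form on a one-dimensional target: evaluate the unnormalized density on a grid and use self-normalized weights, so expectations become weighted averages with no autocorrelation at all. A minimal sketch for a Beta(2, 5) target (this toy reduces the idea to deterministic grid weighting and omits the simulation component of the paper's method):

```python
import numpy as np

# Discretize [0, 1] into 1000 cells and weight each midpoint by the
# unnormalized Beta(2, 5) density; weighted averages then approximate
# expectations under the target.
grid = np.linspace(0.0005, 0.9995, 1000)   # cell midpoints, spacing 0.001
w = grid * (1.0 - grid) ** 4               # pi(x) up to a normalizing constant
w = w / w.sum()                            # self-normalized weights
mean_hat = np.sum(w * grid)                # E[X] = 2/7 for Beta(2, 5)
```

The weighted output behaves like an iid sample for error assessment, which is the advantage the abstract highlights.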

12.
This article suggests Monte Carlo multiple test procedures which are provably valid in finite samples. These include combination methods originally proposed for independent statistics and further improvements which formalize statistical practice. We also adapt the Monte Carlo test method to noncontinuous combined statistics. The methods suggested are applied to test serial dependence and predictability. In particular, we introduce and analyze new procedures that account for endogenous lag selection. A simulation study illustrates the properties of the proposed methods. Results show that concrete and nonspurious power gains (over standard combination methods) can be achieved through the combined Monte Carlo test approach, and confirm arguments in favor of variance-ratio type criteria.
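A basic building block of such procedures is the Monte Carlo p-value, which ranks the observed statistic among statistics simulated under the null. A minimal sketch on a toy location test (not the article's serial-dependence statistics):

```python
import numpy as np

def mc_pvalue(stat_obs, sim_stats):
    """Monte Carlo p-value: rank the observed statistic among B
    statistics simulated under the null hypothesis."""
    B = len(sim_stats)
    return (1 + np.sum(sim_stats >= stat_obs)) / (B + 1)

rng = np.random.default_rng(4)
n, B = 20, 999
y = rng.standard_normal(n) + 1.5        # data with a genuine location shift
t_obs = abs(y.mean())                   # test statistic
t_sim = np.abs(rng.standard_normal((B, n)).mean(axis=1))  # null replicates
p = mc_pvalue(t_obs, t_sim)
```

With B + 1 = 1000 the resulting test is exact at conventional levels in finite samples, which is the validity property the article builds on.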

13.
Monte Carlo methods for exact inference have received much attention recently in complete or incomplete contingency table analysis. However, conventional Markov chain Monte Carlo methods, such as the Metropolis-Hastings algorithm, and importance sampling methods sometimes perform poorly because they fail to produce valid tables. In this paper, we apply an adaptive Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm (SAMC; Liang, Liu, & Carroll, 2007), to exact goodness-of-fit tests in complete or incomplete contingency tables containing structural zero cells. The numerical results favor our method in terms of the quality of the estimates.

14.
In this article, we propose to evaluate and compare Markov chain Monte Carlo (MCMC) methods to estimate the parameters in a generalized extreme value model. We employed the Bayesian approach using traditional Metropolis-Hastings methods, Hamiltonian Monte Carlo (HMC), and Riemann manifold HMC (RMHMC) methods to obtain the approximations to the posterior marginal distributions of interest. Applications to real datasets and simulation studies provide evidence that the extra analytical work involved in Hamiltonian Monte Carlo algorithms is compensated by a more efficient exploration of the parameter space.
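For reference, the simplest of the samplers compared, a random-walk Metropolis-Hastings algorithm, can be sketched on a toy standard normal target; the GEV likelihood and the HMC/RMHMC machinery are omitted here.

```python
import numpy as np

def rw_metropolis(logpi, x0, n_iter, step, rng):
    """Random-walk Metropolis-Hastings with Gaussian proposals:
    accept a move with probability min(1, pi(y) / pi(x))."""
    x, out = x0, np.empty(n_iter)
    lp = logpi(x)
    for i in range(n_iter):
        y = x + step * rng.standard_normal()
        lp_y = logpi(y)
        if np.log(rng.uniform()) < lp_y - lp:   # accept
            x, lp = y, lp_y
        out[i] = x                              # else keep current state
    return out

rng = np.random.default_rng(5)
draws = rw_metropolis(lambda x: -0.5 * x * x, 0.0, 50_000, 2.4, rng)
```

The step size 2.4 is a hypothetical tuning choice; HMC and RMHMC replace the blind random walk with gradient-guided trajectories, which is the source of the efficiency gains the abstract reports.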

15.
This article extends the steady-state ranked simulated sampling approach (SRSIS) of Al-Saleh and Samawi (2000) for improving Monte Carlo methods from single-integration problems to multiple-integration problems. We demonstrate that this approach provides unbiased estimators and substantially improves the performance of some Monte Carlo methods for bivariate integral approximations, and that it extends to approximations of multiple integrals. This results in a significant reduction in the cost and time required to attain a given level of accuracy. To compare the performance of our method with that of Samawi and Al-Saleh (2007), we use the same two illustrations for the bivariate case.

16.
The expectation-maximization (EM) algorithm is a popular tool for maximizing likelihood functions in the presence of missing data. Unfortunately, EM often requires the evaluation of analytically intractable and high dimensional integrals. The Monte Carlo EM (MCEM) algorithm is the natural extension of EM that employs Monte Carlo methods to estimate the relevant integrals. Typically, a very large Monte Carlo sample size is required to estimate these integrals within an acceptable tolerance when the algorithm is near convergence. Even if this sample size were known at the onset of implementation of MCEM, its use throughout all iterations is wasteful, especially when accurate starting values are not available. We propose a data-driven strategy for controlling Monte Carlo resources in MCEM. The algorithm proposed improves on similar existing methods by recovering EM's ascent (i.e. likelihood increasing) property with high probability, being more robust to the effect of user-defined inputs and handling classical Monte Carlo and Markov chain Monte Carlo methods within a common framework. Because of the first of these properties we refer to the algorithm as 'ascent-based MCEM'. We apply ascent-based MCEM to a variety of examples, including one where it is used to accelerate the convergence of deterministic EM dramatically.

17.
A global sensitivity analysis of complex computer codes is usually performed by calculating the Sobol indices, which are estimated using Monte Carlo methods. The Monte Carlo simulations are time-consuming even if the computer response is replaced by a metamodel. This paper proposes a new method for calculating sensitivity indices that avoids Monte Carlo estimation. The method assumes a discretization of the simulation domain and uses the expansion of the computer response on an orthogonal basis of complex functions to build a metamodel. This metamodel is then used to derive an analytical estimate of the Sobol indices. The approach is successfully tested on analytical functions and compared with two alternative methods.
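The Monte Carlo baseline that the paper seeks to avoid is typically a pick-freeze estimator of the first-order Sobol indices. A minimal sketch on a toy additive function with known indices (here the true S_1 is 0.2); the function and sample size are hypothetical illustrations:

```python
import numpy as np

def sobol_first_order(f, d, i, N, rng):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol
    index S_i of f on the d-dimensional unit hypercube."""
    A = rng.uniform(size=(N, d))
    B = rng.uniform(size=(N, d))
    AB = B.copy()
    AB[:, i] = A[:, i]                 # "freeze" coordinate i
    yA, yAB = f(A), f(AB)
    cov = np.mean(yA * yAB) - yA.mean() * yAB.mean()
    return cov / yA.var()              # Cov(yA, yAB) / Var(f)

f = lambda X: X[:, 0] + 2.0 * X[:, 1]  # Var = 1/12 + 4/12; S_1 = 0.2
rng = np.random.default_rng(6)
S1 = sobol_first_order(f, 2, 0, 100_000, rng)
```

The O(1/sqrt(N)) error of this estimator, multiplied by the cost of each code evaluation, is exactly the expense the analytical approach of the paper is designed to remove.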

18.
Approximate normality and unbiasedness of the maximum likelihood estimate (MLE) of the long-memory parameter H of a fractional Brownian motion hold reasonably well for sample sizes as small as 20 if the mean and scale parameter are known. We show in a Monte Carlo study that if the latter two parameters are unknown the bias and variance of the MLE of H both increase substantially. We also show that the bias can be reduced by using a parametric bootstrap procedure. In very large samples, maximum likelihood estimation becomes problematic because of the large dimension of the covariance matrix that must be inverted. To overcome this difficulty, we propose a maximum likelihood method based upon first differences of the data. These first differences form a short-memory process. We split the data into a number of contiguous blocks consisting of a relatively small number of observations. Computation of the likelihood function in a block then presents no computational problem. We form a pseudo-likelihood function consisting of the product of the likelihood functions in each of the blocks and provide a formula for the standard error of the resulting estimator of H. This formula is shown in a Monte Carlo study to provide a good approximation to the true standard error. The computation time required to obtain the estimate and its standard error from large data sets is an order of magnitude less than that required to obtain the widely used Whittle estimator. Application of the methodology is illustrated on two data sets.

19.
Park, Joonha; Atchadé, Yves. Statistics and Computing (2020) 30(5):1325–1345

We explore a general framework in Markov chain Monte Carlo (MCMC) sampling where sequential proposals are tried as a candidate for the next state of the Markov chain. This sequential-proposal framework can be applied to various existing MCMC methods, including Metropolis-Hastings algorithms using random proposals and methods that use deterministic proposals such as Hamiltonian Monte Carlo (HMC) or the bouncy particle sampler. Sequential-proposal MCMC methods construct the same Markov chains as those constructed by the delayed rejection method under certain circumstances. In the context of HMC, the sequential-proposal approach has been proposed as extra chance generalized hybrid Monte Carlo (XCGHMC). We develop two novel methods in which the trajectories leading to proposals in HMC are automatically tuned to avoid doubling back, as in the No-U-Turn sampler (NUTS). The numerical efficiency of these new methods compares favorably with that of NUTS. We additionally show that the sequential-proposal bouncy particle sampler enables the constructed Markov chain to pass through regions of low target density and thus facilitates better mixing of the chain when the target density is multimodal.
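The connection to delayed rejection can be made concrete with a two-stage random-walk sampler in the Tierney-Mira style: if a large first-stage move is rejected, a smaller second move is tried with a corrected acceptance probability that preserves the target. This is a generic sketch of delayed rejection, not the authors' exact sequential-proposal algorithm; the step sizes are hypothetical.

```python
import numpy as np

def dr_metropolis(logpi, x0, n_iter, s1, s2, rng):
    """Two-stage delayed-rejection Metropolis: on first-stage rejection,
    a second proposal is tried with the Tierney-Mira acceptance ratio."""
    def q1(d):  # first-stage Gaussian proposal density (symmetric)
        return np.exp(-d * d / (2 * s1 * s1)) / (s1 * np.sqrt(2 * np.pi))
    x, out = x0, np.empty(n_iter)
    for i in range(n_iter):
        y1 = x + s1 * rng.standard_normal()
        a1 = min(1.0, np.exp(logpi(y1) - logpi(x)))
        if rng.uniform() < a1:
            x = y1
        else:  # second stage: smaller move, corrected acceptance ratio
            y2 = x + s2 * rng.standard_normal()
            a1_rev = min(1.0, np.exp(logpi(y1) - logpi(y2)))
            num = np.exp(logpi(y2)) * q1(y1 - y2) * (1.0 - a1_rev)
            den = np.exp(logpi(x)) * q1(y1 - x) * (1.0 - a1)
            if den > 0 and rng.uniform() < min(1.0, num / den):
                x = y2
        out[i] = x
    return out

rng = np.random.default_rng(7)
draws = dr_metropolis(lambda x: -0.5 * x * x, 0.0, 40_000, 3.0, 0.8, rng)
```

The second-stage correction factor (1 - a1_rev) / (1 - a1) is what keeps the target distribution invariant despite the extra attempt.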


20.
Pricing options is an important problem in financial engineering. In many scenarios of practical interest, pricing a financial option on an underlying asset reduces to computing an expectation with respect to a diffusion process. In general, these expectations cannot be calculated analytically, and one way to approximate them is via the Monte Carlo (MC) method; MC methods have been used to price options since at least the 1970s. It has been shown by Del Moral and Shevchenko [Valuation of barrier options using sequential Monte Carlo. 2014. arXiv preprint] and Jasra and Del Moral [Sequential Monte Carlo methods for option pricing. Stoch Anal Appl. 2011;29:292–316] that sequential Monte Carlo (SMC) methods are a natural tool in this context and can vastly improve over standard MC. In this article, in a similar spirit, we show that significant gains can be achieved with SMC methods by constructing a sequence of artificial target densities over time. In particular, we approximate the optimal importance sampling distribution in the SMC algorithm by using a sequence of weighting functions. This is demonstrated on two examples, barrier options and target accrual redemption notes (TARNs). We also provide a proof of unbiasedness of our SMC estimate.
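The plain MC baseline is easy to sketch for a European call under geometric Brownian motion, where the Black-Scholes formula provides an exact benchmark; the SMC machinery for barrier options and TARNs is not shown, and the parameter values are illustrative.

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes price of a European call (exact benchmark)."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def mc_call(S0, K, r, sigma, T, n, rng):
    """Plain Monte Carlo price: simulate terminal GBM prices and
    average the discounted payoff."""
    Z = rng.standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Z)
    return exp(-r * T) * np.maximum(ST - K, 0.0).mean()

rng = np.random.default_rng(8)
exact = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
approx = mc_call(100.0, 100.0, 0.05, 0.2, 1.0, 400_000, rng)
```

For path-dependent payoffs such as barrier options no closed form exists and the payoff is rarely hit, which is precisely where the SMC construction of artificial targets pays off over this plain estimator.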
