Similar Articles
20 similar articles found.
1.
A common financial trading strategy involves exploiting the mean-reverting behaviour of paired asset prices. Since a unit root test can be used to determine which pairs of assets appear to exhibit mean-reverting behaviour, we propose a new Bayesian unit root test for detecting a local unit root against a mean-reverting nonlinear smooth transition heteroskedastic alternative. The test procedure is based on the posterior odds. For simultaneous estimation and inference, we employ an adaptive Bayesian Markov chain Monte Carlo scheme, which uses a mixture prior specification to solve the likelihood identification problem involving the smoothing parameter and the autoregressive coefficient with a unit root. The size and power properties of the proposed method are examined via a simulation study. An empirical study examines the mean-reverting behaviour of the differential between stock and futures prices.
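The Bayesian posterior-odds test itself cannot be reconstructed from the abstract, but the underlying idea, checking a paired-price spread for a unit root versus mean reversion, can be sketched with a classical Dickey–Fuller regression. The function and simulated series below are illustrative, not taken from the paper:

```python
import numpy as np

def df_tstat(s):
    """Dickey-Fuller t-statistic for H0: unit root in series s.
    Regresses diff(s) on the lagged level (plus an intercept); a strongly
    negative value points toward mean reversion in the spread."""
    y = np.diff(s)
    X = np.column_stack([np.ones(len(y)), s[:-1]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(1)
n = 2000
eps = rng.normal(size=n)
spread = np.empty(n)                      # mean-reverting AR(1) "spread"
spread[0] = 0.0
for t in range(1, n):
    spread[t] = 0.9 * spread[t - 1] + eps[t]
walk = np.cumsum(rng.normal(size=n))      # random walk: no mean reversion
t_mr, t_rw = df_tstat(spread), df_tstat(walk)
```

In practice `t_mr` would be compared with Dickey–Fuller critical values; the paper's point is that replacing such a classical test with posterior odds allows mean reversion and heteroskedasticity to be modelled jointly.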

2.
Inference, quantile forecasting and model comparison for an asymmetric double smooth transition heteroskedastic model are investigated. A Bayesian framework is employed and an adaptive Markov chain Monte Carlo scheme is designed. A mixture prior is proposed that alleviates the usual identifiability problem as the speed-of-transition parameter tends to zero, and an informative prior for this parameter is suggested that allows for reliable inference and a proper posterior, despite the non-integrability of the likelihood function. A formal Bayesian posterior model comparison procedure is employed to compare the proposed model with its two limiting cases: the double threshold GARCH and symmetric ARX GARCH models. The proposed methods are illustrated using both simulated and international stock market return series. Some illustrations of the advantages of an adaptive sampling scheme for these models are also provided. Finally, Bayesian forecasting methods are employed in a Value-at-Risk study of the international return series. The results generally favour the proposed smooth transition model and highlight explosive and smooth nonlinear behaviour in financial markets.

3.
This paper presents a strategy for conducting Bayesian inference in the triangular cointegration model. A Jeffreys prior is used to circumvent an identification problem in the parameter region in which there is a near lack of cointegration. Sampling experiments are used to compare the repeated sampling performance of the approach with alternative classical cointegration methods. The Bayesian procedure is applied to testing for substitution between private and public consumption for a range of countries, with posterior estimates produced via Markov Chain Monte Carlo simulators.

4.
The article details a sampling scheme which can lead to a reduction in sample size and cost in clinical and epidemiological studies of association between a count outcome and a risk factor. We show that inference in two common generalized linear models for count data, Poisson and negative binomial regression, is improved by using a ranked auxiliary covariate, which guides the sampling procedure. This type of sampling has typically been used to improve inference on a population mean. The novelty of the current work is its extension to log-linear models and derivations showing that the sampling technique results in an increase in information as compared to simple random sampling. Specifically, we show that under the proposed sampling strategy the maximum likelihood estimate of the risk factor's coefficient is improved through an increase in the Fisher information. A simulation study is performed to compare the mean squared error, bias, variance, and power of the sampling routine with simple random sampling under various data-generating scenarios. We also illustrate the merits of the sampling scheme on a real data set from a clinical setting of males with chronic obstructive pulmonary disease. Empirical results from the simulation study and data analysis coincide with the theoretical derivations, suggesting that a significant reduction in sample size, and hence study cost, can be realized while achieving the same precision as a simple random sample.
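The abstract extends ranked-set ideas to log-linear models; as a minimal sketch of why ranking helps at all, the following compares the variance of the sample mean under ranked-set sampling versus simple random sampling, assuming rankings are perfect. All names and settings are illustrative, not from the paper:

```python
import random
import statistics

def ranked_set_sample(draw, m, cycles, rng):
    """One ranked-set sample of size m*cycles: in each cycle, for each
    rank i, draw a set of m units, sort them, and keep only the i-th
    order statistic (so every unit kept is measured, but most are only
    ranked)."""
    out = []
    for _ in range(cycles):
        for i in range(m):
            s = sorted(draw(rng) for _ in range(m))
            out.append(s[i])
    return out

rng = random.Random(7)
draw = lambda r: r.gauss(0.0, 1.0)
reps, m, cycles = 500, 3, 10          # n = 30 measured units either way
srs_means = [statistics.fmean(draw(rng) for _ in range(m * cycles))
             for _ in range(reps)]
rss_means = [statistics.fmean(ranked_set_sample(draw, m, cycles, rng))
             for _ in range(reps)]
var_srs = statistics.variance(srs_means)
var_rss = statistics.variance(rss_means)
```

With perfect rankings, `var_rss` comes out noticeably below `var_srs`, which is the information gain the paper quantifies through the Fisher information in the regression setting.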

5.
We describe a novel deterministic approximate inference technique for conditionally Gaussian state space models, i.e. state space models where the latent state consists of both multinomial and Gaussian distributed variables. The method can be interpreted as a smoothing pass and iteration scheme symmetric to an assumed density filter. It improves upon previously proposed smoothing passes by not making more approximations than implied by the projection onto the chosen parametric form, the assumed density. Experimental results show that the novel scheme outperforms these alternative deterministic smoothing passes. Comparisons with sampling methods suggest that the performance does not degrade with longer sequences.

6.
We study the coverage properties of Bayesian confidence intervals for the smooth component functions of generalized additive models (GAMs) represented using any penalized regression spline approach. The intervals are the usual generalization of the intervals first proposed by Wahba and Silverman in 1983 and 1985, respectively, to the GAM component context. We present simulation evidence showing these intervals have close to nominal 'across-the-function' frequentist coverage probabilities, except when the truth is close to a straight line/plane function. We extend the argument introduced by Nychka in 1988 for univariate smoothing splines to explain these results. The theoretical argument suggests that close to nominal coverage probabilities can be achieved, provided that heavy oversmoothing is avoided, so that the bias is not too large a proportion of the sampling variability. The theoretical results allow us to derive alternative intervals from a purely frequentist point of view, and to explain the impact that the neglect of smoothing parameter variability has on confidence interval performance. They also suggest switching the target of inference for component-wise intervals away from smooth components in the space of the GAM identifiability constraints.

7.
In nonparametric regression the smoothing parameter can be selected by minimizing a mean squared error (MSE) based criterion. For spline smoothing one can also rewrite the smooth estimation as a linear mixed model in which the smoothing parameter appears as the a priori variance of the spline basis coefficients. This allows one to employ maximum likelihood (ML) theory to estimate the smoothing parameter as a variance component. In this paper the relation between the two approaches is illuminated for penalized spline smoothing (P-splines) as suggested by Eilers and Marx (Statist. Sci. 11(2), 1996, p. 89). Theoretical and empirical arguments show that the ML approach is biased towards undersmoothing, i.e. it chooses too complex a model compared with the MSE criterion. The result is in line with classical spline smoothing, even though the asymptotic arguments are different, because in P-spline smoothing a finite-dimensional basis is employed while in classical spline smoothing the basis grows with the sample size.
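As a rough illustration of the penalized fit whose smoothing parameter is at issue, here is a minimal sketch using a truncated-line basis with a ridge penalty on the knot coefficients. This simplified basis is ours; Eilers and Marx use B-splines with difference penalties. In the mixed-model view, the penalty weight `lam` plays the role of the error-variance to coefficient-variance ratio that ML estimates:

```python
import numpy as np

def pspline_fit(x, y, knots, lam):
    """Penalized regression with a truncated-line basis:
    f(x) = b0 + b1*x + sum_k u_k * (x - knot_k)_+ ,
    where only the knot coefficients u_k carry the ridge penalty lam,
    so lam = 0 gives an unpenalized broken-line fit and lam -> inf
    shrinks the fit toward a straight line."""
    B = np.column_stack([np.ones_like(x), x] +
                        [np.clip(x - k, 0.0, None) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))
    coef = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    return B @ coef

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
knots = np.linspace(0.05, 0.95, 20)
sse = {lam: np.sum((y - pspline_fit(x, y, knots, lam)) ** 2)
       for lam in (0.01, 100.0)}
```

Training error is monotone in `lam` (smaller penalty, closer fit to the data), which is exactly why a criterion biased toward small `lam`, as the paper argues ML is, produces undersmoothed, overly complex fits.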

8.
This paper proposes, implements and investigates a new non-parametric two-sample test for detecting stochastic dominance. We pose the question of detecting stochastic dominance in a non-standard way, motivated by existing evidence that standard formulations and the pertaining procedures may lead to serious errors in inference. The procedure we introduce combines testing with model selection. More precisely, we reparametrize the testing problem in terms of Fourier coefficients of well-known comparison densities. The estimated Fourier coefficients are then used to form a kind of signed smooth rank statistic, in which the number of Fourier coefficients incorporated is a smoothing parameter, determined via a flexible selection rule. We establish the asymptotic properties of the new test under the null and alternative hypotheses. The finite-sample performance of the new solution is demonstrated through Monte Carlo studies and an application to a set of survival times.

9.
Semiparametric Bayesian models are nowadays a popular tool in event history analysis. An important area of research concerns the investigation of frequentist properties of posterior inference. In this paper, we propose novel semiparametric Bayesian models for the analysis of competing risks data and investigate the Bernstein–von Mises theorem for differentiable functionals of model parameters. The model is specified by expressing the cause-specific hazard as the product of the conditional probability of a failure type and the overall hazard rate. We take the conditional probability as a smooth function of time and leave the cumulative overall hazard unspecified. A prior distribution is defined on the joint parameter space, which includes a beta process prior for the cumulative overall hazard. We first develop the large-sample properties of maximum likelihood estimators by giving simple sufficient conditions for them to hold. Then, we show that, under the chosen priors, the posterior distribution for any differentiable functional of interest is asymptotically equivalent to the sampling distribution derived from maximum likelihood estimation. A simulation study is provided to illustrate the coverage properties of credible intervals on cumulative incidence functions.

10.
In this paper, we discuss the inference problem for the Box-Cox transformation model when one faces left-truncated and right-censored data, which often arise, for example, in studies involving a cross-sectional sampling scheme. It is well known that the Box-Cox transformation model includes many commonly used models as special cases, such as the proportional hazards model and the additive hazards model. For inference, a Bayesian estimation approach is proposed in which a piecewise function is used to approximate the baseline hazard function. A conditional marginal prior, whose marginal part is free of any constraints, is employed to deal with the computational challenges caused by the constraints on the parameters, and an MCMC sampling procedure is developed. A simulation study is conducted to assess the finite sample performance of the proposed method and indicates that it works well in practical situations. We apply the approach to a set of data arising from a retirement center.

11.
In the current study we develop robust Bayesian inference for the generalized inverted family of distributions (GIFD) under an ε-contamination class of prior distributions for the shape parameter α, with different possibilities of known and unknown scale parameter. Using Type II censoring and the Bartholomew (1963) sampling scheme, we derive, under the squared-error loss function (SELF) and the linear exponential (LINEX) loss function, the ML-II Bayes estimators of (i) the parameters, (ii) the reliability function and (iii) the hazard function. We also present a simulation study and the analysis of a real data set.

12.
Approximate Bayesian computation (ABC) is an approach to sampling from an approximate posterior distribution in the presence of a computationally intractable likelihood function. A common implementation is based on simulating model, parameter and dataset triples from the prior, and then accepting as samples from the approximate posterior those model and parameter pairs for which the corresponding dataset, or a summary of that dataset, is 'close' to the observed data. Closeness is typically determined through a distance measure and a kernel scale parameter. Appropriate choice of that parameter is important in producing a good quality approximation. This paper proposes diagnostic tools for the choice of the kernel scale parameter based on assessing the coverage property, which asserts that credible intervals have the correct coverage levels in appropriately designed simulation settings. We provide theoretical results on coverage for both model and parameter inference, and adapt these into diagnostics for the ABC context. We re-analyse a study on human demographic history to determine whether the adopted posterior approximation was appropriate. Code implementing the proposed methodology is freely available in the R package abctools.
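The paper's coverage diagnostics live in the `abctools` R package; the rejection-ABC step they diagnose can be sketched for a normal-mean toy model as follows. All names and settings here are illustrative, not from the paper; `eps` is the kernel scale parameter whose choice the diagnostics address:

```python
import numpy as np

def abc_rejection(obs_stat, prior_draws, simulate, summarize, eps):
    """Basic rejection ABC: keep the prior draws whose simulated summary
    statistic lands within eps (the kernel scale) of the observed one."""
    kept = [theta for theta in prior_draws
            if abs(summarize(simulate(theta)) - obs_stat) < eps]
    return np.array(kept)

rng = np.random.default_rng(5)
data = rng.normal(loc=1.5, scale=1.0, size=20)     # "observed" data
obs_stat = data.mean()
prior = rng.normal(0.0, 5.0, size=5000)            # mu ~ N(0, 25) prior
simulate = lambda mu: rng.normal(mu, 1.0, size=20) # intractable model stand-in
post = abc_rejection(obs_stat, prior, simulate, np.mean, eps=0.2)
```

Shrinking `eps` sharpens the approximation but shrinks the accepted sample; the coverage diagnostics give a principled way to judge whether a given `eps` yields credible intervals with the right frequentist coverage.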

13.
This article reviews Bayesian inference from the perspective that the designated model is misspecified. This misspecification has implications for the interpretation of objects such as the prior distribution, and has prompted recent questioning of the appropriateness of Bayesian inference in this scenario. The main focus of the article is to establish the suitability of applying the Bayes update to a misspecified model, relying on representation theorems for sequences of symmetric distributions; the identification of parameter values of interest; and the construction of sequences of distributions that act as guesses as to where the next observation is coming from. The article concludes with a clear identification of the fundamental starting point for the Bayesian.

14.
We introduce a flexible marginal modelling approach for statistical inference for clustered and longitudinal data under minimal assumptions. This estimated estimating equations approach is semiparametric and the proposed models are fitted by quasi-likelihood regression, where the unknown marginal means are a function of the fixed effects linear predictor with unknown smooth link, and variance–covariance is an unknown smooth function of the marginal means. We propose to estimate the nonparametric link and variance–covariance functions via smoothing methods, whereas the regression parameters are obtained via the estimated estimating equations. These are score equations that contain nonparametric function estimates. The proposed estimated estimating equations approach is motivated by its flexibility and easy implementation. Moreover, if data follow a generalized linear mixed model, with either a specified or an unspecified distribution of random effects and link function, the model proposed emerges as the corresponding marginal (population-average) version and can be used to obtain inference for the fixed effects in the underlying generalized linear mixed model, without the need to specify any other components of this generalized linear mixed model. Among marginal models, the estimated estimating equations approach provides a flexible alternative to modelling with generalized estimating equations. Applications of estimated estimating equations include diagnostics and link selection. The asymptotic distribution of the proposed estimators for the model parameters is derived, enabling statistical inference. Practical illustrations include Poisson modelling of repeated epileptic seizure counts and simulations for clustered binomial responses.

15.
Bayesian inference under the skew-normal family of distributions is discussed using an arbitrary proper prior for the skewness parameter. In particular, we review some results for the case where a skew-normal prior distribution is considered. Under this prior, we provide a stochastic representation of the posterior of the skewness parameter, and obtain analytical expressions for its posterior mean and variance. The ultimate goal is to apply these results to change-point identification in the parameters of the location-scale skew-normal model. Some Latin American emerging market datasets are used to illustrate the methodology developed in this work.
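For concreteness, the skew-normal density underlying this family can be written down directly (Azzalini's form; the paper's Bayesian machinery is not reproduced here):

```python
import math

def skew_normal_pdf(x, alpha, loc=0.0, scale=1.0):
    """Azzalini skew-normal density: (2/scale) * phi(z) * Phi(alpha * z),
    with z = (x - loc)/scale; alpha is the skewness parameter and
    alpha = 0 recovers the ordinary normal density."""
    z = (x - loc) / scale
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * z / math.sqrt(2.0)))
    return 2.0 * phi * Phi / scale

# crude Riemann-sum check that the density integrates to 1
xs = [-10.0 + i * 0.01 for i in range(2001)]
mass = sum(skew_normal_pdf(x, alpha=3.0) for x in xs) * 0.01
```

Positive `alpha` pushes mass to the right, negative to the left, which is what makes a prior on `alpha` alone a natural target for the change-point analysis described above.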

16.
We consider an extension of the recursive bivariate probit model for estimating the effect of a binary variable on a binary outcome in the presence of unobserved confounders, nonlinear covariate effects and overdispersion. Specifically, the model consists of a system of two binary outcomes with a binary endogenous regressor which includes smooth functions of covariates, hence allowing for flexible functional dependence of the responses on the continuous regressors, and arbitrary random intercepts to deal with overdispersion arising from correlated observations on clusters or from the omission of non-confounding covariates. We fit the model by maximizing a penalized likelihood using an Expectation-Maximisation algorithm. The issues of automatic multiple smoothing parameter selection and inference are also addressed. The empirical properties of the proposed algorithm are examined in a simulation study. The method is then illustrated using data from a survey on health, aging and wealth.

17.
This article considers inference for the log-normal distribution based on progressive Type I interval censored data by both frequentist and Bayesian methods. First, the maximum likelihood estimates (MLEs) of the unknown model parameters are computed by the expectation-maximization (EM) algorithm. The asymptotic standard errors (ASEs) of the MLEs are obtained by applying the missing information principle. Next, the Bayes estimates of the model parameters are obtained by the Gibbs sampling method under both symmetric and asymmetric loss functions. The Gibbs sampling scheme is facilitated by adopting a data augmentation scheme similar to that in the EM algorithm. The performance of the MLEs and various Bayesian point estimates is judged via a simulation study. A real dataset is analyzed for the purpose of illustration.

18.
The Yule–Simon distribution has so far been off the radar of the Bayesian community. In this note, we propose an explicit Gibbs sampling scheme when a Gamma prior is chosen for the shape parameter. The performance of the algorithm is illustrated with simulation studies, including count data regression, and a real data application to text analysis. We compare our proposal to the frequentist counterparts, showing that our algorithm performs better when the sample size is small.
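As background, the Yule–Simon pmf is p(k) = ρ B(k, ρ+1) for k = 1, 2, …, where B is the beta function. A minimal numerical sketch follows; the Gibbs sampler with the Gamma prior is not reproduced here:

```python
import math

def yule_simon_pmf(k, rho):
    """Yule-Simon pmf p(k) = rho * B(k, rho + 1), k = 1, 2, ...
    The beta function is evaluated via log-gamma for numerical stability,
    since B(k, rho + 1) = Gamma(k) Gamma(rho + 1) / Gamma(k + rho + 1)."""
    log_beta = (math.lgamma(k) + math.lgamma(rho + 1.0)
                - math.lgamma(k + rho + 1.0))
    return rho * math.exp(log_beta)

rho = 2.0
ks = range(1, 100_000)
mass = sum(yule_simon_pmf(k, rho) for k in ks)      # should be close to 1
mean = sum(k * yule_simon_pmf(k, rho) for k in ks)  # rho/(rho-1) for rho > 1
```

The heavy power-law tail, p(k) decaying like k^(-(ρ+1)), is what makes the distribution attractive for the text-analysis application mentioned in the note.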

19.
This paper is motivated by the pioneering work of Emanuel Parzen, wherein he advanced the estimation of (spectral) densities via kernel smoothing and established the role of reproducing kernel Hilbert spaces (RKHS) in the field of time series analysis. Here, we consider analysis of power (ANOPOW) for replicated time series collected in an experimental design where the main goals are to estimate, and to detect differences among, group spectra. To accomplish these goals, we obtain smooth estimators of the group spectra by assuming that each spectral density is in some RKHS; we then apply penalized least squares in a smoothing spline ANOPOW. For inference, we obtain simultaneous confidence intervals for the estimated group spectra via bootstrapping.

20.
Smoothing splines via the penalized least squares method provide versatile and effective nonparametric models for regression with Gaussian responses. The computation of smoothing splines is generally of order O(n^3), n being the sample size, which severely limits their practical applicability. We study more scalable computation of smoothing spline regression via certain low-dimensional approximations that are asymptotically as efficient. A simple algorithm is presented and the Bayes model associated with the approximations is derived, with the latter guiding the porting of Bayesian confidence intervals. The practical choice of the dimension of the approximating space is determined through simulation studies, and empirical comparisons of the approximations with the exact solution are presented. Also evaluated is a simple modification of the generalized cross-validation method for smoothing parameter selection, which to a large extent fixes the occasional undersmoothing problem suffered by generalized cross-validation.
