Similar Documents
11 similar documents found (search time: 7 ms)
1.
Summary.  Risk is at the centre of many policy decisions in companies, governments and other institutions. The risk of road fatalities concerns local governments in planning countermeasures, the risk and severity of counterparty default concerns bank risk managers daily and the risk of infection has actuarial and epidemiological consequences. However, risk cannot be observed directly and it usually varies over time. We introduce a general multivariate time series model for the analysis of risk based on latent processes for the exposure to an event, the risk of that event occurring and the severity of the event. Linear state space methods can be used for the statistical treatment of the model. The new framework is illustrated for time series of insurance claims, credit card purchases and road safety. It is shown that the general methodology can be effectively used in the assessment of risk.
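The linear state space treatment the abstract refers to can be illustrated with a minimal sketch (my own illustration, not the authors' multivariate risk model): a univariate local level model, where a latent level is tracked through noisy observations by the standard Kalman recursions and the log-likelihood accumulates via the prediction-error decomposition.

```python
import numpy as np

def kalman_filter_local_level(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e6):
    """Kalman filter for the local level model:
       y_t = alpha_t + eps_t,  alpha_{t+1} = alpha_t + eta_t."""
    a, p = a0, p0
    filtered, loglik = [], 0.0
    for yt in y:
        f = p + sigma_eps2                 # one-step prediction variance
        v = yt - a                         # prediction error
        loglik += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f                          # Kalman gain
        a = a + k * v                      # filtered (= next predicted) state
        p = p * (1 - k) + sigma_eta2       # next prediction variance
        filtered.append(a)
    return np.array(filtered), loglik

y = np.array([1.0, 1.2, 0.8, 1.1, 0.9])
states, ll = kalman_filter_local_level(y, sigma_eps2=0.5, sigma_eta2=0.1)
```

The multivariate exposure/risk/severity model of the paper stacks several such latent processes, but the filtering logic is the same.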

2.
The main topic of the paper is on-line filtering for non-Gaussian dynamic (state space) models by approximate computation of the first two posterior moments using efficient numerical integration. Based on approximating the prior of the state vector by a normal density, we prove that the posterior moments of the state vector are related to the posterior moments of the linear predictor in a simple way. For the linear predictor Gauss-Hermite integration is carried out with automatic reparametrization based on an approximate posterior mode filter. We illustrate how further topics in applied state space modelling, such as estimating hyperparameters, computing model likelihoods and predictive residuals, are managed by integration-based Kalman-filtering. The methodology derived in the paper is applied to on-line monitoring of ecological time series and filtering for small count data.
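As a toy version of the Gauss-Hermite step described above (my own sketch; the paper combines it with automatic reparametrization around an approximate posterior mode), the first two posterior moments of a scalar log-rate under a normal prior and a Poisson observation can be computed by quadrature:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def posterior_moments_poisson(y, m, s, n_nodes=20):
    """First two posterior moments of theta, where
       theta ~ N(m, s^2) a priori and y ~ Poisson(exp(theta)),
       via Gauss-Hermite quadrature."""
    t, w = hermgauss(n_nodes)              # nodes/weights for weight exp(-t^2)
    theta = m + np.sqrt(2.0) * s * t       # change of variables to the prior
    lam = np.exp(theta)
    lik = np.exp(y * theta - lam)          # Poisson kernel (constant dropped)
    norm = np.sum(w * lik)
    post_mean = np.sum(w * lik * theta) / norm
    post_var = np.sum(w * lik * theta**2) / norm - post_mean**2
    return post_mean, post_var

mean, var = posterior_moments_poisson(y=3, m=0.0, s=1.0)
```

With y = 3 the likelihood peaks near log 3, so the posterior mean is pulled above the prior mean of 0 and the posterior variance shrinks below the prior variance of 1.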

3.
The Kim filter (KF) approximation is widely used for the likelihood calculation of dynamic linear models with Markov regime-switching parameters. However, despite its popularity, its approximation error has not yet been examined rigorously. Therefore, this study investigates the reliability of the KF approximation for maximum likelihood (ML) and Bayesian estimations. To measure the approximation error, we compare the outcomes of the KF method with those of the auxiliary particle filter (APF). The APF is a numerical method that requires a longer computing time, but its numerical error can be sufficiently minimized by increasing simulation size. According to our extensive simulation and empirical studies, the likelihood values obtained from the KF approximation are practically identical to those of the APF. Furthermore, we show that the KF method is reliable, particularly when regimes are persistent and sample size is small. From the Bayesian perspective, we show that the KF method improves the efficiency of posterior simulation. This study contributes to the literature by providing evidence to justify the use of the KF method in both ML and Bayesian estimations.
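The step that makes the Kim filter an approximation is the "collapsing" of the regime-conditional posterior mixture back to a single mean and variance per current regime. A minimal sketch of that collapsing step (notation and numbers are my own illustration):

```python
import numpy as np

def kim_collapse(a, p, prob):
    """Kim's collapsing step: regime-conditional posterior means a[i, j],
       variances p[i, j] (previous regime i, current regime j) and joint
       probabilities prob[i, j] are reduced to one mean/variance per j."""
    pj = prob.sum(axis=0)                          # marginal prob of regime j
    w = prob / pj                                  # weights P(i | j, data)
    a_j = (w * a).sum(axis=0)                      # collapsed means
    p_j = (w * (p + (a_j - a) ** 2)).sum(axis=0)   # collapsed variances
    return a_j, p_j, pj

a = np.array([[0.0, 1.0], [0.2, 0.8]])
p = np.array([[1.0, 1.0], [0.5, 0.5]])
prob = np.array([[0.3, 0.2], [0.1, 0.4]])
aj, pj_var, pj = kim_collapse(a, p, prob)
```

Replacing the exact mixture by its first two moments is precisely the error the study above quantifies against the auxiliary particle filter.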

4.
This article takes a hierarchical model approach to the estimation of state space models with diffuse initial conditions. An initial state is said to be diffuse when it cannot be assigned a proper prior distribution. In state space models this occurs either when fixed effects are present or when modelling nonstationarity in the state transition equation. Whereas much of the literature views diffuse states as an initialization problem, we follow the approach of Sallas and Harville (1981, 1988) and incorporate diffuse initial conditions via noninformative prior distributions into hierarchical linear models. We apply existing results to derive the restricted log-likelihood and appropriate modifications to the standard Kalman filter and smoother. Our approach results in a better understanding of De Jong's (1991) contributions. This article also shows how to adjust the standard Kalman filter, the fixed interval smoother and the state space model forecasting recursions, together with their mean square errors, for the presence of diffuse components. Using a hierarchical model approach it is shown that the estimates obtained are Best Linear Unbiased Predictors (BLUP).
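In practice a diffuse prior is often approximated by a large-but-finite initial state variance (the "big-kappa" device), in contrast to the exact treatment the article derives. The effect is visible in the first-step Kalman gain of a local level model (a sketch in my own notation):

```python
def first_step_gain(p0, sigma_eps2):
    """Kalman gain at t = 1 for a local level model.  As the initial
       state variance p0 grows without bound (a diffuse prior), the
       gain tends to 1, so the first observation alone determines
       the filtered level."""
    return p0 / (p0 + sigma_eps2)

gain_diffuse = first_step_gain(1e8, 1.0)   # near-diffuse initialization
gain_proper = first_step_gain(1.0, 1.0)    # informative proper prior
```

The exact diffuse treatment avoids the numerical instabilities that a literal huge p0 can introduce into the filter recursions.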

5.
As pointed out in a recent paper by Amirkhalkhali and Rao (1986) (henceforth referred to as A&R), the usual assumption of normality for the error terms of a regression model is often untenable. However, when this assumption is dropped, it may be difficult to characterize parameter estimates for the model. For example, A&R (p. 189) state that "if the regression errors are non-normal, we are not even sure of their [e.g., the generalized least squares parameter estimates'] asymptotic properties." A partial answer, however, is given by Spall and Wall (1984), which presents an asymptotic distribution theory for Kalman filter estimates for cases where the random terms of the state space model are not necessarily Gaussian. Certain of these asymptotic distribution results are also discussed in Spall (1985) in the context of model validation (diagnostic checking).

6.
Long‐term historical daily temperatures are used in electricity forecasting to simulate the probability distribution of future demand but can be affected by changes in recording site and climate. This paper presents a method of adjusting for the effect of these changes on daily maximum and minimum temperatures. The adjustment technique accommodates the autocorrelated and bivariate nature of the temperature data, which has not previously been taken into account. The data are from Perth, Western Australia, the main electricity demand centre for the South‐West of Western Australia. The statistical modelling involves a multivariate extension of the univariate time series ‘interleaving method’, which allows fully efficient simultaneous estimation of the parameters of replicated Vector Autoregressive Moving Average processes. Temperatures at the most recent weather recording location in Perth are shown to be significantly lower compared to previous sites. There is also evidence of long‐term heating due to climate change, especially for minimum temperatures.

7.
Kalman filtering techniques are widely used by engineers to recursively estimate random signal parameters which are essentially coefficients in a large-scale time series regression model. These Bayesian estimators depend on the values assumed for the mean and covariance parameters associated with the initial state of the random signal. This paper considers a likelihood approach to estimation and tests of hypotheses involving the critical initial means and covariances. A computationally simple convergent iterative algorithm is used to generate estimators which depend only on standard Kalman filter outputs at each successive stage. Conditions are given under which the maximum likelihood estimators are consistent and asymptotically normal. The procedure is illustrated using a typical large-scale data set involving 10-dimensional signal vectors.
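A reduced sketch of the idea (mine: a scalar local level model stands in for the paper's 10-dimensional signals, and a generic optimizer replaces its purpose-built iterative algorithm): the Gaussian likelihood from the prediction-error decomposition is maximized over the initial state mean.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def loglik(a0, y, p0=1.0, se2=1.0, sn2=0.1):
    """Log-likelihood of a local level model via the prediction-error
       decomposition, viewed as a function of the initial mean a0."""
    a, p, ll = a0, p0, 0.0
    for yt in y:
        f = p + se2                       # prediction variance
        v = yt - a                        # prediction error
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f
        a += k * v
        p = p * (1 - k) + sn2
    return ll

rng = np.random.default_rng(0)
y = 2.0 + 0.3 * rng.standard_normal(50)   # level near 2
res = minimize_scalar(lambda a0: -loglik(a0, y), bounds=(-10, 10), method="bounded")
a0_hat = res.x
```

With data fluctuating around 2, the estimated initial mean lands near 2; the same likelihood could be maximized jointly over the initial covariance as the paper does.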

8.
Summary. The paper deals with missing data and forecasting problems in multivariate time series, making use of the Common Components Dynamic Linear Model (DLMCC) presented in Quintana (1985) and West and Harrison (1989). Some results are presented and discussed: exploiting the correlation between series estimated by the DLMCC, the paper shows how it is possible to update state vector posterior distributions for the unobserved series. This is done on the basis of the updating of the observed series' state vectors, for which the usual Kalman filter equations can be applied. An application concerning some Italian private consumption series provides an example of the model's capabilities.

9.
Summary. A drawback of a new method for integrating abundance and mark–recapture–recovery data is the need to combine likelihoods describing the different data sets. Often these likelihoods will be formed by using specialist computer programs, which is an obstacle to the joint analysis. This difficulty is easily circumvented by the use of a multivariate normal approximation. We show that it is only necessary to make the approximation for the parameters of interest in the joint analysis. The approximation is evaluated on data sets for two bird species and is shown to be efficient and accurate.
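A sketch of the device (my own minimal example, with a one-parameter binomial survival likelihood in place of output from a specialist program): a component likelihood is replaced by a normal approximation centred at its MLE, with variance taken from the observed information.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def negloglik(phi, k=40, n=100):
    """Negative log-likelihood for a binomial survival probability phi:
       k recoveries out of n marked individuals (illustrative numbers)."""
    return -(k * np.log(phi) + (n - k) * np.log(1 - phi))

# MLE of phi by numerical optimization.
res = minimize(lambda p: negloglik(p[0]), x0=[0.5], bounds=[(1e-6, 1 - 1e-6)])
phi_hat = res.x[0]

# Curvature at the MLE (numerical second derivative) gives the
# variance of the normal approximation to this likelihood component.
h = 1e-5
d2 = (negloglik(phi_hat + h) - 2 * negloglik(phi_hat) + negloglik(phi_hat - h)) / h**2
se = 1.0 / np.sqrt(d2)
approx = norm(loc=phi_hat, scale=se)
```

In a joint analysis, this normal density can then stand in for the original likelihood, so only the MLE and its information matrix need to be exported from the specialist program.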

10.
Pharmacokinetic (PK) data often contain concentration measurements below the quantification limit (BQL). While specific values cannot be assigned to these observations, the observed BQL data are nevertheless informative and generally known to be lower than the lower limit of quantification (LLQ). Treating BQL values as missing data violates the usual missing at random (MAR) assumption underlying standard statistical methods, and therefore leads to biased or less precise parameter estimation. By definition, these data lie within the interval [0, LLQ], and can be considered as censored observations. Statistical methods that handle censored data, such as maximum likelihood and Bayesian methods, are thus useful in the modelling of such data sets. The main aim of this work was to investigate the impact of the amount of BQL observations on the bias and precision of parameter estimates in population PK models (non‐linear mixed effects models in general) under the maximum likelihood method as implemented in SAS and NONMEM, and a Bayesian approach using Markov chain Monte Carlo (MCMC) as applied in WinBUGS. A second aim was to compare these different methods in dealing with BQL or censored data in a practical situation. The evaluation was illustrated by simulation based on a simple PK model: a number of data sets were simulated from a one‐compartment first‐order elimination PK model, and several quantification limits were applied to each simulated data set to generate data sets with certain amounts of BQL data. The average percentage of BQL observations ranged from 25% to 75%. Their influence on the bias and precision of all population PK model parameters, such as clearance and volume of distribution, under each estimation approach was explored and compared. Copyright © 2009 John Wiley & Sons, Ltd.
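The censored-likelihood idea can be sketched with a plain normal model in place of a population PK model (my own simplified example; names and values are illustrative): observed concentrations contribute the density, while each BQL point contributes the probability of falling below the LLQ.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Simulate concentrations; values below the LLQ are only known as "BQL".
rng = np.random.default_rng(1)
true_mu, true_sigma, llq = 10.0, 2.0, 9.0
y = rng.normal(true_mu, true_sigma, 500)
bql = y < llq                              # censoring indicator

def negloglik(params):
    """Left-censored normal likelihood: observed points contribute
       the density, BQL points contribute P(Y < LLQ)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)              # log-parametrized for stability
    ll = norm.logpdf(y[~bql], mu, sigma).sum()
    ll += bql.sum() * norm.logcdf((llq - mu) / sigma)
    return -ll

res = minimize(negloglik, x0=[y[~bql].mean(), 0.0])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

Discarding the BQL points or imputing them at LLQ/2 would bias mu upward or downward respectively; the censored likelihood recovers both parameters without assigning values to the unobserved concentrations.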

11.
The detection of (structural) breaks, the so-called change point problem, has drawn increasing attention in theoretical, applied economic and financial fields. Much of the existing research concentrates on the detection of change points and the asymptotic properties of their estimators in panels when N, the number of panels, and T, the number of observations in each panel, are both large. In this paper we pursue a different approach: we consider the asymptotic properties as N→∞ while keeping T fixed. This situation is typically related to large (firm-level) data containing financial information about a very large number of firms/stocks across a limited number of years/quarters/months. We propose a general approach for testing for break(s) in this setup. In particular, we obtain the asymptotic behavior of the test statistics. We also propose a wild bootstrap procedure that can be used to generate the critical values of the test statistics. The theoretical approach is supplemented by numerous simulations and by an empirical illustration. We demonstrate that the testing procedure works well in the framework of the four-factor CAPM. In particular, we estimate the breaks in the monthly returns of US mutual funds during the period January 2006 to February 2010, which covers the subprime crisis.
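A toy version of the wild bootstrap idea for a break-in-mean test (my own single-series sketch; the paper works with N panels and fixed T): the null distribution of a CUSUM-type statistic is simulated by reweighting the centred data with random Rademacher signs.

```python
import numpy as np

def wild_bootstrap_pvalue(x, stat_fn, n_boot=999, rng=None):
    """Wild bootstrap p-value: flip signs of the centred data with
       Rademacher weights and recompute the statistic under the null."""
    rng = rng or np.random.default_rng(0)
    stat = stat_fn(x)
    xc = x - x.mean()                      # impose the no-break null
    count = 0
    for _ in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=xc.shape)
        if stat_fn(xc * w) >= stat:
            count += 1
    return (count + 1) / (n_boot + 1)

def break_stat(x):
    """Normalized maximum of the CUSUM of deviations from the mean."""
    s = np.cumsum(x - x.mean())
    return np.max(np.abs(s)) / (x.std() * np.sqrt(x.size))

rng = np.random.default_rng(42)
x_null = rng.standard_normal(100)                                  # no break
x_break = np.concatenate([rng.standard_normal(50),
                          2.0 + rng.standard_normal(50)])          # mean shift
p_null = wild_bootstrap_pvalue(x_null, break_stat)
p_break = wild_bootstrap_pvalue(x_break, break_stat)
```

Sign-flipping destroys the break structure while preserving the (possibly heteroskedastic) scale of each observation, which is the reason wild bootstraps are favoured over iid resampling in this setting.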

