Similar Documents (20 results)
1.
In this paper, we introduce a new risk measure, the so‐called conditional tail moment. It is defined as the moment of order a ≥ 0 of the loss distribution above the upper α‐quantile, where α ∈ (0,1). Estimating the conditional tail moment permits us to estimate all risk measures based on conditional moments, such as the conditional tail expectation, conditional value at risk or conditional tail variance. Here, we focus on the estimation of these risk measures in the case of extreme losses (where the level α is no longer fixed but tends to zero, α ↓ 0). It is moreover assumed that the loss distribution is heavy tailed and depends on a covariate. The estimation method thus combines non‐parametric kernel methods with extreme‐value statistics. The asymptotic distribution of the estimators is established, and their finite‐sample behaviour is illustrated both on simulated data and on a real data set of daily rainfalls.
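To fix ideas, a minimal empirical sketch of the quantity being estimated: the conditional tail moment of order a at level α is E[Y^a | Y > q], where P(Y > q) = α. The sketch below ignores the covariate, the kernel smoothing and the extreme-value extrapolation developed in the paper; the function name and toy data are illustrative only.

```python
import numpy as np

def conditional_tail_moment(y, a, alpha):
    """Empirical conditional tail moment of order `a` above the upper alpha-quantile,
    i.e. a naive estimate of E[Y^a | Y > q] with P(Y > q) = alpha."""
    y = np.asarray(y, dtype=float)
    q = np.quantile(y, 1.0 - alpha)          # upper alpha-quantile
    tail = y[y > q]
    return np.mean(tail ** a)

rng = np.random.default_rng(0)
losses = rng.pareto(3.0, size=10_000) + 1.0  # heavy-tailed toy losses

cte = conditional_tail_moment(losses, a=1, alpha=0.05)   # conditional tail expectation
ctm2 = conditional_tail_moment(losses, a=2, alpha=0.05)
ctv = ctm2 - cte ** 2                                    # conditional tail variance
print(cte, ctv)
```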

2.
We propose a new type of multivariate statistical model that permits non‐Gaussian distributions as well as the inclusion of conditional independence assumptions specified by a directed acyclic graph. These models feature a specific factorisation of the likelihood that is based on pair‐copula constructions and hence involves only univariate distributions and bivariate copulas, of which some may be conditional. We demonstrate maximum‐likelihood estimation of the parameters of such models and compare them to various competing models from the literature. A simulation study investigates the effects of model misspecification and highlights the need for non‐Gaussian conditional independence models. The proposed methods are finally applied to modeling financial return data. The Canadian Journal of Statistics 40: 86–109; 2012 © 2012 Statistical Society of Canada
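As a rough illustration of a pair-copula factorisation (not the authors' non-Gaussian DAG models), the three-dimensional D-vine density on the copula scale is a product of three bivariate copula densities, one of which takes conditioned arguments produced by h-functions. The sketch below uses Gaussian pair-copulas only; the correlation values are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def gauss_copula_pdf(u, v, rho):
    """Bivariate Gaussian copula density c(u, v; rho)."""
    x, y = norm.ppf(u), norm.ppf(v)
    det = 1.0 - rho ** 2
    return np.exp(-(rho ** 2 * (x ** 2 + y ** 2) - 2 * rho * x * y) / (2 * det)) / np.sqrt(det)

def gauss_h(u, v, rho):
    """h-function: conditional distribution H(u | v) of the Gaussian copula."""
    return norm.cdf((norm.ppf(u) - rho * norm.ppf(v)) / np.sqrt(1.0 - rho ** 2))

def dvine3_pdf(u1, u2, u3, r12, r23, r13_2):
    """3-dimensional D-vine density with Gaussian pair-copulas:
    c(u1,u2,u3) = c12(u1,u2) * c23(u2,u3) * c13|2(h(u1|u2), h(u3|u2))."""
    return (gauss_copula_pdf(u1, u2, r12)
            * gauss_copula_pdf(u2, u3, r23)
            * gauss_copula_pdf(gauss_h(u1, u2, r12), gauss_h(u3, u2, r23), r13_2))

print(dvine3_pdf(0.3, 0.6, 0.8, r12=0.5, r23=0.4, r13_2=0.2))
```

In a full pair-copula construction each Gaussian pair-copula could be swapped for any other bivariate family, which is exactly the flexibility the abstract refers to.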

3.
In this paper, we extend the focused information criterion (FIC) to copula models. Copulas are often used for applications where the joint tail behavior of the variables is of particular interest, and selecting a copula that captures this well is then essential. Traditional model selection methods such as the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) aim at finding the overall best‐fitting model, which is not necessarily the one best suited for the application at hand. The FIC, on the other hand, evaluates and ranks candidate models based on the precision of their point estimates of a context‐given focus parameter. This could be any quantity of particular interest, for example, the mean, a correlation, conditional probabilities, or measures of tail dependence. We derive FIC formulae for the maximum likelihood estimator, the two‐stage maximum likelihood estimator, and the so‐called pseudo‐maximum‐likelihood (PML) estimator combined with parametric margins. Furthermore, we confirm the validity of the AIC formula for the PML estimator combined with parametric margins. To study the numerical behavior of FIC, we have carried out a simulation study, and we have also analyzed a multivariate data set pertaining to abalones. The results from the study show that the FIC successfully ranks candidate models in terms of their performance, defined as how well they estimate the focus parameter. In terms of estimation precision, FIC clearly outperforms AIC, especially when the focus parameter relates to only a specific part of the model, such as the conditional upper‐tail probability.

4.
Hierarchical models defined by means of directed, acyclic graphs are a powerful and widely used tool for Bayesian analysis of problems of varying degrees of complexity. A simulation‐based method for model criticism in such models has been suggested by O'Hagan in the form of a conflict measure based on contrasting separate local information sources about each node in the graph. This measure is, however, not well calibrated. In order to rectify this, alternative, mutually similar, tail probability‐based measures have been proposed independently and have been proved to be uniformly distributed under the assumed model in quite general normal models with known covariance matrices. In the present paper, we extend this result to a variety of models. One advantage of this is that computationally costly pre‐calibration schemes needed for some other suggested methods can be avoided. Another advantage is that non‐informative prior distributions can be used when performing model criticism.

5.
Lin, Tsung I., Lee, Jack C. & Ni, Huey F. Statistics and Computing (2004), 14(2): 119–130
A finite mixture model using the multivariate t distribution has been shown to be a robust extension of normal mixtures. In this paper, we present a Bayesian approach to inference about the parameters of t-mixture models. The prior distributions are specified to be weakly informative so as to avoid nonintegrable posterior distributions. We present two efficient EM-type algorithms for computing the joint posterior mode with the observed data and an incomplete future vector as the sample. Markov chain Monte Carlo sampling schemes are also developed to obtain the target posterior distribution of the parameters. The advantages of the Bayesian approach over the maximum likelihood method are demonstrated via a set of real data.

6.
Yuan Ying Zhao. Statistics (2015), 49(6): 1348–1365
Various mixed models have been developed to capture between- and within-individual variation in longitudinal data under the assumption that the random effects and the within-individual random errors are normally distributed. However, the normality assumption may be violated in some applications. To address this, this article assumes that the random effect follows a skew-normal distribution and the within-individual error is distributed as a reproductive dispersion model. An expectation conditional maximization (ECME) algorithm, together with a Metropolis–Hastings (MH) algorithm within the Gibbs sampler, is presented to simultaneously obtain estimates of the parameters and the random effects. Several diagnostic measures are developed to identify potentially influential cases and to assess the effect of minor perturbations of the model assumptions via the case-deletion method and local influence analysis. To reduce the computational burden, we derive first-order approximations to the case-deletion diagnostics. Several simulation studies and a real data example are presented to illustrate the newly developed methodologies.

7.
The problem of making decisions about an unknown parameter is examined under a convex loss function, when its prior distribution may not be uniquely specified on the basis of the available information. Following the conditional Γ-minimax approach, an action is chosen such that it minimises the maximum posterior expected loss. The characterising properties of such an action, called a conditional Γ-minimax action, are found and illustrated in three examples.
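A toy numerical sketch of the conditional Γ-minimax idea (not the paper's characterisation results): with a finite parameter set, a finite class Γ of priors, and squared-error loss, the chosen action minimises the maximum, over priors in Γ, of the posterior expected loss. All numbers below are made up for illustration.

```python
import numpy as np

# Finite parameter space and two candidate priors (the class Gamma)
theta = np.array([0.0, 1.0, 2.0])
priors = [np.array([0.6, 0.3, 0.1]),
          np.array([0.2, 0.3, 0.5])]

# Likelihood of the observed data for each theta (made-up values)
lik = np.array([0.2, 0.5, 0.3])

def posterior(prior):
    w = prior * lik
    return w / w.sum()

def max_posterior_expected_loss(a):
    """Worst-case posterior expected squared-error loss over the class Gamma."""
    return max(np.sum(posterior(p) * (theta - a) ** 2) for p in priors)

# Conditional Gamma-minimax action: minimise the worst case over a grid of actions
actions = np.linspace(0.0, 2.0, 201)
best = min(actions, key=max_posterior_expected_loss)
print(best)
```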

8.
This paper examines local influence assessment in generalized autoregressive conditional heteroscedasticity models with Gaussian and Student-t errors, where influence is examined via the likelihood displacement. The analysis of local influence is discussed under three perturbation schemes: data perturbation, innovative model perturbation and additive model perturbation. For each case, expressions for slope and curvature diagnostics are derived. Monte Carlo experiments are presented to determine the threshold values for locating influential observations. The empirical study of daily returns of the New York Stock Exchange composite index shows that local influence analysis is a useful technique for detecting influential observations; most of the observations detected as influential are associated with historical shocks in the market. Finally, based on this empirical study and the analysis of simulated data, some advice is given on how to use the discussed methodology.

9.
The authors consider Bayesian analysis for continuous‐time Markov chain models based on a conditional reference prior. For such models, inference of the elapsed time between chain observations depends heavily on the rate of decay of the prior as the elapsed time increases. Moreover, improper priors on the elapsed time may lead to improper posterior distributions. In addition, an infinitesimal rate matrix also characterizes this class of models. Experts often have good prior knowledge about the parameters of this matrix. The authors show that the use of a proper prior for the rate matrix parameters together with the conditional reference prior for the elapsed time yields a proper posterior distribution. The authors also demonstrate that, when compared to analyses based on priors previously proposed in the literature, a Bayesian analysis on the elapsed time based on the conditional reference prior possesses better frequentist properties. The type of prior thus represents a better default prior choice for estimation software.

10.
Parametric incomplete data models defined by ordinary differential equations (ODEs) are widely used in biostatistics to describe biological processes accurately. Their parameters are estimated on approximate models, whose regression functions are evaluated by a numerical integration method. Accurate and efficient estimation of these parameters is a critical issue. This paper proposes parameter estimation methods involving either a stochastic approximation EM algorithm (SAEM) for maximum likelihood estimation, or a Gibbs sampler for the Bayesian approach. Both algorithms involve the simulation of the non-observed data from conditional distributions using Hastings–Metropolis (H–M) algorithms. A modified H–M algorithm, including an original local linearization scheme to solve the ODEs, is proposed to reduce the computational time significantly. The convergence of all these algorithms on the approximate model is proved. The errors induced by the numerical solving method on the conditional distribution, the likelihood and the posterior distribution are bounded. The Bayesian and maximum likelihood estimation methods are illustrated on a simulated pharmacokinetic nonlinear mixed-effects model defined by an ODE. Simulation results illustrate the ability of these algorithms to provide accurate estimates.

11.
In this article, we highlight some interesting facts about Bayesian variable selection methods for linear regression models in settings where the design matrix exhibits strong collinearity. We first demonstrate via real data analysis and simulation studies that summaries of the posterior distribution based on marginal and joint distributions may give conflicting results for assessing the importance of strongly correlated covariates. The natural question is which one should be used in practice. The simulation studies suggest that posterior inclusion probabilities and Bayes factors that evaluate the importance of correlated covariates jointly are more appropriate, and that some priors may be more adversely affected in such a setting. To obtain a better understanding of this phenomenon, we study some toy examples with Zellner's g-prior. The results show that strong collinearity may lead to a multimodal posterior distribution over models, in which joint summaries are more appropriate than marginal summaries. Thus, we recommend a routine examination of the correlation matrix and calculation of the joint inclusion probabilities for correlated covariates, in addition to marginal inclusion probabilities, for assessing the importance of covariates in Bayesian variable selection.
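A small sketch of the kind of posterior summaries being contrasted, assuming one already has MCMC draws of the model indicator vectors (a binary matrix whose rows are visited models); the toy data and variable names are illustrative only.

```python
import numpy as np

# gamma: n_draws x p binary matrix of sampled inclusion indicators (toy example)
rng = np.random.default_rng(1)
z = rng.random(5000) < 0.5
# Two strongly correlated covariates: the sampler tends to include one OR the other
gamma = np.column_stack([z, ~z, rng.random(5000) < 0.8]).astype(int)

marginal = gamma.mean(axis=0)                        # P(gamma_j = 1 | data), per covariate
joint_either = np.mean(gamma[:, 0] | gamma[:, 1])    # P(covariate 1 or 2 included)
joint_both = np.mean(gamma[:, 0] & gamma[:, 1])      # P(both included)

print("marginal:", marginal)                   # each of the correlated pair looks ~0.5
print("P(either of the pair):", joint_either)  # ~1.0: jointly the pair clearly matters
print("P(both):", joint_both)
```

The contrast between the ~0.5 marginal probabilities and the ~1.0 joint ("either") probability is exactly the conflict the abstract warns about.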

12.
This paper proposes a hysteretic autoregressive model with a GARCH specification and a skew Student's t-error distribution for financial time series. With an integrated hysteresis zone, this model allows the regime switching of both the conditional mean and the conditional volatility to be delayed when the hysteresis variable lies in the hysteresis zone. We perform Bayesian estimation via an adaptive Markov chain Monte Carlo sampling scheme. The proposed Bayesian method allows simultaneous inference for all unknown parameters, including the threshold values and a delay parameter. To implement model selection, we propose a numerical approximation of the marginal likelihoods for computing posterior odds. The proposed methodology is illustrated using simulation studies and two major Asian stock basis series. We conduct a model comparison for variant hysteresis and threshold GARCH models based on posterior odds ratios, and find strong evidence of the hysteretic effect and some asymmetric heavy-tailedness. Compared with multi-regime threshold GARCH models, this new collection of models is more suitable for describing real data sets. Finally, we employ Bayesian forecasting methods in a Value-at-Risk study of the return series.

13.
Missing data are often problematic in social network analysis, since what is missing may potentially alter the conclusions about what we have observed: tie-variables need to be interpreted in relation to their local neighbourhood and the global structure. Some ad hoc methods for dealing with missing data in social networks have been proposed, but here we consider a model-based approach. We discuss various aspects of fitting exponential family random graph (or p-star) models (ERGMs) to networks with missing data and present a Bayesian data augmentation algorithm for the purpose of estimation. This involves drawing from the full conditional posterior distribution of the parameters, something which is made possible by recently developed algorithms. With ERGMs already having complicated interdependencies, it is particularly important to provide inference that adequately describes the uncertainty, something that the Bayesian approach provides. When we wish to explore the missing parts of the network, the posterior predictive distributions, which are immediately available once the algorithm terminates, allow us to explore the distribution of what is missing without conditioning on any particular parameter values. Some important features of treating missing data, and of the implementation of the algorithm, are illustrated using a well-known collaboration network and a variety of missing data scenarios.

14.
This paper presents a new Laplacian approximation to the posterior density of η = g(θ). It has a simpler analytical form than that described by Leonard et al. (1989). The approximation derived by Leonard et al. requires a conditional information matrix Rη to be positive definite for every fixed η; however, in many cases, not all Rη are positive definite. In such cases, the computation of their approximation fails, since the approximation cannot be normalized. The new approximation, in contrast, may be modified so that the corresponding conditional information matrix is positive definite for every fixed η. In addition, a Bayesian procedure for contingency-table model checking is provided. An example of the cross-classification between wives' educational levels and couples' fertility-planning status is used for illustration. Various Laplacian approximations are computed and compared in this example and in an example of public school expenditures in the context of a Bayesian analysis of the multiparameter Fisher–Behrens problem.
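For orientation, a generic Laplace-type approximation to a posterior (not the specific conditional construction of this paper) locates the posterior mode and expands the log posterior to second order there. The sketch below does this for a toy Poisson model with a normal prior on the log rate; the model and names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy model: y_i ~ Poisson(exp(eta)), with a N(0, 10^2) prior on eta = log rate
y = np.array([3, 5, 4, 6, 2, 5])

def neg_log_post(eta):
    eta = eta[0]
    loglik = np.sum(y * eta - np.exp(eta))          # Poisson log-likelihood (up to a constant)
    logprior = norm.logpdf(eta, loc=0.0, scale=10.0)
    return -(loglik + logprior)

# Posterior mode and curvature via numerical optimisation
fit = minimize(neg_log_post, x0=np.array([0.0]), method="BFGS")
mode = fit.x[0]
var = fit.hess_inv[0, 0]      # BFGS inverse-Hessian approximation ~ posterior variance

# Laplace approximation: posterior of eta is approximately N(mode, var)
print(f"approx posterior for eta: N({mode:.3f}, {var:.4f})")
```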

15.
It is well known that parameter estimates and forecasts are sensitive to assumptions about the tail behavior of the error distribution. In this article, we develop an approach to sequential inference that simultaneously estimates the tail of the accompanying error distribution. Our simulation-based approach models errors with a tν-distribution and, as new data arrive, we sequentially compute the marginal posterior distribution of the tail thickness. Our method naturally incorporates fat-tailed error distributions and can be extended to other data features such as stochastic volatility. We show that the sequential Bayes factor provides an optimal test of fat tails versus normality. We provide an empirical and theoretical analysis of the rate of learning of tail thickness under a default Jeffreys prior. We illustrate our sequential methodology on the British pound/U.S. dollar daily exchange rate data and on data from the 2008–2009 credit crisis using daily S&P 500 returns. Our method naturally extends to multivariate and dynamic panel data.
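A stripped-down sketch of sequentially learning the tail-thickness parameter: put a grid prior on ν and, after each new return, multiply by the Student-t likelihood and renormalise. This deliberately ignores the location, scale and volatility components handled in the article (they are fixed below), and the toy data are simulated.

```python
import numpy as np
from scipy.stats import t as student_t

nu_grid = np.arange(2.0, 41.0)          # candidate degrees of freedom
log_post = np.zeros_like(nu_grid)       # flat prior on the grid (log scale)

rng = np.random.default_rng(42)
returns = student_t.rvs(df=5, scale=0.01, size=2000, random_state=rng)  # toy "returns"

for r in returns:                       # sequential updating, one observation at a time
    log_post += student_t.logpdf(r, df=nu_grid, scale=0.01)  # scale assumed known here
    log_post -= log_post.max()          # stabilise before normalising

post = np.exp(log_post)
post /= post.sum()
print("posterior mode for nu:", nu_grid[np.argmax(post)])
```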

16.
In this paper, we consider two kinds of collapsibility, namely model‐collapsibility and estimate‐collapsibility, of conditional graphical models for multidimensional contingency tables. We show that these two definitions are equivalent, and we propose a necessary and sufficient condition for them in terms of the interaction graph, which allows collapsibility to be characterized and judged intuitively and conveniently.

17.
The k largest order statistics in a random sample from a common heavy‐tailed parent distribution with a regularly varying tail can be characterized as Fréchet extremes. This paper establishes that consecutive ratios of such Fréchet extremes are mutually independent and distributed as functions of beta random variables. The maximum likelihood estimator of the tail index based on these ratios is derived; its exact distribution is determined for fixed k, and its asymptotic distribution as k → ∞. Inferential procedures based upon the maximum likelihood estimator are shown to be optimal. The Fréchet extremes are not directly observable, but a feasible version of the maximum likelihood estimator is equivalent to Hill's statistic. A simple diagnostic is presented that can be used to decide on the largest value of k for which an assumption of Fréchet extremes is sustainable. The results are illustrated using data on commercial insurance claims arising from fires and explosions, and from hurricanes.
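Since the feasible version of the estimator coincides with Hill's statistic, a compact sketch of that statistic may help; the choice of k and the Pareto toy data are for illustration only, not taken from the paper's insurance application.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill's estimator of the tail index based on the k largest observations:
    (1/k) * sum_{i=1..k} [log X_(i) - log X_(k+1)], with X_(1) >= X_(2) >= ..."""
    xs = np.sort(np.asarray(x, dtype=float))[::-1]   # descending order statistics
    return np.mean(np.log(xs[:k]) - np.log(xs[k]))

rng = np.random.default_rng(0)
claims = rng.pareto(2.0, size=5000) + 1.0            # Pareto toy data, true tail index 2
gamma_hat = hill_estimator(claims, k=200)            # estimates 1/(tail index)
print("Hill estimate:", gamma_hat, "-> tail index ~", 1.0 / gamma_hat)
```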

18.
Through an investigation of normal curvature functions for influence graphs of a family of perturbed models, we develop the concept of local conditional influence. This concept can be used to study masking and boosting effects in local influence. We identify the situation under which the influence graph of the unperturbed model contains all the information on these effects. The linear regression model is used for illustration, and it is shown that the concept developed is consistent with Lawrance's (1995) approach to conditional influence in Cook's distance.

19.
A framework is described for organizing and understanding the computations necessary to obtain the posterior mean of a vector of linear effects in a normal linear model, conditional on the parameters that determine the covariance structure. The approach has two major uses: firstly, as a pedagogical tool in the derivation of formulae, and secondly, as a practical tool for developing computational strategies without needing complicated matrix formulae that are often unwieldy in complex hierarchical models. The proposed technique is based upon symbolic application of the sweep operator SWP to an appropriate tableau of means and covariances. The method is illustrated with standard linear model specifications, including the so-called mixed model with both fixed and random effects.
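A concrete sketch of the device described here, written numerically rather than symbolically: applying SWP to the index of the conditioning variable in a joint covariance matrix returns the regression coefficients, the conditional covariance, and the negative inverse of the conditioned block, from which the conditional (posterior-style) mean follows. The tableau values and variable names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sweep(a, k):
    """Apply the sweep operator SWP[k] to the symmetric matrix `a` on pivot index k."""
    a = np.asarray(a, dtype=float).copy()
    d = a[k, k]
    a[k, :] = a[k, :] / d                       # scale pivot row
    a[:, k] = a[:, k] / d                       # scale pivot column
    mask = np.arange(a.shape[0]) != k
    # rank-one update of the non-pivot block (uses the already-scaled row/column)
    a[np.ix_(mask, mask)] -= d * np.outer(a[mask, k], a[k, mask])
    a[k, k] = -1.0 / d
    return a

# Joint mean and covariance of (x1, x2): x1 two-dimensional, x2 scalar (toy values)
mu = np.array([1.0, 2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.8],
                  [0.3, 1.5, 0.4],
                  [0.8, 0.4, 1.0]])

S = sweep(Sigma, 2)                       # sweep on the index of the conditioning variable
B = S[:2, 2]                              # regression coefficients Sigma12 * Sigma22^{-1}
cond_cov = S[:2, :2]                      # conditional covariance of x1 given x2
x2 = 1.2
cond_mean = mu[:2] + B * (x2 - mu[2])     # E[x1 | x2]
print(cond_mean, cond_cov)
```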

20.
Observations collected over time are often autocorrelated rather than independent, and sometimes include observations below or above detection limits (i.e. censored values reported as less than or more than a level of detection) and/or missing data. Practitioners commonly disregard censored cases or replace these observations with some function of the limit of detection, which often results in biased estimates. Moreover, parameter estimation can be greatly affected by the presence of influential observations in the data. In this paper, we derive local influence diagnostic measures for censored regression models with autoregressive errors of order p (hereafter, AR(p)‐CR models) on the basis of the Q‐function under three useful perturbation schemes. In order to account for censoring in a likelihood‐based estimation procedure for AR(p)‐CR models, we use a stochastic approximation version of the expectation‐maximisation algorithm. The accuracy of the local influence diagnostic measures in detecting influential observations is explored through the analysis of empirical studies. The proposed methods are illustrated using data, containing left‐censored observations, from a study of total phosphorus concentration. These methods are implemented in the R package ARCensReg.
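To make the censoring mechanism concrete in a much simpler i.i.d. normal setting (not the AR(p) errors or SAEM machinery of the paper): a left-censored observation contributes the normal CDF at the detection limit to the likelihood rather than the density, which is why naive substitution biases the estimates. The toy data and names below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy left-censored sample: values below the detection limit are recorded at the limit
rng = np.random.default_rng(3)
limit = 0.5
latent = rng.normal(loc=1.0, scale=1.0, size=500)
y = np.maximum(latent, limit)
censored = latent < limit                 # left-censoring indicator (known in practice)

def neg_loglik(par):
    mu, log_sigma = par
    sigma = np.exp(log_sigma)
    ll = norm.logpdf(y[~censored], mu, sigma).sum()            # density for observed values
    ll += censored.sum() * norm.logcdf((limit - mu) / sigma)   # CDF at the limit for censored ones
    return -ll

fit = minimize(neg_loglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)   # close to the true (1.0, 1.0), unlike naive substitution at the limit
```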
