Similar Documents
20 similar documents found (search time: 750 ms)
1.
When prior knowledge about the unknown parameter is available, the Bayesian predictive density coincides with the Bayes estimator of the true density in the sense of the Kullback-Leibler divergence, but this is no longer true under other loss functions. In this paper we present a generalized Bayes rule to obtain Bayes density estimators with respect to any α-divergence, including the Kullback-Leibler divergence and the Hellinger distance. For curved exponential models, we study the asymptotic behaviour of these predictive densities. We show that, whatever prior we use, the generalized Bayes rule improves (in a non-Bayesian sense) on the estimative density corresponding to a bias modification of the maximum likelihood estimator. This gives rise to a correspondence between choosing a prior density for the generalized Bayes rule and fixing a bias for the maximum likelihood estimator in the classical setting. A criterion for comparing and selecting prior densities is also given.

2.
Within the context of the multivariate general linear model, and using a Bayesian formulation and Kullback-Leibler divergences, this paper provides a framework and the resulting methods for detecting and characterizing influential subsets of observations when the goal is parameter estimation. It is further shown how these influence measures inherently depend upon one's exact estimative intent. The relationship to previous work on observations influential in estimation is discussed, and the estimative influence measures obtained here are compared with previously obtained predictive influence functions. Several examples illustrating the methodology are presented.

3.
Summary.  We consider the analysis of extreme shapes rather than the more usual mean- and variance-based shape analysis. In particular, we consider extreme shape analysis in two applications: human muscle fibre images, where we compare healthy and diseased muscles, and temporal sequences of DNA shapes from molecular dynamics simulations. One feature of the shape space is that it is bounded, so we consider estimators which use prior knowledge of the upper bound when present. Peaks-over-threshold methods and maximum-likelihood-based inference are used. We introduce fixed end point and constrained maximum likelihood estimators, and we discuss their asymptotic properties for large samples. It is shown that in some cases the constrained estimators have half the mean-square error of the unconstrained maximum likelihood estimators. The new estimators are applied to the muscle and DNA data, and practical conclusions are given.

4.
The POT (Peaks-Over-Threshold) approach consists of using the generalized Pareto distribution (GPD) to approximate the distribution of excesses over thresholds. In this article, we establish the asymptotic normality of the well-known extreme quantile estimators based on this POT method, under very general assumptions. As an illustration, from this result, we deduce the asymptotic normality of the POT extreme quantile estimators in the case where the maximum likelihood (ML) or the generalized probability-weighted moments (GPWM) methods are used. Simulations are provided in order to compare the efficiency of these estimators based on ML or GPWM methods with classical ones proposed in the literature.
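The POT quantile construction described here has a simple closed form once the GPD is fitted to the threshold excesses. A minimal sketch with ML fitting (the simulated Pareto data, the 95% empirical quantile as threshold, and the target level 0.999 are illustrative choices, not taken from the article):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
x = rng.pareto(2.0, size=5000) + 1.0       # heavy-tailed sample, Pareto(alpha=2)

u = np.quantile(x, 0.95)                   # high threshold
exc = x[x > u] - u                         # excesses over the threshold
xi, _, sigma = genpareto.fit(exc, floc=0)  # ML fit of the GPD to the excesses

# POT extreme quantile: q_p = u + (sigma/xi) * ((k / (n*(1-p)))**xi - 1),
# where k is the number of excesses and n the sample size.
n, k, p = len(x), len(exc), 0.999
q_p = u + (sigma / xi) * ((k / (n * (1 - p))) ** xi - 1)
```

For this Pareto sample the true tail index is xi = 1/2, so the fitted shape parameter should land near 0.5 and the estimated 99.9% quantile well above the threshold.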

5.
The generalized Pareto distribution is used to model exceedances over a threshold in a number of fields, including the analysis of environmental extreme events and financial data. We use this model in a default Bayesian framework where no prior information is available on the unknown model parameters. Using a large simulation study, we compare the performance of our posterior estimates of the parameters with other methods proposed in the literature. We show that our procedure also allows inference on other quantities of interest in extreme value analysis without resorting to asymptotic arguments. We apply the proposed methodology to a real data set.

6.
We apply the behaviour of extreme quantiles to the study of VaR, building a conditional quantile regression model for Shanghai stock market returns to describe their variation at extreme quantiles. We then select a suitable tail model and, on that basis, use extrapolation to predict the conditional VaR at very extreme quantiles, comparing the results with those predicted directly by the quantile regression model. The results show that the two methods produce consistent trends, with the extrapolation-based predictions being somewhat smaller.

7.
Multivariate extreme value statistical analysis is concerned with observations on several variables which are thought to possess some degree of tail dependence. The main approaches to inference for multivariate extremes consist in approximating either the distribution of block component‐wise maxima or the distribution of the exceedances over a high threshold. Although the expressions of the asymptotic density functions of these distributions may be characterized, they cannot be computed in general. In this paper, we study the case where the spectral random vector of the multivariate max‐stable distribution has known conditional distributions. The asymptotic density functions of the multivariate extreme value distributions may then be written through univariate integrals that are easily computed or simulated. The asymptotic properties of two likelihood estimators are presented, and the utility of the method is examined via simulation.

8.
Estimation and prediction in generalized linear mixed models are often hampered by intractable high dimensional integrals. This paper provides a framework to solve this intractability, using asymptotic expansions when the number of random effects is large. To that end, we first derive a modified Laplace approximation when the number of random effects is increasing at a lower rate than the sample size. Second, we propose an approximate likelihood method based on the asymptotic expansion of the log-likelihood using the modified Laplace approximation which is maximized using a quasi-Newton algorithm. Finally, we define the second order plug-in predictive density based on a similar expansion to the plug-in predictive density and show that it is a normal density. Our simulations show that in comparison to other approximations, our method has better performance. Our methods are readily applied to non-Gaussian spatial data and as an example, the analysis of the rhizoctonia root rot data is presented.
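The first-order Laplace approximation underlying such expansions replaces the integral over a random effect by a Gaussian integral around the mode of the integrand's exponent. A one-dimensional sketch (the Poisson count with a normal random intercept is an illustrative choice, not the paper's model):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

def g(u, y=3, sigma=1.0):
    # Log of the integrand: Poisson log-likelihood with a log link in the
    # random intercept u, plus the N(0, sigma^2) log-density of u (up to constants).
    return y * u - np.exp(u) - u**2 / (2 * sigma**2)

# Laplace: integral exp(g(u)) du ~ exp(g(u*)) * sqrt(2*pi / -g''(u*)).
res = minimize_scalar(lambda u: -g(u), bounds=(-5, 5), method="bounded")
u_star = res.x
h = 1e-5                                   # numerical second derivative at the mode
g2 = (g(u_star + h) - 2 * g(u_star) + g(u_star - h)) / h**2
laplace = np.exp(g(u_star)) * np.sqrt(2 * np.pi / -g2)

exact, _ = quad(lambda u: np.exp(g(u)), -10, 10)  # reference by quadrature
```

In one dimension the two values agree to well under a percent here; the paper's contribution is what happens when the number of such integrals grows with the sample size.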

9.
The bivariate extreme value condition (see (1.1) below) includes the marginal extreme value conditions and the existence of the (extreme) dependence function. Two cases are of interest: asymptotic independence and asymptotic dependence. In this paper, we investigate testing the existence of the dependence function under the null hypothesis of asymptotic independence and present two suitable test statistics. Small simulation studies are reported and an application to a real data set is shown. The other case, with the null hypothesis of asymptotic dependence, has already been investigated.

10.
Non-parametric Estimation of Tail Dependence
Abstract.  Dependencies between extreme events (extremal dependencies) are attracting increasing attention in modern risk management. In practice, the concept of tail dependence represents the current standard for describing the amount of extremal dependence. In theory, multivariate extreme-value theory turns out to be the natural choice for modelling such dependencies. The present paper embeds tail dependence into the concept of tail copulae, which describes the dependence structure in the tail of multivariate distributions but works more generally. Various non-parametric estimators for tail copulae and tail dependence are discussed, and weak convergence, asymptotic normality, and strong consistency of these estimators are shown by means of a functional delta method. Further, weak convergence of general upper-order rank statistics for extreme events is investigated and its relationship to tail dependence is established. A simulation study compares the introduced estimators, and two financial data sets are analysed with our methods.
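A basic rank-based estimator of the upper tail-dependence coefficient, in the spirit of the non-parametric estimators discussed here, counts joint exceedances of high marginal ranks. A sketch (the common-factor simulated data and the choice k = 200 are illustrative):

```python
import numpy as np

def upper_tail_dependence(x, y, k):
    """Rank-based estimate of upper tail dependence: the fraction of the
    k largest x-observations whose paired y-observation is also among
    the k largest y-observations."""
    n = len(x)
    rx = np.argsort(np.argsort(x))   # ranks 0..n-1
    ry = np.argsort(np.argsort(y))
    joint = np.sum((rx >= n - k) & (ry >= n - k))
    return joint / k

rng = np.random.default_rng(2)
z = rng.standard_t(3, 4000)                # common heavy-tailed factor
x = z + 0.1 * rng.standard_t(3, 4000)
y = z + 0.1 * rng.standard_t(3, 4000)
lam = upper_tail_dependence(x, y, k=200)   # close to 1 for this strong dependence
```

For independent pairs the same statistic would concentrate near k/n instead, which is the contrast these estimators exploit.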

11.
A progressive hybrid censoring scheme is a mixture of type-I and type-II progressive censoring schemes. In this paper, we mainly consider the analysis of progressive type-II hybrid-censored data when the lifetime distribution of an individual item is the normal or extreme value distribution. Since the maximum likelihood estimators (MLEs) of the parameters cannot be obtained in closed form, we propose the expectation–maximization (EM) algorithm to compute the MLEs. The Newton–Raphson method is also used to estimate the model parameters. The asymptotic variance–covariance matrix of the MLEs under the EM framework is obtained from the Fisher information matrix using the missing-information principle, and asymptotic confidence intervals for the parameters are then constructed. The study concludes by comparing the two estimation methods, and the coverage probabilities of the asymptotic confidence intervals corresponding to the missing-information principle and the observed information matrix, through a simulation study, illustrative examples and a real data analysis.

12.
We propose a vector generalized additive modeling framework for taking into account the effect of covariates on angular density functions in a multivariate extreme value context. The proposed methods are tailored for settings where the dependence between extreme values may change according to covariates. We devise a maximum penalized log‐likelihood estimator, discuss details of the estimation procedure, and derive its consistency and asymptotic normality. The simulation study suggests that the proposed methods perform well in a wealth of simulation scenarios by accurately recovering the true covariate‐adjusted angular density. Our empirical analysis reveals relevant dynamics of the dependence between extreme air temperatures in two alpine resorts during the winter season.

13.
Typically, in the brief discussion of Bayesian inferential methods presented at the beginning of calculus-based undergraduate or graduate mathematical statistics courses, little attention is paid to the process of choosing the parameter value(s) for the prior distribution. Even less attention is paid to the impact of these choices on the predictive distribution of the data. Reasons for this include that the posterior can be found while ignoring the predictive distribution, thereby streamlining the derivation of the posterior, and/or that computer software can be used to find the posterior distribution. In this paper, the binomial, negative-binomial and Poisson distributions, along with their conjugate beta and gamma priors, are utilized to obtain the resulting predictive distributions. It is then demonstrated that specific choices of the parameters of the priors can lead to predictive distributions with properties that might be surprising to a non-expert user of Bayesian methods.
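For the binomial-with-beta-prior case mentioned here, the predictive distribution is beta-binomial in closed form, and one such "surprising" property is easy to exhibit: a U-shaped Beta(0.5, 0.5) prior puts far more predictive mass on the extreme outcomes 0 and n than a plain Binomial(n, 1/2) would. A sketch (the parameter values are illustrative):

```python
from scipy.stats import betabinom, binom

# Predictive distribution of X ~ Binomial(n, p) with p ~ Beta(a, b):
# marginally, X is Beta-Binomial(n, a, b).
n, a, b = 20, 0.5, 0.5
pred = betabinom(n, a, b)

# Predictive mass at the extreme outcome X = 0, versus a fixed-p binomial.
p0_pred = pred.pmf(0)           # roughly 0.12
p0_binom = binom(n, 0.5).pmf(0) # 2**-20, about 1e-6
```

The heavy mass at the extremes comes entirely from the prior's U-shape, which a non-expert choosing a = b = 0.5 as a "non-informative" default may not anticipate.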

14.
On Smooth Statistical Tail Functionals
Many estimators of the extreme value index of a distribution function F that are based on a certain number k_n of largest order statistics can be represented as a statistical tail functional, that is, a functional T applied to the empirical tail quantile function Q_n. We study the asymptotic behaviour of such estimators with a scale and location invariant functional T under weak second order conditions on F. For that purpose a new approximation of the empirical tail quantile function is first established. As a consequence we obtain weak consistency and asymptotic normality of T(Q_n) if T is continuous and Hadamard differentiable, respectively, at the upper quantile function of a generalized Pareto distribution and k_n tends to infinity sufficiently slowly. Then we investigate the asymptotic variance and bias. In particular, those functionals T are characterized that lead to an estimator with minimal asymptotic variance. Finally, we introduce a method to construct estimators of the extreme value index with a made-to-order asymptotic behaviour.

15.
Identifying important biomarkers that are predictive of cancer patients' prognosis is key to gaining better insight into the biological influences on the disease and has become a critical component of precision medicine. The emergence of large-scale biomedical survival studies, which typically involve an excessive number of biomarkers, has created high demand for efficient screening tools for selecting predictive biomarkers. The vast number of biomarkers defies existing variable selection methods via regularization. Recently developed variable screening methods, though powerful in many practical settings, fail to incorporate prior information on the importance of each biomarker and are less powerful in detecting marginally weak but jointly important signals. We propose a new conditional screening method for survival outcome data that computes the marginal contribution of each biomarker given previously known biological information, based on the premise that some biomarkers are known a priori to be associated with disease outcomes. Our method possesses the sure screening property and a vanishing false selection rate. The utility of the proposal is further confirmed with extensive simulation studies and the analysis of a diffuse large B-cell lymphoma dataset. We are pleased to dedicate this work to Jack Kalbfleisch, who has made instrumental contributions to the development of modern methods for analyzing survival data.

16.
Massive correlated data with many inputs are often generated from computer experiments to study complex systems. The Gaussian process (GP) model is a widely used tool for the analysis of computer experiments. Although GPs provide a simple and effective approximation to computer experiments, two critical issues remain unresolved. One is the computational issue in GP estimation and prediction where intensive manipulations of a large correlation matrix are required. For a large sample size and with a large number of variables, this task is often unstable or infeasible. The other issue is how to improve the naive plug-in predictive distribution which is known to underestimate the uncertainty. In this article, we introduce a unified framework that can tackle both issues simultaneously. It consists of a sequential split-and-conquer procedure, an information combining technique using confidence distributions (CD), and a frequentist predictive distribution based on the combined CD. It is shown that the proposed method maintains the same asymptotic efficiency as the conventional likelihood inference under mild conditions, but dramatically reduces the computation in both estimation and prediction. The predictive distribution contains comprehensive information for inference and provides a better quantification of predictive uncertainty as compared with the plug-in approach. Simulations are conducted to compare the estimation and prediction accuracy with some existing methods, and the computational advantage of the proposed method is also illustrated. The proposed method is demonstrated by a real data example based on tens of thousands of computer experiments generated from a computational fluid dynamic simulator.

17.
In this work we address the construction of prediction regions and distribution functions, with particular regard to the multidimensional setting. First, we define a simple procedure for calculating the predictive distribution function which gives improved prediction limits. Second, using a multivariate generalization of a result presented in Ueki and Fueda (2007), we propose a method for correcting estimative prediction regions to reduce their coverage error to third-order accuracy. The improved prediction regions and the associated distribution functions are easy to calculate using a suitable bootstrap procedure. Examples of application are included, showing the good performance of the proposed method even when an approximated model is used for prediction.

18.
This article describes a convenient method of selecting Metropolis–Hastings proposal distributions for multinomial logit models. There are two key ideas involved. The first is that multinomial logit models have a latent variable representation similar to that exploited by Albert and Chib (J Am Stat Assoc 88:669–679, 1993) for probit regression. Augmenting the latent variables replaces the multinomial logit likelihood function with the complete data likelihood for a linear model with extreme value errors. While no conjugate prior is available for this model, a least squares estimate of the parameters is easily obtained, and the asymptotic sampling distribution of the least squares estimator is Gaussian with known variance. The second key idea is to generate a Metropolis–Hastings proposal distribution by conditioning on the estimator instead of the full data set. The resulting sampler has many of the benefits of so-called tailored or approximation Metropolis–Hastings samplers, but because the proposal distributions are available in closed form they can be implemented without numerical methods for exploring the posterior distribution. The algorithm is geometrically ergodic, its computational burden is minor, and it requires minimal user input. Improvements to the sampler's mixing rate are investigated. The algorithm is also applied to partial credit models describing ordinal item response data from the 1998 National Assessment of Educational Progress; its application to hierarchical models and Poisson regression is briefly discussed.
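The latent-variable representation invoked here — category j is chosen when its linear utility plus an i.i.d. extreme-value (Gumbel) error is maximal — can be verified numerically; the coefficient values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Utilities u_j = beta_j + e_j with i.i.d. Gumbel errors e_j; the chosen
# category is the argmax.  Marginalizing the errors yields multinomial
# logit probabilities exp(beta_j) / sum_k exp(beta_k).
beta = np.array([0.0, 1.0, -0.5])
n = 200_000
e = rng.gumbel(size=(n, 3))
choice = np.argmax(beta + e, axis=1)

freq = np.bincount(choice, minlength=3) / n   # empirical choice frequencies
probs = np.exp(beta) / np.exp(beta).sum()     # logit probabilities
```

The match between `freq` and `probs` is exactly what licenses augmenting Gumbel latent utilities in the sampler.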

19.
Estimating standard errors for diagnostic accuracy measures can be challenging under many complicated models. Bootstrap methods address this problem by replacing intractable analytical derivations with resampled empirical distributions. We consider two cases where bootstrap methods successfully improve our knowledge of the sampling variability of diagnostic accuracy estimators. The first application is inference for the area under the ROC curve (AUC) resulting from a functional logistic regression model, a flexible device for describing the relationship between a dichotomous response and multiple covariates. We use this regression method to model the predictive effects of multiple independent variables on the occurrence of a disease; accuracy measures such as the AUC are derived from the functional regression, and asymptotic results for the empirical estimators are provided to facilitate inference. The second application is testing the difference of two weighted areas under the ROC curve (WAUC) from a paired two-sample study. The correlation between the two WAUCs complicates the asymptotic distribution of the test statistic, so we employ bootstrap methods to obtain satisfactory inference results. Simulations and examples are supplied to confirm the merits of the bootstrap methods.
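A percentile-bootstrap interval for the empirical AUC, in the spirit of the first application (the toy scores and labels are simulated, not the article's data or model):

```python
import numpy as np

rng = np.random.default_rng(4)

def auc(scores, labels):
    """Empirical AUC in its Mann-Whitney form: the probability that a
    randomly chosen positive outscores a randomly chosen negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return np.mean(pos[:, None] > neg[None, :])

# Toy diagnostic scores: diseased cases score higher on average.
labels = rng.integers(0, 2, 300)
scores = rng.normal(labels * 1.0, 1.0)

# Percentile bootstrap: resample subjects, recompute the AUC each time.
n = len(labels)
boot = [auc(scores[idx], labels[idx])
        for idx in (rng.integers(0, n, n) for _ in range(1000))]
lo, hi = np.quantile(boot, [0.025, 0.975])
```

Resampling whole subjects (score together with label) is what preserves the dependence structure that makes the analytical variance awkward.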

20.
The ranked set sampling (RSS) method suggested by McIntyre (1952) may be modified to produce new sampling methods that can be made more efficient than the usual RSS method. Two such modifications, namely extreme and median ranked set sampling, are considered in this study. These two methods are generally easier to use in the field and less prone to problems resulting from ranking errors. Two regression-type estimators of the population mean of the variable of interest, based on extreme ranked set sampling (ERSS) and median ranked set sampling (MRSS), are considered and compared with the regression-type estimators based on RSS suggested by Yu & Lam (1997). It turns out that when the variable of interest and the concomitant variable jointly follow a bivariate normal distribution, the regression-type estimator of the population mean based on ERSS dominates all the other estimators considered.
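The ERSS scheme itself is easy to sketch: each set of m units is ranked (by eye in the field; here by the true values, an idealization) and only an extreme of each set is actually measured. For a symmetric distribution, measuring minima and maxima in equal numbers keeps the sample mean unbiased (the normal population and set size below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def erss_sample(pop_draw, m, cycles):
    """Extreme ranked set sample: in each cycle, draw m sets of m units;
    measure the minimum from the first half of the sets and the maximum
    from the second half."""
    out = []
    for _ in range(cycles):
        for i in range(m):
            s = pop_draw(m)                          # one ranked set of m units
            out.append(s.min() if i < m // 2 else s.max())
    return np.array(out)

draw = lambda k: rng.normal(10.0, 2.0, k)   # population with mean 10
sample = erss_sample(draw, m=4, cycles=50)  # 200 measured units
est = sample.mean()
```

The downward bias of the minima cancels the upward bias of the maxima under symmetry, while the extremes' reduced within-set variability is what drives the efficiency gains cited above.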


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号