Similar documents
20 similar documents found (search time: 31 ms)
1.
Dynamic semiparametric factor models (DSFM) smooth simultaneously in space and are parametric in time, approximating complex dynamic structures by time-invariant basis functions and low-dimensional time series. In contrast to traditional dimension reduction techniques, DSFM gives access to the dynamics embedded in high-dimensional data through the lower-dimensional time series. In this paper, we study the time behavior of risk assessments from investors facing random financial payoffs. We use DSFM to estimate risk neutral densities from a dataset of option prices on the German stock index DAX. The dynamics and term structure of the risk neutral densities are investigated with Vector Autoregressive (VAR) methods applied to the estimated lower-dimensional time series.
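The VAR step on the estimated low-dimensional series can be illustrated with a small sketch; the factor series Z below is simulated stand-in data (not the DAX option-price factors), and statsmodels is assumed to be available.

```python
# Sketch: fit a VAR to a low-dimensional factor time series, as done on the
# DSFM factor series. Z is simulated stand-in data, not estimated factors.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T, L = 500, 3                      # length of series, number of factors
Z = np.zeros((T, L))
A = np.array([[0.5, 0.1, 0.0],     # hypothetical VAR(1) transition matrix
              [0.0, 0.4, 0.2],
              [0.1, 0.0, 0.3]])
for t in range(1, T):
    Z[t] = A @ Z[t - 1] + rng.normal(scale=0.1, size=L)

res = VAR(Z).fit(maxlags=5, ic="aic")   # lag order chosen by AIC
print(res.k_ar)                         # selected lag order
print(res.coefs[0])                     # estimated first-lag coefficient matrix
```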

2.
Measures of association are often used to describe the relationship between the row and column variables in two-dimensional contingency tables. It is not uncommon in biomedical research to categorize continuous variables to obtain a two-dimensional table. In these situations it is desirable that the measure of association not be too sensitive to changes in the number of categories or to the choice of cut points. To accomplish this objective we attempt to find a measure of association that closely approximates the corresponding measure of association for the underlying distribution. Measures that are close to the underlying measure for various table sizes and cut points are called stable measures.
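A minimal sketch of the stability idea, using Kendall's tau-b as a stand-in ordinal association measure (not the specific measures studied in the paper): categorize a continuous bivariate sample with different numbers of categories and compare the tabled association with the underlying one.

```python
# Sketch: how an ordinal association measure for a categorized table changes with
# the number of categories. Kendall's tau-b is only a stand-in measure here.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
y = 0.6 * x + 0.8 * rng.normal(size=n)             # correlated continuous pair

tau_underlying, _ = kendalltau(x, y)                # association in the raw data
for k in (2, 3, 4, 6, 10):                          # number of categories per margin
    xb = np.digitize(x, np.quantile(x, np.linspace(0, 1, k + 1)[1:-1]))
    yb = np.digitize(y, np.quantile(y, np.linspace(0, 1, k + 1)[1:-1]))
    tau_table, _ = kendalltau(xb, yb)               # tau-b handles the heavy ties
    print(k, round(tau_underlying, 3), round(tau_table, 3))
```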

3.
Stable distributions are an important class of infinitely divisible probability distributions, of which two special cases are the Cauchy distribution and the normal distribution. Aside from a few special cases, the density function for stable distributions has no known analytic form and is expressible only through the variate’s characteristic function or other integral forms. In this paper, we present numerical schemes for evaluating the density function for stable distributions, its gradient, and distribution function in various parameter regimes of interest, some of which had no preexisting efficient method for their computation. The novel evaluation schemes consist of optimized generalized Gaussian quadrature rules for integral representations of the density function, complemented by asymptotic expansions near various values of the shape and argument parameters. We report several numerical examples illustrating the efficiency of our methods. The resulting code has been made available online.
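For the symmetric case the integral representation is simple enough to evaluate directly; the brute-force check below (not the optimized quadrature/asymptotic schemes of the paper) inverts the characteristic function and compares against scipy's implementation.

```python
# Sketch: density of a standardized symmetric alpha-stable law via numerical
# inversion of its characteristic function,
#   f(x) = (1/pi) * integral_0^inf exp(-t**alpha) * cos(x*t) dt,
# compared against scipy.stats.levy_stable.
import numpy as np
from scipy.integrate import quad
from scipy.stats import levy_stable

def sym_stable_pdf(x, alpha):
    val, _ = quad(lambda t: np.exp(-t**alpha) * np.cos(x * t), 0, np.inf, limit=200)
    return val / np.pi

alpha = 1.5
for x in (0.0, 1.0, 3.0):
    print(x, sym_stable_pdf(x, alpha), levy_stable.pdf(x, alpha, 0.0))
```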

4.
Bayesian classification of Neolithic tools
The classification of Neolithic tools by cluster analysis enables archaeologists to understand the function of the tools and the technological and cultural conditions of the societies that made them. In this paper, Bayesian classification is adopted to analyse data that raise the question of whether the observed variability, e.g. the shape and dimensions of the tools, is related to their use. The data present technical difficulties for the practitioner, such as the presence of mixed-mode data, missing data and errors in variables. These complications are overcome by employing a finite mixture model and Markov chain Monte Carlo methods. The analysis uses prior information expressing the archaeologist's belief that there are two tool groups similar to contemporary adzes and axes. The resulting mixing densities provide evidence that the morphological and dimensional variability among tools is related to the existence of these two tool groups.
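As a much simplified stand-in for the paper's Bayesian mixture (continuous measurements only, no mixed-mode data, missing values or measurement error), a two-component Gaussian mixture can already separate two hypothetical tool groups; the measurement names below are invented for illustration.

```python
# Sketch: a two-component Gaussian mixture fitted to hypothetical tool
# measurements, a simplified stand-in for the Bayesian finite-mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# hypothetical length and edge-width measurements for two tool groups
adzes = rng.normal([80.0, 30.0], [8.0, 4.0], size=(60, 2))
axes_ = rng.normal([120.0, 45.0], [10.0, 5.0], size=(40, 2))
X = np.vstack([adzes, axes_])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gm.weights_)            # estimated mixing proportions
print(gm.predict(X[:5]))      # hard group assignments for the first tools
```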

5.
With the growing availability of high-frequency data, long memory has become a popular topic in finance research. The Fractionally Integrated GARCH (FIGARCH) model is a standard approach to studying the long memory of financial volatility. The original specification of the FIGARCH model uses the Normal distribution, which cannot accommodate the fat-tailed behaviour commonly observed in financial time series. Traditionally, the Student-t distribution and the Generalized Error Distribution (GED) are used instead to address this problem. However, a recent study points out that the Student-t lacks stability, and the stable distribution has been introduced instead; the drawback of that distribution is that its second moment does not exist. To overcome this new problem, the tempered stable distribution, which retains most attractive characteristics of the stable distribution while having finite moments, is a natural candidate. In this paper, we describe the estimation procedure for the FIGARCH model with a tempered stable distribution and conduct a series of simulation studies to demonstrate that it consistently outperforms FIGARCH models with the Normal, Student-t and GED distributions. Empirical evidence from S&P 500 hourly returns is also provided, with robust results. We therefore argue that the tempered stable distribution could be a widely useful tool for modelling high-frequency financial volatility in general contexts with a FIGARCH-type specification.
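FIGARCH's long memory comes from the fractional differencing operator (1 − L)^d. A small sketch of its expansion coefficients via the standard recursion is given below; this illustrates only the long-memory filter, not the tempered stable likelihood estimation described in the paper.

```python
# Sketch: coefficients of (1 - L)^d used in FIGARCH-type models, computed with
# the standard recursion pi_k = pi_{k-1} * (k - 1 - d) / k, pi_0 = 1.
import numpy as np

def frac_diff_weights(d, n_lags):
    pi = np.empty(n_lags + 1)
    pi[0] = 1.0
    for k in range(1, n_lags + 1):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    return pi

w = frac_diff_weights(d=0.4, n_lags=10)
print(w[:5])          # slow hyperbolic decay is the source of long memory
```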

6.
This paper is concerned with stable feature screening for ultrahigh-dimensional data. To deal with the ultrahigh-dimensional data problem and screen out the important features, a set-averaging measurement is proposed. The model averaging technique and the conditional quantile method are used to construct a weighted set-averaging feature screening procedure that identifies the relationships between the candidate predictors and the response variable. The proposed screening method is model free, stable, and possesses the sure screening property under some regularity conditions. Monte Carlo simulations and a real data application are conducted to evaluate the performance of the proposed procedure.
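A schematic (and deliberately crude) stand-in for the idea of averaging a quantile-based marginal utility over several levels and ranking predictors is sketched below; it is not the paper's set-averaging measurement.

```python
# Sketch: for each predictor, average an association measure between the
# predictor and threshold indicators 1{Y > q_tau(Y)} over a grid of quantile
# levels tau, then rank predictors by the averaged score.
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 1000
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)   # only X0 and X3 matter

taus = (0.25, 0.5, 0.75)
score = np.zeros(p)
Xc = X - X.mean(axis=0)
for tau in taus:
    ind = (y > np.quantile(y, tau)).astype(float)
    ind -= ind.mean()
    score += np.abs(Xc.T @ ind) / n      # |covariance| with exceedance indicator
score /= len(taus)

print(np.argsort(score)[::-1][:5])       # indices of the top-ranked predictors
```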

7.
We will pursue a Bayesian nonparametric approach in the hierarchical mixture modelling of lifetime data in two situations: density estimation, when the distribution is a mixture of parametric densities with a nonparametric mixing measure, and accelerated failure time (AFT) regression modelling, when the same type of mixture is used for the distribution of the error term. The Dirichlet process is a popular choice for the mixing measure, yielding a Dirichlet process mixture model for the error; as an alternative, we also allow the mixing measure to be equal to a normalized inverse-Gaussian prior, built from normalized inverse-Gaussian finite dimensional distributions, as recently proposed in the literature. Markov chain Monte Carlo techniques will be used to estimate the predictive distribution of the survival time, along with the posterior distribution of the regression parameters. A comparison between the two models will be carried out on the grounds of their predictive power and their ability to identify the number of components in a given mixture density.
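A truncated stick-breaking draw from a Dirichlet process prior illustrates the nonparametric mixing measure involved; the paper additionally considers a normalized inverse-Gaussian alternative and embeds the mixture in an AFT model, neither of which is reproduced here.

```python
# Sketch: truncated stick-breaking construction of a Dirichlet process draw.
import numpy as np

rng = np.random.default_rng(4)

def dp_stick_breaking(concentration, base_sampler, truncation=100):
    betas = rng.beta(1.0, concentration, size=truncation)
    # w_k = beta_k * prod_{j<k} (1 - beta_j)
    weights = betas * np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    atoms = base_sampler(truncation)
    return weights, atoms

w, theta = dp_stick_breaking(2.0, lambda k: rng.normal(0.0, 1.0, size=k))
print(w[:5], w.sum())      # weights decay and sum to (almost) one
```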

8.
Time-to-event data have been extensively studied in many areas. Although multiple time scales are often observed, commonly used methods are based on a single time scale. Analysing time-to-event data on two time scales can offer more extensive insight into the phenomenon. We introduce a non-parametric Bayesian intensity model to analyse a two-dimensional point process on a Lexis diagram. After a simple discretization of the two-dimensional process, we model the intensity by one-dimensional piecewise constant hazard functions parametrized by change points and the corresponding hazard levels. Our prior distribution incorporates a built-in smoothing feature in two dimensions. We implement posterior simulation using the reversible jump Metropolis–Hastings algorithm and demonstrate the applicability of the method using both simulated and empirical survival data. Our approach outperforms commonly applied models by borrowing strength in two dimensions.
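The one-dimensional building block, a piecewise constant hazard with fixed change points, has a simple censored-data log-likelihood; a sketch with made-up times follows (the two-dimensional Lexis construction and the reversible-jump sampler are not shown).

```python
# Sketch: log-likelihood of right-censored survival times under a piecewise
# constant hazard with fixed change points.
import numpy as np

def piecewise_hazard_loglik(times, events, change_points, hazards):
    """times: observed times; events: 1 = event, 0 = right-censored;
    change_points: interval start points beginning at 0.0 (increasing);
    hazards: one hazard level per interval (the last interval is open-ended)."""
    times = np.asarray(times, float)
    events = np.asarray(events, float)
    hazards = np.asarray(hazards, float)
    edges = np.asarray(change_points, float)
    idx = np.searchsorted(edges, times, side="right") - 1
    # cumulative hazard accumulated over fully traversed intervals
    cum_at_edges = np.concatenate(([0.0], np.cumsum(hazards[:-1] * np.diff(edges))))
    Lam = cum_at_edges[idx] + hazards[idx] * (times - edges[idx])
    return np.sum(events * np.log(hazards[idx]) - Lam)

print(piecewise_hazard_loglik(times=[0.5, 2.0, 3.5], events=[1, 0, 1],
                              change_points=[0.0, 1.0, 3.0],
                              hazards=[0.2, 0.1, 0.3]))
```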

9.
We extend the family of multivariate generalized linear mixed models to include random effects that are generated by smooth densities. We consider two such families of densities, the so-called semi-nonparametric (SNP) and smooth nonparametric (SMNP) densities. Maximum likelihood estimation, under either the SNP or the SMNP densities, is carried out using a Monte Carlo EM algorithm. This algorithm uses rejection sampling and automatically increases the MC sample size as it approaches convergence. In a simulation study we investigate the performance of these two densities in capturing the true underlying shape of the random effects distribution. We also examine the implications of misspecification of the random effects distribution on the estimation of the fixed effects and their standard errors. The impact of the assumed random effects density on the estimation of the random effects themselves is investigated in a simulation study and also in an application to a real data set.

10.
We find the asymptotic distribution of multi-dimensional multi-scale and kernel estimators for high-frequency financial data with microstructure noise. Sampling times are allowed to be asynchronous and endogenous. In the process, we show that the classes of multi-scale and kernel estimators for smoothing out noise perturbation are asymptotically equivalent, in the sense of having the same asymptotic distribution for corresponding kernel and weight functions. The theory leads to multi-dimensional stable central limit theorems and feasible versions. Hence, it allows statistical inference to be drawn for a broad class of multivariate models, which paves the way for tests and confidence intervals in risk measurement for arbitrary portfolios composed of assets observed at high frequency. As an application, we extend the approach to construct a test of the hypothesis that correlated assets are independent conditionally on a common factor.
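The simplest member of the multi-scale family is the two-scale realized variance; a sketch on simulated noisy log-prices follows (the paper treats general multi-scale and kernel estimators with asynchronous, endogenous sampling, which this does not attempt).

```python
# Sketch: two-scale realized variance from noisy log-prices on a regular grid.
import numpy as np

def tsrv(logprice, K=30):
    n = len(logprice) - 1                                   # number of returns
    rv_all = np.sum(np.diff(logprice) ** 2)                 # noisy "all-data" RV
    rv_sub = np.mean([np.sum(np.diff(logprice[k::K]) ** 2)  # K sparse subgrids
                      for k in range(K)])
    n_bar = (n - K + 1) / K
    return rv_sub - (n_bar / n) * rv_all                    # bias-corrected estimate

rng = np.random.default_rng(5)
n = 23400
true_vol = 0.01
efficient = np.cumsum(rng.normal(0.0, true_vol / np.sqrt(n), size=n + 1))
noisy = efficient + rng.normal(0.0, 5e-4, size=n + 1)       # microstructure noise
print(tsrv(noisy), true_vol ** 2)          # estimate vs true integrated variance
```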

11.
For sampling from a normal population with unknown mean, two families of prior densities for the mean are discussed. The corresponding posterior densities are found. A data analyst may choose a prior from these families to represent prior beliefs and then compute the corresponding Bayes estimator, using the techniques discussed.
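The standard normal conjugate case shows how such a posterior and Bayes estimator are computed; the sketch below is only this textbook case, not the specific prior families discussed in the paper.

```python
# Sketch: conjugate updating for a normal mean with known variance and a
# normal prior; returns the Bayes estimate (posterior mean) and posterior s.d.
import numpy as np

def normal_mean_posterior(x, sigma, mu0, tau0):
    """x: sample; sigma: known data s.d.; mu0, tau0: prior mean and s.d."""
    n = len(x)
    prec_post = 1.0 / tau0**2 + n / sigma**2
    mean_post = (mu0 / tau0**2 + n * np.mean(x) / sigma**2) / prec_post
    return mean_post, np.sqrt(1.0 / prec_post)

rng = np.random.default_rng(6)
x = rng.normal(2.0, 1.0, size=25)
print(normal_mean_posterior(x, sigma=1.0, mu0=0.0, tau0=2.0))
```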

12.
Using the ‘grouping vector’ notion and employing a Dirichlet prior for the unknown mixing parameters, viz. the unknown mixing proportions, Bayes estimates of the mixing proportions in finite mixtures of known distributions are obtained. These estimates are based on an optimal grouping of the sample data. An algorithm is proposed to obtain the optimal grouping of the sample observations when the component densities belong to a family of densities possessing the monotone likelihood ratio property. A numerical study is carried out for the case of mixtures of two normal densities.
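Once a grouping is fixed, the Dirichlet-multinomial update behind such Bayes estimates is elementary; the sketch below shows only that update, not the optimal-grouping algorithm.

```python
# Sketch: posterior mean of mixing proportions under a Dirichlet(alpha) prior,
# given counts of observations assigned to the groups of a fixed grouping:
# (alpha_j + n_j) / (sum(alpha) + n).
import numpy as np

def dirichlet_posterior_mean(group_counts, alpha):
    group_counts = np.asarray(group_counts, float)
    alpha = np.asarray(alpha, float)
    return (alpha + group_counts) / (alpha.sum() + group_counts.sum())

# counts of sample points falling into two groups, symmetric prior
print(dirichlet_posterior_mean([37, 63], alpha=[1.0, 1.0]))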

13.
Given two independent samples of sizes n and m drawn from univariate distributions with unknown densities f and g, respectively, we are interested in identifying subintervals where the two empirical densities deviate significantly from each other. The solution is built by turning the nonparametric density comparison problem into a comparison of two regression curves. Each regression curve is created by binning the original observations into many small bins, followed by a suitable root transformation of the binned data counts. Once recast as a regression comparison problem, several nonparametric regression procedures for the detection of sparse signals can be applied. Both multiple testing and model selection methods are explored. Furthermore, an approach for estimating larger connected regions where the two empirical densities differ significantly is also derived, based on a scale-space representation. The proposed methods are applied to simulated examples as well as real-life data from biology.
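A schematic version of the binning-plus-root-transform step, with a naive Bonferroni flagging rule standing in for the more refined sparse-signal procedures of the paper:

```python
# Sketch: bin both samples on a common grid, apply an Anscombe-type
# variance-stabilizing root transform 2*sqrt(count + 3/8), and flag bins where
# the transformed counts differ by more than a Bonferroni-corrected threshold.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, size=3000)
y = np.concatenate([rng.normal(0.0, 1.0, size=2800),
                    rng.normal(2.5, 0.2, size=200)])   # local bump in g

bins = np.linspace(-4, 4, 81)                          # many small bins
cx, _ = np.histogram(x, bins)
cy, _ = np.histogram(y, bins)
zx, zy = 2 * np.sqrt(cx + 3 / 8), 2 * np.sqrt(cy + 3 / 8)
diff = (zy - zx) / np.sqrt(2)                          # approx N(0, 1) under H0
thresh = norm.ppf(1 - 0.025 / len(diff))               # two-sided Bonferroni
flagged = np.nonzero(np.abs(diff) > thresh)[0]
print(bins[flagged])                                   # left edges of flagged bins
```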

14.
This paper is concerned with testing the equality of two high-dimensional spatial sign covariance matrices, with applications to testing the proportionality of two high-dimensional covariance matrices. Interestingly, these two testing problems are completely equivalent for the class of elliptically symmetric distributions. This paper develops a new test for the equality of two high-dimensional spatial sign covariance matrices based on the Frobenius norm of the difference between the two spatial sign covariance matrices. The asymptotic normality of the proposed test statistic is derived under the null and alternative hypotheses when the dimension and sample sizes both tend to infinity. Moreover, the asymptotic power function is also presented. Simulation studies show that the proposed test performs very well in a wide range of settings and can accommodate large dimensions with small sample sizes.
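The quantity the test is built on is easy to compute; the sketch below uses the coordinate-wise median as a simple stand-in for the spatial median and does no null calibration.

```python
# Sketch: spatial sign covariance matrices of two samples and the Frobenius
# norm of their difference (proportional scatter matrices give equal sign
# covariances, so the norm should be small here).
import numpy as np

def spatial_sign_cov(X, center=None):
    if center is None:
        center = np.median(X, axis=0)       # stand-in for the spatial median
    U = X - center
    U /= np.linalg.norm(U, axis=1, keepdims=True)   # project onto the unit sphere
    return U.T @ U / len(X)

rng = np.random.default_rng(8)
X1 = rng.standard_t(df=3, size=(100, 50))           # heavy tails, large dimension
X2 = rng.standard_t(df=3, size=(120, 50)) * 2.0     # proportional scatter
S1, S2 = spatial_sign_cov(X1), spatial_sign_cov(X2)
print(np.linalg.norm(S1 - S2, "fro"))
```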

15.
A numerical procedure is outlined for obtaining the distance between samples from two populations. First, the probability densities in the two populations are estimated by kernel methods, and then the distance is derived by numerical integration of a suitable function of these densities. Various such functions have been proposed in the past; they are all implemented and compared with each other and with Mahalanobis D² on several real and simulated data sets. The results show the method to be viable and to perform well against the Mahalanobis D² standard.
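A minimal sketch of the procedure, using a Gaussian kernel density estimate for each sample and the Hellinger distance as one example of a suitable function of the two densities:

```python
# Sketch: kernel-density-based Hellinger distance between two samples,
# obtained by numerical integration on a common grid.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import gaussian_kde

def kde_hellinger(x, y, grid_size=2000):
    fx, fy = gaussian_kde(x), gaussian_kde(y)
    lo = min(x.min(), y.min()) - 1.0
    hi = max(x.max(), y.max()) + 1.0
    t = np.linspace(lo, hi, grid_size)
    integrand = (np.sqrt(fx(t)) - np.sqrt(fy(t))) ** 2
    return np.sqrt(0.5 * trapezoid(integrand, t))

rng = np.random.default_rng(9)
print(kde_hellinger(rng.normal(0, 1, 300), rng.normal(1, 1, 300)))
```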

16.
Gaussian mixture model-based clustering is now a standard tool for uncovering a hypothetical underlying structure in continuous data. However, many of the usual parsimonious models, despite either their appealing geometrical interpretation or their ability to deal with high-dimensional data, suffer from major drawbacks: they are scale dependent, or their constraints are not preserved after projection. In this work we present a new family of parsimonious Gaussian models based on a variance-correlation decomposition of the covariance matrices. These new models are stable when projected onto the canonical planes and are therefore faithfully representable in low dimension. They are also stable under a change of the measurement units of the data, and such a change does not alter the model selection based on likelihood criteria. We highlight all these stability properties with a specific graphical representation of each model. A detailed Generalized EM (GEM) algorithm is also provided for inference in every model. Then, on biological and geological data, we compare our stable models to standard ones (geometrical models and factor analyzer models), which underlines the benefit of obtaining unit-free models.
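The decomposition underlying these models, and the reason a change of units cannot affect the correlation part, can be seen in a few lines:

```python
# Sketch: variance-correlation decomposition Sigma = D R D, and a check that the
# correlation part R is unaffected when one variable changes measurement units.
import numpy as np

def var_corr_decompose(sigma):
    d = np.sqrt(np.diag(sigma))          # standard deviations
    R = sigma / np.outer(d, d)           # correlation matrix
    return d, R

sigma = np.array([[4.0, 1.2, 0.6],
                  [1.2, 1.0, 0.3],
                  [0.6, 0.3, 0.25]])
scale = np.diag([1000.0, 1.0, 1.0])      # e.g. express variable 1 in new units
d1, R1 = var_corr_decompose(sigma)
d2, R2 = var_corr_decompose(scale @ sigma @ scale)
print(np.allclose(R1, R2))               # True: correlations are unit-free
print(d1, d2)                            # only the standard deviations rescale
```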

17.
For high-dimensional data, SigClust has been developed to test the significance of clustering. The cluster index (CI) used by SigClust is the ratio of the within-cluster sum of squares to the total sum of squares, but its empirical size is overly conservative. By removing the cumbersome terms in the CI, an improved index (BCI) is proposed in this paper. The coefficient of variation of the BCI is significantly reduced, implying that the new index is stable. Moreover, the new significance test (NewSig) maintains the size while providing greater power. Simulation experiments and two real cancer data examples are analysed to illustrate the performance of the new methodology.
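The two-means cluster index that both CI and BCI start from is straightforward to compute; the modification and the significance calibration are not reproduced in this sketch.

```python
# Sketch: cluster index = within-cluster sum of squares (2-means) / total SS.
import numpy as np
from sklearn.cluster import KMeans

def cluster_index(X):
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    wss = km.inertia_                                   # within-cluster SS
    tss = np.sum((X - X.mean(axis=0)) ** 2)             # total SS
    return wss / tss

rng = np.random.default_rng(10)
one_cluster = rng.normal(size=(100, 50))
two_clusters = np.vstack([rng.normal(0, 1, size=(50, 50)),
                          rng.normal(3, 1, size=(50, 50))])
print(cluster_index(one_cluster), cluster_index(two_clusters))  # smaller = stronger split
```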

18.
For a sample from a given distribution, the difference of two order statistics and the Studentized quantile are statistics whose distributions are needed to obtain tests and confidence intervals for quantiles and quantile differences. This paper gives saddlepoint approximations for the densities of these statistics and saddlepoint approximations of the Lugannani–Rice form for their tail probabilities. The relative errors of the approximations are of order n^(-1) uniformly in a neighbourhood of the parameters, and this uniformity is global if the densities are log-concave.
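To show the generic shape of a Lugannani–Rice approximation (not the paper's derivation for order statistic differences or Studentized quantiles, which is considerably subtler), here is the standard tail approximation for a sample mean of Gamma variables, checked against the exact tail.

```python
# Sketch: Lugannani-Rice tail approximation for the mean of n i.i.d. Gamma(a, 1)
# variables, P(mean >= x) ~ 1 - Phi(w) + phi(w)*(1/u - 1/w), compared with the
# exact value (the sum of n Gamma(a, 1) variables is Gamma(n*a, 1)).
import numpy as np
from scipy.stats import norm, gamma

def lugannani_rice_gamma_mean(x, a, n):
    s = 1.0 - a / x                         # saddlepoint: K'(s) = a/(1-s) = x
    K = -a * np.log(1.0 - s)                # cumulant generating function at s
    Kpp = a / (1.0 - s) ** 2
    w = np.sign(s) * np.sqrt(2.0 * n * (s * x - K))
    u = s * np.sqrt(n * Kpp)
    return 1.0 - norm.cdf(w) + norm.pdf(w) * (1.0 / u - 1.0 / w)

a, n, x = 2.0, 10, 2.8
print(lugannani_rice_gamma_mean(x, a, n), gamma.sf(n * x, a * n))
```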

19.
Consider the model of k populations whose densities are nonregular in the sense that they involve one or two unknown truncation parameters. In this paper a unified treatment of the problem of Bahadur efficiency of the likelihood ratio test for such a model is presented. The Bahadur efficiency of a certain test based on the union-intersection principle is also studied. Some of these results are then extended to a larger class of nonregular densities.

20.
We consider the problem of efficiently estimating multivariate densities and their modes for moderate dimensions and an abundance of data. We propose polynomial histograms to solve this estimation problem, presenting first- and second-order polynomial histogram estimators for a general d-dimensional setting. Our theoretical results include the pointwise bias and variance of these estimators, their asymptotic mean integrated squared error (AMISE), and the optimal binwidth. The asymptotic performance of the first-order estimator matches that of the kernel density estimator, while the second-order estimator attains the faster rate of O(n^(-6/(d+6))). For a bivariate normal setting, we present explicit expressions for the AMISE constants, which show the much larger binwidths of the second-order estimator and hence also the more efficient computation of multivariate densities. We apply polynomial histogram estimators to real data from biotechnology and find the number and location of modes in such data.
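A rough one-dimensional illustration of the first-order idea: within each bin, fit a linear density that matches the bin's count and first moment. This is only the d = 1 case, assuming moment matching as the fitting rule, and is not the paper's general d-dimensional estimator.

```python
# Sketch: first-order polynomial histogram in one dimension. In a bin with
# centre c, width h, count N and S = sum of (x_i - c) over the bin,
#   f_hat(x) = N/(n*h) + 12*S*(x - c)/(n*h**3).
import numpy as np

def poly1_histogram(x, edges, grid):
    n, h = len(x), edges[1] - edges[0]
    centres = (edges[:-1] + edges[1:]) / 2
    bin_idx = np.clip(np.digitize(x, edges) - 1, 0, len(centres) - 1)
    N = np.bincount(bin_idx, minlength=len(centres))
    S = np.bincount(bin_idx, weights=x - centres[bin_idx], minlength=len(centres))
    g_idx = np.clip(np.digitize(grid, edges) - 1, 0, len(centres) - 1)
    return N[g_idx] / (n * h) + 12 * S[g_idx] * (grid - centres[g_idx]) / (n * h**3)

rng = np.random.default_rng(11)
x = rng.normal(size=5000)
edges = np.linspace(-4, 4, 21)
grid = np.linspace(-3, 3, 7)
print(poly1_histogram(x, edges, grid))   # should roughly track the N(0,1) density
```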
