Similar Documents
20 similar documents found.
1.
Time series are often affected by interventions such as strikes, earthquakes, or policy changes. In the current paper, we build a practical nonparametric intervention model using the central mean subspace in time series. We estimate the central mean subspace for time series, taking known interventions into account, by using the Nadaraya–Watson kernel estimator. We use the modified Bayesian information criterion to estimate the unknown lag and dimension. Finally, we demonstrate that this nonparametric approach for intervened time series performs well both in simulations and in a real-data analysis of a monthly average oxidant series.
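As a concrete illustration of the kernel step, here is a minimal Nadaraya–Watson conditional-mean sketch in Python. It is not the paper's central-mean-subspace procedure; the function name nw_mean, the Gaussian kernel, and the bandwidth h are illustrative assumptions.

```python
import numpy as np

def nw_mean(x_train, y_train, x_eval, h=0.5):
    """Nadaraya-Watson estimate of E[Y | X = x] with a Gaussian kernel."""
    u = (x_eval[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * u ** 2)            # kernel weight of each training point
    return (w @ y_train) / w.sum(axis=1)

# Toy nonlinear autoregression: regress y_t on its first lag.
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * np.sin(y[t - 1]) + 0.3 * rng.standard_normal()
print(nw_mean(y[:-1], y[1:], np.linspace(-1, 1, 5)))
```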

2.
Econometric Reviews, 2012, 31(1): 1–26
Abstract

This paper proposes a nonparametric procedure for testing conditional quantile independence using projections. Relative to existing smoothed nonparametric tests, the resulting test statistic: (i) detects high-frequency local alternatives that converge to the null hypothesis in probability at a faster rate, and (ii) yields improvements in finite-sample power when a large number of variables are included under the alternative. In addition, it allows the researcher to include qualitative information and, if desired, to direct the test against specific subsets of alternatives without imposing any functional form on them. We use the weighted Nadaraya–Watson (WNW) estimator of the conditional quantile function, which avoids boundary problems in estimation and testing, and prove weak uniform consistency (with rate) of the WNW estimator for absolutely regular processes. The procedure is applied to a study of risk spillovers among banks. We show that the methodology generalizes some recently proposed measures of systemic risk, and we use the quantile framework to assess the intensity of risk spillovers among individual financial institutions.
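A minimal sketch of kernel-based conditional quantile estimation: it inverts a plain Nadaraya–Watson estimate of the conditional CDF, whereas the paper's weighted NW estimator re-weights the kernel to guarantee a proper distribution function and better boundary behaviour. Names and the bandwidth are illustrative.

```python
import numpy as np

def nw_conditional_quantile(x_train, y_train, x0, tau, h=0.5):
    """Estimate the tau-th conditional quantile of Y given X = x0
    by inverting a kernel-weighted conditional CDF."""
    w = np.exp(-0.5 * ((x0 - x_train) / h) ** 2)
    w /= w.sum()
    order = np.argsort(y_train)
    cdf = np.cumsum(w[order])           # estimated F(y | x0) at sorted Y values
    idx = np.searchsorted(cdf, tau)     # smallest y with F(y | x0) >= tau
    return y_train[order][min(idx, len(y_train) - 1)]

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 1000)
y = x + (0.2 + 0.3 * x) * rng.standard_normal(1000)  # heteroscedastic noise
print(nw_conditional_quantile(x, y, x0=0.5, tau=0.9))
```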

3.
Abstract

The ROC curve is a fundamental evaluation tool in medical research and survival analysis. Estimation of the ROC curve has been studied extensively for complete data and right-censored survival data. However, these methods are not suitable for analyzing length-biased and right-censored data. Since such data carry the auxiliary information that the truncation time and the residual time share the same distribution, two new estimators of the ROC curve are proposed that exploit this auxiliary information to improve estimation efficiency. Numerical simulation studies under several assumed settings and a real-data analysis are conducted.
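For reference, here is the standard empirical ROC curve for complete (uncensored) data, which the paper's estimators refine for the length-biased, right-censored setting; this sketch does not implement the auxiliary-information estimators themselves.

```python
import numpy as np

def empirical_roc(scores_pos, scores_neg, grid_size=200):
    """Empirical ROC: (FPR(c), TPR(c)) as the cutoff c sweeps the score range."""
    lo = min(scores_pos.min(), scores_neg.min())
    hi = max(scores_pos.max(), scores_neg.max())
    cuts = np.linspace(lo, hi, grid_size)
    tpr = np.array([(scores_pos > c).mean() for c in cuts])
    fpr = np.array([(scores_neg > c).mean() for c in cuts])
    return fpr, tpr

rng = np.random.default_rng(2)
fpr, tpr = empirical_roc(rng.normal(1.0, 1, 300), rng.normal(0.0, 1, 300))
order = np.argsort(fpr)                  # integrate left to right for the AUC
auc = np.sum(np.diff(fpr[order]) * (tpr[order][1:] + tpr[order][:-1]) / 2)
print(f"AUC ~= {auc:.3f}")
```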

4.
The concept of causality is naturally defined in terms of the conditional distribution; however, almost all empirical work focuses on causality in mean. This paper proposes a nonparametric statistic to test conditional independence and Granger non-causality between two variables conditionally on a third. The test statistic is based on the comparison of conditional distribution functions using an L2 metric. We use the Nadaraya–Watson method to estimate the conditional distribution functions. We establish the asymptotic size and power properties of the test statistic, and we motivate the validity of the local bootstrap. We run a simulation experiment to investigate the finite-sample properties of the test and illustrate its practical relevance by examining Granger non-causality between S&P 500 Index returns and the VIX volatility index. Contrary to the conventional t-test, which is based on a linear mean regression, we find that the VIX index predicts excess returns at both short and long horizons.
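A rough sketch of the statistic's main ingredient: Nadaraya–Watson estimates of the conditional CDF with and without the candidate cause, compared through an empirical L2 discrepancy. The critical values in the paper come from a local bootstrap, which is omitted here; variable names and the bandwidth are illustrative assumptions.

```python
import numpy as np

def nw_cdf(cond, y, cond_eval, y_grid, h=0.5):
    """NW estimate of F(y | cond) on a grid; cond may have several columns."""
    d2 = (((cond_eval[:, None, :] - cond[None, :, :]) / h) ** 2).sum(axis=2)
    w = np.exp(-0.5 * d2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ (y[:, None] <= y_grid[None, :]).astype(float)

rng = np.random.default_rng(3)
n = 400
z = rng.standard_normal(n)                    # own past of the response
x = rng.standard_normal(n)                    # candidate Granger cause
y = 0.5 * z + 0.4 * x + rng.standard_normal(n)
y_grid = np.linspace(-3, 3, 50)
zx = np.column_stack([z, x])
F_full = nw_cdf(zx, y, zx, y_grid)                    # F(y | z, x)
F_restr = nw_cdf(z[:, None], y, z[:, None], y_grid)   # F(y | z)
print(f"L2 discrepancy: {((F_full - F_restr) ** 2).mean():.4f}")
```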

5.
We consider a nonparametric autoregression model under conditional heteroscedasticity with the aim of testing whether the innovation distribution changes in time. To this end, we develop an asymptotic expansion for the sequential empirical process of nonparametrically estimated innovations (residuals). We suggest a Kolmogorov–Smirnov statistic based on the difference of the estimated innovation distributions built from the first ⌊ns⌋ and the last n − ⌊ns⌋ residuals, respectively (0 ≤ s ≤ 1). Weak convergence of the underlying stochastic process to a Gaussian process is proved under the null hypothesis of no change point. The result implies that the test is asymptotically distribution-free. Consistency against fixed alternatives is shown. The small-sample performance of the proposed test is investigated in a simulation study, and the test is applied to a data example.
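A sketch of the split-sample Kolmogorov–Smirnov idea, applied directly to a given residual sequence; the paper works with nonparametrically estimated residuals, whose estimation error the asymptotic theory must account for. The k(n − k)/n^{3/2} weighting below is one common scaling for sequential empirical processes, assumed here for illustration.

```python
import numpy as np

def sequential_ks(residuals):
    """Max over split points of the weighted KS distance between the empirical
    distributions of the first k and the remaining n - k residuals."""
    eps = np.asarray(residuals)
    n = len(eps)
    grid = np.sort(eps)
    best = 0.0
    for k in range(10, n - 10):              # keep both segments non-trivial
        F1 = (eps[:k, None] <= grid).mean(axis=0)
        F2 = (eps[k:, None] <= grid).mean(axis=0)
        best = max(best, (k * (n - k) / n ** 1.5) * np.abs(F1 - F2).max())
    return best

rng = np.random.default_rng(4)
eps = np.concatenate([rng.standard_normal(250),        # pre-change innovations
                      rng.standard_normal(250) * 1.8])  # variance shift
print(f"sequential KS statistic: {sequential_ks(eps):.3f}")
```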

6.
This article addresses the density estimation problem using a nonparametric Bayesian approach. We consider hierarchical mixture models in which the uncertainty about the mixing measure is modeled using the Dirichlet process. The main goal is to build more flexible models for density estimation. We extend the Dirichlet-process normal mixture models previously introduced in the literature in two ways. First, Dirichlet mixtures of skew-normal distributions are considered; that is, in the first stage of the hierarchical model, the normal distribution is replaced by the skew-normal one. Second, we assume a skew-normal distribution as the center measure in the Dirichlet mixture of normal distributions. Some important results related to Bayesian inference in the location-scale skew-normal family are introduced. In particular, we obtain stochastic representations for the full conditional distributions of the location and skewness parameters. The algorithm introduced by MacEachern and Müller (1998) [Estimating mixture of Dirichlet process models. J. Comput. Graph. Statist. 7(2): 223–238] is used to sample from the posterior distributions. The models are compared on simulated data sets. Finally, the well-known Old Faithful Geyser data set is analyzed using the proposed models and the Dirichlet mixture of normal distributions. The model based on the Dirichlet mixture of skew-normal distributions captured the bimodality and skewness shown in the empirical distribution of the data.
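To make the prior concrete, here is a draw from a truncated stick-breaking representation of a Dirichlet-process mixture of skew-normals. The base-measure choices (normal locations, gamma scales, normal shape parameters) are illustrative assumptions, and posterior sampling via the MacEachern–Müller algorithm is not shown.

```python
import numpy as np
from scipy.stats import skewnorm

def draw_dp_skewnormal_mixture(alpha=1.0, n_atoms=50, seed=5):
    """One draw from a (truncated) DP mixture of skew-normals via
    stick-breaking; returns a callable density."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, n_atoms)
    w = v * np.concatenate([[1.0], np.cumprod(1 - v[:-1])])  # stick weights
    loc = rng.normal(0, 2, n_atoms)       # illustrative base-measure draws
    scale = rng.gamma(2.0, 0.5, n_atoms)
    shape = rng.normal(0, 3, n_atoms)     # skewness parameters
    return lambda x: sum(wi * skewnorm.pdf(x, a, m, s)
                         for wi, a, m, s in zip(w, shape, loc, scale))

density = draw_dp_skewnormal_mixture()
xs = np.linspace(-6, 6, 5)
print([round(float(density(x)), 4) for x in xs])
```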

7.
An approximation to the exact distribution of the Wilcoxon rank-sum test (Mann–Whitney U-test) and the Siegel–Tukey test, based on a linear combination of the two-sample t-test applied to ranks and the normal approximation, is compared with the usual normal approximation. The normal approximation results in a conservative test in the tails, while the linear combination of the test statistics provides a test that agrees with tables of the exact distribution in a very high percentage of cases. Sample sizes 3 ≤ m, n ≤ 50 were considered.
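For comparison, the usual normal approximation that the article improves upon, in a short Python sketch (no ties, no continuity correction); the article's refinement additionally blends in a two-sample t-test on the ranks, which is not reproduced here.

```python
import numpy as np
from scipy.stats import norm, rankdata

def wilcoxon_normal_approx(x, y):
    """Rank-sum statistic for sample x with the usual normal approximation."""
    m, n = len(x), len(y)
    ranks = rankdata(np.concatenate([x, y]))
    W = ranks[:m].sum()                        # rank sum of the first sample
    mu = m * (m + n + 1) / 2.0
    sigma = np.sqrt(m * n * (m + n + 1) / 12.0)
    z = (W - mu) / sigma
    return W, 2 * norm.sf(abs(z))              # two-sided p-value

rng = np.random.default_rng(6)
W, p = wilcoxon_normal_approx(rng.normal(0, 1, 15), rng.normal(0.8, 1, 15))
print(f"W = {W:.0f}, approximate p = {p:.4f}")
```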

8.
The problem of comparing two independent groups of univariate data in the sense of testing for equivalence is considered in a fully nonparametric setting. The distribution of the data within each group may be a mixture of a continuous and a discrete component, and no assumptions are made regarding the way in which the distributions of the two groups may differ from each other; in particular, the assumption of a shift model is avoided. The proposed equivalence testing procedure for this scenario refers to the median of the independent difference distribution, i.e. to the median of the differences between independent observations from the test group and the reference group, respectively. The procedure provides an asymptotic equivalence test that is symmetric with respect to the roles of 'test' and 'reference'. It can be described either as a two one-sided tests (TOST) approach or, equivalently, as a confidence interval inclusion rule. A one-sided variant of the approach can be applied analogously to non-inferiority testing problems. The procedure may be generalised to equivalence testing with respect to quantiles other than the median, and is closely related to tolerance-interval-type inference.
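A rough illustration of the confidence-interval-inclusion reading of the procedure, using a bootstrap interval for the median of all pairwise test-minus-reference differences; the paper derives an asymptotic test rather than this bootstrap stand-in, and the equivalence margin below is an arbitrary assumption.

```python
import numpy as np

def equivalence_median_diff(test, ref, margin=0.5, n_boot=2000,
                            alpha=0.05, seed=7):
    """CI-inclusion rule: declare equivalence if a (1 - 2*alpha) bootstrap CI
    for the median of independent differences lies inside (-margin, margin)."""
    rng = np.random.default_rng(seed)
    meds = np.empty(n_boot)
    for b in range(n_boot):
        t = rng.choice(test, len(test))
        r = rng.choice(ref, len(ref))
        meds[b] = np.median(t[:, None] - r[None, :])  # median of all differences
    lo, hi = np.quantile(meds, [alpha, 1 - alpha])
    return lo, hi, (lo > -margin) and (hi < margin)

rng = np.random.default_rng(8)
lo, hi, eq = equivalence_median_diff(rng.normal(0.1, 1, 60),
                                     rng.normal(0.0, 1, 60))
print(f"CI = ({lo:.3f}, {hi:.3f}), equivalent: {eq}")
```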

9.
Nonparametric estimators of the upper boundary of the support of a multivariate distribution are very appealing because they rely on very few assumptions. But in productivity and efficiency analysis, this upper boundary is a production (or cost) frontier, and a parametric form for it allows a richer economic interpretation of the production process under analysis. On the other hand, most parametric approaches rely on assumptions on the stochastic part of the model that are often too restrictive, and they are based on standard regression techniques that fit the shape of the center of the cloud of points rather than its boundary. To overcome these limitations, Florens and Simar [2005. Parametric approximations of nonparametric frontiers. J. Econometrics 124(1), 91–116] propose a two-stage approach that tries to capture the shape of the cloud of points near its frontier by providing parametric approximations of a nonparametric frontier. In this paper we propose an alternative method using the nonparametric quantile-type frontiers introduced in Aragon, Daouia and Thomas-Agnan [2005. Nonparametric frontier estimation: a conditional quantile-based approach. Econometric Theory 21, 358–389] for the nonparametric part of our model. These quantile-type frontiers have the advantage of being more robust to extreme values. Our main result concerns the functional convergence of the quantile-type frontier process. We then provide convergence and asymptotic normality of the resulting estimators of the parametric approximation. The approach is illustrated on simulated and real data sets.

10.
Change-point approach to data analytic wavelet thresholding
Previous proposals in data-dependent wavelet threshold selection have used only the magnitudes of the wavelet coefficients in choosing a threshold for each level. Since a jump (or other unusual feature) in the underlying function results in several non-zero coefficients adjacent to each other, it is possible to use change-point approaches to exploit the information contained in the relative positions of the coefficients as well as their magnitudes. The method introduced here represents an initial step in wavelet thresholding when coefficients are kept in their original order.
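For context, here is the magnitude-only baseline that such methods extend: level-wise soft thresholding with the universal threshold, sketched with PyWavelets. The change-point refinement, which also uses the positions of large adjacent coefficients, is not implemented here; wavelet and level choices are illustrative.

```python
import numpy as np
import pywt

def universal_soft_threshold(signal, wavelet="db4", level=4):
    """Level-wise soft thresholding using only coefficient magnitudes."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale, finest level
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    out = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(out, wavelet)

rng = np.random.default_rng(9)
t = np.linspace(0, 1, 512)
clean = np.where(t < 0.5, 0.0, 1.0)      # a jump: several big adjacent coefficients
noisy = clean + 0.1 * rng.standard_normal(512)
rmse = np.sqrt(np.mean((universal_soft_threshold(noisy) - clean) ** 2))
print(f"RMSE: {rmse:.4f}")
```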

11.
The study focuses on the selection of the order of a general time series process via its conditional density, a characteristic property of which is that it remains constant for every order beyond the true one. Using time series simulated from various nonlinear models, we illustrate how this feature can be traced through conditional density estimation. We study whether two statistics derived from the likelihood function can serve as univariate statistics for determining the order of the process. It is found that a weighted version of the log-likelihood function has desirable robustness properties for detecting the order of the process.

12.
13.
A method for inducing a desired rank correlation matrix on a multivariate input random variable for use in a simulation study is introduced in this paper. This method is simple to use, is distribution-free, preserves the exact form of the marginal distributions of the input variables, and may be used with any type of sampling scheme for which correlation of input variables is a meaningful concept. A Monte Carlo study provides an estimate of the bias and variability associated with the method. Input variables used in a model for the study of geologic disposal of radioactive waste provide an example of the usefulness of this procedure. A textbook example shows how the output may be affected by the method presented in this paper.
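A simplified sketch of the reordering idea (in the spirit of Iman and Conover): give van der Waerden scores the target correlation via a Cholesky factor, then permute each marginal sample to match the score ranks. The full method also corrects for the sample correlation of the score matrix, which this sketch omits.

```python
import numpy as np
from scipy.stats import norm, rankdata

def induce_rank_correlation(samples, target_corr, seed=10):
    """Permute each column of `samples` so its rank correlation matrix
    approximates `target_corr`; the marginal distributions are untouched."""
    rng = np.random.default_rng(seed)
    n, k = samples.shape
    scores = norm.ppf(np.arange(1, n + 1) / (n + 1))   # van der Waerden scores
    base = np.column_stack([rng.permutation(scores) for _ in range(k)])
    correlated = base @ np.linalg.cholesky(target_corr).T
    out = np.empty_like(samples)
    for j in range(k):
        ranks = rankdata(correlated[:, j]).astype(int) - 1
        out[:, j] = np.sort(samples[:, j])[ranks]      # reorder, keep marginal
    return out

rng = np.random.default_rng(11)
X = np.column_stack([rng.exponential(1, 1000), rng.uniform(0, 1, 1000)])
Y = induce_rank_correlation(X, np.array([[1.0, 0.7], [0.7, 1.0]]))
print(np.corrcoef(rankdata(Y[:, 0]), rankdata(Y[:, 1]))[0, 1])  # ~0.7
```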

14.
An empirical Bayes approach to a variables acceptance sampling plan problem is presented, and an empirical Bayes rule is developed that is shown to be asymptotically optimal under general conditions. The problem considered is one in which the ratio of the costs of accepting defective items and rejecting non-defective items is specified. Sampling costs are not considered, and the size of the sample taken from each lot is fixed and constant. The empirical Bayes estimation of the Bayes rule is shown to require the estimation of a conditional probability. An estimator for conditional probabilities of the form needed is derived and shown to have good asymptotic properties.

15.
We describe a set of procedures that automate many algebraic calculations common in statistical asymptotic theory. The procedures are very general and serve to unify the study of likelihood and likelihood-type functions. The procedures emulate techniques one would normally carry out by hand; this strategy is emphasised throughout the paper. The purpose of the software is to provide a practical alternative to difficult manual algebraic computations. The result is a method that is quick and free of clerical error.

16.
We propose a novel Dirichlet-based Pólya tree (D-P tree) prior on the copula and, based on the D-P tree prior, a nonparametric Bayesian inference procedure. Through theoretical analysis and simulations, we show that the flexibility of the D-P tree prior ensures its consistency in copula estimation, allowing it to detect more subtle and complex copula structures than earlier nonparametric Bayesian models, such as a Gaussian copula mixture. Furthermore, the continuity of the imposed D-P tree prior leads to a more favourable smoothing effect in copula estimation than classic frequentist methods, especially with small sets of observations. We also apply our method to copula prediction between the S&P 500 index and IBM stock prices during the 2007–08 financial crisis, finding that D-P tree-based methods enjoy strong robustness and flexibility over classic methods under such irregular market behaviour.

17.
The aim of our paper is to elaborate a theoretical methodology, based on the Malliavin calculus, to compute the conditional expectation E(P_t(X_t) | X_s) for s ≤ t, where the only state variable follows a J-process [Jerbi Y. A new closed-form solution as an extension of the Black–Scholes formula allowing smile curve plotting. Quant Finance. 2013. doi:10.1080/14697688.2012.762458]. The theoretical results are applied to American option pricing, extending the work of Bally et al. [Pricing and hedging American options by Monte Carlo methods using a Malliavin calculus approach. Monte Carlo Methods Appl. 2005;11(2):97–133], just as the J-process (with additional parameters λ and θ) is an extension of the Wiener process. The introduction of these parameters induces skewness and kurtosis effects, i.e. a smile curve, allowing the model to fit the reality of the financial market. Jerbi (2013) showed that using the J-process is equivalent to using a stochastic volatility model based on the Wiener process, as in Heston's model. The present work extends this result to American options. We study the influence of the parameters λ and θ on the American option price and find empirical results consistent with option theory.

18.
19.
One of the standard problems in statistics consists of determining the relationship between a response variable and a single predictor variable through a regression function. Background scientific knowledge is often available suggesting that the regression function should have a certain shape (e.g. monotonically increasing or concave) but not necessarily a specific parametric form. Bernstein polynomials have been used to impose such shape restrictions on regression functions: they are known to provide a smooth estimate over equidistant knots and are used in this paper because of their ease of implementation, continuous differentiability, and theoretical properties. In this work, we demonstrate a connection between the monotone regression problem and the variable selection problem in the linear model. We develop a Bayesian procedure for fitting the monotone regression model by adapting currently available variable selection procedures, and we demonstrate the effectiveness of our method through simulations and the analysis of real data.
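To show the shape restriction at work, here is a frequentist stand-in: a monotone-increasing Bernstein polynomial fit by constrained least squares, where monotonicity follows from writing the coefficients as a cumulative sum of non-negative increments. The paper's Bayesian variable-selection fit is not reproduced; the degree and names are illustrative assumptions.

```python
import numpy as np
from scipy.special import comb
from scipy.optimize import lsq_linear

def fit_monotone_bernstein(x, y, degree=8):
    """Monotone-increasing fit: Bernstein coefficients are a cumulative sum
    of non-negative increments, so the fitted function is non-decreasing."""
    N = degree
    k = np.arange(N + 1)
    B = comb(N, k) * x[:, None] ** k * (1 - x[:, None]) ** (N - k)  # basis
    L = np.tril(np.ones((N + 1, N + 1)))   # coefficients b = L @ theta
    lb = np.concatenate([[-np.inf], np.zeros(N)])  # theta_0 free, rest >= 0
    theta = lsq_linear(B @ L, y, bounds=(lb, np.inf)).x
    coef = L @ theta
    return lambda xs: (comb(N, k) * xs[:, None] ** k
                       * (1 - xs[:, None]) ** (N - k)) @ coef

rng = np.random.default_rng(12)
x = rng.uniform(0, 1, 200)
y = np.sqrt(x) + 0.1 * rng.standard_normal(200)   # monotone truth plus noise
f = fit_monotone_bernstein(x, y)
print(f(np.array([0.1, 0.5, 0.9])))
```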

20.
A Bayesian approach to change-point problems is considered. A test of change versus no change is formulated using the notion of predictive distributions. Bayes factors for change versus no change are developed for the exponential families of distributions with conjugate priors. Under vague prior information, both Bayes factors and pseudo-Bayes factors are considered. A new result is developed which shows how the overall Bayes factor decomposes into Bayes factors at each point. Finally, an example is provided in which the computations are performed using the concept of imaginary observations.
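A numerical sketch of the Bayes factor and its pointwise decomposition for exponential data with a conjugate Gamma(a, b) prior; the hyperparameters and the uniform prior over change locations are assumptions for illustration.

```python
import numpy as np
from scipy.special import gammaln

def log_marginal_exponential(x, a=1.0, b=1.0):
    """log m(x) for i.i.d. Exponential(lam) data with lam ~ Gamma(a, b)."""
    n, s = len(x), np.sum(x)
    return a * np.log(b) - gammaln(a) + gammaln(a + n) - (a + n) * np.log(b + s)

def log_bayes_factor_change(x, a=1.0, b=1.0):
    """log BF of 'one change point' vs 'no change', uniform over its location."""
    n = len(x)
    per_point = np.array([log_marginal_exponential(x[:k], a, b) +
                          log_marginal_exponential(x[k:], a, b)
                          for k in range(1, n)])   # contribution of each split
    log_m1 = np.logaddexp.reduce(per_point) - np.log(n - 1)
    return log_m1 - log_marginal_exponential(x, a, b)

rng = np.random.default_rng(13)
x = np.concatenate([rng.exponential(1.0, 40), rng.exponential(3.0, 40)])
print(f"log Bayes factor: {log_bayes_factor_change(x):.2f}")
```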
