Similar documents
20 similar documents retrieved (search time: 31 ms)
1.
This article proposes a new class of copula-based dynamic models for high-dimensional conditional distributions, facilitating the estimation of a wide variety of measures of systemic risk. Our proposed models draw on successful ideas from the literature on modeling high-dimensional covariance matrices and on recent work on models for general time-varying distributions. Our use of copula-based models enables the joint model to be estimated in stages, greatly reducing the computational burden. We use the proposed new models to study a collection of daily credit default swap (CDS) spreads on 100 U.S. firms over the period 2006 to 2012. We find that while the probability of distress for individual firms has fallen greatly since the financial crisis of 2008–2009, the joint probability of distress (a measure of systemic risk) is substantially higher now than in the precrisis period. Supplementary materials for this article are available online.
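The contrast between individual and joint distress probabilities can be sketched with a one-factor (equicorrelated) Gaussian copula. This is an illustrative assumption only, not the authors' dynamic copula specification; the correlation `rho` and marginal distress probability `marginal_p` are hypothetical values.

```python
import math
import random
from statistics import NormalDist

def joint_distress_prob(n_firms=3, marginal_p=0.05, rho=0.7,
                        n_sim=200_000, seed=42):
    """Monte Carlo estimate of P(all firms in distress) when each firm's
    latent variable is X_i = sqrt(rho)*Z + sqrt(1-rho)*e_i (one common
    factor Z), so that pairwise latent correlation is rho and each firm's
    marginal distress probability is marginal_p."""
    rng = random.Random(seed)
    z_p = NormalDist().inv_cdf(marginal_p)   # distress threshold on the latent scale
    a, b = math.sqrt(rho), math.sqrt(1 - rho)
    hits = 0
    for _ in range(n_sim):
        common = rng.gauss(0, 1)
        if all(a * common + b * rng.gauss(0, 1) < z_p for _ in range(n_firms)):
            hits += 1
    return hits / n_sim
```

With `rho = 0.7` the joint probability is orders of magnitude above the independence value `marginal_p ** 3`, mirroring the abstract's point that systemic risk can rise even when marginal distress probabilities are low.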

2.
This article introduces a semiparametric autoregressive conditional heteroscedasticity (ARCH) model that has conditional first and second moments given by autoregressive moving average and ARCH parametric formulations but a conditional density that is assumed only to be sufficiently smooth to be approximated by a nonparametric density estimator. For several particular conditional densities, the relative efficiency of the quasi-maximum likelihood estimator is compared with maximum likelihood under correct specification. These potential efficiency gains for a fully adaptive procedure are compared in a Monte Carlo experiment with the observed gains from using the proposed semiparametric procedure, and it is found that the estimator captures a substantial proportion of the potential. The estimator is applied to daily stock returns from small firms that are found to exhibit conditional skewness and kurtosis and to the British pound to dollar exchange rate.
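The Gaussian quasi-maximum likelihood benchmark against which the semiparametric estimator is compared can be illustrated for an ARCH(1) model. This is a minimal sketch: a coarse grid search stands in for a proper optimizer, and the parameter values are illustrative, not taken from the paper.

```python
import math
import random

def simulate_arch1(omega, alpha, t, seed=1):
    """Simulate y_t = sigma_t * z_t with sigma_t^2 = omega + alpha * y_{t-1}^2."""
    rng = random.Random(seed)
    y, prev = [], 0.0
    for _ in range(t):
        sig2 = omega + alpha * prev ** 2
        prev = math.sqrt(sig2) * rng.gauss(0, 1)
        y.append(prev)
    return y

def qml_arch1(y, omegas, alphas):
    """Maximize the Gaussian quasi-log-likelihood over a parameter grid.
    The Gaussian likelihood is used even if the true innovations are
    non-Gaussian -- that is what makes this a QML estimator."""
    best, best_ll = None, -math.inf
    for w in omegas:
        for a in alphas:
            ll, prev = 0.0, 0.0
            for yt in y:
                sig2 = w + a * prev ** 2
                ll -= 0.5 * (math.log(sig2) + yt ** 2 / sig2)
                prev = yt
            if ll > best_ll:
                best, best_ll = (w, a), ll
    return best
```

On a long simulated series the grid-based QML estimates land close to the true `(omega, alpha)`; the paper's semiparametric procedure aims to recover part of the efficiency lost by the Gaussian assumption.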

3.
In this paper, regressive models are proposed for modeling a sequence of transitions in longitudinal data. These models predict the future status of the outcome variable for individuals on the basis of their underlying background characteristics or risk factors. Estimation of the parameters, and of conditional and unconditional probabilities, is shown for repeated measures. Goodness-of-fit tests based on the deviance and the Hosmer–Lemeshow procedures are extended to repeated measures. In addition, to assess the suitability of the proposed models for predicting disease status, the ROC curve approach is extended to repeated measures. The procedure is shown for conditional models of any order, as well as for the unconditional model, to predict the outcome at the end of the study; corresponding test procedures are also suggested. For testing differences between areas under the ROC curves in subsequent follow-ups, two test procedures are employed, one of which is based on a permutation test. An unconditional model is proposed on the basis of conditional models for the progression of depression among the elderly population in the USA, using longitudinal data from the Health and Retirement Survey. The illustration shows that conditionally observed disease progression can be employed to predict the outcome, and that selected variables and previous outcomes can be used for prediction. The results show that the percentage of correct predictions of disease is quite high, and the measures of sensitivity and specificity are also reasonably impressive. The extended measures of area under the ROC curve show that the models provide a reasonably good fit for predicting disease status over a long period of time.
This procedure should have extensive applications in longitudinal data analysis where the objective is to obtain estimates of unconditional probabilities from a series of conditional transitional models.

4.
Bootstrapping the conditional copula
This paper is concerned with inference about the dependence or association between two random variables conditionally upon the given value of a covariate. A way to describe such a conditional dependence is via a conditional copula function. Nonparametric estimators for a conditional copula then lead to nonparametric estimates of conditional association measures such as a conditional Kendall's tau. The limiting distributions of nonparametric conditional copula estimators are rather involved. In this paper we propose a bootstrap procedure for approximating these distributions and their characteristics, and establish its consistency. We apply the proposed bootstrap procedure for constructing confidence intervals for conditional association measures, such as a conditional Blomqvist beta and a conditional Kendall's tau. The performances of the proposed methods are investigated via a simulation study involving a variety of models, ranging from models in which the dependence (weak or strong) on the covariate is only through the copula and not through the marginals, to models in which this dependence appears in both the copula and the marginal distributions. As a conclusion we provide practical recommendations for constructing bootstrap-based confidence intervals for the discussed conditional association measures.
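A conditional Kendall's tau and a bootstrap confidence interval for it can be sketched with kernel weights in the covariate. This is a simplified illustration, not the paper's estimator: the Gaussian kernel, bandwidth `h`, and the naive i.i.d. resampling of `(X, Y1, Y2)` triples are all assumptions made here for brevity.

```python
import math
import random

def weighted_kendall_tau(y1, y2, w):
    """Kendall's tau with pair weights w_i * w_j (weights come from a
    kernel in the covariate, localizing the estimate at a covariate value)."""
    num = den = 0.0
    n = len(y1)
    for i in range(n):
        for j in range(i + 1, n):
            wij = w[i] * w[j]
            s = (y1[i] - y1[j]) * (y2[i] - y2[j])
            num += wij * (1 if s > 0 else -1 if s < 0 else 0)
            den += wij
    return num / den

def conditional_tau(x, y1, y2, x0, h):
    """Kendall's tau conditional on X = x0, via Gaussian kernel weights."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    return weighted_kendall_tau(y1, y2, w)

def bootstrap_ci(x, y1, y2, x0, h, b=200, level=0.95, seed=7):
    """Percentile bootstrap interval from i.i.d. resampling of triples."""
    rng = random.Random(seed)
    n = len(x)
    stats = []
    for _ in range(b):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(conditional_tau([x[i] for i in idx], [y1[i] for i in idx],
                                     [y2[i] for i in idx], x0, h))
    stats.sort()
    return stats[int((1 - level) / 2 * b)], stats[int((1 + level) / 2 * b) - 1]
```

For strongly positively dependent responses the interval should exclude zero, which is the kind of inference the paper's consistent bootstrap makes rigorous.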

5.
《Econometric Reviews》2012,31(1):1-26
Abstract

This paper proposes a nonparametric procedure for testing conditional quantile independence using projections. Relative to existing smoothed nonparametric tests, the resulting test statistic: (i) detects high-frequency local alternatives that converge to the null hypothesis in probability at a faster rate, and (ii) yields improvements in finite-sample power when a large number of variables are included under the alternative. In addition, it allows the researcher to include qualitative information and, if desired, direct the test against specific subsets of alternatives without imposing any functional form on them. We use the weighted Nadaraya–Watson (WNW) estimator of the conditional quantile function, avoiding the boundary problems in estimation and testing, and prove weak uniform consistency (with rate) of the WNW estimator for absolutely regular processes. The procedure is applied to a study of risk spillovers among banks. We show that the methodology generalizes some recently proposed measures of systemic risk, and we use the quantile framework to assess the intensity of risk spillovers among individual financial institutions.

6.
In observational studies of the interaction between exposures on a dichotomous outcome, one parameter of a regression model is usually used to describe the interaction, leading to a single measure of it. In this article we use the conditional risk of the outcome given exposures and covariates to describe the interaction, and obtain five different measures: the difference between the marginal risk differences, the ratio of the marginal risk ratios, the ratio of the marginal odds ratios, the ratio of the conditional risk ratios, and the ratio of the conditional odds ratios. These measures reflect different aspects of the interaction. Using only one regression model for the conditional risk, we obtain maximum-likelihood (ML) based point and interval estimates of these measures, which are efficient owing to the nature of ML. The ML estimates of the model parameters yield ML estimates of the measures; the approximate normal distribution of the parameter estimates yields approximate (non-normal) distributions of the estimated measures and, from these, confidence intervals. The method can be easily implemented and is illustrated with a medical example.
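With no covariates, several of the five measures coincide and can be computed directly from the four conditional risks implied by a logistic model. The coefficients below are hypothetical; in this saturated two-exposure model the ratio of odds ratios equals exp(b3) exactly.

```python
import math

def expit(v):
    return 1.0 / (1.0 + math.exp(-v))

def interaction_measures(b0, b1, b2, b3):
    """Interaction measures from the logistic model
    P(Y=1 | A=a, B=b) = expit(b0 + b1*a + b2*b + b3*a*b).
    Without covariates, marginal and conditional risks coincide, so the
    five measures in the abstract reduce to three distinct quantities."""
    r = {(a, b): expit(b0 + b1 * a + b2 * b + b3 * a * b)
         for a in (0, 1) for b in (0, 1)}
    odds = {k: v / (1 - v) for k, v in r.items()}
    return {
        "diff_of_risk_differences": (r[1, 1] - r[0, 1]) - (r[1, 0] - r[0, 0]),
        "ratio_of_risk_ratios": (r[1, 1] / r[0, 1]) / (r[1, 0] / r[0, 0]),
        "ratio_of_odds_ratios": (odds[1, 1] / odds[0, 1]) / (odds[1, 0] / odds[0, 0]),
    }
```

This makes the abstract's point concrete: one fitted model delivers several interaction measures, and they generally differ in value (and even in sign of departure from "no interaction") despite sharing the same parameters.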

7.
Using Monte Carlo simulation, we compare the performance of five asymptotic test procedures and a randomized permutation test procedure for testing the homogeneity of the odds ratio under the stratified matched-pair design. We find that the weighted-least-squares test procedure is liberal, while Pearson's goodness-of-fit (PGF) test procedure with the continuity correction is conservative. PGF without the continuity correction, the conditional likelihood ratio test procedure, and the randomized permutation test procedure generally perform well with respect to Type I error. We use data from a case–control study of endometrial cancer incidence, published elsewhere, to illustrate the use of these test procedures.

8.
Correction for heteroscedasticity in returns from portfolios long in small firms and short in large firms listed on the New York Stock Exchange reduces the estimate of market risk and increases the estimated abnormal return. Greatly improved diagnostic test statistics are obtained, strengthening the evidence for the existence of positive average abnormal returns from small firms. Periodicity of order 6 and 12 months is identified. The estimation procedure operates by exploiting the autoregressive pattern of heteroscedasticity in the return data.

9.
This paper is concerned with testing and dating structural breaks in the dependence structure of multivariate time series. We consider a cumulative sum (CUSUM) type test for constant copula-based dependence measures, such as Spearman's rank correlation and quantile dependencies. The asymptotic null distribution is not known in closed form and critical values are estimated by an i.i.d. bootstrap procedure. We analyze size and power properties in a simulation study under different dependence measure settings, such as skewed and fat-tailed distributions. To date breakpoints and to decide whether two estimated break locations belong to the same break event, we propose a pivot confidence interval procedure. Finally, we apply the test to the historical data of 10 large financial firms during the last financial crisis from 2002 to mid-2013.
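Break dating for a rank-correlation measure can be sketched by scanning candidate split points and contrasting the pre- and post-split Spearman correlations. This is a simplified split-sample statistic in the spirit of a CUSUM test, not the paper's exact statistic, and the bootstrap critical values the paper uses for the test decision are omitted.

```python
import random

def ranks(v):
    """Ranks 1..n (no tie handling; fine for continuous data)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    m = (n + 1) / 2
    num = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    den = (sum((a - m) ** 2 for a in rx) * sum((b - m) ** 2 for b in ry)) ** 0.5
    return num / den

def date_break(x, y, trim=20):
    """Return the split k maximizing the size-weighted contrast
    |rho(1..k) - rho(k+1..n)|, as a simple break-date estimate."""
    n = len(x)
    best_k, best_stat = None, -1.0
    for k in range(trim, n - trim):
        d = abs(spearman(x[:k], y[:k]) - spearman(x[k:], y[k:]))
        stat = (k * (n - k) / n ** 2) * d
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat
```

On a series whose dependence switches from independence to strong positive correlation at mid-sample, the argmax lands near the true change point; the paper's pivot confidence interval then quantifies the dating uncertainty.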

10.
Quantile regression is a technique to estimate conditional quantile curves. It provides a comprehensive picture of a response contingent on explanatory variables. In a flexible modeling framework, a specific form of the conditional quantile curve is not fixed a priori. This motivates a local parametric rather than a global fixed model fitting approach. A nonparametric smoothing estimator of the conditional quantile curve requires balancing local curvature against stochastic variability. In this paper, we suggest a local model selection technique that provides an adaptive estimator of the conditional quantile regression curve at each design point. Theoretical results show that the proposed adaptive procedure performs as well as an oracle that would minimize the local estimation risk for the problem at hand. We illustrate the performance of the procedure in an extensive simulation study and consider two applications: tail dependence analysis for the Hong Kong stock market, and analysis of the distributions of the risk factors of temperature dynamics.

11.
Summary. We investigate the operating characteristics of the Benjamini–Hochberg false discovery rate procedure for multiple testing. This is a distribution-free method that controls the expected fraction of falsely rejected null hypotheses among those rejected. The paper provides a framework for understanding more about this procedure. We first study the asymptotic properties of the 'deciding point' D that determines the critical p-value. From this, we obtain explicit asymptotic expressions for a particular risk function. We introduce the dual notion of false non-rejections and we consider a risk function that combines the false discovery rate and false non-rejections. We also consider the optimal procedure with respect to a measure of conditional risk.
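The Benjamini–Hochberg step-up procedure studied here is short enough to state in full: sort the p-values, find the largest rank i with p_(i) <= i*q/m (the 'deciding point' of the abstract), and reject everything at or below it.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return the (sorted) indices of hypotheses rejected at FDR level q.
    Step-up rule: with sorted p-values p_(1) <= ... <= p_(m), let k be the
    largest i with p_(i) <= i*q/m; reject the k smallest p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank  # keep updating: the *largest* qualifying rank wins
    return sorted(order[:k])
```

Note the step-up character: a p-value above its own threshold can still be rejected if a larger rank qualifies, which is exactly why the deciding point's behavior drives the procedure's risk properties.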

12.
Real-world data often fail to meet the underlying assumption of population normality. The rank transformation (RT) procedure has been recommended as an alternative to the parametric factorial analysis of covariance (ANCOVA). The purpose of this study was to compare the Type I error and power properties of the RT ANCOVA with those of the parametric procedure in the context of a completely randomized balanced 3 × 4 factorial layout with one covariate. The study was concerned with tests of homogeneity of regression coefficients and of interaction under conditional (non)normality. Both procedures displayed erratic Type I error rates for the test of homogeneity of regression coefficients under conditional nonnormality. With all parametric assumptions valid, the simulation results demonstrated that the RT ANCOVA failed as a test for either homogeneity of regression coefficients or interaction, due to severe Type I error inflation. The error inflation was most severe when departures from conditional normality were extreme. The RT procedure was also associated with a loss of power. It is recommended that the RT procedure not be used as an alternative to factorial ANCOVA, despite its endorsement in SAS, IMSL, and other respected sources.

13.
Summary. Meta-analyses of sets of clinical trials often combine risk differences from several 2×2 tables according to a random-effects model. The DerSimonian–Laird random-effects procedure, widely used for estimating the population mean risk difference, weights the risk difference from each primary study inversely proportionally to an estimate of its variance (the sum of the between-study variance and the conditional within-study variance). Because those weights are not independent of the risk differences, however, the procedure sometimes exhibits bias and unnatural behavior. The present paper proposes a modified weighting scheme that uses the unconditional within-study variance to avoid this source of bias. The modified procedure has variance closer to that available from weighting by ideal weights when such weights are known. We studied the modified procedure in extensive simulation experiments using situations whose parameters resemble those of actual studies in medical research. For comparison we also included two unbiased procedures, the unweighted mean and a sample-size-weighted mean; their relative variability depends on the extent of heterogeneity among the primary studies. An example illustrates the application of the procedures to actual data and the differences among the results. This research was supported by Grant HS 05936 from the Agency for Health Care Policy and Research to Harvard University.
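The standard DerSimonian–Laird procedure that the paper critiques can be sketched in a few lines: estimate the between-study variance tau^2 by the method of moments from Cochran's Q, then pool with weights 1/(v_i + tau^2). The effect sizes and variances below are hypothetical; the paper's modified (unconditional-variance) weighting is not implemented here.

```python
def dersimonian_laird(effects, variances):
    """Classic DL random-effects pooling.
    effects: per-study risk differences y_i; variances: within-study v_i.
    Returns (pooled estimate, tau^2, standard error)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q and the method-of-moments tau^2 (truncated at 0)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    # re-weight by the total (within + between) variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return pooled, tau2, se
```

Because `w_star` depends on the same `y_i` through Q, the weights and effects are not independent — the exact source of the bias the abstract's modified scheme is designed to avoid.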

14.
Sequential monitoring of efficacy and safety data has become a vital component of modern clinical trials. It affords companies the opportunity to stop studies early when it appears that the primary objective will not be achieved, or when there is clear evidence that the primary objective has already been met. This paper introduces the new concept of the backward conditional hypothesis test (BCHT) for evaluating clinical trial success. Unlike the regular conditional power approach, which relies on the probability that the final study result will be statistically significant given the current interim look, the BCHT is constructed within the hypothesis test framework. That framework uses a significance level in place of the arbitrary fixed futility index of the conditional power method. Additionally, the BCHT is proved to be a uniformly most powerful test. Noteworthy features of the BCHT method compared with the conditional power method are presented. Copyright © 2012 John Wiley & Sons, Ltd.
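The regular conditional power benchmark the BCHT is contrasted with can be sketched via the standard B-value formulation (the BCHT itself is not specified in enough detail in the abstract to implement). Assumptions here: a one-sided normal test, information fraction `t`, and the common "current trend" drift estimate theta = B(t)/t.

```python
from statistics import NormalDist

def conditional_power(z_interim, info_frac, alpha=0.025, drift=None):
    """Conditional power of a one-sided test at information fraction t.
    B-value: B(t) = Z_t * sqrt(t); under drift theta the remaining
    increment satisfies B(1) - B(t) ~ N(theta*(1-t), 1-t), so
    CP = P(B(1) > z_{1-alpha}) given the interim data.
    drift=None plugs in the current-trend estimate theta = B(t)/t."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha)
    b = z_interim * info_frac ** 0.5
    theta = b / info_frac if drift is None else drift
    rem = 1 - info_frac
    return 1 - nd.cdf((z_crit - b - theta * rem) / rem ** 0.5)
```

A futility rule of the kind the abstract calls "an arbitrary fixed futility index" would stop the trial when this quantity falls below some chosen cutoff (e.g. 0.1), which is precisely the arbitrariness the BCHT's significance-level framework is meant to replace.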

15.
This paper considers nonlinear regression models in which neither the response variable nor the covariates can be directly observed, but are measured with both multiplicative and additive distortion measurement errors. We propose conditional-variance and conditional-mean calibration estimation methods for the unobserved variables; a nonlinear least squares estimator is then proposed. For hypothesis testing of the parameters, a restricted estimator under the null hypothesis and a test statistic are proposed. The asymptotic properties of the estimator and test statistic are established. Lastly, for the model checking problem, a residual-based empirical process test statistic marked by proper functions of the regressors is proposed. We further suggest a bootstrap procedure to calculate critical values. Simulation studies demonstrate the performance of the proposed procedure, and a real example is analysed to illustrate its practical usage.

16.
We introduce a new multivariate GARCH model with multivariate thresholds in conditional correlations and develop a two-step estimation procedure that is feasible in large dimensional applications. Optimal threshold functions are estimated endogenously from the data and the model conditional covariance matrix is ensured to be positive definite. We study the empirical performance of our model in two applications using U.S. stock and bond market data. In both applications our model has, in terms of statistical and economic significance, higher forecasting power than several other multivariate GARCH models for conditional correlations.

17.
We introduce a two-step procedure, in the context of ultra-high dimensional additive models, which aims to reduce the size of the covariate vector and to distinguish linear from nonlinear effects among the nonzero components. In the first step, our proposed screening procedure is constructed from the cumulative distribution function and the conditional expectation of the response, in the framework of marginal correlation; B-splines and empirical distribution functions are used to estimate these two measures. The sure screening property of this procedure is also established. In the second step, a double-penalization based procedure is applied to identify nonzero and linear components simultaneously. The performance of the designed method is examined on several test functions to show its capabilities against competitor methods as the error distribution is varied. Simulation studies show that the proposed screening procedure can be applied to ultra-high dimensional data and detects the influential covariates well, and they demonstrate its superiority over existing methods. The method is also applied to identify the genes most influential for overexpression of a G protein-coupled receptor in mice.

18.
Summary. The analysis of covariance is a technique that is used to improve the power of a k-sample test by adjusting for concomitant variables. If the end point is the time of survival, and some observations are right censored, the score statistic from the Cox proportional hazards model is the method most commonly used to test the equality of conditional hazard functions. In many situations, however, the proportional hazards model assumptions are not satisfied: the relative risk function is not time invariant, or is not represented as a log-linear function of the covariates. We propose an asymptotically valid k-sample test statistic for comparing conditional hazard functions which does not require the assumption of proportional hazards, a parametric specification of the relative risk function, or randomization of group assignment. Simulation results indicate that the performance of this statistic is satisfactory. The methodology is demonstrated on a data set in prostate cancer.

19.
When measurement error is present in covariates, it is well known that naïvely fitting a generalized linear model results in inconsistent inferences. Several methods have been proposed to adjust for measurement error without making undue distributional assumptions about the unobserved true covariates. Stefanski and Carroll focused on an unbiased estimating function rather than a likelihood approach. Their estimating function, known as the conditional score, exists for logistic regression models but has two problems: a poorly behaved Wald test and multiple solutions. They suggested a heuristic procedure to identify the best solution that works well in practice but has little theoretical support compared with maximum likelihood estimation. To help to resolve these problems, we propose a conditional quasi-likelihood to accompany the conditional score that provides an alternative to Wald's test and successfully identifies the consistent solution in large samples.

20.
This paper proposes a consistent parametric test of Granger-causality in quantiles. Although the concept of Granger-causality is defined in terms of the conditional distribution, most articles have tested Granger-causality using conditional mean regression models in which the causal relations are linear. Rather than focusing on a single part of the conditional distribution, we develop a test that evaluates nonlinear causalities and possible causal relations in all conditional quantiles, which provides a sufficient condition for Granger-causality when all quantiles are considered. The proposed test statistic has correct asymptotic size, is consistent against fixed alternatives, and has power against Pitman deviations from the null hypothesis. As the proposed test statistic is asymptotically nonpivotal, we tabulate critical values via a subsampling approach. We present Monte Carlo evidence and an application considering the causal relation between the gold price, the USD/GBP exchange rate, and the oil price.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号