20 similar documents found; search took 875 ms.
1.
A Bottom-Up Dynamic Model of Portfolio Credit Risk with Stochastic Intensities and Random Recoveries
Tomasz R. Bielecki, Areski Cousin, Stéphane Crépey, Alexander Herbertsson. Communications in Statistics - Theory and Methods, 2014, 43(7): 1362-1389
In Bielecki et al. (2014a), the authors introduced a Markov copula model of portfolio credit risk in which pricing and hedging can be done in a theoretically and practically sound way. Further theoretical background and practical details are developed in Bielecki et al. (2014b,c), where the numerical illustrations assumed deterministic intensities and constant recoveries. In the present paper, we show how to incorporate stochastic default intensities and random recoveries into the bottom-up modeling framework of Bielecki et al. (2014a) while preserving numerical tractability. These two features are of primary importance for applications such as CVA computations on credit derivatives (Assefa et al., 2011; Bielecki et al., 2012), as CVA is sensitive to the stochastic nature of credit spreads, and random recoveries make it possible to achieve satisfactory calibration even for “badly behaved” data sets. This article is thus a complement to Bielecki et al. (2014a), Bielecki et al. (2014b), and Bielecki et al. (2014c).
2.
Feng Hu. Communications in Statistics - Theory and Methods, 2017, 46(7): 3586-3598
In this paper, our aim is to obtain the modulus of continuity theorem for G-Brownian motion. It turns out that our theorem is a natural extension of the classical result obtained by Lévy (1937).
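For reference, the classical result being extended is Lévy's modulus of continuity for standard Brownian motion \(B\); the G-Brownian version adapts this statement to the sublinear-expectation framework:

```latex
\lim_{h \to 0^{+}} \; \sup_{0 \le t \le 1-h} \; \frac{|B_{t+h} - B_{t}|}{\sqrt{2h \ln(1/h)}} = 1 \quad \text{almost surely.}
```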
3.
In this article, we consider two different shared frailty regression models under the assumption of the Gompertz distribution as the baseline. The gamma distribution is the most common choice for the frailty distribution; to compare with the gamma frailty model, we also consider the inverse Gaussian shared frailty model. We fit both models to a real-life bivariate survival data set of acute leukemia remission times (Freireich et al., 1963). Analysis is performed using Markov chain Monte Carlo methods. Model comparison is made using Bayesian model selection criteria, and a well-fitting model is suggested for the acute leukemia data.
4.
Josemar Rodrigues. Communications in Statistics - Theory and Methods, 2013, 42(18): 2943-2952
In this article, we obtain a mixture representation of the maximum entropy density introduced by Rodrigues (2004) via Laplace approximation. This representation suggests, as in Sklar (1959), a dependence structure through Archimedean copulas, independently of the specified marginal distributions. This result can be used as a natural Bayesian and non-Bayesian procedure to estimate the dependence function and the marginals separately.
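To illustrate the Archimedean dependence structure referred to, the Clayton family is a standard example whose generator is the Laplace transform of a gamma mixing variable. This is a generic illustration of the copula-mixture connection, not the specific construction of Rodrigues (2004):

```python
def clayton_copula(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0.

    Its generator psi(t) = (1 + t)^(-1/theta) is the Laplace transform of a
    Gamma(1/theta, 1) variable, which is the mixture representation behind
    this Archimedean family.
    """
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

# theta -> 0+ recovers independence; large theta approaches comonotonicity.
print(clayton_copula(0.5, 0.5, 1e-8))   # close to 0.25 = independence copula
print(clayton_copula(0.5, 0.5, 50.0))   # close to 0.5 = comonotonic bound
```

Varying `theta` changes only the dependence, never the uniform marginals, which is exactly the separation between dependence function and marginals that the abstract exploits.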
5.
This article proposes a marginalized model for repeated or otherwise hierarchical, overdispersed time-to-event outcomes, adapting the so-called combined model for time-to-event outcomes of Molenberghs et al. (in press), which combined gamma and normal random effects. The two sets of random effects accommodate, simultaneously, correlation between repeated measures and overdispersion. The proposed version allows for a direct marginal interpretation of all model parameters. The outcomes are allowed to be censored. Two estimation methods are proposed: full likelihood and pairwise likelihood. The proposed model is applied to data from a so-called comet assay and to data from recurrent asthma attacks in children. Both estimation methods perform very well. Simulation results show that the marginalized combined model behaves similarly to the ordinary combined model in terms of point estimation and precision. It is also observed that pairwise likelihood requires more computation time but is less sensitive to starting values and more stable in terms of bias as sample size and censoring percentage increase, leaving room for both methods in practice.
6.
This article extends the correlation methodology developed by Chinchilli et al. (2005) for the 2 × 2 crossover design to more complex crossover designs for clinical trials. We describe how the methodology can be adapted to a general type of two-treatment crossover design which includes either at least two sequences or at least two treatment periods or both. We then derive the asymptotic theory for the corresponding correlation statistics, investigate the statistical accuracy of the estimators via bootstrap analyses, and demonstrate their use with two real data examples.
7.
In this article, a generalized Lévy model is proposed and its parameters are estimated in high-frequency data settings. An infinitesimal generator of Lévy processes is used to study the asymptotic properties of the drift and volatility estimators. They are asymptotically consistent and independent of the other parameters, which makes them preferable to those in Chen et al. (2010). The estimators proposed here also have fast convergence rates and are simple to implement.
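The article's generalized Lévy model is not reproduced here. As a minimal illustration of the kind of high-frequency estimator involved, the sketch below assumes a plain drift-diffusion component, for which the drift estimator is consistent as the horizon grows and realized variance recovers the volatility as the sampling frequency grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model: dX_t = mu dt + sigma dW_t, observed on a fine grid.
# The paper's generalized Levy model is richer (jumps etc.); this is a sketch.
mu, sigma, T, n = 0.5, 0.3, 1.0, 100_000
dt = T / n
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
x = np.concatenate([[0.0], np.cumsum(increments)])

drift_hat = (x[-1] - x[0]) / T             # consistent for mu only as T -> infinity
vol2_hat = np.sum(np.diff(x) ** 2) / T     # realized variance -> sigma^2 as n -> infinity
print(drift_hat, np.sqrt(vol2_hat))
```

Note the asymmetry: on a fixed horizon the volatility is recovered very accurately from high-frequency increments, while the drift estimate remains noisy, which is why drift and volatility estimators have different convergence rates.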
8.
Nadarajah and Gupta (2004) introduced the beta Fréchet (BF) distribution, which is a generalization of the exponentiated Fréchet (EF) and Fréchet distributions, and obtained its probability density and cumulative distribution functions. However, they did not investigate the moments or the order statistics. In this article, the BF density function and the density function of the order statistics are expressed as linear combinations of Fréchet density functions. This is important for obtaining mathematical properties of the BF distribution in terms of the corresponding properties of the Fréchet distribution. We derive explicit expansions for the ordinary moments and L-moments and obtain the order statistics and their moments. We also discuss maximum likelihood estimation and calculate the information matrix, which had not been given in the literature; it is determined numerically. The usefulness of the BF distribution is illustrated through two applications to real data sets.
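In the beta-generated family underlying the BF distribution, the CDF is the regularized incomplete beta function evaluated at the baseline Fréchet CDF. A sketch with an illustrative parametrization (shape `lam`, scale `sigma` for the Fréchet baseline are naming choices here, not the article's notation):

```python
import numpy as np
from scipy.stats import beta

def frechet_cdf(x, lam, sigma):
    """Frechet CDF G(x) = exp(-(sigma/x)^lam) for x > 0."""
    return np.exp(-(sigma / x) ** lam)

def beta_frechet_cdf(x, a, b, lam, sigma):
    """Beta Frechet CDF: regularized incomplete beta function I_{G(x)}(a, b)."""
    return beta.cdf(frechet_cdf(x, lam, sigma), a, b)

# a = b = 1 collapses the beta layer, recovering the plain Frechet distribution.
print(beta_frechet_cdf(2.0, 1.0, 1.0, 1.5, 1.0))
```

The special case `a = b = 1` is a quick sanity check: the beta layer becomes the identity, so the BF CDF must equal the Fréchet CDF exactly.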
9.
10.
In this work, we propose a chi-squared goodness-of-fit test in the censored-data case for the Bertholon model, which can analyze various competing risks of failure or death. The test is based on a modification of the Nikulin-Rao-Robson (NRR) statistic proposed by Bagdonavicius and Nikulin (2011a, 2011b) for censored data. We apply the test to numerical examples from simulated samples and real data.
11.
Tony Vangeneugden, Geert Molenberghs, Geert Verbeke, Clarice G.B. Demétrio. Communications in Statistics - Theory and Methods, 2014, 43(19): 4164-4178
In hierarchical data settings, be it of a longitudinal, spatial, multi-level, clustered, or otherwise repeated nature, the association between repeated measurements often attracts at least part of the scientific interest. Quantifying the association frequently takes the form of a correlation function, including but not limited to intraclass correlation. Vangeneugden et al. (2010) derived approximate correlation functions for longitudinal sequences of general data type, Gaussian and non-Gaussian, based on generalized linear mixed-effects models. Here, we consider the extended model family proposed by Molenberghs et al. (2010). This family flexibly accommodates data hierarchies, intra-sequence correlation, and overdispersion, and allows for closed-form means, variance functions, and correlation functions for a variety of outcome types and link functions. Unfortunately, for binary data with the logit link, closed forms cannot be obtained, in contrast with the probit link, for which they can be derived. We therefore concentrate on the probit case. It is of interest not only in its own right, but also as an instrument to approximate the logit case, thanks to the well-known probit-logit ‘conversion.’ Next to the general situation, some important special cases such as exchangeable clustered outcomes receive attention because they produce insightful expressions. The closed-form expressions are contrasted with the generic approximate expressions of Vangeneugden et al. (2010) and with approximations derived for the so-called logistic-beta-normal combined model. A simulation study explores the performance of the proposed method. Data from a schizophrenia trial are analyzed and correlation functions derived.
12.
Tony Vangeneugden, Geert Verbeke, Clarice G.B. Demétrio. Journal of Applied Statistics, 2011, 38(2): 215-232
Vangeneugden et al. [15] derived approximate correlation functions for longitudinal sequences of general data type, Gaussian and non-Gaussian, based on generalized linear mixed-effects models (GLMM). Their focus was on binary sequences, as well as on a combination of binary and Gaussian sequences. Here, we focus on the specific case of repeated count data, important in two respects. First, we employ the model proposed by Molenberghs et al. [13], which generalizes at the same time the Poisson-normal GLMM and the conventional overdispersion models, in particular the negative-binomial model. The model flexibly accommodates data hierarchies, intra-sequence correlation, and overdispersion. Second, means, variances, and joint probabilities can be expressed in closed form, allowing for exact intra-sequence correlation expressions. Next to the general situation, some important special cases such as exchangeable clustered outcomes are considered, producing insightful expressions. The closed-form expressions are contrasted with the generic approximate expressions of Vangeneugden et al. [15]. Data from an epileptic-seizures trial are analyzed and correlation functions derived. It is shown that the proposed extension strongly outperforms the classical GLMM.
13.
Robert M. Adams. Communications in Statistics - Theory and Methods, 2013, 42(13): 2425-2442
This article generalizes results from Park et al. (1998) and Adams et al. (1999) on semiparametric efficient estimation of panel models. The form of semiparametric efficient estimators depends on the statistical assumptions imposed. Normality assumptions on the transitory error are sometimes inappropriate. We relax the normality assumption used in the articles above to derive more general semiparametric efficient estimators. These estimators are illustrated in a Monte Carlo simulation and an analysis of banking productivity.
14.
Junyong Park, Jayson D. Wilbur, Jayanta K. Ghosh, Cindy H. Nakatsu, Corinne Ackerman. Communications in Statistics - Simulation and Computation, 2013, 42(4): 855-869
We adopt boosting for classification and selection of high-dimensional binary variables, for which classical methods based on normality and a nonsingular sample dispersion are inapplicable. Boosting seems particularly well suited to binary variables. We present three methods, two of which combine boosting with the relatively classical variable selection methods developed in Wilbur et al. (2002). Our primary interest is variable selection in classification, with a small misclassification error used to validate the proposed selection method. Two of the new methods perform uniformly better than Wilbur et al. (2002) in one set of simulated and three real-life examples.
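A rough illustration of boosting-based variable selection on binary predictors, using scikit-learn's AdaBoost (whose default base learner is a decision stump, a natural fit for 0/1 features) on simulated data. This is a generic sketch, not the article's three methods or the Wilbur et al. (2002) procedures:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)

# High-dimensional binary predictors; only the first three carry signal.
n, p = 200, 50
X = rng.integers(0, 2, size=(n, p)).astype(float)
y = (X[:, 0] + X[:, 1] + X[:, 2] + rng.normal(0, 0.5, n) > 1.5).astype(int)

clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank variables by boosting importance; informative ones should rank highly.
top = np.argsort(clf.feature_importances_)[::-1][:5]
print(top)
```

Selection then amounts to keeping the top-ranked variables and validating the choice by the resulting misclassification error, mirroring the validation strategy described in the abstract.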
15.
Viswanathan Ramakrishnan. Communications in Statistics - Simulation and Computation, 2013, 42(3): 405-418
In many genetic analyses of dichotomous twin data, odds ratios have been used to test hypotheses on heritability and shared common environment effects for a given disease (Lichtenstein et al., 2000; Ahlbom et al., 1997; Ramakrishnan et al., 1992, 4). However, estimates of these two effects have not been dealt with in the literature. In epidemiology, the attributable fraction (AF), a function of the odds ratio and the prevalence of the risk factor, has been used to describe the contribution of a risk factor to a disease in a given population (Leviton, 1973). In this article, we adapt the AF to quantify heritability and the shared common environment. Twin data on cancer, gallstone disease, and phobia are used to illustrate the applicability of the AF estimate as a measure of heritability.
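For orientation, the classical population attributable fraction can be computed with Levin's formula, here with the odds ratio standing in for the relative risk (a reasonable approximation for rare outcomes). The numbers are hypothetical, and the article's adaptation to heritability may differ:

```python
def attributable_fraction(prevalence, odds_ratio):
    """Levin's population attributable fraction,
        AF = p(OR - 1) / (1 + p(OR - 1)),
    using the odds ratio OR as an approximation to the relative risk
    (adequate when the outcome is rare); p is the risk-factor prevalence.
    """
    excess = prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)

# Hypothetical inputs: risk-factor prevalence 30%, odds ratio 2.
print(attributable_fraction(0.30, 2.0))  # 0.3 / 1.3, about 0.23
```

An odds ratio of 1 gives AF = 0, i.e. no contribution of the risk factor, which is the natural null case for the heritability adaptation described.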
16.
Communications in Statistics - Theory and Methods, 2013, 42(12): 2415-2440
In this article, nonparametric estimators of the regression function and its derivatives, obtained by means of weighted local polynomial fitting, are studied. We consider the fixed-design regression model in which the error random variables come from a stationary stochastic process satisfying a mixing condition. Uniform strong consistency, along with rates, is established for these estimators. Furthermore, when the errors follow an AR(1) correlation structure, strong consistency properties are also derived for a modified version of the local polynomial estimators proposed by Vilar-Fernández and Francisco-Fernández (2002, Local polynomial regression smoothers with AR-error structure, TEST 11(2):439-464).
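A minimal local linear (degree-1 local polynomial) smoother in the setting described: fixed design with AR(1) errors. The Gaussian kernel, bandwidth, and AR coefficient are illustrative choices, not the article's:

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of m(x0): weighted least squares fit of a line
    in a kernel neighbourhood of x0 (Gaussian kernel, bandwidth h)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return coef[0]  # intercept = fitted value; coef[1] estimates m'(x0)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)               # fixed design
e = np.zeros(200)                            # AR(1) errors, as in the article's setting
for t in range(1, 200):
    e[t] = 0.5 * e[t - 1] + rng.normal(0, 0.1)
y = np.sin(2 * np.pi * x) + e

print(local_linear(0.25, x, y, h=0.05))      # should be near sin(pi/2) = 1
```

Because `coef[1]` is the local slope, the same fit also delivers a derivative estimate, matching the abstract's interest in estimating the regression function and its derivatives jointly.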
17.
Many articles estimating models with forward-looking expectations have reported that the magnitude of the coefficients on the expectations term is very large compared with the effects coming from past dynamics. This has sometimes been regarded as implausible and has led to the feeling that the expectations coefficient is biased upwards. A relatively general argument that has been advanced is that the bias could be due to structural changes in the means of the variables entering the structural equation. An alternative explanation is that the bias comes from weak instruments. In this article, we investigate the issue of upward bias in the estimated coefficients of the expectations variable based on a model in which we can see what causes the breaks and how to control for them. We conclude that weak instruments are the most likely cause of any bias and note that structural change can affect the quality of instruments. We also examine some empirical work in Castle et al. (2014) on the new Keynesian Phillips curve (NKPC) in the Euro Area and the U.S., assessing whether the smaller coefficient on expectations that Castle et al. (2014) highlight is due to structural change. Our conclusion is that it is not; instead, it comes from their addition of variables to the NKPC. After allowing for the weak instruments in the estimated re-specified model, the forward coefficient estimate appears to be quite high rather than low.
18.
In this article, another version of the generalized exponential geometric distribution, different from that of Silva et al. (2010), is proposed. This new three-parameter lifetime distribution with decreasing, increasing, and bathtub-shaped failure rate functions is created by compounding the generalized exponential distribution of Gupta and Kundu (1999) with a geometric distribution. Some basic distributional properties, the moment-generating function, the rth moment, and the Rényi entropy of the new distribution are studied. The model parameters are estimated by the maximum likelihood method, and the asymptotic distribution of the estimators is discussed. Finally, an application of the new distribution is illustrated using two real data sets.
19.
Shesh N. Rai, Jianmin Pan, Xiaobin Yuan, Jianguo Sun, Melissa M. Hudson, Deo K. Srivastava. Communications in Statistics - Theory and Methods, 2013, 42(17): 3117-3133
New drug discovery in pediatrics has dramatically improved survival, but at the cost of long-term adverse events. This motivates the examination of adverse outcomes such as long-term toxicity in a phase IV trial. An ideal approach to monitoring long-term toxicity is to follow the survivors systematically, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), with one objective being to estimate the cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We present inference procedures based on current status data for our motivating example, with very interesting findings.
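With current status data, each survivor is observed once, and we only learn whether the event has already occurred by the survey time; the nonparametric MLE of the event-time distribution is then the isotonic regression of these indicators on the survey times, computable by pool-adjacent-violators. A sketch on simulated data (exponential event times and uniform survey times are illustrative assumptions, not the article's data):

```python
import numpy as np

def pava(values, weights):
    """Pool-adjacent-violators algorithm: weighted nondecreasing fit."""
    level, weight, count = [], [], []
    for v, w in zip(values, weights):
        level.append(v); weight.append(w); count.append(1)
        while len(level) > 1 and level[-2] > level[-1]:   # merge violators
            tot = weight[-2] + weight[-1]
            merged = (level[-2] * weight[-2] + level[-1] * weight[-1]) / tot
            level[-2:] = [merged]; weight[-2:] = [tot]
            count[-2:] = [count[-2] + count[-1]]
    return np.repeat(level, count)

rng = np.random.default_rng(3)
n = 500
event_times = rng.exponential(5.0, n)        # latent; never observed directly
c = rng.uniform(0.0, 10.0, n)                # one cross-sectional survey time each
delta = (event_times <= c).astype(float)     # current status indicator

order = np.argsort(c)
F_hat = pava(delta[order], np.ones(n))       # NPMLE of F at the ordered survey times

# Estimated 5-year cumulative incidence; true value is 1 - exp(-1) ~ 0.632.
idx = min(np.searchsorted(np.sort(c), 5.0), n - 1)
print(F_hat[idx])
```

The fixed-term (5- or 10-year) rates of interest are then simply this estimated distribution function read off at the chosen horizon.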
20.
The power-law process (PLP) is a two-parameter model widely used for modeling repairable system reliability. Results on exact point estimation for both parameters, as well as exact interval estimation for the shape parameter, are well known. In this paper, we investigate interval estimation for the scale parameter. Asymptotic confidence intervals are derived using the Fisher information matrix and theoretical results of Cocozza-Thivent (1997). The accuracy of the interval estimation for finite samples is studied by simulation.
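The well-known point estimators referred to can be sketched as follows for the time-truncated case, using Crow's classical maximum likelihood formulas (the article's contribution, the interval estimate for the scale, is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a time-truncated PLP with intensity
#   lambda(t) = (beta/theta) * (t/theta)^(beta - 1),
# so the mean number of failures on (0, T] is (T/theta)^beta.
beta_true, theta_true, T = 2.0, 1.0, 30.0
n = rng.poisson((T / theta_true) ** beta_true)
# Given n failures by T, the failure times are i.i.d. with CDF (t/T)^beta.
t = np.sort(T * rng.uniform(size=n) ** (1.0 / beta_true))

# Classical (Crow) MLEs for the time-truncated power-law process:
beta_hat = n / np.sum(np.log(T / t))
theta_hat = T / n ** (1.0 / beta_hat)
print(beta_hat, theta_hat)
```

Simulating and estimating in this way is also the natural template for the finite-sample accuracy study the abstract describes: repeat the draw, form the asymptotic interval for the scale each time, and record the empirical coverage.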