Found 20 similar documents; search took 62 ms.
1.
Bo Li, Econometric Reviews, 2013, 32(6): 632-657
We outline a general paradigm for constructing asymptotically distribution-free (ADF) goodness-of-fit tests, which can be regarded as a generalization of Khmaladze (1993). This is achieved by a nonorthogonal projection of a class of functions onto the ortho-complement of the extended tangent space (ETS) associated with the null hypothesis. In parallel with the work of Bickel et al. (2006), we obtain transformed empirical processes (TEP), which are the building blocks for constructing omnibus tests such as the usual Kolmogorov–Smirnov and Cramér–von Mises type tests, as well as Portmanteau tests and directional tests. The critical values can be tabulated thanks to the ADF property. All the tests are capable of detecting local (Pitman) alternatives at the root-n scale. We illustrate the framework with several examples, mostly in regression model specification testing.
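For concreteness, the omnibus statistics mentioned in the abstract are classical objects. A minimal sketch of the one-sample Kolmogorov–Smirnov statistic against a fully specified null (the function name is ours; this is the textbook statistic, not the authors' transformed-process construction):

```python
def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic sup_x |F_n(x) - F(x)|
    for a fully specified continuous null cdf."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        fx = cdf(x)
        # The empirical cdf jumps from (i-1)/n to i/n at x, so the
        # supremum over that jump is checked at both endpoints.
        d = max(d, abs(i / n - fx), abs((i - 1) / n - fx))
    return d
```

For example, against the uniform null `cdf = lambda u: u`, a single observation at 0.5 gives a statistic of 0.5.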
2.
Robert M. Adams, Communications in Statistics - Theory and Methods, 2013, 42(13): 2425-2442
This article generalizes results from Park et al. (1998) and Adams et al. (1999) on semiparametric efficient estimation of panel models. The form of semiparametric efficient estimators depends on the statistical assumptions imposed. Normality assumptions on the transitory error are sometimes inappropriate. We relax the normality assumption used in the articles above to derive more general semiparametric efficient estimators. These estimators are illustrated in a Monte Carlo simulation and an analysis of banking productivity.
3.
Satoshi Hattori, Communications in Statistics - Theory and Methods, 2013, 42(4): 542-559
Counting process techniques have been successfully introduced to semiparametric inference of repeated measurements. Cheng and Wei (2000) proposed a simple inference procedure for the semiparametric proportional rate model, which reduces to relative risk regression models for binary data. While the baseline mean functions are completely unspecified, it still requires several assumptions for valid inference. In this article, a goodness-of-fit test for it is proposed based on cumulative residuals. Theoretical justification is provided and an illustration with a dataset from a clinical trial is given. Results of simulation studies to evaluate finite sample performance are also provided.
4.
Let ν be a positive Borel measure on ℝ^n and pFq(a1,…, ap; b1,…, bq; s) be a generalized hypergeometric series. We define a generalized hypergeometric measure, μp,q := pFq(a1,…, ap; b1,…, bq; ν), as a series of convolution powers of the measure ν, and we investigate classes of probability distributions which are expressible as such a measure. We show that the Kemp (1968) family of distributions is an example of μp,q in which ν is a Dirac measure on ℝ. For the case in which ν is a Dirac measure on ℝ^n, we relate μp,q to the diagonal natural exponential families classified by Bar-Lev et al. (1994). For p < q, we show that certain measures μp,q can be expressed as the convolution of a sequence of independent multi-dimensional Bernoulli trials. For p = q, q + 1, we show that the measures μp,q are mixture measures with the Dufresne and Poisson-stopped-sum probability distributions as their mixing measures.
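Taking the abstract's definition at face value (a sketch only; the paper's exact normalization is not shown here), the measure presumably replaces each power s^k in the hypergeometric series by the k-fold convolution power ν^{*k}:

```latex
{}_pF_q(a_1,\dots,a_p;\,b_1,\dots,b_q;\,s)
  = \sum_{k=0}^{\infty}
    \frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_q)_k}\,\frac{s^k}{k!},
\qquad
\mu_{p,q}
  = \sum_{k=0}^{\infty}
    \frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_q)_k}\,\frac{\nu^{*k}}{k!},
```

where (a)_k = a(a+1)⋯(a+k−1) is the Pochhammer symbol and ν^{*k} denotes the k-fold convolution of ν, with ν^{*0} the Dirac measure at the origin.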
5.
In this article, we consider two different shared frailty regression models under the assumption of a Gompertz baseline distribution. The gamma distribution is the most common choice for the frailty distribution; to compare results with the gamma frailty model, we also consider the inverse Gaussian shared frailty model. We compare these two models on a real-life bivariate survival dataset of acute leukemia remission times (Freireich et al., 1963). The analysis is performed using Markov chain Monte Carlo methods. Model comparison is made using Bayesian model selection criteria, and a well-fitting model is suggested for the acute leukemia data.
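A sketch of the model's building blocks (parameter names are hypothetical, and the article's MCMC fitting and inverse Gaussian variant are not reproduced): under a Gompertz baseline hazard and a gamma frailty with unit mean and variance θ, the marginal survival function has a simple closed form.

```python
import math

def gompertz_cumhaz(t, b, c):
    # Gompertz cumulative hazard: H(t) = (b/c) * (exp(c*t) - 1).
    return (b / c) * (math.exp(c * t) - 1.0)

def gamma_frailty_survival(t, b, c, theta):
    # Marginal survival after integrating out a gamma frailty with
    # unit mean and variance theta: S(t) = (1 + theta * H(t))^(-1/theta).
    return (1.0 + theta * gompertz_cumhaz(t, b, c)) ** (-1.0 / theta)
```

This closed form is what makes the gamma frailty model so convenient; the inverse Gaussian frailty admits an analogous, slightly more involved Laplace-transform expression.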
6.
Viswanathan Ramakrishnan, Communications in Statistics - Simulation and Computation, 2013, 42(3): 405-418
In many genetic analyses of dichotomous twin data, odds ratios have been used to test hypotheses on the heritability and shared common environment effects of a given disease (Lichtenstein et al., 2000; Ahlbom et al., 1997; Ramakrishnan et al., 1992). However, estimates of these two effects have not been dealt with in the literature. In epidemiology, the attributable fraction (AF), a function of the odds ratio and the prevalence of the risk factor, has been used to describe the contribution of a risk factor to a disease in a given population (Leviton, 1973). In this article, we adapt the AF to quantify heritability and the shared common environment. Twin data on cancer, gallstone disease, and phobia are used to illustrate the applicability of the AF estimate as a measure of heritability.
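For orientation, the classical attributable fraction (Levin's form; the article's twin-specific adaptation may differ) combines the risk-factor prevalence p with the odds ratio, with the odds ratio standing in for the relative risk:

```python
def attributable_fraction(prevalence, odds_ratio):
    # Levin-type attributable fraction, treating the odds ratio as an
    # approximation to the relative risk (reasonable for rare outcomes):
    # AF = p*(OR - 1) / (1 + p*(OR - 1)).
    excess = prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)
```

For instance, a risk factor with prevalence 0.5 and odds ratio 3 yields an AF of 0.5, i.e., half the cases in the population are attributable to the factor under these assumptions.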
7.
This study is mainly concerned with estimating a shift parameter in the two-sample location problem. The proposed smoothed Mann–Whitney–Wilcoxon method smooths the empirical distribution function of each sample by a convolution technique, replacing the unknown distribution functions F(x) and G(x − Δ0) with smoothed versions F_s(x) and G_s(x − Δ0), respectively. The unknown shift parameter Δ0 is estimated by solving the gradient function S_n(Δ) with respect to an arbitrary variable Δ. The asymptotic properties of the new estimator are established under conditions similar to those of the generalized Wilcoxon procedure proposed by Anderson and Hettmansperger (1996); these include asymptotic normality, asymptotic-level confidence intervals, and hypothesis tests for Δ0. Asymptotic relative efficiencies of the proposed method with respect to the least squares, generalized Wilcoxon, and Hodges and Lehmann (1963) procedures are also calculated under the contaminated normal model.
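Of the comparator procedures, the Hodges and Lehmann (1963) estimator is the easiest to state; a minimal sketch (the smoothed MWW estimator itself, which solves the smoothed gradient equation, is not reproduced here):

```python
import statistics

def hodges_lehmann_shift(x, y):
    # Hodges-Lehmann (1963) estimate of the shift parameter in the
    # two-sample location problem: the median of all pairwise
    # differences x_i - y_j.
    return statistics.median(xi - yj for xi in x for yj in y)
```

For example, samples [1, 2, 3] and [0, 1, 2] give an estimated shift of 1.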
8.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: a fudge factor is added to deflate test statistics that are large only because of the small standard error of a gene's expression. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared to the SAM procedure without it. Motivated by the simulation results in Lin et al. (2008), in this article we compare several methods for choosing the fudge factor in modified t-type test statistics and use simulation studies to investigate the power and FDR control of the considered methods.
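A minimal sketch of a SAM-style modified t statistic (pooled two-sample form with hypothetical names; SAM's exact standard-error formula and permutation machinery are not reproduced):

```python
import math

def sam_statistic(group1, group2, s0=0.0):
    # SAM-style modified t: d = (mean1 - mean2) / (s + s0), where s0 is
    # the "fudge factor" added to the denominator so that genes with
    # tiny standard errors do not produce inflated statistics.
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    ss = sum((v - m1) ** 2 for v in group1) + sum((v - m2) ** 2 for v in group2)
    pooled_var = ss / (n1 + n2 - 2)
    s = math.sqrt(pooled_var * (1.0 / n1 + 1.0 / n2))
    return (m1 - m2) / (s + s0)
```

Setting `s0 = 0` recovers the ordinary pooled t statistic; a positive `s0` shrinks statistics whose denominators are small.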
9.
This article extends the correlation methodology developed by Chinchilli et al. (2005) for the 2 × 2 crossover design to more complex crossover designs for clinical trials. We describe how the methodology can be adapted to a general type of two-treatment crossover design which includes either at least two sequences or at least two treatment periods or both. We then derive the asymptotic theory for the corresponding correlation statistics, investigate the statistical accuracy of the estimators via bootstrap analyses, and demonstrate their use with two real data examples.
10.
Junyong Park, Jayson D. Wilbur, Jayanta K. Ghosh, Cindy H. Nakatsu, Corinne Ackerman, Communications in Statistics - Simulation and Computation, 2013, 42(4): 855-869
We adopt boosting for classification and selection of high-dimensional binary variables, for which classical methods based on normality and a nonsingular sample dispersion matrix are inapplicable. Boosting seems particularly well suited to binary variables. We present three methods, two of which combine boosting with the relatively classical variable selection methods developed in Wilbur et al. (2002). Our primary interest is variable selection in classification, with small misclassification error used as validation of the proposed variable selection. Two of the new methods perform uniformly better than Wilbur et al. (2002) in one set of simulated and three real-life examples.
11.
A Bottom-Up Dynamic Model of Portfolio Credit Risk with Stochastic Intensities and Random Recoveries
Tomasz R. Bielecki, Areski Cousin, Stéphane Crépey, Alexander Herbertsson, Communications in Statistics - Theory and Methods, 2014, 43(7): 1362-1389
In Bielecki et al. (2014a), the authors introduced a Markov copula model of portfolio credit risk in which pricing and hedging can be done in a theoretically and practically sound way. Further theoretical background and practical details are developed in Bielecki et al. (2014b,c), where the numerical illustrations assumed deterministic intensities and constant recoveries. In the present paper, we show how to incorporate stochastic default intensities and random recoveries into the bottom-up modeling framework of Bielecki et al. (2014a) while preserving numerical tractability. These two features are of primary importance for applications such as CVA computations on credit derivatives (Assefa et al., 2011; Bielecki et al., 2012), as CVA is sensitive to the stochastic nature of credit spreads, and random recoveries make it possible to achieve satisfactory calibration even for “badly behaved” data sets. This article is thus a complement to Bielecki et al. (2014a), Bielecki et al. (2014b), and Bielecki et al. (2014c).
12.
In the literature, there have been only a few reports on goodness-of-fit tests for logistic regression models specifically derived for case-control studies. In this article, we propose a goodness-of-fit test for logistic regression models in stratified case-control studies using an empirical likelihood approach. The proposed statistic is an alternative to the statistic G_o recently proposed by Arbogast and Lin (2005). Simulation results show that the proposed statistic is often slightly more powerful than G_o, although their performances are always close to each other. Moreover, our method is easy to implement, since the usual stratified logistic regression procedures in many statistical software packages can be employed. Some asymptotic results and applications of the proposed statistic to two real datasets are also presented.
13.
We consider nonparametric estimation of a continuous cdf of a random vector (X1, X2). For bivariate right-censored (RC) data, it is stated in van der Laan (1996, Ann. Statist., p. 598), Quale et al. (2006, JASA), and elsewhere that “it is well known that the NPMLE for continuous data is inconsistent (Tsai et al. (1986)).” The claim is based on a result in Tsai et al. (1986, Ann. Statist., p. 1352) that if X1 is right censored but not X2, then common ways of defining one NPMLE lead to inconsistency. If X1 is right censored and X2 is type I right-censored (which includes the case in Tsai et al.), we present a consistent NPMLE. The result corrects a common misinterpretation of Tsai's example (Tsai et al., 1986, Ann. Statist.).
14.
There is an emerging consensus in empirical finance that realized volatility series typically display long-range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher-order theory. Standard asymptotic theory has an O(n^(-1/2)) error rate for error rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^(-1/2)). The new formula is independent of unknown parameters, simple to calculate, and user-friendly. The method is applied to test whether the long memory parameter estimates reported by Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and it generally confirms earlier findings.
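For context, a standard semiparametric estimator of the memory parameter d is the Geweke–Porter-Hudak (GPH) log-periodogram regression; the sketch below (function name and bandwidth choice are ours) shows the basic estimator, not the higher-order-corrected inference developed in the article.

```python
import cmath
import math

def gph_estimate(x, m=None):
    # GPH log-periodogram regression for the long-memory parameter d:
    # regress log I(lam_j) on -2*log(2*sin(lam_j/2)) over the first m
    # Fourier frequencies; the slope estimates d.
    n = len(x)
    if m is None:
        m = int(n ** 0.5)  # a common rule-of-thumb bandwidth
    mean = sum(x) / n
    logs, regs = [], []
    for j in range(1, m + 1):
        lam = 2.0 * math.pi * j / n
        # Periodogram ordinate at Fourier frequency lam (naive DFT).
        dft = sum((x[t] - mean) * cmath.exp(-1j * lam * t) for t in range(n))
        periodogram = abs(dft) ** 2 / (2.0 * math.pi * n)
        logs.append(math.log(periodogram))
        regs.append(-2.0 * math.log(2.0 * math.sin(lam / 2.0)))
    rbar = sum(regs) / m
    lbar = sum(logs) / m
    num = sum((r - rbar) * (l - lbar) for r, l in zip(regs, logs))
    den = sum((r - rbar) ** 2 for r in regs)
    return num / den  # OLS slope = estimate of d
```

For white noise the true d is 0, so the estimate should be close to zero; series with d near 0.4, as reported for realized volatility, would produce a markedly positive slope.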
15.
Shesh N. Rai, Jianmin Pan, Xiaobin Yuan, Jianguo Sun, Melissa M. Hudson, Deo K. Srivastava, Communications in Statistics - Theory and Methods, 2013, 42(17): 3117-3133
New drug discovery in pediatrics has dramatically improved survival, but with long-term adverse events. This motivates the examination of adverse outcomes, such as long-term toxicity, in a phase IV trial. An ideal approach to monitoring long-term toxicity is to systematically follow the survivors, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), with one objective being to estimate the cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We present inference procedures based on current status data and apply them to our motivating example, with very interesting findings.
16.
Soo Hak Sung, Communications in Statistics - Theory and Methods, 2013, 42(9): 1663-1674
A complete convergence theorem for an array of rowwise independent random variables was established by Sung et al. (2005). This result has been generalized and extended by Kruglov et al. (2006) and Chen et al. (2007). In this article, we extend the results of Sung et al. (2005), Kruglov et al. (2006), and Chen et al. (2007) to an array of dependent random variables satisfying Hoffmann-Jørgensen type inequalities.
17.
Communications in Statistics - Simulation and Computation, 2013, 42(3): 489-511
We consider the semiparametric regression model introduced by [1]. The dependent variable y is linked to the index x′β through an unknown link function. [1] and [2] present slicing methods (the sliced inverse regression methods SIR-I, SIR-II, and SIRα) for estimating the direction of the unknown slope parameter β. These methods are computationally simple and fast but depend on the choice of an arbitrary slicing fixed by the user. When the sample size is small, the number and position of the slices influence the estimated direction. In this paper, we suggest using the corresponding pooled slicing methods: PSIR-I (proposed by [3]), PSIR-II, and PSIRα. These methods combine the results from a number of slicings. We compare the sample behaviour of the slicing and pooled slicing methods in simulations. We also propose a practical choice of α for the SIRα and PSIRα methods.
18.
Christos T. Nakas, Communications in Statistics - Simulation and Computation, 2013, 42(5): 1053-1059
This article studies the performance of the one-sample goodness-of-fit test based on the length of the P–P plot, initially introduced in a similar context by Reschenhofer and Bomze (1991). The distributional properties of the length test are revisited empirically via simulations. In the Monte Carlo power study that follows, the length test is shown empirically to have high power, under the various alternatives considered, relative to members of the Cramér–von Mises family of goodness-of-fit tests and the Kolmogorov–Smirnov test.
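A sketch of one plausible version of the statistic (the precise definition in Reschenhofer and Bomze (1991) may differ; the function name is ours): the length of the polygonal P–P path, which equals √2 for a path lying on the diagonal and grows as the plot departs from it.

```python
import math

def pp_plot_length(sample, cdf):
    # Length of the polygonal P-P path joining (0, 0), the points
    # (F(x_(i)), i/n) for the ordered sample, and (1, 1). Under a good
    # fit the path hugs the diagonal, so the length is near sqrt(2);
    # departures from the null inflate it.
    xs = sorted(sample)
    n = len(xs)
    pts = [(0.0, 0.0)] + [(cdf(x), i / n) for i, x in enumerate(xs, 1)] + [(1.0, 1.0)]
    return sum(math.hypot(u2 - u1, v2 - v1)
               for (u1, v1), (u2, v2) in zip(pts, pts[1:]))
```

Because both coordinates of the path increase monotonically from (0, 0) to (1, 1), the length is always at least √2, so the test rejects for large values.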
19.
Pao-Sheng Shen, Communications in Statistics - Theory and Methods, 2013, 42(16): 4812-4823
Gandy and Jensen (2005) proposed goodness-of-fit tests for Aalen's additive risk model. In this article, we demonstrate that the approach of Gandy and Jensen (2005) can be applied to left-truncated right-censored (LTRC) data and doubly censored data. A simulation study is conducted to investigate the performance of the proposed tests. The proposed tests are illustrated using heart transplant data.
20.
Sharma (1977) and Aggarwal et al. (2006) considered non-circular construction of first- and second-order balanced repeated measurements designs. Sharma et al. (2002) constructed circular first- and second-order balanced repeated measurements designs only for a class with parameters (v, p = 3n, n = v^2) and also showed their universal optimality. In this article, we consider circular construction of first- and second-order balanced repeated measurements designs and strongly balanced repeated measurements designs by using the method of cyclic shifts. Some new circular designs with parameters (v, p, n) for the cases p = v, p < v, and p > v are given.