20 similar documents retrieved.
1.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR through a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: a fudge factor is added to the denominator to deflate test statistics that are large only because a gene's expression has a small standard error. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared with the SAM procedure without it. Motivated by the simulation results in Lin et al. (2008), in this article we extend that study to compare several methods for choosing the fudge factor in modified t-type test statistics, and we use simulation studies to investigate the power and FDR control of the considered methods.
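As a point of reference for the statistic discussed above, the following is a minimal sketch (not the authors' implementation) of a SAM-type modified t statistic in which the fudge factor s0 is taken as a quantile of the gene-wise standard errors; the quantile rule, the two-group layout, and all variable names are illustrative assumptions only.

```python
import numpy as np

def sam_statistic(x, y, s0_quantile=0.5):
    """SAM-type modified t statistic d_i = (mean1 - mean2) / (s_i + s0).

    x, y : arrays of shape (n_genes, n_samples_per_group).
    s0 is taken here as a quantile of the gene-wise standard errors;
    this is only one of several possible rules for choosing it.
    """
    n1, n2 = x.shape[1], y.shape[1]
    diff = x.mean(axis=1) - y.mean(axis=1)
    pooled_var = ((x.var(axis=1, ddof=1) * (n1 - 1) +
                   y.var(axis=1, ddof=1) * (n2 - 1)) / (n1 + n2 - 2))
    s = np.sqrt(pooled_var * (1.0 / n1 + 1.0 / n2))  # gene-wise standard error
    s0 = np.quantile(s, s0_quantile)                 # fudge factor
    return diff / (s + s0)

# toy example: 1000 genes, 5 arrays per group
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 5))
y = rng.normal(size=(1000, 5))
d = sam_statistic(x, y)
```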
2.
Viswanathan Ramakrishnan, Communications in Statistics - Simulation and Computation, 2013, 42(3): 405-418
In many genetic analyses of dichotomous twin data, odds ratios have been used to test hypotheses on heritability and shared common environment effects for a given disease (Lichtenstein et al., 2000; Ahlbom et al., 1997; Ramakrishnan et al., 1992). However, estimates of these two effects have not been dealt with in the literature. In epidemiology, the attributable fraction (AF), a function of the odds ratio and the prevalence of the risk factor, has been used to describe the contribution of a risk factor to a disease in a given population (Leviton, 1973). In this article, we adapt the AF to quantify heritability and the shared common environment. Twin data on cancer, gallstone disease, and phobia are used to illustrate the applicability of the AF estimate as a measure of heritability.
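For readers unfamiliar with the attributable fraction, the sketch below shows the standard Levin-type calculation of an AF from a prevalence and an odds ratio (used as an approximation to the relative risk); the article's adaptation to heritability and shared environment is not reproduced here, and the numbers are purely illustrative.

```python
def attributable_fraction(prevalence, odds_ratio):
    """Population attributable fraction, Levin-type formula,
    with the odds ratio used as an approximation to the relative risk."""
    excess = prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)

# e.g. a risk factor present in 30% of the population with OR = 2.5
print(attributable_fraction(0.30, 2.5))  # about 0.31
```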
3.
We propose a Bayesian approach for inference in a dynamic disequilibrium model. To circumvent the difficulties raised by the Maddala and Nelson (1974) specification in the dynamic case, we analyze an extended dynamic version of the disequilibrium model of Ginsburgh et al. (1980). We develop a Gibbs sampler based on the simulation of the missing observations. The feasibility of the approach is illustrated by an empirical analysis of the Polish credit market, for which we conduct a specification search using the posterior deviance criterion of Spiegelhalter et al. (2002).
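The article's dynamic extension and Gibbs sampler are not reproduced here, but the following sketch of the basic min-condition disequilibrium model (in the spirit of Maddala and Nelson, 1974) illustrates the missing-data structure such a sampler has to handle; all coefficients and the exogenous price series are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
price = rng.uniform(0.5, 1.5, size=T)                     # hypothetical exogenous price
demand = 10.0 - 3.0 * price + rng.normal(0, 0.5, size=T)  # latent demand
supply = 2.0 + 4.0 * price + rng.normal(0, 0.5, size=T)   # latent supply
quantity = np.minimum(demand, supply)                     # only the short side is traded
# Demand and supply themselves are unobserved; a Gibbs sampler would treat the
# rationed side in each period as missing data and simulate it.
```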
4.
A Bottom-Up Dynamic Model of Portfolio Credit Risk with Stochastic Intensities and Random Recoveries
Tomasz R. Bielecki, Areski Cousin, Stéphane Crépey, Alexander Herbertsson, Communications in Statistics - Theory and Methods, 2014, 43(7): 1362-1389
In Bielecki et al. (2014a), the authors introduced a Markov copula model of portfolio credit risk in which pricing and hedging can be done in a theoretically and practically sound way. Further theoretical background and practical details are developed in Bielecki et al. (2014b,c), where the numerical illustrations assumed deterministic intensities and constant recoveries. In the present paper, we show how to incorporate stochastic default intensities and random recoveries in the bottom-up modeling framework of Bielecki et al. (2014a) while preserving numerical tractability. These two features are of primary importance for applications such as CVA computations on credit derivatives (Assefa et al., 2011; Bielecki et al., 2012), since CVA is sensitive to the stochastic nature of credit spreads and random recoveries make it possible to achieve satisfactory calibration even for "badly behaved" data sets. This article is thus a complement to Bielecki et al. (2014a,b,c).
5.
Fernanda B. Rizzato, Roseli A. Leandro, Clarice G.B. Demétrio, Journal of Applied Statistics, 2016, 43(11): 2085-2109
In this paper, we consider a model for repeated count data with within-subject correlation and/or overdispersion. It extends both the generalized linear mixed model and the negative-binomial model. This model, proposed in a likelihood context [17,18], is placed in a Bayesian inferential framework. An important contribution takes the form of Bayesian model assessment based on pivotal quantities, rather than the often less adequate DIC. By means of a real biological data set, we also discuss some Bayesian model selection aspects, using a pivotal quantity proposed by Johnson [12].
6.
Shesh N. Rai, Jianmin Pan, Xiaobin Yuan, Jianguo Sun, Melissa M. Hudson, Deo K. Srivastava, Communications in Statistics - Theory and Methods, 2013, 42(17): 3117-3133
New drug discovery in pediatrics has dramatically improved survival, but with long-term adverse events. This motivates the examination of adverse outcomes such as long-term toxicity in a phase IV trial. An ideal approach to monitoring long-term toxicity is to systematically follow the survivors, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), with one of the objectives being to estimate the cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We present inference procedures based on current status data and apply them to our motivating example, with very interesting findings.
7.
In this article, we obtain a dependence measure for the generalized Farlie-Gumbel-Morgenstern (FGM) family following Kochar and Gupta (1987) and then compare this measure with Spearman's rho and Kendall's tau in the FGM family. Moreover, we evaluate the empirical power of the class of distribution-free tests proposed by Kochar and Gupta (1987, 1990), based on the exact distribution of a U-statistic. This is done via a simulation study for samples of sizes n = 6, 8, 10, 12, 16, and 20. We also compare our simulation results with those obtained by Amini et al. (2010) and Güven and Kotz (2008).
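For the ordinary (non-generalized) FGM copula the population values are Spearman's rho = θ/3 and Kendall's tau = 2θ/9. The sketch below, which is only a simulation check and not the article's dependence measure or tests, samples from the FGM copula by conditional inversion and compares the sample coefficients with those values; the sample size and θ are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

def sample_fgm(theta, n, rng):
    """Draw (u, v) from the FGM copula C(u, v) = u*v*[1 + theta*(1-u)*(1-v)]
    by inverting the conditional distribution of v given u."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    a = theta * (1.0 - 2.0 * u)
    a_safe = np.where(np.abs(a) < 1e-12, 1.0, a)  # avoid division by ~0
    v_general = ((1.0 + a) - np.sqrt((1.0 + a) ** 2 - 4.0 * a * w)) / (2.0 * a_safe)
    v = np.where(np.abs(a) < 1e-12, w, v_general)
    return u, v

rng = np.random.default_rng(0)
theta = 0.8
u, v = sample_fgm(theta, 100_000, rng)
rho, _ = spearmanr(u, v)
tau, _ = kendalltau(u, v)
print(rho, theta / 3.0)        # sample Spearman rho vs theoretical theta/3
print(tau, 2.0 * theta / 9.0)  # sample Kendall tau vs theoretical 2*theta/9
```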
8.
We consider non-parametric estimation of a continuous cdf of a random vector (X1, X2). With bivariate right-censored (RC) data, it is stated in van der Laan (1996, p. 59810, Ann. Statist.), Quale et al. (2006, JASA), etc., that "it is well known that the NPMLE for continuous data is inconsistent (Tsai et al. (1986))." The claim is based on a result in Tsai et al. (1986, p. 1352, Ann. Statist.) that if X1 is right censored but X2 is not, then common ways of defining one NPMLE lead to inconsistency. If X1 is right censored and X2 is type I right-censored (which includes the case in Tsai et al.), we present a consistent NPMLE. The result corrects a common misinterpretation of Tsai's example (Tsai et al., 1986, Ann. Statist.).
9.
There is an emerging consensus in empirical finance that realized volatility series typically display long-range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d via higher-order theory. Standard asymptotic theory has an O(n^(-1/2)) error rate for error rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^(-1/2)). The new formula is independent of unknown parameters, simple to calculate, and user-friendly. The method is applied to test whether the reported long-memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and it generally confirms earlier findings.
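The higher-order refinement based on Lieberman and Phillips is not reproduced here; as a baseline, the sketch below implements the standard log-periodogram (GPH) regression estimator of d, with the bandwidth rule m = n^(1/2) chosen purely for illustration.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the long-memory parameter d.

    Regresses log I(lambda_j) on log(4 sin^2(lambda_j / 2)) over the first m
    Fourier frequencies; the slope of that regression is -d.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.floor(n ** 0.5))  # illustrative bandwidth choice
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    periodogram = np.abs(dft) ** 2 / (2.0 * np.pi * n)
    regressor = np.log(4.0 * np.sin(freqs / 2.0) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope

# toy check on white noise (d should be near 0)
rng = np.random.default_rng(0)
print(gph_estimate(rng.normal(size=2000)))
```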
10.
Communications in Statistics - Theory and Methods, 2012, 41(2): 343-360
This paper presents robust Bayesian inference based on the γ-divergence, which is the same divergence as the "type 0 divergence" of Jones et al. (2001), building on Windham (1995). It is known that the minimum γ-divergence estimator works well for estimating the probability density of heavily contaminated data and for estimating variance parameters. In this paper, we propose a posterior distribution that is robust against outliers based on the γ-divergence and show the asymptotic properties of the proposed estimator. We also discuss some robustness properties of the proposed estimator and illustrate its performance in simulation studies.
11.
Soo Hak Sung, Communications in Statistics - Theory and Methods, 2013, 42(9): 1663-1674
A complete convergence theorem for an array of rowwise independent random variables was established by Sung et al. (2005). This result has been generalized and extended by Kruglov et al. (2006) and Chen et al. (2007). In this article, we extend the results of Sung et al. (2005), Kruglov et al. (2006), and Chen et al. (2007) to an array of dependent random variables satisfying Hoffmann-Jørgensen type inequalities.
12.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. [18]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain some noise variables. Compared with k-means, FCM, and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the proposed SVD weighting method provides better clustering performance under the error-rate criterion. Furthermore, the complexity of the proposed SVD approach is less than that of Pal et al. [17], Wang et al. [19], and Hung et al. [9].
13.
This paper addresses a generalization of the bivariate Cauchy distribution discussed by Fang et al. (1990), derived from a trivariate normal distribution with a general correlation matrix. We obtain explicit expressions for the joint distribution function and joint density function, and show that in a special case they reduce to the corresponding expressions of Fang et al. (1990). Finally, we show that this generalized distribution is useful in determining the orthant probability of the bivariate skew-normal distribution of Azzalini and Dalla Valle (1996).
14.
Considering the Wald, score, and likelihood ratio asymptotic test statistics, we analyze a multivariate null-intercept errors-in-variables regression model in which both the explanatory and the response variables are subject to measurement error, and a possible structure of dependency between the measurements taken on the same individual is incorporated, representing a longitudinal structure. This model was proposed by Aoki et al. (2003b) and analyzed under the Bayesian approach. In this article, taking the classical approach, we analyze the asymptotic test statistics and present a simulation study to compare the behavior of the three statistics for different sample sizes, parameter values, and nominal levels of the test. Closed-form expressions for the score function and the Fisher information matrix are also presented. We consider two real numerical illustrations: the odontological data set of Hadgu and Koch (1999) and a quality-control data set.
15.
Arnold Zellner, Tomohiro Ando, Nalan Baştürk, Herman K. van Dijk, Econometric Reviews, 2014, 33(1-4): 3-35
We discuss Bayesian inferential procedures within the family of instrumental variables regression models and focus on two issues: existence conditions for posterior moments of the parameters of interest under a flat prior, and the potential of Direct Monte Carlo (DMC) approaches for efficient evaluation of such possibly highly non-elliptical posteriors. We show that, for the general case of m endogenous variables under a flat prior, posterior moments of order r exist for the coefficients reflecting the endogenous regressors' effect on the dependent variable if the number of instruments is greater than m + r, even though there is an issue of local non-identification that causes non-elliptical shapes of the posterior. This stresses the need for efficient Monte Carlo integration methods. We introduce an extension of DMC that incorporates an acceptance-rejection sampling step within DMC. This Acceptance-Rejection within Direct Monte Carlo (ARDMC) method has the attractive property that the generated random drawings are independent, which greatly helps the fast convergence of simulation results and facilitates the evaluation of numerical accuracy. The speed of ARDMC can easily be improved further by making use of parallelized computation on multiple-core machines or computer clusters. We note that ARDMC is an analogue of the well-known "Metropolis-Hastings within Gibbs" sampling, in the sense that one 'more difficult' step is used within an 'easier' simulation method. We compare the ARDMC approach with the Gibbs sampler using simulated data and two empirical data sets, involving the settler mortality instrument of Acemoglu et al. (2001) and the father's education instrument used by Hoogerheide et al. (2012a). Even without making use of parallelized computation, an efficiency gain is observed under both strong and weak instruments, and the gain can be enormous in the latter case.
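The ARDMC candidate densities of the article are not reproduced here; the sketch below only illustrates the generic acceptance-rejection step that the method embeds within Direct Monte Carlo, with a toy target (a half-normal) and a proposal chosen purely for illustration.

```python
import numpy as np

def accept_reject(target_pdf, proposal_sampler, proposal_pdf, bound, n, rng):
    """Generic acceptance-rejection sampling: draws from target_pdf using a
    proposal whose density, scaled by `bound`, dominates the target."""
    draws = []
    while len(draws) < n:
        x = proposal_sampler(rng)
        u = rng.uniform()
        if u * bound * proposal_pdf(x) <= target_pdf(x):
            draws.append(x)
    return np.array(draws)

# toy example: sample a half-normal on [0, inf) from a N(0, 1) proposal
rng = np.random.default_rng(0)
target = lambda x: np.exp(-0.5 * x * x) * (x >= 0)  # unnormalized half-normal
proposal_pdf = lambda x: np.exp(-0.5 * x * x)       # unnormalized N(0, 1)
samples = accept_reject(target, lambda r: r.normal(), proposal_pdf, 1.0, 5000, rng)
```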
16.
Oluseun Odumade, Communications in Statistics - Simulation and Computation, 2013, 42(3): 473-502
In this article, two new improved randomized response models are proposed. The proposed models are found to be more efficient than the recent randomized response model studied by Bar-Lev et al. (2004). The relative efficiency of the proposed models with respect to the Bar-Lev et al. (2004) model is studied under different situations.
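The Bar-Lev et al. (2004) model and the proposed improvements are not reproduced here; as background, the sketch below implements the classic Warner (1965) randomized response estimator, showing how a sensitive proportion is recovered from randomized answers. The design probability, sample size, and true proportion are illustrative.

```python
import numpy as np

def warner_estimate(yes_count, n, p):
    """Warner (1965) randomized response estimator of a sensitive proportion.

    Each respondent answers the sensitive question with probability p and its
    complement with probability 1 - p, so P(yes) = p*pi + (1 - p)*(1 - pi).
    """
    phat = yes_count / n
    pi_hat = (phat - (1.0 - p)) / (2.0 * p - 1.0)
    var_hat = pi_hat * (1.0 - pi_hat) / n + p * (1.0 - p) / (n * (2.0 * p - 1.0) ** 2)
    return pi_hat, var_hat

# simulated survey: true proportion 0.2, design probability p = 0.7
rng = np.random.default_rng(0)
n, p, pi_true = 2000, 0.7, 0.2
sensitive = rng.uniform(size=n) < pi_true
ask_true_question = rng.uniform(size=n) < p
answers = np.where(ask_true_question, sensitive, ~sensitive)
print(warner_estimate(answers.sum(), n, p))
```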
17.
Junyong Park, Jayson D. Wilbur, Jayanta K. Ghosh, Cindy H. Nakatsu, Corinne Ackerman, Communications in Statistics - Simulation and Computation, 2013, 42(4): 855-869
We adopt boosting for the classification and selection of high-dimensional binary variables, for which classical methods based on normality and a nonsingular sample dispersion matrix are inapplicable. Boosting seems particularly well suited to binary variables. We present three methods, two of which combine boosting with the relatively classical variable selection methods developed in Wilbur et al. (2002). Our primary interest is variable selection in classification, with small misclassification error used to validate the proposed variable selection methods. Two of the new methods perform uniformly better than Wilbur et al. (2002) in one set of simulated and three real-life examples.
18.
Pao-sheng Shen, Communications in Statistics - Simulation and Computation, 2013, 42(4): 531-543
Double censoring arises when T represents an outcome variable that can only be accurately measured within a certain range [L, U], where L and U are the left- and right-censoring variables, respectively. When L is always observed, we consider empirical likelihood inference for linear transformation models based on the martingale-type estimating equation proposed by Chen et al. (2002). It is demonstrated that both the approach of Lu and Liang (2006) and that of Yu et al. (2011) can be extended to doubly censored data. Simulation studies are conducted to investigate the performance of the empirical likelihood ratio methods.
19.
When the mixed chart proposed by Aslam et al. (2015) is in use, the sample items are classified as defective or non-defective and, depending on the number of defectives, the quality characteristic X of the sample items is also measured. In this case, an X-bar chart decides the state of the process. The preceding conforming/non-conforming classification truncates the X distribution and, because of that, the mathematical development needed to obtain the ARLs is complex. Aslam et al. (2015) did not take into account that the X distribution is truncated and, as a result, obtained incorrect ARLs.
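A minimal numerical illustration of the truncation effect, not the corrected ARL derivation of this comment: if X is recorded only for items classified as conforming to hypothetical specification limits, the recorded X follows a truncated normal, and an in-control ARL computed as if X were untruncated is noticeably off. All limits and sample sizes below are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm, truncnorm

mu, sigma, n = 0.0, 1.0, 5
LSL, USL = -2.0, 2.0                                 # hypothetical classification limits
a, b = (LSL - mu) / sigma, (USL - mu) / sigma
trunc = truncnorm(a, b, loc=mu, scale=sigma)         # X given "conforming"

limit = 3.0 * sigma / np.sqrt(n)                     # usual 3-sigma X-bar limits
rng = np.random.default_rng(0)
xbar = trunc.rvs(size=(400_000, n), random_state=rng).mean(axis=1)
p_trunc = np.mean(np.abs(xbar - mu) > limit)         # signal probability with truncation
p_naive = 2.0 * norm.sf(3.0)                         # signal probability ignoring truncation
print(1.0 / p_trunc, 1.0 / p_naive)                  # the two in-control ARLs differ
```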
20.
N. K. Mandal, Communications in Statistics - Theory and Methods, 2013, 42(10): 1565-1575
In a mixture experiment, the measured response is assumed to depend only on the relative proportions of the ingredients or components present in the mixture. Scheffé (1958, 1963) first systematically considered this problem and introduced models and designs suitable for such situations. Optimum designs for the estimation of the parameters of different mixture models are available in the literature. The problem of estimating the optimum proportions of the mixture components is of great practical importance. Pal and Mandal (2006, 2007) attempted to solve this problem by adopting a pseudo-Bayesian approach and using the trace criterion; subsequently, Pal and Mandal (2008) solved it using the minimax criterion. In this article, the deficiency criterion of Chatterjee and Mandal (1981) is used as a measure for comparing the performance of competing designs.
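As a reminder of the setting, the sketch below writes down the Scheffé quadratic mixture model for three components and locates the proportions maximizing a fitted response over the simplex; the coefficients are hypothetical, and none of the design criteria discussed in the article (trace, minimax, deficiency) are implemented.

```python
import numpy as np
from scipy.optimize import minimize

def scheffe_quadratic(x, beta):
    """Scheffe quadratic mixture model for 3 components:
    eta = b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3."""
    x1, x2, x3 = x
    terms = np.array([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])
    return terms @ beta

# hypothetical fitted coefficients
beta_hat = np.array([4.0, 6.0, 5.0, 8.0, 2.0, 3.0])

# maximize the fitted response over the simplex x1 + x2 + x3 = 1, x_i >= 0
res = minimize(lambda x: -scheffe_quadratic(x, beta_hat),
               x0=np.array([1 / 3, 1 / 3, 1 / 3]),
               bounds=[(0.0, 1.0)] * 3,
               constraints={"type": "eq", "fun": lambda x: x.sum() - 1.0})
print(res.x, -res.fun)  # estimated optimum proportions and fitted response
```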