Found 20 similar articles (search time: 31 ms)
1.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: a fudge factor is added to the denominator of the test statistic to deflate the large values that arise when a gene's expression has a small standard error. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared to the SAM procedure without it. Motivated by the simulation results in Lin et al. (2008), in this article we extend that study to compare several methods for choosing the fudge factor in modified t-type test statistics, and we use simulation studies to investigate the power and FDR control of the considered methods.
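As a rough illustration of the fudge-factor idea described in this abstract, here is a minimal sketch of a SAM-style modified t-type statistic. The function name, the toy data, and the choice s0 = 0.5 are ours, and this is not the full SAM procedure (which also resamples to calibrate the statistic and chooses s0 data-adaptively):

```python
import math

def sam_statistic(group1, group2, s0=0.0):
    """Modified two-sample t-type statistic with a fudge factor s0,
    in the spirit of SAM (Tusher et al., 2001): d = (m1 - m2) / (s + s0)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # pooled standard error of the mean difference
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    s = math.sqrt(pooled * (1 / n1 + 1 / n2))
    return (m1 - m2) / (s + s0)

# A gene with tiny variance produces an inflated plain t-statistic;
# adding the fudge factor to the denominator damps it.
g1 = [10.00, 10.01, 10.02]
g2 = [10.03, 10.04, 10.05]
d_plain = sam_statistic(g1, g2, s0=0.0)   # large in magnitude
d_fudge = sam_statistic(g1, g2, s0=0.5)   # damped by the fudge factor
```

The deflation is exactly the behavior Lin et al. (2008) question for small-variance genes: the fudge factor shrinks all such statistics toward zero, whether or not the gene is differentially expressed.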
2.
Feng-Shou Ko, Communications in Statistics - Theory and Methods, 2013, 42(15): 2681-2698
A proposed method based on frailty models is used to identify longitudinal biomarkers or surrogates for multivariate survival times. This method is an extension of the earlier models of Wulfsohn and Tsiatis (1997) and Song et al. (2002). In this article, similar to Henderson et al. (2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and the multivariate survival times. We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects from the longitudinal biomarkers influence the power to detect an association between a longitudinal biomarker and the multivariate survival time. The proposed method is illustrated using the gastric cancer data.
3.
Soo Hak Sung, Communications in Statistics - Theory and Methods, 2013, 42(9): 1663-1674
A complete convergence theorem for an array of rowwise independent random variables was established by Sung et al. (2005). This result has been generalized and extended by Kruglov et al. (2006) and Chen et al. (2007). In this article, we extend the results of Sung et al. (2005), Kruglov et al. (2006), and Chen et al. (2007) to an array of dependent random variables satisfying Hoffmann-Jørgensen type inequalities.
4.
This paper reviews and extends the literature on the finite sample behavior of tests for sample selection bias. Monte Carlo results show that, when the "multicollinearity problem" identified by Nawata (1993) is severe, (i) the t-test based on the Heckman–Greene variance estimator can be unreliable, (ii) the Likelihood Ratio test remains powerful, and (iii) nonnormality can be interpreted as severe sample selection bias by Maximum Likelihood methods, leading to negative Wald statistics. We also confirm previous findings (Leung and Yu, 1996) that the standard regression-based t-test (Heckman, 1979) and the asymptotically efficient Lagrange Multiplier test (Melino, 1982) are robust to nonnormality but have very little power.
5.
Based on the semiparametric median regression analysis for right-censored data developed by Ying et al. (1995), an empirical likelihood based inferential procedure for the regression coefficients is proposed. The limiting distribution of the proposed log-empirical likelihood ratio test statistic follows a chi-squared distribution, which corresponds to the standard asymptotic results of the empirical likelihood method. Inference about subsets of the entire regression coefficient vector is also discussed. The proposed method is illustrated by some simulation studies.
6.
In this article, we introduce shared gamma frailty models with three different baseline distributions, namely the Weibull, generalized exponential, and exponential power distributions. We develop a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in these models. We present a simulation study to compare the true values of the parameters with the estimated values. We also apply the three models to the real-life bivariate survival dataset of McGilchrist and Aisbett (1991) on kidney infection, and a better-fitting model is suggested for the data.
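For intuition about the data-generating mechanism in this abstract, the following sketch simulates bivariate survival times under a shared gamma frailty model with one of the three baselines (Weibull). The parameterization h0(t) = lam * rho * t**(rho - 1), the mean-one frailty convention, and all parameter values are our illustrative assumptions, not those of the article:

```python
import math
import random

def simulate_shared_frailty(n_pairs, theta, lam, rho, seed=0):
    """Simulate bivariate survival times from a shared gamma frailty model
    with Weibull baseline hazard h0(t) = lam * rho * t**(rho - 1).
    The shared frailty w ~ Gamma(shape = 1/theta, scale = theta) has mean 1
    and variance theta; given w, S(t | w) = exp(-w * lam * t**rho), so
    inverse-transform sampling gives T = (-log U / (w * lam))**(1 / rho)."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        w = rng.gammavariate(1.0 / theta, theta)  # frailty shared within the pair
        u1, u2 = 1.0 - rng.random(), 1.0 - rng.random()  # uniforms in (0, 1]
        t1 = (-math.log(u1) / (w * lam)) ** (1.0 / rho)
        t2 = (-math.log(u2) / (w * lam)) ** (1.0 / rho)
        pairs.append((t1, t2))
    return pairs

# The shared frailty induces positive dependence within each pair,
# which is what makes the model suitable for bivariate survival data
# such as the two kidney-infection times per patient.
pairs = simulate_shared_frailty(n_pairs=2000, theta=1.0, lam=0.5, rho=1.5)
```

Swapping the inverse cumulative baseline hazard changes the baseline family; the frailty mechanism stays the same.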
7.
Shesh N. Rai, Jianmin Pan, Xiaobin Yuan, Jianguo Sun, Melissa M. Hudson, Deo K. Srivastava, Communications in Statistics - Theory and Methods, 2013, 42(17): 3117-3133
New drug discovery in pediatrics has dramatically improved survival, but with long-term adverse events. This motivates the examination of adverse outcomes such as long-term toxicity in a phase IV trial. An ideal approach to monitoring long-term toxicity is to systematically follow the survivors, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), with one objective being to estimate the cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We apply inference procedures based on current status data to our motivating example, with very interesting findings.
8.
Zheng Su, Communications in Statistics - Simulation and Computation, 2013, 42(8): 1163-1170
Johns (1988), Davison (1988), and Do and Hall (1991) used importance sampling for calculating bootstrap distributions of one-dimensional statistics. Realizing that their methods cannot be extended easily to multi-dimensional statistics, Fuh and Hu (2004) proposed an exponential tilting formula for multi-dimensional statistics, which is optimal in the sense that the asymptotic variance is minimized for estimating tail probabilities of asymptotically normal statistics. For one-dimensional statistics, Hu and Su (2008) proposed a multi-step variance minimization approach that can be viewed as a generalization of the two-step variance minimization approach of Do and Hall (1991). In this article, we generalize the approach of Hu and Su (2008) to multi-dimensional statistics; the generalization applies to general statistics and does not resort to asymptotics. Empirical results on a real survival dataset show that the proposed algorithm provides significant computational efficiency gains.
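To make the importance-sampling idea behind this line of work concrete, here is a minimal one-dimensional exponentially tilted bootstrap sketch in the spirit of Johns (1988) and Do and Hall (1991). It is not the Fuh–Hu or Hu–Su algorithm: the tilt parameter lam is fixed by hand rather than chosen by variance minimization, and the statistic is simply the resample mean:

```python
import math
import random

def tilted_bootstrap_tail(data, t, lam, n_boot=2000, seed=1):
    """Estimate the bootstrap tail probability P*(resample mean >= t)
    by importance sampling: draw resamples with exponentially tilted
    weights q_i proportional to exp(lam * x_i) instead of uniform
    weights 1/n, and correct each resample by its likelihood ratio
    prod_j (1/n) / q_{i_j}.  Tilting toward the tail (lam > 0 here)
    makes rare tail resamples frequent, reducing estimator variance."""
    n = len(data)
    rng = random.Random(seed)
    raw = [math.exp(lam * x) for x in data]
    z = sum(raw)
    q = [r / z for r in raw]
    est = 0.0
    for _ in range(n_boot):
        idx = rng.choices(range(n), weights=q, k=n)
        lr = 1.0
        for i in idx:
            lr *= (1.0 / n) / q[i]
        if sum(data[i] for i in idx) / n >= t:
            est += lr
    return est / n_boot

data = [0.2, -0.5, 1.3, 0.7, -0.1, 0.4, 0.9, -0.8, 0.3, 0.6]
p_hat = tilted_bootstrap_tail(data, t=0.8, lam=1.0)
```

The multi-dimensional extensions discussed in the abstract generalize precisely the choice of tilting direction and magnitude; the likelihood-ratio correction carries over unchanged.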
9.
We consider non-parametric estimation of a continuous cdf of a random vector (X1, X2). For bivariate right-censored (RC) data, it is stated in van der Laan (1996, p. 59810, Ann. Statist.), Quale et al. (2006, JASA), etc., that "it is well known that the NPMLE for continuous data is inconsistent (Tsai et al. (1986))." The claim is based on a result in Tsai et al. (1986, p. 1352, Ann. Statist.) that if X1 is right censored but X2 is not, then common ways of defining one NPMLE lead to inconsistency. If X1 is right censored and X2 is type I right-censored (which includes the case in Tsai et al.), we present a consistent NPMLE. The result corrects a common misinterpretation of Tsai's example (Tsai et al., 1986, Ann. Statist.).
10.
Here, we apply the smoothing technique proposed by Chaubey et al. (2007) to the empirical survival function studied in Bagai and Prakasa Rao (1991) for a sequence of stationary non-negative associated random variables. The derivative of this estimator in turn is used to propose a nonparametric density estimator. The asymptotic properties of the resulting estimators are studied and contrasted with some competing estimators. A simulation study is carried out comparing with the recent estimator based on Poisson weights (Chaubey et al., 2011), showing that the two estimators have comparable finite-sample global as well as local behavior.
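For readers unfamiliar with the Poisson-weight smoothing mentioned here, the following sketch shows one common recipe of this type for smoothing a survival function. The exact form S_tilde(t) = sum_k Pois(k; lam*t) S(k/lam), the data, and the smoothing parameter lam = 4 are our assumptions for illustration and may differ in detail from the Chaubey et al. (2011) estimator:

```python
import math

def empirical_survival(data):
    """Empirical survival function S_hat(t) = #{x_i > t} / n."""
    n = len(data)
    return lambda t: sum(1 for x in data if x > t) / n

def poisson_smooth(S, lam, k_max=200):
    """Poisson-weight smoothing of a survival function:
    S_tilde(t) = sum_{k>=0} e^{-lam*t} (lam*t)^k / k! * S(k / lam),
    truncated at k_max terms (a negligible tail for moderate lam*t)."""
    def S_tilde(t):
        mu = lam * t
        total, pk = 0.0, math.exp(-mu)  # pk = Poisson pmf at k
        for k in range(k_max):
            total += pk * S(k / lam)
            pk *= mu / (k + 1)
        return total
    return S_tilde

data = [0.3, 0.9, 1.2, 2.1, 2.5, 3.3, 4.0, 5.6]
S_smooth = poisson_smooth(empirical_survival(data), lam=4.0)
# S_smooth is a smooth, monotone approximation of the step-function
# empirical survival curve; a larger lam tracks the steps more closely.
```

Differentiating S_tilde in t yields a smooth density estimator, which is the route the abstract describes.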
11.
Hafiz M. R. Khan, Communications in Statistics - Theory and Methods, 2013, 42(24): 4427-4438
The purpose of this article is to investigate predictive inference for responses from the location-parameter mean as well as from the median, given a doubly censored sample from the two-parameter Rayleigh model. The predictive results of Khan et al. (2010), who obtained future estimates from the mean, are used to obtain predictive inference for responses from the median. A numerical example involving 66 liver cancer patients is used for predictive analysis. It is concluded that predictive inference from the median gives more precise results than that from the location-parameter mean.
12.
Pao-sheng Shen, Communications in Statistics - Simulation and Computation, 2013, 42(4): 531-543
Double censoring arises when T represents an outcome variable that can only be accurately measured within a certain range [L, U], where L and U are the left- and right-censoring variables, respectively. When L is always observed, we consider empirical likelihood inference for linear transformation models, based on the martingale-type estimating equation proposed by Chen et al. (2002). It is demonstrated that both the approach of Lu and Liang (2006) and that of Yu et al. (2011) can be extended to doubly censored data. Simulation studies are conducted to investigate the performance of the empirical likelihood ratio methods.
13.
Pao-Sheng Shen, Communications in Statistics - Simulation and Computation, 2013, 42(10): 2295-2307
Cai and Zeng (2011) proposed an additive mixed effect model to analyze clustered right-censored data. In this article, we demonstrate that the approach of Cai and Zeng (2011) can be extended to clustered doubly censored data. Furthermore, when both left- and right-censoring variables are always observed, we propose alternative estimators using the approach of Cai and Cheng (2004). A simulation study is conducted to investigate the performance of the proposed estimators.
14.
Olivier Darné, Communications in Statistics - Simulation and Computation, 2013, 42(5): 1037-1050
The unit root tests with structural break developed by Zivot and Andrews (1992) and Perron and Rodriguez (2003) are studied by simulation experiments in the presence of additive outliers and breaks. The results show that the Zivot–Andrews test suffers size distortions due to the additive outliers, whereas the Perron–Rodriguez test exhibits good size and power properties. However, both tests are biased when a second break is present but not taken into account. Furthermore, these endogenous-break unit root tests tend to locate the break point incorrectly, one period behind the true break point, leading to spurious rejections of the unit root null hypothesis.
15.
Feng-Shou Ko, Communications in Statistics - Theory and Methods, 2013, 42(18): 3222-3237
We introduce a score test to identify longitudinal biomarkers or surrogates for a time-to-event outcome. This method is an extension of Henderson et al. (2000, 2002). In this article, the score test is based on a joint likelihood function that combines the likelihood functions of the longitudinal biomarkers and the survival times. Henderson et al. (2000, 2002) assumed that the same random effect appears in the longitudinal component and in the Cox model, from which they derived a score test to determine whether a longitudinal biomarker is associated with time to an event. We extend this work: our score test is based on a joint likelihood function that allows other random effects to be present in the survival function. Allowing heterogeneous baseline hazards across individuals, we use simulations to explore how several factors influence the power of the score test to detect an association between a longitudinal biomarker and the survival time. These factors include the functional form of the random effects from the longitudinal biomarkers, the number of individuals, and the number of time points per individual. We illustrate our method using the prothrombin index as a predictor of survival in liver cirrhosis patients.
16.
In this article, we improve upon the Singh and Grewal (2013) and Hussain et al. (2016) techniques by introducing a new two-stage randomized response process. Using the proposed technique, we achieve better efficiency and greater protection of respondents' privacy than the Kuk (1990), Singh and Grewal (2013), and Hussain et al. (2016) models. The relative efficiency and respondent protection of the proposed two-stage randomization device are investigated through a simulation study, and the situations in which the proposed estimator performs better than its competitors are reported. The SAS code used to investigate the performance of the proposed strategy is also provided.
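The models compared in this abstract are all variants of randomized response. As a hedged illustration of the basic mechanism only, here is a simulation of the classic Warner (1965) design and its unbiased estimator, not the proposed two-stage device or the Kuk, Singh–Grewal, or Hussain models; all parameter values are our illustrative choices:

```python
import random

def warner_simulation(pi_true, p, n, seed=7):
    """Warner-style randomized response: with probability p the respondent
    answers the sensitive question directly, and with probability 1 - p
    answers the complementary question, so no individual answer reveals
    group membership.  Unbiased estimator of the sensitive proportion:
    pi_hat = (prop_yes - (1 - p)) / (2 * p - 1), valid for p != 1/2."""
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        member = rng.random() < pi_true   # respondent's true sensitive status
        direct = rng.random() < p         # outcome of the randomization device
        answer = member if direct else (not member)
        yes += answer
    prop_yes = yes / n
    return (prop_yes - (1 - p)) / (2 * p - 1)

pi_hat = warner_simulation(pi_true=0.30, p=0.75, n=20000)
```

Two-stage designs such as the one proposed here add a second randomization layer; efficiency and privacy-protection comparisons then trade off the variance inflation of the device against how well individual answers are masked.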
17.
We propose a class of estimators for the population mean when there are missing data in the data set. We obtain the mean square error equations of the proposed estimators and establish the conditions under which the proposed estimators are more efficient than the sample mean, ratio-type estimators, and the estimators of Singh and Horn (2000) and Singh and Deo (2003) in the case of missing data. These conditions are also supported by a numerical example.
18.
When the mixed chart proposed by Aslam et al. (2015) is in use, the sample items are classified as defective or non-defective and, depending on the number of defectives, the quality characteristic X of the sample items is also measured. In this case, an Xbar chart decides the state of the process. The preceding conforming/non-conforming classification truncates the X distribution and, because of that, the mathematical development required to obtain the ARLs is complex. Aslam et al. (2015) overlooked the fact that the X distribution is truncated and, as a result, obtained incorrect ARLs.
19.
Oluseun Odumade, Communications in Statistics - Simulation and Computation, 2013, 42(3): 473-502
In this article, two new improved randomized response models are proposed. The proposed models are found to be more efficient than the recent randomized response model studied by Bar-Lev et al. (2004). The relative efficiency of the proposed models has been studied with respect to the Bar-Lev et al. (2004) model under different situations.
20.
Grzegorz Wyłupek, Communications in Statistics - Theory and Methods, 2013, 42(7): 1406-1427
This article proposes a new nonparametric test for the ordered alternatives problem in the k-sample setting, with a null hypothesis of lack of trend. The article elaborates upon and extends the results of Ledwina and Wyłupek (2012a) obtained for k = 2. Simulations show that the new test has high and stable power and controls the Type I error to a satisfactory extent, thus solving the problem posed in Terpstra and Magel (2003). Our theoretical results show that the asymptotic errors of both kinds do not exceed the significance level, implying that the test is asymptotically unbiased.