20 similar documents found; search took 46 ms.
1.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR with a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic. The fudge factor is introduced into the test statistic to deflate test statistics that are large only because the standard error of the gene expression is small. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared with the SAM procedure without the fudge factor. Motivated by the simulation results presented in Lin et al. (2008), in this article we extend the study to compare several methods for choosing the fudge factor in modified t-type test statistics and use simulation studies to investigate the power and FDR control of the considered methods.
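To make the role of the fudge factor concrete, the following sketch computes SAM-style modified t statistics of the form d_i = (mean difference)_i / (s_i + s0). It is an illustrative simplification, not the SAM implementation: the function name, the pooled standard error, and the fixed s0 value are our assumptions (SAM chooses s0 from the data).

```python
import numpy as np

def sam_statistic(group1, group2, s0=0.0):
    """SAM-style modified t statistic per gene (rows are genes,
    columns are arrays); s0 is the fudge factor added to the
    gene-wise standard error."""
    g1 = np.asarray(group1, dtype=float)
    g2 = np.asarray(group2, dtype=float)
    n1, n2 = g1.shape[1], g2.shape[1]
    diff = g1.mean(axis=1) - g2.mean(axis=1)
    # pooled standard error, as in the ordinary two-sample t statistic
    pooled = ((n1 - 1) * g1.var(axis=1, ddof=1) +
              (n2 - 1) * g2.var(axis=1, ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(pooled * (1.0 / n1 + 1.0 / n2))
    return diff / (se + s0)

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 5))        # 100 genes, 5 arrays per group
y = rng.normal(size=(100, 5))
d_plain = sam_statistic(x, y, s0=0.0)
d_fudge = sam_statistic(x, y, s0=0.5)
# the fudge factor shrinks every statistic toward zero, taming genes
# whose statistics are large only because their standard error is tiny
print(bool(np.all(np.abs(d_fudge) <= np.abs(d_plain))))  # True
```

The shrinkage is uniform in se, but it bites hardest exactly where se is small, which is the behavior discussed in the abstract above.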
2.
Feng-Shou Ko, Communications in Statistics - Theory and Methods, 2013, 42(15): 2681-2698
A proposed method based on frailty models is used to identify longitudinal biomarkers or surrogates for multivariate survival. This method is an extension of the earlier models of Wulfsohn and Tsiatis (1997) and Song et al. (2002). In this article, similar to Henderson et al. (2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and the multivariate survival times. We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects from the longitudinal biomarkers influence the power to detect the association between a longitudinal biomarker and the multivariate survival time. The proposed method is illustrated using the gastric cancer data.
3.
Shesh N. Rai, Jianmin Pan, Xiaobin Yuan, Jianguo Sun, Melissa M. Hudson, Deo K. Srivastava, Communications in Statistics - Theory and Methods, 2013, 42(17): 3117-3133
New drug discovery in pediatrics has dramatically improved survival, but with long-term adverse events. This motivates the examination of adverse outcomes such as long-term toxicity in a phase IV trial. An ideal approach to monitoring long-term toxicity is to systematically follow the survivors, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), with one objective being to estimate the cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We present inference procedures based on current status data and apply them to our motivating example, with very interesting findings.
4.
We consider non-parametric estimation of a continuous cdf of a random vector (X1, X2). For bivariate right-censored (RC) data, it is stated in van der Laan (1996, p. 59810, Ann. Statist.), Quale et al. (2006, JASA), etc., that "it is well known that the NPMLE for continuous data is inconsistent (Tsai et al. (1986))." The claim is based on a result in Tsai et al. (1986, p. 1352, Ann. Statist.) that if X1 is right censored but not X2, then common ways of defining one NPMLE lead to inconsistency. If X1 is right censored and X2 is type I right-censored (which includes the case in Tsai et al.), we present a consistent NPMLE. The result corrects a common misinterpretation of Tsai's example (Tsai et al., 1986, Ann. Statist.).
5.
Soo Hak Sung, Communications in Statistics - Theory and Methods, 2013, 42(9): 1663-1674
A complete convergence theorem for an array of rowwise independent random variables was established by Sung et al. (2005). This result has been generalized and extended by Kruglov et al. (2006) and Chen et al. (2007). In this article, we extend the results of Sung et al. (2005), Kruglov et al. (2006), and Chen et al. (2007) to an array of dependent random variables satisfying Hoffmann-Jørgensen type inequalities.
6.
In this article, we obtain a dependence measure for the generalized Farlie-Gumbel-Morgenstern (FGM) family in the spirit of Kochar and Gupta (1987) and then compare this measure with Spearman's rho and Kendall's tau in the FGM family. Moreover, we evaluate the empirical power of the class of distribution-free tests proposed by Kochar and Gupta (1987, 1990) based on the exact distribution of a U-statistic. This is derived via a simulation study for samples of sizes n = 6, 8, 10, 12, 16, and 20. We also compare our simulation results with those achieved by Amini et al. (2010) and Güven and Kotz (2008).
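For reference, the FGM copula C(u, v) = uv[1 + theta(1 - u)(1 - v)] has Spearman's rho equal to theta/3 and Kendall's tau equal to 2*theta/9. The sketch below (our own illustration, not the authors' code) draws from the copula by conditional inversion and checks the rho formula empirically:

```python
import numpy as np

def fgm_sample(theta, n, rng):
    """Draw (U, V) from the FGM copula C(u,v) = uv[1 + theta(1-u)(1-v)]
    by conditional inversion; valid for -1 <= theta <= 1."""
    u = rng.random(n)
    w = rng.random(n)                       # uniform draw for V | U = u
    a = theta * (1.0 - 2.0 * u)
    safe_a = np.where(np.abs(a) < 1e-12, 1.0, a)
    # the conditional cdf of V given U=u is v + a*v*(1-v); solving
    # a*v^2 - (1+a)*v + w = 0 gives the root in [0, 1]
    v = ((1.0 + a) - np.sqrt((1.0 + a) ** 2 - 4.0 * a * w)) / (2.0 * safe_a)
    v = np.where(np.abs(a) < 1e-12, w, v)   # independence limit a -> 0
    return u, v

rng = np.random.default_rng(1)
theta = 0.8
u, v = fgm_sample(theta, 200_000, rng)
# empirical Spearman's rho; for the FGM family it equals theta / 3
ranks_u = np.argsort(np.argsort(u))
ranks_v = np.argsort(np.argsort(v))
rho = float(np.corrcoef(ranks_u, ranks_v)[0, 1])
```

Note that rho never exceeds 1/3 in absolute value within the FGM family, which is why the family only models weak dependence.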
7.
Abstract: In this article, we improve upon the Singh and Grewal (2013) and Hussain et al. (2016) techniques by introducing a new two-stage randomized response process. Using the proposed technique, we achieve better efficiency and greater protection of respondent privacy than the Kuk (1990), Singh and Grewal (2013), and Hussain et al. (2016) models. The relative efficiency and respondent protection of the proposed two-stage randomization device are investigated through a simulation study, and we report the situations in which the proposed estimator performs better than its competitors. The SAS code used to investigate the performance of the proposed strategy is also provided.
8.
We consider a new generalization of the skew-normal distribution introduced by Azzalini (1985). We call this distribution the Beta skew-normal (BSN), since it is a special case of the Beta generated distribution (Jones, 2004). Some properties of the BSN are studied. We pay attention to some generalizations of the skew-normal distribution (Bahrami et al., 2009; Sharafi and Behboodian, 2008; Yadegari et al., 2008) and to their relations with the BSN.
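As an illustration of the Beta-generated construction, the sketch below builds the BSN density g(x) = f(x) F(x)^(a-1) (1 - F(x))^(b-1) / B(a, b) from the skew-normal pdf f and cdf F, with F accumulated numerically on a grid; the function names and grid settings are our own choices, not the authors' notation:

```python
import numpy as np
from math import gamma, erf

def sn_pdf(x, lam):
    """Skew-normal density 2*phi(x)*Phi(lam*x) (Azzalini, 1985)."""
    phi = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
    Phi = 0.5 * (1.0 + np.array([erf(lam * t / np.sqrt(2.0)) for t in x]))
    return 2.0 * phi * Phi

def bsn_pdf_on_grid(grid, lam, a, b):
    """BSN density: the Beta(a, b) generator (Jones, 2004) applied to
    the skew-normal; the skew-normal cdf F is accumulated numerically."""
    f = sn_pdf(grid, lam)
    dx = grid[1] - grid[0]
    F = np.clip(np.cumsum(f) * dx, 1e-12, 1.0 - 1e-12)
    B = gamma(a) * gamma(b) / gamma(a + b)   # Beta function B(a, b)
    return f * F ** (a - 1.0) * (1.0 - F) ** (b - 1.0) / B

grid = np.linspace(-10.0, 10.0, 40001)
dx = grid[1] - grid[0]
g = bsn_pdf_on_grid(grid, lam=2.0, a=2.0, b=3.0)
area = float(np.sum(g) * dx)                 # ~1: g is a proper density
# with a = b = 1 the BSN reduces to the plain skew-normal
reduces = bool(np.allclose(bsn_pdf_on_grid(grid, 2.0, 1.0, 1.0),
                           sn_pdf(grid, 2.0)))
```

The a = b = 1 check makes the "special case" relationship in the abstract explicit: the Beta generator with uniform weights leaves the base distribution unchanged.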
9.
Abstract: When the mixed chart proposed by Aslam et al. (2015) is in use, the sample items are classified as defective or not defective and, depending on the number of defectives, the quality characteristic X of the sample items is also measured. In this case, an Xbar chart decides the state of the process. The preceding conforming/non-conforming classification truncates the X distribution and, because of that, the mathematical development to obtain the ARLs is complex. Aslam et al. (2015) did not take into account the fact that the X distribution is truncated and, as a result, obtained incorrect ARLs.
10.
Extending the bifurcating autoregressive (BAR) process (cf. Cowan and Staudte, 1986) to multi-casting (multi-splitting) data, Hwang and Choi (2009) introduced the multi-casting autoregression (MCAR, for short) defined on multi-casting tree-structured data. This article is concerned with the case in which the MCAR model is partially specified only through the conditional mean and variance, without directly imposing an autoregressive (AR) structure. The resulting class of models is referred to as P-MCAR (partially specified MCAR). The P-MCAR considerably enlarges the class of multi-casting models, including (as special cases) the MCAR, random coefficient MCAR, conditionally heteroscedastic multi-casting models, and binomial-thinning processes. Moment structures for this broad P-MCAR class are investigated. The least squares (LS) estimation method is discussed, and the asymptotic relative efficiency (ARE) of generalized LS over ordinary LS is obtained in closed form. A simulation study is conducted to illustrate the results.
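The binary special case of this tree-structured autoregression, the BAR process, is easy to simulate. The sketch below (our own illustration, with two offspring per cell, not the P-MCAR machinery) generates a bifurcating AR(1) tree and computes the ordinary LS estimate of the autoregressive coefficient from all parent-child pairs:

```python
import numpy as np

rng = np.random.default_rng(5)
phi, sigma = 0.5, 1.0
gens = 12
n = 2 ** gens                        # cells indexed 1 .. n-1 form a binary tree

# bifurcating AR(1): daughters 2t and 2t+1 of cell t each satisfy
# X_child = phi * X_parent + Gaussian noise (cf. Cowan and Staudte, 1986)
x = np.zeros(n)
x[1] = rng.normal(0.0, sigma)        # root cell
for t in range(1, n // 2):
    x[2 * t] = phi * x[t] + rng.normal(0.0, sigma)
    x[2 * t + 1] = phi * x[t] + rng.normal(0.0, sigma)

# ordinary least-squares estimate of phi from all (parent, child) pairs:
# children 2t sit at even indices, children 2t+1 at odd indices
parents = np.concatenate([x[1 : n // 2], x[1 : n // 2]])
children = np.concatenate([x[2 : n : 2], x[3 : n : 2]])
phi_hat = float(np.sum(parents * children) / np.sum(parents ** 2))
```

With about 4,000 parent-child pairs, phi_hat lands close to the true phi = 0.5, which is the kind of LS consistency the abstract's asymptotic results formalize for the broader P-MCAR class.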
11.
Communications in Statistics - Theory and Methods, 2013, 42(7): 1533-1541
ABSTRACT: The systematic sampling (SYS) design (Madow and Madow, 1944) is widely used by statistical offices due to its simplicity and efficiency (e.g., Iachan, 1982). But it suffers from a serious defect: it is impossible to unbiasedly estimate the sampling variance (Iachan, 1982), and the usual variance estimators (Yates and Grundy, 1953) are inadequate and can overestimate the variance significantly (Särndal et al., 1992). We propose a novel variance estimator that is less biased and can be implemented with any given population order. We justify this estimator theoretically and with a Monte Carlo simulation study.
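The overestimation defect is easy to exhibit numerically: for an ordered population, the exact design variance of the SYS mean, obtained by enumerating all k possible systematic samples, can fall far below what an SRS-style variance formula reports. A minimal sketch (the population, sample sizes, and the naive estimator are our own choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 1000, 10                         # population size, sampling interval
n = N // k                              # size of each systematic sample
y = np.sort(rng.normal(50.0, 10.0, N))  # ordered population: SYS is efficient

# all k possible systematic samples (random start r, then every k-th unit)
samples = [y[r::k] for r in range(k)]
means = np.array([s.mean() for s in samples])
true_var = float(means.var())           # exact design variance of the SYS mean

# naive SRS-style estimate s^2/n * (1 - n/N), averaged over all samples
naive = float(np.mean([s.var(ddof=1) / n * (1.0 - n / N)
                       for s in samples]))
print(true_var < naive)                 # True: the SRS formula overestimates
```

Each systematic sample spans the whole ordered range, so the within-sample variance (which drives the SRS formula) stays near the population variance even though the sample means barely vary.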
12.
Accelerated failure time models are useful in survival data analysis, but such models have received little attention in the context of measurement error. In this paper we discuss an accelerated failure time model for bivariate survival data with covariates subject to measurement error. In particular, methods based on the marginal and joint models are considered. Consistency and efficiency of the resultant estimators are investigated. Simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring the measurement error of covariates. As an illustration we apply the proposed methods to analyze a data set arising from the Busselton Health Study (Knuiman et al., 1994).
13.
Hafiz M. R. Khan, Communications in Statistics - Theory and Methods, 2013, 42(24): 4427-4438
The purpose of this article is to investigate predictive inference for responses from the location parameter mean, as well as from the median, given a doubly censored sample from the two-parameter Rayleigh model. The predictive results of Khan et al. (2010), who obtained future estimates from the mean, are used to obtain the predictive inference for responses from the median. A numerical example representing 66 liver cancer patients is used for the predictive analysis. It is concluded that predictive inference from the median gives more precise results than that from the location parameter mean.
14.
In an earlier article (Bai et al., 1999), the problem of simultaneously estimating the number of signals and the frequencies of multiple sinusoids was considered in the case where some observations are missing. The number of signals is estimated with an information-theoretic criterion, and the frequencies are estimated with eigenvariation linear prediction. Asymptotic properties of the procedure were investigated, but no Monte Carlo simulation was performed. In this article, a slightly different but scale-invariant criterion for detection is proposed, and the estimation of frequencies remains the same. Asymptotic properties of this new procedure are provided. Monte Carlo simulations for both procedures are carried out. Furthermore, a comparison on real signals is also given.
15.
Feng-Shou Ko, Communications in Statistics - Theory and Methods, 2013, 42(18): 3222-3237
We introduce a score test to identify longitudinal biomarkers or surrogates for a time-to-event outcome. This method is an extension of Henderson et al. (2000, 2002). In this article, the score test is based on a joint likelihood function that combines the likelihood functions of the longitudinal biomarkers and the survival times. Henderson et al. (2000, 2002) assumed that the same random effect exists in the longitudinal component and in the Cox model, from which they derived a score test to determine whether a longitudinal biomarker is associated with time to an event. We extend this work, and our score test is based on a joint likelihood function that allows other random effects to be present in the survival function. Considering heterogeneous baseline hazards across individuals, we use simulations to explore how various factors influence the power of the score test to detect the association between a longitudinal biomarker and the survival time. These factors include the functional form of the random effects from the longitudinal biomarkers, the number of individuals, and the number of time points per individual. We illustrate our method using the prothrombin index as a predictor of survival in liver cirrhosis patients.
16.
Formulae for the first- and second-order inclusion probabilities for the Rao et al. (1962) (RHC) sampling scheme are derived. They enable one to evaluate, for a sample drawn according to the RHC scheme, the Horvitz and Thompson (1952) estimator (HTE) along with its unbiased variance estimator given by Yates and Grundy (1953). Thus, for a sample so drawn, one may choose between the RHC estimator (RHCE) and the HTE by finding which one has the smaller coefficient of variation.
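For readers unfamiliar with the HTE, it estimates the population total as the sum of y_i / pi_i over the sampled units and is design-unbiased whenever every first-order inclusion probability pi_i is known and positive. A sketch under simple random sampling without replacement, where pi_i = n/N for every unit (the setup is illustrative, not the RHC scheme itself):

```python
import numpy as np

def horvitz_thompson(y_sample, pi_sample):
    """Horvitz-Thompson estimator of the population total."""
    return float(np.sum(np.asarray(y_sample) / np.asarray(pi_sample)))

rng = np.random.default_rng(3)
N, n = 500, 50
y = rng.gamma(2.0, 10.0, N)           # skewed survey variable
pi = np.full(N, n / N)                # SRSWOR: equal inclusion probabilities

totals = []
for _ in range(2000):
    idx = rng.choice(N, size=n, replace=False)
    totals.append(horvitz_thompson(y[idx], pi[idx]))

# design-unbiasedness: the Monte Carlo mean is close to the true total
rel_err = abs(np.mean(totals) - y.sum()) / y.sum()
```

Under the RHC scheme the pi_i are unequal and must be computed from the scheme's group-splitting rule, which is exactly what the derived formulae in the abstract above provide.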
17.
Chung-Ho Chen, Communications in Statistics - Theory and Methods, 2013, 42(10): 1767-1778
Economic selection of process parameters has been an important topic in modern statistical process control. The optimum process parameter setting has a major effect on the expected profit/cost per item. Several issues arise in the problem of setting process parameters. Boucher and Jafari (1991) first considered the attribute single sampling plan applied to the selection of the process target. Pulak and Al-Sultan (1996) extended Boucher and Jafari's model and presented a rectifying inspection plan for determining the optimum process mean. In this article, we further propose a modified Pulak and Al-Sultan model for determining the optimum process mean and standard deviation under a rectifying inspection plan with average outgoing quality limit (AOQL) protection. Taguchi's (1986) symmetric quadratic quality loss function is adopted for evaluating product quality. By solving the modified model, we can obtain the optimum process parameters that maximize the expected profit per item while reaching the specified quality level.
18.
For the first time, we provide a matrix formula for the second-order covariances of maximum likelihood estimates in heteroskedastic generalized linear models, thus generalizing the results of Cordeiro (2004) and Cordeiro et al. (2006) for generalized linear models with known and unknown dispersion parameters, respectively. The covariance matrix formula does not involve cumulants of log-likelihood derivatives and can be obtained easily using simple matrix operations. We apply our main result to a simple model. Simulations show that the second-order covariances can be quite pronounced in small to moderate samples. The usual covariances of the maximum likelihood estimates can be corrected by these second-order covariances.
19.
Gadre and Rattihalli [5] introduced the Modified Group Runs (MGR) control chart to identify increases in the fraction non-conforming and to detect shifts in the process mean. The MGR chart reduces the out-of-control average time-to-signal (ATS) compared with most of the well-known control charts. In this article, we develop the Side Sensitive Modified Group Runs (SSMGR) chart to detect shifts in the process mean. With the help of numerical examples, it is illustrated that the SSMGR chart performs better than the Shewhart X¯ chart, the synthetic chart [12], the Group Runs chart [4], the Side Sensitive Group Runs chart [6], and the MGR chart [5]. In some situations it is also superior to the Cumulative Sum chart [9] and the exponentially weighted moving average chart [10]. In the steady state, too, its performance is better than that of the above charts.
20.
Zheng Su, Communications in Statistics - Simulation and Computation, 2013, 42(8): 1163-1170
Johns (1988), Davison (1988), and Do and Hall (1991) used importance sampling to calculate bootstrap distributions of one-dimensional statistics. Realizing that their methods cannot easily be extended to multi-dimensional statistics, Fuh and Hu (2004) proposed an exponential tilting formula for multi-dimensional statistics, which is optimal in the sense that the asymptotic variance is minimized for estimating tail probabilities of asymptotically normal statistics. For one-dimensional statistics, Hu and Su (2008) proposed a multi-step variance minimization approach that can be viewed as a generalization of the two-step variance minimization approach proposed by Do and Hall (1991). In this article, we generalize the approach of Hu and Su (2008) to multi-dimensional statistics; the generalization applies to general statistics and does not resort to asymptotics. Empirical results on a real survival data set show that the proposed algorithm provides significant computational efficiency gains.
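The basic idea, in its one-dimensional form, is to resample with exponentially tilted weights so the rare event occurs often, then reweight each resample back to the uniform bootstrap. A minimal sketch for the bootstrap tail probability of a sample mean (the tilt parameter t and the function below are our own assumptions for illustration, not the multi-step procedure of Hu and Su (2008)):

```python
import numpy as np

def tilted_bootstrap_tail(x, c, t, B, rng):
    """Importance-sampling estimate of the bootstrap probability
    P*(resample mean > c): resample with probabilities p_i proportional
    to exp(t * x_i), then reweight each resample back to uniform."""
    n = len(x)
    p = np.exp(t * (x - x.max()))     # subtract max for numerical stability
    p /= p.sum()
    est = 0.0
    for _ in range(B):
        idx = rng.choice(n, size=n, replace=True, p=p)
        if x[idx].mean() > c:
            # likelihood ratio: product of (1/n) / p_i over drawn indices
            est += np.exp(-np.sum(np.log(n * p[idx])))
    return est / B

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 50)
c = x.mean() + 2.5 * x.std(ddof=1) / np.sqrt(50)   # far-tail threshold

naive = float(np.mean([rng.choice(x, 50).mean() > c for _ in range(5000)]))
tilted = tilted_bootstrap_tail(x, c, t=0.35, B=5000, rng=rng)
```

Both estimators target the same tail probability; the tilted version spends most of its resamples near the threshold, which is where the variance reduction comes from when t is chosen so the tilted resampling mean sits close to c.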