1.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the false discovery rate (FDR) through a resampling-based procedure. One of the main components of the SAM procedure is the adjustment of the test statistic: a fudge factor is added to its denominator to deflate test statistics that are inflated by the small standard errors of low-variance genes. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve power or FDR control compared with the SAM procedure without it. Motivated by the simulation results of Lin et al. (2008), this article compares several methods for choosing the fudge factor in modified t-type test statistics and uses simulation studies to investigate the power and FDR control of the considered methods.
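As a rough illustration of the role of the fudge factor, the modified t-type statistic can be sketched as below. This is a minimal sketch, not the full SAM procedure: SAM chooses the fudge factor from percentiles of the gene-wise standard errors and assesses the FDR by permutation, both of which are omitted here, and the data are simulated.

```python
import numpy as np

def sam_statistic(x, y, s0=0.0):
    """Modified t-type statistic in the spirit of SAM (Tusher et al., 2001).

    For each gene (row), d = (mean(x) - mean(y)) / (se + s0), where se is
    the usual pooled standard error and s0 is the fudge factor that damps
    statistics inflated by near-zero per-gene variances.
    """
    nx, ny = x.shape[1], y.shape[1]
    diff = x.mean(axis=1) - y.mean(axis=1)
    # pooled within-group variance per gene
    sp2 = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    sp2 += ((y - y.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    sp2 /= (nx + ny - 2)
    se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
    return diff / (se + s0)

rng = np.random.default_rng(0)
x = rng.normal(1.0, 0.01, size=(5, 4))   # genes with tiny variance
y = rng.normal(0.0, 0.01, size=(5, 4))
t_raw = sam_statistic(x, y, s0=0.0)
t_fudge = sam_statistic(x, y, s0=0.5)
```

Because the fudge factor enlarges every denominator, `t_fudge` is uniformly smaller in magnitude than `t_raw`; the methodological question studied here is how to pick s0 well.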
2.
This paper deals with Dynamic Stochastic General Equilibrium (DSGE) models under a multivariate Student-t distribution for the structural shocks. Based on the solution algorithm of Klein (2000) and the gamma-normal representation of the t-distribution, the TaRB-MH algorithm of Chib and Ramamurthy (2010) is used to estimate the model. A technique for estimating the marginal likelihood of the Student-t DSGE model is also provided. The methodologies are illustrated first with simulated data and then with the DSGE model of Ireland (2004), where the results favor the t-error model over the Gaussian model.
3.
Hong Zhang, Communications in Statistics - Theory and Methods, 2013, 42(7): 1228-1241
Sa and Edwards (1993) first proposed the Multiple Comparisons with a Control problem in Response Surface Methodology. They provided an exact solution for one predictor variable and a conservative solution when the number of predictor variables is greater than one. Merchant et al. (1998) improved the solution for the latter case. This article improves Merchant et al.'s solution for rotatable designs in two predictor variables.
4.
Pao-sheng Shen, Communications in Statistics - Simulation and Computation, 2013, 42(4): 531-543
Double censoring arises when T represents an outcome variable that can only be accurately measured within a certain range, [L, U], where L and U are the left- and right-censoring variables, respectively. When L is always observed, we consider the empirical likelihood inference for linear transformation models, based on the martingale-type estimating equation proposed by Chen et al. (2002). It is demonstrated that both the approach of Lu and Liang (2006) and that of Yu et al. (2011) can be extended to doubly censored data. Simulation studies are conducted to investigate the performance of the empirical likelihood ratio methods.
5.
Huang (2010) proposed an optional randomized response model using a linear combination scrambling, which generalizes the multiplicative scrambling of Eichhorn and Hayre (1983) and the additive scrambling of Gupta et al. (2006, 2010). In this article, we discuss two main questions: (1) Can the Huang (2010) model be improved further by using a two-stage approach? (2) Does the linear combination scrambling provide any benefit over the additive scrambling of Gupta et al. (2010)? We show that the answer to the first question is “yes” but the answer to the second is “no.”
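For context, the two classical scramblings being compared can be sketched as follows. The distributions and parameter values below are illustrative assumptions chosen for the simulation, not those used in the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.gamma(2.0, 5.0, n)          # sensitive quantitative variable, E[x] = 10

# Multiplicative scrambling (Eichhorn and Hayre, 1983): each respondent
# reports z = x * s, where s is independent of x with known mean E[s].
s_mult = rng.uniform(0.5, 1.5, n)   # known E[s] = 1
z_mult = x * s_mult
es = 1.0
est_mult = z_mult.mean() / es       # unbiased for E[x]

# Additive scrambling (in the spirit of Gupta et al., 2010): each
# respondent reports z = x + s, where s has known mean zero.
s_add = rng.normal(0.0, 3.0, n)
z_add = x + s_add
est_add = z_add.mean()              # unbiased for E[x]
```

Both estimators recover the population mean without any respondent revealing x directly; the strategies differ in variance and in the degree of privacy protection, which is exactly the trade-off the article examines.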
6.
Many alternative strategies have been proposed for surveying quantitative sensitive characteristics using the randomized response (RR) technique. The efficiency of most of these strategies can be improved by choosing suitable design parameters for the model. However, two different procedures with pre-assigned design parameter values need not afford respondents the same degree of privacy protection, so some earlier comparisons of these strategies are inadequate (as in Eichhorn and Hayre, 1983; Gupta et al., 2002). More comprehensive comparisons based on both efficiency and respondent protection exist for qualitative-characteristic RR techniques (see Bhargava and Singh, 2002; Nayak, 1994; Zaizai and Zankan, 2004), but very few such studies exist for quantitative-characteristic RR techniques. The purpose of this article is to give a more adequate comparison among the earlier quantitative-characteristic RR strategies. Several important differences emerge between the results obtained here and some known results; these earlier RR strategies should therefore be reevaluated.
7.
Shaowen Wu, Communications in Statistics - Simulation and Computation, 2013, 42(8): 1590-1604
We reinvestigate the empirical problem of lag length selection in unit root tests based on the GLS-detrended augmented Dickey-Fuller (ADF) test. We extend the work of Ng and Perron (1995) on this issue by applying finite sample critical values calculated with the formulae proposed by Cheung and Lai (1995). Unlike Ng and Perron (2001), we find through simulation studies that selecting the lag length by the sequential t-test in the ADF regression of the GLS-detrended series performs best in most cases.
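The sequential ("general-to-specific") t-test rule can be sketched as below. This is a plain-OLS illustration on a raw series: the GLS detrending step and the finite-sample critical values of Cheung and Lai (1995) are omitted, and an asymptotic 10% critical value is used instead.

```python
import numpy as np

def select_adf_lag(y, kmax=6, crit=1.645):
    """Sequential t-test lag selection for the ADF regression (Ng and
    Perron, 1995): start at kmax and drop the last lagged difference
    whenever its t-statistic is insignificant."""
    dy = np.diff(y)
    for k in range(kmax, 0, -1):
        # Regress dy[t] (= y[t+1] - y[t]) on a constant, the lagged level
        # y[t], and dy[t-1..t-k], on a common sample starting at t = kmax.
        X = np.array([[1.0, y[t]] + [dy[t - i] for i in range(1, k + 1)]
                      for t in range(kmax, len(dy))])
        z = dy[kmax:]
        beta = np.linalg.lstsq(X, z, rcond=None)[0]
        e = z - X @ beta
        s2 = e @ e / (len(z) - X.shape[1])
        se_k = np.sqrt(s2 * np.linalg.inv(X.T @ X)[-1, -1])
        if abs(beta[-1] / se_k) > crit:
            return k                 # last lagged difference significant
    return 0

# Unit-root series whose differences follow an AR(1), so at least one
# lagged difference should be retained.
rng = np.random.default_rng(3)
e = rng.normal(size=2000)
dy = np.empty(2000)
dy[0] = e[0]
for t in range(1, 2000):
    dy[t] = 0.5 * dy[t - 1] + e[t]
y = np.cumsum(dy)
k = select_adf_lag(y)
```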
8.
We propose a Bayesian approach for inference in a dynamic disequilibrium model. To circumvent the difficulties raised by the Maddala and Nelson (1974) specification in the dynamic case, we analyze a dynamic extended version of the disequilibrium model of Ginsburgh et al. (1980). We develop a Gibbs sampler based on the simulation of the missing observations. The feasibility of the approach is illustrated by an empirical analysis of the Polish credit market, for which we conduct a specification search using the posterior deviance criterion of Spiegelhalter et al. (2002).
9.
This paper develops tests of the null hypothesis of linearity in the context of autoregressive models with Markov-switching means and variances. These tests are robust to the identification failures that plague conventional likelihood-based inference methods. The approach exploits the moments of normal mixtures implied by the regime-switching process and uses Monte Carlo test techniques to deal with the presence of an autoregressive component in the model specification. The proposed tests have very respectable power in comparison with the optimal tests for Markov-switching parameters of Carrasco et al. (2014), and they are also quite attractive owing to their computational simplicity. The new tests are illustrated with an empirical application to an autoregressive model of U.S. output growth.
10.
R. Hasan Abadi, Communications in Statistics - Simulation and Computation, 2013, 42(8): 1430-1443
Censored data arise naturally in a number of fields, particularly in reliability and survival analysis. There are several types of censoring; in this article we confine ourselves to right random censoring. Recently, Ahmadi et al. (2010) considered the problem of estimating unknown parameters in a general framework based on right randomly censored data, assuming that the survival function of the censoring time is free of the unknown parameter. This assumption is sometimes inappropriate, and in such cases a proportional odds (PO) model may be more suitable (Lam and Leung, 2001). Under this model, we obtain point and interval estimates of the unknown parameters. Since it is important to check the adequacy of models upon which inferences are based (Lawless, 2003, p. 465), two new goodness-of-fit tests for the PO model based on right randomly censored data are proposed. The proposed procedures are applied to two real data sets due to Smith (2002), and a Monte Carlo simulation study is conducted to examine the behavior of the estimators.
11.
In this article, we find designs insensitive to the presence of an outlier in a diallel cross design setup for estimating a complete set of orthonormal contrasts among the effects of the general combining abilities of a set of parental lines. The criterion of robustness, suggested by Mandal (1989) in block design setup and used by Biswas (2012) in treatment-control setup, is adapted here. Complete diallel cross designs, suggested by Gupta and Kageyama (1994), and partial diallel cross designs, suggested by Gupta et al. (1995) and Mukerjee (1997), are found to be robust under certain conditions.
12.
Recently, several authors have been concerned with ordering comparisons, in various senses, between known distributions of the family of generalized power series (GPS) distributions and their mixtures. In this article, we employ a unified approach and obtain similar results, more generally, for all members of the class of GPS distributions. Some of the previous findings of Misra et al. (2003), Alamatsaz and Abbasi (2008), and Aghababaei Jazi and Alamatsaz (2010) then follow as corollaries. We also derive some further ordering comparison results.
13.
In an earlier article (Bai et al., 1999), the problem of simultaneously estimating the number of signals and the frequencies of multiple sinusoids was considered in the case that some observations are missing. The number of signals is estimated with an information theoretic criterion and the frequencies are estimated by eigenvariation linear prediction. Asymptotic properties of the procedure were investigated, but no Monte Carlo simulation was performed. In this article, a slightly different but scale-invariant detection criterion is proposed, while the estimation of frequencies remains the same. Asymptotic properties of the new procedure are provided, Monte Carlo simulations for both procedures are carried out, and a comparison on real signals is also given.
14.
The traditional exponentially weighted moving average (EWMA) chart is one of the most popular control charts used in practice today. The in-control robustness is the key to the proper design and implementation of any control chart, lack of which can render its out-of-control shift detection capability almost meaningless. To this end, Borror et al. [5] studied the performance of the traditional EWMA chart for the mean for i.i.d. data. We use a more extensive simulation study to further investigate the in-control robustness (to non-normality) of the three EWMA designs studied by Borror et al. [5]. Our study includes a much wider collection of non-normal distributions, including light- and heavy-tailed, symmetric and asymmetric bi-modal, and the contaminated normal, which is particularly useful for studying the effects of outliers. We also consider two separate cases: (i) when the process mean and standard deviation are both known, and (ii) when they are both unknown and estimated from an in-control Phase I sample. In addition, unlike the study of Borror et al. [5], the average run-length (ARL) is not the sole performance measure here: we also consider the standard deviation of the run-length (SDRL), the median run-length (MDRL), and the first and third quartiles as well as the 1st and 99th percentiles of the in-control run-length distribution, for a better overall assessment of the traditional EWMA chart's in-control performance. Our findings sound a cautionary note on the (over)use of the EWMA chart in practice, at least with some types of non-normal data. A summary and recommendations are provided.
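The traditional EWMA recursion and its exact (time-varying) control limits can be sketched as follows; the smoothing constant and limit width below are illustrative choices, not the specific designs studied by Borror et al.

```python
import numpy as np

def ewma_chart(x, lam=0.1, mu0=0.0, sigma=1.0, L=2.7):
    """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1}, z_0 = mu0, with
    exact limits mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2t))).
    Returns the EWMA series and a boolean out-of-control flag per point."""
    z = np.empty(len(x))
    prev = mu0
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
    tt = np.arange(1, len(x) + 1)
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * tt)))
    return z, np.abs(z - mu0) > width

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.0, 1.0, 50),     # in control
                    rng.normal(1.0, 1.0, 100)])   # 1-sigma mean shift
z, ooc = ewma_chart(x)
```

With a small smoothing constant the statistic accumulates information across observations, which is why the EWMA chart detects sustained small shifts quickly; the in-control robustness question is how these limits behave when the data are not normal.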
15.
Accelerated failure time models are useful in survival data analysis, but such models have received little attention in the context of measurement error. In this paper we discuss an accelerated failure time model for bivariate survival data with covariates subject to measurement error. In particular, methods based on the marginal and joint models are considered. Consistency and efficiency of the resultant estimators are investigated. Simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring the measurement error of covariates. As an illustration we apply the proposed methods to analyze a data set arising from the Busselton Health Study (Knuiman et al., 1994).
16.
Several methods have been devised to deal with the problem of temporal disaggregation of economic time series, either (a) when related series are available or (b) when only aggregate figures exist. In this article, we propose a statistical model-based approach to temporal disaggregation by related series, performed in two stages. In the first stage, we compute a preliminary estimate of the disaggregated series using a regression model relating the disaggregated series to related series observed at the same frequency. This preliminary estimate is not consistent with the aggregate figures. To ensure consistency, in the second stage we propose a modified benchmarking approach based on signal extraction (Hillmer and Trabelsi, 1987; Trabelsi and Hillmer, 1990) to adjust the preliminary estimate. The approach is applied to both Seasonally Adjusted (SA) and Not Seasonally Adjusted (NSA) data, and a comparison with previous temporal disaggregation methods is provided.
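The two-stage idea can be illustrated with a deliberately simplified sketch on simulated data: a regression-based preliminary quarterly estimate, followed by a benchmarking step that restores consistency with the annual totals. The signal-extraction benchmarking of Hillmer and Trabelsi is replaced here by a naive pro-rata adjustment purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
years, per = 5, 4
indicator = rng.gamma(5.0, 2.0, years * per)      # related quarterly series
true_q = 3.0 + 1.5 * indicator + rng.normal(0, 0.5, years * per)
annual = true_q.reshape(years, per).sum(axis=1)   # only aggregates observed

# Stage 1: preliminary quarterly estimate via a regression of the annual
# totals on the annually summed indicator, mapped back to quarters.
ind_a = indicator.reshape(years, per).sum(axis=1)
A = np.column_stack([np.ones(years), ind_a])
b = np.linalg.lstsq(A, annual, rcond=None)[0]
prelim = b[0] / per + b[1] * indicator            # not consistent with annual totals

# Stage 2: benchmark by distributing each year's discrepancy pro rata
# across its quarters so the quarterly sums match the annual figures.
pq = prelim.reshape(years, per)
bench = (pq * (annual / pq.sum(axis=1))[:, None]).ravel()
```

After the second stage the disaggregated series sums exactly to the observed aggregates, which is the consistency property the preliminary estimate lacks.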
17.
18.
Housila P. Singh, Communications in Statistics - Theory and Methods, 2013, 42(15): 2718-2730
This article addresses the problem of estimating the finite population variance using auxiliary information in simple random sampling. A ratio-cum-difference type class of estimators for the population variance is suggested, and its properties are derived under a large sample approximation. It is shown that the suggested class of estimators is more efficient than the usual unbiased estimator, the difference estimator, and the estimators/classes of estimators of Das and Tripathi (1978), Isaki (1983), Singh et al. (1988), and Kadilar and Cingi (2006), among others. This theoretical result is supported by an empirical study.
19.
Two types of estimates of process level, namely repeated median estimates (Siegel, 1982) and full online estimates (Gather et al., 2006) based on repeated median filters, are used to develop control charts. The distributional properties of the estimates are studied by simulation and are found to closely follow the normal distribution. The repeated median, being robust against outliers with an asymptotic 50% breakdown value and having a small standard deviation, is found to be useful as a basis for monitoring process averages. Control charts using repeated median estimates are recommended for general use.
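Siegel's (1982) repeated median fit, the building block of both estimates, can be sketched as below; the straight-line data with a single gross outlier are an illustrative example, not taken from the article.

```python
import numpy as np

def repeated_median_fit(t, x):
    """Repeated median regression (Siegel, 1982):
    slope = med_i med_{j!=i} (x[j]-x[i])/(t[j]-t[i]);
    level = med_i (x[i] - slope*t[i]).
    The nested medians give an asymptotic 50% breakdown point, so a
    minority of outliers cannot ruin the fit."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    n = len(t)
    inner = [np.median([(x[j] - x[i]) / (t[j] - t[i])
                        for j in range(n) if j != i]) for i in range(n)]
    slope = np.median(inner)
    level = np.median(x - slope * t)
    return level, slope

t = np.arange(11.0)
x = 2.0 + 0.5 * t                  # true level 2.0, slope 0.5
x[5] = 100.0                       # one gross outlier
level, slope = repeated_median_fit(t, x)
```

Despite the outlier, the fit recovers the true line exactly, which is why the repeated median is attractive for monitoring process levels in contaminated data.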
20.
Feng-Shou Ko, Communications in Statistics - Theory and Methods, 2013, 42(15): 2681-2698
A proposed method based on frailty models is used to identify longitudinal biomarkers or surrogates for multivariate survival times. The method extends earlier models of Wulfsohn and Tsiatis (1997) and Song et al. (2002). In this article, similar to Henderson et al. (2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and the multivariate survival times. We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects of the longitudinal biomarkers influence the power to detect an association between a longitudinal biomarker and the multivariate survival time. The proposed method is illustrated using gastric cancer data.