Similar Documents (20 results)
1.
A contaminated beta model $(1-\gamma) B(1,1) + \gamma B(\alpha,\beta)$ is often used to describe the distribution of $P$-values arising from a microarray experiment. The authors propose and examine a different approach: namely, using a contaminated normal model $(1-\gamma) N(0,\sigma^2) + \gamma N(\mu,\sigma^2)$ to describe the distribution of $Z$ statistics or suitably transformed $T$ statistics. The authors then address whether a researcher who has $Z$ statistics should analyze them using the contaminated normal model or whether the $Z$ statistics should be converted to $P$-values to be analyzed using the contaminated beta model. The authors also provide a decision-theoretic perspective on the analysis of $Z$ statistics. The Canadian Journal of Statistics 38: 315–332; 2010 © 2010 Statistical Society of Canada
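The abstract specifies the contaminated normal model but not an estimation algorithm. Below is a minimal EM sketch for fitting $(1-\gamma) N(0,\sigma^2) + \gamma N(\mu,\sigma^2)$ to a vector of $Z$ statistics; the initialization, iteration budget, and toy data are assumptions for illustration, not the authors' procedure.

```python
import numpy as np
from scipy.stats import norm

def fit_contaminated_normal(z, n_iter=200, tol=1e-8):
    """EM for the two-component model (1-g)*N(0, s2) + g*N(mu, s2)."""
    g, mu, s = 0.1, np.mean(z[z > np.quantile(z, 0.9)]), np.std(z)
    for _ in range(n_iter):
        f0 = norm.pdf(z, 0.0, s)
        f1 = norm.pdf(z, mu, s)
        r = g * f1 / ((1 - g) * f0 + g * f1)      # posterior P(non-null | z_i)
        g_new = r.mean()
        mu_new = np.sum(r * z) / np.sum(r)
        s2 = (np.sum((1 - r) * z**2) + np.sum(r * (z - mu_new) ** 2)) / len(z)
        converged = abs(g_new - g) + abs(mu_new - mu) < tol
        g, mu, s = g_new, mu_new, np.sqrt(s2)
        if converged:
            break
    return g, mu, s, r

# toy data: 5% of z-statistics drawn from the contaminating component
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 1900), rng.normal(3, 1, 100)])
g, mu, s, r = fit_contaminated_normal(z)
print(f"gamma={g:.3f}, mu={mu:.2f}, sigma={s:.2f}")
```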

2.
Two questions of interest involving nonparametric multiple comparisons are considered. The first question concerns whether it is appropriate to use a multiple comparison procedure as a test of the equality of k treatments, and if it is, which procedure performs best as a test. Our results show that for smaller k values some multiple comparison procedures perform well as tests. The second question concerns whether a joint ranking or a separate ranking multiple comparison procedure performs better as a test and as a device for treatment separation. We find that the joint ranking procedure does slightly better as a test, but for treatment separation the answer depends on the situation.
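As a rough illustration of the joint-versus-separate-ranking distinction (not the specific procedures the paper compares), the sketch below computes joint mean ranks across all k groups, and a pairwise (separate) rank statistic for one pair; the toy data are assumed.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
groups = [rng.normal(loc, 1, 10) for loc in (0.0, 0.0, 1.0)]  # k = 3 treatments

# Joint ranking: rank all observations together, then average within each group
joint = rankdata(np.concatenate(groups))
sizes = [len(g) for g in groups]
starts = np.cumsum([0] + sizes[:-1])
ends = np.cumsum(sizes)
mean_joint_ranks = [joint[s:e].mean() for s, e in zip(starts, ends)]

# Separate (pairwise) ranking: rank each pair of groups on its own
def pairwise_rank_diff(a, b):
    r = rankdata(np.concatenate([a, b]))
    return r[:len(a)].mean() - r[len(a):].mean()

print("mean joint ranks:", np.round(mean_joint_ranks, 1))
print("pairwise rank difference (1 vs 3):",
      round(pairwise_rank_diff(groups[0], groups[2]), 1))
```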

3.
Abstract

Augmented mixed beta regression models are suitable choices for modeling continuous response variables on the closed interval [0, 1]. The random effects in these models are typically assumed to be normally distributed, but this assumption is frequently violated in applied studies. In this paper, an augmented mixed beta regression model with a skew-normal independent distribution for the random effects is used. We then adopt a Bayesian approach for parameter estimation using an MCMC algorithm. The methods are evaluated through intensive simulation studies. Finally, the proposed models are applied to analyze a dataset from an Iranian Labor Force Survey.
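The paper's full model (random effects, skew-normal distributions, Bayesian MCMC) is beyond a short snippet, but the fixed-effects beta regression core it builds on is easy to sketch. The maximum-likelihood fit below, with a logit mean link and precision phi, is a simplified stand-in rather than the authors' method; all names and simulated values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

# Beta regression with logit mean link:
# y_i ~ Beta(mu_i * phi, (1 - mu_i) * phi), logit(mu_i) = X_i @ beta
def negloglik(params, X, y):
    beta, log_phi = params[:-1], params[-1]
    mu = expit(X @ beta)
    phi = np.exp(log_phi)
    a, b = mu * phi, (1 - mu) * phi
    return -np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                   + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
mu = expit(X @ np.array([-0.5, 1.0]))
phi_true = 20.0
y = rng.beta(mu * phi_true, (1 - mu) * phi_true)

fit = minimize(negloglik, x0=np.zeros(3), args=(X, y), method="BFGS")
print("beta_hat:", np.round(fit.x[:2], 2), " phi_hat:", round(np.exp(fit.x[2]), 1))
```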

4.
Summary. Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance–mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation–extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
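To see the errors-in-variables bias the paper addresses, consider the constant coefficient of variation model Var(Y) = theta * mu^2 with few replicates. The simulation below shows the naive plug-in estimator, plus a crude delta-method correction; this is an illustration of the bias only, not the paper's SIMEX or semiparametric estimators, and all constants are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
G, m = 5000, 3                      # genes, replicates per gene (few df)
theta = 0.04                        # true squared coefficient of variation
mu = rng.uniform(5, 50, G)          # unknown true means
y = rng.normal(mu[:, None], np.sqrt(theta) * mu[:, None], (G, m))

ybar = y.mean(axis=1)
s2 = y.var(axis=1, ddof=1)

# Naive estimator: plug the noisy sample mean in for the true mean.
theta_naive = np.mean(s2 / ybar**2)

# Under normality s2 and ybar are independent, and a delta-method expansion
# gives E[1/ybar^2] ~ (1 + 3*theta/m) / mu^2, so the naive estimator is
# inflated by roughly (1 + 3*theta/m). Divide that factor out:
theta_corr = theta_naive / (1 + 3 * theta_naive / m)

print(f"true {theta}, naive {theta_naive:.4f}, corrected {theta_corr:.4f}")
```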

5.
Abstract

Inferential methods based on ranks present a robust and powerful alternative methodology for testing and estimation. In this article, two objectives are pursued. First, we develop a general method of simultaneous confidence intervals based on the rank estimates of the parameters of a general linear model and derive the asymptotic distribution of the pivotal quantity. Second, we extend the method to high dimensional data, such as gene expression data, for which the usual large sample approximation does not apply. It is common in practice to use the asymptotic distribution to make inference for small samples. The empirical investigation in this article shows that for methods based on rank estimates, this approach does not produce a viable inference and should be avoided. A method based on the bootstrap is outlined and is shown to provide a reliable and accurate way of constructing simultaneous confidence intervals based on rank estimates. In particular, it is shown that the commonly applied normal and t-approximations are not satisfactory, particularly for large-scale inferences. Methods based on ranks are uniquely suited to the analysis of microarray gene expression data, since such analyses often involve large-scale inferences based on small samples that contain many outliers and violate the assumption of normality. A real microarray dataset is analyzed using the rank-estimate simultaneous confidence intervals. Viability of the proposed method is assessed through a Monte Carlo simulation study under varied assumptions.
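Below is a minimal sup-statistic bootstrap sketch of simultaneous confidence intervals around rank-based (Hodges-Lehmann) location estimates. It illustrates the bootstrap calibration idea only, not the article's exact rank-estimate linear-model procedure; the heavy-tailed toy data and bootstrap size are assumptions.

```python
import numpy as np

def hodges_lehmann(x):
    """Rank-based location estimate: median of pairwise Walsh averages."""
    i, j = np.triu_indices(len(x))
    return np.median((x[i] + x[j]) / 2)

rng = np.random.default_rng(4)
genes = rng.standard_t(df=3, size=(200, 8)) + 0.5   # heavy-tailed, small n
est = np.array([hodges_lehmann(g) for g in genes])

# Bootstrap the maximum absolute deviation across genes to calibrate
# simultaneous (rather than per-gene) coverage; resampling the same
# replicate indices for every gene preserves between-gene dependence.
B, n = 500, genes.shape[1]
max_dev = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    boot = np.array([hodges_lehmann(g[idx]) for g in genes])
    max_dev[b] = np.max(np.abs(boot - est))
half_width = np.quantile(max_dev, 0.95)

lower, upper = est - half_width, est + half_width
print(f"simultaneous 95% half-width: {half_width:.3f}")
```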

6.
This article advocates the following perspective: When confronting a scientific problem, the field of statistics enters by viewing the problem as one where the scientific answer could be calculated if some missing data, hypothetical or real, were available. Thus, statistical effort should be devoted to three steps:
1.  formulate the missing data that would allow this calculation,
2.  stochastically fill in these missing data, and
3.  do the calculations as if the filled-in data were available.
This presentation discusses: conceptual benefits, such as for causal inference using potential outcomes; computational benefits, such as afforded by using the EM algorithm and related data augmentation methods based on MCMC; and inferential benefits, such as valid interval estimation and assessment of assumptions based on multiple imputation. JEL classification C10, C14, C15
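A minimal sketch of the three steps for estimating a mean when the response is missing at random, using a simple (improper) multiple imputation that ignores parameter uncertainty; the regression imputation model and all constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(size=n)
y = 1.0 + 0.8 * x + rng.normal(size=n)
miss = rng.random(n) < 1 / (1 + np.exp(-x))     # MAR: missingness depends on x
y_obs = np.where(miss, np.nan, y)

# Step 1: the "missing data" are the unobserved y's.
# Step 2: stochastically fill them in from a model fit to the observed cases.
# Step 3: compute the target quantity on each completed dataset and pool.
obs = ~np.isnan(y_obs)
b1, b0 = np.polyfit(x[obs], y_obs[obs], 1)
sigma = np.std(y_obs[obs] - (b0 + b1 * x[obs]))

M, means = 20, []
for _ in range(M):
    y_imp = y_obs.copy()
    y_imp[~obs] = b0 + b1 * x[~obs] + rng.normal(0, sigma, (~obs).sum())
    means.append(y_imp.mean())

print(f"complete-case mean {y_obs[obs].mean():.3f}  "
      f"MI mean {np.mean(means):.3f}  truth {y.mean():.3f}")
```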

7.
One of the fundamental issues in analyzing microarray data is to determine which genes are expressed and which ones are not for a given group of subjects. In datasets where many genes are expressed and many are not expressed (i.e., underexpressed), a bimodal distribution for the gene expression levels often results, where one mode of the distribution represents the expressed genes and the other mode represents the underexpressed genes. To model this bimodality, we propose a new class of mixture models that utilize a random threshold value for accommodating bimodality in the gene expression distribution. Theoretical properties of the proposed model are carefully examined. We use this new model to examine the problem of differential gene expression between two groups of subjects, develop prior distributions, and derive a new criterion for determining which genes are differentially expressed between the two groups. Prior elicitation is carried out using empirical Bayes methodology in order to estimate the threshold value as well as elicit the hyperparameters for the two component mixture model. The new gene selection criterion is demonstrated via several simulations to have excellent false positive rate and false negative rate properties. A gastric cancer dataset is used to motivate and illustrate the proposed methodology.
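The paper's random-threshold mixture and empirical Bayes elicitation are specific to its framework; as a baseline illustration of posterior-probability classification under a two-component mixture, here is a plain Gaussian mixture fit, with the 0.9 posterior cutoff an arbitrary fixed stand-in for the random threshold.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# Bimodal log-expression: one mode of underexpressed, one of expressed genes
expr = np.concatenate([rng.normal(4, 0.7, 3000), rng.normal(8, 1.0, 2000)])

gm = GaussianMixture(n_components=2, random_state=0).fit(expr.reshape(-1, 1))
post = gm.predict_proba(expr.reshape(-1, 1))
hi = np.argmax(gm.means_.ravel())               # component with the larger mean
p_expressed = post[:, hi]

# Call a gene "expressed" when its posterior probability exceeds 0.9
# (a fixed threshold, unlike the paper's random-threshold construction).
print("fraction called expressed:", np.mean(p_expressed > 0.9).round(3))
print("component means:", np.round(gm.means_.ravel(), 2))
```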

8.
Microarray studies are now common for human, agricultural plant and animal studies. False discovery rate (FDR) is widely used in the analysis of large-scale microarray data to account for problems associated with multiple testing. A well-designed microarray study should have adequate statistical power to detect the differentially expressed (DE) genes, while keeping the FDR acceptably low. In this paper, we used a mixture model of expression responses involving DE genes and non-DE genes to analyse theoretical FDR and power for simple scenarios where it is assumed that each gene has equal error variance and the gene effects are independent. A simulation study was used to evaluate the empirical FDR and power for more complex scenarios with unequal error variance and gene dependence. Based on this approach, we present a general guide for sample size requirements at the experimental design stage for prospective microarray studies. This paper thus presents an approach that explicitly connects sample size with FDR and power. While the methods have been developed in the context of one-sample microarray studies, they are readily applicable to two-sample studies, and could be adapted to multiple-sample studies.
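A simplified version of the theoretical FDR/power calculation connecting sample size to both quantities, assuming a one-sample two-sided z-test, equal error variance, and independent genes as in the paper's simple scenario; the particular effect sizes and levels are assumptions.

```python
import numpy as np
from scipy.stats import norm

def fdr_power(n, delta, sigma, pi1, alpha=0.001):
    """Theoretical FDR and power for a one-sample two-sided z-test at level
    alpha, when a fraction pi1 of genes have effect size delta (common sigma)."""
    c = norm.ppf(1 - alpha / 2)                # rejection cutoff for |Z|
    ncp = delta / (sigma / np.sqrt(n))         # noncentrality for DE genes
    power = norm.sf(c - ncp) + norm.cdf(-c - ncp)
    # expected false / expected total rejections under the two-group mixture
    fdr = (1 - pi1) * alpha / ((1 - pi1) * alpha + pi1 * power)
    return fdr, power

for n in (4, 8, 16):
    fdr, power = fdr_power(n=n, delta=1.0, sigma=1.0, pi1=0.05)
    print(f"n={n:2d}  power={power:.2f}  FDR={fdr:.3f}")
```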

9.
The mixture maximum likelihood approach to clustering is used to allocate treatments from a randomized complete block design into relatively homogeneous groups. The implementation of this approach is straightforward for fixed but not random block effects. The density function in each underlying group is assumed to be normal and clustering is performed on the basis of the estimated posterior probabilities of group membership. A test based on the log likelihood under the mixture model can be used to assess the actual number of groups present. The technique is demonstrated by using it to cluster data from a randomized complete block experiment.

10.
We have compared the efficacy of five imputation algorithms readily available in SAS for the quadratic discriminant function. Here, we have generated training data with missing values under several different parametric configurations, including monotone missing-at-random observations, and used a Monte Carlo simulation to examine the expected probabilities of misclassification for the two-class quadratic statistical discrimination problem under five different imputation methods. Specifically, we have compared the efficacy of the complete-observation-only method and the mean substitution, regression, predictive mean matching, propensity score, and Markov chain Monte Carlo (MCMC) imputation methods. We found that the MCMC and propensity score multiple imputation approaches are, in general, superior to the other imputation methods for the configurations and training-sample sizes we considered.
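A reduced two-method analogue of this comparison, using only complete-case training versus mean substitution (the SAS regression, predictive-mean-matching, propensity-score, and MCMC procedures are not reproduced here), and MCAR missingness rather than the paper's monotone MAR patterns; all constants are assumed.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(7)

def make_data(n):
    y = rng.integers(0, 2, n)
    means = np.array([[0.0, 0.0], [1.5, 1.0]])
    # class 1 has larger variance, so QDA (not LDA) is the right model
    X = means[y] + rng.normal(size=(n, 2)) * np.where(y[:, None] == 1, 1.5, 1.0)
    return X, y

X_tr, y_tr = make_data(400)
X_te, y_te = make_data(4000)
X_mis = X_tr.copy()
X_mis[rng.random(400) < 0.3, 1] = np.nan        # 30% missing in one feature

# Complete-observation-only training
cc = ~np.isnan(X_mis[:, 1])
err_cc = 1 - QuadraticDiscriminantAnalysis().fit(X_mis[cc], y_tr[cc]).score(X_te, y_te)

# Mean-substitution training
X_mean = SimpleImputer(strategy="mean").fit_transform(X_mis)
err_mean = 1 - QuadraticDiscriminantAnalysis().fit(X_mean, y_tr).score(X_te, y_te)

print(f"misclassification: complete-case {err_cc:.3f}, mean substitution {err_mean:.3f}")
```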

11.
ABSTRACT

Convergence problems often arise when complex linear mixed-effects models are fitted. Previous simulation studies (see, e.g. [Buyse M, Molenberghs G, Burzykowski T, Renard D, Geys H. The validation of surrogate endpoints in meta-analyses of randomized experiments. Biostatistics. 2000;1:49–67, Renard D, Geys H, Molenberghs G, Burzykowski T, Buyse M. Validation of surrogate endpoints in multiple randomized clinical trials with discrete outcomes. Biom J. 2002;44:921–935]) have shown that model convergence rates were higher (i) when the number of available clusters in the data increased, and (ii) when the size of the between-cluster variability increased (relative to the size of the residual variability). The aim of the present simulation study is to further extend these findings by examining the effect of an additional factor that is hypothesized to affect model convergence, i.e. imbalance in cluster size. The results showed that divergence rates were substantially higher for data sets with unbalanced cluster sizes – in particular when the model at hand had a complex hierarchical structure. Furthermore, the use of multiple imputation to restore ‘balance’ in unbalanced data sets reduces model convergence problems.
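A rough way to probe the qualitative finding with statsmodels: fit the same random-intercept model to balanced and unbalanced cluster-size designs and count ConvergenceWarnings as a crude proxy for divergence. The cited studies used different software and more complex models; tau, sigma, and the designs here are assumptions.

```python
import warnings
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.tools.sm_exceptions import ConvergenceWarning

rng = np.random.default_rng(8)

def simulate(sizes, tau=0.3, sigma=1.0):
    frames = []
    for g, n_g in enumerate(sizes):
        u = rng.normal(0, tau)                  # cluster random intercept
        x = rng.normal(size=n_g)
        frames.append(pd.DataFrame(
            {"y": 1 + 0.5 * x + u + rng.normal(0, sigma, n_g), "x": x, "g": g}))
    return pd.concat(frames, ignore_index=True)

def divergence_rate(sizes, reps=50):
    fails = 0
    for _ in range(reps):
        df = simulate(sizes)
        with warnings.catch_warnings(record=True) as w:
            warnings.simplefilter("always")
            smf.mixedlm("y ~ x", df, groups=df["g"]).fit()
            fails += any(issubclass(c.category, ConvergenceWarning) for c in w)
    return fails / reps

print("balanced  :", divergence_rate([6] * 10))          # 10 clusters of 6
print("unbalanced:", divergence_rate([2] * 9 + [42]))    # same total size
```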

12.
In longitudinal clinical studies, after randomization at baseline, subjects are followed for a period of time for development of symptoms. The inference of interest could be the mean change from baseline to a particular visit in some lab values, the proportion of responders to some threshold category at a particular visit post baseline, or the time to some important event. However, in some applications, the interest may be in estimating the cumulative distribution function (CDF) at a fixed time point post baseline. When the data are fully observed, the CDF can be estimated by the empirical CDF. When patients discontinue prematurely during the course of the study, the empirical CDF cannot be directly used. In this paper, we use multiple imputation as a way to estimate the CDF in longitudinal studies when data are missing at random. The validity of the method is assessed on the basis of the bias and the Kolmogorov–Smirnov distance. The results suggest that multiple imputation yields less bias and less variability than the often used last observation carried forward method. Copyright © 2013 John Wiley & Sons, Ltd.
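A schematic comparison of LOCF and multiple imputation for estimating the CDF at a fixed visit, with a single dropout time and an improper normal-regression imputation model; all constants are assumptions, and the paper's actual imputation model may differ.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 300
y0 = rng.normal(0, 1, n)                       # baseline
y1 = y0 + rng.normal(0.3, 0.8, n)              # visit 1
y2 = y1 + rng.normal(0.3, 0.8, n)              # visit 2 (estimation target)
drop = rng.random(n) < 1 / (1 + np.exp(-y1))   # MAR dropout depends on visit 1

def ecdf(v, grid):
    return np.array([(v <= g).mean() for g in grid])

grid = np.linspace(-3, 4, 8)

# LOCF: carry the visit-1 value forward for dropouts
y_locf = np.where(drop, y1, y2)

# MI: draw visit-2 values from a regression of y2 on y1 fit to completers
obs = ~drop
b1, b0 = np.polyfit(y1[obs], y2[obs], 1)
s = np.std(y2[obs] - (b0 + b1 * y1[obs]))
cdfs = []
for _ in range(20):
    y_imp = y2.copy()
    y_imp[drop] = b0 + b1 * y1[drop] + rng.normal(0, s, drop.sum())
    cdfs.append(ecdf(y_imp, grid))

print("true CDF:", np.round(ecdf(y2, grid), 2))
print("LOCF    :", np.round(ecdf(y_locf, grid), 2))
print("MI      :", np.round(np.mean(cdfs, axis=0), 2))
```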

13.
Four teams of analysts try to determine the existence of an association between inflammatory bowel disease and certain genetic markers on human chromosome number 6. Their investigation involves data on several control populations and on 110 families with two or more affected individuals. The problem is introduced by Mirea, Bull, Silverberg and Siminovitch; they and three other groups (Chen, Kalbfleisch and Romero‐Hidalgo; Darlington and Paterson; Roslin, Loredo‐Osti, Greenwood and Morgan) present analyses. Their approaches are discussed by Field and Smith.

14.
Summary. In microarray experiments, accurate estimation of the gene variance is a key step in the identification of differentially expressed genes. Variance models go from the too stringent homoscedastic assumption to the overparameterized model assuming a specific variance for each gene. Between these two extremes there is some room for intermediate models. We propose a method that identifies clusters of genes with equal variance. We use a mixture model on the gene variance distribution. A test statistic for ranking and detecting differentially expressed genes is proposed. The method is illustrated with publicly available complementary deoxyribonucleic acid microarray experiments, an unpublished data set and further simulation studies.

15.
Modern statistical methods using incomplete data have been increasingly applied in a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used in evaluating diagnostic tests or biomarkers in medical research, has also become increasingly popular in both its development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or assumptions (e.g. missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. In particular, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms.
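A minimal parametric-MI sketch for an AUC under MAR missingness in the biomarker, imputing from a linear regression on an always-observed auxiliary variable and disease status. This only loosely illustrates the calibration question and omits proper pooling of MI variances; all names and constants are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(10)
n = 600
d = rng.integers(0, 2, n)                       # disease status
x = rng.normal(d * 1.0, 1.0)                    # biomarker (true AUC ~ 0.76)
aux = x + rng.normal(0, 0.5, n)                 # auxiliary, always observed
miss = rng.random(n) < 1 / (1 + np.exp(-aux))   # MAR given aux

obs = ~miss
auc_cc = roc_auc_score(d[obs], x[obs])          # complete-case AUC

# Parametric MI: impute x from its regression on (aux, d) fit to observed cases
Z = np.column_stack([np.ones(obs.sum()), aux[obs], d[obs]])
beta, *_ = np.linalg.lstsq(Z, x[obs], rcond=None)
s = np.std(x[obs] - Z @ beta)
aucs = []
for _ in range(20):
    x_imp = x.copy()
    Zm = np.column_stack([np.ones(miss.sum()), aux[miss], d[miss]])
    x_imp[miss] = Zm @ beta + rng.normal(0, s, miss.sum())
    aucs.append(roc_auc_score(d, x_imp))

print(f"complete-case AUC {auc_cc:.3f}, MI AUC {np.mean(aucs):.3f}")
```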

16.
Using a multivariate latent variable approach, this article proposes some new general models to analyze correlated bounded continuous and categorical (nominal and/or ordinal) responses with and without non-ignorable missing values. First, we discuss regression methods for jointly analyzing continuous, nominal, and ordinal responses, motivated by the analysis of data from studies of toxicity development. Second, using the beta and Dirichlet distributions, we extend the models by replacing some of the continuous responses with bounded continuous responses. The joint distribution of the bounded continuous, nominal and ordinal variables is decomposed into a marginal multinomial distribution for the nominal variable and a conditional multivariate joint distribution for the bounded continuous and ordinal variables given the nominal variable. We estimate the regression parameters under the new general location models using the maximum-likelihood method. A sensitivity analysis is also performed to study the influence of small perturbations of the parameters of the missing mechanisms on the maximal normal curvature. The proposed models are applied to two data sets: BMI, Steatosis and Osteoporosis data, and Tehran household expenditure budgets.

17.
A false discovery rate (FDR) procedure is often employed in exploratory data analysis to determine which among thousands or millions of attributes are worthy of follow-up analysis. However, these methods tend to discover the most statistically significant attributes, which need not be the most worthy of further exploration. This article provides a new FDR-controlling method that allows for the nature of the exploratory analysis to be considered when determining which attributes are discovered. To illustrate, a study in which the objective is to classify discoveries into one of several clusters is considered, and a new FDR method that minimizes the misclassification rate is developed. It is shown analytically and with simulation that the proposed method performs better than competing methods.
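For reference, here is the standard Benjamini-Hochberg step-up procedure that FDR-controlling methods like this build on; it is the textbook baseline, not the article's new misclassification-minimizing method, and the toy data are assumed.

```python
import numpy as np
from scipy.stats import norm

def benjamini_hochberg(pvals, q=0.05):
    """Standard BH step-up procedure: returns a boolean discovery mask."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    # reject the k smallest p-values, where k is the largest index passing BH
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

rng = np.random.default_rng(11)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])
p = 2 * norm.sf(np.abs(z))
disc = benjamini_hochberg(p, q=0.05)
print("discoveries:", disc.sum(), " false:", disc[:900].sum())
```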

18.
Principal component analysis (PCA) is a widely used statistical technique for determining subscales in questionnaire data. As with any other statistical technique, missing data may complicate both its execution and its interpretation. In this study, six methods for dealing with missing data in the context of PCA are reviewed and compared: listwise deletion (LD), pairwise deletion, the missing data passive approach, regularized PCA, the expectation-maximization algorithm, and multiple imputation. Simulations show that except for LD, all methods give about equally good results for realistic percentages of missing data. Therefore, the choice of a procedure can be based on the ease of application or purely the convenience of availability of a technique.
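A sketch comparing three of the six approaches on simulated low-rank data: listwise deletion, mean substitution, and sklearn's IterativeImputer as an EM-like stand-in (regularized PCA, the passive approach, and proper multiple imputation are omitted; the data-generating constants are assumed).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer

rng = np.random.default_rng(12)
n, p = 300, 6
latent = rng.normal(size=(n, 2))
load = rng.normal(size=(2, p))
X = latent @ load + 0.3 * rng.normal(size=(n, p))
X_mis = X.copy()
X_mis[rng.random((n, p)) < 0.10] = np.nan       # 10% missing completely at random

def top_pc(X):
    return PCA(n_components=1).fit(X).components_[0]

pc_full = top_pc(X)
pc_ld = top_pc(X_mis[~np.isnan(X_mis).any(axis=1)])            # listwise deletion
pc_mean = top_pc(SimpleImputer().fit_transform(X_mis))         # mean substitution
pc_em = top_pc(IterativeImputer(random_state=0).fit_transform(X_mis))

for name, pc in [("listwise", pc_ld), ("mean", pc_mean), ("iterative", pc_em)]:
    # components are unit vectors, so |dot| is the cosine similarity
    print(f"{name:9s} |cos| with full-data PC1: {abs(pc @ pc_full):.3f}")
```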

19.
The authors propose two composite likelihood estimation procedures for multivariate models with regression/univariate and dependence parameters. One is a two‐stage method based on both univariate and bivariate margins. The other estimates all the parameters simultaneously based on bivariate margins. For some special cases, the authors compare their asymptotic efficiencies with the maximum likelihood method. The performance of the two methods is reasonable, except that the first procedure is inefficient for the regression parameters under strong dependence. The second approach is generally better for the regression parameters, but less efficient for the dependence parameters under weak dependence.
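A minimal sketch of the second approach, estimating all parameters simultaneously from bivariate margins, for an exchangeable-correlation normal model; the particular model, parameterization, and optimizer are assumptions for illustration, not the authors' setup.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(13)
n, d, rho_true = 400, 5, 0.4
cov = np.full((d, d), rho_true) + np.eye(d) * (1 - rho_true)
X = rng.multivariate_normal(np.zeros(d), cov, size=n) + 1.0  # mean 1, exchangeable

def neg_pairwise_loglik(params):
    """Sum of bivariate-margin log-likelihoods over all pairs (j, k)."""
    mu, log_s, z = params
    s2, rho = np.exp(2 * log_s), np.tanh(z)     # keep sigma > 0, |rho| < 1
    nll = 0.0
    for j in range(d):
        for k in range(j + 1, d):
            c = np.array([[s2, rho * s2], [rho * s2, s2]])
            nll -= multivariate_normal.logpdf(X[:, [j, k]],
                                              mean=[mu, mu], cov=c).sum()
    return nll

fit = minimize(neg_pairwise_loglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
mu, s, rho = fit.x[0], np.exp(fit.x[1]), np.tanh(fit.x[2])
print(f"mu={mu:.2f}  sigma={s:.2f}  rho={rho:.2f}")
```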

20.
In this paper, we propose a methodology to analyze longitudinal data through distances between pairs of observations (or individuals) with respect to the explanatory variables used to fit continuous response variables. Restricted maximum likelihood and generalized least squares are used to estimate the parameters in the model. We applied this new approach to study the effect of gender and exposure on a deviant behavior variable related to tolerance, for a group of youths studied over a period of 5 years. We performed simulations comparing our distance-based method with classical longitudinal analysis under both AR(1) and compound symmetry correlation structures. We compared them using the Akaike and Bayesian information criteria and the relative efficiency of the generalized variance of the errors of each model. We found small gains in fit for the proposed model relative to the classical methodology, particularly in small samples, regardless of the variance, correlation, autocorrelation structure and number of time measurements.
