Similar Literature
20 similar references found.
1.
Rank tests are known to be robust to outliers and to violations of distributional assumptions. Two major issues besetting microarray data are violation of the normality assumption and contamination by outliers. In this article, we formulate the normal theory simultaneous tests and their aligned rank transformation (ART) analog for detecting differentially expressed genes. These tests are based on the least-squares estimates of the effects when the data follow a linear model. Application of the two methods is then demonstrated on a real data set. To evaluate the performance of the aligned rank transform method against the corresponding normal theory method, data were simulated according to the characteristics of a real gene expression data set. These simulated data are then used to compare the two methods with respect to their sensitivity to the distributional assumption and to outliers in controlling the family-wise Type I error rate, power, and false discovery rate. It is demonstrated that the ART generally possesses the robustness-of-validity property even for microarray data with a small number of replications. Although these methods can be applied to more general designs, the simulation study in this article is carried out for a dye-swap design, since this design is broadly used in cDNA microarray experiments.
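A minimal sketch of one common aligned-rank-transform variant for a two-factor layout is given below: the estimated main effects are stripped from the response, the aligned values are ranked, and a standard ANOVA is run on the ranks to test the interaction. The column names ("y", "A", "B") and the statsmodels OLS/ANOVA machinery are illustrative assumptions, not the paper's implementation (which targets dye-swap microarray designs).

```python
# Sketch of an aligned rank transform (ART) test for an A x B interaction.
# Assumes a pandas DataFrame with columns "y", "A", "B"; these names and the
# particular alignment used here are illustrative, not taken from the paper.
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from scipy.stats import rankdata

def art_interaction_test(df):
    # 1. Estimate the nuisance (main) effects by least squares.
    main = smf.ols("y ~ C(A) + C(B)", data=df).fit()
    # 2. Align: remove the estimated main effects, leaving interaction + error.
    aligned = df["y"] - main.fittedvalues
    # 3. Rank the aligned responses (mid-ranks for ties).
    df = df.assign(r=rankdata(aligned))
    # 4. Run the usual normal-theory ANOVA on the ranks and read off the
    #    interaction row.
    full = smf.ols("r ~ C(A) * C(B)", data=df).fit()
    return anova_lm(full).loc["C(A):C(B)"]
```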

2.
Outliers are commonly observed in psychosocial research, and they generally result in biased estimates when comparing group differences using popular mean-based models such as the analysis of variance model. Rank-based methods such as the popular Mann–Whitney–Wilcoxon (MWW) rank sum test are more effective at addressing such outliers. However, available methods for inference are limited to cross-sectional data and cannot be applied to longitudinal studies with missing data. In this paper, we propose a generalized MWW test for comparing multiple groups with covariates within a longitudinal data setting, utilizing functional response models. Inference is based on a class of U-statistics-based weighted generalized estimating equations, providing consistent and asymptotically normal estimates not only under complete data but also under missing data. The proposed approach is illustrated with both real and simulated study data.
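For reference, the classical cross-sectional MWW rank-sum test that the paper generalizes can be run directly from SciPy; the group sizes and contamination scheme below are illustrative assumptions, and the longitudinal U-statistics/GEE extension itself is not reproduced here.

```python
# Classical two-sample Mann-Whitney-Wilcoxon test on contaminated data.
# Group sizes, shift, and the injected outliers are illustrative assumptions.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, size=30)
treated = rng.normal(0.5, 1.0, size=30)
treated[:3] += 10.0          # a few gross outliers

stat, p = mannwhitneyu(control, treated, alternative="two-sided")
print(f"MWW U = {stat:.1f}, p = {p:.4f}")
```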

3.
Smoothed Gehan rank estimation methods are widely used in accelerated failure time (AFT) models with or without clusters. However, most such methods are sensitive to outliers in the covariates. To address this problem, we propose robust approaches based on the smoothed Gehan rank estimation methods for the AFT model, allowing for clusters, by employing two different weight functions. Simulation studies show that the proposed methods outperform existing smoothed rank estimation methods in terms of bias and standard deviation when there are outliers in the covariates. The proposed methods are also applied to a real dataset from the “Major cardiovascular interventions” study.

4.
Abstract

Inferential methods based on ranks provide robust and powerful alternative methodology for testing and estimation. This article has two objectives. The first is to develop a general method of simultaneous confidence intervals based on the rank estimates of the parameters of a general linear model and to derive the asymptotic distribution of the pivotal quantity. The second is to extend the method to high-dimensional data, such as gene expression data, for which the usual large-sample approximation does not apply. It is common in practice to use the asymptotic distribution to make inference for small samples. The empirical investigation in this article shows that, for methods based on rank estimates, this approach does not produce viable inference and should be avoided. A method based on the bootstrap is outlined and is shown to provide a reliable and accurate way of constructing simultaneous confidence intervals based on rank estimates. In particular, it is shown that the commonly applied normal or t-approximations are not satisfactory, particularly for large-scale inference. Methods based on ranks are uniquely suited to the analysis of microarray gene expression data, which often involves large-scale inference based on small samples that contain many outliers and violate the assumption of normality. A real microarray data set is analyzed using the rank-estimate simultaneous confidence intervals. The viability of the proposed method is assessed through a Monte Carlo simulation study under varied assumptions.
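As a rough illustration of the bootstrap idea, the sketch below forms per-gene Hodges–Lehmann shift estimates and Bonferroni-adjusted percentile intervals as a simultaneous band. The data layout (arrays in rows, genes in columns) and the Bonferroni adjustment are simplifying assumptions, not the paper's exact construction.

```python
# Bootstrap simultaneous intervals for rank-based (Hodges-Lehmann) estimates.
# The Bonferroni-adjusted percentile construction is an illustrative
# simplification of the paper's procedure.
import numpy as np

def hodges_lehmann(x):
    # One-sample Hodges-Lehmann estimate: median of the Walsh averages.
    i, j = np.triu_indices(len(x))
    return np.median((x[i] + x[j]) / 2.0)

def simultaneous_boot_ci(data, alpha=0.05, n_boot=2000, seed=0):
    # data: n arrays (rows) by m genes (columns) of, e.g., log-ratios.
    rng = np.random.default_rng(seed)
    n, m = data.shape
    level = alpha / m                          # Bonferroni adjustment
    boots = np.empty((n_boot, m))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)       # resample arrays with replacement
        boots[b] = [hodges_lehmann(data[idx, g]) for g in range(m)]
    lower = np.quantile(boots, level / 2, axis=0)
    upper = np.quantile(boots, 1 - level / 2, axis=0)
    return lower, upper
```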

5.
During drug development, the calculation of the inhibitory concentration that results in a 50% response (IC50) is performed thousands of times every day. The nonlinear model most often used for this calculation is a four-parameter logistic, suitably parameterized to estimate the IC50 directly. When performing these calculations in a high-throughput mode, not every curve can be studied in detail, and outliers in the responses are a common problem. A robust estimation procedure for this calculation is therefore desirable. In this paper, a rank-based estimate of the four-parameter logistic model that is analogous to least squares is proposed. The rank-based estimate is based on the Wilcoxon norm. The robust procedure is illustrated with several examples from the pharmaceutical industry. When no outliers are present in the data, the robust estimate of IC50 is comparable with the least squares estimate, and when outliers are present, the robust estimate is more accurate. A robust goodness-of-fit test is also proposed. To investigate the impact of outliers on the traditional and robust estimates, a small simulation study was conducted.
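The sketch below shows what a rank-based (Wilcoxon-norm) fit of an IC50-parameterized four-parameter logistic might look like: Jaeckel's rank dispersion of the residuals is minimized with a derivative-free optimizer. The parameterization, starting values, and use of Nelder–Mead are illustrative assumptions rather than the paper's algorithm.

```python
# Rank-based fit of a four-parameter logistic curve parameterized by IC50.
# Parameterization, starting values, and optimizer are illustrative choices.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

def four_pl(conc, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

def wilcoxon_dispersion(theta, conc, resp):
    bottom, top, ic50, hill = theta
    if ic50 <= 0:
        return np.inf
    e = resp - four_pl(conc, bottom, top, ic50, hill)
    n = len(e)
    scores = np.sqrt(12.0) * (rankdata(e) / (n + 1) - 0.5)  # Wilcoxon scores
    return np.sum(scores * e)                               # Jaeckel's dispersion

def fit_ic50(conc, resp):
    theta0 = [resp.min(), resp.max(), np.median(conc), 1.0]  # crude start
    fit = minimize(wilcoxon_dispersion, theta0, args=(conc, resp),
                   method="Nelder-Mead")
    return fit.x  # bottom, top, IC50, hill
```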

6.
ABSTRACT

Advances in statistical computing software have led to a substantial increase in the use of ordinary least squares (OLS) regression models in the engineering and applied statistics communities. Empirical evidence suggests that data sets can routinely have 10% or more outliers in many processes. Unfortunately, these outliers will typically render the OLS parameter estimates useless. OLS diagnostic quantities and graphical plots can reliably identify a few outliers; however, they lose power considerably with increasing dimension and number of outliers. Although there have been recent advances in methods that detect multiple outliers, improvements are needed in regression estimators that fit well in the presence of outliers. We introduce a robust regression estimator that performs well regardless of outlier quantity and configuration. Our studies show that the best available estimators are vulnerable when the outliers are extreme in the regressor space (high leverage). Our proposed compound estimator modifies recently published methods with an improved initial estimate and measure of leverage. Extensive performance evaluations indicate that the proposed estimator performs best and consistently fits the bulk of the data when outliers are present. The estimator, implemented in standard software, provides researchers and practitioners with a tool for the model-building process that protects against the severe impact of multiple outliers.

7.
The zero-inflated Poisson (ZIP) distribution has been used to model count data in a variety of contexts. This model tends to be influenced by outliers because of the excessive occurrence of zeroes, so outlier identification and robust parameter estimation are important for such a distribution. Some outlier identification methods are studied in this paper, and their applications and results are presented with an example. To eliminate the effect of outliers, two robust parameter estimates are proposed, based on the trimmed mean and the Winsorized mean. Simulation results show the robustness of the proposed parameter estimates.
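As a purely illustrative construction (not the paper's estimator), the sketch below plugs a Winsorized mean and the observed zero proportion into the ZIP moment equations E[Y] = (1 − π)λ and P(Y = 0) = π + (1 − π)e^(−λ), then solves for (π, λ) by root finding.

```python
# Illustrative robust moment-style fit of a ZIP(pi, lam) model.
# This plug-in construction is an assumption for illustration only; the
# paper's trimmed-mean and Winsorized-mean estimators are not reproduced.
import numpy as np
from scipy.optimize import brentq
from scipy.stats.mstats import winsorize

def zip_robust_moments(y, winsor=0.05):
    y = np.asarray(y, dtype=float)
    m = float(np.mean(winsorize(y, limits=(winsor, winsor))))  # robust mean
    p0 = float(np.mean(y == 0))                                # zero proportion
    # Solve 1 - (m / lam) * (1 - exp(-lam)) = p0 for lam > 0.
    # (The bracket assumes m > 1 - p0, which holds for typical count data.)
    f = lambda lam: 1.0 - (m / lam) * (1.0 - np.exp(-lam)) - p0
    lam = brentq(f, 1e-8, 1e3)
    pi = 1.0 - m / lam
    return pi, lam
```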

8.
Principal component analysis (PCA) is a popular technique for dimensionality reduction, but it is affected by the presence of outliers. The outlier sensitivity of classical PCA (CPCA) has motivated the development of new approaches. The effects of replacing outliers with estimates obtained by expectation–maximization (EM) and by multiple imputation (MI) were examined on an artificial data set and a real data set. Furthermore, robust PCA based on the minimum covariance determinant (MCD), PCA based on EM estimates in place of outliers, and PCA based on MI estimates in place of outliers were compared with the results of CPCA. In this study, we tried to show the effects of using estimates obtained by MI and EM in place of outliers, depending on the ratio of outliers in the data set. Finally, when the ratio of outliers exceeds 20%, we suggest the use of estimates obtained by MI and EM in place of outliers as an alternative approach.
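A minimal sketch of the "impute in place of outliers" idea: flag suspect cells with a robust (median/MAD) z-score, treat them as missing, impute, and run ordinary PCA. The cell-wise flagging rule and scikit-learn's IterativeImputer as a stand-in for the EM/MI step are illustrative assumptions, not the paper's procedure.

```python
# Flag suspect cells, impute them, then run classical PCA on the cleaned data.
# Cutoff, flagging rule, and the imputer are illustrative assumptions.
import numpy as np
from scipy.stats import median_abs_deviation
from sklearn.decomposition import PCA
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def pca_with_imputed_outliers(X, n_components=2, cutoff=3.5):
    X = np.asarray(X, dtype=float)
    med = np.median(X, axis=0)
    mad = median_abs_deviation(X, axis=0, scale="normal")
    mad[mad == 0] = 1.0                            # guard against zero MAD
    z = np.abs(X - med) / mad                      # robust column-wise z-scores
    X_masked = np.where(z > cutoff, np.nan, X)     # suspect cells -> missing
    X_imp = IterativeImputer(random_state=0).fit_transform(X_masked)
    return PCA(n_components=n_components).fit(X_imp)
```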

9.
The problem of fitting circles and circular arcs to observed points arises in many areas of science. However, the results produced by most geometric and algebraic fitting methods are usually unacceptable in the presence of outliers. An iterative procedure for robust circle fitting is proposed. During each iteration, Taubin's method is employed to obtain the center and radius; the geometric distances from the data points to the fitted circle are then computed and used to identify and remove outliers. Numerical examples demonstrate that the proposed iterative procedure can alleviate the corrupting effect of outliers on the circle parameter estimates.
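The iterative scheme can be sketched as follows, with a simple Kåsa algebraic fit standing in for Taubin's method and a MAD-based cutoff as an illustrative outlier rule; neither choice is taken from the paper.

```python
# Iterative robust circle fit: fit, compute geometric distances, drop apparent
# outliers, refit. Kaasa fit and MAD cutoff are illustrative stand-ins.
import numpy as np

def kasa_circle_fit(x, y):
    # Solve x^2 + y^2 = 2*a*x + 2*b*y + c in the least-squares sense.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    return a, b, np.sqrt(c + a**2 + b**2)           # center (a, b), radius r

def robust_circle_fit(x, y, n_iter=5, k=3.0):
    # x, y: numpy arrays of point coordinates.
    keep = np.ones_like(x, dtype=bool)
    for _ in range(n_iter):
        a, b, r = kasa_circle_fit(x[keep], y[keep])
        dist = np.abs(np.hypot(x - a, y - b) - r)   # geometric distances
        mad = np.median(np.abs(dist[keep] - np.median(dist[keep])))
        keep = dist <= k * max(1.4826 * mad, 1e-12) # drop apparent outliers
    return (*kasa_circle_fit(x[keep], y[keep]), keep)
```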

10.
Mixed effects models, or random effects models, are popular for the analysis of longitudinal data. In practice, longitudinal data are often complex: there may be outliers in both the response and the covariates, as well as measurement errors. The likelihood method is a common approach to these problems, but it can be computationally very intensive and sometimes even infeasible. In this article, we consider approximate robust methods for nonlinear mixed effects models that simultaneously address outliers and measurement errors. The approximate methods are computationally very efficient. We show the consistency and asymptotic normality of the approximate estimates. The methods can also be extended to missing data problems. An example is used to illustrate the methods, and a simulation is conducted to evaluate them.

11.
One of the standard variable selection procedures in multiple linear regression is to use a penalisation technique in least-squares (LS) analysis. In this setting, many different types of penalties have been introduced to achieve variable selection. It is well known that LS analysis is sensitive to outliers, and consequently outliers can present serious problems for the classical variable selection procedures. Since rank-based procedures have desirable robustness properties compared to LS procedures, we propose a rank-based adaptive lasso-type penalised regression estimator and a corresponding variable selection procedure for linear regression models. The proposed estimator and variable selection procedure are robust against outliers in both the response and predictor spaces. Furthermore, since rank regression can yield unstable estimators in the presence of multicollinearity, we adjust the penalty term in the adaptive lasso function by incorporating the standard errors of the rank estimator, so that the resulting inference is robust against multicollinearity. The theoretical properties of the proposed procedures are established, and their performance is investigated by means of simulations. Finally, the estimator and variable selection procedure are applied to the Plasma Beta-Carotene Level data set.

12.
Some recent contributions to robust data analysis and multiple outlier detection are discussed. Two methods of analysis that produce robust estimates together with sets of weights that may be inspected for outliers are described and compared. Some examples of their application are given to support the recommendation that both ordinary least squares and a robust method of analysis should be part of routine data analysis.
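In that spirit, a minimal sketch of running OLS and a robust fit side by side and inspecting the robust weights for potential outliers is shown below; the use of statsmodels' Huber M-estimator and the simulated contamination are illustrative choices, not the two methods compared in the paper.

```python
# Run OLS and a robust (Huber M) fit side by side and inspect the weights.
# The simulated data and the choice of estimator are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 1.0 + 2.0 * x + rng.normal(0, 1, size=50)
y[:3] += 15.0                                  # contaminate a few responses

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()

print("OLS coefficients:   ", ols.params)
print("Robust coefficients:", rlm.params)
# Small robust weights point at observations worth inspecting as outliers.
suspects = np.argsort(rlm.weights)[:5]
print("Lowest-weight cases:", suspects, rlm.weights[suspects])
```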

13.
Ordinal data, such as students' grades or customer satisfaction surveys, are widely used in daily life. We can fit a probit or logistic regression model to ordinal data using software such as SAS and obtain estimates of the regression parameters. However, it is hard to define residuals and detect outliers, because the estimated probabilities of an observation falling into each category form a vector rather than a scalar. With the help of latent variables and latent residuals, a Bayesian perspective on detecting outliers is explored, and several methods are proposed in this article. Several illustrative figures are also given.

14.
The least squares estimates of the parameters in the multistage dose-response model are unduly affected by outliers in a data set, whereas the minimum sum of absolute errors (MSAE) estimates are more resistant to outliers. Algorithms to compute the MSAE estimates can be tedious and computationally burdensome. We propose a linear approximation for the dose-response model that can be used to find the MSAE estimates with a simple and computationally less intensive algorithm. A few illustrative examples and a Monte Carlo study show that the MSAE estimates of the parameters obtained with the linear approximation are comparable to those obtained with the exact model.
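For orientation, a direct (non-approximated) MSAE fit can be sketched as below: the sum of absolute errors between observed responses and a multistage model P(d) = 1 − exp(−(q0 + q1·d + q2·d²)) is minimized with a derivative-free optimizer. The quadratic stage, non-negativity clipping, and starting values are illustrative assumptions, and this brute-force route is not the paper's linear-approximation algorithm.

```python
# Direct MSAE (L1) fit of a quadratic multistage dose-response model.
# Stage order, clipping, and starting values are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def multistage(d, q):
    q = np.clip(q, 0.0, None)               # keep stage coefficients non-negative
    return 1.0 - np.exp(-(q[0] + q[1] * d + q[2] * d**2))

def msae_fit(dose, response, q0=(0.01, 0.01, 0.01)):
    loss = lambda q: np.sum(np.abs(response - multistage(dose, q)))
    return minimize(loss, q0, method="Nelder-Mead").x
```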

15.
Abstract

Constrained M (CM) estimates of multivariate location and scatter [Kent, J. T., Tyler, D. E. (1996). Constrained M-estimation for multivariate location and scatter. Ann. Statist. 24:1346–1370] are defined as the global minimum of an objective function subject to a constraint. These estimates combine the good global robustness properties of the S estimates with the good local robustness properties of the redescending M estimates. The CM estimates are not explicitly defined, so numerical methods must be used to compute them. In this paper, we give an algorithm for computing the CM estimates. Using the algorithm, we conduct a small simulation study to demonstrate its ability to find the CM estimates and to explore their finite-sample behavior. We also use the CM estimators to estimate the location and scatter parameters of several multivariate data sets, to assess how the CM estimates perform on real data sets that may contain outliers.

16.
Regression procedures are not only hindered by large p and small n, but can also suffer when outliers are present or the data-generating mechanism is heavy tailed. Since penalized estimators such as the least absolute shrinkage and selection operator (LASSO) handle the large-p, small-n setting by encouraging sparsity, we combine a LASSO-type penalty with the absolute deviation loss function, instead of the standard least squares loss, to handle the presence of outliers and heavy tails. The model is cast in a Bayesian setting, and a Gibbs sampler is derived to sample efficiently from the posterior distribution. We compare our method to existing methods in a simulation study as well as on a prostate cancer data set and a base deficit data set from trauma patients.
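A frequentist analogue of the loss-plus-penalty combination above (absolute deviation loss with an L1 penalty) is available through scikit-learn's QuantileRegressor at the median; the sketch below uses that as a stand-in for illustration only, since the paper's Bayesian formulation and Gibbs sampler are not reproduced here.

```python
# LAD + L1 penalty via median quantile regression (a frequentist stand-in
# for the Bayesian model above). Dimensions, penalty strength, and the
# heavy-tailed noise are illustrative assumptions.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
n, p = 50, 100                               # small n, large p
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                  # sparse true signal
y = X @ beta + rng.standard_t(df=2, size=n)  # heavy-tailed noise

lad_lasso = QuantileRegressor(quantile=0.5, alpha=0.1, solver="highs").fit(X, y)
print("non-zero coefficients:", np.flatnonzero(np.abs(lad_lasso.coef_) > 1e-8))
```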

17.
The power of some rank tests used for testing the hypothesis of shift is found when the underlying distributions contain outliers. The outliers are assumed to occur as the result of mixing two normal distributions with a common variance. A small-sample case shows how the scores for the rank tests are found, and the exact power is computed for each of these rank tests. A Monte Carlo study provides an estimate of the power of the usual two-sample t-test.
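A Monte Carlo sketch in this spirit is given below: the power of the two-sample t-test and the Wilcoxon rank-sum test is estimated when each observation comes from a two-component normal mixture with common variance. The sample sizes, shift, mixing proportion, and outlier location are illustrative assumptions, and the exact-power calculation for the rank scores is not reproduced.

```python
# Monte Carlo power of the t-test and Wilcoxon rank-sum test under a
# contaminated-normal (mixture) model. All settings are illustrative.
import numpy as np
from scipy.stats import ttest_ind, ranksums

def contaminated_normal(rng, n, loc, eps=0.1, outlier_shift=6.0):
    # Mixture of N(loc, 1) and N(loc + outlier_shift, 1), common variance.
    outlier = rng.random(n) < eps
    return rng.normal(loc + outlier_shift * outlier, 1.0)

def mc_power(shift=1.0, n=15, reps=5000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    rej_t = rej_w = 0
    for _ in range(reps):
        x = contaminated_normal(rng, n, 0.0)
        y = contaminated_normal(rng, n, shift)
        rej_t += ttest_ind(x, y).pvalue < alpha
        rej_w += ranksums(x, y).pvalue < alpha
    return rej_t / reps, rej_w / reps

print("estimated power (t-test, Wilcoxon):", mc_power())
```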

18.
The purpose of this paper is to examine the multiple-group (>2) discrimination problem in which the group sizes are unequal and the variables used in the classification are correlated and have skewed distributions. Using statistical simulation based on data from a clinical study, we compare the performance, in terms of misclassification rates, of nine statistical discrimination methods. These methods are linear and quadratic discriminant analysis applied to untransformed data, to rank-transformed data, and to inverse normal scores data, as well as fixed kernel discriminant analysis, variable kernel discriminant analysis, and variable kernel discriminant analysis applied to inverse normal scores data. It is found that the parametric methods with transformed data generally outperform the other methods, and that the parametric methods applied to inverse normal scores usually outperform the parametric methods applied to rank-transformed data. Although the kernel methods often have very biased estimates, the variable kernel method applied to inverse normal scores data provides considerable improvement in terms of total non-error rate.

19.
The pretest–posttest design is widely used to investigate the effect of an experimental treatment in biomedical research. The treatment effect may be assessed using analysis of variance (ANOVA) or analysis of covariance (ANCOVA). The normality assumption of parametric ANOVA and ANCOVA may be violated because of outliers and skewness in the data. Nonparametric methods, robust statistics, and data transformation may be used to address the nonnormality issue. However, the four statistical approaches have not been compared simultaneously in terms of empirical Type I error probability and statistical power. We studied 13 ANOVA and ANCOVA models based on the parametric approach, rank- and normal-score-based nonparametric approaches, Huber M-estimation, and the Box–Cox transformation, using normal data with and without outliers and lognormal data. We found that the ANCOVA models preserve the nominal significance level better and are more powerful than their ANOVA counterparts when the dependent variable and covariate are correlated. Huber M-estimation is the most liberal method. Nonparametric ANCOVA, especially ANCOVA based on the normal score transformation, preserves the nominal significance level, has good statistical power, and is robust to the data distribution.
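The normal-score-based ANCOVA can be sketched as below: the outcome (and, here, the covariate as well) is converted to inverse normal (van der Waerden-type) scores and the usual ANCOVA is fitted. The column names ("post", "pre", "group") and the decision to transform the covariate are illustrative assumptions.

```python
# Normal-score ANCOVA sketch: inverse normal scores for outcome and covariate,
# then an ordinary ANCOVA fit. Column names are illustrative assumptions.
import statsmodels.formula.api as smf
from scipy.stats import norm, rankdata

def normal_scores(x):
    n = len(x)
    return norm.ppf(rankdata(x) / (n + 1))         # inverse normal scores

def normal_score_ancova(df):
    # df: pandas DataFrame with columns "post", "pre", "group".
    df = df.assign(post_ns=normal_scores(df["post"]),
                   pre_ns=normal_scores(df["pre"]))
    return smf.ols("post_ns ~ pre_ns + C(group)", data=df).fit()

# Usage: normal_score_ancova(df).summary() to read off the group effect.
```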

20.
We considered the problem of estimating effects in the following linear model for data arranged in a two-way table: Response = Common effect + Row effect + Column effect + Residual. This work was occasioned by a project to analyse Federal Aviation Administration (FAA) data on daily temporal deviations from flight plans for commercial US flights, with rows and columns representing origin and destination airports, respectively. We conducted a large Monte Carlo study comparing the accuracy of three methods of estimation: classical least squares, median polish and least absolute deviations (LAD). The experiments included a wide spectrum of tables of different sizes and shapes, with different levels of non-linearity, noise variance, and percentages of empty cells and outliers. We based our comparison on the accuracy of the estimates and on computational speed. We identified factors that significantly affect accuracy and speed, and compared the methods based on their sensitivity to these factors. We concluded that there is no dominant method of estimation and identified conditions under which each method is most attractive.
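A minimal median-polish sketch for the two-way decomposition above (Response = Common + Row + Column + Residual) is given below, alternately sweeping row and column medians in the manner of Tukey's procedure; the fixed number of sweeps and the assumption of a complete table are illustrative simplifications.

```python
# Median polish of a complete two-way table into common, row, column, and
# residual parts. Fixed sweep count stands in for the usual convergence check.
import numpy as np

def median_polish(table, n_sweeps=10):
    resid = np.array(table, dtype=float)       # complete two-way table assumed
    common = 0.0
    row = np.zeros(resid.shape[0])
    col = np.zeros(resid.shape[1])
    for _ in range(n_sweeps):
        # Sweep row medians out of the residuals into the row effects.
        rdelta = np.median(resid, axis=1)
        resid -= rdelta[:, None]
        row += rdelta
        shift = np.median(col)                 # re-centre the column effects
        col -= shift
        common += shift
        # Sweep column medians out of the residuals into the column effects.
        cdelta = np.median(resid, axis=0)
        resid -= cdelta[None, :]
        col += cdelta
        shift = np.median(row)                 # re-centre the row effects
        row -= shift
        common += shift
    return common, row, col, resid
```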
