Similar Documents
20 similar documents found (search time: 31 ms)
1.
There has been ever-increasing interest in the use of microarray experiments as a basis for the provision of prediction (discriminant) rules for improved diagnosis of cancer and other diseases. Typically, the microarray cancer studies provide only a limited number of tissue samples from the specified classes of tumours or patients, whereas each tissue sample may contain the expression levels of thousands of genes. Thus researchers are faced with the problem of forming a prediction rule on the basis of a small number of classified tissue samples, which are of very high dimension. Usually, some form of feature (gene) selection is adopted in the formation of the prediction rule. As the subset of genes used in the final form of the rule has not been randomly selected but rather chosen according to some criterion designed to reflect the predictive power of the rule, there will be a selection bias inherent in estimates of the error rates of the rules if care is not taken. We shall present various situations where selection bias arises in the formation of a prediction rule and where there is a consequent need for the correction of this bias. We describe the design of cross-validation schemes that are able to correct for the various selection biases.
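To make the selection-bias point concrete, here is a minimal Python sketch (not from the paper; the nearest-centroid classifier, the class-mean gene ranking, and all names are illustrative assumptions). On pure-noise data with no real class signal, cross-validation that reuses a gene subset chosen on the full data reports an optimistically low error rate, while repeating the selection inside each training fold gives an honest estimate near 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 40, 1000, 10            # tissue samples, genes, genes kept
X = rng.normal(size=(n, p))       # pure noise: no gene carries real signal
y = rng.integers(0, 2, size=n)    # two tumour classes

def top_k(X, y, k):
    # rank genes by absolute class-mean difference
    d = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
    return np.argsort(d)[-k:]

def fit_predict(Xtr, ytr, Xte):
    # nearest-centroid classifier on the selected genes
    m1, m0 = Xtr[ytr == 1].mean(axis=0), Xtr[ytr == 0].mean(axis=0)
    return (np.linalg.norm(Xte - m1, axis=1) <
            np.linalg.norm(Xte - m0, axis=1)).astype(int)

def cv_error(X, y, folds=5, select_inside=True):
    idx = np.arange(len(y))
    genes_all = top_k(X, y, k)        # selection using ALL data: biased
    errs = []
    for f in range(folds):
        te = idx[f::folds]
        tr = np.setdiff1d(idx, te)
        sel = top_k(X[tr], y[tr], k) if select_inside else genes_all
        yhat = fit_predict(X[tr][:, sel], y[tr], X[te][:, sel])
        errs.append(np.mean(yhat != y[te]))
    return float(np.mean(errs))

biased = cv_error(X, y, select_inside=False)  # optimistic estimate
honest = cv_error(X, y, select_inside=True)   # close to 0.5 on noise
print(biased, honest)
```

The gap between the two estimates is exactly the selection bias the abstract describes: the "biased" scheme lets the held-out samples influence gene selection, while the "honest" scheme keeps selection entirely inside each training fold.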

2.
We propose a new criterion for model selection in prediction problems. The covariance inflation criterion adjusts the training error by the average covariance of the predictions and responses, when the prediction rule is applied to permuted versions of the data set. This criterion can be applied to general prediction problems (e.g. regression or classification) and to general prediction rules (e.g. stepwise regression, tree-based models and neural nets). As a by-product we obtain a measure of the effective number of parameters used by an adaptive procedure. We relate the covariance inflation criterion to other model selection procedures and illustrate its use in some regression and classification problems. We also revisit the conditional bootstrap approach to model selection.
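A rough sketch of the permutation idea for an ordinary-least-squares rule follows (this is our own simplified reading of the criterion, not the paper's exact formula; constants, names, and the number of permutations are assumptions). The training error is inflated by twice the average per-observation covariance between fitted values and responses, estimated by refitting the rule on permuted responses:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + rng.normal(size=n)   # only the first column matters

def ols_fit(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef

def cic(X, y, n_perm=200):
    # training error of the rule on the observed data
    err = np.mean((y - ols_fit(X, y)) ** 2)
    fits = np.empty((n_perm, n))
    perms = np.empty((n_perm, n))
    for b in range(n_perm):
        perms[b] = rng.permutation(y)     # break the X-y association
        fits[b] = ols_fit(X, perms[b])
    # per-observation covariance of prediction and permuted response,
    # summed and used to inflate the training error
    cov = np.mean((fits - fits.mean(axis=0)) *
                  (perms - perms.mean(axis=0)), axis=0)
    return err + 2 * cov.sum() / n

cic_true = cic(X[:, :1], y)   # correct one-variable model
cic_full = cic(X, y)          # four extra noise columns
print(cic_true, cic_full)
```

The larger model fits the training data slightly better but pays a larger covariance penalty, so the criterion prefers the smaller, correct model here.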

3.
It is often desirable to select a subset of regression variables so as to maximise the accuracy of prediction at a pre-specified point. There are a variety of possible mean-square-error-type criteria which could be used to measure the accuracy of prediction and hence to select an optimal subset. We shall show how these can easily be included in existing stepwise regression codes. The performance of the criteria is compared on a data set, where it becomes obvious that not only do different criteria give rise to different subsets at the same prediction point, but the same criterion quite commonly gives rise to different subsets at different prediction points. Thus the choice of a criterion has a major effect on the subset selected, and so requires conscious selection.

4.
Model selection for marginal regression analysis of longitudinal data is challenging owing to the presence of correlation and the difficulty of specifying the full likelihood, particularly for correlated categorical data. The paper introduces a novel Bayesian information criterion type model selection procedure based on the quadratic inference function, which does not require the full likelihood or quasi-likelihood. With probability approaching 1, the criterion selects the most parsimonious correct model. Although a working correlation matrix is assumed, there is no need to estimate the nuisance parameters in the working correlation matrix; moreover, the model selection procedure is robust against the misspecification of the working correlation matrix. The criterion proposed can also be used to construct a data-driven Neyman smooth test for checking the goodness of fit of a postulated model. This test is especially useful and often yields much higher power in situations where the classical directional test behaves poorly. The finite sample performance of the model selection and model checking procedures is demonstrated through Monte Carlo studies and analysis of a clinical trial data set.

5.
DNA microarrays allow for measuring expression levels of a large number of genes between different experimental conditions and/or samples. Association rule mining (ARM) methods are helpful in finding associational relationships between genes. However, classical association rule mining (CARM) algorithms extract only a subset of the associations that exist among different binary states, and can therefore infer only part of the relationships among gene regulations. To resolve this problem, we developed an extended association rule mining (EARM) strategy along with a new way of defining association rules. Compared with the CARM method, our new approach extracted more frequent genesets from a public microarray data set. The EARM method discovered some biologically interesting association rules that were not detected by CARM. Therefore, EARM provides an effective tool for exploring relationships among genes.

6.
Ensemble methods using the same underlying algorithm trained on different subsets of observations have recently received increased attention as practical prediction tools for massive data sets. We propose Subsemble: a general subset ensemble prediction method, which can be used for small, moderate, or large data sets. Subsemble partitions the full data set into subsets of observations, fits a specified underlying algorithm on each subset, and uses a clever form of V-fold cross-validation to output a prediction function that combines the subset-specific fits. We give an oracle result that provides a theoretical performance guarantee for Subsemble. Through simulations, we demonstrate that Subsemble can be a beneficial tool for small- to moderate-sized data sets, and often has better prediction performance than the underlying algorithm fit just once on the full data set. We also describe how to include Subsemble as a candidate in a SuperLearner library, providing a practical way to evaluate the performance of Subsemble relative to the underlying algorithm fit just once on the full data set.
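The Subsemble recipe (partition, fit per subset, combine via V-fold cross-validation) can be sketched for an OLS base learner as follows. This is a simplified illustration under our own assumptions: a least-squares meta-learner for the combination step, random fold assignment, and all variable names are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=(n, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

def fit_ols(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def subsemble(X, y, n_subsets=3, v=5):
    n = len(y)
    parts = np.array_split(rng.permutation(n), n_subsets)  # disjoint subsets
    folds = rng.integers(0, v, size=n)                     # V-fold labels
    Z = np.empty((n, n_subsets))      # cross-validated subset-specific fits
    for j, part in enumerate(parts):
        for f in range(v):
            tr = part[folds[part] != f]   # subset j with fold f held out
            te = np.where(folds == f)[0]  # all fold-f points get predictions
            Z[te, j] = X[te] @ fit_ols(X[tr], y[tr])
    w = fit_ols(Z, y)                     # combiner learned on CV predictions
    fits = [fit_ols(X[part], y[part]) for part in parts]
    return w, fits

def predict(w, fits, Xnew):
    return np.column_stack([Xnew @ c for c in fits]) @ w

w, fits = subsemble(X, y)
mse = np.mean((y - predict(w, fits, X)) ** 2)
print(round(mse, 3))
```

The key design point the abstract highlights is that the combining weights are learned on out-of-fold predictions, so the combiner sees an honest estimate of each subset-specific fit's performance rather than its in-sample fit.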

7.
The QR-factorization provides a set of orthogonal variables which has advantages over other orthogonal representations, such as principal components and the singular-value decomposition, in selecting subsets of regression variables by least squares methods. Stopping rules, in particular, are easily understood. A new stopping rule is derived for prediction. This is derived by approximately minimizing the mean squared error in estimating the squared error of prediction. A clear distinction is made between the kind of stopping rule which is relevant when the objective is prediction, and when the objective is asymptotic consistency. Progress with reducing the bias due to the model selection procedure is briefly summarized.  相似文献   

8.
This paper considers the problem where the linear discriminant rule is formed from training data that are only partially classified with respect to the two groups of origin. A further complication is that the data of unknown origin do not constitute an observed random sample from a mixture of the two underlying groups. Under the assumption of a homoscedastic normal model, the overall error rate of the sample linear discriminant rule formed by maximum likelihood from the partially classified training data is derived up to and including terms of the first order in the case of univariate feature data. This first-order expansion of the sample rule so formed is used to define its asymptotic efficiency relative to the rule formed from a completely classified random training set and also to the rule formed from a completely unclassified random set.

9.
In real-data analysis, deciding the best subset of variables in regression models is an important problem. Akaike's information criterion (AIC) is often used in order to select variables in many fields. When the sample size is not so large, the AIC has a non-negligible bias that will detrimentally affect variable selection. The present paper considers a bias correction of AIC for selecting variables in the generalized linear model (GLM). The GLM can express a number of statistical models by changing the distribution and the link function, such as the normal linear regression model, the logistic regression model, and the probit model, which are commonly used in a number of applied fields. In the present study, we obtain a simple expression for a bias-corrected AIC (corrected AIC, or CAIC) in GLMs. Furthermore, we provide 'R' code based on our formula. A numerical study reveals that the CAIC has better performance than the AIC for variable selection.
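The paper's CAIC formula is not reproduced here, but the role AIC plays in subset selection can be illustrated for the Gaussian special case of the GLM (a hypothetical sketch; the data-generating setup and all names are our own). Each candidate subset is scored by AIC and the minimizer is selected:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, p = 60, 4
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=n)  # variables 0, 1 active

def aic_gaussian(Xs, y):
    # AIC for a Gaussian linear model: n*log(RSS/n) + 2*(k + 1),
    # where the +1 counts the error-variance parameter
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ coef) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * (Xs.shape[1] + 1)

subsets = [s for r in range(1, p + 1) for s in combinations(range(p), r)]
best = min(subsets, key=lambda s: aic_gaussian(X[:, list(s)], y))
print(best)
```

The truly active variables are reliably retained; the small-sample bias that the paper corrects shows up as AIC's occasional tendency to admit extra noise variables in settings like this one.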

10.
We restrict attention to a class of Bernoulli subset selection procedures which take observations one-at-a-time and can be compared directly to the Gupta-Sobel single-stage procedure. For the criterion of minimizing the expected total number of observations required to terminate experimentation, we show that optimal sampling rules within this class are not of practical interest. We thus turn to procedures which, although not optimal, exhibit desirable behavior with regard to this criterion. A procedure which employs a modification of the so-called least-failures sampling rule is proposed, and is shown to possess many desirable properties among a restricted class of Bernoulli subset selection procedures. Within this class, it is optimal for minimizing the number of observations taken from populations excluded from consideration following a subset selection experiment, and asymptotically optimal for minimizing the expected total number of observations required. In addition, it can result in substantial savings in the expected total number of observations required as compared to a single-stage procedure, thus it may be desirable to a practitioner if sampling is costly or the sample size is limited.

11.
The autoregressive model is a popular method for analysing time-dependent data, where selection of the order parameter is imperative. Two commonly used selection criteria are the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), which are known to suffer from potential problems of overfitting and underfitting, respectively. To our knowledge, no criterion in the literature performs satisfactorily across all situations. In this paper, we therefore focus on forecasting the future values of an observed time series and propose an adaptive idea that combines the advantages of AIC and BIC while mitigating their weaknesses, based on the concept of generalized degrees of freedom. Instead of applying a fixed criterion to select the order parameter, we propose an approximately unbiased estimator of the mean squared prediction error, based on a data perturbation technique, for fairly comparing AIC and BIC; the selected criterion is then used to determine the final order parameter. Numerical experiments are performed to show the superiority of the proposed method, and a real data set of the retail price index of China from 1952 to 2008 is analysed for illustration.

12.
Several studies have shown that at the individual level there exists a negative relationship between age at first birth and completed fertility. Using twin data in order to control for unobserved heterogeneity as a possible source of bias, Kohler et al. (2001) showed the significant presence of such a "postponement effect" at the micro level. In this paper, we apply sample selection models, where selection is based on having or not having had a first birth at all, to estimate the impact of postponing first births on subsequent fertility for four European nations, three of which now have lowest-low fertility levels. We use data from a set of comparative surveys (Fertility and Family Surveys), and we apply sample selection models to the logarithm of total fertility and to the progression to the second birth. Our results show that postponement effects are only very slightly affected by sample selection biases, so that sample selection models do not significantly improve the results of standard regression techniques on selected samples. Our results confirm that the postponement effect is higher in countries with lowest-low fertility levels.

13.
We show how to infer about a finite population proportion using data from a possibly biased sample. In the absence of any selection bias or survey weights, a simple ignorable selection model, which assumes that the binary responses are independent and identically distributed Bernoulli random variables, is not unreasonable. However, this ignorable selection model is inappropriate when there is a selection bias in the sample. We assume that the survey weights (or their reciprocals which we call 'selection' probabilities) are available, but there is no simple relation between the binary responses and the selection probabilities. To capture the selection bias, we assume that there is some correlation between the binary responses and the selection probabilities (e.g., there may be a somewhat higher/lower proportion of positive responses among the sampled units than among the nonsampled units). We use a Bayesian nonignorable selection model to accommodate the selection mechanism. We use Markov chain Monte Carlo methods to fit the nonignorable selection model. We illustrate our method using numerical examples obtained from NHIS 1995 data.

14.

We present a decomposition of prediction error for the multilevel model in the context of predicting a future observable y*_j in the jth group of a hierarchical dataset. The multilevel prediction rule is used for prediction and the components of prediction error are estimated via a simulation study that spans the various combinations of level-1 (individual) and level-2 (group) sample sizes and different intraclass correlation values. Additionally, analytical results present the increase in predicted mean square error (PMSE) with respect to prediction error bias. The components of prediction error provide information with respect to the cost of parameter estimation versus data imputation for predicting future values in a hierarchical data set. Specifically, the cost of parameter estimation is very small compared to data imputation.

15.
The problem of selecting exponential populations better than a control under a simple ordering prior is investigated. Based on some prior information, it is appropriate to set lower bounds for the concerned parameters. The information about the lower bounds of the concerned parameters is taken into account to derive isotonic selection rules for the control known case. An isotonic selection rule for the control unknown case is also proposed. A criterion is proposed to evaluate the performance of the selection rules. Simulation comparisons among the performances of several selection rules are carried out. The simulation results indicate that for the control known case, the new proposed selection rules perform better than some earlier existing selection rules.

16.
The sample selection bias problem occurs when the outcome of interest is observed only according to some selection rule, where there is a dependence structure between the outcome and the selection rule. In a pioneering work, J. Heckman proposed a sample selection model based on a bivariate normal distribution for dealing with this problem. Due to the non-robustness of the normal distribution, many alternatives have been introduced in the literature as extensions of the normal model, such as the Student-t and skew-normal models. One common limitation of existing sample selection models is that they require a transformation of the outcome of interest, which is commonly positive-valued, such as income or wage. As a result, data are analyzed on a non-original scale, which complicates the interpretation of the parameters. In this paper, we propose a sample selection model based on the bivariate Birnbaum–Saunders distribution, which has the same number of parameters as the classical Heckman model. Further, our associated outcome equation is positive-valued. We discuss estimation by maximum likelihood and present some Monte Carlo simulation studies. An empirical application to the ambulatory expenditures data from the 2001 Medical Expenditure Panel Survey is presented.

17.
In high throughput genomic studies, an important goal is to identify a small number of genomic markers that are associated with development and progression of diseases. A representative example is microarray prognostic studies, where the goal is to identify genes whose expressions are associated with disease-free or overall survival. Because of the high dimensionality of gene expression data, standard survival analysis techniques cannot be directly applied. In addition, among the thousands of genes surveyed, only a subset are disease-associated. Gene selection is needed along with estimation. In this article, we model the relationship between gene expressions and survival using the accelerated failure time (AFT) models. We use the bridge penalization for regularized estimation and gene selection. An efficient iterative computational algorithm is proposed. Tuning parameters are selected using V-fold cross validation. We use a resampling method to evaluate the prediction performance of the bridge estimator and the relative stability of identified genes. We show that the proposed bridge estimator is selection consistent under appropriate conditions. Analysis of two lymphoma prognostic studies suggests that the bridge estimator can identify a small number of genes and can have better prediction performance than the Lasso.

18.
The autoregressive (AR) model is a popular method for fitting and prediction in analyzing time-dependent data, where selecting an accurate model among the considered orders is a crucial issue. Two commonly used selection criteria are the Akaike information criterion and the Bayesian information criterion. However, the two criteria are known to suffer from potential problems of overfitting and underfitting, respectively, and so each performs well in some situations but poorly in others. In this paper, we propose a new criterion from the prediction perspective based on the concept of generalized degrees of freedom for AR model selection. We derive an approximately unbiased estimator of the mean-squared prediction error, based on a data perturbation technique, for selecting the order parameter, where the estimation uncertainty involved in the modeling procedure is taken into account. Some numerical experiments are performed to illustrate the superiority of the proposed method over some commonly used order selection criteria. Finally, the methodology is applied to a real data example to predict the weekly rate of return on the stock price of Taiwan Semiconductor Manufacturing Company, and the results indicate that the proposed method is satisfactory.
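The baseline that this paper improves on, AR order selection by AIC and BIC, can be sketched in a few lines (an illustrative setup of our own, not the paper's method or data; all names are assumptions). Each candidate order is fit by least squares on a common effective sample so that the criteria are comparable:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 400
x = np.zeros(T)
for t in range(2, T):               # simulate an AR(2) process
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

def ar_rss(x, order, max_order):
    # least-squares AR(order) fit, conditioning on the first max_order
    # values so every candidate order uses the same effective sample
    Y = x[max_order:]
    Z = np.column_stack([x[max_order - k: len(x) - k]
                         for k in range(1, order + 1)])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return np.sum((Y - Z @ coef) ** 2), len(Y)

max_order = 6
aic, bic = {}, {}
for p in range(1, max_order + 1):
    rss, n = ar_rss(x, p, max_order)
    aic[p] = n * np.log(rss / n) + 2 * p          # light penalty: may overfit
    bic[p] = n * np.log(rss / n) + np.log(n) * p  # heavy penalty: may underfit

p_aic = min(aic, key=aic.get)
p_bic = min(bic, key=bic.get)
print(p_aic, p_bic)
```

Because the BIC penalty log(n) exceeds AIC's constant 2, BIC never selects a larger order than AIC on the same fit, which is exactly the overfit/underfit tension between the two criteria that the proposed adaptive criterion targets.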

19.

Modeling diagnostics assess models by means of a variety of criteria. Each criterion typically performs its evaluation with respect to a specific inferential objective. For instance, the well-known DFBETAS in linear regression models are a modeling diagnostic applied to discover the influential cases in fitting a model. To facilitate the evaluation of generalized linear mixed models (GLMMs), we develop a diagnostic, based on the information complexity (ICOMP) criteria, for detecting influential cases which substantially affect the model selection criterion ICOMP. In a given model, the diagnostic compares the ICOMP criterion between the full data set and a case-deleted data set. The computational formula of the ICOMP criterion is evaluated using the Fisher information matrix. A simulation study is conducted, and a real data set of cancer cells is analyzed using the logistic linear mixed model to illustrate the effectiveness of the proposed diagnostic in detecting the influential cases.

20.
A usual approach for selection of the best subset AR model of known maximal order is to use an appropriate information criterion, such as AIC or SIC, with an exhaustive search over the regressors, choosing the subset model that produces the optimum (minimum) value of AIC or SIC. This method is computationally intensive. A method is proposed based on the use of singular value decomposition and QR-with-column-pivoting factorization for extracting a reduced subset from the exhaustive candidate set of regressors, and then using AIC or SIC on the reduced subset to obtain the best subset AR model. The result is a substantially reduced domain of exhaustive search for the computation of the best subset AR model.
