Similar Documents
A total of 20 similar documents were found (search time: 812 ms).
1.
Existing Bayesian model selection procedures require the specification of prior distributions on the parameters appearing in every model in the selection set. In practice, this requirement limits the application of Bayesian model selection methodology. To overcome this limitation, we propose a new approach towards Bayesian model selection that uses classical test statistics to compute Bayes factors between possible models. In several test cases, our approach produces results that are similar to previously proposed Bayesian model selection and model averaging techniques in which prior distributions were carefully chosen. In addition to eliminating the requirement to specify complicated prior distributions, this method offers important computational and algorithmic advantages over existing simulation-based methods. Because it is easy to evaluate the operating characteristics of this procedure for a given sample size and specified number of covariates, our method facilitates the selection of hyperparameter values through prior-predictive simulation.
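The idea of driving Bayes factors with classical statistics can be illustrated with a BIC-type approximation. The sketch below is a simplified, hypothetical stand-in rather than the paper's construction: it converts the residual sums of squares of two nested Gaussian linear models into an approximate Bayes factor, and `bic_linear` and `approx_bayes_factor` are illustrative helper names.

```python
# Hypothetical sketch: a BIC-type approximation to the Bayes factor between two
# nested Gaussian linear models, computed from classical goodness-of-fit
# quantities rather than explicit parameter priors.  It illustrates the general
# idea of driving Bayesian model comparison with classical statistics; it is
# NOT the specific construction of the paper.
import numpy as np

def bic_linear(y, X):
    """BIC of a Gaussian linear model y ~ X (X includes the intercept column)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + p * np.log(n)

def approx_bayes_factor(y, X_full, X_reduced):
    """exp{-(BIC_full - BIC_reduced)/2}: values above 1 favour the full model."""
    return np.exp(-(bic_linear(y, X_full) - bic_linear(y, X_reduced)) / 2.0)

rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)            # x2 is inactive
X_full = np.column_stack([np.ones(n), x1, x2])
X_reduced = np.column_stack([np.ones(n), x1])
print(approx_bayes_factor(y, X_full, X_reduced))   # typically < 1: reduced model preferred
```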

2.
A procedure is developed for the identification of autoregressive models for stationary invertible multivariate Gaussian time series. Model selection is based on either the AIC information criterion or on a statistic called CVR, cross-validatory residual sum of squares. An example is given to show that the forecasts generated by these models compare favorably with those generated by other common time series modeling techniques.
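As a minimal illustration of criterion-based order selection, the sketch below fits univariate AR(p) models by ordinary least squares and picks the order that minimizes AIC. The paper's procedure handles multivariate series and also uses the cross-validatory CVR statistic, neither of which is reproduced here; `fit_ar_aic` is an illustrative helper.

```python
# Illustrative sketch: selecting the order of a univariate AR model by AIC,
# fitting each candidate order by ordinary least squares.  The paper works with
# multivariate series and also considers a cross-validatory criterion (CVR);
# this toy example only shows the AIC step.
import numpy as np

def fit_ar_aic(y, p):
    """Fit AR(p) by OLS and return its AIC (Gaussian likelihood, up to constants)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    Y = y[p:]
    X = np.column_stack([np.ones(n - p)] + [y[p - j:n - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = np.sum((Y - X @ beta) ** 2)
    n_eff = n - p
    return n_eff * np.log(rss / n_eff) + 2 * (p + 1)

rng = np.random.default_rng(1)
# Simulate an AR(2) series: y_t = 0.5 y_{t-1} - 0.3 y_{t-2} + e_t
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
aics = {p: fit_ar_aic(y, p) for p in range(1, 7)}
print(min(aics, key=aics.get))   # order with the smallest AIC, usually 2
```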

3.
Stochastic volatility models are widely used in empirical finance, for example in option pricing and risk management. Recent advances in Markov chain Monte Carlo (MCMC) techniques have made it possible to fit a wide range of stochastic volatility models of increasing complexity within a Bayesian framework. In this article, we propose a new Bayesian model selection procedure based on the Bayes factor and a classical thermodynamic integration technique, path sampling, to select an appropriate stochastic volatility model. The performance of the developed procedure is illustrated with an application to the daily pound/dollar exchange rate data set.
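Path sampling (thermodynamic integration) estimates a log marginal likelihood, and hence a log Bayes factor, by integrating the expected log-likelihood over a path of "power posteriors". The sketch below shows the identity on a conjugate normal model, where each power posterior can be sampled exactly and the result can be checked against the closed-form marginal; for stochastic volatility models the power posteriors would instead be sampled by MCMC. All parameter values are illustrative.

```python
# Minimal sketch of path sampling (thermodynamic integration) for a log
# marginal likelihood, using a conjugate normal-normal model so that every
# power posterior can be sampled exactly.  For stochastic volatility models the
# same identity applies, but each power posterior is sampled by MCMC.
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(2)
sigma, mu0, tau0 = 1.0, 0.0, 2.0            # known data sd; prior mean and sd for theta
y = rng.normal(loc=0.7, scale=sigma, size=50)
n, ybar = len(y), y.mean()

def loglik(thetas):
    """Log-likelihood of the data at each value in `thetas`."""
    return norm.logpdf(y[:, None], loc=thetas, scale=sigma).sum(axis=0)

# Path-sampling identity:  log m(y) = \int_0^1 E_{theta|y,t}[ log p(y|theta) ] dt,
# where p(theta|y,t) is proportional to p(y|theta)^t p(theta).
ts = np.linspace(0.0, 1.0, 21)
expectations = []
for t in ts:
    prec = t * n / sigma**2 + 1.0 / tau0**2              # power-posterior precision
    mean = (t * n * ybar / sigma**2 + mu0 / tau0**2) / prec
    draws = rng.normal(mean, np.sqrt(1.0 / prec), size=2000)
    expectations.append(loglik(draws).mean())
expectations = np.array(expectations)
log_m_path = np.sum(0.5 * (expectations[1:] + expectations[:-1]) * np.diff(ts))

# Exact log marginal likelihood for comparison: y ~ N(mu0 * 1, sigma^2 I + tau0^2 J).
cov = sigma**2 * np.eye(n) + tau0**2 * np.ones((n, n))
log_m_exact = multivariate_normal(mean=np.full(n, mu0), cov=cov).logpdf(y)
print(log_m_path, log_m_exact)   # the two values should agree closely
```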

4.
Feature selection (FS) is one of the most powerful techniques for coping with the curse of dimensionality. In this study, a new filter approach to feature selection based on distance correlation (DCFS, for short) is presented, which retains the model-free advantage and requires no pre-specified parameters. The method consists of two steps: a hard step (forward selection) and a soft step (backward selection). In the hard step, two types of association, between a single feature and the classes and between a feature group and the classes, are used to pick out the features most relevant to the target classes. Because of the strict screening condition in this first step, some useful features may be removed. Therefore, in the soft step, a feature-relationship gain (akin to a feature score) based on distance correlation is introduced, which involves five kinds of associations. We sort the feature gain values and run the backward selection procedure until the error stops declining. Simulation results show that our method is competitive on several data sets with representative feature selection methods based on several classification models.
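The building block of the procedure is the sample distance correlation. The sketch below computes the (biased) sample distance correlation from double-centred distance matrices and uses it for a simple marginal ranking of features; the full two-step DCFS procedure (group associations, hard forward step and soft backward step) is not reproduced, and the helper names are illustrative.

```python
# Sketch of the building block of a distance-correlation filter: the sample
# distance correlation between a feature (or feature group) and the class
# labels, used here only for a simple marginal ranking of features.
import numpy as np

def _centered_dist(a):
    """Double-centred Euclidean distance matrix of a 2-D array (n x d)."""
    d = np.sqrt(((a[:, None, :] - a[None, :, :]) ** 2).sum(-1))
    return d - d.mean(0) - d.mean(1)[:, None] + d.mean()

def distance_correlation(x, y):
    """Biased sample distance correlation between x (n x p) and y (n x q)."""
    A, B = _centered_dist(x), _centered_dist(y)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return 0.0 if denom == 0 else np.sqrt(max(dcov2, 0.0) / denom)

def rank_features(X, y, k):
    """Rank features by marginal distance correlation with y and keep the top k."""
    scores = [distance_correlation(X[:, [j]], y.reshape(-1, 1)) for j in range(X.shape[1])]
    return list(np.argsort(scores)[::-1][:k])

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] ** 2 + 0.3 * rng.normal(size=200) > 0).astype(float)
print(rank_features(X, y, 3))   # feature 0 (and often 3) should be selected
```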

5.
The sample coordination problem involves maximizing or minimizing the overlap of sampling units in different or repeated surveys. Several optimal techniques using transportation theory, controlled rounding, and controlled selection have been suggested in the literature to solve the sample coordination problem. In this article, using multiple objective programming, we propose a method for sample coordination that facilitates variance estimation using the Horvitz–Thompson estimator. The proposed procedure can be applied to any two sample surveys having identical universe and stratification. Some examples are discussed to demonstrate the utility of the proposed procedure.

6.
In this paper, we focus on the problem of factor screening in nonregular two-level designs through gradually reducing the number of possible sets of active factors. We are particularly concerned with situations in which three or four factors are active. Our proposed method works by examining fits of projection models, with variable selection techniques used to reduce the number of terms. To examine the reliability of the method in combination with such techniques, a panel of models with three or four active factors and data generated from the 12-run and the 20-run Plackett–Burman (PB) designs is used. The dependence of the procedure on the amount of noise, the number of active factors, and the number of experimental factors is also investigated. For designs with few runs, such as the 12-run PB design, variable selection should be done with care, and default procedures in statistical software may not be reliable; we suggest improvements to them. A real example is included to show how the proposed factor screening can be carried out in practice.
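The projection idea can be illustrated as follows: enumerate all three-factor projections of a 12-run Plackett–Burman design, fit the main-effects model of each projection, and rank projections by residual sum of squares. The sketch below is only a simplified stand-in for the proposed screening (it omits the variable selection step within each projection), and the simulated effect sizes are arbitrary.

```python
# Illustration of the projection idea behind the screening procedure: build the
# 12-run Plackett-Burman design from its standard cyclic generator, enumerate
# all 3-factor projections, fit the main-effects model of each projection and
# rank projections by residual sum of squares.  The variable selection step
# applied within each projection in the paper is omitted here.
import itertools
import numpy as np

def pb12():
    """12-run Plackett-Burman design in -1/+1 coding (11 factors)."""
    gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
    rows = [np.roll(gen, i) for i in range(11)] + [-np.ones(11, dtype=int)]
    return np.array(rows)

D = pb12()
rng = np.random.default_rng(4)
# Data generated with factors 0, 3 and 6 active (plus one interaction).
y = (3 * D[:, 0] - 2 * D[:, 3] + 2.5 * D[:, 6]
     + 1.5 * D[:, 0] * D[:, 3] + 0.5 * rng.normal(size=12))

def rss_main_effects(cols):
    X = np.column_stack([np.ones(12), D[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

ranking = sorted(itertools.combinations(range(11), 3), key=rss_main_effects)
print(ranking[:3])   # the projection (0, 3, 6) should be at or near the top
```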

7.
In this article, we study the problem of selecting the best population from among several exponential populations based on interval-censored samples using a Bayesian approach. A Bayes selection procedure and a curtailed Bayes selection procedure are derived, and we show that the two procedures are equivalent. A numerical example is provided to illustrate their application, and a Monte Carlo simulation is used to study their performance. The simulation results demonstrate that the curtailed Bayes selection procedure performs well because it can substantially reduce the duration of the life-test experiment.

8.
This study compares the probabilities of making a correct selection when the means procedure (M), the medians procedure (D), and the rank-sum procedure (S) are used to select the normal population with the largest mean under heterogeneity of variance. The comparison is conducted by Monte Carlo simulation for 3, 4, and 5 normal populations, with equal sample sizes taken from each population. The population means and standard deviations are assumed to be equally spaced. Two types of heterogeneity of variance are considered: (1) larger means associated with larger variances, and (2) larger means associated with smaller variances.
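A minimal version of such a simulation is sketched below: it estimates, for three normal populations with equal sample sizes, the probability that the means, medians, and joint rank-sum procedures each select the population with the largest mean when larger means are paired with larger variances. The spacings, sample size, and number of replications are illustrative, not those of the study.

```python
# Sketch of the Monte Carlo comparison: estimate the probability that the means
# procedure (M), the medians procedure (D) and the joint rank-sum procedure (S)
# each pick the normal population with the largest mean, when larger means are
# paired with larger variances.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(5)
k, n, reps = 3, 20, 5000
mus = np.array([0.0, 0.5, 1.0])        # equally spaced means; population 2 is "best"
sds = np.array([1.0, 1.5, 2.0])        # larger means paired with larger variances

correct = np.zeros(3)                   # counts of correct selections for M, D, S
for _ in range(reps):
    x = rng.normal(mus, sds, size=(n, k))        # column j ~ N(mus[j], sds[j]^2)
    m_pick = np.argmax(x.mean(axis=0))
    d_pick = np.argmax(np.median(x, axis=0))
    ranks = rankdata(x.ravel()).reshape(n, k)    # joint ranking of all n*k observations
    s_pick = np.argmax(ranks.sum(axis=0))
    correct += (np.array([m_pick, d_pick, s_pick]) == k - 1)

print(dict(zip(["means", "medians", "rank-sum"], correct / reps)))
```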

9.
Subset selection procedures based on ranks have been investigated by a number of authors previously. Their methods are based on ranking the samples from all the populations jointly. However, as was pointed out by Rizvi and Woodworth (1970), the procedures they proposed cannot control the probability of a correct selection over the entire parameter space. In this paper, we propose a subset selection procedure based on pairwise rather than joint ranking of the samples. It is shown that this procedure controls the probability of a correct selection over the entire parameter space. It is also shown that the Pitman efficiency of this nonparametric procedure relative to the multivariate t procedure of Gupta (1956, 1965) is the same as the Pitman efficiency of the Mann-Whitney-Wilcoxon test relative to the t-test.
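A heavily hedged sketch of the pairwise-ranking idea is given below: population i is retained in the subset if its smallest pairwise Mann–Whitney statistic against the other samples exceeds a cut-off. The cut-off used here is purely illustrative; in the actual procedure the constant is chosen so that the probability of a correct selection is at least P* over the entire parameter space.

```python
# Hedged sketch of a subset-selection rule built from pairwise (rather than
# joint) rankings.  The cut-off c is purely illustrative; the procedure in the
# paper determines its constant so that P(correct selection) >= P* holds over
# the whole parameter space.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(6)
k, n = 4, 15
samples = [rng.normal(loc=mu, size=n) for mu in (0.0, 0.0, 0.3, 0.8)]

c = 60.0   # illustrative threshold (the maximum possible statistic is n*n = 225)
subset = []
for i in range(k):
    u_min = min(mannwhitneyu(samples[i], samples[j]).statistic
                for j in range(k) if j != i)
    if u_min >= c:
        subset.append(i)
print(subset)   # populations with clearly smaller locations tend to be excluded
```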

10.
A subset selection procedure is developed for selecting a subset containing the multinomial population that has the highest value of a certain linear combination of the multinomial cell probabilities; such a population is called the 'best'. The multivariate normal large-sample approximation to the multinomial distribution is used to derive expressions for the probability of a correct selection and for the threshold constant involved in the procedure. The procedure guarantees that the probability of a correct selection is at least a pre-assigned level. The proposed procedure extends Gupta and Sobel's [14] selection procedure for binomials and Bakir's [2] restrictive selection procedure for multinomials. One illustration of the procedure concerns population income mobility in four countries: Peru, Russia, South Africa, and the USA. The analysis indicates that Russia and Peru fall in the selected subset containing the best population with respect to income mobility from poverty to a higher-income status. The procedure is also applied to data on the grade distribution of students in a certain freshman class.

11.
The main problem with localized discriminant techniques is the curse of dimensionality, which seems to restrict their use to settings with few variables. However, if localization is combined with dimension reduction, the initial number of variables is less restricted. In particular, it is shown that localization yields powerful classifiers even in higher dimensions when it is combined with locally adaptive selection of predictors. A robust localized logistic regression (LLR) method is developed for which all tuning parameters are chosen data-adaptively. In an extended simulation study, we evaluate the potential of the proposed procedure for various types of data and compare it to other classification procedures. In addition, we demonstrate that the automatic choice of localization, predictor selection, and penalty parameters based on cross-validation works well. Finally, the method is applied to real data sets and its real-world performance is compared to that of alternative procedures.
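The localization idea can be sketched as follows: to classify a query point, fit a logistic regression whose observations are kernel-weighted by their distance to the query. The code below assumes a Gaussian kernel and a fixed bandwidth; the paper's robust LLR additionally performs local predictor selection and chooses all tuning parameters by cross-validation, none of which is reproduced.

```python
# Sketch of the localization idea: classify a query point with a logistic
# regression whose observations are kernel-weighted by their distance to the
# query.  Local predictor selection, robustness tuning and data-adaptive choice
# of the bandwidth and penalty are omitted.
import numpy as np
from sklearn.linear_model import LogisticRegression

def local_logistic_predict(X, y, x0, bandwidth=1.0):
    """Predict P(Y=1 | x0) from a locally weighted logistic fit around x0."""
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2.0 * bandwidth ** 2))  # Gaussian kernel weights
    clf = LogisticRegression()
    clf.fit(X, y, sample_weight=w)
    return clf.predict_proba(x0.reshape(1, -1))[0, 1]

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 2))
# Non-linear decision boundary: the class depends on the radius.
y = (np.sum(X ** 2, axis=1) + 0.3 * rng.normal(size=300) > 2.0).astype(int)
print(local_logistic_predict(X, y, np.array([0.0, 0.0])))   # near 0: inside the circle
print(local_logistic_predict(X, y, np.array([2.0, 0.0])))   # near 1: outside the circle
```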

12.
Non-parametric estimation and bootstrap techniques play an important role in many areas of statistics. In the point process context, kernel intensity estimation has been limited to exploratory analysis because of its inconsistency, and some consistent alternatives have been proposed. Furthermore, most authors have considered kernel intensity estimators with scalar bandwidths, which can be very restrictive. This work focuses on a consistent kernel intensity estimator with an unconstrained bandwidth matrix. We propose a smooth bootstrap for inhomogeneous spatial point processes. The consistency of the bootstrap mean integrated squared error (MISE) as an estimator of the MISE of the consistent kernel intensity estimator proves the validity of the resampling procedure. Finally, we propose a plug-in bandwidth selection procedure based on the bootstrap MISE and compare its performance with that of several methods currently in use, through both a simulation study and an application to the spatial pattern of wildfires registered in Galicia (Spain) during 2006.
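The basic estimator can be sketched directly: a Gaussian kernel intensity estimate with a full (unconstrained) bandwidth matrix H, lambda_hat(x) = sum_i K_H(x - x_i), evaluated on a grid over the observation window. The edge correction, the consistency modification, the smooth bootstrap, and the bootstrap-MISE bandwidth selector developed in the paper are not reproduced, and the bandwidth matrix below is arbitrary.

```python
# Sketch of a Gaussian kernel intensity estimator with a full (unconstrained)
# bandwidth matrix H for a planar point pattern:
#   lambda_hat(x) = sum_i K_H(x - x_i).
# Edge correction, the consistency modification, the smooth bootstrap and the
# bootstrap-MISE bandwidth selector are omitted.
import numpy as np
from scipy.stats import multivariate_normal

def kernel_intensity(points, grid, H):
    """Evaluate the kernel intensity estimate at each row of `grid`."""
    est = np.zeros(len(grid))
    for p in points:
        est += multivariate_normal(mean=p, cov=H).pdf(grid)
    return est

rng = np.random.default_rng(8)
# Inhomogeneous pattern on [0,1]^2: more points towards the right edge.
n = rng.poisson(200)
pts = rng.uniform(size=(n, 2))
pts = pts[rng.uniform(size=n) < pts[:, 0]]        # thinning with retention probability x

H = np.array([[0.02, 0.01],
              [0.01, 0.03]])                       # unconstrained bandwidth matrix
xx, yy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([xx.ravel(), yy.ravel()])
lam = kernel_intensity(pts, grid, H).reshape(50, 50)
print(lam[:, -1].mean(), lam[:, 0].mean())   # higher estimated intensity near x = 1 than near x = 0
```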

13.
A general threshold stress hybrid hazard model for lifetime data
In this paper we propose a hybrid hazard regression model with threshold stress, which includes the proportional hazards and the accelerated failure time models as particular cases. Lifetimes are assumed to follow a generalized gamma distribution, and an inverse power law model with a threshold stress is considered. For parameter estimation we develop a sampling-based posterior inference procedure based on Markov chain Monte Carlo techniques, assuming proper but vague priors for the parameters of interest. A simulation study investigates the frequentist properties of the proposed estimators obtained under these vague priors. Some discussion of model selection criteria is also given. The methodology is illustrated on simulated and real lifetime data sets.

14.
There are several procedures for fitting generalized additive models, i.e. regression models for an exponential family response in which the influence of each single covariate is assumed to have an unknown, potentially non-linear shape. Simulated data are used to compare a smoothing parameter optimization approach for selecting smoothness and covariates, a stepwise approach, a mixed model approach, and a procedure based on boosting techniques. In particular, it is investigated how the performance of the procedures depends on the amount of information, the type of response, the total number of covariates, the number of influential covariates, and the extent of non-linearity. Measures for comparison are prediction performance, identification of influential covariates, and smoothness of the fitted functions. One result is that the mixed model approach returns sparse fits with frequently over-smoothed functions, while the boosting approach yields less smooth functions and less strict variable selection; the other approaches lie in between with respect to these measures. The boosting procedure performs very well when little information is available and/or when a large number of covariates is to be investigated. It is somewhat surprising that in scenarios with little information, fitting a linear model, even with stepwise variable selection, has little advantage over fitting an additive model when the true underlying structure is linear. In cases with more information the prediction performance of all procedures is very similar. So, in difficult data situations the boosting approach can be recommended; in others the procedure can be chosen according to the aim of the analysis.

15.
In this paper, we focus on variable selection for the semiparametric regression model with longitudinal data when some covariates are measured with error. A new bias-corrected variable selection procedure is proposed based on a combination of quadratic inference functions and shrinkage estimation. With appropriate selection of the tuning parameters, we establish the consistency and asymptotic normality of the resulting estimators. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed variable selection procedure, and we further illustrate the procedure with an application.

16.
In this paper we are concerned with the problems of variable selection and estimation in double generalized linear models, in which both the mean and the dispersion are allowed to depend on explanatory variables. We propose a maximum penalized pseudo-likelihood method for the setting in which the number of parameters diverges with the sample size. With appropriate selection of the tuning parameters, the consistency of the variable selection procedure and the asymptotic properties of the resulting estimators are established. We also carry out simulation studies and a real data analysis to assess the finite sample performance of the proposed procedure, showing that it works satisfactorily.

17.
We consider the problem of variable selection for a class of varying coefficient models with instrumental variables. We focus on the case in which some covariates are endogenous and some auxiliary instrumental variables are available. An instrumental-variable-based variable selection procedure is proposed using modified smooth-threshold estimating equations (SEEs). The proposed procedure automatically eliminates irrelevant covariates by setting the corresponding coefficient functions to zero, and simultaneously estimates the nonzero regression coefficients by solving the smooth-threshold estimating equations. The procedure avoids a convex optimization problem and is flexible and easy to implement. Simulation studies are carried out to assess the performance of the proposed variable selection method.

18.
In this paper, we consider the problem of variable selection for the partially varying coefficient single-index model, and present a regularized variable selection procedure that combines basis function approximations with the smoothly clipped absolute deviation (SCAD) penalty. The proposed procedure simultaneously selects significant variables in the parametric single-index components and in the nonparametric coefficient function components. With appropriate selection of the tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. The finite sample performance of the proposed method is illustrated by a simulation study and a real data analysis.
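The SCAD penalty has a standard closed form (with a = 3.7 the conventional choice of Fan and Li). The sketch below writes out the penalty and its derivative, the quantity that drives the usual local-approximation algorithms; it is a generic statement of the penalty, not the paper's full estimation procedure.

```python
# The smoothly clipped absolute deviation (SCAD) penalty, written out together
# with its derivative.  This is the generic penalty only; the basis function
# approximation and the full regularized estimation procedure are not shown.
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty p_lambda(|theta|), evaluated elementwise."""
    t = np.abs(np.asarray(theta, dtype=float))
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    out = np.empty_like(t)
    out[small] = lam * t[small]
    out[mid] = -(t[mid] ** 2 - 2 * a * lam * t[mid] + lam ** 2) / (2 * (a - 1))
    out[~small & ~mid] = (a + 1) * lam ** 2 / 2
    return out

def scad_derivative(theta, lam, a=3.7):
    """p'_lambda(t) = lam * { I(t<=lam) + (a*lam - t)_+ / ((a-1)*lam) * I(t>lam) }."""
    t = np.abs(np.asarray(theta, dtype=float))
    return lam * ((t <= lam) + np.maximum(a * lam - t, 0.0) / ((a - 1) * lam) * (t > lam))

theta = np.linspace(0, 5, 6)
print(scad_penalty(theta, lam=1.0))
print(scad_derivative(theta, lam=1.0))   # derivative is zero beyond a*lam: large effects are not shrunk
```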

19.
We consider the problem of variable selection in high-dimensional partially linear models with longitudinal data. A variable selection procedure is proposed based on the smooth-threshold generalized estimating equation (SGEE). The proposed procedure automatically eliminates inactive predictors by setting the corresponding parameters to zero, and simultaneously estimates the nonzero regression coefficients by solving the SGEE. We establish the asymptotic properties in a high-dimensional framework where the number of covariates p_n increases with the number of clusters n. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed variable selection procedure.

20.
In this article we present a robust and efficient variable selection procedure based on modal regression for varying-coefficient models with longitudinal data. The new method combines basis function approximations with a group version of the adaptive LASSO penalty, which selects significant variables and estimates the non-zero smooth coefficient functions simultaneously. Under suitable conditions, we establish consistency in variable selection and the oracle property in estimation. A simulation study and two real data examples are undertaken to assess the finite sample performance of the proposed variable selection procedure.
