61.
The Bayesian CART (classification and regression tree) approach proposed by Chipman, George and McCulloch (1998) entails putting a prior distribution on the set of all CART models and then using stochastic search to select a model. The main thrust of this paper is to propose a new class of hierarchical priors which enhance the potential of this Bayesian approach. These priors indicate a preference for smooth local mean structure, resulting in tree models which shrink predictions from adjacent terminal nodes towards each other. Past methods for tree shrinkage have searched for trees without shrinking, and applied shrinkage to the identified tree only after the search. By using hierarchical priors in the stochastic search, the proposed method searches for shrunk trees that fit well and improves the tree through shrinkage of predictions.
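The shrinkage idea above can be sketched with a toy example: under a hierarchical prior, each terminal-node mean is pulled toward a common value, with the prior precision acting as the shrinkage weight (a minimal illustration with made-up data and a made-up constant `lam`, not the authors' stochastic search over trees):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: three "terminal nodes" with 20 noisy observations each.
leaves = [rng.normal(1.0, 1.0, 20), rng.normal(1.2, 1.0, 20), rng.normal(5.0, 1.0, 20)]

# Hierarchical-prior shrinkage: each leaf mean is pulled toward the grand mean,
# with lam playing the role of the prior precision (larger lam = more shrinkage).
lam = 5.0
grand = np.mean(np.concatenate(leaves))
shrunk = [(len(y) * y.mean() + lam * grand) / (len(y) + lam) for y in leaves]
print([round(m, 3) for m in shrunk])
```

Each shrunk leaf mean lies between the raw leaf mean and the grand mean, which is exactly the "adjacent nodes pulled towards each other" behaviour the priors encode.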
62.
In this paper, some generalized ridge estimators are defined on a shrinkage foundation. Under the suspicion that certain subspace restrictions may hold, we present estimators of the regression coefficients that combine the ideas of the preliminary-test estimator and the Stein-rule estimator with the ridge regression methodology for normal models. Their exact risk expressions and biases are derived, and the regions of optimality of the estimators are determined exactly, along with some numerical analysis. The ridge parameter is determined by several different methods.
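The ordinary ridge estimator underlying these combinations has the closed form (X'X + kI)^{-1} X'y; a minimal numpy sketch on simulated collinear data (the design, coefficients, and ridge constant k here are all hypothetical choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nearly collinear design: four columns built from one latent factor.
n, p = 100, 4
z = rng.normal(size=(n, 1))
X = z + 0.05 * rng.normal(size=(n, p))
beta = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta + rng.normal(size=n)

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y; k = 0 gives ordinary least squares."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

ols = ridge(X, y, 0.0)
rdg = ridge(X, y, 5.0)

# Ridge shrinks the coefficient vector toward zero relative to OLS.
print(round(np.linalg.norm(ols), 3), round(np.linalg.norm(rdg), 3))
```

The norm of the ridge solution is strictly smaller than the OLS norm for any k > 0, which is the shrinkage the preliminary-test and Stein-rule variants build on.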
63.
In this paper we carefully examine three propositions asserting that certain simulation studies are biased in favour of ridge regression and find them to be ill-founded.
64.
This paper considers estimation of an unknown distribution parameter in situations where we believe that the parameter belongs to a finite interval. We propose for such situations an interval shrinkage approach which combines in a coherent way an unbiased conventional estimator and non-sample information about the range of plausible parameter values. The approach is based on an infeasible interval shrinkage estimator which uniformly dominates the underlying conventional estimator with respect to the mean square error criterion. This infeasible estimator allows us to obtain useful feasible counterparts. The properties of these feasible interval shrinkage estimators are illustrated both in a simulation study and in empirical examples.
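One simple feasible rule in this spirit is a convex combination of the unbiased estimator and the interval midpoint, with the weight driven by the estimator's variance relative to the interval width. The sketch below is illustrative only (the weight rule is an assumption, not the authors' exact estimator) and shows the MSE gain when the true parameter lies in the interval:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: theta is known to lie in [a, b]; the sample mean is the
# unbiased conventional estimator.
a, b, theta, sigma, n = 0.0, 1.0, 0.3, 1.0, 25
mid, half = (a + b) / 2, (b - a) / 2

# Shrink toward the interval midpoint, more strongly when the interval is tight
# relative to the estimator's sampling variance.
var = sigma**2 / n
w = half**2 / (half**2 + var)

reps = 20000
xbar = theta + sigma / np.sqrt(n) * rng.normal(size=reps)
shrunk = w * xbar + (1 - w) * mid

mse_plain = np.mean((xbar - theta) ** 2)
mse_shrunk = np.mean((shrunk - theta) ** 2)
print(round(mse_plain, 4), round(mse_shrunk, 4))
```

The small bias toward the midpoint is more than offset by the variance reduction whenever theta is inside the interval, which is the dominance phenomenon the paper formalizes.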
65.
Over the past decades, the number of variables explaining observations in practical applications has grown steadily. This has led to heavy computational tasks, despite the wide use of provisional variable-selection methods in data processing. Consequently, further methodological techniques have appeared to reduce the number of explanatory variables without losing much of the information. Among these techniques, two distinct approaches stand out: ‘shrinkage regression’ and ‘sufficient dimension reduction’. Surprisingly, there has been little communication or comparison between these two methodological categories, and it is not clear when each of the two approaches is appropriate. In this paper, we fill some of this gap by first reviewing each category in brief, paying special attention to the most commonly used methods in each. We then compare commonly used methods from both categories based on their accuracy, computation time, and ability to select effective variables. A simulation study of the performance of the methods in each category is conducted as well. The selected methods are also tested on two sets of real data, which allows us to recommend conditions under which each approach is more appropriate for high-dimensional data.
66.
Bayesian shrinkage methods have generated a lot of interest in recent years, especially in the context of high‐dimensional linear regression. In recent work, a Bayesian shrinkage approach using generalized double Pareto priors has been proposed. Several useful properties of this approach, including the derivation of a tractable three‐block Gibbs sampler to sample from the resulting posterior density, have been established. We show that the Markov operator corresponding to this three‐block Gibbs sampler is not Hilbert–Schmidt. We propose a simpler two‐block Gibbs sampler and show that the corresponding Markov operator is trace class (and hence Hilbert–Schmidt). Establishing the trace class property for the proposed two‐block Gibbs sampler has several useful consequences. Firstly, it implies that the corresponding Markov chain is geometrically ergodic, thereby implying the existence of a Markov chain central limit theorem, which in turn enables computation of asymptotic standard errors for Markov chain‐based estimates of posterior quantities. Secondly, because the proposed Gibbs sampler uses two blocks, standard recipes in the literature can be used to construct a sandwich Markov chain (by inserting an appropriate extra step) to gain further efficiency and to achieve faster convergence. The trace class property for the two‐block sampler implies that the corresponding sandwich Markov chain is also trace class and thereby geometrically ergodic. Finally, it also guarantees that all eigenvalues of the sandwich chain are dominated by the corresponding eigenvalues of the Gibbs sampling chain (with at least one strict domination). Our results demonstrate that a minor change in the structure of a Markov chain can lead to fundamental changes in its theoretical properties. We illustrate the improvement in efficiency resulting from our proposed Markov chains using simulated and real examples.
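The two-block structure can be illustrated on a toy conjugate linear model: draw the full coefficient vector in one block and the error variance in the other. This is a sketch of the blocking idea only, on an assumed normal prior; the paper's sampler targets the generalized double Pareto posterior:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data from a linear model with three coefficients.
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

tau2 = 10.0          # assumed normal prior variance for beta
a0, b0 = 2.0, 2.0    # assumed inverse-gamma prior for sigma^2

beta, sigma2, draws = np.zeros(p), 1.0, []
for it in range(2000):
    # Block 1: beta | sigma2, y  ~  N(m, V)  (the whole vector in one draw).
    V = np.linalg.inv(X.T @ X / sigma2 + np.eye(p) / tau2)
    m = V @ (X.T @ y) / sigma2
    beta = rng.multivariate_normal(m, V)
    # Block 2: sigma2 | beta, y  ~  Inverse-Gamma.
    resid = y - X @ beta
    sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + resid @ resid / 2))
    if it >= 500:    # discard burn-in
        draws.append(beta)

post_mean = np.mean(draws, axis=0)
print(np.round(post_mean, 2))
```

Because each iteration has exactly two conditional draws, the sandwich-chain recipes mentioned in the abstract (an extra step inserted between the two blocks) apply directly to samplers of this shape.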
67.
We can use wavelet shrinkage to estimate a possibly multivariate regression function g under the general regression setup y = g + ε. We propose an enhanced wavelet-based denoising methodology, Bayesian adaptive multiresolution shrinkage, which combines an effective Bayesian shrinkage rule with a semi-supervised learning mechanism. The Bayesian shrinkage rule is refined by the semi-supervised learning step, in which the neighboring structure of a wavelet coefficient is exploited and an appropriate decision function is derived. According to this decision function, each wavelet coefficient follows one of two prespecified Bayesian rules obtained using different parameter values. The decision for a wavelet coefficient thus depends not only on its magnitude, but also on the neighboring structure in which the coefficient is located. We discuss the theoretical properties of the suggested method and provide recommended parameter settings. Through extensive experimentation, we show that the proposed method is often superior to several existing wavelet denoising methods.
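The baseline wavelet-shrinkage idea that such rules refine can be sketched with a one-level Haar transform and universal soft thresholding (a simplified, numpy-only illustration; the proposed rule additionally uses neighboring coefficients and a decision function):

```python
import numpy as np

rng = np.random.default_rng(4)

# Smooth signal observed with additive noise, y = g + eps.
n = 256
t = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * 3 * t)
noisy = signal + 0.3 * rng.normal(size=n)

# One-level orthonormal Haar transform.
avg = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)   # approximation coefficients
det = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)   # detail coefficients

# Soft-threshold the detail coefficients at the universal threshold
# sigma * sqrt(2 log n) of Donoho and Johnstone.
thr = 0.3 * np.sqrt(2 * np.log(n))
det_s = np.sign(det) * np.maximum(np.abs(det) - thr, 0.0)

# Inverse Haar transform.
denoised = np.empty(n)
denoised[0::2] = (avg + det_s) / np.sqrt(2)
denoised[1::2] = (avg - det_s) / np.sqrt(2)

mse_noisy = np.mean((noisy - signal) ** 2)
mse_den = np.mean((denoised - signal) ** 2)
print(round(mse_noisy, 4), round(mse_den, 4))
```

Here every coefficient is thresholded on its magnitude alone; the abstract's point is that also looking at a coefficient's neighbours lets the rule keep small coefficients that sit inside a genuine feature.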
68.
Maasoumi (1978) proposed a Stein-like estimator for simultaneous equations and showed that his Stein shrinkage estimator has bounded finite sample risk, unlike the three-stage least squares estimator. We revisit his proposal by investigating Stein-like shrinkage in the context of two-stage least squares (2SLS) estimation of a structural parameter. Our estimator follows Maasoumi (1978) in taking a weighted average of the 2SLS and ordinary least squares (OLS) estimators, with the weight depending inversely on the Hausman (1978) statistic for exogeneity. Using a local-to-exogenous asymptotic theory, we derive the asymptotic distribution of the Stein estimator and calculate its asymptotic risk. We find that if the number of endogenous variables exceeds 2, then the shrinkage estimator has strictly smaller risk than the 2SLS estimator, extending the classic result of James and Stein (1961). In a simple simulation experiment, we show that the shrinkage estimator has substantially reduced finite sample median squared error relative to the standard 2SLS estimator.
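The weighting scheme can be sketched in a small IV simulation: compute OLS and 2SLS, then average them with a weight that decays with a Hausman-type statistic. The data-generating process, variance estimates, and shrinkage constant below are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical IV simulation: x is mildly endogenous, z is a valid instrument.
n = 500
z = rng.normal(size=(n, 1))
u = rng.normal(size=n)
x = z[:, 0] + 0.3 * u + rng.normal(size=n)   # endogeneity via the shared error u
y = x + u                                    # true structural coefficient is 1
X = x.reshape(-1, 1)

# OLS and 2SLS for a single endogenous regressor with a single instrument.
b_ols = np.linalg.lstsq(X, y, rcond=None)[0][0]
Pz = z @ np.linalg.inv(z.T @ z) @ z.T        # projection onto the instrument
b_2sls = np.linalg.lstsq(Pz @ X, Pz @ y, rcond=None)[0][0]

# Stein-like combination: the weight on OLS decays with a Hausman-type statistic.
v_ols = np.var(y - x * b_ols) / (x @ x)
v_2sls = np.var(y - x * b_2sls) / (x @ Pz @ x)
hausman = (b_2sls - b_ols) ** 2 / max(v_2sls - v_ols, 1e-12)
w = min(1.0, 1.0 / hausman)                  # tau = 1 as the shrinkage constant
b_stein = w * b_ols + (1 - w) * b_2sls
print(round(b_ols, 3), round(b_2sls, 3), round(b_stein, 3))
```

When the Hausman statistic is large (strong evidence of endogeneity) the combination stays close to 2SLS; when it is small, it leans on the more efficient OLS estimate.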
69.
The presence of multicollinearity among the explanatory variables has undesirable effects on the maximum likelihood estimator (MLE). The ridge estimator (RE) is widely used to overcome this issue, and enjoys the advantage that its mean squared error (MSE) is less than that of the MLE. The inverse Gaussian regression (IGR) model is a well-known model for applications where the response variable is positively skewed. The purpose of this paper is to derive the RE for the IGR model under multicollinearity. In addition, the performance of this estimator is investigated under several methods for estimating the ridge parameter. Monte Carlo simulation results indicate that the suggested estimator performs better than the MLE in terms of MSE. Furthermore, a real chemometrics dataset is analysed, and the results demonstrate the excellent performance of the suggested estimator when multicollinearity is present in the IGR model.
70.
Identifying homogeneous subsets of predictors in classification can be challenging in the presence of high-dimensional data with highly correlated variables. We propose a new method, the cluster correlation-network support vector machine (CCNSVM), that simultaneously estimates clusters of predictors relevant for classification and the coefficients of a penalized SVM. The new CCN penalty is a function of the well-known Topological Overlap Matrix, whose entries measure the strength of connectivity between predictors. CCNSVM implements an efficient algorithm that alternates between searching for predictor clusters and optimizing a penalized SVM loss function using Majorization–Minimization tricks and a coordinate descent algorithm. Combining clustering and sparsity in a single procedure provides additional insight into the power of exploiting dimension-reduction structure in high-dimensional binary classification. Simulation studies compare the performance of our procedure with that of its competitors. A practical application of CCNSVM to DNA methylation data illustrates its good behaviour.
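The Topological Overlap Matrix on which the CCN penalty is built has a standard closed form, TOM_ij = (sum_k a_ik a_kj + a_ij) / (min(k_i, k_j) + 1 - a_ij); a small numpy sketch on a hypothetical 4-predictor adjacency matrix (the penalty construction itself is the paper's, not shown here):

```python
import numpy as np

def tom(A):
    """Unsigned Topological Overlap Matrix.

    A: symmetric adjacency matrix with zero diagonal and entries in [0, 1].
    """
    L = A @ A                              # shared-neighbour strength sum_k a_ik a_kj
    k = A.sum(axis=1)                      # connectivity of each node
    denom = np.minimum.outer(k, k) + 1.0 - A
    T = (L + A) / denom
    np.fill_diagonal(T, 1.0)               # a node fully overlaps with itself
    return T

# Hypothetical adjacency among 4 predictors (e.g. soft-thresholded correlations).
A = np.array([[0.0, 0.8, 0.6, 0.1],
              [0.8, 0.0, 0.7, 0.1],
              [0.6, 0.7, 0.0, 0.2],
              [0.1, 0.1, 0.2, 0.0]])
T = tom(A)
print(np.round(T, 3))
```

Entries of T are large when two predictors are both directly connected and share many strong common neighbours, which is exactly the notion of connectivity the CCN penalty exploits.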