By access:
  Full text (fee-based): 652
  Free: 20
  Free (domestic): 14
By discipline:
  Management: 30
  Ethnology: 2
  Demography: 3
  Collected works: 65
  Theory and methodology: 16
  General: 385
  Sociology: 13
  Statistics: 172
By year:
  2024: 2
  2023: 4
  2022: 13
  2021: 11
  2020: 20
  2019: 25
  2018: 18
  2017: 30
  2016: 15
  2015: 22
  2014: 41
  2013: 54
  2012: 40
  2011: 49
  2010: 35
  2009: 38
  2008: 32
  2007: 42
  2006: 40
  2005: 33
  2004: 18
  2003: 18
  2002: 11
  2001: 23
  2000: 14
  1999: 6
  1998: 3
  1997: 9
  1996: 4
  1995: 5
  1994: 3
  1993: 4
  1992: 1
  1990: 1
  1989: 1
  1988: 1
686 results in total (search time: 31 ms)
671.
A relatively new computational technique adopted by statisticians is independent component analysis (ICA), which is used to analyze complex multidimensional data with the objective of separating it into components that are mutually independent. Quite often, the main interest in conducting ICA is to identify a small number of significant independent components (ICs) with which to replace the original complex dimensions. For this, determining the order of the identified ICs is a prerequisite. The area is not unaddressed, but it deserves a careful revisiting; that is the subject of this paper, which introduces a new method for ordering ICs. The proposed method is based on a regression approach: it compares the magnitudes of the mixing coefficients with the regression coefficients obtained by regressing the original series on the ICs, and their compatibility determines the order.
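A minimal sketch of the ordering idea, assuming scikit-learn's FastICA: the ICs are extracted, the centred original series is regressed on them, and the ICs are ranked by the size of their regression contribution. The ranking rule used here (total squared regression weight per IC) is an illustrative stand-in for the paper's mixing-versus-regression compatibility criterion, not the authors' exact rule.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
S_true = rng.laplace(size=(500, 3))                 # non-Gaussian sources (toy data)
A_true = rng.standard_normal((5, 3))
X = S_true @ A_true.T                               # observed 5-dimensional series

ica = FastICA(n_components=3, random_state=0)
S = ica.fit_transform(X)                            # estimated ICs (n x k)
A = ica.mixing_                                     # estimated mixing matrix (p x k)

# Regress the centred original series on the ICs; because the ICs are
# (nearly) uncorrelated, these coefficients echo the mixing weights.
B, *_ = np.linalg.lstsq(S, X - X.mean(axis=0), rcond=None)   # k x p

# Illustrative ordering: rank ICs by their total squared regression weight.
importance = (B ** 2).sum(axis=1)
order = np.argsort(importance)[::-1]
print("IC order (most to least important):", order)
```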
672.
Principal fitted component (PFC) models are a class of likelihood-based inverse regression methods that yield a so-called sufficient reduction of the random p-vector of predictors X given the response Y. Assuming that a large number of the predictors carry no information about Y, we aim to obtain an estimate of the sufficient reduction that 'purges' these irrelevant predictors and thus selects the most useful ones. We devise a procedure that uses the observed significance values from the univariate fittings to yield a sparse PFC, a purged estimate of the sufficient reduction. The performance of the method is compared with that of penalized forward linear regression models for variable selection in high-dimensional settings.
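A rough sketch of the screening step under simplifying assumptions: each predictor is regressed on a polynomial basis of the response (the PFC inverse regression), predictors with small univariate F-test p-values are retained, and a crude reduction is built from the retained predictors only. The basis, the Bonferroni-style cut-off, and the final SVD step are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, p = 200, 50
y = rng.standard_normal(n)
X = rng.standard_normal((n, p))
X[:, 0] += 2 * y                                    # only the first two predictors carry signal
X[:, 1] -= y

Fy = sm.add_constant(np.column_stack([y, y ** 2]))  # simple polynomial basis f(y)

# Univariate inverse-regression fits: p-value of each predictor's F-test.
pvals = np.array([sm.OLS(X[:, j], Fy).fit().f_pvalue for j in range(p)])
keep = np.where(pvals < 0.05 / p)[0]                # Bonferroni-style threshold (assumption)
print("retained predictors:", keep)

# Crude "purged" reduction: leading direction of the fitted inverse regression
# restricted to the retained predictors.
fitted = np.column_stack([sm.OLS(X[:, j], Fy).fit().fittedvalues for j in keep])
_, _, vt = np.linalg.svd(fitted - fitted.mean(axis=0), full_matrices=False)
reduction = (X[:, keep] - X[:, keep].mean(axis=0)) @ vt[0]
```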
673.
Traditionally, time series analysis involves building an appropriate model and using either parametric or nonparametric methods to make inferences about the model parameters. Motivated by recent developments in dimension reduction for time series, this article presents an empirical application of sufficient dimension reduction (SDR) to nonlinear time series modelling. We use the time series central subspace as a tool for SDR and estimate it using a mutual information index. In particular, to reduce the computational complexity, we propose an efficient method for estimating the minimal dimension and lag using a modified Schwarz–Bayesian criterion when either the dimension or the lag is unknown. Through simulations and real data analysis, the approach is shown to perform well in autoregression and volatility estimation.
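A hedged sketch of the selection idea: the lag of an autoregression is chosen by penalising an estimated mutual-information gain, loosely in the spirit of the modified Schwarz–Bayesian criterion mentioned above. The k-nearest-neighbour MI estimator (scikit-learn's mutual_info_regression) and the penalty constant are assumptions, not the authors' criterion.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(2)
n = 600
y = np.zeros(n)
for t in range(2, n):                       # a toy AR(2)-type series
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

max_lag = 6
scores = []
for d in range(1, max_lag + 1):
    # Lagged predictors y_{t-1}, ..., y_{t-d} aligned with the response y_t.
    Z = np.column_stack([y[max_lag - k:n - k] for k in range(1, d + 1)])
    target = y[max_lag:]
    mi = mutual_info_regression(Z, target, random_state=0).sum()
    # Illustrative SBC-like penalty; the constant 0.5 is an assumption.
    scores.append(mi - 0.5 * d * np.log(len(target)) / len(target))

best_lag = int(np.argmax(scores)) + 1
print("selected lag:", best_lag)
```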
674.
With the regions fixed in advance, data on higher-education resources are compiled and a multinomial logistic model is built to analyze the relative odds of each structural type and the optimal distribution across types. The multidimensional odds ratios from this regression are then examined to determine the direction and degree of adjustment between the discrete ordinal states. An empirical analysis of the higher-education regions of Heilongjiang Province shows that, judged by the proportions of teaching, teaching-research, research-teaching, and research institutions, the utility of higher education in the province has yet to be fully realized. The multidimensional odds ratios and the effect of the covariates on those ratios are the two parameters that play a key role in optimizing the regional structure of higher-education resources.
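A minimal sketch of the modelling step, using simulated placeholder data rather than the Heilongjiang figures: a multinomial logistic model is fitted with statsmodels' MNLogit, and the relative odds ratios exp(β) used to judge the direction of adjustment are read off the coefficients. The covariates and the category coding below are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "funding": rng.standard_normal(n),      # illustrative regional covariates
    "enrolment": rng.standard_normal(n),
})
# Structural type of an institution: 0 = teaching, 1 = teaching-research,
# 2 = research-teaching, 3 = research (categories taken from the abstract).
df["type"] = rng.integers(0, 4, size=n)

model = sm.MNLogit(df["type"], sm.add_constant(df[["funding", "enrolment"]]))
res = model.fit(disp=False)

# exp(coefficients) = relative odds of each category versus the baseline type.
odds_ratios = np.exp(res.params)
print(odds_ratios)
```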
675.
The non-parametric generalized likelihood ratio test is a popular method of model checking for regressions. However, two issues can limit its power: an inherent bias term and the curse of dimensionality. The purpose of this paper is therefore twofold: a bias reduction is proposed, and a dimension-reduction-based, adaptive-to-model enhancement is recommended to improve the power performance. The proposed test statistic still possesses the Wilks phenomenon and behaves like a test with only one covariate; it thus converges to its limit at a much faster rate and is much more sensitive to alternative models than the classical non-parametric generalized likelihood ratio test. As a by-product, we also prove that the bias-corrected test is more efficient than the one without bias reduction, in the sense that its asymptotic variance is smaller. Simulation studies and a real data analysis are conducted to evaluate the proposed tests.
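For orientation, a sketch of the classical non-parametric GLR statistic lambda_n = (n/2) * log(RSS_0 / RSS_1) that the paper sets out to improve; it contains neither the proposed bias correction nor the adaptive-to-model dimension reduction. The local-linear alternative is fitted with statsmodels' KernelReg, and the toy data are placeholders.

```python
import numpy as np
from statsmodels.nonparametric.kernel_regression import KernelReg

rng = np.random.default_rng(4)
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x + 0.3 * x ** 2 + 0.3 * rng.standard_normal(n)   # truth is quadratic

# Null model: simple linear regression.
X0 = np.column_stack([np.ones(n), x])
beta0, *_ = np.linalg.lstsq(X0, y, rcond=None)
rss0 = np.sum((y - X0 @ beta0) ** 2)

# Alternative: local-linear (nonparametric) fit with cross-validated bandwidth.
yhat1, _ = KernelReg(endog=y, exog=x, var_type="c").fit()
rss1 = np.sum((y - yhat1) ** 2)

lam = 0.5 * n * np.log(rss0 / rss1)
print("GLR statistic:", lam)
```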
676.
Ultra-high dimensional data arise in many fields of modern science, such as medical science, economics, genomics and image processing, and pose unprecedented challenges for statistical analysis. With the rapidly growing size of scientific data across disciplines, feature screening has become a primary step for reducing the high dimensionality to a moderate scale that can be handled by existing penalized methods. In this paper, we introduce a simple and robust feature screening method, free of any model assumption, to tackle high-dimensional censored data. The proposed method is model-free and hence applicable to a general class of survival models. The sure screening and ranking consistency properties are established without any finite-moment condition on the predictors or the response. The computation of the proposed method is straightforward. Finite-sample performance is examined via extensive simulation studies, and an application is illustrated with a gene association study of mantle cell lymphoma.
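A generic sketch of the marginal screening loop, with a deliberately naive utility (Kendall's tau computed on uncensored observations only) standing in for the paper's model-free, censoring-robust measure, which is not reproduced here; the screening size n/log(n) is a common convention, not the paper's choice.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(5)
n, p = 150, 1000
X = rng.standard_normal((n, p))
true_time = np.exp(0.8 * X[:, 0] - 0.6 * X[:, 1] + rng.standard_normal(n))
censor = rng.exponential(np.median(true_time) * 2, n)
time = np.minimum(true_time, censor)
event = true_time <= censor                 # True = observed failure, False = censored

# Marginal utility per predictor (placeholder: |Kendall tau| on uncensored rows).
scores = np.array([
    abs(kendalltau(X[event, j], time[event])[0]) for j in range(p)
])

d = int(n / np.log(n))                      # common screening size (assumption)
selected = np.argsort(scores)[::-1][:d]
print("top-ranked predictors:", selected[:10])
```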
677.
This paper is concerned with testing the equality of two high-dimensional spatial sign covariance matrices, with applications to testing the proportionality of two high-dimensional covariance matrices. Interestingly, these two testing problems are completely equivalent for the class of elliptically symmetric distributions. This paper develops a new test for the equality of two high-dimensional spatial sign covariance matrices based on the Frobenius norm of the difference between the two matrices. The asymptotic normality of the proposed test statistic is derived under the null and alternative hypotheses when the dimension and sample sizes both tend to infinity, and the asymptotic power function is also presented. Simulation studies show that the proposed test performs very well in a wide range of settings and accommodates large dimensions with small sample sizes.
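A small sketch of the test statistic, taken as the squared Frobenius norm of the difference between the two spatial sign covariance matrices. Calibration here is by permutation rather than the asymptotic normal limit derived in the paper, and centring at the coordinate-wise median is a simplification (a spatial median is more usual).

```python
import numpy as np

def spatial_sign_cov(X):
    """Spatial sign covariance matrix of the rows of X (median-centred)."""
    U = X - np.median(X, axis=0)
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    return U.T @ U / len(U)

def frobenius_stat(X1, X2):
    """Squared Frobenius norm of the difference of the two sign covariances."""
    return np.sum((spatial_sign_cov(X1) - spatial_sign_cov(X2)) ** 2)

rng = np.random.default_rng(6)
n1, n2, p = 60, 60, 100                     # large dimension, small samples
X1 = rng.standard_normal((n1, p))
X2 = rng.standard_normal((n2, p))

obs = frobenius_stat(X1, X2)
pooled = np.vstack([X1, X2])
perm = np.array([
    frobenius_stat(*np.split(pooled[rng.permutation(n1 + n2)], [n1]))
    for _ in range(500)
])
print("permutation p-value:", np.mean(perm >= obs))
```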
678.
In this article, we propose a new method for sufficient dimension reduction when both the response and the predictor are vectors. The new method, based on distance covariance, keeps the model-free advantage and can fully recover the central subspace even when many predictors are discrete. We then extend the method to the dual central subspace, which includes canonical correlation analysis as a special case. We illustrate the estimators through extensive simulations and real datasets and compare them with some existing methods, showing that our estimators are competitive and robust.
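A rough sketch of the objective: the (biased) sample distance covariance between a one-dimensional projection b'X and Y, computed from double-centred distance matrices. The paper's estimator maximises this objective over projection matrices; the sketch below merely evaluates it for a few candidate directions and keeps the best, a crude stand-in for that optimisation.

```python
import numpy as np

def _centred_dist(z):
    """Double-centred pairwise Euclidean distance matrix."""
    z = z.reshape(len(z), -1)
    D = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()

def dist_cov_sq(x, y):
    """Squared sample distance covariance (V-statistic version)."""
    A, B = _centred_dist(x), _centred_dist(y)
    return (A * B).mean()

rng = np.random.default_rng(7)
n, p = 300, 5
X = rng.standard_normal((n, p))
Y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)   # true direction is e_1

candidates = np.eye(p)                      # illustrative candidate directions
scores = [dist_cov_sq(X @ b, Y) for b in candidates]
print("best candidate direction:", int(np.argmax(scores)))
```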
679.
In this paper, an unstructured principal fitted response reduction approach is proposed. The new approach differs from two existing model-based approaches mainly in that the required condition is imposed on the covariance matrix of the responses rather than on that of the random error. It is also invariant under a popular way of standardizing the responses so that their sample covariance equals the identity matrix. Numerical studies show that the proposed approach yields more robust estimation than the two existing methods, in the sense that its asymptotic performance is not severely sensitive to varying situations. The proposed method can therefore be recommended as a default model-based method.
680.
We present a Bayesian model selection approach to estimating the intrinsic dimensionality of a high-dimensional dataset. To this end, we introduce a novel formulation of the probabilistic principal component analysis model based on a normal-gamma prior distribution. In this context, we exhibit a closed-form expression for the marginal likelihood, which allows an optimal number of components to be inferred. We also propose a heuristic, based on the expected shape of the marginal-likelihood curve, for choosing the hyperparameters. In nonasymptotic frameworks, we show on simulated data that this exact dimensionality selection approach is competitive with both Bayesian and frequentist state-of-the-art methods.
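A generic sketch of dimensionality selection by a penalised probabilistic-PCA likelihood, standing in for (not reproducing) the paper's closed-form marginal likelihood under the normal-gamma prior. scikit-learn's PCA.score supplies the PPCA average log-likelihood per sample, and the parameter count in the BIC penalty is a rough assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
n, p, true_d = 200, 30, 4
W = rng.standard_normal((p, true_d))
X = rng.standard_normal((n, true_d)) @ W.T + 0.5 * rng.standard_normal((n, p))

bics = []
for d in range(1, 11):
    pca = PCA(n_components=d).fit(X)
    loglik = n * pca.score(X)                       # total PPCA log-likelihood
    n_params = p * d - d * (d - 1) / 2 + 1          # rough PPCA parameter count (assumption)
    bics.append(-2 * loglik + n_params * np.log(n))

print("selected dimension:", int(np.argmin(bics)) + 1)
```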