901.
Multivariate statistical analysis in geology studies the interdependence and intrinsic statistical regularities among multiple geological variables through the analysis of, and inference from, geological observation data, and it involves very complex mathematical computation. This paper uses MATLAB to implement the factor-analysis computations of geological multivariate statistical analysis and the plotting of trend-surface contours, and gives a preliminary discussion of computational and plotting methods for geological multivariate statistical analysis with MATLAB.
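The paper's implementation is in MATLAB; the sketch below is only a rough Python analogue of the two computations it names, factor analysis and trend-surface contouring, using scikit-learn and matplotlib. The data, the number of factors and the quadratic surface are illustrative assumptions, not taken from the paper.

```python
# Illustrative Python analogue of the described MATLAB workflow: factor analysis
# of geological observations plus a quadratic trend-surface contour map.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical data: n samples with coordinates (x, y) and several observed variables.
n = 200
xy = rng.uniform(0, 10, size=(n, 2))
obs = rng.normal(size=(n, 5))                        # placeholder geological variables
z = 1.0 + 0.5 * xy[:, 0] - 0.3 * xy[:, 1] + 0.05 * xy[:, 0] * xy[:, 1] + rng.normal(0, 0.2, n)

# 1) Factor analysis: extract two common factors from the observed variables.
fa = FactorAnalysis(n_components=2).fit(obs)
loadings = fa.components_.T                          # variables x factors
scores = fa.transform(obs)                           # per-sample factor scores
print("factor loadings:\n", np.round(loadings, 2))

# 2) Quadratic trend surface for z over (x, y), fitted by least squares.
x, y = xy[:, 0], xy[:, 1]
A = np.column_stack([np.ones(n), x, y, x * y, x**2, y**2])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)

# Evaluate the fitted surface on a grid and draw contour lines.
gx, gy = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
G = np.column_stack([np.ones(gx.size), gx.ravel(), gy.ravel(),
                     (gx * gy).ravel(), gx.ravel()**2, gy.ravel()**2])
trend = (G @ coef).reshape(gx.shape)

cs = plt.contour(gx, gy, trend, levels=10)
plt.clabel(cs, inline=True, fontsize=8)
plt.title("Quadratic trend-surface contours (illustrative)")
plt.show()
```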
902.
This paper presents the most basic analytical method of Markov decision theory, the system state-transition probability matrix decision method. In view of the characteristics of enterprise clusters, the method is applied to analyse the states of the enterprise-cluster market and to predict the future trend and pattern of the distribution over these states, computing the future market-resource share of each type of enterprise. This provides a theoretical basis and methodological guidance for government regulation of the market.
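A minimal sketch of the state-transition probability matrix calculation described above: project the current market-state distribution forward through a transition matrix and compute its long-run limit. The 3-state matrix and the initial shares are invented for illustration.

```python
# Project a market-state distribution forward with a Markov transition matrix
# and compute the long-run (stationary) distribution.
import numpy as np

P = np.array([[0.7, 0.2, 0.1],       # row-stochastic transition matrix:
              [0.3, 0.5, 0.2],       # P[i, j] = P(next state j | current state i)
              [0.2, 0.3, 0.5]])
pi0 = np.array([0.5, 0.3, 0.2])      # current market-share distribution over states

# Distribution after k transition periods: pi_k = pi_0 P^k.
pi_k = pi0 @ np.linalg.matrix_power(P, 5)

# Long-run (stationary) distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
stat = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
stat = stat / stat.sum()

print("share after 5 periods:", np.round(pi_k, 3))
print("long-run share:       ", np.round(stat, 3))
```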
903.
This paper analyses the current state and shortcomings of research on DSM (Design Structure Matrix) optimization methods and then, taking coupling strength as the basis, proposes a numerical design structure matrix. Building on this matrix and using a genetic algorithm as the optimization tool, a new DSM optimization method is developed through the formulation of the objective function and the design of the encoding, the crossover operator and the mutation operator. Taking the design process of an aircraft component as an example, the design time, cost and process are optimized, and a comparison with existing algorithms verifies the superior performance of the new method in terms of search efficiency, objective function value and optimization results.
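The sketch below is a minimal genetic algorithm of the kind the abstract describes, applied to reordering a coupling-strength DSM so that as little coupling weight as possible becomes feedback. The permutation encoding, order crossover, swap mutation and objective function are common textbook choices and only stand in for the paper's actual formulation.

```python
# Minimal GA that searches for a task ordering of a numerical DSM with low feedback.
import numpy as np

rng = np.random.default_rng(1)
n_tasks = 8
D = rng.uniform(0, 1, size=(n_tasks, n_tasks)) * (rng.random((n_tasks, n_tasks)) < 0.3)
np.fill_diagonal(D, 0.0)             # D[i, j] > 0: task i needs information from task j

def feedback_cost(perm):
    """Total coupling strength that points 'backwards' in the given sequence."""
    M = D[np.ix_(perm, perm)]
    return np.triu(M, k=1).sum()     # entries above the diagonal = feedback loops

def order_crossover(p1, p2):
    a, b = sorted(rng.choice(len(p1), size=2, replace=False))
    child = [-1] * len(p1)
    child[a:b + 1] = list(p1[a:b + 1])
    fill = [g for g in p2 if g not in child[a:b + 1]]
    it = iter(fill)
    return [g if g != -1 else next(it) for g in child]

def mutate(perm):
    i, j = rng.choice(len(perm), size=2, replace=False)
    perm[i], perm[j] = perm[j], perm[i]
    return perm

pop = [list(rng.permutation(n_tasks)) for _ in range(40)]
for _ in range(200):
    pop.sort(key=feedback_cost)
    elite = pop[:10]                                  # simple truncation selection
    children = []
    while len(children) < 30:
        i, j = rng.choice(len(elite), size=2, replace=False)
        child = order_crossover(elite[i], elite[j])
        if rng.random() < 0.2:
            child = mutate(child)
        children.append(child)
    pop = elite + children

best = min(pop, key=feedback_cost)
print("best sequence:", best, "feedback cost:", round(float(feedback_cost(best)), 3))
```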
904.
905.
It is well known that it is difficult to obtain an accurate optimal design for a mixture experiment with complex constraints. In this article, we construct a random search algorithm that can be used to find the optimal design for a mixture model with complex constraints. First, we generate an initial set by the Monte-Carlo method, and then run the random search algorithm to obtain the optimal set of points. Finally, we demonstrate the effectiveness of this method with two examples.
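A hedged sketch of the two-step idea: a Monte-Carlo candidate set over a constrained mixture region, followed by a random exchange search that improves D-efficiency for a first-order Scheffé model. The constraints, model and criterion are assumptions for illustration; the paper's algorithm and examples may differ.

```python
# Random search for an n-point design on a constrained mixture region.
import numpy as np

rng = np.random.default_rng(2)
q, n_design, n_candidates = 3, 10, 2000

def feasible(x):
    # Example constraints on a 3-component mixture (assumed, not from the paper).
    return (0.1 <= x[0] <= 0.6) and (0.2 <= x[1] <= 0.7) and (x[2] <= 0.5)

# Step 1: Monte-Carlo candidate set - sample the simplex, keep feasible points.
cand = []
while len(cand) < n_candidates:
    x = rng.dirichlet(np.ones(q))
    if feasible(x):
        cand.append(x)
cand = np.array(cand)

def log_det_info(idx):
    X = cand[idx]                        # first-order Scheffe model: columns are the components
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

# Step 2: random exchange search - swap one design point for a random candidate
# whenever that increases the log-determinant of the information matrix.
design = list(rng.choice(n_candidates, size=n_design, replace=False))
best = log_det_info(design)
for _ in range(5000):
    trial = design.copy()
    trial[rng.integers(n_design)] = rng.integers(n_candidates)
    val = log_det_info(trial)
    if val > best:
        design, best = trial, val

print("log |X'X| of final design:", round(float(best), 3))
print(np.round(cand[design], 3))
```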
906.
Graphical analysis of complex brain networks is a fundamental area of modern neuroscience. Functional connectivity is important since many neurological and psychiatric disorders, including schizophrenia, are described as 'dys-connectivity' syndromes. Using electroencephalogram time series collected on each of a group of 15 individuals with a common medical diagnosis of positive-syndrome schizophrenia, we seek to build a single, representative, brain functional connectivity group graph. Disparity/distance measures between spectral matrices are identified and used to define the normalized graph Laplacian, enabling clustering of the spectral matrices for detecting 'outlying' individuals. Two such individuals are identified. For each remaining individual, we derive a test for each edge in the connectivity graph based on average estimated partial coherence over frequencies, and associated p-values are found. For each edge these are used in a multiple hypothesis test across individuals, and the proportion rejecting the hypothesis of no edge is used to construct a connectivity group graph. This study provides a framework for integrating results on multiple individuals into a single overall connectivity structure.
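A sketch of the final aggregation step: per-edge p-values across individuals are corrected for multiple testing and an edge enters the group graph when the proportion of rejections is large enough. The Benjamini-Hochberg correction and the 50% threshold are assumptions; the paper's choices may differ.

```python
# Build a group connectivity graph from per-edge, per-individual p-values.
import numpy as np

def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure; returns a boolean rejection vector."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])            # largest index meeting the BH bound
        reject[order[:k + 1]] = True
    return reject

def group_graph(pval_matrix, alpha=0.05, prop=0.5):
    """pval_matrix: individuals x edges. Keep an edge if enough individuals reject H0."""
    n_ind, n_edges = pval_matrix.shape
    keep = np.zeros(n_edges, dtype=bool)
    for e in range(n_edges):
        rej = bh_reject(pval_matrix[:, e], alpha)   # multiple test across individuals
        keep[e] = rej.mean() >= prop
    return keep

# Illustrative use: 13 retained individuals, 10 candidate edges, random p-values.
rng = np.random.default_rng(3)
pvals = rng.uniform(size=(13, 10))
print(group_graph(pvals))
```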
907.
Sampling the correlation matrix (R) plays an important role in statistical inference for correlated models. There are two main constraints on a correlation matrix: positive definiteness and fixed diagonal elements. These constraints make sampling R difficult. In this paper, an efficient generalized parameter-expanded re-parametrization and Metropolis-Hastings (GPX-RPMH) algorithm for sampling a correlation matrix is proposed. Drawing all components of R simultaneously from its full conditional distribution is realized by first drawing a covariance matrix from the derived parameter-expanded candidate density (PXCD), and then translating it back to a correlation matrix and accepting it according to a Metropolis-Hastings (M-H) acceptance rate. The mixing rate in the M-H step can be adjusted through a class of tuning parameters embedded in the generalized candidate prior (GCP), which is chosen for R to derive the PXCD. The algorithm is illustrated using multivariate regression (MVR) models, and a simulation study shows that the GPX-RPMH algorithm is more efficient than other methods.
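The full GPX-RPMH acceptance step depends on the paper's generalized candidate prior and is not reproduced here; the sketch below only illustrates the re-parametrization the algorithm relies on, expanding a correlation matrix to a covariance matrix with extra scale parameters and reducing a covariance draw back to a correlation matrix with unit diagonal.

```python
# Covariance <-> correlation re-parametrization used by parameter-expansion samplers.
import numpy as np
from scipy.stats import wishart

def cov_to_corr(S):
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)                # R = D^{-1/2} S D^{-1/2}

def corr_to_cov(R, variances):
    d = np.sqrt(variances)
    return R * np.outer(d, d)                # S = D^{1/2} R D^{1/2}

# Illustrative round trip: draw a covariance candidate, reduce it to a correlation
# matrix (unit diagonal, positive definite), then expand it back with its variances.
p = 4
S = wishart(df=p + 2, scale=np.eye(p)).rvs(random_state=4)
R = cov_to_corr(S)
print(np.round(R, 3))
print("unit diagonal:", np.allclose(np.diag(R), 1.0))
print("round trip ok:", np.allclose(corr_to_cov(R, np.diag(S)), S))
```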
908.
Mukherjee and Maiti [Q-procedure for solving likelihood equations in the analysis of covariance structures, Comput. Statist. Quart. 2 (1988), pp. 105–128] proposed an iterative scheme to derive the maximum likelihood estimates of the parameters involved in the population covariance matrix when it is linearly structured. The present investigation provides a Jacobi-type iterative scheme, MSIII, for the case where the underlying correlation matrix is linearly structured. This scheme is shown to be quite competitive and efficient compared with the prevalent Fisher-scoring (FS) and Newton-Raphson (NR) iterative schemes. An illustrative example provides a numerical comparison of the iterates of MSIII, FS and NR, choosing a Toeplitz matrix as the population correlation matrix. The numerical behaviour of these schemes is studied in the context of 'bad' initial try-out vectors. Additionally, a simulation experiment is performed to judge the superiority of MSIII over FS.
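The sketch below is not MSIII, FS or NR; it is a generic numerical maximum-likelihood fit for the same kind of target, assuming a one-parameter Toeplitz (AR(1)-type) correlation structure, and only indicates what such iterative schemes are solving.

```python
# Numerical MLE for a one-parameter Toeplitz correlation matrix R_ij = rho^|i-j|.
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
p, n, rho_true = 5, 400, 0.6
R_true = toeplitz(rho_true ** np.arange(p))
X = rng.multivariate_normal(np.zeros(p), R_true, size=n)
S = X.T @ X / n                                    # sample covariance (unit variances assumed)

def neg_loglik(rho):
    R = toeplitz(rho ** np.arange(p))
    sign, logdet = np.linalg.slogdet(R)
    if sign <= 0:
        return np.inf
    # Gaussian negative log-likelihood up to an additive constant.
    return 0.5 * n * (logdet + np.trace(np.linalg.solve(R, S)))

res = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
print("ML estimate of rho:", round(res.x, 4))
```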
909.
Neglecting heteroscedasticity of the error terms may result in the wrong identification of a regression model (see appendix). Employing White's (heteroscedasticity-resistant) estimator of the covariance matrix of the estimated regression coefficients may lead to the correct decision about the significance of individual explanatory variables under heteroscedasticity. However, White's estimator of the covariance matrix was established for least squares (LS) regression analysis (when the error terms are normally distributed, LS and maximum likelihood (ML) analysis coincide, and hence White's estimate of the covariance matrix is then available for ML regression analysis, too). Establishing a White-type estimate for another estimator of the regression coefficients requires a Bahadur representation of the estimator in question under heteroscedasticity of the error terms. The derivation of the Bahadur representation for other (robust) estimators requires some tools. The key tool proved to be a tight approximation of the empirical distribution function (d.f.) of the residuals by the theoretical d.f. of the error terms of the regression model. We need the approximation to be uniform in the argument of the d.f. as well as in the regression coefficients. The present paper offers this approximation for the situation when the error terms are heteroscedastic.
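A minimal sketch of White's (HC0) sandwich estimator mentioned above, computed for ordinary least squares on simulated heteroscedastic data; the data-generating process is an assumption for illustration only.

```python
# OLS with classical and White (HC0) standard errors under heteroscedastic errors.
import numpy as np

rng = np.random.default_rng(6)
n = 500
x = rng.uniform(0, 5, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + 0.4 * x)    # error variance grows with x

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y                                # OLS estimates
e = y - X @ beta                                        # residuals

# Classical (homoscedastic) covariance vs. White's HC0 sandwich estimator.
V_classic = e @ e / (n - X.shape[1]) * XtX_inv
V_white = XtX_inv @ (X.T * e**2) @ X @ XtX_inv          # (X'X)^-1 X' diag(e_i^2) X (X'X)^-1

print("classical SEs:  ", np.sqrt(np.diag(V_classic)).round(4))
print("White (HC0) SEs:", np.sqrt(np.diag(V_white)).round(4))
```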
910.
In statistical practice, systematic sampling (SYS) is used in many variants because of its simple handling. In addition, SYS may provide efficiency gains if it is well adjusted to the structure of the population under study. However, if SYS is based on an inappropriate picture of the population, a severe loss of efficiency, i.e. a large increase in variance, may result from changing from simple random sampling to SYS. In the context of two-stage designs, SYS so far seems to be used mostly for subsampling within the primary units. As an alternative to this practice, we propose to randomize the order of the primary units, then to select a number of primary units systematically and, thereafter, to draw secondary units by simple random sampling without replacement within the primary units selected. This procedure is more efficient than simple random sampling with replacement from the whole population of all secondary units: the variance of an adequate estimator of a total is never increased by changing from simple random sampling to randomized SYS, whatever values the characteristic of interest takes on the secondary units, while there are values for which the variance decreases under this change. This result should hold generally, even if our proof is, so far, not complete for general sample sizes.
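A sketch of the proposed two-stage procedure: randomize the order of the primary units, select primary units systematically, then draw secondary units within each selected unit by simple random sampling without replacement. The population and sample sizes are illustrative assumptions.

```python
# Two-stage sampling: randomized systematic selection of PSUs, then SRSWOR of SSUs.
import numpy as np

rng = np.random.default_rng(7)
N_primary, M_secondary = 20, 50          # 20 primary units, 50 secondary units each
n_primary, m_secondary = 5, 10           # select 5 PSUs, 10 SSUs within each

# Stage 1: randomized systematic selection of primary units.
order = rng.permutation(N_primary)       # randomize the order of the primary units
k = N_primary // n_primary               # sampling interval
start = rng.integers(k)                  # random start in the first interval
selected_psus = order[start::k][:n_primary]

# Stage 2: simple random sampling without replacement within each selected primary unit.
sample = {
    int(psu): rng.choice(M_secondary, size=m_secondary, replace=False)
    for psu in selected_psus
}
for psu, units in sample.items():
    print(f"PSU {psu}: secondary units {sorted(units.tolist())}")
```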