Results 61–70 of 2,267 (search time: 18 ms)
61.
The traditional Cobb–Douglas production function uses a compact mathematical form to describe the relationship between production output and production factors. In macroeconomic production, however, multi-structured production is ubiquitous. To better capture this input–output relation, a composite production function model is proposed in this article. For parameter estimation, an artificial fish swarm algorithm is applied. The algorithm performs well at escaping local extrema and finding the global extremum, and its implementation does not require the gradient of the objective function, which makes it adaptable to the search space. With the improved artificial fish swarm algorithm, both convergence rate and precision are considerably improved. In application, the composite production function model is used mainly to calculate the contribution rates of economic growth factors, and a more accurate calculation method is proposed. Finally, an empirical analysis of the contribution rates to China's economic growth is presented.
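The derivative-free search described above can be illustrated with a heavily simplified sketch: a prey-behaviour-only artificial fish swarm (no swarm or follow steps, fixed visual range) fitting the parameters of a plain Cobb–Douglas form Y = A·K^α·L^β to synthetic data. The data, bounds, and tuning constants are all illustrative, not the article's composite model or improved algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a known Cobb-Douglas law Y = A * K**a * L**b
K = rng.uniform(1, 10, 50)
L = rng.uniform(1, 10, 50)
Y = 2.0 * K**0.3 * L**0.6

def sse(p):
    A, a, b = p
    return np.sum((Y - A * K**a * L**b) ** 2)

def fish_swarm(obj, bounds, n_fish=30, visual=0.5, tries=5, iters=150):
    """Very simplified artificial-fish-swarm search (prey behaviour only)."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    fish = rng.uniform(lo, hi, size=(n_fish, lo.size))
    best = min(fish, key=obj).copy()
    trace = [obj(best)]                # best objective value per sweep
    for _ in range(iters):
        for i in range(n_fish):
            for _ in range(tries):     # "prey": probe a random nearby state
                cand = np.clip(fish[i] + rng.uniform(-visual, visual, lo.size), lo, hi)
                if obj(cand) < obj(fish[i]):
                    fish[i] = cand
                    break
            else:                      # nothing better found: random move
                fish[i] = np.clip(fish[i] + rng.uniform(-visual, visual, lo.size), lo, hi)
            if obj(fish[i]) < obj(best):
                best = fish[i].copy()
        trace.append(obj(best))
    return best, trace

bounds = np.array([[0.1, 5.0], [0.0, 1.0], [0.0, 1.0]])
p_hat, trace = fish_swarm(sse, bounds)
print(p_hat, trace[-1])
```

No gradient of `sse` is evaluated anywhere, which is the property the abstract highlights; only function values drive the search.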
62.
A homotopy regularization algorithm is applied to the inversion of a space-dependent dispersion coefficient in a two-dimensional advection–dispersion equation. The effects of the initial iterate, the numerical differentiation step size, and the convergence tolerance on the algorithm's performance are discussed. Numerical simulations show that the homotopy regularization algorithm is an effective method for this class of parameter inversion problems.
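The homotopy idea, gradually deforming a well-posed regularized objective into the pure data-misfit objective, can be sketched on a toy scalar inverse problem; the forward map F, the data value, and the step sizes below are illustrative and have nothing to do with the 2-D advection–dispersion setting of the paper:

```python
import numpy as np

def F(x):                  # illustrative monotone nonlinear forward map
    return x + x**3

def dF(x):
    return 1.0 + 3.0 * x**2

d_obs, x_prior = 10.0, 0.0          # data F(2) = 10; prior guess 0

# Minimize H(x, lam) = lam*(F(x)-d)^2 + (1-lam)*(x-x_prior)^2,
# warm-starting each stage as the homotopy parameter moves 0 -> 1.
x = x_prior
for lam in np.linspace(0.1, 1.0, 10):
    for _ in range(2000):           # plain gradient descent at fixed lam
        grad = (lam * 2.0 * (F(x) - d_obs) * dF(x)
                + (1.0 - lam) * 2.0 * (x - x_prior))
        x -= 0.002 * grad
print(x)                            # converges toward the true value 2
```

At lam near 0 the objective is dominated by the well-behaved penalty term, so early stages cannot get trapped far from the prior; by lam = 1 only the data misfit remains, with a good warm start.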
63.
Nonparametric models with jump points have been considered by many researchers. However, most existing methods based on least squares or likelihood are sensitive to outliers and heavy-tailed error distributions. In this article, a local piecewise-modal method is proposed to estimate regression functions with jump points in nonparametric models, and a piecewise-modal EM algorithm is introduced to compute the proposed estimator. Under some regularity conditions, large-sample theory is established for the proposed estimators. Several simulations evaluate the performance of the proposed method and show that the proposed estimator is more efficient than the local piecewise-polynomial regression estimator in the presence of outliers or heavy-tailed errors. Moreover, the proposed procedure is asymptotically equivalent to the local piecewise-polynomial regression estimator when the error distribution is Gaussian. The method is further illustrated with sea-level pressure data.
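The modal idea behind the estimator can be sketched in its simplest form: a modal EM iteration for a single location parameter (not the full piecewise regression), which reweights observations by a Gaussian kernel on the residuals and is therefore insensitive to gross outliers. The data below are made up:

```python
import numpy as np

def modal_em(y, h=0.5, theta0=None, iters=100):
    """Modal EM: the E-step reweights points by a Gaussian kernel on the
    residuals, the M-step takes the weighted mean; the fixed point is a
    local mode of the kernel-smoothed density of y."""
    theta = np.mean(y) if theta0 is None else theta0
    for _ in range(iters):
        w = np.exp(-0.5 * ((y - theta) / h) ** 2)   # E-step: kernel weights
        theta = np.sum(w * y) / np.sum(w)           # M-step: weighted mean
    return theta

y = np.array([0.9, 1.0, 1.1, 1.2, 0.8, 1.05, 0.95, 10.0, 12.0])  # two gross outliers
print(np.mean(y), modal_em(y))   # mean is dragged off; the mode is not
```

The sample mean is pulled above 3 by the two outliers, while the modal estimate stays near the bulk of the data at 1, illustrating the robustness claim in the abstract.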
64.
The standard location and scale unrestricted (or unified) skew-normal (SUN) family studied by Arellano-Valle and Genton [On fundamental skew distributions. J Multivar Anal. 2005;96:93–116] and Arellano-Valle and Azzalini [On the unification of families of skew-normal distributions. Scand J Stat. 2006;33:561–574] allows the modelling of symmetrically or asymmetrically distributed data. The family has a number of advantages for the analysis of stochastic processes such as auto-regressive moving-average (ARMA) models: it is closed under linear combinations, it satisfies the consistency condition of Kolmogorov's theorem, which guarantees the existence of a SUN stochastic process, and it admits a hierarchical representation that eases simulation and facilitates an EM-type algorithm for estimating the model parameters. The performance and suitability of the proposed model are demonstrated on simulations and on two real data sets.
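The hierarchical (convolution) representation mentioned above can be sketched in its simplest special case, the scalar skew-normal, where Z = δ|U₀| + √(1−δ²)·U₁ has shape parameter α = δ/√(1−δ²); the general SUN case replaces the scalars with truncated multivariate normals. Sample size and shape value here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def rskewnormal(n, alpha, rng):
    """Sample the scalar skew-normal SN(0, 1, alpha) via its hierarchical
    (convolution) representation Z = delta*|U0| + sqrt(1-delta^2)*U1,
    the simplest instance of the SUN family's hierarchical form."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0 = np.abs(rng.standard_normal(n))      # half-normal latent component
    u1 = rng.standard_normal(n)              # independent normal component
    return delta * u0 + np.sqrt(1.0 - delta**2) * u1

alpha = 3.0
z = rskewnormal(200_000, alpha, rng)
delta = alpha / np.sqrt(1.0 + alpha**2)
print(z.mean(), delta * np.sqrt(2 / np.pi))  # sample vs theoretical mean
```

The sample mean matches the closed-form E[Z] = δ√(2/π), a quick check that the hierarchical construction produces the intended distribution.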
65.
In the data era, a series of major problems, including infringement of personal information, leakage of personal privacy, and algorithmic discrimination, calls for establishing the right to be forgotten, whose purpose is to protect the rights of information subjects. The conflict between privacy and freedom underlying the European and American right to be forgotten reflects the importance of this right to the needs of individual countries, and the right of deletion provided in China's existing legal system is not equivalent to the right to be forgotten established by the EU. The current right to be forgotten faces three developmental difficulties: defects in defining its object of regulation, deviations in specifying the content of the right, and inadequacies in designating the responsible parties. Heated scholarly debate and practical demand indicate, to some extent, that China should introduce the right to be forgotten, and a Chinese path should be built by optimizing the object of regulation, reshaping the content of the right, and clarifying the allocation of responsibility, in order to meet the new challenges brought by the development of the networked information society.
66.
Building on an analysis of railway freight volume forecasting methods, an improved BP neural network forecasting model is proposed to address the shortcomings of the standard BP network. First, a dynamic steepness factor is used to adjust the steepness of the activation function, yielding a better activation response and stronger nonlinear expressive power. Second, an additional momentum factor accumulates past experience, which both reduces the network's sensitivity to local details of the error surface and effectively curbs its tendency to become trapped in local minima. Finally, the learning rate is varied: training starts with a relatively large initial learning rate that is gradually decreased, so that the network eventually stabilizes. The improved BP algorithm yields better solutions and shortens training time. Using national railway freight…
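Two of the three modifications (the additional momentum term and the decaying learning rate; the dynamic steepness factor is omitted) can be sketched in a minimal one-hidden-layer network. The toy data stand in for a freight series and are entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical smooth 1-D target standing in for a freight-volume curve
X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
y = 0.5 + 0.4 * np.sin(3 * X)

# 1-5-1 network: tanh hidden layer, linear output
W1 = rng.normal(0, 0.5, (1, 5)); b1 = np.zeros(5)
W2 = rng.normal(0, 0.5, (5, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)   # momentum buffers
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

lr, momentum, losses = 0.1, 0.8, []
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                # forward pass
    out = h @ W2 + b2
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    g_out = 2 * err / len(X)                # backward pass
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    # additional momentum term: v <- momentum*v - lr*grad, w <- w + v
    vW1 = momentum * vW1 - lr * gW1; W1 += vW1
    vb1 = momentum * vb1 - lr * gb1; b1 += vb1
    vW2 = momentum * vW2 - lr * gW2; W2 += vW2
    vb2 = momentum * vb2 - lr * gb2; b2 += vb2
    lr *= 0.999                             # decaying learning rate

print(losses[0], losses[-1])
```

The momentum buffers smooth updates across the error surface, and the multiplicative decay implements the "start large, shrink until stable" learning-rate schedule described above.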
67.
Data sets with excess zeroes are frequently analyzed in many disciplines. A common framework for such data is the zero-inflated (ZI) regression model, which mixes a degenerate distribution with point mass at zero and a non-degenerate distribution. Estimates from ZI models quantify the effects of covariates on the means of latent random variables, which are often not the quantities of primary interest. Recently, marginal zero-inflated Poisson (MZIP; Long et al. [A marginalized zero-inflated Poisson regression model with overall exposure effects. Stat. Med. 33 (2014), pp. 5151–5165]) and negative binomial (MZINB; Preisser et al., 2016) models have been introduced that model the mean response directly. These models yield covariate effects with simple interpretations that are, for many applications, more appealing than those available from ZI regression. This paper outlines a general framework for marginal zero-inflated models in which the latent distribution is a member of the exponential dispersion family, focusing on common distributions for count data. In particular, the discussion includes the marginal zero-inflated binomial (MZIB) model, which has not been discussed previously. The details of maximum likelihood estimation via the EM algorithm are presented, and the properties of the estimators, as well as Wald and likelihood-ratio-based inference, are examined via simulation. Two examples illustrate the advantages of MZIP, MZINB, and MZIB models for practical data analysis.
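The marginal parameterization can be sketched directly: a ZIP likelihood written in terms of its marginal mean ν, so that the latent Poisson mean is μ = ν/(1−ψ) and E[Y] = ν holds exactly. The parameter values below are illustrative:

```python
import math
import numpy as np

rng = np.random.default_rng(7)

def mzip_loglik(y, nu, psi):
    """Log-likelihood of a zero-inflated Poisson parameterized by its
    *marginal* mean nu: the latent Poisson mean is mu = nu / (1 - psi),
    so E[Y] = (1 - psi) * mu = nu exactly (the MZIP idea)."""
    mu = nu / (1.0 - psi)
    ll = 0.0
    for yi in y:
        if yi == 0:
            ll += math.log(psi + (1 - psi) * math.exp(-mu))
        else:
            ll += math.log(1 - psi) - mu + yi * math.log(mu) - math.lgamma(yi + 1)
    return ll

# Simulate ZIP draws and check the marginal-mean identity E[Y] = nu
psi, nu = 0.3, 2.0
mu = nu / (1 - psi)
n = 100_000
is_zero = rng.random(n) < psi                 # structural zeros
y = np.where(is_zero, 0, rng.poisson(mu, n))  # else Poisson counts
print(y.mean(), nu)                            # close to each other
```

Because ν is the mean of the observed response rather than of a latent variable, a regression log ν = xβ gives coefficients with direct marginal interpretations, which is the appeal the abstract describes.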
68.
In this paper, we present a clustering algorithm based on univariate kernel density estimation, named ClusterKDE. It is an iterative procedure in which, at each step, a new cluster is obtained by minimizing a smooth kernel function. Although our applications use the univariate Gaussian kernel, any smooth kernel function can be used. The proposed algorithm has the advantage of not requiring the number of clusters a priori. Furthermore, ClusterKDE is simple, easy to implement, and well-defined, and it stops in a finite number of steps; that is, it always converges, independently of the initial point. We also illustrate our findings with numerical experiments obtained by implementing the algorithm in Matlab and applying it to practical problems. The results indicate that ClusterKDE is competitive and fast compared with the well-known Clusterdata and K-means algorithms used by Matlab for clustering data.
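A mean-shift-style stand-in for the procedure (hill-climbing each point on its univariate Gaussian KDE and merging climbs that reach the same mode) can be sketched as follows; this follows the spirit of KDE-based clustering, not the authors' exact smooth-kernel minimization:

```python
import numpy as np

def kde_cluster(x, h=0.5, iters=200, tol=1e-3):
    """Cluster 1-D data by hill-climbing each point on its Gaussian KDE;
    points whose climbs reach the same mode share a label. The number of
    clusters is discovered, not supplied."""
    modes = []
    labels = np.empty(len(x), dtype=int)
    for i, xi in enumerate(x):
        m = xi
        for _ in range(iters):                    # mean-shift iteration
            w = np.exp(-0.5 * ((x - m) / h) ** 2)
            m_new = np.sum(w * x) / np.sum(w)
            if abs(m_new - m) < 1e-8:
                break
            m = m_new
        for j, mo in enumerate(modes):            # merge nearby modes
            if abs(m - mo) < tol:
                labels[i] = j
                break
        else:
            modes.append(m)
            labels[i] = len(modes) - 1
    return labels, np.array(modes)

x = np.array([0.1, -0.2, 0.05, 0.3, 5.1, 4.9, 5.2, 4.8])
labels, modes = kde_cluster(x)
print(labels, modes)   # two groups discovered automatically
```

As in the abstract, no cluster count is given in advance: the two well-separated groups each produce one KDE mode, so two labels emerge.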
69.
In this article, to reduce the computational load of Bayesian variable selection, we use a variant of reversible-jump Markov chain Monte Carlo and the Holmes and Held (HH) algorithm to sample model index variables in logistic mixed models involving a large number of explanatory variables. Furthermore, we propose a simple proposal distribution for model index variables and use a simulation study and a real example to compare the performance of the HH algorithm under the proposed and existing proposal distributions. The results show that the HH algorithm with the proposed proposal distribution is a computationally efficient and reliable selection method.
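A much-simplified stand-in for sampling model index variables can be sketched as a single-flip Metropolis sampler over inclusion indicators, using −BIC/2 as a rough log marginal likelihood for a linear (rather than logistic mixed) model; this is emphatically not the HH algorithm, just an illustration of model-index sampling. The data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

n, p = 200, 4
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + rng.standard_normal(n)   # only predictor 0 matters

def bic_score(gamma):
    """-BIC/2 as a rough log marginal likelihood for the model using
    the predictors flagged in gamma (plus an intercept)."""
    Z = np.column_stack([np.ones(n), X[:, gamma]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    k = Z.shape[1]
    return -0.5 * (n * np.log(rss / n) + k * np.log(n))

gamma = np.zeros(p, dtype=bool)              # model index variables
score = bic_score(gamma)
visits = np.zeros(p)
for _ in range(2000):
    j = rng.integers(p)                      # single-flip proposal
    cand = gamma.copy(); cand[j] = ~cand[j]
    cand_score = bic_score(cand)
    if np.log(rng.random()) < cand_score - score:   # Metropolis accept
        gamma, score = cand, cand_score
    visits += gamma

print(visits / 2000)    # posterior inclusion frequencies
```

The inclusion frequency of the informative predictor dominates those of the noise predictors, which is the behaviour a good proposal distribution over model indices should make easy to reach.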
70.
In this paper, locally D-optimal saturated designs for a logistic model with one and two continuous input variables are constructed by modifying the well-known Fedorov exchange algorithm. A saturated design not only ensures the minimum number of runs but also simplifies the row-exchange computation. The basic idea is to exchange a design point with a point from the design space: the algorithm performs the best row exchange between design points and points from a candidate set representing the design space. Naturally, the resulting designs depend on the candidate set. For gains in precision, a candidate set with more points and low discrepancy is intuitively desirable, but it increases the computational cost. Besides the modification of the row-exchange computation, we propose implementing the algorithm in two stages: first construct a design from a candidate set of affordable size, and then generate a new candidate set around the points of the design found in the first stage. The optimality of the constructed designs is validated using the general equivalence theorem. The algorithms have been implemented in R.
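The exchange idea can be sketched for the classical linear case (a quadratic model on [−1, 1], where the D-optimal 3-point saturated design is known to be {−1, 0, 1}), rather than the locally D-optimal logistic case treated in the paper:

```python
import numpy as np

def f(x):                        # quadratic model: E[y] = b0 + b1*x + b2*x^2
    return np.array([1.0, x, x * x])

def d_crit(design):              # D-criterion: det of the information matrix
    F = np.array([f(x) for x in design])
    return np.linalg.det(F.T @ F)

candidates = np.linspace(-1, 1, 21)     # candidate set representing [-1, 1]
design = list(candidates[:3])           # deliberately poor start: -1, -0.9, -0.8

improved = True
while improved:                  # Fedorov-style exchange: swap a design point
    improved = False             # for a candidate point whenever det improves
    for i in range(len(design)):
        for c in candidates:
            trial = design.copy(); trial[i] = c
            if d_crit(trial) > d_crit(design) + 1e-12:
                design[i] = c
                improved = True

print(sorted(design), d_crit(design))
```

Because every accepted swap strictly increases the determinant and the candidate set is finite, the loop terminates; here it recovers the textbook optimum {−1, 0, 1}. The dependence on the candidate grid that the abstract mentions is visible: a coarser `candidates` array would limit the achievable precision.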