Search results: 10,000 matches found (search time: 15 ms)
991.
In April 2013, all of the major academic publishing houses moved thousands of journal titles to a novel hybrid model, under which authors of accepted papers can choose between an expensive open access (OA) track and the traditional track available only to subscribers. This paper argues that authors can now use publication strategy as a quality signaling device. The imperfect-information game between authors and readers admits several types of Perfect Bayesian Equilibrium, including a separating equilibrium in which only authors of high-quality papers are driven toward the open access track. The publishing house should set an open-access publication fee that supports the emergence of the highest-return equilibrium. Journal structures will then evolve over time according to the journals' accessibility and quality profiles.
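The separating equilibrium described above hinges on a simple incentive condition: the OA fee must exceed the signaling benefit of a low-quality paper while staying below that of a high-quality one. The following toy check illustrates that condition; all names and numbers are illustrative, not the paper's model.

```python
def separating_equilibrium_exists(benefit_high, benefit_low, fee):
    """A separating outcome is incentive-compatible when high-quality
    authors gain from paying the OA fee (benefit >= fee) while
    low-quality authors prefer the traditional track (benefit < fee)."""
    return benefit_low < fee <= benefit_high

# Illustrative numbers: OA signaling is worth 5 to a strong paper,
# only 1 to a weak one; the publisher sets the fee at 3.
exists = separating_equilibrium_exists(5.0, 1.0, 3.0)   # separating PBE
pooled = separating_equilibrium_exists(5.0, 4.0, 3.0)   # fee too low
```

With the fee inside the gap between the two benefits, only high types select OA, so readers can infer quality from the publication track.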
992.
We present the parallel and interacting stochastic approximation annealing (PISAA) algorithm, a stochastic simulation procedure for global optimisation that extends and improves stochastic approximation annealing (SAA) using population Monte Carlo ideas. The efficiency of the standard SAA algorithm depends crucially on its self-adjusting mechanism, which suffers from stability issues in high-dimensional or rugged optimisation problems. The proposed algorithm simulates a population of SAA chains that interact with each other in a manner that significantly improves the stability of the self-adjusting mechanism and the search for the global optimum in the sampling space, while inheriting SAA's desirable convergence properties when a square-root cooling schedule is used. It can be implemented in parallel computing environments to mitigate the computational overhead. As a result, PISAA can address complex optimisation problems that SAA would find difficult to address satisfactorily. We demonstrate the good performance of the proposed algorithm on challenging applications including Bayesian network learning and protein folding. Our numerical comparisons suggest that PISAA outperforms simulated annealing, stochastic approximation annealing, and annealing evolutionary stochastic approximation Monte Carlo.
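To make the population-of-chains idea concrete, here is a heavily simplified sketch: several Metropolis-style annealing chains under a square-root cooling schedule, with the "interaction" reduced to restarting the worst chain at the incumbent best. The real PISAA couples chains through a shared self-adjusting weighting, which is omitted here; all settings are illustrative.

```python
import math
import random

def pop_annealing(f, n_chains=5, iters=3000, seed=0):
    """Toy population annealing: independent Metropolis chains with
    T_k ~ 1/sqrt(k) cooling, plus a crude interaction step. A sketch of
    the population idea only, not the PISAA algorithm itself."""
    rng = random.Random(seed)
    xs = [rng.uniform(-10.0, 10.0) for _ in range(n_chains)]
    best_x = min(xs, key=f)
    best_f = f(best_x)
    for k in range(1, iters + 1):
        T = 2.0 / math.sqrt(k)                # square-root cooling schedule
        for i in range(n_chains):
            cand = xs[i] + rng.gauss(0.0, 1.0)
            d = f(cand) - f(xs[i])
            if d < 0 or rng.random() < math.exp(-d / T):  # Metropolis accept
                xs[i] = cand
                if f(cand) < best_f:
                    best_x, best_f = cand, f(cand)
        # interaction: the worst chain restarts at the current best point
        worst = max(range(n_chains), key=lambda i: f(xs[i]))
        xs[worst] = best_x
    return best_x, best_f

# Rugged test function with a unique global minimum of 0 at x = 0.
rugged = lambda x: x * x + 1.0 - math.cos(5.0 * x)
best_x, best_f = pop_annealing(rugged)
```

Even this crude interaction helps the population escape the many local minima of the rugged objective, which is the intuition behind stabilising SAA's self-adjusting mechanism with a population.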
993.
994.
Crime and disease surveillance commonly rely on space-time clustering methods to identify emerging patterns. The goal is to detect spatio-temporal clusters as soon as possible after their occurrence while controlling the rate of false alarms. With this in mind, a spatio-temporal multiple-cluster detection method was developed as an extension of a previous proposal based on a spatial version of the Shiryaev–Roberts statistic. Besides its capability for multiple-cluster detection, the method has fewer input parameters than the previous proposal, making its use more intuitive for practitioners. To evaluate the new methodology, a simulation study is performed over several scenarios, highlighting many advantages of the proposed method. Finally, we present a case study on a crime data set from Belo Horizonte, Brazil.
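The Shiryaev–Roberts statistic underlying the method has a simple sequential recursion, R_n = (1 + R_{n-1})·LR_n, with an alarm raised once R_n crosses a threshold. The one-dimensional Gaussian sketch below shows the recursion only; the paper's contribution is its spatial, multiple-cluster extension, and all parameters here are illustrative.

```python
import math

def shiryaev_roberts(obs, mu0=0.0, mu1=1.0, sigma=1.0):
    """Shiryaev-Roberts recursion R_n = (1 + R_{n-1}) * LR_n for detecting
    a shift in the mean of Gaussian observations from mu0 to mu1.
    Returns the full trace of R_n (alarm when R_n exceeds a threshold)."""
    R = 0.0
    trace = []
    for x in obs:
        # Likelihood ratio of N(mu1, sigma^2) against N(mu0, sigma^2)
        lr = math.exp((x - mu0) ** 2 / (2 * sigma ** 2)
                      - (x - mu1) ** 2 / (2 * sigma ** 2))
        R = (1.0 + R) * lr
        trace.append(R)
    return trace

# The statistic stays small under mu0 = 0 and grows once the mean shifts to 1.
trace = shiryaev_roberts([0.1, -0.2, 0.0, 1.2, 0.9, 1.1])
```

In the surveillance setting, each region and time window contributes its own likelihood ratio, and the spatial version scans candidate cluster zones rather than a single stream.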
995.
Residual marked empirical process-based tests are commonly used in regression models. However, they suffer from data sparseness in high-dimensional space when there are many covariates. This paper has three purposes. First, we suggest a partial dimension-reduction adaptive-to-model testing procedure that is omnibus against general global alternative models while fully using the dimension-reduction structure under the null hypothesis. This is because the procedure automatically adapts to the null and alternative models, and thus largely overcomes the dimensionality problem. Second, to achieve this goal, we propose a ridge-type eigenvalue ratio estimate to automatically determine the number of linear combinations of the covariates under the null and alternative hypotheses. Third, a Monte Carlo approximation to the sampling null distribution is suggested. Unlike existing bootstrap approximation methods, it approximates the sampling null distribution as closely as possible by fully utilising the dimension-reduction model structure under the null model. Simulation studies and real data analysis are conducted to illustrate the performance of the new test and compare it with existing tests.
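An eigenvalue-ratio rule of the kind mentioned above picks the number of informative linear combinations at the largest drop in a (ridged) eigenvalue sequence; the ridge constant keeps ratios stable when trailing eigenvalues are near zero. This generic sketch is an assumption about the general technique, not the paper's exact criterion or ridge term.

```python
def ridge_eigen_ratio(eigvals, c=0.1):
    """Estimate the number of informative directions as the position of
    the largest drop in the ridged eigenvalue sequence, i.e. the index
    maximising (lambda_i + c) / (lambda_{i+1} + c). Generic sketch only."""
    lam = sorted(eigvals, reverse=True)
    ratios = [(lam[i] + c) / (lam[i + 1] + c) for i in range(len(lam) - 1)]
    return 1 + max(range(len(ratios)), key=ratios.__getitem__)

# Three large eigenvalues followed by noise-level ones -> estimate is 3.
q_hat = ridge_eigen_ratio([5.1, 4.0, 3.2, 0.05, 0.03, 0.01])
```

Without the ridge term, a ratio such as 0.03/0.01 between two noise-level eigenvalues could spuriously dominate; adding c damps exactly those unstable ratios.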
996.
\(\alpha \)-Stable distributions are a family of probability distributions found suitable for modelling many complex processes and phenomena in several research fields, such as medicine, physics, finance and networking. However, the lack of closed-form expressions makes their evaluation analytically intractable, and alternative approaches are computationally expensive. Existing numerical programs are not fast enough for certain applications and do not exploit the parallel power of general-purpose graphics processing units. In this paper, we develop novel parallel algorithms for the probability density function and cumulative distribution function (including a parallel Gauss–Kronrod quadrature), the quantile function, a random number generator and maximum likelihood estimation of \(\alpha \)-stable distributions using OpenCL, achieving significant speedups and precision in all cases. Thanks to the use of OpenCL, we also evaluate the results of our library on different GPU architectures.
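The reason quadrature dominates the cost is visible in the density itself: for the symmetric standard case the pdf is an oscillatory integral, f(x) = (1/π)∫₀^∞ exp(−t^α) cos(xt) dt, with no closed form except for special α. The serial sketch below evaluates it with a plain trapezoidal rule (a stand-in for the parallel Gauss–Kronrod quadrature the paper implements on the GPU); truncation point and step count are illustrative choices.

```python
import math

def stable_pdf(x, alpha, n=20000, t_max=50.0):
    """Symmetric (beta = 0) standard alpha-stable density via the
    inversion integral f(x) = (1/pi) * int_0^inf exp(-t**alpha)*cos(x*t) dt,
    approximated with the trapezoidal rule on [0, t_max]."""
    h = t_max / n
    # Trapezoid endpoints: integrand is 1 at t = 0, ~0 at t = t_max.
    s = 0.5 * (1.0 + math.exp(-t_max ** alpha) * math.cos(x * t_max))
    for k in range(1, n):
        t = k * h
        s += math.exp(-t ** alpha) * math.cos(x * t)
    return s * h / math.pi

# alpha = 1 recovers the Cauchy density 1 / (pi * (1 + x^2)) exactly.
approx = stable_pdf(0.5, 1.0)
exact = 1.0 / (math.pi * (1.0 + 0.25))
```

Each evaluation point needs thousands of integrand samples, and likelihoods need thousands of evaluation points, which is why mapping the quadrature onto a GPU pays off.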
997.
This study extends the affine Nelson–Siegel model by introducing a time-varying volatility component, modelled as a standard EGARCH process, in the observation equation of the yield curve. The model is cast in state-space form and compared empirically with the standard affine and dynamic Nelson–Siegel models in terms of in-sample fit and out-of-sample forecast accuracy. The affine-based extended model that accounts for time-varying volatility outperforms the other models in fitting the yield curve and produces relatively more accurate 6- and 12-month-ahead forecasts, while the standard affine model yields more precise forecasts at very short horizons. The study concludes that the standard and affine Nelson–Siegel models have higher forecasting capability than their EGARCH-based counterparts at short forecast horizons (i.e., 1 month), whereas the EGARCH-based extended models excel at medium and longer horizons.
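The EGARCH(1,1) component referred to above drives the log conditional variance, log σ²_t = ω + β log σ²_{t−1} + α(|z_{t−1}| − E|z|) + γ z_{t−1}, so positivity of the variance is automatic and the γ term captures leverage. The sketch below iterates that recursion on a toy return series; the parameter values are illustrative, not the paper's estimates.

```python
import math

def egarch_variance(returns, omega=-0.1, alpha=0.1, gamma=-0.05, beta=0.95):
    """EGARCH(1,1) conditional-variance path:
    log s2_t = omega + beta*log s2_{t-1} + alpha*(|z| - E|z|) + gamma*z,
    where z is the standardised residual. Returns the variance path."""
    e_abs_z = math.sqrt(2.0 / math.pi)     # E|z| for standard normal z
    log_s2 = omega / (1.0 - beta)          # start at the unconditional level
    path = []
    for r in returns:
        s2 = math.exp(log_s2)              # exp() guarantees s2 > 0
        path.append(s2)
        z = r / math.sqrt(s2)
        log_s2 = (omega + beta * log_s2
                  + alpha * (abs(z) - e_abs_z) + gamma * z)
    return path

vol_path = egarch_variance([0.01, -0.03, 0.02, -0.05, 0.04])
```

In the yield-curve setting this recursion feeds the observation-equation error variance, which is what lets the extended model adapt its fit when yield volatility clusters.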
998.
Optimum experimental design theory has recently been extended to parameter estimation in copula models. These models add flexibility by splitting the model parameter set into marginal and dependence parameters. However, this separation also raises the natural issue of estimating only a subset of all model parameters. In this work, we treat this problem by applying \(D_s\)-optimality to copula models. First, we provide an extension of the corresponding equivalence theory. Then, we analyse a wide range of flexible copula models to highlight the usefulness of \(D_s\)-optimality in many possible scenarios. Finally, we discuss how the introduced design criterion also relates to the more general issues of copula selection and optimal design for model discrimination.
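For a partitioned information matrix M with the s parameters of interest first, the classical \(D_s\)-criterion is det(M)/det(M₂₂), where M₂₂ is the nuisance-parameter block. The toy computation below shows the criterion on a hand-made 3×3 matrix; the matrix values are illustrative and unrelated to any particular copula model.

```python
def det(m):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def ds_criterion(M, s):
    """D_s-criterion det(M) / det(M22) for interest in the first s
    parameters; M22 is the block for the remaining nuisance parameters.
    Maximising this over designs gives the D_s-optimal design."""
    M22 = [row[s:] for row in M[s:]]
    return det(M) / det(M22)

# Toy 3x3 information matrix; interest in the first parameter (s = 1),
# e.g. a dependence parameter with two marginal nuisance parameters.
M = [[4.0, 1.0, 0.5],
     [1.0, 3.0, 0.2],
     [0.5, 0.2, 2.0]]
val = ds_criterion(M, 1)
```

Dividing out det(M₂₂) is what makes the criterion reward information about the interest parameters specifically, rather than overall information as plain D-optimality does.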
999.
This paper concerns the specification of multivariate prediction regions, which are useful in time-series applications whenever we aim to consider not just a single forecast but a group of consecutive forecasts. We review a general result on improved multivariate prediction and use it to calculate conditional prediction intervals for Markov process models so that the associated coverage probability is close to the target value. This improved solution is asymptotically superior to the estimative one, which is simpler but may lead to unreliable predictive conclusions. An application to general autoregressive models is presented, focusing in particular on AR and ARCH models.
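The "estimative" baseline the abstract criticises is easy to state for an AR(1) process: plug the estimated coefficient and innovation standard deviation into the one-step Gaussian interval, ignoring estimation error, which is what causes the under-coverage the improved solution corrects. The sketch below is that naive baseline only, not the paper's improved conditional interval.

```python
import math
import random

def ar1_estimative_interval(x, level_z=1.96):
    """Estimative one-step prediction interval for an AR(1) series:
    plug in least-squares estimates of phi and sigma and treat them as
    known (the simple solution that can under-cover)."""
    n = len(x)
    phi = (sum(x[t] * x[t - 1] for t in range(1, n))
           / sum(x[t - 1] ** 2 for t in range(1, n)))
    resid = [x[t] - phi * x[t - 1] for t in range(1, n)]
    sigma = math.sqrt(sum(e * e for e in resid) / (n - 1))
    centre = phi * x[-1]
    return centre - level_z * sigma, centre + level_z * sigma

# Simulate an AR(1) path with phi = 0.6, unit innovations, and build
# the nominal 95% interval for the next observation.
rng = random.Random(1)
x, xt = [], 0.0
for _ in range(500):
    xt = 0.6 * xt + rng.gauss(0.0, 1.0)
    x.append(xt)
lo, hi = ar1_estimative_interval(x)
```

Because phi and sigma are estimated, the true coverage of this interval falls below the nominal 95%; the paper's improved conditional intervals adjust the limits so coverage is close to target.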
1000.
We consider kernel methods for constructing nonparametric estimators of a regression function based on incomplete data. To handle incomplete covariates, we employ Horvitz–Thompson-type inverse weighting techniques, where the weights are the selection probabilities. The unknown selection probabilities are themselves estimated using (1) kernel regression, when their functional form is completely unknown, and (2) the least-squares method, when the selection probabilities belong to a known class of candidate functions. To assess the overall performance of the proposed estimators, we establish exponential upper bounds on the \(L_p\) norms, \(1\le p<\infty \), of our estimators; these bounds immediately yield various strong convergence results. We also apply our results to the important problem of statistical classification with partially observed covariates.
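The Horvitz–Thompson idea is concrete in a Nadaraya–Watson estimator: each complete case is reweighted by the inverse of its selection probability so that the observed sample stands in for the full one. In the sketch below the selection probabilities are taken as known for simplicity, whereas the paper estimates them by kernel regression or least squares; the data and bandwidth are illustrative.

```python
import math

def ipw_kernel_regression(x0, data, h=0.5):
    """Nadaraya-Watson estimate of m(x0) from partially observed pairs,
    with each complete case reweighted by 1/pi(x) (Horvitz-Thompson
    style inverse-probability weighting)."""
    K = lambda u: math.exp(-0.5 * u * u)       # Gaussian kernel
    num = den = 0.0
    for x, y, observed, pi in data:
        if not observed:
            continue                           # y missing: drop, reweight rest
        w = K((x - x0) / h) / pi               # inverse-probability weight
        num += w * y
        den += w
    return num / den

# Toy data: (x, y, observed?, selection probability); true m(x) = 2x.
data = [(0.0, 0.1, True, 0.9), (0.5, 1.1, True, 0.8),
        (1.0, 2.0, True, 0.9), (1.5, 2.9, False, 0.5),
        (2.0, 4.1, True, 0.7)]
m_hat = ipw_kernel_regression(1.0, data)
```

Without the 1/pi reweighting, points that are systematically less likely to be observed would be under-represented and the estimate of m would be biased.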
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号