901.
We present the parallel and interacting stochastic approximation annealing (PISAA) algorithm, a stochastic simulation procedure for global optimisation that extends and improves stochastic approximation annealing (SAA) using population Monte Carlo ideas. The efficiency of the standard SAA algorithm depends crucially on its self-adjusting mechanism, which suffers from stability issues in high-dimensional or rugged optimisation problems. The proposed algorithm simulates a population of SAA chains that interact with each other in a manner that significantly improves both the stability of the self-adjusting mechanism and the search for the global optimum in the sampling space, while inheriting SAA's desirable convergence properties when a square-root cooling schedule is used. It can be implemented in parallel computing environments to mitigate the computational overhead. As a result, PISAA can address complex optimisation problems that SAA would find difficult to address satisfactorily. We demonstrate the good performance of the proposed algorithm on challenging applications including Bayesian network learning and protein folding. Our numerical comparisons suggest that PISAA outperforms simulated annealing, stochastic approximation annealing, and annealing evolutionary stochastic approximation Monte Carlo.
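The population-with-interaction idea can be sketched in a toy form. Below is a minimal population-annealing sketch, not the actual PISAA algorithm: several random-walk chains run under a square-root cooling schedule, and the worst chain is periodically restarted at the best state found so far. All function names, proposal scales, and interaction rules here are illustrative choices.

```python
import numpy as np

def population_annealing_sketch(f, dim=2, n_chains=5, n_iter=2000, seed=0):
    """Toy population-annealing sketch (illustrative, not full PISAA):
    parallel annealing chains share information, which stabilises the
    search compared with a single chain."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_chains, dim))          # one state per chain
    fx = np.array([f(xi) for xi in x])
    best_x, best_f = x[fx.argmin()].copy(), fx.min()
    for t in range(1, n_iter + 1):
        temp = 1.0 / np.sqrt(t)                   # square-root cooling schedule
        prop = x + rng.normal(scale=0.5, size=x.shape)   # random-walk proposals
        fp = np.array([f(pi) for pi in prop])
        # Metropolis acceptance (exponent clipped at 0 to avoid overflow)
        accept = rng.random(n_chains) < np.exp(np.minimum(0.0, -(fp - fx) / temp))
        x[accept], fx[accept] = prop[accept], fp[accept]
        if fx.min() < best_f:
            best_x, best_f = x[fx.argmin()].copy(), fx.min()
        if t % 100 == 0:                          # interaction: worst chain jumps to the best state
            worst = fx.argmax()
            x[worst], fx[worst] = best_x.copy(), best_f
    return best_x, best_f

x_star, f_star = population_annealing_sketch(lambda x: np.sum(x**2))
```

On this convex toy objective the population quickly concentrates near the origin; the interaction step matters most on rugged, multimodal surfaces like those in the paper's protein-folding examples.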
903.
Crime and disease surveillance commonly rely on space-time clustering methods to identify emerging patterns. The goal is to detect spatio-temporal clusters as soon as possible after their occurrence while controlling the rate of false alarms. With this in mind, a spatio-temporal multiple-cluster detection method was developed as an extension of a previous proposal based on a spatial version of the Shiryaev–Roberts statistic. Besides its capability of multiple-cluster detection, the method has fewer input parameters than the previous proposal, making it more intuitive for practitioners to use. To evaluate the new methodology, a simulation study is performed over several scenarios, highlighting many advantages of the proposed method. Finally, we present a case study on a crime dataset from Belo Horizonte, Brazil.
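The spatial statistic referred to above builds on the classical Shiryaev–Roberts recursion for sequential change detection, R_n = (1 + R_{n-1}) Λ_n, where Λ_n is the likelihood ratio of the post-change to the pre-change density. The following purely temporal sketch (a known Gaussian mean shift; all parameter values are illustrative) shows the mechanism, not the paper's spatio-temporal extension.

```python
import numpy as np

def shiryaev_roberts(obs, mu0=0.0, mu1=1.0, sigma=1.0):
    """Classical Shiryaev-Roberts recursion R_n = (1 + R_{n-1}) * Lambda_n
    for detecting a shift from N(mu0, sigma^2) to N(mu1, sigma^2)."""
    R, path = 0.0, []
    for x in obs:
        # Gaussian likelihood ratio of post-change vs pre-change density
        lam = np.exp((mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2)
        R = (1.0 + R) * lam
        path.append(R)
    return np.array(path)

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 50), rng.normal(1, 1, 50)])  # change at n = 50
R = shiryaev_roberts(data)
alarm = int(np.argmax(R > 50))   # first crossing of an illustrative threshold
```

Before the change R drifts slowly (its pre-change expectation is n), and after the change it grows geometrically, which is what makes the crossing time a quick detector.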
904.
Residual-marked empirical-process-based tests are commonly used in regression models. However, they suffer from data sparseness in high-dimensional space when there are many covariates. This paper has three purposes. First, we suggest a partial dimension-reduction adaptive-to-model testing procedure that can be omnibus against general global alternative models while fully using the dimension-reduction structure under the null hypothesis. This is because the procedure automatically adapts to the null and alternative models, and thus largely overcomes the dimensionality problem. Second, to achieve this goal, we propose a ridge-type eigenvalue ratio estimate to automatically determine the number of linear combinations of the covariates under the null and alternative hypotheses. Third, a Monte Carlo approximation to the sampling null distribution is suggested. Unlike existing bootstrap approximation methods, it gives an approximation as close to the sampling null distribution as possible by fully utilising the dimension-reduction model structure under the null model. Simulation studies and real data analysis are then conducted to illustrate the performance of the new test and compare it with existing tests.
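The ridge-type eigenvalue ratio idea can be illustrated generically: order the eigenvalues of a candidate matrix, add a small ridge constant so ratios of near-zero eigenvalues are damped, and pick the index where the ratio of consecutive (ridged) eigenvalues is smallest. The ridge constant and the toy matrix below are illustrative assumptions; the paper's exact ridge choice may differ.

```python
import numpy as np

def ridge_ratio_rank(M, c=None):
    """Ridge-type eigenvalue-ratio estimate of the number of informative
    directions: minimise (lam_{i+1} + c) / (lam_i + c) over ordered
    eigenvalues; the ridge c guards against ratios of near-zero values."""
    lam = np.sort(np.linalg.eigvalsh(M))[::-1]    # eigenvalues, descending
    if c is None:
        c = 0.1 * lam[0] / len(lam)               # illustrative ridge constant
    ratios = (lam[1:] + c) / (lam[:-1] + c)
    return int(np.argmin(ratios)) + 1             # estimated number of directions

# toy candidate matrix with two dominant directions
M = np.diag([5.0, 3.0, 0.05, 0.03, 0.01])
q_hat = ridge_ratio_rank(M)
```

Without the ridge, the ratio 0.03/0.05 among the noise eigenvalues could compete with the true gap; adding c makes the drop after the second eigenvalue dominate.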
905.
\(\alpha \)-Stable distributions are a family of probability distributions found to be suitable for modelling many complex processes and phenomena in several research fields, such as medicine, physics, finance and networking, among others. However, the lack of closed-form expressions makes their evaluation analytically intractable, and alternative approaches are computationally expensive. Existing numerical programs are not fast enough for certain applications and do not exploit the parallel power of general-purpose graphics processing units. In this paper, we develop novel parallel algorithms for the probability density function and cumulative distribution function (including a parallel Gauss–Kronrod quadrature), quantile function, random number generator and maximum likelihood estimation of \(\alpha \)-stable distributions using OpenCL, achieving significant speedups and precision in all cases. Thanks to the use of OpenCL, we also evaluate the results of our library with different GPU architectures.
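The "no closed form" point is concrete: for a symmetric \(\alpha\)-stable law the density must be recovered by numerically inverting the characteristic function, f(x) = (1/π) ∫₀^∞ exp(−t^α) cos(tx) dt. The CPU sketch below uses a plain trapezoidal rule (the paper's library uses a parallel Gauss–Kronrod quadrature on the GPU); grid sizes and truncation point are illustrative.

```python
import numpy as np

def sym_stable_pdf(x, alpha, n=20000, tmax=60.0):
    """Symmetric alpha-stable density via numerical inversion of the
    characteristic function, using a simple trapezoidal rule."""
    t = np.linspace(1e-9, tmax, n)
    integrand = np.exp(-t**alpha) * np.cos(t * np.asarray(x)[..., None])
    dt = t[1] - t[0]
    # trapezoidal rule along the last axis
    integral = 0.5 * (integrand[..., 1:] + integrand[..., :-1]).sum(axis=-1) * dt
    return integral / np.pi

p_cauchy = sym_stable_pdf(np.array([0.0]), alpha=1.0)  # alpha = 1 is the Cauchy law
p_gauss  = sym_stable_pdf(np.array([0.0]), alpha=2.0)  # alpha = 2 is N(0, 2)
```

The two special cases with known closed forms (Cauchy: f(0) = 1/π; Gaussian with variance 2: f(0) = 1/(2√π)) give a quick correctness check, and they also show why a GPU helps: every grid point of the quadrature is an independent evaluation.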
906.
This study extends the affine Nelson–Siegel model by introducing a time-varying volatility component, modelled as a standard EGARCH process, into the observation equation of the yield curve. The model is cast in a state-space framework and empirically compared to the standard affine and dynamic Nelson–Siegel models in terms of in-sample fit and out-of-sample forecast accuracy. The affine-based extended model that accounts for time-varying volatility outpaces the other models in fitting the yield curve and produces relatively more accurate 6- and 12-month-ahead forecasts, while the standard affine model gives more precise forecasts at very short horizons. The study concludes that the standard and affine Nelson–Siegel models have higher forecasting capability than their EGARCH-based counterparts at short forecast horizons, i.e., 1 month, whereas the EGARCH-based extended models perform excellently at medium and longer forecast horizons.
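The volatility component referred to above follows the standard EGARCH(1,1) log-variance recursion, log σ²ₜ = ω + β log σ²ₜ₋₁ + α(|zₜ₋₁| − E|z|) + γ zₜ₋₁. The filter below is a generic sketch of that recursion on a univariate series, not the paper's full state-space yield-curve model; the parameter values are illustrative.

```python
import numpy as np

def egarch_filter(returns, omega=-0.1, alpha=0.1, gamma=-0.05, beta=0.95):
    """EGARCH(1,1) variance filter: modelling log-variance keeps the
    conditional variance positive without parameter constraints, and the
    gamma term lets negative shocks raise volatility more (leverage)."""
    e_abs = np.sqrt(2.0 / np.pi)             # E|z| for standard normal z
    log_s2 = np.empty(len(returns))
    log_s2[0] = np.log(np.var(returns))      # initialise at the sample variance
    for t in range(1, len(returns)):
        z = returns[t - 1] / np.exp(0.5 * log_s2[t - 1])   # standardised innovation
        log_s2[t] = (omega + beta * log_s2[t - 1]
                     + alpha * (abs(z) - e_abs) + gamma * z)
    return np.exp(log_s2)                    # conditional variances

rng = np.random.default_rng(0)
s2 = egarch_filter(rng.normal(0, 1, 500))
```

Exponentiating at the end is what guarantees positivity, which is the practical reason EGARCH is a convenient choice inside an observation equation.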
907.
Optimum experimental design theory has recently been extended to parameter estimation in copula models. These models add flexibility by splitting the model parameter set into marginal and dependence parameters. However, this separation also raises the natural issue of estimating only a subset of all model parameters. In this work, we treat this problem by applying \(D_s\)-optimality to copula models. First, we provide an extension of the corresponding equivalence theory. Then, we analyze a wide range of flexible copula models to highlight the usefulness of \(D_s\)-optimality in many possible scenarios. Finally, we discuss how the introduced design criterion also relates to the more general issues of copula selection and optimal design for model discrimination.
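The \(D_s\)-criterion itself is simple to state: with information matrix M partitioned so that the s parameters of interest come first, a design is scored by det(M) / det(M₂₂), where M₂₂ is the block for the remaining nuisance parameters. The sketch below evaluates this for an ordinary linear model, not the copula-specific information matrices of the paper; the design points are illustrative.

```python
import numpy as np

def ds_criterion(X, s):
    """D_s-criterion det(M) / det(M22) for the first s parameters of a
    linear model with design matrix X, where M = X'X and M22 is the
    block belonging to the remaining (nuisance) parameters."""
    M = X.T @ X
    M22 = M[s:, s:]
    return np.linalg.det(M) / np.linalg.det(M22)

# interest in the slope only (s = 1) of a line with nuisance intercept
x = np.array([-1.0, 0.0, 1.0])
X = np.column_stack([x, np.ones_like(x)])   # columns: [slope, intercept]
val = ds_criterion(X, s=1)
```

Maximising this ratio over designs concentrates information on the slope while discounting what is spent on the intercept, which is exactly the subset-estimation issue the abstract describes for marginal versus dependence parameters.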
908.
This paper concerns the specification of multivariate prediction regions, which may be useful in time-series applications whenever we aim to consider not a single forecast but a group of consecutive forecasts. We review a general result on improved multivariate prediction and use it to calculate conditional prediction intervals for Markov process models so that the associated coverage probability is close to the target value. This improved solution is asymptotically superior to the estimative one, which is simpler but may lead to unreliable predictive conclusions. An application to general autoregressive models is presented, focusing in particular on AR and ARCH models.
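The "estimative" baseline the abstract criticises is the naive interval that plugs parameter estimates into the predictive distribution as if they were the truth. The sketch below builds that plug-in one-step interval for a Gaussian AR(1); the improved, correctly-calibrated solution from the paper is not reproduced here, and the simulation settings are illustrative.

```python
import numpy as np

def ar1_estimative_interval(y):
    """Estimative 95% one-step prediction interval for a Gaussian AR(1):
    least-squares estimates of phi and sigma are plugged in, ignoring
    estimation error, so actual coverage can fall below the target."""
    phi = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
    resid = y[1:] - phi * y[:-1]
    sigma = np.sqrt(np.mean(resid ** 2))
    z = 1.959963984540054                      # 97.5% standard normal quantile
    centre = phi * y[-1]
    return centre - z * sigma, centre + z * sigma

rng = np.random.default_rng(2)
y = np.empty(300)
y[0] = 0.0
for t in range(1, 300):                        # simulate AR(1) with phi = 0.6
    y[t] = 0.6 * y[t - 1] + rng.normal()
lo, hi = ar1_estimative_interval(y)
```

Because the uncertainty in the estimated phi and sigma is ignored, such intervals undercover in small samples, which is the gap the paper's improved conditional intervals close.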
909.
We consider kernel methods to construct nonparametric estimators of a regression function based on incomplete data. To handle incomplete covariates, we employ Horvitz–Thompson-type inverse weighting techniques, where the weights are the selection probabilities. The unknown selection probabilities are themselves estimated using (1) kernel regression, when the functional form of these probabilities is completely unknown, and (2) the least-squares method, when the selection probabilities belong to a known class of candidate functions. To assess the overall performance of the proposed estimators, we establish exponential upper bounds on the \(L_p\) norms, \(1\le p<\infty \), of our estimators; these bounds immediately yield various strong convergence results. We also apply our results to the important problem of statistical classification with partially observed covariates.
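The core estimator combines two standard pieces: a Nadaraya–Watson kernel smoother and Horvitz–Thompson weights 1/πᵢ that up-weight complete cases that were unlikely to be observed. The sketch below assumes the selection probabilities are known (the paper's harder setting estimates them); the bandwidth, kernel, and simulated missingness mechanism are all illustrative.

```python
import numpy as np

def ht_kernel_regression(x_obs, y_obs, pi_obs, x_grid, h=0.1):
    """Nadaraya-Watson estimator with Horvitz-Thompson inverse weighting:
    each complete case is weighted by 1 / pi_i, its selection probability,
    which corrects the bias induced by covariate-dependent missingness."""
    w = 1.0 / pi_obs
    K = np.exp(-0.5 * ((x_grid[:, None] - x_obs[None, :]) / h) ** 2)  # Gaussian kernel
    return (K * w * y_obs).sum(axis=1) / (K * w).sum(axis=1)

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 400)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 400)
pi = 0.4 + 0.5 * x                        # selection probability depends on x
seen = rng.random(400) < pi               # missingness indicator
m_hat = ht_kernel_regression(x[seen], y[seen], pi[seen], np.array([0.25, 0.75]))
```

Without the 1/πᵢ weights, regions where data are rarely observed (small x here) would be systematically under-represented in the local averages.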