971.
Based on statistics on the outflow of Chinese SCI and SSCI academic papers in 2014, the current severity of the outflow of academic papers is presented comprehensively and concretely. By systematically reviewing the related research, this paper, on the one hand, identifies four specific causes of the outflow: the lack of macro-level national guidance on paper outflow, research evaluation mechanisms that incentivize publishing the best papers abroad, the absence of Chinese journals with international influence, and the need to improve the operating mechanisms and internationalization of domestic journals. On the other hand, it proposes five macro-level countermeasures: strengthening national guidance, concentrating superior resources to build internationally top-tier journals, adjusting research performance evaluation policies, deepening reform of the journal system, and establishing an award mechanism for domestic scientific papers.
972.
We update a previous approach to the estimation of the size of an open population when there are multiple lists at each time point. Our motivation is 35 years of longitudinal data on the detection of drug users by the Central Registry of Drug Abuse in Hong Kong. We develop a two-stage smoothing spline approach. This gives a flexible and easily implemented alternative to the previous method, which was based on kernel smoothing. The new method retains the property of reducing the variability of the individual estimates at each time point. We evaluate the new method by means of a simulation study that includes an examination of the effects of variable selection. The new method is then applied to data collected by the Central Registry of Drug Abuse. The parameter estimates obtained are compared with the well-known Jolly–Seber estimates based on single capture methods.
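The second stage of the approach (smoothing noisy per-time-point abundance estimates) can be sketched as follows. The data, years, and smoothing parameter below are purely illustrative, not the Central Registry data:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Simulated stage-one output: one noisy population-size estimate per year.
rng = np.random.default_rng(4)
years = np.arange(1977, 2012).astype(float)
true_size = 8000 + 3000 * np.sin((years - 1977) / 8.0)
raw_est = true_size + rng.normal(0, 600, years.size)

# Stage two: a smoothing spline pools information across years, reducing the
# variability of the individual estimates (s controls the smoothness budget).
spline = UnivariateSpline(years, raw_est, s=years.size * 600.0 ** 2)
smooth_est = spline(years)
```

The smoothed series varies far less year-to-year than the raw stage-one estimates, which is the variance-reduction property the abstract refers to.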
973.
This article describes how a frequentist model averaging approach can be used for concentration–QT analyses in the context of thorough QTc studies. Based on simulations, we have concluded that, starting from three candidate model families (linear, exponential, and Emax), the model averaging approach leads to treatment effect estimates that are quite robust with respect to control of the type I error in nearly all simulated scenarios; in particular, with the model averaging approach, the type I error appears less sensitive to model misspecification than with the widely used linear model. We also noticed few differences in performance between the model averaging approach and the more classical model selection approach, but we believe that, although both can be recommended in practice, the model averaging approach can be more appealing because of some deficiencies of the model selection approach pointed out in the literature. We think that a model averaging or model selection approach should be systematically considered for conducting concentration–QT analyses. Copyright © 2016 John Wiley & Sons, Ltd.
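A minimal sketch of frequentist model averaging over the three candidate families, on hypothetical concentration-effect data with AIC-based weights (the data, starting values, and weighting scheme are illustrative assumptions, not the article's exact procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical concentration-effect data.
rng = np.random.default_rng(0)
conc = np.linspace(0.1, 10, 40)
effect = 5 * conc / (2 + conc) + rng.normal(0, 0.5, conc.size)

# The three candidate model families named in the abstract.
models = {
    "linear": lambda x, a, b: a + b * x,
    "exponential": lambda x, a, b: a * (1 - np.exp(-b * x)),
    "emax": lambda x, emax, ec50: emax * x / (ec50 + x),
}

def aic(y, yhat, k):
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

fits, aics = {}, {}
for name, f in models.items():
    params, _ = curve_fit(f, conc, effect, p0=[1.0, 1.0], maxfev=10000)
    fits[name] = (f, params)
    aics[name] = aic(effect, f(conc, *params), len(params))

# Akaike weights combine the fits into one model-averaged estimate
# of the effect at a concentration of interest (here 8.0).
a = np.array(list(aics.values()))
w = np.exp(-0.5 * (a - a.min()))
w /= w.sum()
preds = np.array([f(8.0, *p) for f, p in fits.values()])
averaged = float(w @ preds)
```

Because the weights downweight poorly fitting families automatically, the averaged estimate is less sensitive to picking the wrong single family than committing to, say, the linear model.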
974.
975.
In this article, we use cumulative residual Kullback–Leibler information (CRKL) and cumulative Kullback–Leibler information (CKL) to construct two goodness-of-fit test statistics for testing exponentiality with progressively Type-II censored data. The powers of the proposed tests are compared with the power of the goodness-of-fit test for exponentiality introduced by Balakrishnan, Habibi Rad, and Arghami (2007, IEEE Transactions on Reliability 56(2):301–307). We show that when the hazard function of the alternative is monotone decreasing, the test based on CRKL has higher power, and when the hazard function of the alternative is non-monotone, the test based on CKL has higher power. When it is monotone increasing, however, the power difference between the test based on CKL and the earlier test is not so remarkable. The use of the proposed tests is shown in an illustrative example.
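The flavor of a cumulative-residual-KL comparison against a fitted exponential can be sketched for complete (uncensored) data. The integrand below is one common uncensored form of the divergence; it is an assumption for illustration and is not the progressively Type-II censored statistic of the article:

```python
import numpy as np

# CRKL-style divergence between the empirical survival function Fbar and a
# fitted exponential survival function Gbar:
#   CRKL = integral of [ Fbar*log(Fbar/Gbar) - (Fbar - Gbar) ] dx  >= 0,
# since a*log(a/b) - (a - b) >= 0 pointwise for a, b > 0.
rng = np.random.default_rng(5)
x = np.sort(rng.exponential(2.0, 300))

lam_hat = 1.0 / x.mean()                      # MLE of the exponential rate
grid = np.linspace(0.0, x.max(), 2000)
Fbar = 1.0 - np.searchsorted(x, grid, side="right") / x.size
Gbar = np.exp(-lam_hat * grid)

with np.errstate(divide="ignore", invalid="ignore"):
    integrand = np.where(Fbar > 0, Fbar * np.log(Fbar / Gbar), 0.0) - (Fbar - Gbar)

# Trapezoidal integration over the grid.
crkl = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid)))
```

Under the null (data truly exponential) the statistic is small; large values signal departure from exponentiality, so the test rejects for large CRKL.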
976.
977.
Most existing reduced-form macroeconomic multivariate time series models employ elliptical disturbances, so that the forecast densities produced are symmetric. In this article, we use a copula model with asymmetric margins to produce forecast densities with the scope for severe departures from symmetry. Empirical and skew t distributions are employed for the margins, and a high-dimensional Gaussian copula is used to jointly capture cross-sectional and (multivariate) serial dependence. The copula parameter matrix is given by the correlation matrix of a latent stationary and Markov vector autoregression (VAR). We show that the likelihood can be evaluated efficiently using the unique partial correlations, and estimate the copula using Bayesian methods. We examine the forecasting performance of the model for four U.S. macroeconomic variables between 1975:Q1 and 2011:Q2 using quarterly real-time data. We find that the point and density forecasts from the copula model are competitive with those from a Bayesian VAR. During the recent recession the forecast densities exhibit substantial asymmetry, avoiding some of the pitfalls of the symmetric forecast densities from the Bayesian VAR. We show that the asymmetries in the predictive distributions of GDP growth and inflation are similar to those found in the probabilistic forecasts from the Survey of Professional Forecasters. Last, we find that unlike the linear VAR model, our fitted Gaussian copula models exhibit nonlinear dependencies between some macroeconomic variables. This article has online supplementary material.
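The copula construction can be illustrated with a toy two-variable cross-sectional version using empirical margins and a rank-based normal-score transform. The data and variable names are made up, and this sketch omits the latent VAR that the article uses to capture serial dependence:

```python
import numpy as np
from scipy import stats

# Hypothetical data: a right-skewed "growth" margin and a correlated second margin.
rng = np.random.default_rng(1)
n = 500
gdp = rng.gamma(2.0, 1.0, n)                 # asymmetric margin
infl = rng.normal(2.0, 1.0, n) + 0.3 * gdp
data = np.column_stack([gdp, infl])

# 1. Map each margin to normal scores via its empirical CDF (rank transform).
ranks = stats.rankdata(data, axis=0) / (n + 1)
z = stats.norm.ppf(ranks)

# 2. The Gaussian copula parameter matrix is the correlation of the scores.
R = np.corrcoef(z, rowvar=False)

# 3. Simulate from the copula and map back through empirical quantiles, so the
#    simulated forecast densities inherit the asymmetry of the margins.
sim_z = rng.multivariate_normal(np.zeros(2), R, size=2000)
sim_u = stats.norm.cdf(sim_z)
sim = np.column_stack([np.quantile(data[:, j], sim_u[:, j]) for j in range(2)])
```

Even though the copula itself is Gaussian, the simulated draws for the skewed margin remain skewed, which is how the model produces asymmetric forecast densities.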
978.
This paper introduces a finite mixture of canonical fundamental skew \(t\) (CFUST) distributions for a model-based approach to clustering where the clusters are asymmetric and possibly long-tailed (in: Lee and McLachlan, arXiv:1401.8182 [stat.ME], 2014b). The family of CFUST distributions includes the restricted multivariate skew \(t\) and unrestricted multivariate skew \(t\) distributions as special cases. In recent years, a few versions of the multivariate skew \(t\) (MST) mixture model have been put forward, together with various EM-type algorithms for parameter estimation. These formulations adopted either a restricted or unrestricted characterization for their MST densities. In this paper, we examine a natural generalization of these developments, employing the CFUST distribution as the parametric family for the component distributions, and point out that the restricted and unrestricted characterizations can be unified under this general formulation. We show that an exact implementation of the EM algorithm can be achieved for the CFUST distribution and mixtures of this distribution, and present some new analytical results for a conditional expectation involved in the E-step.
979.
In nonregular problems where the conventional \(n\) out of \(n\) bootstrap is inconsistent, the \(m\) out of \(n\) bootstrap provides a useful remedy to restore consistency. Conventionally, optimal choice of the bootstrap sample size \(m\) is taken to be the minimiser of a frequentist error measure, estimation of which has posed a major difficulty hindering practical application of the \(m\) out of \(n\) bootstrap method. Relatively little attention has been paid to a stronger, stochastic, version of the optimal bootstrap sample size, defined as the minimiser of an error measure calculated directly from the observed sample. Motivated by this stronger notion of optimality, we develop procedures for calculating the stochastically optimal value of \(m\). Our procedures are shown to work under special forms of Edgeworth-type expansions which are typically satisfied by statistics of the shrinkage type. Theoretical and empirical properties of our methods are illustrated with three examples, namely the James–Stein estimator, the ridge regression estimator and the post-model-selection regression estimator.
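The basic \(m\) out of \(n\) bootstrap mechanics can be sketched on a classic nonregular example, the maximum of a uniform sample, where the \(n\) out of \(n\) bootstrap is inconsistent. The deterministic choice \(m = n^{2/3}\) below is only illustrative and is not the stochastically optimal \(m\) developed in the paper:

```python
import numpy as np

# Estimating theta from Uniform(0, theta) data via the sample maximum.
rng = np.random.default_rng(2)
n = 1000
theta = 1.0
x = rng.uniform(0, theta, n)
stat = x.max()

m = int(n ** (2 / 3))           # bootstrap sample size m << n (illustrative rule)
B = 2000
boot = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=m, replace=True)
    # m * (stat - resample max) mimics the limit law of n * (theta - stat),
    # which the n-out-of-n bootstrap fails to capture.
    boot[b] = m * (stat - xb.max())

# One-sided upper confidence limit for theta from the bootstrap quantile.
ci_upper = stat + np.quantile(boot, 0.95) / n
```

Taking \(m \to \infty\) with \(m/n \to 0\) is what restores consistency; the paper's contribution is choosing \(m\) optimally from the observed sample rather than by such a fixed rule.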
980.
The accelerated failure time (AFT) models have proved useful in many contexts, though heavy censoring (as for example in cancer survival) and high dimensionality (as for example in microarray data) cause difficulties for model fitting and model selection. We propose new approaches to variable selection for censored data, based on AFT models optimized using regularized weighted least squares. The regularized technique uses a mixture of \(\ell _1\) and \(\ell _2\) norm penalties under two proposed elastic net type approaches: the adaptive elastic net and the weighted elastic net. These extend the original approaches proposed by Ghosh (Adaptive elastic net: an improvement of elastic net to achieve oracle properties, Technical Report, 2007) and Hong and Zhang (Math Model Nat Phenom 5(3):115–133, 2010), respectively. We also extend the two proposed approaches by adding censored observations as constraints in their model optimization frameworks. The approaches are evaluated on microarray data and by simulation. We compare the performance of these approaches with six other variable selection techniques: three that are generally used for censored data and three correlation-based greedy methods used for high-dimensional data.
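The regularized weighted least squares core can be sketched with a plain proximal-gradient elastic net on simulated log failure times. The data, censoring weights, and penalty values are illustrative assumptions, and this omits the adaptive/weighted refinements and censoring constraints of the paper:

```python
import numpy as np

# Simulated AFT-style data: sparse truth, log failure times, random censoring.
rng = np.random.default_rng(3)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.r_[1.5, -1.0, 0.8, np.zeros(p - 3)]
log_t = X @ beta_true + rng.normal(0, 0.3, n)   # log failure times
delta = rng.uniform(size=n) > 0.3               # ~70% events observed
w = delta.astype(float)                         # crude event-indicator weights

def soft(v, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def weighted_enet(X, y, w, lam1=0.05, lam2=0.05, iters=2000):
    """Minimize (1/2n) sum_i w_i (x_i'b - y_i)^2 + lam2/2 ||b||^2 + lam1 ||b||_1."""
    n = X.shape[0]
    Xw = X * w[:, None]
    lr = 1.0 / (np.linalg.norm(Xw.T @ X / n, 2) + lam2)  # 1 / Lipschitz constant
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = Xw.T @ (X @ b - y) / n + lam2 * b  # gradient of the smooth part
        b = soft(b - lr * grad, lr * lam1)        # proximal step for the l1 part
    return b

beta_hat = weighted_enet(X, log_t, w)
```

The \(\ell_1\) part zeroes out irrelevant coefficients (variable selection) while the \(\ell_2\) part stabilizes the fit, which is the motivation for the elastic net mixture in high-dimensional, heavily censored settings.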